Santana9937/Classification_ML_Specialization
Week_2_Learning_Linear_Classifiers/week_2_assign_2_lin_reg_L2_reg.ipynb
mit
[ "Logistic Regression with L2 regularization\nIn this notebook, you will implement your own logistic regression classifier with L2 regularization. You will do the following:\n\nExtract features from Amazon product reviews.\nWrite a function to compute the derivative of the log likelihood function with an L2 penalty with respect to a single coefficient.\nImplement gradient ascent with an L2 penalty.\nEmpirically explore how the L2 penalty can ameliorate overfitting.\n\nImporting Libraries", "import os\nimport zipfile\nimport string\nimport numpy as np\nimport pandas as pd\nfrom sklearn import linear_model\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Unzipping files with Amazon Baby Products Reviews\nFor this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.", "# Put files in the current directory into a list\nfiles_list = [f for f in os.listdir('.') if os.path.isfile(f)]\n\n# Filename of unzipped file\nunzipped_file = 'amazon_baby_subset.csv'\n\n# If unzipped file not in files_list, unzip the file\nif unzipped_file not in files_list:\n zip_file = unzipped_file + '.zip'\n unzipping = zipfile.ZipFile(zip_file)\n unzipping.extractall()\n unzipping.close()", "Loading the products data\nWe will use a dataset consisting of baby product reviews on Amazon.com.", "products = pd.read_csv(\"amazon_baby_subset.csv\")", "Now, let us see a preview of what the dataset looks like.", "products.head()", "One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.", "products['sentiment']", "Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. 
We then count the number of positive and negative reviews.", "products.head(10)['name']\n\nprint '# of positive reviews =', len(products[products['sentiment']==1])\nprint '# of negative reviews =', len(products[products['sentiment']==-1])", "Apply text cleaning on the review data\nIn this section, we will perform some simple feature cleaning. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file. \nNow, we will load these words from this JSON file:", "import json\nwith open('important_words.json', 'r') as f: # Reads the list of most frequent words\n important_words = json.load(f)\nimportant_words = [str(s) for s in important_words]\n\nprint important_words", "Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nCompute word counts (only for important_words)\n\nWe start with Step 1 which can be done as follows:\nBefore removing the punctuation from the strings in the review column, we will fill all NA values with empty string.", "products[\"review\"] = products[\"review\"].fillna(\"\")", "Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.", "products[\"review_clean\"] = products[\"review\"].str.translate(None, string.punctuation)", "Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.\nNote: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. 
Each review string is first split into individual words and the number of occurrences of a given word is counted.", "for word in important_words:\n products[word] = products['review_clean'].apply(lambda s : s.split().count(word))", "The products DataFrame now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.", "products['perfect']", "Train-Validation split\nWe split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. \nNote: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of the selected model should always be on a test set.\nLoading the JSON files with the indices from the training data and the validation data into a list.", "with open('module-4-assignment-train-idx.json', 'r') as f:\n train_idx_lst = json.load(f)\ntrain_idx_lst = [int(entry) for entry in train_idx_lst]\n\nwith open('module-4-assignment-validation-idx.json', 'r') as f:\n validation_idx_lst = json.load(f)\nvalidation_idx_lst = [int(entry) for entry in validation_idx_lst]", "Using the list of the training data indices and the validation data indices to get a DataFrame with the training data and a DataFrame with the validation data.", "train_data = products.ix[train_idx_lst]\nvalidation_data = products.ix[validation_idx_lst]\n\nprint 'Training set : %d data points' % len(train_data)\nprint 'Validation set : %d data points' % len(validation_data)", "Convert DataFrame to NumPy array\nJust like in the second assignment of the previous module, we provide you with a function that extracts 
columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. \nNote: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.", "def get_numpy_data(data_frame, features, label):\n data_frame['intercept'] = 1\n features = ['intercept'] + features\n features_frame = data_frame[features]\n feature_matrix = data_frame.as_matrix(columns=features)\n label_array = data_frame[label]\n label_array = label_array.values\n return(feature_matrix, label_array)", "We convert both the training and validation sets into NumPy arrays.\nWarning: This may take a few minutes.", "feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')\nfeature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') ", "Building on logistic regression with no L2 penalty assignment\nLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nwhere the feature vector $h(\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\mathbf{x}_i$. \nWe will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. 
(Only the way in which the coefficients are learned is affected by the addition of a regularization term.)", "'''\nproduces probabilistic estimate for P(y_i = +1 | x_i, w).\nestimate ranges between 0 and 1.\n'''\ndef predict_probability(feature_matrix, coefficients):\n \n # Take dot product of feature_matrix and coefficients \n arg_exp = np.dot(coefficients,feature_matrix.transpose())\n \n # Compute P(y_i = +1 | x_i, w) using the link function\n predictions = 1.0/(1.0 + np.exp(-arg_exp))\n \n # return predictions\n return predictions", "Adding L2 penalty\nLet us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.\nRecall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\n Adding L2 penalty to the derivative \nIt takes only a small modification to add an L2 penalty. 
All terms indicated in red refer to terms that were added due to an L2 penalty.\n\nRecall from the lecture that the link function is still the sigmoid:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nWe add the L2 penalty term to the per-coefficient derivative of log likelihood:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\n\nThe per-coefficient derivative for logistic regression with an L2 penalty is as follows:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\nand for the intercept term, we have\n$$\n\\frac{\\partial\\ell}{\\partial w_0} = \\sum_{i=1}^N h_0(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\nNote: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.\nWrite a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. 
Unlike its counterpart in the last assignment, the function accepts five arguments:\n * errors vector containing $(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w}))$ for all $i$\n * feature vector containing $h_j(\\mathbf{x}_i)$ for all $i$\n * coefficient containing the current value of coefficient $w_j$.\n * l2_penalty representing the L2 penalty constant $\\lambda$\n * feature_is_constant telling whether the $j$-th feature is constant or not.", "def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant): \n \n # Compute the dot product of errors and feature\n derivative = np.dot(feature.transpose(), errors)\n\n # add L2 penalty term for any feature that isn't the intercept.\n if not feature_is_constant: \n derivative = derivative - 2.0*l2_penalty*coefficient\n \n return derivative", "Quiz question: In the code above, was the intercept term regularized?\nNo\nTo verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).\n$$\\ell\\ell(\\mathbf{w}) = \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) \\color{red}{-\\lambda\\|\\mathbf{w}\\|_2^2} $$", "def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):\n indicator = (sentiment==+1)\n scores = np.dot(feature_matrix, coefficients)\n \n lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)\n \n return lp", "Quiz question: Does the term with L2 regularization increase or decrease $\\ell\\ell(\\mathbf{w})$?\n Decreases \nThe logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. 
Fill in the code below to complete this modification.", "def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):\n coefficients = np.array(initial_coefficients) # make sure it's a numpy array\n for itr in xrange(max_iter):\n # Predict P(y_i = +1|x_i,w) using your predict_probability() function\n predictions = predict_probability(feature_matrix, coefficients)\n \n # Compute indicator value for (y_i = +1)\n indicator = (sentiment==+1)\n \n # Compute the errors as indicator - predictions\n errors = indicator - predictions\n for j in xrange(len(coefficients)): # loop over each coefficient\n is_intercept = (j == 0)\n # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].\n # Compute the derivative for coefficients[j]. Save it in a variable called derivative\n derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],coefficients[j],l2_penalty, is_intercept)\n \n # add the step size times the derivative to the current coefficient\n coefficients[j] = coefficients[j] + step_size*derivative\n \n # Checking whether log likelihood is increasing\n if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \\\n or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:\n lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)\n print 'iteration %*d: log likelihood of observed labels = %.8f' % \\\n (int(np.ceil(np.log10(max_iter))), itr, lp)\n return coefficients", "Explore effects of L2 regularization\nNow that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. 
As iterations pass, the log likelihood should increase.\nBelow, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.", "# run with L2 = 0\ncoefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=0, max_iter=501)\n\n# run with L2 = 4\ncoefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=4, max_iter=501)\n\n# run with L2 = 10\ncoefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=10, max_iter=501)\n\n# run with L2 = 1e2\ncoefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e2, max_iter=501)\n\n# run with L2 = 1e3\ncoefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e3, max_iter=501)\n\n# run with L2 = 1e5\ncoefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e5, max_iter=501)", "Compare coefficients\nWe now compare the coefficients for each of the models that were trained above. 
We will create a table of features and learned coefficients associated with each of the different L2 penalty values.\nBelow is a simple helper function that will help us create this table.", "def add_coefficients_to_table(coefficients, column_name):\n return pd.Series(coefficients, index = column_name)", "Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.", "coeff_L2_0_table = add_coefficients_to_table(coefficients_0_penalty, ['intercept'] + important_words)\ncoeff_L2_4_table = add_coefficients_to_table(coefficients_4_penalty, ['intercept'] + important_words)\ncoeff_L2_10_table = add_coefficients_to_table(coefficients_10_penalty, ['intercept'] + important_words)\ncoeff_L2_1e2_table = add_coefficients_to_table(coefficients_1e2_penalty, ['intercept'] + important_words)\ncoeff_L2_1e3_table = add_coefficients_to_table(coefficients_1e3_penalty, ['intercept'] + important_words)\ncoeff_L2_1e5_table = add_coefficients_to_table(coefficients_1e5_penalty, ['intercept'] + important_words)", "Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.", "positive_words = coeff_L2_0_table.sort_values(ascending=False)[0:5].index.tolist()\nnegative_words = coeff_L2_0_table.sort_values(ascending=True)[0:5].index.tolist()", "Quiz Question. 
Which of the following is not listed in either positive_words or negative_words?", "print \"positive_words: \", positive_words\nprint \"negative_words: \", negative_words", "Plotting the Coefficient Path with Increase in L2 Penalty\nLet us observe the effect of increasing L2 penalty on the 10 words just selected.\nFirst, let's put the 6 L2 penalty values we considered in a list.", "l2_pen_vals = [0.0, 4.0, 10.0, 1.0e2, 1.0e3, 1.0e5]", "Next, let's put all the words we considered as features for the classification model plus the intercept features", "feature_words_lst = ['intercept'] + important_words", "Now, we will fill-in 2 dictionaries, one with the 5 positive words as the index for the dictionary and the other with the 5 negative words as the index for the dictionary. For each index (word), we fill in a list which has the coefficient value of the index (word) for the 6 different L2 penalties we considered.", "pos_word_coeff_dict = {}\n\nfor curr_word in positive_words:\n # Finding the index of the word we are considering in the feature_words_lst\n word_index = feature_words_lst.index(curr_word)\n # Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.\n pos_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],\n coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],\n coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]\n\nneg_word_coeff_dict = {}\n\nfor curr_word in negative_words:\n # Finding the index of the word we are considering in the feature_words_lst\n word_index = feature_words_lst.index(curr_word)\n # Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.\n neg_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],\n coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],\n 
coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]", "Plotting coefficient path for positive words", "plt.figure(figsize=(10,6))\nfor pos_word in positive_words:\n plt.semilogx(l2_pen_vals, pos_word_coeff_dict[pos_word], linewidth =2, label = pos_word )\nplt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = \"black\")\nplt.axis([4.0, 1.0e5, -0.5, 1.5])\nplt.title(\"Positive Words Coefficient Path\", fontsize=18)\nplt.xlabel(\"L2 Penalty ($\\lambda$)\", fontsize=18)\nplt.ylabel(\"Coefficient Value\", fontsize=18)\nplt.legend(loc = \"upper right\", fontsize=18)", "Plotting coefficient path for negative words", "plt.figure(figsize=(10,6))\nfor pos_word in negative_words:\n plt.semilogx(l2_pen_vals, neg_word_coeff_dict[pos_word], linewidth =2, label = pos_word )\nplt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = \"black\")\nplt.axis([4.0, 1.0e5, -1.5, 0.5])\nplt.title(\"Negative Words Coefficient Path\", fontsize=18)\nplt.xlabel(\"L2 Penalty ($\\lambda$)\", fontsize=18)\nplt.ylabel(\"Coefficient Value\", fontsize=18)\nplt.legend(loc = \"lower right\", fontsize=18)", "The following 2 questions relate to the 2 figures above.\nQuiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.\n True \nQuiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)\n False \nMeasuring accuracy\nNow, let us compute the accuracy of the classifier model. 
Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified data points}}{\\mbox{# total data points}}\n$$\nRecall from lecture that the class prediction is calculated using\n$$\n\\hat{y}_i = \n\\left\\{\n\\begin{array}{ll}\n +1 & h(\\mathbf{x}_i)^T\\mathbf{w} > 0 \\\\\n -1 & h(\\mathbf{x}_i)^T\\mathbf{w} \\leq 0 \\\\\n\\end{array}\n\\right.\n$$\nNote: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.\nStep 1: First compute the scores using feature_matrix and coefficients using a dot product. Do this for the training data and the validation data.", "# Compute the scores as a dot product between feature_matrix and coefficients.\nscores_l2_pen_0_train = np.dot(feature_matrix_train, coefficients_0_penalty)\nscores_l2_pen_4_train = np.dot(feature_matrix_train, coefficients_4_penalty)\nscores_l2_pen_10_train = np.dot(feature_matrix_train, coefficients_10_penalty)\nscores_l2_pen_1e2_train = np.dot(feature_matrix_train, coefficients_1e2_penalty)\nscores_l2_pen_1e3_train = np.dot(feature_matrix_train, coefficients_1e3_penalty)\nscores_l2_pen_1e5_train = np.dot(feature_matrix_train, coefficients_1e5_penalty)\n\nscores_l2_pen_0_valid = np.dot(feature_matrix_valid, coefficients_0_penalty)\nscores_l2_pen_4_valid = np.dot(feature_matrix_valid, coefficients_4_penalty)\nscores_l2_pen_10_valid = np.dot(feature_matrix_valid, coefficients_10_penalty)\nscores_l2_pen_1e2_valid = np.dot(feature_matrix_valid, coefficients_1e2_penalty)\nscores_l2_pen_1e3_valid = np.dot(feature_matrix_valid, coefficients_1e3_penalty)\nscores_l2_pen_1e5_valid = np.dot(feature_matrix_valid, coefficients_1e5_penalty)", "Step 2: Using the formula above, compute the class predictions from the scores.\nFirst, writing a helper function that will return an array with the predictions.", "def get_pred_from_score(scores_array):\n # 
Map scores to class labels without mutating the input array:\n # +1 for score > 0 and -1 for score <= 0\n predictions = np.where(scores_array > 0, 1, -1)\n \n return predictions", "Now, getting the predictions for the training data and the validation data for the 6 L2 penalties we considered.", "pred_l2_pen_0_train = get_pred_from_score(scores_l2_pen_0_train)\npred_l2_pen_4_train = get_pred_from_score(scores_l2_pen_4_train)\npred_l2_pen_10_train = get_pred_from_score(scores_l2_pen_10_train)\npred_l2_pen_1e2_train = get_pred_from_score(scores_l2_pen_1e2_train)\npred_l2_pen_1e3_train = get_pred_from_score(scores_l2_pen_1e3_train)\npred_l2_pen_1e5_train = get_pred_from_score(scores_l2_pen_1e5_train)\n\npred_l2_pen_0_valid = get_pred_from_score(scores_l2_pen_0_valid)\npred_l2_pen_4_valid = get_pred_from_score(scores_l2_pen_4_valid)\npred_l2_pen_10_valid = get_pred_from_score(scores_l2_pen_10_valid)\npred_l2_pen_1e2_valid = get_pred_from_score(scores_l2_pen_1e2_valid)\npred_l2_pen_1e3_valid = get_pred_from_score(scores_l2_pen_1e3_valid)\npred_l2_pen_1e5_valid = get_pred_from_score(scores_l2_pen_1e5_valid)", "Step 3: Getting the accuracy for the training set data and the validation set data", "train_accuracy = {}\ntrain_accuracy[0] = np.sum(pred_l2_pen_0_train==sentiment_train)/float(len(sentiment_train))\ntrain_accuracy[4] = np.sum(pred_l2_pen_4_train==sentiment_train)/float(len(sentiment_train))\ntrain_accuracy[10] = np.sum(pred_l2_pen_10_train==sentiment_train)/float(len(sentiment_train))\ntrain_accuracy[1e2] = np.sum(pred_l2_pen_1e2_train==sentiment_train)/float(len(sentiment_train))\ntrain_accuracy[1e3] = np.sum(pred_l2_pen_1e3_train==sentiment_train)/float(len(sentiment_train))\ntrain_accuracy[1e5] = np.sum(pred_l2_pen_1e5_train==sentiment_train)/float(len(sentiment_train))\n\nvalidation_accuracy = {}\nvalidation_accuracy[0] = 
np.sum(pred_l2_pen_0_valid==sentiment_valid)/float(len(sentiment_valid))\nvalidation_accuracy[4] = np.sum(pred_l2_pen_4_valid==sentiment_valid)/float(len(sentiment_valid))\nvalidation_accuracy[10] = np.sum(pred_l2_pen_10_valid==sentiment_valid)/float(len(sentiment_valid))\nvalidation_accuracy[1e2] = np.sum(pred_l2_pen_1e2_valid==sentiment_valid)/float(len(sentiment_valid))\nvalidation_accuracy[1e3] = np.sum(pred_l2_pen_1e3_valid==sentiment_valid)/float(len(sentiment_valid))\nvalidation_accuracy[1e5] = np.sum(pred_l2_pen_1e5_valid==sentiment_valid)/float(len(sentiment_valid))\n\n# Build a simple report\nfor key in sorted(validation_accuracy.keys()):\n print \"L2 penalty = %g\" % key\n print \"train accuracy = %s, validation_accuracy = %s\" % (train_accuracy[key], validation_accuracy[key])\n print \"--------------------------------------------------------------------------------\"", "Creating a list of tuples with the entries as (accuracy, l2_penalty) for the training set and the validation set.", "accuracy_training_data = [(train_accuracy[0], 0), (train_accuracy[4], 4), (train_accuracy[10], 10),\n(train_accuracy[1e2], 1e2), (train_accuracy[1e3], 1e3), (train_accuracy[1e5], 1e5)]\naccuracy_validation_data = [(validation_accuracy[0], 0), (validation_accuracy[4], 4), (validation_accuracy[10], 10),\n (validation_accuracy[1e2], 1e2), (validation_accuracy[1e3], 1e3), (validation_accuracy[1e5], 1e5)]", "Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?", "max(accuracy_training_data)[1]", "Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?", "max(accuracy_validation_data)[1]", "Quiz question: Does the highest accuracy on the training data imply that the model is the best one?\n No, this model probably suffers from overfitting" ]
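The L2-penalized gradient ascent used throughout the notebook above can be sketched end-to-end on a tiny synthetic problem. This is a minimal illustration, not the assignment's code: it is written in Python 3 syntax (the notebook is Python 2), and the synthetic data, step size, and penalty values are illustrative assumptions. It shows the key behavior explored above: a larger l2_penalty shrinks the learned (non-intercept) coefficient.

```python
import numpy as np

def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # Dot-product term of the log-likelihood derivative
    derivative = np.dot(feature, errors)
    # L2 term applies to every coefficient except the intercept
    if not feature_is_constant:
        derivative -= 2.0 * l2_penalty * coefficient
    return derivative

def logistic_regression_with_L2(feature_matrix, sentiment, step_size, l2_penalty, max_iter):
    coefficients = np.zeros(feature_matrix.shape[1])
    for _ in range(max_iter):
        # P(y_i = +1 | x_i, w) via the sigmoid link
        predictions = 1.0 / (1.0 + np.exp(-np.dot(feature_matrix, coefficients)))
        # indicator(y_i = +1) minus predicted probability
        errors = (sentiment == +1) - predictions
        for j in range(len(coefficients)):
            derivative = feature_derivative_with_L2(
                errors, feature_matrix[:, j], coefficients[j], l2_penalty, j == 0)
            coefficients[j] += step_size * derivative  # ascent: move uphill
    return coefficients

# Tiny synthetic problem: an intercept column plus one informative feature
rng = np.random.RandomState(0)
x = rng.randn(200)
y = np.where(x + 0.1 * rng.randn(200) > 0, 1, -1)
X = np.column_stack([np.ones_like(x), x])

w_no_pen = logistic_regression_with_L2(X, y, step_size=1e-2, l2_penalty=0.0, max_iter=300)
w_pen = logistic_regression_with_L2(X, y, step_size=1e-2, l2_penalty=10.0, max_iter=300)
print(w_no_pen, w_pen)
```

As in the coefficient-path plots above, the slope shrinks toward zero as the penalty grows, while the intercept is left unregularized.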
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Wx1ng/Python4DataScience.CH
Series_1_Scientific_Python/S1EP3_Pandas.ipynb
cc0-1.0
[ "Come play with the giant pandas (Pandas)! Wx1ng edit to test git commands 233\nPandas is a Python library for data analysis: http://pandas.pydata.org\nAPI quick reference: http://pandas.pydata.org/pandas-docs/stable/api.html\nBuilt on the capabilities of NumPy and SciPy, it adds a large amount of data manipulation functionality on top of them.\nStatistics, grouping, sorting, and pivot tables convert freely; if you are already familiar with relational databases (RDBMS) and Excel, you will find Pandas matches them and then some!\n0. Getting hands-on: Why Pandas?\nWhat would an ordinary programmer do when handed a dataset?", "import codecs\nimport requests\nimport numpy as np\nimport scipy as sp\nimport scipy.stats as spstat\nimport pandas as pd\nimport datetime\nimport json\n\nr = requests.get(\"http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data\")\nwith codecs.open('S1EP3_Iris.txt','w',encoding='utf-8') as f:\n f.write(r.text)\n\nwith codecs.open('S1EP3_Iris.txt','r',encoding='utf-8') as f:\n lines = f.readlines()\n\nfor idx,line in enumerate(lines):\n print line,\n if idx==10:\n break", "The point of Pandas lies in\nquickly recognizing structured data", "import pandas as pd\nirisdata = pd.read_csv('S1EP3_Iris.txt',header = None, encoding='utf-8')\nirisdata", "quickly manipulating metadata", "cnames = ['sepal_length','sepal_width','petal_length','petal_width','class']\nirisdata.columns = cnames\nirisdata", "quick filtering", "irisdata[irisdata['petal_width']==irisdata.petal_width.max()]", "quick slicing", "irisdata.iloc[::30,:2]", "quick statistics", "print irisdata['class'].value_counts()\n\nfor x in xrange(4):\n s = irisdata.iloc[:,x]\n print '{0:<12}'.format(s.name.upper()), \" Statistics: \", \\\n '{0:>5} {1:>5} {2:>5} {3:>5}'.format(s.max(), s.min(), round(s.mean(),2),round(s.std(),2))", "a quick \"MapReduce\"", "slogs = lambda x:sp.log(x)*x\nentpy = lambda x:sp.exp((slogs(x.sum())-x.map(slogs).sum())/x.sum())\nirisdata.groupby('class').agg(entpy)", "1. 
Welcome to the world of the giant pandas\nImportant Pandas data types\n\nDataFrame (two-dimensional table)\nSeries (one-dimensional sequence)\nIndex (row index, row-level metadata)\n\n1.1 Series: pandas' spear (a column or row of a data table, an observation vector, a one-dimensional array...)\nIn the data world, a complete observation of any single individual, or an observation of one attribute across any group of individuals, can be abstracted as a Series.\nBuilding a Series from values:\nIt consists of a default index plus values.", "Series1 = pd.Series(np.random.randn(4))\nprint Series1,type(Series1)\nprint Series1.index\nprint Series1.values", "Series supports filtering on the same principle as NumPy:", "print Series1>0\nprint Series1[Series1>0]", "Broadcasting is of course supported too:", "print Series1*2\nprint Series1+5", "As are universal functions:", "print np.exp(Series1)\n#NumPy Universal Function\nf_np = np.frompyfunc(lambda x:np.exp(x*2+5),1,1)\nprint f_np(Series1)", "Using row labels on the sequence itself, instead of building a two-column table, makes it easy to tell which part is data and which part is metadata:", "Series2 = pd.Series(Series1.values,index=['norm_'+unicode(i) for i in xrange(4)])\nprint Series2,type(Series2)\nprint Series2.index\nprint type(Series2.index)\nprint Series2.values", "Although the rows are ordered, data can still be accessed through the row-level index:\n(Not exactly like an ordered dict either, since row index values may even repeat; duplicate index values are not recommended, but that does not mean they cannot be used)", "print Series2[['norm_0','norm_3']]\n\nprint 'norm_0' in Series2\nprint 'norm_6' in Series2", "The default row index acts just like row numbers:", "print Series1.index", "Defining a Series from a dict, or from an ordered dict with unique keys, removes any worry about duplicate index values:", "Series3_Dict = {\"Japan\":\"Tokyo\",\"S.Korea\":\"Seoul\",\"China\":\"Beijing\"}\nSeries3_pdSeries = pd.Series(Series3_Dict)\nprint Series3_pdSeries\nprint Series3_pdSeries.values\nprint Series3_pdSeries.index", "Difference from a dict, part 1: ordered", "Series4_IndexList = [\"Japan\",\"China\",\"Singapore\",\"S.Korea\"]\nSeries4_pdSeries = pd.Series( Series3_Dict ,index = Series4_IndexList)\nprint Series4_pdSeries\nprint Series4_pdSeries.values\nprint Series4_pdSeries.index\nprint Series4_pdSeries.isnull()\nprint Series4_pdSeries.notnull()", "Difference from a dict, part 2: index values may repeat, although this is not recommended.", "Series5_IndexList = ['A','B','B','C']\nSeries5 = pd.Series(Series1.values,index = Series5_IndexList)\nprint Series5\nprint Series5[['B','A']]", "Series-level metadata: name\nOnce the data series and its index both have names, subsequent data joins become much more convenient!", "print Series4_pdSeries.name\nprint Series4_pdSeries.index.name\n\nSeries4_pdSeries.name = \"Capital 
Series\"\nSeries4_pdSeries.index.name = \"Nation\"\nprint Series4_pdSeries\npd.DataFrame(Series4_pdSeries)", "1.2 DataFrame: pandas' warhammer (data table, two-dimensional array)\nAn ordered collection of Series, as convenient as R's DataFrame.\nThink about it carefully: the vast majority of data can be represented as a DataFrame.\nDefining one from a NumPy 2-D array, from a file, or from a database: however good the data, don't forget the column names", "dataNumPy = np.asarray([('Japan','Tokyo',4000),\\\n ('S.Korea','Seoul',1300),('China','Beijing',9100)])\nDF1 = pd.DataFrame(dataNumPy,columns=['nation','capital','GDP'])\nDF1", "Columns of equal length stored in a dictionary (JSON): unfortunately, dict keys are unordered", "dataDict = {'nation':['Japan','S.Korea','China'],\\\n 'capital':['Tokyo','Seoul','Beijing'],'GDP':[4900,1300,9100]}\nDF2 = pd.DataFrame(dataDict)\nDF2", "Defining a DataFrame from another DataFrame: ah, the perfectionism flares up!", "DF21 = pd.DataFrame(DF2,columns=['nation','capital','GDP'])\nDF21\n\nDF22 = pd.DataFrame(DF2,columns=['nation','capital','GDP'],index = [2,0,1])\nDF22", "Pulling a column out of a DataFrame? Two ways (exactly as in JavaScript!)\n\nThe '.' notation easily clashes with reserved keywords\nThe '[ ]' notation is the safest.", "print DF22.nation\nprint DF22.capital\nprint DF22['GDP']", "Pulling a row out of a DataFrame? (At least) two ways:", "print DF22[0:1] # What this returns is actually a DataFrame\nprint DF22.ix[0] # Returns the row via the corresponding index", "The ultimate move, slicing just like NumPy: iloc", "print DF22.iloc[0,:]\nprint DF22.iloc[:,0]", "Heard you've come from ALTER TABLE hell; the giant panda smiled\nNote, however, that dynamically adding a column cannot be done with \".\"; only \"[ ]\" works", "DF22['population'] = [1600,130,55]\nDF22['region'] = 'East_Asian'\nDF22", "1.3 Index: pandas' wild card for data manipulation (row-level index)\nA row-level index is\n\nmetadata\npossibly produced from real data, and therefore may itself be treated as data\npossibly a multi-level index, i.e. a combination of several columns\nexchangeable with column names, and can be stacked and unstacked to achieve Excel pivot-table effects\n\nThere are four... no, many ways to write an Index; some important index types include\n\npd.Index (plain)\nInt64Index (numeric index)\nMultiIndex (multi-level index, described in more detail under data manipulation)\nDatetimeIndex (datetime-formatted index)\nPeriodIndex (period-based, time-formatted index)\n\nDefining a plain index directly; it looks just like an ordinary Series", "index_names = ['a','b','c']\nSeries_for_Index = pd.Series(index_names)\nprint pd.Index(index_names)\nprint pd.Index(Series_for_Index)", "Alas, it is immutable; keep that firmly in mind!", "index_names = ['a','b','c']\nindex0 = pd.Index(index_names)\nprint index0.get_values()\nindex0[2] = 'd'", "Throw in a list of tuples and you get a MultiIndex\nUnfortunately, if this list comprehension is changed to parentheses (a generator), it no longer works.", "#print [('Row_'+str(x+1),'Col_'+str(y+1)) for x in xrange(4) for y in xrange(4)]\nmulti1 = 
pd.Index([('Row_'+str(x+1),'Col_'+str(y+1)) for x in xrange(4) for y in xrange(4)])\nmulti1.name = ['index1','index2']\nprint multi1", "对于Series来说,如果拥有了多重Index,数据,变形!\n下列代码说明:\n\n二重MultiIndex的Series可以unstack()成DataFrame\nDataFrame可以stack成拥有二重MultiIndex的Series", "data_for_multi1 = pd.Series(xrange(0,16),index=multi1)\ndata_for_multi1\n\ndata_for_multi1.unstack()\n\ndata_for_multi1.unstack().stack()", "我们来看一下非平衡数据的例子:\nRow_1,2,3,4和Col_1,2,3,4并不是全组合的。", "multi2 = pd.Index([('Row_'+str(x),'Col_'+str(y+1)) \\\n for x in xrange(5) for y in xrange(x)])\nmulti2\n\ndata_for_multi2 = pd.Series(np.arange(10),index = multi2)\ndata_for_multi2\n\ndata_for_multi2.unstack()\n\ndata_for_multi2.unstack().stack()", "DateTime标准库如此好用,你值得拥有", "dates = [datetime.datetime(2015,1,1),datetime.datetime(2015,1,8),datetime.datetime(2015,1,30)]\npd.DatetimeIndex(dates)", "如果你不仅需要时间格式统一,时间频率也要统一的话", "periodindex1 = pd.period_range('2015-01','2015-04',freq='M')\nprint periodindex1", "月级精度和日级精度如何转换?\n有的公司统一以1号代表当月,有的公司统一以最后一天代表当月,转化起来很麻烦,可以asfreq", "print periodindex1.asfreq('D',how='start')\nprint periodindex1.asfreq('D',how='end')", "最后的最后,我要真正把两种频率的时间精度匹配上?", "periodindex_mon = pd.period_range('2015-01','2015-03',freq='M').asfreq('D',how='start')\nperiodindex_day = pd.period_range('2015-01-01','2015-03-31',freq='D')\n\nprint periodindex_mon\nprint periodindex_day", "粗粒度数据+reindex+ffill/bfill", "#print pd.Series(periodindex_mon,index=periodindex_mon).reindex(periodindex_day)\nfull_ts = pd.Series(periodindex_mon,index=periodindex_mon).reindex(periodindex_day)\nfull_ts\n\nfull_ts = pd.Series(periodindex_mon,index=periodindex_mon).reindex(periodindex_day,method='ffill')\nfull_ts", "关于索引,方便的操作有?\n前面描述过了,索引有序,重复,但一定程度上又能通过key来访问,也就是说,某些集合操作都是可以支持的。", "index1 = pd.Index(['A','B','B','C','C'])\nindex2 = pd.Index(['C','D','E','E','F'])\nindex3 = pd.Index(['B','C','A'])\nprint index1.append(index2)\nprint index1.difference(index2)\nprint index1.intersection(index2)\nprint index1.union(index2) # Support 
unique-value Index well\nprint index1.isin(index2)\nprint index1.delete(2)\nprint index1.insert(0,'K') # Not suggested\nprint index3.drop('A') # Support unique-value Index well\nprint index1.is_monotonic,index2.is_monotonic,index3.is_monotonic\nprint index1.is_unique,index2.is_unique,index3.is_unique", "2. 大熊猫世界来去自如:Pandas的I/O\n老生常谈,从基础来看,我们仍然关心pandas对于与外部数据是如何交互的。\n2.1 结构化数据输入输出\n\nread_csv与to_csv 是一对输入输出的工具,read_csv直接返回pandas.DataFrame,而to_csv只要执行命令即可写文件\nread_table:功能类似\nread_fwf:操作fixed width file\nread_excel与to_excel方便的与excel交互\n\n还记得刚开始的例子吗?\n\nheader 表示数据中是否存在列名,如果在第0行就写就写0,并且开始读数据时跳过相应的行数,不存在可以写none\nnames 表示要用给定的列名来作为最终的列名\nencoding 表示数据集的字符编码,通常而言一份数据为了方便的进行文件传输都以utf-8作为标准\n\n提问:下列例子中,header=4,names=cnames时,究竟会读到怎样的数据?", "print cnames\nirisdata = pd.read_csv('S1EP3_Iris.txt',header = None, names = cnames,\\\n encoding='utf-8')\nirisdata[::30]", "希望了解全部参数的请移步API:\nhttp://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv\n这里介绍一些常用的参数:\n读取处理:\n\nskiprows:跳过一定的行数\nnrows:仅读取一定的行数\nskipfooter:尾部有固定的行数永不读取\nskip_blank_lines:空行跳过\n\n内容处理:\n\nsep/delimiter:分隔符很重要,常见的有逗号,空格和Tab('\\t')\nna_values:指定应该被当作na_values的数值\nthousands:处理数值类型时,每千位分隔符并不统一 (1.234.567,89或者1,234,567.89都可能),此时要把字符串转化为数字需要指明千位分隔符\n\n收尾处理:\n\nindex_col:将真实的某列(列的数目,甚至列名)当作index\nsqueeze:仅读到一列时,不再保存为pandas.DataFrame而是pandas.Series\n\n2.1.x Excel ... 
?\n对于存储着极为规整数据的Excel而言,其实是没必要一定用Excel来存,尽管Pandas也十分友好的提供了I/O接口。", "irisdata.to_excel('S1EP3_irisdata.xls',index = None,encoding='utf-8')\nirisdata_from_excel = pd.read_excel('S1EP3_irisdata.xls',header=0, encoding='utf-8')\nirisdata_from_excel[::30]", "唯一重要的参数:sheetname=k,标志着一个excel的第k个sheet页将会被取出。(从0开始)\n2.2 半结构化数据\nJSON:网络传输中常用的一种数据格式。\n仔细看一下,实际上这就是我们平时收集到异源数据的风格是一致的:\n\n列名不能完全匹配\n关联键可能并不唯一\n元数据被保存在数据里", "json_data = [{'name':'Wang','sal':50000,'job':'VP'},\\\n {'name':'Zhang','job':'Manager','report':'VP'},\\\n {'name':'Li','sal':5000,'report':'Manager'}]\ndata_employee = pd.read_json(json.dumps(json_data))\ndata_employee_ri = data_employee.reindex(columns=['name','job','sal','report'])\ndata_employee_ri", "2.3 数据库连接流程(Optional)\n使用下列包,通过数据库配置建立Connection\n\npymysql\npyODBC\ncx_Oracle\n\n通过pandas.read_sql_query,read_sql_table,to_sql进行数据库操作。\nPython与数据库的交互方案有很多种,从数据分析师角度看pandas方案比较适合,之后的讲义中会结合SQL语法进行讲解。\n进行数据库连接首先你需要类似的这样一组信息:", "IP = '127.0.0.1'\nus = 'root'\npw = '123456'", "举例说明如果是MySQL:", "import pymysql\nimport pymysql.cursors\nconnection = pymysql.connect(host=IP,\\\n user=us,\\\n password=pw,\\\n charset='utf8mb4',\\\n cursorclass=pymysql.cursors.DictCursor)\n#pd.read_sql_query(\"sql\",connection)\n#df.to_sql('tablename',connection,flavor='mysql')", "3. 
深入Pandas数据操纵\n在第一部分的基础上,数据会有更多种操纵方式:\n\n通过列名、行index来取数据,结合ix、iloc灵活的获取数据的一个子集(第一部分已经介绍)\n按记录拼接(就像Union All)或者关联(join)\n方便的自定义函数映射\n排序\n缺失值处理\n与Excel一样灵活的数据透视表(在第四部分更详细介绍)\n\n3.1 数据整合:方便灵活\n3.1.1 横向拼接:直接DataFrame", "pd.DataFrame([np.random.rand(2),np.random.rand(2),np.random.rand(2)],columns=['C1','C2'])", "3.1.2 横向拼接:Concatenate", "pd.concat([data_employee_ri,data_employee_ri,data_employee_ri])\n\npd.concat([data_employee_ri,data_employee_ri,data_employee_ri],ignore_index=True)", "3.1.3 纵向拼接:Merge\n根据数据列关联,使用on关键字\n\n可以指定一列或多列\n可以使用left_on和right_on", "pd.merge(data_employee_ri,data_employee_ri,on='name')\n\npd.merge(data_employee_ri,data_employee_ri,on=['name','job'])", "根据index关联,可以直接使用left_index和right_index", "data_employee_ri.index.name = 'index1'\npd.merge(data_employee_ri,data_employee_ri,\\\n left_index='index1',right_index='index1')", "TIPS: 增加how关键字,并指定\n* how = 'inner'\n* how = 'left'\n* how = 'right'\n* how = 'outer'\n结合how,可以看到merge基本再现了SQL应有的功能,并保持代码整洁。", "DF31xA = pd.DataFrame({'name':[u'老王',u'老张',u'老李'],'sal':[5000,3000,1000]})\nDF31xA\n\nDF31xB = pd.DataFrame({'name':[u'老王',u'老刘'],'job':['VP','Manager']})\nDF31xB", "how='left': 保留左表信息", "pd.merge(DF31xA,DF31xB,on='name',how='left')", "how='right': 保留右表信息", "pd.merge(DF31xA,DF31xB,on='name',how='right')", "how='inner': 保留两表交集信息,这样尽量避免出现缺失值", "pd.merge(DF31xA,DF31xB,on='name',how='inner')", "how='outer': 保留两表并集信息,这样会导致缺失值,但最大程度的整合了已有信息", "pd.merge(DF31xA,DF31xB,on='name',how='outer')", "3.2 数据清洗三剑客\n接下来的三个功能,map,applymap,apply,功能,是绝大多数数据分析师在数据清洗这一步骤中的必经之路。\n他们分别回答了以下问题:\n\n我想根据一列数据新做一列数据,怎么办?(Series->Series)\n我想根据整张表的数据新做整张表,怎么办? (DataFrame->DataFrame)\n我想根据很多列的数据新做一列数据,怎么办? 
(DataFrame->Series)\n\n不要再写什么for循环了!改变思维,提高编码和执行效率", "dataNumPy32 = np.asarray([('Japan','Tokyo',4000),('S.Korea','Seoul',1300),('China','Beijing',9100)])\nDF32 = pd.DataFrame(dataNumPy,columns=['nation','capital','GDP'])\nDF32", "map: 以相同规则将一列数据作一个映射,也就是进行相同函数的处理", "def GDP_Factorize(v):\n fv = np.float64(v)\n if fv > 6000.0:\n return 'High'\n elif fv < 2000.0:\n return 'Low'\n else:\n return 'Medium'\n\nDF32['GDP_Level'] = DF32['GDP'].map(GDP_Factorize)\nDF32['NATION'] = DF32.nation.map(str.upper)\nDF32", "类似的功能还有applymap,可以对一个dataframe里面每一个元素像map那样全局操作", "DF32.applymap(lambda x: float(x)*2 if x.isdigit() else x.upper())", "apply则可以对一个DataFrame操作得到一个Series\n他会有点像我们后面介绍的agg,但是apply可以按行操作和按列操作,用axis控制即可。", "DF32.apply(lambda x:x['nation']+x['capital']+'_'+x['GDP'],axis=1)", "3.3 数据排序\n\nsort: 按一列或者多列的值进行行级排序\nsort_index: 根据index里的取值进行排序,而且可以根据axis决定是重排行还是列", "dataNumPy33 = np.asarray([('Japan','Tokyo',4000),('S.Korea','Seoul',1300),('China','Beijing',9100)])\nDF33 = pd.DataFrame(dataNumPy33,columns=['nation','capital','GDP'])\nDF33\n\nDF33.sort(['capital','nation'])\n\nDF33.sort('GDP',ascending=False)\n\nDF33.sort('GDP').sort(ascending=False)\n\nDF33.sort_index(axis=1,ascending=True)", "一个好用的功能:Rank", "DF33\n\nDF33.rank()\n\nDF33.rank(ascending=False)", "注意tied data(相同值)的处理:\n* method = 'average'\n* method = 'min'\n* method = 'max'\n* method = 'first'", "DF33x = pd.DataFrame({'name':[u'老王',u'老张',u'老李',u'老刘'],'sal':np.array([5000,3000,5000,9000])})\nDF33x", "DF33x.rank()默认使用method='average',两条数据相等时,处理排名时大家都用平均值", "DF33x.sal.rank()", "method='min',处理排名时大家都用最小值", "DF33x.sal.rank(method='min')", "method='max',处理排名时大家都用最大值", "DF33x.sal.rank(method='max')", "method='first',处理排名时谁先出现就先给谁较小的数值。", "DF33x.sal.rank(method='first')", "3.4 缺失数据处理", "DF34 = data_for_multi2.unstack()\nDF34", "忽略缺失值:", "DF34.mean(skipna=True)\n\nDF34.mean(skipna=False)", "如果不想忽略缺失值的话,就需要祭出fillna了:", "DF34\n\nDF34.fillna(0).mean(axis=1,skipna=False)", "4. 
“一组”大熊猫:Pandas的groupby\ngroupby的功能类似SQL的group by关键字:\nSplit-Apply-Combine\n\nSplit,就是按照规则分组\nApply,通过一定的agg函数来获得输入pd.Series返回一个值的效果\nCombine,把结果收集起来\n\nPandas的groupby的灵活性:\n\n分组的关键字可以来自于index,也可以来自于真实的列数据\n分组规则可以通过一列或者多列", "from IPython.display import Image\nImage(filename=\"S1EP3_group.png\")", "分组的具体逻辑", "irisdata_group = irisdata.groupby('class')\nirisdata_group\n\nfor level,subsetDF in irisdata_group:\n print level\n print subsetDF[::20]", "分组可以快速实现MapReduce的逻辑\n\nMap: 指定分组的列标签,不同的值就会被扔到不同的分组处理\nReduce: 输入多个值,返回一个值,一般可以通过agg实现,agg能接受一个函数", "irisdata.groupby('class').agg(\\\n lambda x:((x-x.mean())**3).sum()*(len(x)-0.0)/\\\n (len(x)-1.0)/(len(x)-2.0)/(x.std()*np.sqrt((len(x)-0.0)/(len(x)-1.0)))**3 if len(x)>2 else None)\n\nirisdata.groupby('class').agg(spstat.skew)", "汇总之后的广播操作\n在OLAP数据库上,为了避免groupby+join的二次操作,提出了sum()over(partition by)的开窗操作。\n在Pandas中,这种操作能够进一步被transform所取代。", "pd.concat([irisdata,irisdata.groupby('class').transform('mean')],axis=1)[::20]", "产生 MultiIndex(多列分组)后的数据透视表操作\n一般来说,多列groupby的一个副作用就是.groupby().agg()之后你的行index已经变成了一个多列分组的分级索引。\n如果我们希望达到Excel的数据透视表的效果,行和列的索引自由交换,达到统计目的,究竟应该怎么办呢?", "factor1 = np.random.randint(0,3,50)\nfactor2 = np.random.randint(0,2,50)\nfactor3 = np.random.randint(0,3,50)\nvalues = np.random.randn(50)\n\nhierindexDF = pd.DataFrame({'F1':factor1,'F2':factor2,'F3':factor3,'F4':values})\nhierindexDF\n\nhierindexDF_gbsum = hierindexDF.groupby(['F1','F2','F3']).sum()\nhierindexDF_gbsum", "观察Index:", "hierindexDF_gbsum.index", "unstack:\n\n无参数时,把最末index置换到column上\n有数字参数时,把指定位置的index置换到column上\n有列表参数时,依次把特定位置的index置换到column上", "hierindexDF_gbsum.unstack()\n\nhierindexDF_gbsum.unstack(0)\n\nhierindexDF_gbsum.unstack(1)\n\nhierindexDF_gbsum.unstack([2,0])", "更进一步的,stack的功能是和unstack对应,把column上的多级索引换到index上去", "hierindexDF_gbsum.unstack([2,0]).stack([1,2])" ]
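The merge how= behavior walked through above can be condensed into a short standalone sketch (Python 3 syntax; the tiny tables are invented for illustration and are not cells from the notebook itself):

```python
import pandas as pd

# Made-up tables mirroring the DF31xA / DF31xB example above
left = pd.DataFrame({"name": ["Wang", "Zhang", "Li"], "sal": [5000, 3000, 1000]})
right = pd.DataFrame({"name": ["Wang", "Liu"], "job": ["VP", "Manager"]})

# how='inner' keeps only keys present in both tables
inner = pd.merge(left, right, on="name", how="inner")

# how='outer' keeps the union of keys and fills the gaps with NaN
outer = pd.merge(left, right, on="name", how="outer")

print(inner["name"].tolist())          # ['Wang']
print(sorted(outer["name"].tolist()))  # ['Li', 'Liu', 'Wang', 'Zhang']
```

As with SQL joins, 'inner' avoids missing values while 'outer' preserves every record at the cost of NaNs in the unmatched columns.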
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
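The Split-Apply-Combine pattern and the transform broadcast described in the groupby section can be sketched in a few lines (Python 3 syntax; the small table here is invented for illustration):

```python
import pandas as pd

# Hypothetical data standing in for the iris table used in the tutorial
df = pd.DataFrame({
    "cls": ["a", "a", "b", "b"],
    "val": [1.0, 3.0, 10.0, 30.0],
})

# Split-Apply-Combine: one aggregate value per group
means = df.groupby("cls")["val"].mean()

# transform broadcasts the group aggregate back onto every row,
# replacing the groupby+join round trip mentioned in the tutorial
df["cls_mean"] = df.groupby("cls")["val"].transform("mean")

print(means.to_dict())          # {'a': 2.0, 'b': 20.0}
print(df["cls_mean"].tolist())  # [2.0, 2.0, 20.0, 20.0]
```

The broadcast result has the same length as the original table, which is exactly what sum() over (partition by) produces on an OLAP database.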
nsrchemie/code_guild
wk2/extras/arrays_strings/rotation/rotation_challenge.ipynb
mit
[ "<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nChallenge Notebook\nProblem: Determine if a string s1 is a rotation of another string s2, by calling (only once) a function is_substring\n\nConstraints\nTest Cases\nAlgorithm\nCode\nUnit Test\nSolution Notebook\n\nConstraints\n\nCan you assume the string is ASCII?\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\nCan you use additional data structures? \nYes\n\n\nIs this case sensitive?\nYes\n\n\n\nTest Cases\n\nAny strings that differ in size -> False\nNone, 'foo' -> False (any None results in False)\n' ', 'foo' -> False\n' ', ' ' -> True\n'foobarbaz', 'barbazfoo' -> True\n\nAlgorithm\nRefer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.\nCode", "def is_substring(s1, s2):\n # TODO: Implement me\n pass\n\n\ndef is_rotation(s1, s2):\n # TODO: Implement me\n # Call is_substring only once\n pass", "Unit Test\nThe following unit test is expected to fail until you solve the challenge.", "# %load test_rotation.py\nfrom nose.tools import assert_equal\n\n\nclass TestRotation(object):\n\n def test_rotation(self):\n assert_equal(is_rotation('o', 'oo'), False)\n assert_equal(is_rotation(None, 'foo'), False)\n assert_equal(is_rotation('', 'foo'), False)\n assert_equal(is_rotation('', ''), True)\n assert_equal(is_rotation('foobarbaz', 'barbazfoo'), True)\n print('Success: test_rotation')\n\n\ndef main():\n test = TestRotation()\n test.test_rotation()\n\n\nif __name__ == '__main__':\n main()", "Solution Notebook\nReview the Solution Notebook for a discussion on algorithms and code solutions." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
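One classic approach to the rotation challenge above is the concatenation trick: every rotation of s2 appears as a substring of s2 + s2. This is a hedged sketch of that idea; the official solution notebook may use a different formulation:

```python
def is_substring(s1, s2):
    """Return True if s1 is a substring of s2."""
    return s1 in s2


def is_rotation(s1, s2):
    """Check whether s1 is a rotation of s2, calling is_substring once."""
    if s1 is None or s2 is None:
        return False
    if len(s1) != len(s2):
        return False
    # Every rotation of s2 is contained in s2 doubled
    return is_substring(s1, s2 + s2)


print(is_rotation('foobarbaz', 'barbazfoo'))  # True
print(is_rotation('o', 'oo'))                 # False
```

The length check handles both the "strings that differ in size" case and makes the empty-string case return True, matching the unit test.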
NYUDataBootcamp/Projects
UG_S16/Cheaney-Leacock-Poverty.ipynb
mit
[ "An Investigation of Extreme Poverty in Sub-Saharan Africa\nAuthors: Michael Cheaney, Raneisha Leacock\nDate Created: May 10th, 2016\nINTRODUCTION\nExtreme poverty means living on the edge of subsistence. Over the past 30 years, the absolute number of people living in extreme poverty worldwide has been drastically reduced - at least according to the monetary measure of extreme poverty at $1.90 per day. But in one region - Sub-Saharan Africa - it appears that number is actually increasing despite the laudable poverty alleviation efforts by global organizations. \nThis project uses existing World Bank data to compare data on four major world regions - East Asia & Pacific, Europe & Central Asia, Latin America & the Caribbean,and Sub-Saharan Africa. From this data we attempted to identify what factors might be playing a significant role in this phenomenon. More specifically, why is poverty increasing in Sub-Saharan Africa yet is falling in every other region in the world?\nDATA REPORT\nSource 1: Absolute number of people living in poverty\nThis dataset shows the total population considered to be living in extreme poverty. The dataset provides figures for different countries and regions around the world for various years between 1981 to 2011. \nThe data were extracted from the Our World in Data website[1]. \nSince the website does not allow for data to be read off directly, we downloaded a csv file of the data. This csv was then published to a public dropbox folder, from which we read in the link.\nSource 2: Birth Rate\nThis dataset shows the crude birth rates per 1,000 people in different countries and regions around the world from 1981 to 2011. The data were extracted from the World Bank’s website using the pandas World Bank api and the indicator 'SP.DYN.CBRT.IN'.\nSource 3: Inflation\nThis dataset shows the annual inflation rate in different countries and regions around the world from 1981 to 2011. 
The data were extracted from the World Bank’s website using the pandas World Bank api and the indicator 'NY.GDP.DEFL.KD.ZG'.\nDISCUSSION", "import pandas as pd # data package\nfrom pandas.io import data, wb # World Bank api\nimport matplotlib.pyplot as plt # graphics \nimport sys # system module, used to get Python version \nimport os # operating system tools (check files)\nimport seaborn as sns # seaborn graphics package\n\n%matplotlib inline \n\nprint('\\nPython version: ', sys.version) \nprint('Pandas version: ', pd.__version__)", "Poverty Headcount", "url1='https://dl.dropboxusercontent.com/u/105344610/'\nurl2='the-number-of-people-living-in-extreme-poverty-by-world-region-1981-2011.csv'\nurl = url1 + url2\npov = pd.read_csv(url)\npov.columns=['Region', 'Year', 'People']\npov['People'] = pov['People']/1000000\npov = pov.set_index('Year')\n\n\nsns.set()\n\npoveap = pov[60:72]\n\nfig, ax = plt.subplots(figsize=(12,8))\npoveap.plot(ax=ax,\n kind='bar', \n color='#005FE5',\n legend=False)\nax.set_xlabel('Year', fontsize='14', fontweight='bold')\nax.set_ylabel('People (in millions)', fontsize='14', fontweight='bold')\nax.set_title('Absolute number of people living in Extreme Poverty', fontsize='14')\nfig.suptitle('East Asia and Pacific', fontsize='16', fontweight='bold')\n\nsns.set()\n\npoveur = pov[48:60]\n\nfig, ax = plt.subplots(figsize=(12,8))\npoveur.plot(ax=ax,\n kind='bar', \n color='#4544B2',\n legend=False)\nax.set_xlabel('Year', fontsize='14', fontweight='bold')\nax.set_ylabel('People (in millions)', fontsize='14', fontweight='bold')\nax.set_title('Absolute number of people living in Extreme Poverty', fontsize='14')\nfig.suptitle('Europe and Central Asia', fontsize='16', fontweight='bold')\n\nsns.set()\n\npovlat = pov[36:48]\n\nfig, ax = plt.subplots(figsize=(12,8))\npovlat.plot(ax=ax,\n kind='bar', \n color='teal',\n legend=False)\nax.set_xlabel('Year', fontsize='14', fontweight='bold')\nax.set_ylabel('People (in millions)', fontsize='14', 
fontweight='bold')\nax.set_title('Absolute number of people living in Extreme Poverty', fontsize='14')\nfig.suptitle('Latin America and the Caribbean', fontsize='16', fontweight='bold')\n\nsns.set()\n\npovsub = pov[:12]\n\nfig, ax = plt.subplots(figsize=(12,8))\npovsub.plot(ax=ax,\n kind='bar', \n color='purple',\n legend=False)\nax.set_xlabel('Year', fontsize='14', fontweight='bold')\nax.set_ylabel('People (in millions)', fontsize='14', fontweight='bold')\nax.set_title('Absolute number of people living in Extreme Poverty', fontsize='14')\nfig.suptitle('Sub-Saharan Africa', fontsize='16', fontweight='bold')", "While the absolute number of people living in poverty has decreased in every other region, there is an upward trend in Sub-Saharan Africa.\nBirth Rate", "brt = wb.download(indicator='SP.DYN.CBRT.IN', country=['all'], start=1981, end=2011)\nbrt=brt.stack()\nbrt= brt.unstack(level=[0,2])\nbrt=brt[['East Asia & Pacific (all income levels)', \n'Europe & Central Asia (all income levels)',\n'Latin America & Caribbean (all income levels)',\n'Sub-Saharan Africa (all income levels)',\n'World']]\nbrt.columns = ['East Asia & Pacific', \n'Europe & Central Asia',\n'Latin America & Caribbean',\n'Sub-Saharan Africa',\n'World']\n\nfig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', figsize=(16,12))\nfig.suptitle('Annual Birth Rate in Major World Regions', fontsize='16', fontweight='bold')\n\nfig.text(0.5, 0.04, 'Year', ha='center', fontsize='14', fontweight='bold')\nfig.text(0.04, 0.5, 'Number of Births per 1000 Inhabitants', va='center', rotation='vertical',\n fontsize='14', fontweight='bold')\n\nax1.plot(brt[[0]], color='#005FE5')\nax1.set_title('East Asia and Pacific', fontsize='14')\nax2.plot(brt[[1]], color='#4544B2')\nax2.set_title('Europe and Central Asia', fontsize='14')\nax3.plot(brt[[2]], color='teal')\nax3.set_title('Latin America and the Caribbean', fontsize='14')\nax4.plot(brt[[3]], color='purple')\nax4.set_title('Sub-Saharan Africa', 
fontsize='14')", "We can clearly see that the birthrate is going down in every region, and therefore worldwide. However, while Sub-Saharan Africa's rate is on a steep decline, the absolute value of that rate still far exceeds any other region's. We conclude from this graph that the reason for the growth of the extremely poor population in Sub-Saharan Africa is that the poverty reduction rate has been outpaced by the general population growth.\nInflation", "inf = wb.download(indicator='NY.GDP.DEFL.KD.ZG', country=['all'], start=1981, end=2011)\ninf.columns=['Annual Inflation']\ninf=inf.stack()\ninf=inf.unstack(level=[0,2])\ninf=inf[['East Asia & Pacific (all income levels)', \n'Europe & Central Asia (all income levels)',\n'Latin America & Caribbean (all income levels)',\n'Sub-Saharan Africa (all income levels)',\n'World']]\ninf.columns = ['East Asia & Pacific', \n'Europe & Central Asia',\n'Latin America & Caribbean',\n'Sub-Saharan Africa',\n'World']\ninf.head()\n\nsns.set()\n\nfig, ax = plt.subplots(figsize=(12,8))\n\ninf.plot(ax=ax, \n color=['#005FE5', '#4544B2', 'teal', 'purple', 'grey']\n )\nax.set_title('Annual Inflation Rate in Major World Regions', fontsize='16', fontweight='bold')\nax.set_xlabel('Year', fontsize='14', fontweight='bold')\nax.set_ylabel('Inflation Rate (%)', fontsize='14', fontweight='bold')\nax.legend(fontsize='12')", "The World Bank itself admits that “for those who have been able to move out of poverty, progress is often temporary: economic shocks, food insecurity and climate change threaten to rob them of their hard-won gains and force them back into poverty” [2].\nOne reason for this might be that the international poverty line remains constant while inflation fluctuates heavily in the region. This means that even if you were able to consistently earn $1.90 over a period, the real value of that sum is highly volatile. 
This means that some years, even if you are above the extreme poverty line by World Bank standards, it is very likely that you are even worse off than before due to a sudden increase in prices. \nLIMITATIONS\nThe primary limitation we encountered was the significant lack of information for many countries and many years. As such, the data we used to perform this analysis was incomplete. \nAnother significant limitation was that some of the datasets were not consistent in terms of what data was collected. For example, some datasets included all countries in a region while others only provided data on the developing countries within a particular region. For this reason, we were limited by which regions we could compare.\nThe data for the absolute number of people living in extreme is read in from a completed file and not a regularly-updated website. Therefore to remain current as time passes, the csv file will have to be changed in the future.\nLastly, some of the available data only extended until 2011. As such, we were unable to use the most recent figures throughout the project and we decided to limit all data to 2011 for consistency across the graphs.\nHowever, this analysis is based on one of the most accurate and readily-available economic data sources on the internet.\nCONCLUSION\nWhile some may conclude from the first set of graphs that the war against poverty is failing in Sub-Saharan Africa, it is important to note that a key factor contributing to the increase in poverty in is due to a factor out of organizations’ immediate control. Encouragingly, the fact that the portion of the population living in extreme poverty has decreased over time shows that economic growth and aid efforts have been making a difference. Unfortunately, the birth rate has outpaced the rate of extreme poverty reduction and in doing so has diminished the impact of poverty-alleviation efforts.\nREFERENCES\n[1] : Max Roser (2016) – ‘World Poverty’. Published online at OurWorldInData.org. 
Retrieved from: https://ourworldindata.org/world-poverty/ [Online Resource]\n[2] : The World Bank (2016) - 'Poverty Overview'. Published online at WorldBank.org. Retrieved from: http://www.worldbank.org/en/topic/poverty/overview" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
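The inflation argument above (a fixed nominal income losing purchasing power while the poverty line stays constant) can be made concrete with a small sketch; the inflation rates below are invented for illustration, not taken from the World Bank data:

```python
# Real value of a fixed nominal income of $1.90/day after several years
# of hypothetical annual inflation (rates are made up for this sketch)
nominal = 1.90
inflation = [0.05, 0.12, 0.30]  # 5%, 12%, 30% in successive years

real_value = nominal
for rate in inflation:
    real_value /= (1.0 + rate)  # deflate to year-0 prices

print(round(real_value, 2))  # purchasing power of $1.90 in year-0 prices
```

Under these assumed rates, three years of inflation erode roughly a third of the income's purchasing power even though the nominal figure never changes.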
turbomanage/training-data-analyst
courses/machine_learning/deepdive/09_sequence/sinewaves.ipynb
apache-2.0
[ "<h1> Time series prediction, end-to-end </h1>\n\nThis notebook illustrates several models to find the next value of a time-series:\n<ol>\n<li> Linear\n<li> DNN\n<li> CNN \n<li> RNN\n</ol>", "# You must update BUCKET, PROJECT, and REGION to proceed with the lab\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\nSEQ_LEN = 50\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\nos.environ['SEQ_LEN'] = str(SEQ_LEN)\nos.environ['TFVERSION'] = '1.15'", "<h3> Simulate some time-series data </h3>\n\nEssentially a set of sinusoids with random amplitudes and frequencies.", "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport tensorflow as tf\nprint(tf.__version__)\n\nimport numpy as np\nimport seaborn as sns\n\ndef create_time_series():\n freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6\n ampl = np.random.random() + 0.5 # 0.5 to 1.5\n noise = [np.random.random()*0.3 for i in range(SEQ_LEN)] # 0 to +0.3 uniformly distributed\n x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl + noise\n return x\n\nflatui = [\"#9b59b6\", \"#3498db\", \"#95a5a6\", \"#e74c3c\", \"#34495e\", \"#2ecc71\"]\nfor i in range(0, 5):\n sns.tsplot( create_time_series(), color=flatui[i%len(flatui)] ); # 5 series\n\ndef to_csv(filename, N):\n with open(filename, 'w') as ofp:\n for lineno in range(0, N):\n seq = create_time_series()\n line = \",\".join(map(str, seq))\n ofp.write(line + '\\n')\n\nimport os\ntry:\n os.makedirs('data/sines/')\nexcept OSError:\n pass\n\nnp.random.seed(1) # makes data generation reproducible\n\nto_csv('data/sines/train-1.csv', 1000) # 1000 sequences\nto_csv('data/sines/valid-1.csv', 250)\n\n!head -5 data/sines/*-1.csv", "<h3> Train model locally </h3>\n\nMake sure the code works as intended.\nPlease remember to update the \"--model=\" variable on the last line of the command\nYou may ignore any tensorflow deprecation warnings.\n<b>Note:</b> This step will be 
complete when you see a message similar to the following: \n\"INFO : tensorflow :Loss for final step: N.NNN...N\"", "%%bash\nDATADIR=$(pwd)/data/sines\nOUTDIR=$(pwd)/trained/sines\nrm -rf $OUTDIR\ngcloud ai-platform local train \\\n --module-name=sinemodel.task \\\n --package-path=${PWD}/sinemodel \\\n -- \\\n --train_data_path=\"${DATADIR}/train-1.csv\" \\\n --eval_data_path=\"${DATADIR}/valid-1.csv\" \\\n --output_dir=${OUTDIR} \\\n --model=linear --train_steps=10 --sequence_length=$SEQ_LEN", "<h3> Cloud AI Platform</h3>\n\nNow to train on Cloud AI Platform with more data.", "import shutil\nshutil.rmtree('data/sines', ignore_errors=True)\nos.makedirs('data/sines/')\nnp.random.seed(1) # makes data generation reproducible\nfor i in range(0,10):\n to_csv('data/sines/train-{}.csv'.format(i), 1000) # 1000 sequences\n to_csv('data/sines/valid-{}.csv'.format(i), 250)\n\n%%bash\ngsutil -m rm -rf gs://${BUCKET}/sines/*\ngsutil -m cp data/sines/*.csv gs://${BUCKET}/sines\n\n%%bash\nfor MODEL in linear dnn cnn rnn rnn2 rnnN; do\n OUTDIR=gs://${BUCKET}/sinewaves/${MODEL}\n JOBNAME=sines_${MODEL}_$(date -u +%y%m%d_%H%M%S)\n gsutil -m rm -rf $OUTDIR\n gcloud ai-platform jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=sinemodel.task \\\n --package-path=${PWD}/sinemodel \\\n --job-dir=$OUTDIR \\\n --scale-tier=BASIC \\\n --runtime-version=$TFVERSION \\\n -- \\\n --train_data_path=\"gs://${BUCKET}/sines/train*.csv\" \\\n --eval_data_path=\"gs://${BUCKET}/sines/valid*.csv\" \\\n --output_dir=$OUTDIR \\\n --train_steps=3000 --sequence_length=$SEQ_LEN --model=$MODEL\ndone", "Results\nWhen I ran it, these were the RMSEs that I got for different models. 
Your results will vary:\n| Model | Sequence length | # of steps | Minutes | RMSE |\n| --- | ----| --- | --- | --- | \n| linear | 50 | 3000 | 10 min | 0.150 |\n| dnn | 50 | 3000 | 10 min | 0.101 |\n| cnn | 50 | 3000 | 10 min | 0.105 |\n| rnn | 50 | 3000 | 11 min | 0.100 |\n| rnn2 | 50 | 3000 | 14 min | 0.105 |\n| rnnN | 50 | 3000 | 15 min | 0.097 |\nAnalysis\nYou can see there is a significant improvement when switching from the linear model to non-linear models. But within the non-linear models (DNN/CNN/RNN) performance is pretty similar. \nPerhaps that is because this is too simple a problem to require advanced deep learning models. In the next lab we'll deal with a problem where an RNN is more appropriate.\nCopyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
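The sequence generator used in the lab above can be mirrored in pure Python to show how each simulated series is framed for training; the input/label split shown here is an assumption about how the sinemodel package frames the problem (last value as the label), not code taken from it:

```python
import math
import random

SEQ_LEN = 50

def create_time_series(rng=random):
    # Mirrors the notebook's generator: a sinusoid with random
    # frequency and amplitude, plus uniform noise in [0, 0.3)
    freq = rng.random() * 0.5 + 0.1   # 0.1 to 0.6
    ampl = rng.random() + 0.5         # 0.5 to 1.5
    return [math.sin(i * freq) * ampl + rng.random() * 0.3
            for i in range(SEQ_LEN)]

# Assumed framing: first SEQ_LEN-1 points are inputs, the last is the label
seq = create_time_series()
inputs, label = seq[:-1], seq[-1]
print(len(inputs), isinstance(label, float))
```

Every value is bounded by amplitude plus noise, which is why a simple linear model already gets within striking distance of the deep models on this synthetic task.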
jacobdein/alpine-soundscapes
utilities/Organize recordings by site.ipynb
mit
[ "Organize recording files by site\nThis notebook copies sound files (recordings) into the pumilio directory structure based on a csv file containing information about the time and location of recording.\nRequired packages\n<a href=\"https://github.com/pydata/pandas\">pandas</a> <br />\nVariable declarations\ncsv_filepath – path to a csv file containing information about the time and location of each recording <br />\nsound_directory – path to the directory that will contain the recordings <br />\nsource_directory – path to the directory containing the unorganized recordings", "import os.path\n\ncsv_filepath = \"\"\n\nsound_directory = \"\"\n\nsource_directory = os.path.dirname(os.path.dirname(sound_directory))", "Import statements", "import pandas\nimport os.path\nfrom datetime import datetime\nimport subprocess", "Organize recordings", "visits = pandas.read_csv(csv_filepath)\n\nvisits = visits.sort_values(by=['Time'])\n\nvisits['ID'] = visits['ID'].map('{:g}'.format)\n\nsites = visits['ID'].drop_duplicates().dropna().as_matrix()\n\nfor site in sites:\n    path = os.path.join(sound_directory, str(site))\n    if os.path.exists(path):\n        os.rmdir(path)\n    os.mkdir(path)\n\nfor index, visit in visits.iterrows():\n    try:\n        dt = datetime.strptime(visit['Time'], '%Y-%m-%d %X')\n    except TypeError:\n        print('Time was not a string, but had a value of: \"{0}\"'.format(visit['Time']))\n        continue\n    source_file = os.path.join(source_directory, dt.strftime('%Y-%m-%d'), 'converted', '{0}.flac'.format(dt.strftime('%y%m%d-%H%M%S')))\n    destination_file = os.path.join(sound_directory, str(visit['ID']), '{0}.flac'.format(dt.strftime('%y%m%d-%H%M%S')))\n    if os.path.exists(source_file):\n        subprocess.check_output([\"cp\", source_file, destination_file])\n        print('copying {0}'.format(dt.strftime('%y%m%d-%H%M%S')))\n    else:\n        print('\\n')\n        print('{0} does not exist!'.format(dt.strftime('%y%m%d-%H%M%S')))\n        print(visit['Name'])\n        print('\\n')\nprint('\\n')\nprint('done')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
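The path logic in the notebook above can be isolated into a small testable sketch. Note two assumptions: the explicit '%H:%M:%S' format replaces the locale-dependent '%X' used in the notebook, and the helper name and sample paths are invented for illustration:

```python
import os.path
from datetime import datetime

def recording_paths(time_str, site_id, source_root, dest_root):
    """Build source/destination paths for one recording (hypothetical helper)."""
    dt = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
    stamp = dt.strftime('%y%m%d-%H%M%S')
    # Source: <root>/<date>/converted/<stamp>.flac, as in the notebook
    src = os.path.join(source_root, dt.strftime('%Y-%m-%d'), 'converted', stamp + '.flac')
    # Destination: <root>/<site id>/<stamp>.flac
    dst = os.path.join(dest_root, str(site_id), stamp + '.flac')
    return src, dst

src, dst = recording_paths('2015-06-01 04:30:00', 12, '/raw', '/sounds')
print(src)  # e.g. /raw/2015-06-01/converted/150601-043000.flac
print(dst)  # e.g. /sounds/12/150601-043000.flac
```

Separating the path construction from the copy loop makes the timestamp parsing easy to unit-test before touching any files.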
ydluo/qdyn
examples/notebooks/single_asperity_3D.ipynb
gpl-3.0
[ "Single asperity simulations (3D)\nIn this tutorial, we simulate slip on a 2D fault (within a 3D medium) with a single velocity-weakening asperity, embedded in a velocity-strengthening (creeping) matrix. We begin by importing some modules.", "# Make plots interactive in the notebook\n%matplotlib notebook\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport os\nimport sys\n\n# Add QDYN source directory to PATH\n# Go up in the directory tree\nupup = [os.pardir]*2\nqdyn_dir = os.path.join(*upup)\n# Get QDYN src directory\nsrc_dir = os.path.abspath(\n os.path.join(\n os.path.join(os.path.abspath(\"\"), qdyn_dir), \"src\")\n)\n# Append src directory to Python path\nsys.path.append(src_dir)\n\n# Import QDYN wrapper and plotting library\nfrom pyqdyn import qdyn", "To prepare a simulation, the global simulation and mesh parameters will have to be specified. This is done in three steps: \n\nSpecify global parameters, like simulation duration, output resolution, mesh size, and default mesh values\nRender the mesh (assigning default values to every element)\nOverride the default mesh parameter values to create heterogeneity in the simulation\n\nIn this simulation, the only heterogeneity stems from a lateral variation in the direct effect parameter $a$, which is chosen such that the asperity has $(a-b) < 0$, and such that the matrix has $(a - b) > 0$.", "# Instantiate the QDYN class object\np = qdyn()\n\n# Predefine parameters\nt_yr = 3600 * 24 * 365.0 # Seconds per year\nL = 5e3 # Length of fault along-strike\nW = 5e3 # Length of fault along-dip\nresolution = 5 # Mesh resolution / process zone width\n\n# Get the settings dict\nset_dict = p.set_dict\n\n\"\"\" Step 1: Define simulation/mesh parameters \"\"\"\n# Global simulation parameters\nset_dict[\"MESHDIM\"] = 2 # Simulation dimensionality (2D fault in 3D medium)\nset_dict[\"FAULT_TYPE\"] = 2 # Thrust fault\nset_dict[\"TMAX\"] = 5*t_yr # Maximum simulation time [s]\nset_dict[\"NTOUT\"] = 100 # Save output 
every N steps\nset_dict[\"NXOUT\"] = 2 # Snapshot resolution along-strike (every N elements)\nset_dict[\"NWOUT\"] = 2 # Snapshot resolution along-dip (every N elements)\nset_dict[\"V_PL\"] = 1e-9 # Plate velocity\nset_dict[\"MU\"] = 3e10 # Shear modulus\nset_dict[\"SIGMA\"] = 1e7 # Effective normal stress [Pa]\nset_dict[\"ACC\"] = 1e-7 # Solver accuracy\nset_dict[\"SOLVER\"] = 2 # Solver type (Runge-Kutta)\nset_dict[\"Z_CORNER\"] = -1e4 # Base of the fault (depth taken <0); NOTE: Z_CORNER must be < -W !\nset_dict[\"DIP_W\"] = 30 # Dip of the fault\n\n# Setting some (default) RSF parameter values\nset_dict[\"SET_DICT_RSF\"][\"A\"] = 0.2e-2 # Direct effect (will be overwritten later)\nset_dict[\"SET_DICT_RSF\"][\"B\"] = 1e-2 # Evolution effect\nset_dict[\"SET_DICT_RSF\"][\"DC\"] = 1e-3 # Characteristic slip distance\nset_dict[\"SET_DICT_RSF\"][\"V_SS\"] = set_dict[\"V_PL\"] # Reference velocity [m/s]\nset_dict[\"SET_DICT_RSF\"][\"V_0\"] = set_dict[\"V_PL\"] # Initial velocity [m/s]\nset_dict[\"SET_DICT_RSF\"][\"TH_0\"] = 0.99 * set_dict[\"SET_DICT_RSF\"][\"DC\"] / set_dict[\"V_PL\"] # Initial (steady-)state [s]\n\n# Process zone width [m]\nLb = set_dict[\"MU\"] * set_dict[\"SET_DICT_RSF\"][\"DC\"] / (set_dict[\"SET_DICT_RSF\"][\"B\"] * set_dict[\"SIGMA\"])\n# Nucleation length [m]\nLc = set_dict[\"MU\"] * set_dict[\"SET_DICT_RSF\"][\"DC\"] / ((set_dict[\"SET_DICT_RSF\"][\"B\"] - set_dict[\"SET_DICT_RSF\"][\"A\"]) * set_dict[\"SIGMA\"])\n\nprint(f\"Process zone size: {Lb} m \\t Nucleation length: {Lc} m\")\n\n# Find next power of two for number of mesh elements\nNx = int(np.power(2, np.ceil(np.log2(resolution * L / Lb))))\nNw = int(np.power(2, np.ceil(np.log2(resolution * W / Lb))))\n\n# Spatial coordinate for mesh\nx = np.linspace(-L/2, L/2, Nx, dtype=float)\nz = np.linspace(-W/2, W/2, Nw, dtype=float)\nX, Z = np.meshgrid(x, z)\nz = -(set_dict[\"Z_CORNER\"] + (z + W/2) * np.cos(set_dict[\"DIP_W\"] * np.pi / 180.))\n\n# Set mesh size and fault length\nset_dict[\"NX\"] 
= Nx\nset_dict[\"NW\"] = Nw\nset_dict[\"L\"] = L\nset_dict[\"W\"] = W\nset_dict[\"DW\"] = W / Nw\n# Set time series output node to the middle of the fault\nset_dict[\"IC\"] = Nx * (Nw // 2) + Nx // 2\n\n\"\"\" Step 2: Set (default) parameter values and generate mesh \"\"\"\np.settings(set_dict)\np.render_mesh()\n\n\"\"\" Step 3: override default mesh values \"\"\"\n# Distribute direct effect a over mesh according to some arbitrary function\nscale = 1e3\np.mesh_dict[\"A\"] = 2 * set_dict[\"SET_DICT_RSF\"][\"B\"] * (1 - 0.9*np.exp(- (X**2 + Z**2) / (2 * scale**2))).ravel()\n\n# Write input to qdyn.in\np.write_input()", "To see the effect of setting a heterogeneous value of a over the mesh, we can plot $(a-b)$ versus position on the fault:", "plt.figure()\nplt.pcolormesh(x * 1e-3, z * 1e-3, (p.mesh_dict[\"A\"] - p.mesh_dict[\"B\"]).reshape(X.shape), \n vmin=-0.01, vmax=0.01, cmap=\"coolwarm\")\nplt.colorbar()\nplt.xlabel(\"x [km]\")\nplt.ylabel(\"z [km]\")\nplt.gca().invert_yaxis()\nplt.tight_layout()\nplt.show()", "As desired, the asperity is defined by $(a-b) < 0$, embedded in a stable matrix with $(a-b) > 0$.\nThe p.write_input() command writes a qdyn.in file to the current working directory, which is read by QDYN at the start of the simulation. To start the simulation, call p.run(). Note that in this notebook, the screen output (stdout) is captured by the console, so you won't see any output here.", "p.run()", "During the simulation, output is flushed to disk every NTOUT time steps. This output can be reloaded without re-running the simulation, so you only have to call p.run() again if you made any changes to the input parameters. 
To read/process the output, call:", "p.read_output(read_ot=True, read_ox=True)", "Instead of using an auxiliary library of plotting functions (plot_functions.py), we can directly access the time series output from p.ot to plot the slip rate and shear stress in the middle of the fault:", "# Time-series plot at the middle of the fault\nplt.figure(figsize=(9, 4))\n\n# Slip rate\nplt.subplot(121)\nplt.plot(p.ot[0][\"t\"] / t_yr, p.ot[0][\"v\"])\nplt.xlabel(\"t [years]\")\nplt.ylabel(\"V [m/s]\")\nplt.yscale(\"log\")\n\n# Shear stress\nplt.subplot(122)\nplt.plot(p.ot[0][\"t\"] / t_yr, p.ot[0][\"tau\"] * 1e-6)\nplt.xlabel(\"t [years]\")\nplt.ylabel(\"stress [MPa]\")\n\nplt.tight_layout()\nplt.show()", "Similarly, we can access individual snapshots from p.ox:", "# Get the x, z coordinates of the fault\nx_ox = p.ox[\"x\"].unique()\nz_ox = p.ox[\"z\"].unique()\n\nX, Z = np.meshgrid(x_ox, z_ox)\n\n# Number of snapshots\nNt = len(p.ox[\"v\"]) // (len(x_ox) * len(z_ox))\n\n# Get velocity snapshots\nV_ox = p.ox[\"v\"].values.reshape((Nt, len(z_ox), len(x_ox)))\n\n# Plot one snapshot of slip rate\nplt.figure()\n\nplt.pcolormesh(x_ox * 1e-3, -z_ox * 1e-3, np.log10(V_ox[14]), cmap=\"magma\", vmin=-9, vmax=-2)\nplt.xlabel(\"x [km]\")\nplt.ylabel(\"z [km]\")\n\nplt.gca().invert_yaxis()\nplt.colorbar()\n\nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
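The mesh-sizing arithmetic in the QDYN record above (process zone width, nucleation length, and the power-of-two element count) can be checked in isolation. A minimal sketch using the tutorial's own parameter values; the variable names mirror the notebook, but this snippet is not part of QDYN itself:

```python
import numpy as np

# Parameter values copied from the set_dict above
MU = 3e10       # shear modulus [Pa]
DC = 1e-3       # characteristic slip distance [m]
A, B = 0.2e-2, 1e-2
SIGMA = 1e7     # effective normal stress [Pa]
L = 5e3         # fault length along-strike [m]
resolution = 5  # elements per process zone

# Process zone width and nucleation length, as computed in the notebook
Lb = MU * DC / (B * SIGMA)
Lc = MU * DC / ((B - A) * SIGMA)

# Element count rounded up to the next power of two
Nx = int(np.power(2, np.ceil(np.log2(resolution * L / Lb))))

print(Lb, Lc, Nx)  # 300.0 375.0 128
```

With these values the process zone is 300 m wide, so resolving it with 5 elements over a 5 km fault requires at least 84 elements, which rounds up to 128.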
sdpython/ensae_teaching_cs
_doc/notebooks/exams/td_note_2017.ipynb
mit
[ "1A.e - Graded exercise session, December 16, 2016\nLinear regression with categorical variables.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Exercise 1\nWe assume we have a set of observations $(X_i, Y_i)$ with $X_i, Y_i \\in \\mathbb{R}$. Linear regression consists in finding a linear relation $Y_i = a X_i + b + \\epsilon_i$ that minimizes the variance of the noise. We set:\n$$E(a, b) = \\sum_i (Y_i - (a X_i + b))^2$$\nWe look for $a, b$ such that:\n$$a^*, b^* = \\arg \\min E(a, b) = \\arg \\min \\sum_i (Y_i - (a X_i + b))^2$$\nThe function is differentiable and we find:\n$$\\frac{\\partial E(a,b)}{\\partial a} = - 2 \\sum_i X_i ( Y_i - (a X_i + b)) \\text{ and } \\frac{\\partial E(a,b)}{\\partial b} = - 2 \\sum_i ( Y_i - (a X_i + b))$$\nIt then suffices to set the derivatives to zero. We solve a system of linear equations. We write:\n$$\\begin{array}{l} \\mathbb{E} X = \\frac{1}{n}\\sum_{i=1}^n X_i \\text{ and } \\mathbb{E} Y = \\frac{1}{n}\\sum_{i=1}^n Y_i \\\\ \\mathbb{E}(X^2) = \\frac{1}{n}\\sum_{i=1}^n X_i^2 \\text{ and } \\mathbb{E}(XY) = \\frac{1}{n}\\sum_{i=1}^n X_i Y_i \\end{array}$$\nFinally:\n$$\\begin{array}{l} a^* = \\frac{ \\mathbb{E}(XY) - \\mathbb{E} X \\mathbb{E} Y}{\\mathbb{E}(X^2) - (\\mathbb{E} X)^2} \\text{ and } b^* = \\mathbb{E} Y - a^* \\mathbb{E} X \\end{array}$$\nQ1\nWe generate a point cloud with the following code:", "import random\ndef generate_xy(n=100, a=0.5, b=1):\n res = []\n for i in range(0, n):\n x = random.uniform(0, 10)\n res.append((x, x*a + b + random.gauss(0,1)))\n return res\n\ngenerate_xy(10)", "Q2\nWrite a function that computes $\\mathbb{E} X, \\mathbb{E} Y, \\mathbb{E}(XY), \\mathbb{E}(X^2)$. Several students asked me what obs was. It is simply the result of the previous function.", "def calcule_exyxyx2(obs):\n sx = 0\n sy = 0\n sxy = 0\n sx2 = 0\n for x, y in obs:\n sx += x\n sy += y\n sxy += x * y\n sx2 += x * x\n n = len(obs)\n return sx/n, sy/n, sxy/n, sx2/n\n\nobs = generate_xy(10)\ncalcule_exyxyx2(obs)", "Q3\nCompute the quantities $a^*$, $b^*$. A priori, we should recover something fairly close to the values chosen in the first question: $a=0.5$, $b=1$.", "def calcule_ab(obs):\n sx, sy, sxy, sx2 = calcule_exyxyx2(obs)\n a = (sxy - sx * sy) / (sx2 - sx**2)\n b = sy - a * sx\n return a, b\n\ncalcule_ab(obs)", "Q4\nComplete the program.", "import random\ndef generate_caty(n=100, a=0.5, b=1, cats=[\"rouge\", \"vert\", \"bleu\"]):\n res = []\n for i in range(0, n):\n x = random.randint(0,2)\n cat = cats[x]\n res.append((cat, 10*x**2*a + b + random.gauss(0,1)))\n return res\n\ngenerate_caty(10)", "Q5\nWe would still like to regress the variable $Y$ on the categorical variable. We build a function that assigns an arbitrary but distinct number to each category. The function returns a dictionary with the categories as keys and the numbers as values.", "def numero_cat(obs):\n mapping = {}\n for color, y in obs:\n if color not in mapping:\n mapping[color] = len(mapping)\n return mapping\n\nobs = generate_caty(100)\nnumero_cat(obs)", "Q6\nWe build the matrix $M_{ic}$ such that $M_{ic}$ is 1 if $c$ is the number of the category of $X_i$, and 0 otherwise.", "import numpy\ndef construit_M(obs):\n mapping = numero_cat(obs)\n M = numpy.zeros((len(obs), 3))\n for i, (color, y) in enumerate(obs):\n cat = mapping[color]\n M[i, cat] = 1.0\n return M\n \nM = construit_M(obs)\nM[:5]", "Q7\nIt is advisable to convert the matrix $M$ and the $Y$ to numpy format. We add a constant column to the matrix $M$. Typing numpy add column into a search engine leads you straight to this result: How to add an extra column to a numpy array.", "def convert_numpy(obs):\n M = construit_M(obs)\n Mc = numpy.hstack([M, numpy.ones((M.shape[0], 1))])\n Y = numpy.array([y for c, y in obs])\n return M, Mc, Y.reshape((M.shape[0], 1))\n\nM, Mc, Y = convert_numpy(obs)\nMc[:5], Y[:5]", "Q8\nWe solve the multidimensional regression by applying the formula $C^* = (M'M)^{-1}M'Y$. Question 7 was not of much use except to introduce the hstack function, since the rank of the matrix Mc is 3 < 4.", "alpha = numpy.linalg.inv(M.T @ M) @ M.T @ Y\nalpha", "Q9\nThe regression determines the coefficients $\\alpha$ in the regression $Y_i = \\alpha_{rouge} \\mathbb{1}_{X_i = rouge} + \\alpha_{vert} \\mathbb{1}_{X_i = vert} + \\alpha_{bleu} \\mathbb{1}_{X_i = bleu} + \\epsilon_i$.\nBuild the vector $\\hat{Y_i} = \\alpha_{rouge} \\mathbb{1}_{X_i = rouge} + \\alpha_{vert} \\mathbb{1}_{X_i = vert} + \\alpha_{bleu} \\mathbb{1}_{X_i = bleu}$.", "Yp = numpy.zeros((M.shape[0], 1))\nfor i in range(3):\n Yp[ M[:,i] == 1, 0] = alpha[i, 0]\nYp[:5]", "Q10\nUse the result of question 3 to compute the coefficients of the regression $Y_i = a^* \\hat{Y_i} + b^*$.", "obs = [(x, y) for x, y in zip(Yp, Y)]\ncalcule_ab( obs )", "We arrive at the result $Y = \\hat{Y} + \\epsilon$. We have associated a value with each category in such a way that the regression of $Y$ on this value is this value itself. In other words, it is the best approximation of $Y$ on each category. What does this correspond to? The second exam paper answers this question.\nExercise 2\nQ1 - Q2 - Q3\nThese are the same answers.\nQ4\nA very simple way to simulate a multinomial distribution is to start from a discrete uniform distribution with values between 1 and 10. We draw a number: if it is less than or equal to 5, it will be category 0, 1 if it is less than or equal to 8, and 2 otherwise.", "def generate_caty(n=100, a=0.5, b=1, cats=[\"rouge\", \"vert\", \"bleu\"]):\n res = []\n for i in range(0, n):\n # we want 50% rouge, 30% vert, 20% bleu\n x = random.randint(1, 10)\n if x <= 5: x = 0\n elif x <= 8: x = 1\n else : x = 2\n cat = cats[x]\n res.append((cat, 10*x**2*a + b + random.gauss(0,1)))\n return res\n\nobs = generate_caty(10)\nobs", "Q5\nWe would still like to regress the variable $Y$ on the categorical variable. We start by counting the categories. Build a function that counts how many times each category appears in the data (a histogram).", "def histogram_cat(obs):\n h = dict()\n for color, y in obs:\n h[color] = h.get(color, 0) + 1\n return h\n\nhistogram_cat(obs)", "Q6\nBuild a function that computes the mean of the $Y_i$ for each category: $\\mathbb{E}(Y | rouge)$, $\\mathbb{E}(Y | vert)$, $\\mathbb{E}(Y | bleu)$. The function returns a dictionary {color: mean}.", "def moyenne_cat(obs):\n h = dict()\n sy = dict()\n for color, y in obs:\n h[color] = h.get(color, 0) + 1\n sy[color] = sy.get(color, 0) + y\n for k, v in h.items():\n sy[k] /= v\n return sy\n\nmoyenne_cat(obs)", "The exam statement was somewhat misleading, because the suggested function did not allow these means to be computed. It suffices to change it.\nQ7\nBuild the vector $Z_i = \\mathbb{E}(Y | rouge)\\mathbb{1}_{X_i = rouge} + \\mathbb{E}(Y | vert) \\mathbb{1}_{X_i = vert} + \\mathbb{E}(Y | bleu) \\mathbb{1}_{X_i = bleu}$.", "moys = moyenne_cat(obs)\nZ = [moys[c] for c, y in obs]\nZ[:5]", "Q8\nUse the result of question 3 to compute the coefficients of the regression $Y_i = a^* Z_i + b^*$.", "obs2 = [(z, y) for (c, y), z in zip(obs, Z)]\ncalcule_ab( obs2 )", "We arrive at the result $Y = \\hat{Y} + \\epsilon$. We have associated a value with each category in such a way that the regression of $Y$ on this value is this value itself.\nQ9\nCompute the variance / covariance matrix for the variables $(Y_i)$, $(Z_i)$, $(Y_i - Z_i)$, $\\mathbb{1}_{X_i = rouge}$, $\\mathbb{1}_{X_i = vert}$, $\\mathbb{1}_{X_i = bleu}$.", "bigM = numpy.empty((len(obs), 6))\nbigM[:, 0] = [o[1] for o in obs]\nbigM[:, 1] = Z\nbigM[:, 2] = bigM[:, 0] - bigM[:, 1]\nbigM[:, 3] = [ 1 if o[0] == \"rouge\" else 0 for o in obs]\nbigM[:, 4] = [ 1 if o[0] == \"vert\" else 0 for o in obs]\nbigM[:, 5] = [ 1 if o[0] == \"bleu\" else 0 for o in obs]\nbigM[:5]", "We use the cov function.", "c = numpy.cov(bigM.T)\nc", "We display the results a little more readably:", "import pandas\npandas.DataFrame(c).applymap(lambda x: '%1.3f' % x)", "Q10\nWe swap rouge and vert. Build the vector $W_i = \\mathbb{E}(Y | rouge)\\mathbb{1}_{X_i = vert} + \\mathbb{E}(Y | vert)\\mathbb{1}_{X_i = rouge} + \\mathbb{E}(Y | bleu)\\mathbb{1}_{X_i = bleu}$. Use the result of question 3 to compute the coefficients of the regression $Y_i = a^* W_i + b^*$. Check that the error is larger.", "moys = moyenne_cat(obs)\nmoys[\"rouge\"], moys[\"vert\"] = moys.get(\"vert\", 0), moys.get(\"rouge\", 0)\n\nW = [moys[c] for c, y in obs]\nobs3 = [(w, y) for (c, y), w in zip(obs, W)]\ncalcule_ab( obs3 )\n\ndef calcule_erreur(obs):\n a, b = calcule_ab(obs)\n e = [(a*x + b - y)**2 for x, y in obs]\n return sum(e) / len(obs)\n\ncalcule_erreur(obs2), calcule_erreur(obs3)", "It is indeed much larger.\nConclusion\nMultiple correspondence analysis is one way to study the modalities of categorical variables, but it does not do prediction. The logit - probit model predicts a binary variable from continuous variables, but in our case it is the variable to predict that is continuous. To make a prediction, one converts the categories into numerical variables (see Categorical Variables). The R language is better equipped for this: Regression on categorical variables. The categorical-encoding module is available in Python. This exam describes one method among others for transforming categories into continuous variables." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
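The central claim of the exam record above — encode each category by the conditional mean $\mathbb{E}(Y\,|\,\text{category})$, and the regression of $Y$ on that encoding has slope $a^* = 1$ and intercept $b^* = 0$ — can be verified on made-up data. Only the encoding trick comes from the notebook; the category means below are invented for illustration:

```python
import random

random.seed(0)
# Hypothetical per-category means, mimicking the rouge/vert/bleu setup
true_means = {"rouge": 1.0, "vert": 6.0, "bleu": 21.0}
obs = [(c, true_means[c] + random.gauss(0, 1))
       for c in random.choices(list(true_means), k=1000)]

# Target encoding: conditional mean of Y within each category
means = {c: 0.0 for c in true_means}
counts = {c: 0 for c in true_means}
for c, y in obs:
    means[c] += y
    counts[c] += 1
for c in means:
    means[c] /= counts[c]

# Least-squares fit of Y on Z = means[category], using Exercise 1's formulas
zs = [means[c] for c, _ in obs]
ys = [y for _, y in obs]
n = len(obs)
ez, ey = sum(zs) / n, sum(ys) / n
ezy = sum(z * y for z, y in zip(zs, ys)) / n
ez2 = sum(z * z for z in zs) / n
a = (ezy - ez * ey) / (ez2 - ez ** 2)
b = ey - a * ez
print(a, b)  # slope 1 and intercept 0, up to floating-point error
```

The slope is exactly 1 because, within each category, $Z$ is constant and equal to the group mean of $Y$, so $\mathrm{cov}(Z, Y) = \mathrm{var}(Z)$.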
yhat/ggplot
docs/how-to/Making Multiple Plots.ipynb
bsd-2-clause
[ "%matplotlib inline\nfrom ggplot import *", "Making Multiple Plots\nMaking multiple plots at the same time is easy in ggplot. Plots can either be shown inline using the print function, the __repr__ (what python prints out by default for something), or ggplot's save function.\nSince we're trying to make multiple plots, the __repr__ example doesn't really apply here, so we'll be focusing on using print and save to make multiple plots.\nLet's say you want to generate a histogram of price data for each type of diamond cut in the diamonds dataset. First thing you need to do is split your dataset up into groups. You can do this using the groupby function in pandas.", "for i, group in diamonds.groupby(\"cut\"):\n print group.head()", "Now to create and render a plot for each one of these subgroups, just make a ggplot object, add a geom_histogram layer, and then print the plot.", "for name, group in diamonds.groupby(\"cut\"):\n p = ggplot(group, aes(x='price')) + geom_histogram() + ggtitle(name)\n print(p)", "If you want to save the plots to a file instead, just use save instead of print.", "for name, group in diamonds.groupby(\"cut\"):\n p = ggplot(group, aes(x='price')) + geom_histogram() + ggtitle(name)\n filename = \"price-distribution-\" + name + \".png\"\n p.save(filename)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
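The save-per-group pattern in the ggplot record above boils down to iterating a pandas groupby. A small self-contained sketch with made-up data standing in for the diamonds dataset; the plotting calls are omitted so it runs headless:

```python
import pandas as pd

# Hypothetical stand-in for the diamonds dataset
df = pd.DataFrame({
    "cut": ["Ideal", "Premium", "Ideal", "Good", "Premium"],
    "price": [326, 410, 335, 327, 552],
})

filenames = []
for name, group in df.groupby("cut"):
    # here you would build the plot for `group` and call p.save(filename)
    filenames.append("price-distribution-" + name + ".png")

print(filenames)
```

Note that `groupby` yields the groups in sorted key order, so the files come out as Good, Ideal, Premium.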
juanshishido/tufte
tufte-in-python.ipynb
gpl-2.0
[ "Tufte\nA Jupyter notebook with examples of how to use tufte.\nIntroduction\nCurrently, there are four supported plot types:\n* bar\n* boxplot\n* line\n* scatter\nThe designs are based on Edward R. Tufte's designs in The Visual Display of Quantitative Information.\nThis module is built on top of matplotlib, which means that it's possible to use those functions or methods in conjunction with tufte plots. In addition, an effort has been made to keep most changes to matplotlibrc properties contained within the module. That is, we try not to make global changes that will affect other plots.\nUse\nLet's start by importing several libraries.", "%matplotlib inline\n\nimport string\nimport random\nfrom collections import defaultdict\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nimport tufte", "tufte plots can take inputs of several types: list, np.ndarray, pd.Series, and, in some cases, pd.DataFrame.\nTo create a line plot, do the following. (Note: if you'd like higher resolution plots, use mpl.rc('savefig', dpi=200).)", "tufte.line(range(3), range(3), figsize=(5, 5))", "You'll notice that the default Tufte line style includes circle markers with gaps between line segments. You are also able to specify the figure size directly to the line function.\nThere are several other differences. We'll create another plot below as an example.", "x = range(1967, 1977 + 1)\ny = [310.2, 330, 375, 385, 385.6, 395, 387.5, 380, 392, 407.1, 380]\n\ntufte.line(x, y, figsize=(8, 4))", "First, we use Tufte's range-frame concept, which aims to make the frame (axis) lines \"effective data-communicating element[s]\" by showing the minimum and maximum values in each axis. This way, the tick labels are more informative. In this example, the range of the outcome variable is 96.9 units (407.1 - 310.2). 
Similarly, this data covers the years 1967 through 1977, inclusive.\nThe range-frame is applied to both axes for line and scatter plots.", "np.random.seed(8675309)\n\nfig, ax = tufte.scatter(np.random.randint(5, 95, 100), np.random.randint(1000, 1234, 100), figsize=(8, 4))\n\nplt.title('Title')\nax.set_xlabel('x-axis')", "You'll also notice that tufte.scatter() returns figure and axis objects. This is true for all tufte plots. With this, we can add a title to the figure and a label to the x-axis, for example. tufte plots are meant to be able to interact with matplotlib functions and methods.\nWhen you need to create a bar plot, do the following.", "np.random.seed(8675309)\n\ntufte.bar(range(10),\n np.random.randint(1, 25, 10),\n label=['First', 'Second', 'Third', 'Fourth', 'Fifth',\n 'Sixth', 'Seventh', 'Eight', 'Ninth', 'Tenth'],\n figsize=(8, 4))", "A feature of the bar() function is the ability for x-axis labels to auto-rotate. We can see this when we change the one of the labels.", "np.random.seed(8675309)\n\ntufte.bar(range(10),\n np.random.randint(1, 25, 10),\n label=['First', 'Second', 'Third', 'Fourth', 'Fifth',\n 'Sixth', 'Lucky 7th', 'Eight', 'Ninth', 'Tenth'],\n figsize=(8, 4))", "Tufte's boxplot is, perhaps, the most radical redesign of an existing plot. His approach is to maximize data-ink, the \"non-erasable core of a graphic,\" by removing unnecessary elements. The boxplot removes boxes (which is why we refer to it as bplot()) and caps and simply shows a dot between two lines. 
This plot currently only takes a list, np.ndarray, or pd.DataFrame.\nLet's create a DataFrame.", "n_cols = 10 # Must be less than or equal to 26\nsize = 100\n\nletters = string.ascii_lowercase\n\ndf_dict = defaultdict(list)\nfor c in letters[:n_cols]:\n df_dict[c] = np.random.randint(random.randint(25, 50), random.randint(75, 100), size)\n\ndf = pd.DataFrame(df_dict)\n\ntufte.bplot(df, figsize=(8, 4))", "The dot represents the median and the lines correspond to the top and bottom 25% of the data. The empty space between the lines is the interquartile range.\nIssues\nRange-Frame\nYou may have noticed&mdash;if you cloned this repo and ran the notebook&mdash;that the range-frame feature isn't perfect. It is possible, for example, for a minimum or maximum value to be too close to an existing tick label, causing overlap.\nAdditionally, in cases where the data in a given dimension (x or y) contains float values, the tick labels are converted to float. (This isn't the issue.)", "np.random.seed(8675309)\n\ntufte.scatter(np.random.randn(100), np.random.randn(100), figsize=(8, 4))", "This becomes problematic based on our decision to round to the nearest tenth. In this example, the maximum value on the y-axis might be 2.56, which gets rounded to 2.6. A reader might incorrectly conclude that the maximum value in y is 2.6.\n(The above plot also shows what can happen when the minimum or maximum value is too close to an existing tick label. See -2.2 and -2.0 in y.)\nDevelopment\nTufte's book provides many useful and functional plots, many of which we plan to add to this module." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
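The rounding caveat in the tufte record above can be shown in two lines: replacing an axis extreme with its value rounded to the nearest tenth can overstate the data range. The 2.56 value is the notebook's own example:

```python
data_max = 2.56            # actual maximum of the data
tick = round(data_max, 1)  # the range-frame tick label after rounding
print(tick)                # 2.6, which exceeds every actual data point
```

A reader of the plot sees 2.6 as the top tick and may conclude the data reaches 2.6, even though no observation does.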
MBARIMike/oxyfloat
notebooks/explore_cached_oxyfloat_data.ipynb
mit
[ "Explore locally cached Argo oxygen float data - second in a series of Notebooks\nUse the oxyfloat module to get data and Pandas to operate on it to test the ability to easily perform calibrations\n(See build_oxyfloat_cache.ipynb for the work that leads to this Notebook.)\nAdd parent directory to the path and get an ArgoData object that uses the default local cache.", "import sys\nsys.path.insert(0, '../')\n\nfrom oxyfloat import ArgoData\nad = ArgoData()", "Get the default list of floats that have oxygen data.", "wmo_list = ad.get_oxy_floats_from_status()", "We can explore the distribution of AGEs of the Argo floats by getting the status data in a DataFrame (sdf).", "sdf = ad._get_df(ad._STATUS)", "Define a function (dist_plot) and plot the distribution of the AGE column.", "%pylab inline\ndef dist_plot(df, title):\n from datetime import date\n ax = df.hist(bins=100)\n ax.set_xlabel('AGE (days)')\n ax.set_ylabel('Count')\n ax.set_title('{} as of {}'.format(title, date.today()))\n \ndist_plot(sdf['AGE'], 'Argo float AGE distribution')", "There are over 600 floats with an AGE of 0. The .get_oxy_floats_from_status() method does not select these floats as I believe they are 'inactive'. Let's count the number of non-greylisted oxygen floats at various AGEs so that we can build a reasonably sized test cache.", "sdfq = sdf.query('(AGE != 0) & (OXYGEN == 1) & (GREYLIST != 1)')\ndist_plot(sdfq['AGE'], title='Argo oxygen float AGE distribution')\nprint 'Count age_gte 0340:', len(sdfq.query('AGE >= 340'))\nprint 'Count age_gte 1000:', len(sdfq.query('AGE >= 1000'))\nprint 'Count age_gte 2000:', len(sdfq.query('AGE >= 2000'))\nprint 'Count age_gte 2200:', len(sdfq.query('AGE >= 2200'))\nprint 'Count age_gte 3000:', len(sdfq.query('AGE >= 3000'))", "Compare the 2200 count with what .get_oxy_floats_from_status(age_gte=2200) returns.", "len(ad.get_oxy_floats_from_status(age_gte=2200))", "That's reassuring! 
Now, let's build a custom cache file with the 19 floats that have an AGE >= 2200 days.\nFrom a shell window execute this script:\nbash\nscripts/load_cache.py --age 2200 --profiles 2 -v\nThis will take several minutes to download the data and build the cache. Once it's finished you can execute the cells below (you will need to enter the exact name of the cache_file which the above command displays in its INFO messages).", "%%time\nad = ArgoData(cache_file='../oxyfloat/oxyfloat_fixed_cache_age2200_profiles2.hdf')\nwmo_list = ad.get_oxy_floats_from_status(2200)\ndf = ad.get_float_dataframe(wmo_list, max_profiles=2)", "Plot the profiles.", "# Parameter long_name and units copied from attributes in NetCDF files\ntime_range = '{} to {}'.format(df.index.get_level_values('time').min(), \n df.index.get_level_values('time').max())\nparms = {'TEMP_ADJUSTED': 'SEA TEMPERATURE IN SITU ITS-90 SCALE (degree_Celsius)', \n 'PSAL_ADJUSTED': 'PRACTICAL SALINITY (psu)',\n 'DOXY_ADJUSTED': 'DISSOLVED OXYGEN (micromole/kg)'}\n\nplt.rcParams['figure.figsize'] = (18.0, 8.0)\nfig, ax = plt.subplots(1, len(parms), sharey=True)\nax[0].invert_yaxis()\nax[0].set_ylabel('SEA PRESSURE (decibar)')\n\nfor i, (p, label) in enumerate(parms.iteritems()):\n ax[i].set_xlabel(label)\n ax[i].plot(df[p], df.index.get_level_values('pressure'), '.')\n \nplt.suptitle('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)", "Plot the profiles on a map.", "import pylab as plt\nfrom mpl_toolkits.basemap import Basemap\n\nplt.rcParams['figure.figsize'] = (18.0, 8.0)\nm = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl')\nm.fillcontinents(color='0.8')\n\nm.scatter(df.index.get_level_values('lon'), df.index.get_level_values('lat'), latlon=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
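The status-table filtering in the oxyfloat record above is a plain pandas query. A sketch on a tiny made-up frame — the column names follow the notebook, but the rows are invented:

```python
import pandas as pd

# Hypothetical stand-in for the Argo status table
sdf = pd.DataFrame({
    "AGE": [0, 500, 2500, 3100],
    "OXYGEN": [1, 1, 1, 0],
    "GREYLIST": [0, 1, 0, 0],
})

# Same filter as sdfq above: active, oxygen-equipped, not greylisted
sdfq = sdf.query('(AGE != 0) & (OXYGEN == 1) & (GREYLIST != 1)')
print(len(sdfq), len(sdfq.query('AGE >= 2200')))
```

Only the third row survives: the first is inactive (AGE 0), the second is greylisted, and the fourth lacks an oxygen sensor.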
davidwhogg/Avast
notebooks/fakedata2.ipynb
mit
[ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fmin_cg, minimize\nc = 2.99792458e8 # m/s\n\ndef doppler(v):\n frac = (1. - v/c) / (1. + v/c)\n return np.sqrt(frac)\n \ndef state(xs, xps, yps):\n # returns a, b such that ys = a * xs + b\n yps = np.concatenate((yps, [yps[-1]]), axis=0) # hack for end of grid\n xps = np.concatenate((xps, [xps[-1]+1.]), axis=0) # hack for end of grid\n mps = np.searchsorted(xps, xs, side='left')\n yps = np.concatenate(([yps[0]], yps), axis=0) # hack for end of grid\n xps = np.concatenate(([xps[0]-1.], xps), axis=0) # hack for end of grid\n ehs = (yps[mps+1] - yps[mps])/(xps[mps+1] - xps[mps])\n bes = yps[mps] - ehs * xps[mps]\n return ehs, bes\n \ndef P(ehs, bes, xs):\n return ehs * xs + bes\n \ndef dPdx(ehs):\n return ehs # lol\n \ndef Pdot(lnlambdas, lnlambdaps, lnfluxps, v):\n if v.ndim == 0:\n N = 1\n else:\n N = len(v)\n M = len(lnlambdas)\n lnlambdas_shifted = np.tile(lnlambdas, (N,1)) + np.tile(np.log(doppler(v)), (M,1)).T # N x M\n ehs, bes = state(lnlambdas_shifted, lnlambdaps, lnfluxps)\n return P(ehs, bes, lnlambdas_shifted) # this is lnfluxs", "The following is code copied from EPRV/fakedata.py to generate a realistic fake spectrum:", "def oned_gaussian(xs, mm, sig):\n return np.exp(-0.5 * (xs - mm) ** 2 / sig ** 2) / np.sqrt(2. 
* np.pi * sig)\n\ndef make_synth(rv, xs, ds, ms, sigs):\n \"\"\"\n `rv`: radial velocity in m/s (or same units as `c` above\n `xs`: `[M]` array of wavelength values\n `ds`: depths at line centers\n `ms`: locations of the line centers in rest wavelength\n `sigs`: Gaussian sigmas of lines\n \"\"\"\n synths = np.ones_like(xs)\n for d, m, sig in zip(ds, ms, sigs):\n synths *= np.exp(d *\n oned_gaussian(xs * doppler(rv), m, sig))\n return synths\n\ndef make_data(N, xs, ds, ms, sigs):\n \"\"\"\n `N`: number of spectra to make\n `xs`: `[M]` array of wavelength values\n `ds`: depth-like parameters for lines\n `ms`: locations of the line centers in rest wavelength\n `sigs`: Gaussian sigmas of lines\n \"\"\"\n np.random.seed(2361794231)\n M = len(xs)\n data = np.zeros((N, M))\n ivars = np.zeros((N, M))\n rvs = 30000. * np.random.uniform(-1., 1., size=N) # 30 km/s bc Earth ; MAGIC\n for n, rv in enumerate(rvs):\n ivars[n, :] = 10000. # s/n = 100 ; MAGIC\n data[n, :] = make_synth(rv, xs, ds, ms, sigs)\n data[n, :] += np.random.normal(size=M) / np.sqrt(ivars[n, :])\n return data, ivars, rvs\n\nfwhms = [0.1077, 0.1113, 0.1044, 0.1083, 0.1364, 0.1, 0.1281,\n 0.1212, 0.1292, 0.1526, 0.1575, 0.1879] # FWHM of Gaussian fit to line (A)\nsigs = np.asarray(fwhms) / 2. / np.sqrt(2. * np.log(2.)) # Gaussian sigma (A)\nms = [4997.967, 4998.228, 4998.543, 4999.116, 4999.508, 5000.206, 5000.348,\n 5000.734, 5000.991, 5001.229, 5001.483, 5001.87] # line center (A)\nds = [-0.113524, -0.533461, -0.030569, -0.351709, -0.792123, -0.234712, -0.610711,\n -0.123613, -0.421898, -0.072386, -0.147218, -0.757536] # depth of line center (normalized flux)\nws = np.ones_like(ds) # dimensionless weights\ndx = 0.01 # A\nxs = np.arange(4998. 
+ 0.5 * dx, 5002., dx) # A\n \nN = 16\ndata, ivars, true_rvs = make_data(N, xs, ds, ms, sigs)\ndata = np.log(data)\ndata_xs = np.log(xs)\n\ndef add_tellurics(xs, all_data, true_rvs, lambdas, strengths, dx):\n N, M = np.shape(all_data)\n tellurics = np.ones_like(xs)\n for ll, s in zip(lambdas, strengths):\n tellurics *= np.exp(-s * oned_gaussian(xs, ll, dx))\n plt.plot(xs, tellurics)\n all_data *= np.repeat([tellurics,],N,axis=0)\n return all_data\n\nn_tellurics = 16 # magic\ntelluric_sig = 3.e-6 # magic\ntelluric_xs = np.random.uniform(data_xs[0], data_xs[-1], n_tellurics)\nstrengths = 0.01 * np.random.uniform(size = n_tellurics) ** 2. # magic numbers\nall_data = np.exp(data)\nall_data = add_tellurics(data_xs, all_data, true_rvs, telluric_xs, strengths, telluric_sig)\ndata = np.log(all_data)", "First step: generate some approximate models of the star and the tellurics using first-guess RVs.", "def make_template(all_data, rvs, xs, dx):\n \"\"\"\n `all_data`: `[N, M]` array of pixels\n `rvs`: `[N]` array of RVs\n `xs`: `[M]` array of wavelength values\n `dx`: linear spacing desired for template wavelength grid (A)\n \"\"\"\n (N,M) = np.shape(all_data)\n all_xs = np.empty_like(all_data)\n for i in range(N):\n all_xs[i,:] = xs + np.log(doppler(rvs[i])) # shift to rest frame\n all_data, all_xs = np.ravel(all_data), np.ravel(all_xs)\n tiny = 10.\n template_xs = np.arange(min(all_xs)-tiny*dx, max(all_xs)+tiny*dx, dx)\n template_ys = np.nan + np.zeros_like(template_xs)\n for i,t in enumerate(template_xs):\n ind = (all_xs >= t-dx/2.) 
& (all_xs < t+dx/2.)\n if np.sum(ind) > 0:\n template_ys[i] = np.nanmedian(all_data[ind])\n ind_nan = np.isnan(template_ys)\n template_ys[ind_nan] = np.interp(template_xs[ind_nan], template_xs[~ind_nan], template_ys[~ind_nan])\n return template_xs, template_ys\n\ndef subtract_template(data_xs, data, model_xs_t, model_ys_t, rvs_t):\n (N,M) = np.shape(data)\n data_sub = np.copy(data)\n for n,v in enumerate(rvs_t):\n model_ys_t_shifted = Pdot(data_xs, model_xs_t, model_ys_t, v)\n data_sub[n,:] -= np.ravel(model_ys_t_shifted)\n if n == 0:\n plt.plot(data_xs, data[n,:], color='k')\n plt.plot(data_xs, data_sub[n,:], color='blue')\n plt.plot(data_xs, np.ravel(model_ys_t_shifted), color='red')\n return data_sub\n\nx0_star = true_rvs + np.random.normal(0., 100., N)\nx0_t = np.zeros(N)\nmodel_xs_star, model_ys_star = make_template(data, x0_star, data_xs, np.log(6000.01) - np.log(6000.))\nmodel_xs_t, model_ys_t = make_template(data, x0_t, data_xs, np.log(6000.01) - np.log(6000.))\n\ndef chisq_star(rvs_star, rvs_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):\n pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)\n pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)\n pd = pd_star + pd_t\n return np.sum((data - pd)**2 * ivars)\n\ndef chisq_t(rvs_t, rvs_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):\n pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)\n pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)\n pd = pd_star + pd_t\n return np.sum((data - pd)**2 * ivars)\n\n\nsoln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),\n method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']\nsoln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),\n method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']\n\nx0_star = 
soln_star\nx0_t = soln_t\nprint(np.std(x0_star - true_rvs))\nprint(np.std(x0_t))\n\ndata_star = subtract_template(data_xs, data, model_xs_t, model_ys_t, x0_t)\n\ndata_t = subtract_template(data_xs, data, model_xs_star, model_ys_star, x0_star)\n\nplt.plot(data_xs, data[0,:], color='black')\nplt.plot(model_xs_star, model_ys_star, color='red')\nplt.plot(model_xs_t, model_ys_t, color='green')\n\nplt.plot(data_xs, data_star[0,:], color='blue')\nplt.plot(data_xs, data_t[0,:], color='red')\n\ntrue_star = np.log(make_data(N, xs, ds, ms, sigs)[0])\nplt.plot(data_xs, true_star[0,:], color='k')\nplt.plot(data_xs, data_star[0,:], color='blue')\nplt.plot(model_xs_star, model_ys_star, color='red')", "Next: use the template-subtracted data to get better RVs for star and template", "soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),\n method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']\nsoln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),\n method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']\n\n\nprint(np.std(soln_star - true_rvs))\nprint(np.std(soln_t))
model_xs_t, model_ys_t),\n method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']\n\n print(\"iter {0}: star std = {1:.2f}, telluric std = {2:.2f}\".format(n, np.std(soln_star - true_rvs), np.std(soln_t)))\n\ntrue_star = np.log(make_data(N, xs, ds, ms, sigs)[0])\nplt.plot(data_xs, true_star[0,:], color='k')\nplt.plot(data_xs, data_star[0,:], color='blue')\nplt.plot(data_xs, data_t[0,:], color='red')\n\nplt.plot(data_xs, data[0,:], color='k')\nplt.plot(data_xs, data_star[0,:] + data_t[0,:], color='red')\n\nplt.plot(data_xs, data[10,:], color='k')\nplt.plot(data_xs, data_star[10,:] + data_t[10,:], color='red')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nansencenter/nansat
docs/source/notebooks/nansat-introduction.ipynb
gpl-3.0
[ "Nansat: First Steps\nOverview\nThe NANSAT package contains several classes:\n\nNansat - open and read satellite data\nDomain - define grid for the region of interest\nFigure - create raster images (PNG, TIF)\nNSR - define spatial reference (SR)\n\nCopy sample data", "import os\nimport shutil\nimport nansat\nidir = os.path.join(os.path.dirname(nansat.__file__), 'tests', 'data/')", "Open file with Nansat", "import matplotlib.pyplot as plt\n%matplotlib inline\n\n\nfrom nansat import Nansat\nn = Nansat(idir+'gcps.tif')", "Read information ABOUT the data (METADATA)", "print(n)", "Read the actual DATA", "b1 = n[1]", "Check what kind of data we have", "%whos\nplt.imshow(b1);plt.colorbar()\nplt.show()", "Find where the image is taken", "n.write_figure('map.png', pltshow=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
daniel-koehn/Theory-of-seismic-waves-II
05_2D_acoustic_FD_modelling/lecture_notebooks/1_From_1D_to_2D_acoustic_FD_modelling.ipynb
gpl-3.0
[ "Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman) which is supplementary material to the book Computational Seismology: A Practical Introduction, additional modifications by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi", "# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../../style/custom.css'\nHTML(open(css_file, \"r\").read())", "From 1D to 2D acoustic FD modelling\nThe 1D acoustic wave equation is very useful to introduce the general concept and problems related to FD modelling. However, for realistic modelling and seismic imaging/inversion applications we have to solve at least the 2D acoustic wave equation.\nIn the class we implement a 2D acoustic FD modelling code based on the 1D code. I strongly recommend that you do this step by yourself starting from this 1D code. \nFinite difference solution of 2D acoustic wave equation\nAs derived in this and this lecture, the acoustic wave equation in 2D with constant density is\n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial t^2} \\ = \\ vp(x,z)^2 \\biggl(\\frac{\\partial^2 p(x,z,t)}{\\partial x^2}+\\frac{\\partial^2 p(x,z,t)}{\\partial z^2}\\biggr) + s(x,z,t) \\nonumber\n\\end{equation}\nwith pressure $p$, acoustic velocity $vp$ and source term $s$. Both second derivatives can be approximated by a 3-point difference formula. 
For example, for the time derivative, we get:\n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial t^2} \\ \\approx \\ \\frac{p(x,z,t+dt) - 2 p(x,z,t) + p(x,z,t-dt)}{dt^2}, \\nonumber\n\\end{equation}\nand equivalently for the spatial derivatives: \n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial x^2} \\ \\approx \\ \\frac{p(x+dx,z,t) - 2 p(x,z,t) + p(x-dx,z,t)}{dx^2}, \\nonumber\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial z^2} \\ \\approx \\ \\frac{p(x,z+dz,t) - 2 p(x,z,t) + p(x,z-dz,t)}{dz^2}, \\nonumber\n\\end{equation}\nInjecting these approximations into the wave equation allows us to formulate the pressure p(x,z) for the time step $t+dt$ (the future) as a function of the pressure at time $t$ (now) and $t-dt$ (the past). This is called an explicit scheme allowing the $extrapolation$ of the space-dependent field into the future only looking at the nearest neighbourhood.\nIn the next step, we discretize the P-wave velocity and pressure wavefield at the discrete spatial grid points \n\\begin{align}\nx &= idx\\nonumber\\\nz &= jdz\\nonumber\\\n\\end{align}\nwith $i = 0, 1, 2, ..., nx$, $j = 0, 1, 2, ..., nz$ on a 2D Cartesian grid.\n<img src=\"../images/2D-grid_cart_ac.png\" width=\"75%\">\nUsing the discrete time steps\n\\begin{align}\nt &= n*dt\\nonumber\n\\end{align}\nwith $n = 0, 1, 2, ..., nt$ and time step $dt$, we can replace the time-dependent part (upper index time, lower indices space) by\n\\begin{equation}\n \\frac{p_{i,j}^{n+1} - 2 p_{i,j}^n + p_{i,j}^{n-1}}{\\mathrm{d}t^2} \\ = \\ vp_{i,j}^2 \\biggl( \\frac{\\partial^2 p}{\\partial x^2} + \\frac{\\partial^2 p}{\\partial z^2}\\biggr) \\ + s_{i,j}^n. 
\\nonumber\n\\end{equation}\nSolving for $p_{i,j}^{n+1}$ leads to the extrapolation scheme:\n\\begin{equation}\np_{i,j}^{n+1} \\ = \\ vp_{i,j}^2 \\mathrm{d}t^2 \\left( \\frac{\\partial^2 p}{\\partial x^2} + \\frac{\\partial^2 p}{\\partial z^2} \\right) + 2p_{i,j}^n - p_{i,j}^{n-1} + \\mathrm{d}t^2 s_{i,j}^n.\n\\end{equation}\nThe spatial derivatives are determined by \n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial x^2} \\ \\approx \\ \\frac{p_{i+1,j}^{n} - 2 p_{i,j}^n + p_{i-1,j}^{n}}{\\mathrm{d}x^2} \\nonumber\n\\end{equation}\nand\n\\begin{equation}\n\\frac{\\partial^2 p(x,z,t)}{\\partial z^2} \\ \\approx \\ \\frac{p_{i,j+1}^{n} - 2 p_{i,j}^n + p_{i,j-1}^{n}}{\\mathrm{d}z^2}. \\nonumber\n\\end{equation}\nEq. (1) is the essential core of the 2D FD modelling code. Because we derived analytical solutions for wave propagation in a homogeneous medium, we should test our first code implementation for a similar medium, by setting\n\\begin{equation}\nvp_{i,j} = vp0\\notag\n\\end{equation}\nat each spatial grid point $i = 0, 1, 2, ..., nx$; $j = 0, 1, 2, ..., nz$, in order to compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is \n\\begin{equation}\np_{i,j}^0 = 0, \\nonumber\n\\end{equation}\nso the modelling starts with zero pressure amplitude at each spatial grid point $i, j$. As boundary conditions, we assume \n\\begin{align}\np_{0,j}^n = 0, \\nonumber\\\np_{nx,j}^n = 0, \\nonumber\\\np_{i,0}^n = 0, \\nonumber\\\np_{i,nz}^n = 0, \\nonumber\n\\end{align}\nfor all time steps n. This Dirichlet boundary condition leads to artificial boundary reflections which would obviously not describe a homogeneous medium. 
For now, we simply extend the model, so that boundary reflections are not recorded at the receiver positions.\nLet's implement it ...", "# Import Libraries \n# ----------------\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\n\n# Ignore Warning Messages\n# -----------------------\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# Definition of modelling parameters\n# ----------------------------------\nxmax = 500 # maximum spatial extension of the 1D model (m)\ndx = 1.0 # grid point distance in x-direction\n\ntmax = 0.502 # maximum recording time of the seismogram (s)\ndt = 0.0010 # time step\n\nvp0 = 580. # P-wave speed in medium (m/s)\n\n# acquisition geometry\nxr = 330.0 # receiver position (m)\nxsrc = 250.0 # source position (m)\n\nf0 = 40. # dominant frequency of the source (Hz)\nt0 = 4. / f0 # source time shift (s)", "Comparison of 2D finite difference with analytical solution\nIn the function below we solve the homogeneous 2D acoustic wave equation by the 3-point spatial/temporal difference operator and compare the numerical results with the analytical solution: \n\\begin{equation}\nG_{analy}(x,z,t) = G_{2D} * S \\nonumber \n\\end{equation}\nwith the 2D Green's function:\n\\begin{equation}\nG_{2D}(x,z,t) = \\dfrac{1}{2\\pi V_{p0}^2}\\dfrac{H\\biggl((t-t_s)-\\dfrac{|r|}{V_{p0}}\\biggr)}{\\sqrt{(t-t_s)^2-\\dfrac{r^2}{V_{p0}^2}}}, \\nonumber \n\\end{equation}\nwhere $H$ denotes the Heaviside function, $r = \\sqrt{(x-x_s)^2+(z-z_s)^2}$ the source-receiver distance (offset) and $S$ the source wavelet.\nTo play a little bit more with the modelling parameters, I restricted the input parameters to dt and dx. 
The number of spatial grid points and time steps, as well as the discrete source and receiver positions are estimated within this function.", "# 1D Wave Propagation (Finite Difference Solution) \n# ------------------------------------------------\ndef FD_1D_acoustic(dt,dx):\n \n nx = (int)(xmax/dx) # number of grid points in x-direction\n print('nx = ',nx)\n \n nt = (int)(tmax/dt) # maximum number of time steps \n print('nt = ',nt)\n \n ir = (int)(xr/dx) # receiver location in grid in x-direction \n isrc = (int)(xsrc/dx) # source location in grid in x-direction\n\n # Source time function (Gaussian)\n # -------------------------------\n src = np.zeros(nt + 1)\n time = np.linspace(0 * dt, nt * dt, nt)\n\n # 1st derivative of a Gaussian\n src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))\n\n # Analytical solution\n # -------------------\n G = time * 0.\n\n # Initialize coordinates\n # ----------------------\n x = np.arange(nx)\n x = x * dx # coordinate in x-direction\n\n for it in range(nt): # Calculate Green's function (Heaviside function)\n if (time[it] - np.abs(x[ir] - x[isrc]) / vp0) >= 0:\n G[it] = 1. 
/ (2 * vp0)\n Gc = np.convolve(G, src * dt)\n Gc = Gc[0:nt]\n lim = Gc.max() # get limit value from the maximum amplitude\n \n # Initialize empty pressure arrays\n # --------------------------------\n p = np.zeros(nx) # p at time n (now)\n pold = np.zeros(nx) # p at time n-1 (past)\n pnew = np.zeros(nx) # p at time n+1 (present)\n d2px = np.zeros(nx) # 2nd space derivative of p\n\n # Initialize model (assume homogeneous model)\n # -------------------------------------------\n vp = np.zeros(nx)\n vp = vp + vp0 # initialize wave velocity in model\n\n # Initialize empty seismogram\n # ---------------------------\n seis = np.zeros(nt) \n \n # Calculate Partial Derivatives\n # -----------------------------\n for it in range(nt):\n \n # FD approximation of spatial derivative by 3 point operator\n for i in range(1, nx - 1):\n d2px[i] = (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2\n\n # Time Extrapolation\n # ------------------\n pnew = 2 * p - pold + vp ** 2 * dt ** 2 * d2px\n\n # Add Source Term at isrc\n # -----------------------\n # Absolute pressure w.r.t analytical solution\n pnew[isrc] = pnew[isrc] + src[it] / dx * dt ** 2\n \n # Remap Time Levels\n # -----------------\n pold, p = p, pnew\n \n # Output of Seismogram\n # -----------------\n seis[it] = p[ir] \n \n # Compare FD Seismogram with analytical solution\n # ---------------------------------------------- \n # Define figure size\n rcParams['figure.figsize'] = 12, 5\n plt.plot(time, seis, 'b-',lw=3,label=\"FD solution\") # plot FD seismogram\n Analy_seis = plt.plot(time,Gc,'r--',lw=3,label=\"Analytical solution\") # plot analytical solution\n plt.xlim(time[0], time[-1])\n plt.ylim(-lim, lim)\n plt.title('Seismogram')\n plt.xlabel('Time (s)')\n plt.ylabel('Amplitude')\n plt.legend()\n plt.grid()\n plt.show() \n\ndx = 1.0 # grid point distance in x-direction (m)\ndt = 0.0010 # time step (s)\nFD_1D_acoustic(dt,dx)", "What we learned:\n\nHow to implement a 2D acoustic FD modelling code based on an existing 1D code" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xboard/xboard.github.io
ipynb/IDH-Longevity.ipynb
mpl-2.0
[ "IDH", "%matplotlib inline\nimport pandas as pd\nimport requests as req\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom scipy.stats import ttest_ind, ttest_rel\nfrom scipy.stats import gaussian_kde\nfrom statsmodels.formula.api import ols, mixedlm, gee\nfrom statsmodels.stats.outliers_influence import OLSInfluence\nfrom statsmodels.regression.linear_model import OLSResults\nfrom patsy import dmatrix\n\nnp.set_printoptions(precision=3)", "Loading IDH-M data from Wikipedia\nSources\nStates: http://pt.wikipedia.org/wiki/Lista_de_unidades_federativas_do_Brasil_por_IDH", "idhm_df = pd.read_csv(\"../data/brazil_states_idhl_2000_2010.csv\", index_col=0)\nidhm_df", "Analysis", "idhm_df.describe()\n\nf = plt.figure(14)\nidhm_df[[\"I2000\",\"I2010\",\"Ratio\"]].hist(bins=10)\nplt.figure()\nsns.kdeplot(idhm_df[\"I2000\"], shade=True);\nsns.kdeplot(idhm_df[\"I2010\"], shade=True);\nsns.kdeplot(idhm_df[\"Ratio\"], shade=True);", "Testing the hypothesis\nIs the mean difference between the 2000 and 2010 IDHs statistically significant?", "ttest_rel(idhm_df['I2000'], idhm_df['I2010'])\n\nimport scipy \nimport scikits.bootstrap as bootstrap\n \n# compute 95% confidence intervals around the mean \nCIs00 = bootstrap.ci(data=idhm_df[\"I2000\"]) \nCIs10 = bootstrap.ci(data=idhm_df[\"I2010\"])\nCIsR = bootstrap.ci(data=idhm_df[\"Ratio\"])\n\nprint(\"IDHM 2000 mean 95% confidence interval. Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIs00)))\nprint(\"IDHM 2010 mean 95% confidence interval. Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIs10)))\nprint(\"IDHM ratio mean 95% confidence interval. Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIsR)))\n\nCIs00 = bootstrap.ci(data=idhm_df[\"I2000\"], statfunction=scipy.median) \nCIs10 = bootstrap.ci(data=idhm_df[\"I2010\"], statfunction=scipy.median)\nCIsR = bootstrap.ci(data=idhm_df[\"Ratio\"], statfunction=scipy.median)\n\nprint(\"IDHM 2000 median 95% confidence interval. 
Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIs00)))\nprint(\"IDHM 2010 median 95% confidence interval. Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIs10)))\nprint(\"IDHM ratio median 95% confidence interval. Low={0:.3f}\\tHigh={1:.3f}\".format(*tuple(CIsR)))", "The results of several tests, at a 5% significance level, provide strong evidence that it is.\nBuilding the percentage impact of each party's administration in each state of the Federation.", "state_parties_df = pd.read_csv(\"../data/brazil_states_parties_2000-2010.csv\", index_col=0)\n\n\nstate_parties_df\n\nstate_regions_df = pd.read_csv(\"../data/brazil_states_regions.csv\", index_col=0)\nstate_regions_df\n\ndf = idhm_df.merge(state_parties_df, on=\"Estado\")\ndf = df.merge(state_regions_df, on=\"Estado\")\ndf\n\nsns.factorplot(\"idh_level_2000\",\"Ratio\",data=df, kind=\"box\")\n\nsns.factorplot(\"Regiao\",\"Ratio\",data=df, kind=\"box\")\n\nsns.set()\nsns.pairplot(df, hue=\"idh_level_2000\", size=2.5)\n\nsns.coefplot(\"Ratio ~ PT + PSDB + Outros + C(idh_level_2000) - 1\", df, palette=\"Set1\");\n\nsns.coefplot(\"Ratio ~ Outros==0 + Outros - 1\", df, palette=\"Set1\");\n\nsns.set(style=\"whitegrid\")\nsns.residplot(df.Outros,df.Ratio, color=\"navy\", lowess=True, order=1)\n\nsns.coefplot(\"Ratio ~ PT==0 + PT - 1\", df, palette=\"Set1\");\n\nsns.set(style=\"whitegrid\")\nsns.residplot(df[df.PT>0].PT, df[df.PT>0].Ratio, color=\"navy\", order=1)\n\nsns.coefplot(\"Ratio ~ PSDB==0 + PSDB + np.multiply(PSDB, PSDB) - 1\", df, palette=\"Set1\");\n\nsns.set(style=\"whitegrid\")\nsns.residplot(df[df.PSDB>0].PSDB, df[df.PSDB>0].Ratio, color=\"navy\", lowess=True, order=2)", "Impact by party or by 2000 IDH-M level", "sns.coefplot(\"Ratio ~ PT + PSDB + Outros + C(idh_level_2000) - 1\", df, palette=\"Set1\");\nsns.coefplot(\"Ratio ~ PT + PSDB + C(idh_level_2000)\", df, palette=\"Set1\");\nsns.coefplot(\"Ratio ~ PT + Outros + C(idh_level_2000)\", df, palette=\"Set1\");\nsns.coefplot(\"Ratio ~ PSDB + 
Outros + C(idh_level_2000)\", df, palette=\"Set1\");\n\nformula = \"Ratio ~ PT + PSDB + C(idh_level_2000) + C(Regiao)\"\nmodel = ols(formula, df).fit()\nmodel.summary()", "No statistically significant difference between the parties could be observed.\nWhich states show a significant difference?\nComparing 2010 with 2000", "sns.lmplot(\"I2000\", \"I2010\", data=df, legend=True, size=10, n_boot=10000, ci=95)\n\nsns.jointplot(\"I2000\", \"I2010\", data=df, kind='resid',color=sns.color_palette()[2], size=10)\n\nsns.coefplot(\"I2010 ~ I2000\", data=df, intercept=True)\nsns.coefplot(\"I2010 ~ I2000\", data=df, groupby=\"idh_level_2000\", intercept=True)\n\nsns.lmplot(\"I2000\", \"I2010\", data=df, hue=\"idh_level_2000\", col=\"idh_level_2000\", legend=True, size=6, n_boot=10000, ci=99)\nsns.lmplot(\"I2000\", \"I2010\", data=df, hue=\"Regiao\", col=\"Regiao\", col_wrap=2, legend=True, size=6, n_boot=10000, ci=99)\n\nmd = ols(\"I2010 ~ I2000 + C(Regiao)\", df).fit()\nprint(md.summary())\n\nrrr = md.get_robustcov_results()\nrrp = rrr.outlier_test(\"fdr_bh\", 0.1)\nidx = rrp[rrp[\"fdr_bh(p)\"] <= 0.1].index\nprint(\"States outside the mean:\\n\",df.ix[idx.values])\nrrp[rrp[\"fdr_bh(p)\"] <= 0.1]", "GEE", "import statsmodels.api as sm\nmd = gee(\"Ratio ~ PT + PSDB \", df.idh_level_2000, df, cov_struct=sm.cov_struct.Exchangeable()) \nmdf = md.fit() \nprint(mdf.summary())\nprint(mdf.cov_struct.summary())\n\nplt.plot(mdf.fittedvalues, mdf.resid, 'o', alpha=0.5)\nplt.xlabel(\"Fitted values\", size=17)\nplt.ylabel(\"Residuals\", size=17)\n\nsns.jointplot(mdf.fittedvalues, mdf.resid, size=10, kind=\"kde\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alvaroing12/CADL
session-1/lecture-1.ipynb
apache-2.0
[ "Session 1: Introduction to Tensorflow\n<p class='lead'>\nCreative Applications of Deep Learning with Tensorflow<br />\nParag K. Mital<br />\nKadenze, Inc.<br />\n</p>\n\n<a name=\"learning-goals\"></a>\nLearning Goals\n\nLearn the basic idea behind machine learning: learning from data and discovering representations\nLearn how to preprocess a dataset using its mean and standard deviation\nLearn the basic components of a Tensorflow Graph\n\nTable of Contents\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nIntroduction\nPromo\nSession Overview\n\n\nLearning From Data\nDeep Learning vs. Machine Learning\nInvariances\nScope of Learning\nExisting datasets\n\n\nPreprocessing Data\nUnderstanding Image Shapes\nThe Batch Dimension\nMean/Deviation of Images\nDataset Preprocessing\nHistograms\nHistogram Equalization\n\n\nTensorflow Basics\nVariables\nTensors\nGraphs\nOperations\nTensor\nSessions\nTensor Shapes\nMany Operations\n\n\nConvolution\nCreating a 2-D Gaussian Kernel\nConvolving an Image with a Gaussian\nConvolve/Filter an image using a Gaussian Kernel\nModulating the Gaussian with a Sine Wave to create Gabor Kernel\nManipulating an image with this Gabor\n\n\nHomework\nNext Session\nReading Material\n\n<!-- /MarkdownTOC -->\n\n<a name=\"introduction\"></a>\nIntroduction\nThis course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. 
We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.\n<a name=\"promo\"></a>\nPromo\nDeep learning has emerged at the forefront of nearly every major computational breakthrough in the last 4 years. It is no wonder that it is already in many of the products we use today, from Netflix or Amazon's personalized recommendations; to the filters that block our spam; to ways that we interact with personal assistants like Apple's Siri or Microsoft Cortana, even to the very ways our personal health is monitored. And sure, deep learning algorithms are capable of some amazing things. But it's not just science applications that are benefiting from this research.\nArtists too are starting to explore how Deep Learning can be used in their own practice. Photographers are starting to explore different ways of exploring visual media. Generative artists are writing algorithms to create entirely new aesthetics. Filmmakers are exploring virtual worlds ripe with potential for procedural content.\nIn this course, we're going straight to the state of the art. And we're going to learn it all. We'll see how to make an algorithm paint an image, or hallucinate objects in a photograph. 
We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets to using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of other images. We'll even see how to teach a computer to read and synthesize new phrases.\nBut we won't just be using other people's code to do all of this. We're going to develop everything ourselves using Tensorflow and I'm going to show you how to do it. This course isn't just for artists nor is it just for programmers. It's for people that want to learn more about how to apply deep learning with a hands-on approach, straight into the Python console, and learn what it all means through creative thinking and interaction.\nI'm Parag Mital, artist, researcher and Director of Machine Intelligence at Kadenze. For the last 10 years, I've been exploring creative uses of computational models making use of machine and deep learning, film datasets, eye-tracking, EEG, and fMRI recordings exploring applications such as generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora.\nBut this course isn't just about me. It's about bringing all of you together. It's about bringing together different backgrounds, different practices, and sticking all of you in the same virtual room, giving you access to state-of-the-art methods in deep learning, some really amazing stuff, and then letting you go wild on the Kadenze platform. We've been working very hard to build a platform for learning that rivals anything else out there for learning this stuff.\nYou'll be able to share your content, upload videos, comment and exchange code and ideas, all led by the course I've developed for us. But before we get there, we're going to have to cover a lot of groundwork. 
The basics that we'll use to develop state-of-the-art algorithms in deep learning. And that's really so we can better interrogate what's possible, ask the bigger questions, and be able to explore just where all this is heading in more depth. With all of that in mind, let's get started!\nJoin me as we learn all about Creative Applications of Deep Learning with Tensorflow.\n<a name=\"session-overview\"></a>\nSession Overview\nWe're first going to talk about Deep Learning, what it is, and how it relates to other branches of learning. We'll then talk about the major components of Deep Learning, the importance of datasets, and the nature of representation, which is at the heart of deep learning.\nIf you've never used Python before, we'll be jumping straight into using libraries like numpy, matplotlib, and scipy. Before starting this session, please check the resources section for a notebook introducing some fundamentals of Python programming. When you feel comfortable with loading images from a directory, resizing, cropping, how to change an image datatype from unsigned int to float32, and what the range of each data type should be, then come back here and pick up where you left off. We'll then get our hands dirty with Tensorflow, Google's library for machine intelligence. We'll learn the basic components of creating a computational graph with Tensorflow, including how to convolve an image to detect interesting features at different scales. This groundwork will finally lead us towards automatically learning our handcrafted features/algorithms.\n<a name=\"learning-from-data\"></a>\nLearning From Data\n<a name=\"deep-learning-vs-machine-learning\"></a>\nDeep Learning vs. Machine Learning\nSo what is this word I keep using, Deep Learning? And how is it different from Machine Learning? Well, Deep Learning is a type of Machine Learning algorithm that uses Neural Networks to learn. The type of learning is \"Deep\" because it is composed of many layers of Neural Networks. 
In this course, we're really going to focus on supervised and unsupervised Deep Learning. But there are many other incredibly valuable branches of Machine Learning such as Reinforcement Learning, Dictionary Learning, Probabilistic Graphical Models and Bayesian Methods (Bishop), or Genetic and Evolutionary Algorithms. And any of these branches could certainly even be combined with each other or with Deep Networks as well. We won't really be able to get into these other branches of learning in this course. Instead, we'll focus more on building \"networks\", short for neural networks, and how they can do some really amazing things. Before we can get into all that, we're going to need to understand a bit more about data and its importance in deep learning.\n<a name=\"invariances\"></a>\nInvariances\nDeep Learning requires data. A lot of it. It's really one of the major reasons as to why Deep Learning has been so successful. Having many examples of the thing we are trying to learn is the first thing you'll need before even thinking about Deep Learning. Often, it is the biggest blocker to learning about something in the world. Even as a child, we need a lot of experience with something before we begin to understand it. I find I spend most of my time just finding the right data for a network to learn. Getting it from various sources, making sure it all looks right and is labeled. That is a lot of work. The rest of it is easy as we'll see by the end of this course.\nLet's say we would like to build a network that is capable of looking at an image and saying what object is in the image. There are so many possible ways that an object could be manifested in an image. It's rare to ever see just a single object in isolation. In order to teach a computer about an object, we would have to be able to give it an image of an object in every possible way that it could exist.\nWe generally call these ways of existing \"invariances\". 
That just means we are trying not to vary based on some factor. We are invariant to it. For instance, an object could appear to one side of an image, or another. We call that translation invariance. Or it could be from one angle or another. That's called rotation invariance. Or it could be closer to the camera, or farther. That would be scale invariance. There are plenty of other types of invariances, such as perspective or brightness or exposure, to give a few more examples for photographic images.\n<a name=\"scope-of-learning\"></a>\nScope of Learning\nWith Deep Learning, you will always need a dataset that will teach the algorithm about the world. But you aren't really teaching it everything. You are only teaching it what is in your dataset! That is a very important distinction. If I show my algorithm only faces of people which are always placed in the center of an image, it will not be able to understand anything about faces that are not in the center of the image! Well at least that's mostly true.\nThat's not to say that a network is incapable of transferring what it has learned to learn new concepts more easily. Or to learn things that might be necessary for it to learn other representations. For instance, a network that has been trained to learn about birds probably knows a good bit about trees, branches, and other bird-like hangouts, depending on the dataset. But, in general, we are limited to learning what our dataset has access to.\nSo if you're thinking about creating a dataset, you're going to have to think about what it is that you want to teach your network. What sort of images will it see? What representations do you think your network could learn given the data you've shown it?\nOne of the major contributions to the success of Deep Learning algorithms is the amount of data out there. Datasets have grown from orders of hundreds to thousands to many millions. 
The more data you have, the more capable your network will be at determining whatever its objective is.\n<a name=\"existing-datasets\"></a>\nExisting datasets\nWith that in mind, let's try to find a dataset that we can work with. There are a ton of datasets out there that current machine learning researchers use. For instance, if I do a quick Google search for Deep Learning Datasets, I can see a link on deeplearning.net listing a few interesting ones, e.g. http://deeplearning.net/datasets/, including MNIST, CalTech, CelebNet, LFW, CIFAR, MS Coco, Illustration2Vec, and there are a ton more. And these are primarily image based. But if you are interested in finding more, just do a quick search or drop a quick message on the forums if you're looking for something in particular.\n\nMNIST\nCalTech\nCelebNet\nImageNet: http://www.image-net.org/\nLFW\nCIFAR10\nCIFAR100\nMS Coco: http://mscoco.org/home/\nWLFDB: http://wlfdb.stevenhoi.com/\nFlickr 8k: http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html\nFlickr 30k\n\n<a name=\"preprocessing-data\"></a>\nPreprocessing Data\nIn this section, we're going to learn a bit about working with an image-based dataset. We'll see how image dimensions are formatted as a single image and how they're represented as a collection using a 4-d array. We'll then look at how we can perform dataset normalization. If you're comfortable with all of this, please feel free to skip to the next video.\nWe're first going to load some libraries that we'll be making use of.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')", "I'll be using a popular image dataset for faces called the CelebFaces dataset. 
I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset.", "from libs import utils\n# utils.<tab>\nfiles = utils.get_celeb_files()", "Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on:", "img = plt.imread(files[50])\n# img.<tab>\nprint(img)", "When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this:", "# If nothing is drawn and you are using notebook, try uncommenting the next line:\n#%matplotlib inline\nplt.imshow(img)", "<a name=\"understanding-image-shapes\"></a>\nUnderstanding Image Shapes\nLet's break this data down a bit more. We can see the dimensions of the data using the shape accessor:", "img.shape\n# (218, 178, 3)", "This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.", "plt.imshow(img[:, :, 0], cmap='gray')\nplt.imshow(img[:, :, 1], cmap='gray')\nplt.imshow(img[:, :, 2], cmap='gray')", "We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.\nLet's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 100 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. 
For now, let's stick with this:", "imgs = utils.get_celeb_imgs()", "We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets:", "plt.imshow(imgs[0])", "<a name=\"the-batch-dimension\"></a>\nThe Batch Dimension\nRemember that an image has a shape describing the height, width, channels:", "imgs[0].shape", "It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels.\nN x H x W x C\nA Color image should have 3 color channels, RGB.\nWe can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images.", "data = np.array(imgs)\ndata.shape", "This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please refer to the online resources for the notebook I've provided which shows you exactly how to take a bunch of images of different sizes, and crop and resize them the best we can to make them all the same size.\n<a name=\"meandeviation-of-images\"></a>\nMean/Deviation of Images\nNow that we have our data in a single numpy variable, we can do a lot of cool stuff. Let's look at the mean of the batch channel:", "mean_img = np.mean(data, axis=0)\nplt.imshow(mean_img.astype(np.uint8))", "This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like.
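A quick numpy sanity check of that batch-averaging step (with a random stand-in batch, since the CelebFaces loader is specific to the course materials — the names `fake_batch` and `fake_mean` are illustrative):

```python
import numpy as np

# Hypothetical stand-in for the CelebFaces batch: 100 random 218 x 178 RGB "images".
fake_batch = np.random.randint(0, 256, size=(100, 218, 178, 3)).astype(np.float32)

# Averaging over axis 0 (the batch dimension) collapses N images into one
# mean image with the same H x W x C shape as any single image in the batch.
fake_mean = np.mean(fake_batch, axis=0)
print(fake_mean.shape)  # (218, 178, 3)
```

The key point is the `axis=0` argument: without it, `np.mean` would average every pixel of every image down to a single scalar.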
There is one other very useful statistic which we can look at very easily:", "std_img = np.std(data, axis=0)\nplt.imshow(std_img.astype(np.uint8))", "So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.\nWe're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean:", "plt.imshow(np.mean(std_img, axis=2).astype(np.uint8))", "This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.\n<a name=\"dataset-preprocessing\"></a>\nDataset Preprocessing\nThink back to when I described what we're trying to accomplish when we build a model for machine learning? We're trying to build a model that understands invariances. We need our model to be able to express all of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by \"preprocessing\" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is \"normalization\".\n<a name=\"histograms\"></a>\nHistograms\nLet's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our batch x height x width x channels array into a 1 dimensional array. 
Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array.", "flattened = data.ravel()\nprint(data[:1])\nprint(flattened[:10])", "We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimension's order. So we'll have: [<font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>205</font>, <font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>206</font>, <font color='red'>253</font>, <font color='green'>240</font>, <font color='blue'>207</font>, ...]\nWe can visualize what the \"distribution\", or range and frequency of possible values are. This is a very useful thing to know. It tells us whether our data is predictable or not.", "plt.hist(flattened.ravel(), 255)", "The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.\n<a name=\"histogram-equalization\"></a>\nHistogram Equalization\nThe mean of our dataset looks like this:", "plt.hist(mean_img.ravel(), 255)", "When we subtract our mean image from an image, we remove all of this information from it.
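That subtraction can be checked numerically on a small random stand-in batch (the `toy_*` names here are illustrative, not from the notebook): after subtracting the per-pixel mean image, the batch as a whole is centered on zero.

```python
import numpy as np

rng = np.random.default_rng(0)
toy_batch = rng.integers(0, 256, size=(100, 8, 8, 3)).astype(np.float64)

toy_mean = toy_batch.mean(axis=0)   # per-pixel mean over the batch dimension
centered = toy_batch - toy_mean     # numpy broadcasts the subtraction over axis 0

# Original pixels sit in [0, 255]; after subtracting the mean image,
# the centered batch averages out to (numerically) zero.
print(abs(centered.mean()) < 1e-9)  # True
```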
And that means that the rest of the information is really what is important for describing what is unique about it.\nLet's try and compare the histogram before and after \"normalizing our data\":", "bins = 20\nfig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)\naxs[0].hist((data[0]).ravel(), bins)\naxs[0].set_title('img distribution')\naxs[1].hist((mean_img).ravel(), bins)\naxs[1].set_title('mean distribution')\naxs[2].hist((data[0] - mean_img).ravel(), bins)\naxs[2].set_title('(img - mean) distribution')", "What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset:", "fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)\naxs[0].hist((data[0] - mean_img).ravel(), bins)\naxs[0].set_title('(img - mean) distribution')\naxs[1].hist((std_img).ravel(), bins)\naxs[1].set_title('std deviation distribution')\naxs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins)\naxs[2].set_title('((img - mean) / std_dev) distribution')", "Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on:", "axs[2].set_xlim([-150, 150])\naxs[2].set_xlim([-100, 100])\naxs[2].set_xlim([-50, 50])\naxs[2].set_xlim([-10, 10])\naxs[2].set_xlim([-5, 5])", "What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. 
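The full mean/std normalization can also be verified numerically on synthetic data (a sketch with illustrative variable names, not the notebook's own): per pixel position, the normalized batch has mean 0 and standard deviation 1, which is exactly why the bulk of it lands between -3 and 3.

```python
import numpy as np

rng = np.random.default_rng(1)
toy_batch = rng.normal(loc=100.0, scale=40.0, size=(200, 16, 16, 3))

toy_mean = toy_batch.mean(axis=0)
toy_std = toy_batch.std(axis=0)
normalized = (toy_batch - toy_mean) / toy_std

# Mean ~0, std ~1, and almost everything within 3 standard deviations.
print(abs(normalized.mean()) < 1e-6)                  # True
print(abs(normalized.std() - 1.0) < 1e-3)             # True
print(float((np.abs(normalized) < 3).mean()) > 0.98)  # True
```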
This is the effect of normalizing our data: most of the data will be around 0, where some deviations of it will follow between -3 to 3.\nIf our data does not end up looking like this, then we should either (1): get much more data to calculate our mean/std deviation, or (2): either try another method of normalization, such as scaling the values between 0 to 1, or -1 to 1, or possibly not bother with normalization at all. There are other options that one could explore, including different types of normalization such as local contrast normalization for images or PCA based normalization but we won't have time to get into those in this course.\n<a name=\"tensorflow-basics\"></a>\nTensorflow Basics\nLet's now switch gears and start working with Google's Library for Numerical Computation, TensorFlow. This library can do most of the things we've done so far. However, it has a very different approach for doing so. And it can do a whole lot more cool stuff which we'll eventually get into. The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define things that we want to compute later using what's called a Graph. Everything in Tensorflow takes place in a computational graph and running and evaluating anything in the graph requires a Session. Let's take a look at how these both work and then we'll get into the benefits of why this is useful:\n<a name=\"variables\"></a>\nVariables\nWe're first going to import the tensorflow library:", "import tensorflow as tf", "Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function:", "x = np.linspace(-3.0, 3.0, 100)\n\n# Immediately, the result is given to us. 
An array of 100 numbers equally spaced from -3.0 to 3.0.\nprint(x)\n\n# We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values\nprint(x.shape)\n\n# and a `dtype`, in this case float64, or 64 bit floating point values.\nprint(x.dtype)", "<a name=\"tensors\"></a>\nTensors\nIn tensorflow, we could try to do the same thing using their linear space function:", "x = tf.linspace(-3.0, 3.0, 100)\nprint(x)", "Instead of a numpy.array, we are returned a tf.Tensor. The name of it is \"LinSpace:0\". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace.\nThink of tf.Tensors the same way as you would the numpy.array. It is described by its shape, in this case, only 1 dimension of 100 values. And it has a dtype, in this case, float32. But unlike the numpy.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a tf.Operation which has been already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned.\n<a name=\"graphs\"></a>\nGraphs\nLet's try and inspect the underlying graph. We can request the \"default\" graph where all of our operations have been added:", "g = tf.get_default_graph()", "<a name=\"operations\"></a>\nOperations\nAnd from this graph, we can get a list of all the operations that have been added, and print out their names:", "[op.name for op in g.get_operations()]", "So Tensorflow has named each of our operations to generally reflect what they are doing. 
There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.\n<a name=\"tensor\"></a>\nTensor\nWe can request the output of any operation, which is a tensor, by asking the graph for the tensor's name:", "g.get_tensor_by_name('LinSpace' + ':0')", "What I've done is asked for the tf.Tensor that comes from the operation \"LinSpace\". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.\n<a name=\"sessions\"></a>\nSessions\nIn order to actually compute anything in tensorflow, we need to create a tf.Session. The session is responsible for evaluating the tf.Graph. Let's see how this works:", "# We're first going to create a session:\nsess = tf.Session()\n\n# Now we tell our session to compute anything we've created in the tensorflow graph.\ncomputed_x = sess.run(x)\nprint(computed_x)\n\n# Alternatively, we could tell the previous Tensor to evaluate itself using this session:\ncomputed_x = x.eval(session=sess)\nprint(computed_x)\n\n# We can close the session after we're done like so:\nsess.close()", "We could also explicitly tell the session which graph we want to manage:", "sess = tf.Session(graph=g)\nsess.close()", "By default, it grabs the default graph. But we could have created a new graph like so:", "g2 = tf.Graph()", "And then used this graph only in our session.\nTo simplify things, since we'll be working in iPython's interactive console, we can create an tf.InteractiveSession:", "sess = tf.InteractiveSession()\nx.eval()", "Now we didn't have to explicitly tell the eval function about our session. 
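The define-then-run split isn't unique to tensorflow; a toy deferred-computation graph in plain Python (purely illustrative — nothing like tensorflow's actual internals) makes the separation between building a graph and evaluating it concrete:

```python
# A minimal deferred-computation sketch: nodes only record an operation,
# and nothing is computed until eval() walks the graph.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(self.op)

# Building the graph computes nothing yet...
a = Node("const", value=2.0)
b = Node("const", value=3.0)
c = Node("mul", inputs=(a, Node("add", inputs=(a, b))))

# ...evaluation happens only on request, like sess.run(x) or x.eval().
print(c.eval())  # 2.0 * (2.0 + 3.0) = 10.0
```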
We'll leave this session open for the rest of the lecture.\n<a name=\"tensor-shapes\"></a>\nTensor Shapes", "# We can find out the shape of a tensor like so:\nprint(x.get_shape())\n\n# %% Or in a more friendly format\nprint(x.get_shape().as_list())", "<a name=\"many-operations\"></a>\nMany Operations\nLet's try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes referred to as the bell curve or normal curve.", "# The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma.\nmean = 0.0\nsigma = 1.0\n\n# Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula.\nz = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /\n (2.0 * tf.pow(sigma, 2.0)))) *\n (1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))", "Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function:", "res = z.eval()\nplt.plot(res)\n# if nothing is drawn, and you are using ipython notebook, uncomment the next two lines:\n#%matplotlib inline\n#plt.plot(res)", "<a name=\"convolution\"></a>\nConvolution\n<a name=\"creating-a-2-d-gaussian-kernel\"></a>\nCreating a 2-D Gaussian Kernel\nLet's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts.
This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions.\nSo let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so:\n<pre>\n (X_rows, X_cols) x (Y_rows, Y_cols)\n | | | |\n | |___________| |\n | ^ |\n | inner dimensions |\n | must match |\n | |\n |__________________________|\n ^\n resulting dimensions\n of matrix multiplication\n</pre>\nBut our matrix is actually a vector, or a 1 dimensional matrix. That means its dimensions are N x 1. So to multiply them, we'd have:\n<pre>\n (N, 1) x (1, N)\n | | | |\n | |___________| |\n | ^ |\n | inner dimensions |\n | must match |\n | |\n |__________________________|\n ^\n resulting dimensions\n of matrix multiplication\n</pre>", "# Let's store the number of values in our Gaussian curve.\nksize = z.get_shape().as_list()[0]\n\n# Let's multiply the two to get a 2d gaussian\nz_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))\n\n# Execute the graph\nplt.imshow(z_2d.eval())", "<a name=\"convolving-an-image-with-a-gaussian\"></a>\nConvolving an Image with a Gaussian\nA very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it as a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is at every location we tell it to filter, it will average the image values around it based on what the kernel's values are. 
The Gaussian's kernel is basically saying, take a lot of the center, and then decreasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploration of convolution, this website is great:\nhttp://setosa.io/ev/image-kernels/", "# Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing \"scikit-image\".\nfrom skimage import data\nimg = data.camera().astype(np.float32)\nplt.imshow(img, cmap='gray')\nprint(img.shape)", "Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them in a single numpy array, the resulting shape has the number of images first.\nN x H x W x C\nIn order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be:\n1 x H x W x 1", "# We could use the numpy reshape function to reshape our numpy array\nimg_4d = img.reshape([1, img.shape[0], img.shape[1], 1])\nprint(img_4d.shape)\n\n# but since we'll be using tensorflow, we can use the tensorflow reshape function:\nimg_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])\nprint(img_4d)", "Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list():", "print(img_4d.get_shape())\nprint(img_4d.get_shape().as_list())", "The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.\nWe'll also have to reshape our Gaussian Kernel to be 4-dimensional as well.
The dimensions for kernels are slightly different! Remember that the image is:\nNumber of Images x Image Height x Image Width x Number of Channels\nwe have:\nKernel Height x Kernel Width x Number of Input Channels x Number of Output Channels\nOur Kernel already has a height and width of ksize so we'll stick with that for now. The number of input channels should match the number of channels on the image we want to convolve. And for now, we just keep the same number of output channels as the input channels, but we'll later see how this comes into play.", "# Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O\nz_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1])\nprint(z_4d.get_shape().as_list())", "<a name=\"convolvefilter-an-image-using-a-gaussian-kernel\"></a>\nConvolve/Filter an image using a Gaussian Kernel\nWe can now use our previous Gaussian Kernel to convolve our image:", "convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')\nres = convolved.eval()\nprint(res.shape)", "There are two new parameters here: strides, and padding. Strides says how to move our kernel across the image. Basically, we'll only ever use it for one of two sets of parameters:\n[1, 1, 1, 1], which means, we are going to convolve every single image, every pixel, and every color channel by whatever the kernel is.\nand the second option:\n[1, 2, 2, 1], which means, we are going to convolve every single image, but every other pixel, in every single color channel.\nPadding says what to do at the borders. If we say \"SAME\", that means we want the same dimensions going in as we do going out. In order to do this, zeros must be padded around the image. If we say \"VALID\", that means no padding is used, and the image dimensions will actually change.", "# Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. 
We could plot by \"squeezing\" the singleton dimensions.\nplt.imshow(np.squeeze(res), cmap='gray')\n\n# Or we could specify the exact dimensions we want to visualize:\nplt.imshow(res[0, :, :, 0], cmap='gray')", "<a name=\"modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel\"></a>\nModulating the Gaussian with a Sine Wave to create Gabor Kernel\nWe've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate that.\n<graphic: draw 1d gaussian wave, 1d sine, show modulation as multiplication and resulting gabor.>\nWe first use linspace to get a set of values the same range as our gaussian, which should be from -3 standard deviations to +3 standard deviations.", "xs = tf.linspace(-3.0, 3.0, ksize)", "We then calculate the sine of these values, which should give us a nice wave", "ys = tf.sin(xs)\nplt.figure()\nplt.plot(ys.eval())", "And for multiplication, we'll need to convert this 1-dimensional vector to a matrix: N x 1", "ys = tf.reshape(ys, [ksize, 1])", "We then repeat this wave across the matrix by using a multiplication of ones:", "ones = tf.ones((1, ksize))\nwave = tf.matmul(ys, ones)\nplt.imshow(wave.eval(), cmap='gray')", "We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel:", "gabor = tf.multiply(wave, z_2d)\nplt.imshow(gabor.eval(), cmap='gray')", "<a name=\"manipulating-an-image-with-this-gabor\"></a>\nManipulating an image with this Gabor\nWe've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. 
Tensorflow calls these \"placeholders\", meaning, we're not sure what these are yet, but we know they'll fit in the graph like so, generally the input and output of the network.\nLet's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to None x None. This is something special for placeholders which tells tensorflow \"let this dimension be any possible value\". 1, 5, 100, 1000, it doesn't matter.", "# This is a placeholder which will become part of the tensorflow graph, but\n# which we have to later explicitly define whenever we run/evaluate the graph.\n# Pretty much everything you do in tensorflow can have a name. If we don't\n# specify the name, tensorflow will give a default one, like \"Placeholder_0\".\n# Let's use a more useful name to help us understand what's happening.\nimg = tf.placeholder(tf.float32, shape=[None, None], name='img')\n\n\n# We'll reshape the 2d image to a 3-d tensor just like before:\n# Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify.\n# We use it to reshape our H x W image to include a channel dimension of 1\n# our new dimensions will end up being: H x W x 1\nimg_3d = tf.expand_dims(img, 2)\ndims = img_3d.get_shape()\nprint(dims)\n\n# And again to get: 1 x H x W x 1\nimg_4d = tf.expand_dims(img_3d, 0)\nprint(img_4d.get_shape().as_list())\n\n# Let's create another set of placeholders for our Gabor's parameters:\nmean = tf.placeholder(tf.float32, name='mean')\nsigma = tf.placeholder(tf.float32, name='sigma')\nksize = tf.placeholder(tf.int32, name='ksize')\n\n# Then finally redo the entire set of operations we've done to convolve our\n# image, except with our placeholders\nx = tf.linspace(-3.0, 3.0, ksize)\nz = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /\n (2.0 * tf.pow(sigma, 2.0)))) *\n (1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))\nz_2d = 
tf.matmul(\n tf.reshape(z, tf.stack([ksize, 1])),\n tf.reshape(z, tf.stack([1, ksize])))\nys = tf.sin(x)\nys = tf.reshape(ys, tf.stack([ksize, 1]))\nones = tf.ones(tf.stack([1, ksize]))\nwave = tf.matmul(ys, ones)\ngabor = tf.multiply(wave, z_2d)\ngabor_4d = tf.reshape(gabor, tf.stack([ksize, ksize, 1, 1]))\n\n# And finally, convolve the two:\nconvolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved')\nconvolved_img = convolved[0, :, :, 0]", "What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.\nIf we try to evaluate it without specifying placeholders beforehand, we will get an error InvalidArgumentError: You must feed a value for placeholder tensor 'img' with dtype float and shape [512,512]:", "convolved_img.eval()", "It's saying that we didn't specify our placeholder for img. In order to \"feed a value\", we use the feed_dict parameter like so:", "convolved_img.eval(feed_dict={img: data.camera()})", "But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result:", "res = convolved_img.eval(feed_dict={\n img: data.camera(), mean:0.0, sigma:1.0, ksize:100})\nplt.imshow(res, cmap='gray')", "Now, instead of having to rewrite the entire graph, we can just specify the different placeholders.", "res = convolved_img.eval(feed_dict={\n img: data.camera(),\n mean: 0.0,\n sigma: 0.5,\n ksize: 32\n })\nplt.imshow(res, cmap='gray')", "<a name=\"homework\"></a>\nHomework\nFor your first assignment, we'll work on creating our own dataset. 
You'll need to find at least 100 images and work through the notebook.\n<a name=\"next-session\"></a>\nNext Session\nIn the next session, we'll create our first Neural Network and see how it can be used to paint an image.\n<a name=\"reading-material\"></a>\nReading Material\nAbadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., … Zheng, X. (2015). TensorFlow : Large-Scale Machine Learning on Heterogeneous Distributed Systems.\nhttps://arxiv.org/abs/1603.04467\nYoshua Bengio, Aaron Courville, Pascal Vincent. Representation Learning: A Review and New Perspectives. 24 Jun 2012.\nhttps://arxiv.org/abs/1206.5538\nJ. Schmidhuber. Deep Learning in Neural Networks: An Overview. Neural Networks, 61, p 85-117, 2015.\nhttps://arxiv.org/abs/1404.7828\nLeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep learning.” Nature 521, no. 7553 (2015): 436-444.\nIan Goodfellow Yoshua Bengio and Aaron Courville. Deep Learning. 2016.\nhttp://www.deeplearningbook.org/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tpin3694/tpin3694.github.io
machine-learning/tag_parts_of_speech.ipynb
mit
[ "Title: Tag Parts Of Speech\nSlug: tag_parts_of_speech\nSummary: How to tag parts of speech in unstructured text data for machine learning in Python. \nDate: 2016-09-09 12:00\nCategory: Machine Learning\nTags: Preprocessing Text\nAuthors: Chris Albon\nPreliminaries", "# Load libraries\nfrom nltk import pos_tag\nfrom nltk import word_tokenize", "Create Text Data", "# Create text\ntext_data = \"Chris loved outdoor running\"", "Tag Parts Of Speech", "# Use pre-trained part of speech tagger\ntext_tagged = pos_tag(word_tokenize(text_data))\n\n# Show parts of speech\ntext_tagged", "Common Penn Treebank Parts Of Speech Tags\nThe output is a list of tuples with the word and the tag of the part of speech. NLTK uses the Penn Treebank parts for speech tags.\n<table>\n <tr>\n <th>Tag</th>\n <th>Part Of Speech</th>\n </tr>\n <tr>\n <td>NNP</td>\n <td>Proper noun, singular</td>\n </tr>\n <tr>\n <td>NN</td>\n <td>Noun, singular or mass</td>\n </tr>\n <tr>\n <td>RB</td>\n <td>Adverb</td>\n </tr>\n <tr>\n <td>VBD</td>\n <td>Verb, past tense</td>\n </tr>\n <tr>\n <td>VBG</td>\n <td>Verb, gerund or present participle</td>\n </tr>\n <tr>\n <td>JJ</td>\n <td>Adjective</td>\n </tr>\n <tr>\n <td>PRP</td>\n <td>Personal pronoun</td>\n </tr>\n</table>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nbelaid/nbelaid.github.io
dev/bike_renting/bike_renting.ipynb
mit
[ "Introduction\nWe have a data set representing shared bike rentals according to several parameters. \nThe objective is to predict the number of bikes rented per hour in the city (variable \"count\").\nThroughout this case study, we will follow the following steps:\n- Dataset Presentation\n- Some preliminary assumptions and remarks\n- Data Cleansing\n- Variable creation and extraction\n- Statistical analysis\n- Prediction models \nBefore presenting the data set, here is the list of all used libraries.", "# management and data analysis \nimport numpy as np\nimport pandas as pd\nfrom datetime import datetime\n\n# visualisation\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n# Machine Learning\nfrom sklearn.linear_model import LinearRegression, Ridge\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor\nfrom sklearn.neural_network import MLPRegressor\n\n# support for Machine Learning\nfrom sklearn.model_selection import cross_val_score, train_test_split\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import mean_squared_error, r2_score\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import Pipeline\n\n# other (math, stat, etc.)\nfrom random import *\nfrom scipy import stats\nnp.random.seed(91948172)\nfrom copy import deepcopy\n\n", "Dataset presentation\nThe following lines of code show that the set contains 10886 instances and 12 variables.
\nHere is a brief description of these variables:\n- datetime: date and time of the survey\n- season: 1 = spring, 2 = summer, 3 = fall, 4 = winter\n- holiday: indicates whether the day is a school holiday\n- workingday: indicates whether the day is a working day (neither a weekend nor a holiday)\n- weather: 1 = Clear to partly cloudy, 2 = Mist or cloudy, 3 = Light rain or snow, 4 = Heavy rain or snow\n- temp: temperature in degrees Celsius\n- atemp: \"feels like\" temperature in degrees Celsius\n- humidity: relative humidity\n- windspeed: wind speed\n- casual: number of non-registered user rentals initiated\n- registered: number of registered user rentals initiated\n- count: number of total rentals", "# read the data set \ndf = pd.read_csv(\"data/data.csv\")\n\n# dimensions of dataframe \ndf.shape\n\n# overview\ndf.head()", "Prior assumptions and remarks\nVariable(s) to predict\nThe objective is to predict the number of bikes rented per hour in the city (variable \"count\"). \nMoreover, we have two variables \"casual\" and \"registered\" (which \"count\" is the sum of).", "# check the completeness of the data on the variables to predict \nprint((len(df[\"casual\"]) == 10886) \n and (len(df[\"registered\"]) == 10886) \n and (len(df[\"count\"]) == 10886))\n\n# check that \"count\" is the sum of \"casual\" and \"registered\" \nprint((df[\"casual\"] + df[\"registered\"] == df[\"count\"]).unique())", "Predictors\nThis step comes ahead of a statistical analysis. \nHere are some factors (in addition to the variables already in the dataset) that could potentially influence the number of rentals:\n- \"Time\": We do not rent the same number of bikes in the middle of the night as in the daytime. This variable can be extracted from the \"datetime\" variable.\n- \"Month\": Maybe we rent more bikes on sunny days.\n- \"Year\": Maybe there has been a progression or a regression from one year to another. \n- \"Weekend\": The rental behavior may be different on a weekday and on the weekend. 
\n- \"Traffic\": We do not have data on it.\n- \"Availability of bikes\": We do not have data on it. We do not have, for example, information on the number of people who tried to rent a bike in vain (if indeed such cases arise).\nHere are some other preliminary observations:\n- \"Month\" is perhaps more relevant information than \"season\".\n- The influential variables on \"casual\" users may not be the same as for \"registered\" users.\nData cleaning\nMissing data\nWe find, in what follows, that there are no missing data.", "# describe the dataframe\ndf.describe()", "Data type correction\nWe will mainly fix the types of categorical variables that are stored as numbers here.", "# information on the data columns, including their types \ndf.info() \n\n# data type transformation to category\ndf['weather'] = df['weather'].astype('category')\ndf['holiday'] = df['holiday'].astype('category')\ndf['workingday'] = df['workingday'].astype('category')\ndf['season'] = df['season'].astype('category')", "Variable creation\nMining and preparation of variables\nIn the \"datetime\" column, there is a lot of information. \"WorkingDay\", \"holiday\", \"season\" are already extracted. Here, we extract other information.", "# extract the hour \ndef getHour(x):\n dt = datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n return dt.hour\n\ndf['hour'] = df['datetime'].apply(getHour)\n\n# extract the day \ndef getDay(x):\n dt = datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n return dt.day\n\ndf['day'] = df['datetime'].apply(getDay)\n\n# extract the month \ndef getMonth(x):\n dt = datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n return dt.month\n\ndf['month'] = df['datetime'].apply(getMonth)\n\n# extract the year \ndef getYear(x):\n dt = datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n return dt.year\n\ndf['year'] = df['datetime'].apply(getYear)\n\n# extract the day of the week \n# ... 
0 for Monday, 6 for Sunday \ndef getWeekday(x):\n dt = datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n return dt.weekday()\n\ndf['weekday'] = df['datetime'].apply(getWeekday)", "Check the data distribution\nBefore we do the statistical analysis, let us have a look at the data distribution and homogeneity. \nHere are some of our observations:\n- the data are homogeneous over 2011 and 2012.\n- we have data for the first 19 days of each month\n- there are a few \"missing rows\" (you can see the columns of hours are not quite equal).", "# data is distributed over the two years 2011 and 2012 \ndf.groupby('year')['count'].count().reset_index()\n\n# data is distributed over 12 months\ncount_groupby_var = df.groupby('month')['count'].count().reset_index()\n\nfig, axs = plt.subplots(1, 1, figsize=(15, 4))\naxs.bar(count_groupby_var['month'], count_groupby_var['count'])\naxs.set_xticks(count_groupby_var['month'].unique())\nplt.show()\n\n# data is distributed over 19 days\ncount_groupby_var = df.groupby('day')['count'].count().reset_index()\n\nfig, axs = plt.subplots(1, 1, figsize=(15, 4))\naxs.bar(count_groupby_var['day'], count_groupby_var['count'])\naxs.set_xticks(count_groupby_var['day'].unique())\nplt.show()\n\n# data is distributed over 24 hours \ncount_groupby_var = df.groupby('hour')['count'].count().reset_index()\n\nfig, axs = plt.subplots(1, 1, figsize=(15, 4))\naxs.bar(count_groupby_var['hour'], count_groupby_var['count'])\naxs.set_xticks(count_groupby_var['hour'].unique())\nplt.show()", "Statistical analysis\nWe will, throughout this statistical analysis, create functions that will help us visualize the variations of the variables to be predicted and determine the influential factors.\nAnalysis of discrete numerical variables", "# function: visualize the change of the dependent variables VS discrete variables\ndef visualize_by_discr_var(var):\n\n # means\n casual_by_var = df.groupby(var)['casual'].agg(lambda x: x.mean()).reset_index()\n registered_by_var = 
df.groupby(var)['registered'].agg(lambda x: x.mean()).reset_index()\n count_by_var = df.groupby(var)['count'].agg(lambda x: x.mean()).reset_index()\n\n # visualisation\n fig, axes = plt.subplots(2, 3, sharey=\"row\", figsize=(15, 8))\n plt.subplots_adjust(wspace=0.3, hspace=0.3);\n fig.suptitle('')\n \n # row 1 \n axt = axes[0, 0]\n axt.plot(casual_by_var[var], casual_by_var['casual'])\n axt.set_title('fig.1 : ' + 'mean_casual_by_' + var)\n axt.grid(True)\n\n axt = axes[0, 1]\n axt.plot(registered_by_var[var], registered_by_var['registered'])\n axt.set_title('fig.2 : ' + 'mean_registered_by_' + var)\n axt.grid(True)\n\n axt = axes[0, 2]\n axt.plot(count_by_var[var], count_by_var['count'])\n axt.set_title('fig.3 : ' + 'mean_count_by_' + var)\n axt.grid(True)\n \n # row 2 \n axt = axes[1, 0]\n df.boxplot(column='casual', by=var, ax=axes[1, 0])\n axt.set_title('fig.4 : ' + 'casual_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[1, 1]\n df.boxplot(column='registered', by=var, ax=axes[1, 1])\n axt.set_title('fig.5 : ' + 'registered_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[1, 2]\n df.boxplot(column='count', by=var, ax=axes[1, 2])\n axt.set_title('fig.6 : ' + 'count_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n \n plt.show()\n\n\nvisualize_by_discr_var('hour')", "We see two \"phenomena\":\n- With regard to \"casual\" users, the distribution is as follows: a peak in mid-afternoon and a trough in the middle of the night. This may correspond to tourists, for example.\n- As for \"registered\" users, they represent the majority of users (visualizations are on the same \"y\" scale). Moreover, we can say that there is a peak around 5 pm, another around 8 am and, to a lesser extent, a small peak around 12 pm or 1 pm. This may correspond to workers or students who use the bicycle as a means of transportation.\nWe also notice that we have \"outliers\". 
\nHowever, they are probably \"natural outliers\". \nTo overcome this, we create the logarithms of the dependent variables and do the analysis once again.", "# creation of logarithms of the dependent variables\ndf['casual_log'] = np.log(df['casual'] + 1) \ndf['registered_log'] = np.log(df['registered'] + 1) \ndf['count_log'] = np.log(df['count'] + 1) \n\n\n# function: visualize the change in the ... \n# ... log of the dependent variables with regard to a discrete variable \ndef visualize_log_by_discr_var(var):\n\n # mean of the dependent variables\n casual_by_var = df.groupby(var)['casual_log'].agg(lambda x: x.mean()).reset_index()\n registered_by_var = df.groupby(var)['registered_log'].agg(lambda x: x.mean()).reset_index()\n count_by_var = df.groupby(var)['count_log'].agg(lambda x: x.mean()).reset_index()\n\n # visualisation\n fig, axes = plt.subplots(2, 3, sharey=\"row\", figsize=(15, 8))\n plt.subplots_adjust(wspace=0.3, hspace=0.3);\n fig.suptitle('')\n \n # row 1 \n axt = axes[0, 0]\n axt.plot(casual_by_var[var], casual_by_var['casual_log'])\n axt.set_title('fig.1 : ' + 'mean_casual_by_' + var)\n axt.grid(True)\n\n axt = axes[0, 1]\n axt.plot(registered_by_var[var], registered_by_var['registered_log'])\n axt.set_title('fig.2 : ' + 'mean_registered_by_' + var)\n axt.grid(True)\n\n axt = axes[0, 2]\n axt.plot(count_by_var[var], count_by_var['count_log'])\n axt.set_title('fig.3 : ' + 'mean_count_by_' + var)\n axt.grid(True)\n \n # row 2 \n axt = axes[1, 0]\n df.boxplot(column='casual_log', by=var, ax=axes[1, 0])\n axt.set_title('fig.4 : ' + 'casual_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[1, 1]\n df.boxplot(column='registered_log', by=var, ax=axes[1, 1])\n axt.set_title('fig.5 : ' + 'registered_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[1, 2]\n df.boxplot(column='count_log', by=var, ax=axes[1, 2])\n axt.set_title('fig.6 : ' + 'count_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n \n 
plt.show()\n\n\nvisualize_log_by_discr_var('hour')", "There are indeed fewer outliers regarding \"casual\" users. However, this is less conclusive regarding \"registered\" users.\nIn what follows, we do the same for the columns \"month\" and \"day\". We will see that:\n- for \"month\": It appears visually to be correlated with the dependent variables (a peak in June, for example). Also, the analysis of the log of the data seems to minimize the presence of outliers.\n- for \"day\": There does not appear to be a correlation.", "visualize_by_discr_var('month')\n\nvisualize_log_by_discr_var('month')\n\nvisualize_log_by_discr_var('day')", "We observed that the dependent variables depend on the hour and perhaps particularly on working hours. Let us assume that the \"behaviors\" are different on weekdays, at least for \"registered\" users.\nWe will discover, however, that it is true for both \"registered\" and \"casual\" users.\nAs the contrast is very clear, we decided to create a variable \"is_weekend\" to be used instead of \"weekday\".", "# aggregations of dependent variables based on hours and days of the week \nhour_agg_casual = pd.DataFrame(df.groupby([\"hour\",\"weekday\"],sort=True)[\"casual\"].mean()).reset_index()\nhour_agg_registered = pd.DataFrame(df.groupby([\"hour\",\"weekday\"],sort=True)[\"registered\"].mean()).reset_index()\nhour_agg_count = pd.DataFrame(df.groupby([\"hour\",\"weekday\"],sort=True)[\"count\"].mean()).reset_index()\n\n# visualisations \nfig, axes = plt.subplots(1, 3, sharey=\"row\", figsize=(15, 5))\n\naxt = axes[0]\nsns.pointplot(x=hour_agg_casual[\"hour\"], y=hour_agg_casual[\"casual\"],\n hue=hour_agg_casual[\"weekday\"], data=hour_agg_casual, join=True, ax=axt)\naxt.set(xlabel='', ylabel='hour',title='fig 1 : mean_casual_by_hour_and_weekday')\n\naxt = axes[1]\nsns.pointplot(x=hour_agg_registered[\"hour\"], y=hour_agg_registered[\"registered\"],\n hue=hour_agg_registered[\"weekday\"], data=hour_agg_registered, join=True, ax=axt)\naxt.set(xlabel='', 
ylabel='hour',title='fig 2 : mean_registered_by_hour_and_weekday')\n\naxt = axes[2]\nsns.pointplot(x=hour_agg_count[\"hour\"], y=hour_agg_count[\"count\"],\n hue=hour_agg_count[\"weekday\"], data=hour_agg_count, join=True, ax=axt)\naxt.set(xlabel='', ylabel='hour',title='fig 3 : mean_count_by_hour_and_weekday')\n\nplt.show()\n\n# creation of \"is_weekend\"\ndef is_weekend(weekday): \n return 1 if ((weekday == 5) or (weekday == 6)) else 0\n\ndf['is_weekend'] = df['weekday'].apply(is_weekend)\n\n# data type: category\ndf['is_weekend'] = df['is_weekend'].astype('category')\n", "We turn our attention to the variable \"year\". We notice a phenomenon: there is an increase between 2011 and 2012. We decide to keep this variable. However, we are not able to assess its relevance for future years (since we have only 2 years, it is, indeed, difficult to predict what the evolution will be at year n + 2).", "df['year_month'] = df['year'] * 100 + df['month']\n\n# mean\ncasual_by_ym = df.groupby('year_month')['casual'].mean().reset_index()\nregistered_by_ym = df.groupby('year_month')['registered'].mean().reset_index()\ncount_by_ym = df.groupby('year_month')['count'].mean().reset_index()\n\n# visualisation\nfig, axes = plt.subplots(1, 3, sharey=\"row\", figsize=(15, 4))\n\naxt = axes[0]\naxt.plot(casual_by_ym['casual'])\naxt.set_title('fig.1 : ' + 'mean_casual_by_year-month')\naxt.grid(True)\n\naxt = axes[1]\naxt.plot(registered_by_ym['registered'])\naxt.set_title('fig.2 : ' + 'mean_registered_by_year-month')\naxt.grid(True)\n\naxt = axes[2]\naxt.plot(count_by_ym['count'])\naxt.set_title('fig.3 : ' + 'mean_count_by_year-month')\naxt.grid(True)\n\nplt.show()\n", "Analysis of categorical variables\nWe directly observe the distribution of the log of dependent variables. It does not appear that these variables are particularly influential, except \"season\" on \"casual\" users. 
However, we will not keep \"season\" because we believe that \"month\" probably has greater explanatory power.", "# function to visualize the change of the ... \n# ... log of dependent variables \ndef visualize_log_by_cat_var(var):\n \n # mean\n casual_by_var = df.groupby(var)['casual_log'].agg(lambda x: x.mean()).reset_index()\n registered_by_var = df.groupby(var)['registered_log'].agg(lambda x: x.mean()).reset_index()\n count_by_var = df.groupby(var)['count_log'].agg(lambda x: x.mean()).reset_index()\n\n # visualisation \n fig, axes = plt.subplots(1, 3, sharey=\"row\", figsize=(15, 4))\n \n axt = axes[0]\n df.boxplot(column='casual_log', by=var, ax=axes[0])\n axt.set_title('fig 1 : ' + 'casual_log_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[1]\n df.boxplot(column='registered_log', by=var, ax=axes[1])\n axt.set_title('fig 2 : ' + 'registered_log_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n\n axt = axes[2]\n df.boxplot(column='count_log', by=var, ax=axes[2])\n axt.set_title('fig 3 : ' + 'count_log_by_' + var)\n axt.grid(True)\n axt.get_figure().suptitle('')\n \n plt.show()\n\n\nvisualize_log_by_cat_var('season')\n\nvisualize_log_by_cat_var('holiday')\n\nvisualize_log_by_cat_var('workingday')\n\n#visualize_log_by_cat_var('weather')", "Analysis of continuous variables\nWe first look at the correlation matrix. We observe that:\n- \"Temp\" is correlated with \"atemp\", which is predictable (0.98). This allows us to keep only one of the two variables. \"Temp\" is correlated at 0.47 and 0.32 with \"casual\" and \"registered\" respectively.\n- \"Windspeed\" has a low correlation with the dependent variables, which is confirmed by the visualizations.\n- \"Humidity\" is negatively correlated with \"casual\".", "# calculate correlation \n# ... 
note that we will take the absolute value \ncorr_mat = df[[\"temp\",\"atemp\",\"humidity\",\"windspeed\",\"casual\",\"registered\", \"count\"]].corr()\ncorr_mat = abs(corr_mat)\n\nfig,ax= plt.subplots()\nfig.set_size_inches(15,10)\nsns.heatmap(corr_mat, square=True, annot=True)\n\n# function that allows to visualize the variation of the... \n# ... dependent variables with a continuous variable \ndef visualize_by_cont_var(var):\n \n # scatter plots of the dependent variables as a function of the variable \"var\"\n f, axarr = plt.subplots(1, 3, sharey=True, figsize=(15, 4))\n \n axarr[0].scatter(df[var], df['casual'], s=1)\n axarr[0].set_title('casual_by_' + var)\n axarr[0].grid(True)\n\n axarr[1].scatter(df[var], df['registered'], s=1)\n axarr[1].set_title('registered_by_' + var)\n axarr[1].grid(True)\n\n axarr[2].scatter(df[var], df['count'], s=1)\n axarr[2].set_title('count_by_' + var)\n axarr[2].grid(True)\n\n plt.show()\n\n\nvisualize_by_cont_var('temp')", "The scatter visualization is not very clear. 
We decide to categorize the continuous variables.", "# discretization of the variable \"temp\"\n[df['temp_disc'], mod] = divmod(df['temp'], 5)\n\nvisualize_log_by_cat_var('temp_disc')\n\n# discretization of the variable \"humidity\"\n[df['humidity_disc'], mod] = divmod(df['humidity'], 5)\n\nvisualize_log_by_cat_var('humidity_disc')\n\n# discretization of the variable \"windspeed\"\n[df['windspeed_disc'], mod] = divmod(df['windspeed'], 5)\n\nvisualize_log_by_cat_var('windspeed_disc')", "Complementary question\nAssuming we have access to the sex and age of subscribed users, the statistical procedure that would allow us to say whether the age distributions of the two populations (women and men) are identical or not is Student's t-test. \nThis test has conditions such as:\n- sample size per group > 30, or the distribution follows a normal distribution\n- equality of variances in each group\nNote that if these conditions are not met, other tests may be applied.\nThere are two possible outcomes:\n- \"pvalue\" < 0.05: In this case, we consider that the difference is significant between the two distributions.\n- \"pvalue\" >= 0.05: In this case, we consider that the difference is not significant between the two distributions.\nLet's take an example. We build a dummy population of 1,000 individuals.\nWe randomly generate gender and age independently, then we merge the lists as columns. After application of the test, we get a \"pvalue\" way above the threshold. 
This result is expected because we generated ages randomly, independently of sex.", "# 1000 \"sex\" randomly \nsex_list = []\nfor i in range(1000):\n sex_list.append( choice([\"Female\", \"Male\"]) )\n \n# 1000 \"age\" randomly \nage_list = [round(gauss(50,15)) for i in range(1000)]\n \n# build a dummy data set on subscribers by combining the two lists \ndf_registered = pd.DataFrame({\n \"sex\": sex_list,\n \"age\": age_list\n })\n\n\n# separate the 2 groups \ndf_registered_male = df_registered.loc[df_registered[\"sex\"] == \"Male\",]\ndf_registered_female = df_registered.loc[df_registered[\"sex\"] == \"Female\",]\n\n# Student's t-test\nstats.ttest_ind(df_registered_male['age'],df_registered_female['age'])", "Build a prediction model\nGiven the observations we have made previously, we make the following choices:\n- We train separate models for the \"casual\" users and the \"registered\" users. They do not have exactly the same predictor variables. In addition, the predictor variables do not have the same effect on them.\n- We will analyze the log of the dependent variables to minimize the presence of outliers.\nProcess\nTo design the two models, here are the steps we shall follow:\n- Reserve some of the data for the final performance evaluation (we reserve 20%): df_test\n\nFor the remaining 80%, we do one of two things:\nreserve 25% for validation (df_validate) and 75% for training the model (df_train)\nusing cross-validation with k = 10 folds, for example. Note that this procedure is more time consuming. However, this is the one we use because it is more accurate in the performance estimation.\nTune parameters. We will do this step along with cross-validation using \"GridSearchCV\".\nChoose the model with the best performance.\nCalculate the actual performance using the df_test.\nBuild the model again on all data (using the selected model type and with the optimal settings).\n\nChoice of models\nWe have selected some models. 
We will choose the one that will show the best performance with the best settings:\n- Linear regression: It is simple and fast, so it costs \"nothing\" to try. However, it tends to be simplistic, and therefore less effective. It is also very sensitive to outliers (which we tried to correct by using the log of the dependent variables).\n- kNN regression: It is an effective algorithm when the amount of data is high. It can be time consuming as it is necessary to compute all pairwise distances.\n- Random forests: Decision trees, if they are not controlled, tend to overfit. To avoid this, a large number of trees are built. The average of this \"forest\" is a tree that prevents overfitting. However, random forests tend to consume a lot of memory and time.\nOther algorithms could be used that we do not use here, for example:\n- Ridge regression: It is interesting when the number of dimensions is high, which is not the case here.\n- Boosted decision trees: They avoid overfitting by limiting the number of possible subdivisions. The algorithm builds a sequence of trees, each of which compensates for the errors of the previous trees. Like random forests, they also tend to use a lot of memory.\nBefore training models, let us present the evaluation criterion.\nIn order to evaluate the performance, we will compute the \"mean squared error\" on the log of the dependent variables (thus the \"logarithmic mean squared error\" on the variables themselves). Other criteria exist such as the \"mean absolute error\". 
But we prefer the \"logarithmic mean squared error\", which penalizes larger errors.\nModel training\nHere we follow the previously described steps.", "# reserve 20% of the data for the final test \ndf_train, df_test = train_test_split(df, test_size=0.2, random_state=91948172)\n\n# predictors for \"casual\"\npredictors_4_casual = ['hour', 'is_weekend', 'month', 'year', 'temp', 'humidity']\n# preparation of the matrix X and the vector y for \"casual\"\ndf_train_X_4_casual = df_train[predictors_4_casual]\ndf_train_y_casual_log = df_train['casual_log']\n\n# predictors for \"registered\"\npredictors_4_registered = ['hour', 'is_weekend', 'month', 'year', 'temp']\n# preparation of the matrix X and the vector y for \"registered\"\ndf_train_X_4_registered = df_train[predictors_4_registered]\ndf_train_y_registered_log = df_train['registered_log']\n\n# ...'season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed',\n# ...'hour', 'month', 'year', 'day', 'is_weekend', \n", "To train the models we use \"scikit-learn\" 0.19.1 (the latest stable version at the time of the case study).", "# global parameters\n\n# evaluation method \nscoring_param = 'neg_mean_squared_error'\n\n# number of folds for cross-validation\ncv_param = 10\n\n# \"linear regression\" for \"casual\" \n# note: no cross-validation here since the linear regression does not overfit\nmodel = LinearRegression()\n\nmodel.fit(df_train_X_4_casual, df_train_y_casual_log)\npred = model.predict(df_train_X_4_casual)\nscore_value = mean_squared_error(df_train_y_casual_log, pred)\nprint(\"MSLE : %0.3f\" % (score_value)) \n\n# \"linear regression\" for \"registered\" \n# note: no cross-validation here since the linear regression does not overfit\nmodel = LinearRegression()\n\nmodel.fit(df_train_X_4_registered, df_train_y_registered_log)\npred = model.predict(df_train_X_4_registered)\nscore_value = mean_squared_error(df_train_y_registered_log, pred)\nprint(\"MSLE : %0.3f\" % (score_value)) \n\n# \"k
nearest neighbors\" for \"casual\" \n# note: using GridSearchCV (cross validation and determination of optimal parameters) \nmodel = KNeighborsRegressor()\n\nhyperparameters = {\n \"n_neighbors\": range(1,49,10)\n}\n\ngrid = GridSearchCV(model, param_grid=hyperparameters, cv=cv_param, scoring=scoring_param)\ngrid.fit(df_train_X_4_casual, df_train_y_casual_log)\n\nprint(grid.best_params_)\nprint(\"MSLE : %0.3f\" % (grid.best_score_)) \n\n# \"k nearest neighbors\" for \"registered\" \n# note: using GridSearchCV (cross validation and determination of optimal parameters) \nmodel = KNeighborsRegressor()\n\nhyperparameters = {\n \"n_neighbors\": range(1,49,10)\n}\n\ngrid = GridSearchCV(model, param_grid=hyperparameters, cv=cv_param, scoring=scoring_param)\ngrid.fit(df_train_X_4_registered, df_train_y_registered_log)\n\nprint(grid.best_params_)\nprint(\"MSLE : %0.3f\" % (grid.best_score_)) \n\n# \"random forest\" for \"casual\" \n# note: we use here cross validation and empirical determination of the parameters \n# ... GridSearchCV is too expensive here \nmodel = RandomForestRegressor(n_estimators=120, min_samples_split=10, min_samples_leaf=5)\n\nscores = cross_val_score(model, df_train_X_4_casual, df_train_y_casual_log, cv=cv_param, scoring=scoring_param)\nprint(\"MSLE : %0.3f\" % (scores.mean())) \n\n# \"random forest\" for \"registered\"\n# note: we use here cross validation and empirical determination of the parameters \n# ... 
GridSearchCV is too expensive here \nmodel = RandomForestRegressor(n_estimators=90, min_samples_split=10, min_samples_leaf=5)\n\nscores = cross_val_score(model, df_train_X_4_registered, df_train_y_registered_log, cv=cv_param, scoring=scoring_param)\nprint(\"MSLE : %0.3f\" % (scores.mean())) ", "Choosing the type of model and parameters\nHere are our picks:\n- For predicting the rentals of \"casual\" users, we choose the \"RandomForestRegressor\" with the following parameters: n_estimators = 120, min_samples_split = 10, min_samples_leaf = 5.\n- For predicting the rentals of \"registered\" users, we choose the \"RandomForestRegressor\" with the following parameters: n_estimators = 90, min_samples_split = 10, min_samples_leaf = 5.\nFinal evaluation\nWe use the model with the best performance to evaluate the test data (which the model has not \"seen\"). We will see that the performances are not very different from those already obtained. This may indicate that we do not need additional data (without having to compare learning curves). 
However, if the predictions have to be made at moments outside of the study interval, the same performances are not guaranteed.", "# casual: test data \ndf_test_X_4_casual = df_test[predictors_4_casual]\ndf_test_y_casual_log = df_test['casual_log']\n\n# casual: reconstruction of the best model on the training data \nmodel_casual = RandomForestRegressor(n_estimators=120, min_samples_split=10, min_samples_leaf=5)\nmodel_casual.fit(df_train_X_4_casual, df_train_y_casual_log)\n\n# casual: final evaluation \npred_casual = model_casual.predict(df_test_X_4_casual)\nmsle = mean_squared_error(df_test_y_casual_log, pred_casual)\nprint(\"MSLE : %0.3f\" % (msle)) \n\n# registered: test data \ndf_test_X_4_registered = df_test[predictors_4_registered]\ndf_test_y_registered_log = df_test['registered_log']\n\n# registered: reconstruction of the best model on the training data \nmodel_registered = RandomForestRegressor(n_estimators=90, min_samples_split=10, min_samples_leaf=5)\nmodel_registered.fit(df_train_X_4_registered, df_train_y_registered_log)\n\n# registered: final evaluation \npred_registered = model_registered.predict(df_test_X_4_registered)\nmsle = mean_squared_error(df_test_y_registered_log, pred_registered)\nprint(\"MSLE : %0.3f\" % (msle)) ", "Final model\nLet us train, one last time, the type of model that had the best performance (with the optimal settings). 
This time, we do it on all the data.", "# casual: all data \ndf_X_4_casual = df[predictors_4_casual]\ndf_y_casual_log = df['casual_log']\n\n# casual: build once again and finally the model on all data\nmodel_casual = RandomForestRegressor(n_estimators=120, min_samples_split=10, min_samples_leaf=5)\nmodel_casual.fit(df_X_4_casual, df_y_casual_log)\n\n# registered: all data \ndf_X_4_registered = df[predictors_4_registered]\ndf_y_registered_log = df['registered_log']\n\n# registered: build once again and finally the model on all data\nmodel_registered = RandomForestRegressor(n_estimators=90, min_samples_split=10, min_samples_leaf=5)\nmodel_registered.fit(df_X_4_registered, df_y_registered_log)", "Let us show how to predict \"count\" for new data.\nNote that we should not forget that the changes we have applied to our current datasets will also have to be applied to the new set.\nHere is an example of how we would proceed with a dummy dataset \"df_new\" (which is a copy of some of the data we already have).", "# let us take as an example a sample of 100 rows from the available dataset \ndf_new = deepcopy(df.iloc[sample(range(len(df)), 100)])\n\n# casual: extraction of the matrix of predictors \ndf_new_X_4_casual = df_new[predictors_4_casual]\n\n# model application\npred_casual = model_casual.predict(df_new_X_4_casual)\n\n# data transformation (inverse of the previously applied log transformation)\ndf_new['casual'] = np.exp(pred_casual) - 1\n\n# registered: extraction of the matrix of predictors \ndf_new_X_4_registered = df_new[predictors_4_registered]\n\n# model application\npred_registered = model_registered.predict(df_new_X_4_registered)\n\n# data transformation (inverse of the previously applied log transformation)\ndf_new['registered'] = np.exp(pred_registered) - 1\n\n# finally, addition of the two variables in order to predict \"count\" \ndf_new['count'] = df_new['casual'] + df_new['registered']", "Outlooks\nHere are some areas for improvement:\n- Consider the \"flow\". 
Indeed, the models that have been trained do not consider the \"sequence\" in the data. For example, if, at some point, there is a peak in rentals, then maybe the next hour there will be relatively fewer rentals compared to the previous day under the same conditions. It would be interesting to build a model that takes this into account.\n- Understand why there has been an increase from one year to another. For this, we could try to get more data from before 2011 or after 2012 to check the annual increase in rentals. Moreover, it would be interesting to understand what could have caused this increase: lower prices? an increasing number of workers or tourists? responsible user behavior? increased traffic?\n- Test other Machine Learning methods such as polynomial regression, the Perceptron or \"GradientBoostingRegressor\", which seems to show very good performance over a wide range of problems." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
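The bike-rental notebook above scores models with the mean squared error of log-transformed counts, i.e. the mean squared logarithmic error (MSLE), and inverts the transform with `exp(pred) - 1`. A tiny stdlib sketch of both ideas (the function name and sample numbers here are illustrative, not from the notebook):

```python
import math

# MSLE on raw counts == MSE on log(count + 1), which is what the notebook computes.
def msle(actual, predicted):
    return sum((math.log(a + 1) - math.log(p + 1)) ** 2
               for a, p in zip(actual, predicted)) / len(actual)

# MSLE penalizes relative error, so halving a small count costs a comparable
# amount to halving a large count -- unlike plain MSE.
small = msle([10], [5])     # 2x relative error on a small count
large = msle([100], [50])   # ~2x relative error on a large count
print(round(small, 3), round(large, 3))

# Inverting the log(y + 1) transform, as the notebook does with np.exp(pred) - 1:
y = 42
y_roundtrip = math.exp(math.log(y + 1)) - 1
```

The `+ 1` inside the logs keeps zero-rental hours well defined, which is why the notebook uses `np.log(df['count'] + 1)` rather than a bare log.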
mdda/fossasia-2016_deep-learning
notebooks/work-in-progress/WIP_9-RNN-Fun.ipynb
mit
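The notebook below repeatedly picks words (and later characters) in proportion to their corpus frequency by binary-searching a cumulative-frequency array with numpy's `searchsorted`. A stdlib miniature of the same idea, with a made-up four-word vocabulary for illustration:

```python
import bisect
import random

# Toy vocabulary with (illustrative) relative frequencies summing to 1.0
vocab = ['the', 'of', 'cat', 'xylophone']
freqs = [0.5, 0.3, 0.15, 0.05]

# Cumulative sums, e.g. [0.5, 0.8, 0.95, 1.0] -- the stdlib analogue of freqs.cumsum()
cum = []
total = 0.0
for f in freqs:
    total += f
    cum.append(total)

def weighted_word(u=None):
    """Map a uniform draw in [0, 1) to a word via binary search (cf. np.searchsorted)."""
    if u is None:
        u = random.random()
    return vocab[bisect.bisect_right(cum, u)]

print(weighted_word(0.1))   # 'the'  (0.1 falls below 0.5)
print(weighted_word(0.85))  # 'cat'  (0.85 falls between 0.8 and 0.95)
```

Binary search makes each draw O(log n) in the vocabulary size, which is why the notebook precomputes `freqs_cum` once instead of scanning the frequency array per sample.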
[ "RNN Character Model + Lots More\nThis example trains an RNN to create plausible words from a corpus. But it includes lots of interesting \"bells and whistles\".\nThe data used for training is one of:\n * a vocabulary/dictionary collected from the 1-Billion-Word Corpus\n * a list of Indian names (voters rolls, by year) : TODO\nAdversarial networks : http://carpedm20.github.io/faces/\nDoing this with RNNs may be pretty novel : https://www.quora.com/Can-generative-adversarial-networks-be-used-in-sequential-data-in-recurrent-neural-networks-How-effective-would-they-be", "import numpy as np\nimport theano\n\nimport lasagne\n#from lasagne.utils import floatX\n\nimport pickle\nimport gzip\nimport random\n\nimport time\n\nWORD_LENGTH_MAX = 16\n\n# Load an interesting corpus (vocabulary words with frequencies from 1-billion-word-corpus) :\n\nwith gzip.open('../data/RNN/ALL_1-vocab.txt.gz') as f:\n lines = [ l.strip().lower().split() for l in f.readlines() ]\nlines[0:10]\n\n# Here are our characters : '[a-z\\- ]'\nimport re\ninvalid_chars = r'[^a-z\\- ]'\nlines_valid = [ l for l in lines if not re.search(invalid_chars, l[0]) ]\n#lines_valid = lines_valid[0:50000]\nlines_valid[0:10], len(lines_valid)\n\n# /usr/share/dict/linux.words\nwith open('/usr/share/dict/linux.words','rt') as f:\n linux_words = [ l.strip() for l in f.readlines() ]\nlinux_wordset = set(linux_words)\n#'united' in linux_wordset\nlines_filtered = [l for l in lines_valid \n if len(l[0])>=3 # Require each word to have 3 or more characters\n and l[0] in linux_wordset # Require each word to be found in regular dictionary\n and len(l[0])<WORD_LENGTH_MAX # And limit length (to avoid crazy roll-out of RNN)\n ]\nlines_filtered[0:10], len(lines_filtered)\n\n# Split apart the words and their frequencies (Assume these are in sorted order, at least initial few)\nwords = [ l[0] for l in lines_filtered ]\nwordset = set(words)\nwordsnp = np.array(words)\nfreqs_raw = np.array( [ int(l[1]) for l in lines_filtered ] 
)\n\nfreq_tot = float(freqs_raw.sum())\n\n# Frequency weighting adjustments\nfreqs = freqs_raw / freq_tot\n\ncutoff_index = 30 # All words with higher frequencies will be 'limited' at this level\nfreqs[0:cutoff_index] = freqs[cutoff_index]\n\nfreqs = freqs / freqs.sum()\nfreqs[0:50]\n\ntest_cum = np.array( [.1, .5, .9, 1.0] )\ntest_cum.searchsorted([ .05, 0.45, .9, .95])\n\n# Cumulative frequency, so that we can efficiently pick weighted random words...\n# using http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html\nfreqs_cum = freqs.cumsum()\nfreqs_cum[:10], freqs_cum[-10:], ", "Network Parameters from Corpus\nFind the set of characters used in the corpus and construct mappings between characters, integer indices, and one hot encodings", "CHARS_VALID = \"abcdefghijklmnopqrstuvwxyz- \"\nCHARS_SIZE = len(CHARS_VALID)\n\nCHAR_TO_IX = {c: i for i, c in enumerate(CHARS_VALID)}\nIX_TO_CHAR = {i: c for i, c in enumerate(CHARS_VALID)}\nCHAR_TO_ONEHOT = {c: np.eye(CHARS_SIZE)[i] for i, c in enumerate(CHARS_VALID)}\n#CHAR_TO_IX", "Unigram frequency distribution", "# Single letter frequencies\nunigram_freq = np.zeros( (CHARS_SIZE,))\nidx_end = CHAR_TO_IX[' ']\nfor i,w in enumerate(words):\n    word_freq = freqs[i]\n    for c in w:\n        unigram_freq[ CHAR_TO_IX[c] ] += word_freq\n    unigram_freq[ idx_end ] += word_freq\nunigram_freq /= unigram_freq.sum()\nunigram_freq_cum = unigram_freq.cumsum()\n[ (CHARS_VALID[i], \"%6.3f\" % f) for i,f in enumerate(unigram_freq.tolist()) ]\n#CHARS_VALID[ unigram_freq_cum.searchsorted(0.20) ]\n\ndef unigram_word():\n    s=[]\n    while True:\n        idx = np.searchsorted(unigram_freq_cum, np.random.uniform())\n        c = IX_TO_CHAR[idx]\n        if c==' ':\n            if len(s)>0:\n                break\n            else:\n                continue\n        s.append(c)\n    return ''.join(s)\n' '.join([ unigram_word() for i in range(0,20) ])", "Bigram frequency distribution", "# two-letter frequencies\nbigram_freq = np.zeros( (CHARS_SIZE,CHARS_SIZE) )\nfor i,w in enumerate(words):\n    w2 = ' '+w+' '\n    word_freq = 
freqs[i]\n for j in range(0, len(w2)-1):\n bigram_freq[ CHAR_TO_IX[ w2[j] ], CHAR_TO_IX[ w2[j+1] ] ] += word_freq\n#[ (CHARS_VALID[i], \"%6.3f\" % f) for i,f in enumerate(bigram_freq[ CHAR_TO_IX['q'] ].tolist()) ]\n#bigram_freq.sum(axis=1)[CHAR_TO_IX['q']]\nbigram_freq /= bigram_freq.sum(axis=1)[:, np.newaxis] # Trick to enable unflattening of sum()\nbigram_freq_cum = bigram_freq.cumsum(axis=1)\n#[ (CHARS_VALID[i], \"%6.3f\" % f) for i,f in enumerate(bigram_freq_cum[ CHAR_TO_IX['q'] ].tolist()) ]\n\n#bigram_freq.sum(axis=1)[CHAR_TO_IX['q']]\n#(bigram_freq/ bigram_freq.sum(axis=1)).sum(axis=0)\n#bigram_freq.sum(axis=1)[CHAR_TO_IX['q']]\n#bigram_freq[CHAR_TO_IX['q'], :].sum()\n#(bigram_freq / bigram_freq.sum(axis=1)[:, np.newaxis]).cumsum(axis=1)\n#Letter relative frequency for letters following 'q'\n[ (CHARS_VALID[i], \"%6.3f\" % f) for i,f in enumerate(bigram_freq[ CHAR_TO_IX['q'] ].tolist()) if f>0.001]\n#bigram_freq_cum[4]\n\ndef bigram_word():\n s=[]\n idx_last = CHAR_TO_IX[' ']\n while True:\n idx = np.searchsorted(bigram_freq_cum[idx_last], np.random.uniform())\n c = IX_TO_CHAR[idx]\n if c==' ':\n if len(s)>0:\n #if len(s)<50: continue\n break\n else:\n continue\n s.append(c)\n idx_last=idx\n return ''.join(s)\n' '.join([ bigram_word() for i in range(0,20) ])", "Trigram frequency distribution", "# Three-letter frequencies\ntrigram_freq = np.zeros( (CHARS_SIZE,CHARS_SIZE,CHARS_SIZE) )\nfor i,w in enumerate(words):\n w3 = ' '+w+' '\n word_freq = freqs[i]\n for j in range(0, len(w3)-2):\n trigram_freq[ CHAR_TO_IX[ w3[j] ], CHAR_TO_IX[ w3[j+1] ], CHAR_TO_IX[ w3[j+2] ] ] += word_freq\ntrigram_freq /= trigram_freq.sum(axis=2)[:, :, np.newaxis] # Trick to enable unflattening of sum()\ntrigram_freq_cum = trigram_freq.cumsum(axis=2)\n[ \"ex-%s %6.3f\" % (CHARS_VALID[i], f) \n for i,f in enumerate(trigram_freq[ CHAR_TO_IX['e'], CHAR_TO_IX['x'] ].tolist()) if f>0.001 ]\n\ndef trigram_word():\n s=[]\n idx_1 = idx_2 = CHAR_TO_IX[' ']\n while True:\n idx = 
np.searchsorted(trigram_freq_cum[idx_1, idx_2], np.random.uniform())\n c = IX_TO_CHAR[idx]\n if c==' ':\n if len(s)>0:\n #if len(s)<50: continue\n break\n else:\n continue\n s.append(c)\n idx_1, idx_2 = idx_2, idx\n return ''.join(s) \n' '.join([ trigram_word() for i in range(0,20) ])", "Generate base-line scores", "sample_size=10000\nngram_hits = [0,0,0]\nfor w in [ unigram_word() for i in range(0, sample_size) ]:\n if w in wordset: ngram_hits[0] += 1\n #print(\"%s %s\" % ((\"YES\" if w in wordset else \" - \"), w, ))\nfor w in [ bigram_word() for i in range(0, sample_size) ]:\n if w in wordset: ngram_hits[1] += 1\n #print(\"%s %s\" % ((\"YES\" if w in wordset else \" - \"), w, ))\nfor w in [ trigram_word() for i in range(0, sample_size) ]:\n if w in wordset: ngram_hits[2] += 1\n #print(\"%s %s\" % ((\"YES\" if w in wordset else \" - \"), w, ))\nfor i,hits in enumerate(ngram_hits):\n print(\"%d-gram : %4.2f%%\" % (i+1, hits*100./sample_size ))\n#[ (i,w) for i,w in enumerate(words) if 'mq' in w]\n\n# Find the distribution of unigrams by sampling (sanity check)\nif False:\n sample_size=1000\n arr=[]\n for w in [ unigram_word() for i in range(0, sample_size) ]:\n arr.append(w)\n s = ' '.join(arr)\n s_len = len(s)\n for c in CHARS_VALID:\n f = len(s.split(c))-1\n print(\"%s -> %6.3f%%\" % (c, f*100./s_len))", "RNN Main Parameters", "BATCH_SIZE = 64\nRNN_HIDDEN_SIZE = CHARS_SIZE\nGRAD_CLIP_BOUND = 5.0", "An RNN 'discriminator'\nInstead of having a binary 'YES/NO' decision about whether a word is valid (via a lookup in the vocabulary), it may make it simpler to train a word-generator if we can assign a probability that a given word is valid. \nTo do this, let's create a recurrent neural network (RNN) that accepts a (one-hot-encoded) word as input, and (at the end of the sequence) gives us an estimate of the probability that the word is valid. 
\nActually, rather than discriminate according to whether the word is actually valid, let's 'just' try to decide whether it was produced directly from the dictionary or from the bigram_word() source.\nThis can be tested by giving it lists of actual words, and lists of words generated by bigram_word() and seeing whether they can be correctly classified. \nThe decision about what to do in the 12% of cases when the bigram function results in a valid word can be left until later... (since the distribution is so heavily skewed towards producing non-words).\nCreate Training / Testing dataset\nAnd a 'batch generator' function that delivers data in the right format for RNN training", "def batch_dictionary(size=BATCH_SIZE//2):\n    uniform_vars = np.random.uniform( size=(size,) )\n    idx = freqs_cum.searchsorted(uniform_vars)\n    return wordsnp[ idx ].tolist()\n    \ndef batch_bigram(size=BATCH_SIZE//2):\n    return [ bigram_word()[0:WORD_LENGTH_MAX] for i in range(size) ]\n\n# Test it out\n#batch_test = lambda : batch_dictionary(size=4)\nbatch_test = lambda : batch_bigram(size=4)\nprint(batch_test())\nprint(batch_test())\nprint(batch_test())", "Lasagne RNN tutorial (including conventions &amp; rationale)\n\nhttp://colinraffel.com/talks/hammer2015recurrent.pdf\n\nLasagne Examples\n\nhttps://github.com/Lasagne/Lasagne/blob/master/lasagne/layers/recurrent.py\nhttps://github.com/Lasagne/Recipes/blob/master/examples/lstm_text_generation.py\n\nGood blog post series\n\nhttp://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/", "# After sampling a data batch, we transform it into a one hot feature representation with a mask\ndef prep_batch_for_network(batch_of_words):\n    word_max_length = np.array( [ len(w) for w in batch_of_words ]).max()\n    \n    # translate into one-hot matrix, mask values and targets\n    input_values = np.zeros((len(batch_of_words), word_max_length, CHARS_SIZE), dtype='float32')\n    mask_values = 
np.zeros((len(batch_of_words), word_max_length), dtype='int32')\n    \n    for i, word in enumerate(batch_of_words):\n        for j, c in enumerate(word):\n            input_values[i,j] = CHAR_TO_ONEHOT[ c ]\n        mask_values[i, 0:len(word) ] = 1\n\n    return input_values, mask_values", "Define the Discriminating Network Symbolically", "# Symbolic variables for input. In addition to the usual features and target,\n# we need initial values for the RNN layer's hidden states\ndisc_input_sym = theano.tensor.tensor3()\ndisc_mask_sym = theano.tensor.imatrix()\n\ndisc_target_sym = theano.tensor.matrix() # probabilities of being from the dictionary (i.e. a single column matrix)\n\n# Our network has a single GRU layer processing the input sequence.\ndisc_input = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\ndisc_mask = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\n\ndisc_rnn1 = lasagne.layers.GRULayer(disc_input,\n                                    num_units=RNN_HIDDEN_SIZE,\n                                    gradient_steps=-1,\n                                    grad_clipping=GRAD_CLIP_BOUND,\n                                    hid_init=lasagne.init.Normal(),\n                                    learn_init=True,\n                                    mask_input=disc_mask,\n                                    only_return_final=True, # Only the state at the last timestep is needed\n                                    )\n\ndisc_decoder = lasagne.layers.DenseLayer(disc_rnn1,\n                                         num_units=1,\n                                         nonlinearity=lasagne.nonlinearities.sigmoid\n                                         )\n\ndisc_final = disc_decoder\n\n# Finally, the output stage\ndisc_output = lasagne.layers.get_output(disc_final, {\n        disc_input: disc_input_sym, \n        disc_mask: disc_mask_sym, \n    }\n    )\n", "Loss Function for Training", "disc_loss = theano.tensor.nnet.binary_crossentropy(disc_output, disc_target_sym).mean()", "... 
and the Training and Prediction functions", "# For stability during training, gradients are clipped and a total gradient norm constraint is also used\n#MAX_GRAD_NORM = 15\n\ndisc_params = lasagne.layers.get_all_params(disc_final, trainable=True)\n\ndisc_grads = theano.tensor.grad(disc_loss, disc_params)\n#disc_grads = [theano.tensor.clip(g, -GRAD_CLIP_BOUND, GRAD_CLIP_BOUND) for g in disc_grads]\n#disc_grads, disc_norm = lasagne.updates.total_norm_constraint( disc_grads, MAX_GRAD_NORM, return_norm=True)\n\ndisc_updates = lasagne.updates.adam(disc_grads, disc_params)\n\ndisc_train = theano.function([disc_input_sym, disc_target_sym, disc_mask_sym], # , disc_rnn1_t0_sym\n [disc_loss], # , disc_output, norm, hid_out_last, hid2_out_last\n updates=disc_updates,\n )\n\ndisc_predict = theano.function([disc_input_sym, disc_mask_sym], [disc_output])\nprint(\"Discriminator network functions defined\")", "Finally, the Discriminator Training Loop\n\nTraining takes a while :: 1000 iteration takes about 20 seconds on a CPU\n... 
you may want to skip this and the next cell, and load the pretrained weights instead", "t0, iterations_complete = time.time(), 0\n\nepochs = 10*1000\nt1, iterations_recent = time.time(), iterations_complete\nfor epoch_i in range(epochs):\n    # create a batch of words : half are dictionary, half are from bigram\n    batch_of_words = batch_dictionary() + batch_bigram()\n    \n    # get the one-hot input values and corresponding mask matrix\n    disc_input_values, disc_mask_values = prep_batch_for_network(batch_of_words)\n\n    # and here are the associated target values \n    disc_target_values= np.zeros((len(batch_of_words),1), dtype='float32')\n    \n    disc_target_values[ 0:(BATCH_SIZE//2), 0 ] = 1.0 # First half are dictionary values\n    for i, word in enumerate(batch_of_words):\n        if True and i>=BATCH_SIZE//2 and word in wordset:\n            disc_target_values[ i , 0 ] = 1.0 # bigram has hit a dictionary word by luck...\n\n    \n    # Now train the discriminator RNN\n    disc_loss_, = disc_train(disc_input_values, disc_target_values, disc_mask_values)\n    \n    #disc_output_, = disc_predict(disc_input_values, disc_mask_values)\n    iterations_complete += 1\n    \n    if iterations_complete % 250 == 0:\n        secs_per_batch = float(time.time() - t1)/ (iterations_complete - iterations_recent)\n        eta_in_secs = secs_per_batch*(epochs-epoch_i)\n        print(\"Iteration {:5d}, loss_train: {:.4f} ({:.1f}s per 1000 batches) eta: {:.0f}m{:02.0f}s\".format(\n            iterations_complete, float(disc_loss_), \n            secs_per_batch*1000., np.floor(eta_in_secs/60), np.floor(eta_in_secs % 60) \n        ))\n        #print('Iteration {}, output: {}'.format(iteration, disc_output_, )) # , output: {}\n        t1, iterations_recent = time.time(), iterations_complete\n    \nprint('Iteration {}, ran in {:.1f}sec'.format(iterations_complete, float(time.time() - t0)))", "Save the learned parameters\nUncomment the pickle.dump() to actually save to disk", "disc_param_values = lasagne.layers.get_all_param_values(disc_final)\ndisc_param_dictionary = dict(\n    params = disc_param_values,\n    CHARS_VALID = 
CHARS_VALID, \n CHAR_TO_IX = CHAR_TO_IX,\n IX_TO_CHAR = IX_TO_CHAR,\n )\n#pickle.dump(disc_param_dictionary, open('../data/RNN/disc_trained.pkl','w'), protocol=pickle.HIGHEST_PROTOCOL)", "Load pretrained weights into network", "disc_param_dictionary = pickle.load(open('../data/RNN/disc_trained_64x310k.pkl', 'r'))\nlasagne.layers.set_all_param_values(disc_final, disc_param_dictionary['params'])", "Check that the Discriminator Network 'works'", "test_text_list = [\"shape\", \"shast\", \"shaes\", \"shafg\", \"shaqw\"]\ntest_text_list = [\"opposite\", \"aposite\", \"apposite\", \"xposite\", \"rrwqsite\", \"deposit\", \"idilic\", \"idyllic\"]\n\ndisc_input_values, disc_mask_values = prep_batch_for_network(test_text_list)\n\ndisc_output_, = disc_predict(disc_input_values, disc_mask_values)\n\nfor i,v in enumerate(disc_output_.tolist()):\n print(\"%s : %5.2f%%\" % ((test_text_list[i]+' '*20)[:20], v[0]*100.))", "Create a Generative network\nNext, let's build an RNN that produces text, and train it using (a) a pure dictionary look-up, and (b) the correctness signal from the Discriminator above.\nPlan of attack : \n\nCreate a GRU that outputs a character probability distribution for every time step\nRun the RNN several times :\neach time is an additional character input longer \nwith the next character chosen according to the probability distribution given\nand then re-run with the current input words (up to that point)\nStop adding characters when they've all reached 'space'\n\nThis seems very inefficient (since the first RNN steps are being run multiple times on the same starting letters), but is the same as in https://github.com/Lasagne/Recipes/blob/master/examples/lstm_text_generation.py", "# Let's pre-calculate the logs of the bigram frequencies, since they may be mixed in below\nbigram_min_freq = 1e-10 # To prevent underflow in log...\nbigram_freq_log = np.log( bigram_freq + bigram_min_freq ).astype('float32')\n\n# Symbolic variables for input. 
In addition to the usual features and target,\ngen_input_sym = theano.tensor.ftensor3()\ngen_mask_sym = theano.tensor.imatrix()\n\ngen_words_target_sym = theano.tensor.imatrix() # characters generated (as character indices)\n\n# probabilities of being from the dictionary (i.e. a single column matrix)\ngen_valid_target_sym = theano.tensor.fmatrix( )\n\n# This is a single mixing parameter (0.0 = pure RNN, 1.0=pure Bigram)\ngen_bigram_overlay = theano.tensor.fscalar()\n\n# This is 'current' since it reflects the bigram field as far as it is known during the call\ngen_bigram_freq_log_field = theano.tensor.ftensor3()\n\ngen_input = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\ngen_mask = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\n\n#gen_rnn1_t0 = lasagne.layers.InputLayer( (None, RNN_HIDDEN_SIZE) ) # batch_size, RNN_hidden_size=chars_size\n\n#n_batch, n_time_steps, n_features = gen_input.input_var.shape\nn_batch, n_time_steps, n_features = gen_input_sym.shape\n\ngen_rnn1 = lasagne.layers.GRULayer(gen_input,\n                                   num_units=RNN_HIDDEN_SIZE,\n                                   gradient_steps=-1,\n                                   grad_clipping=GRAD_CLIP_BOUND,\n                                   #hid_init=disc_rnn1_t0,\n                                   hid_init=lasagne.init.Normal(),\n                                   learn_init=True,\n                                   mask_input=gen_mask,\n                                   only_return_final=False, # Need all of the output states\n                                   )\n\n# Before the decoder layer, we need to reshape the sequence into the batch dimension,\n# so that timesteps are decoded independently.\ngen_reshape = lasagne.layers.ReshapeLayer(gen_rnn1, (-1, RNN_HIDDEN_SIZE) )\n\ngen_prob_raw = lasagne.layers.DenseLayer(gen_reshape, \n                                         num_units=CHARS_SIZE, \n                                         nonlinearity=lasagne.nonlinearities.linear # No squashing (yet)\n                                         )\n\ngen_prob = lasagne.layers.ReshapeLayer(gen_prob_raw, (-1, n_time_steps, CHARS_SIZE))\n\ngen_prob_theano = lasagne.layers.get_output(gen_prob, {\n        gen_input: gen_input_sym, \n        gen_mask: gen_mask_sym, \n    })\n\ngen_prob_mix = 
gen_bigram_overlay*gen_bigram_freq_log_field + (1.0-gen_bigram_overlay)*gen_prob_theano\n\ngen_prob_mix_flattened = theano.tensor.reshape(gen_prob_mix, (-1, CHARS_SIZE))\ngen_prob_softmax_flattened = theano.tensor.nnet.nnet.softmax(gen_prob_mix_flattened)\n\n#gen_prob_final = lasagne.layers.SliceLayer(gen_prob_raw, indices=(-1), axis=1)\n\n# Finally, the output stage - this is for the training (over all the letters in the words)\n#gen_output = gen_prob_softmax_flattened\n\n# And for prediction (which is done incrementally, adding one letter at a time)\ngen_output_last = gen_prob_softmax_flattened.reshape( (-1, n_time_steps, CHARS_SIZE) )[:, -1]\n\n# The generative network is trained by encouraging the outputs across time to match the given sequence of letters\n\n# We flatten the sequence into the batch dimension before calculating the loss\n#def gen_word_cross_ent(net_output, targets):\n# preds_raw = theano.tensor.reshape(net_output, (-1, CHARS_SIZE))\n# preds_softmax = theano.tensor.nnet.nnet.softmax(preds_raw)\n# targets_flat = theano.tensor.flatten(targets)\n# cost = theano.tensor.nnet.categorical_crossentropy(preds_softmax, targets_flat)\n# return cost\n\ntargets_flat = theano.tensor.flatten(gen_words_target_sym)\ngen_cross_entropy_flat = theano.tensor.nnet.categorical_crossentropy(gen_prob_softmax_flattened, targets_flat)\ngen_cross_entropy = theano.tensor.reshape(gen_cross_entropy_flat, (-1, n_time_steps) )\n\ngen_loss_weighted = theano.tensor.dot( gen_valid_target_sym.T, gen_cross_entropy )\ngen_loss = gen_loss_weighted.mean()\n\n# For stability during training, gradients are clipped and a total gradient norm constraint is also used\n#MAX_GRAD_NORM = 15\n\ngen_predict = theano.function([gen_input_sym, \n gen_bigram_overlay, gen_bigram_freq_log_field, \n gen_mask_sym], [gen_output_last])\n\ngen_params = lasagne.layers.get_all_params(gen_prob, trainable=True)\n\ngen_grads = theano.tensor.grad(gen_loss, gen_params)\n#gen_grads = [theano.tensor.clip(g, 
-GRAD_CLIP_BOUND, GRAD_CLIP_BOUND) for g in gen_grads]\n#gen_grads, gen_norm = lasagne.updates.total_norm_constraint( gen_grads, MAX_GRAD_NORM, return_norm=True)\n\ngen_updates = lasagne.updates.adam(gen_grads, gen_params)\n\ngen_train = theano.function([gen_input_sym, \n gen_bigram_overlay, gen_bigram_freq_log_field, \n gen_words_target_sym, gen_valid_target_sym, \n gen_mask_sym],\n [gen_loss],\n updates=gen_updates,\n )\n\ngen_debug = theano.function([gen_input_sym, \n gen_bigram_overlay, gen_bigram_freq_log_field, \n gen_words_target_sym, gen_valid_target_sym, \n gen_mask_sym],\n [gen_cross_entropy], \n on_unused_input='ignore'\n )\nprint(\"Generator network functions defined\")", "Use the Generative Network to create sample words\nThe network above can be used to generate text...\nThe following set-up allows for the output of the RNN at each timestep to be mixed with the letter frequency that the bigram model would suggest - in a proportion bigram_overlay which can vary from 0 (being solely RNN derived) to 1.0 (being solely bigram frequencies, with the RNN output being disregarded). 
\nThe input is a 'random field' matrix that is used to choose each letter in each slot from the generated probability distribution.\nOnce a space is output for a specific word, then it stops being extended (equivalently, the mask is set to zero going forwards).\nOnce spaces have been observed for all words (or the maximum length reached), the process ends, and a list of the created words is returned.", "def generate_rnn_words(random_field, bigram_overlay=0.0):\n    batch_size, max_word_length = random_field.shape\n    \n    idx_spc = CHAR_TO_IX[' ']\n    def append_indices_as_chars(words_current, idx_list):\n        for i, idx in enumerate(idx_list):\n            if idx == idx_spc:\n                pass # Words end at space\n                #words_current[i] += 'x'\n            else:\n                words_current[i] += IX_TO_CHAR[idx]\n        return words_current\n    \n    # Create a 'first character' by using the bigram transitions from 'space' (this is fair)\n    idx_initial = [ np.searchsorted(bigram_freq_cum[idx_spc], random_field[i, 0]) for i in range(batch_size) ]\n\n    bigram_freq_log_field = np.zeros( (batch_size, max_word_length, CHARS_SIZE), dtype='float32')\n    bigram_freq_log_field[:,0] = bigram_freq_log[ np.array(idx_initial) , :]\n    \n    words_current = [ '' for _ in range(batch_size) ]\n    words_current = append_indices_as_chars(words_current, idx_initial)\n    \n    col = 1\n    while True:\n        gen_input_values, gen_mask_values = prep_batch_for_network(words_current)\n        #print(gen_mask_values[:,-1])\n        #gen_out_, = gen_predict(gen_input_values, gen_mask_values)\n\n        if gen_input_values.shape[1]<col: # Early termination\n            print(\"Early termination\")\n            col -= 1\n            break\n        \n        #print(gen_input_values.shape, gen_mask_values.shape, bigram_freq_log_field.shape, col)\n        probs, = gen_predict(gen_input_values, bigram_overlay, bigram_freq_log_field[:,0:col], gen_mask_values)\n        #print(probs[0])\n        \n        # This output is the final probability[CHARS_SIZE], so let's cumsum it, etc.\n        probs_cum = probs.cumsum(axis=1)\n        \n        idx_next = [ # Only add extra letters if we haven't already passed a 
space (i.e. mask[-1]==0)\n idx_spc if gen_mask_values[i,-1]==0 else np.searchsorted(probs_cum[i], random_field[i, col]) \n for i in range(batch_size) \n ]\n \n words_current = append_indices_as_chars(words_current, idx_next)\n \n words_current_max_length = np.array( [ len(w) for w in words_current ]).max()\n \n # If the words have reached the maximum length, or we didn't extend any of them...\n if words_current_max_length>=max_word_length: # Finished \n col += 1\n break\n \n # Guarded against overflow on length...\n bigram_freq_log_field[:, col] = bigram_freq_log[ np.array(idx_next) , :]\n col += 1\n\n return words_current, bigram_freq_log_field[:,0:col]\n\ndef view_rnn_generator_sample_output(bigram_overlay=0.9):\n # Create a probability distribution across all potential positions in the output 'field'\n random_field = np.random.uniform( size=(BATCH_SIZE, WORD_LENGTH_MAX) )\n\n gen_words_output, _underlying_bigram_field = generate_rnn_words(random_field, bigram_overlay=bigram_overlay)\n\n print( '\\n'.join(gen_words_output))\n #print(_underlying_bigram_field)\n\nview_rnn_generator_sample_output(bigram_overlay=0.0)\n\nview_rnn_generator_sample_output(bigram_overlay=0.9)", "Remember the initial (random) Network State\nThis will come in handy when we need to reset the network back to 'untrained' later.", "gen_param_values_initial = lasagne.layers.get_all_param_values(gen_prob)", "Now, train the Generator RNN based on the Dictionary itself\nOnce we have an output word, let's reward the RNN based on a specific training signal. 
We'll encapsulate the training in a function that takes the input signal as a parameter, so that we can try other training schemes (later).", "def is_good_output_dictionary(output_words):\n    return np.array(\n        [ (1.0 if w in wordset else 0.0) for w in output_words ],\n        dtype='float32'\n    )\n\nt0, iterations_complete = time.time(), 0\ndef reset_generative_network():\n    global t0, iterations_complete\n    t0, iterations_complete = time.time(), 0\n    lasagne.layers.set_all_param_values(gen_prob, gen_param_values_initial)\n\ndef prep_batch_for_network_output(mask_values, batch_of_words):\n    output_indices = np.zeros(mask_values.shape, dtype='int32')\n\n    for i, word in enumerate(batch_of_words):\n        word_shifted = word[1:]+' '\n        for j, c in enumerate(word_shifted):\n            output_indices[i,j] = CHAR_TO_IX[ c ]\n\n    return output_indices\n\ndef train_generative_network(is_good_output_function=is_good_output_dictionary, epochs=10*1000, bigram_overlay=0.0):\n    if bigram_overlay>=1.0: \n        print(\"Cannot train with pure bigrams...\")\n        return\n    \n    global t0, iterations_complete\n    t1, iterations_recent = time.time(), iterations_complete\n    for epoch_i in range(epochs):\n        random_field = np.random.uniform( size=(BATCH_SIZE, WORD_LENGTH_MAX) )\n\n        gen_words_output, underlying_bigram_field = generate_rnn_words(random_field, bigram_overlay=bigram_overlay)\n        #print(gen_words_output[0]) \n        #print(underlying_bigram_field[0])\n        \n        # Now, create a training set of input -> output, coupled with an intensity signal\n        # first the step-by-step network inputs\n        gen_input_values, gen_mask_values = prep_batch_for_network(gen_words_output)\n        \n        # now create step-by-step network outputs (strip off first character, add spaces) as *indices*\n        gen_output_values_int = prep_batch_for_network_output(gen_mask_values, gen_words_output)\n        #print(gen_output_values_int.shape, underlying_bigram_field.shape)\n        #print(gen_output_values_int[0]) # makes sense\n        \n        # And, since we have a set of words, we can also determine their 
'goodness'\n is_good_output = is_good_output_function(gen_words_output)\n #print(is_good_output[0]) Starts at all zero. i.e. the word[0] is bad\n\n # This looks like it is the wrong way 'round...\n target_valid_row = -(np.array(is_good_output) - 0.5)\n \n ## i.e. higher values for more-correct symbols : This goes -ve, and wrong, quickly\n #target_valid_row = (np.array(is_good_output) - 0.5) \n \n #target_valid_row = np.ones( (gen_mask_values.shape[0],), dtype='float32' )\n target_valid = target_valid_row[:, np.newaxis]\n #print(target_valid.shape)\n \n if False:\n # Now debug the generator RNN\n gen_debug_, = gen_debug(gen_input_values, \n bigram_overlay, underlying_bigram_field, \n gen_output_values_int, target_valid, \n gen_mask_values)\n print(gen_debug_.shape)\n print(gen_debug_[0])\n #return\n \n # Now train the generator RNN\n gen_loss_, = gen_train( gen_input_values, \n bigram_overlay, underlying_bigram_field, \n gen_output_values_int, target_valid, \n gen_mask_values)\n #print(gen_loss_)\n # Hmm - this loss is ~ a character-level loss, and isn't comparable to a word-level score, \n # which is a pity, since the 'words' seem to get worse, not better...\n \n iterations_complete += 1\n\n if iterations_complete % 10 == 0:\n secs_per_batch = float(time.time() - t1)/ (iterations_complete - iterations_recent)\n eta_in_secs = secs_per_batch*(epochs-epoch_i)\n print(\"Iteration {:5d}, loss_train: {:.2f} word-score: {:.2f}% ({:.1f}s per 1000 batches) eta: {:.0f}m{:02.0f}s\".format(\n iterations_complete, float(gen_loss_), \n float(is_good_output.mean())*100., \n secs_per_batch*1000., np.floor(eta_in_secs/60), np.floor(eta_in_secs % 60), )\n )\n print( ' '.join(gen_words_output[:10]) )\n #print('Iteration {}, output: {}'.format(iteration, disc_output_, )) # , output: {}\n t1, iterations_recent = time.time(), iterations_complete\n\n print('Iteration {}, ran in {:.1f}sec'.format(iterations_complete, float(time.time() - t0)))\n\n#theano.config.exception_verbosity='high' # 
... a little pointless with RNNs\n# See: http://deeplearning.net/software/theano/tutorial/debug_faq.html\nreset_generative_network()\n\ntrain_generative_network(is_good_output_function=is_good_output_dictionary, epochs=1*1000, bigram_overlay=0.9)", "How are we doing?", "view_rnn_generator_sample_output(bigram_overlay=0.9)", "Use training signal from Discriminator", "#def is_good_output_dictionary(output_words):\n# return np.array(\n# [ (1.0 if w in wordset else 0.0) for w in output_words ],\n# dtype='float32'\n# )\n\ndef is_good_output_discriminator(output_words):\n disc_input_values, disc_mask_values = prep_batch_for_network(output_words)\n disc_output_, = disc_predict(disc_input_values, disc_mask_values)\n \n return disc_output_.reshape( (-1,) )\n\nreset_generative_network()\ntrain_generative_network(is_good_output_function=is_good_output_discriminator, epochs=1*1000, bigram_overlay=0.9)\n#train_generative_network(is_good_output_function=is_good_output_dictionary, epochs=1*1000, bigram_overlay=0.9)", "How are we doing?", "view_rnn_generator_sample_output(bigram_overlay=0.9)", "Hmmmm\nExercises\n\nMake the above work...\nTry the Indian Names Corpus" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dgergel/VIC
samples/notebooks/example_plotting_vic_outputs.ipynb
gpl-2.0
[ "Tutorial: Plotting VIC Model Output\nThis Jupyter Notebook outlines one approach to plotting VIC output from the classic and image drivers. The tools used here are all freely available.", "%matplotlib inline\n\nimport pandas as pd\nimport xarray as xr\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\n\n# input files for example: \nasci_fname = '/Users/jhamman/workdir/VIC_tests_20160531/examples/Example-Classic-Stehekin-fewb/results/fluxes_48.1875_-120.6875.txt'\nnc_fname = '/Users/jhamman/workdir/VIC_tests_20160331/examples/Example-Image-Stehekin-base-case/results/Stehekin.history.nc'", "Plotting Classic Driver Output\nReading VIC ASCII data:\nWe'll use pandas to parse the ASCII file.", "# Use the pandas read_table function to read/parse the VIC output file\ndf = pd.read_table(asci_fname, comment='#', sep=r\"\\s*\", engine='python', \n parse_dates=[[0, 1, 2]], index_col='YEAR_MONTH_DAY')\ndf.head()", "Plot 1: Time Series of Classic Driver Variables\nHere we'll use pandas' built in plotting to plot 3 of the variables in the dataframe (df).", "# Select the precipitation, evapotranspiration, and runoff variables and plot their timeseries.\ndf[['OUT_PREC', 'OUT_EVAP', 'OUT_RUNOFF']].plot()", "Plotting Image Driver Output\nThe image driver outputs netCDF files. 
Here we'll use the xarray package to open the dataset, and make a few plots.", "# Open the dataset\nds = xr.open_dataset(nc_fname)\nds", "Plot 2: Time slice of image driver output\nQuick and simple, select a time slice of the EVAP variable and plot it.", "\nds['OUT_EVAP'].sel(time='1949-01-04-00').plot()", "Plot 3: Multiple time slices of image driver output\nXarray allows you to plot multiple time periods at once, here we plot the daily average SWE.", "ds['OUT_SWE'].resample('1D', dim='time', how='mean').plot(col='time', col_wrap=4, levels=10)", "Plot 4: Multiple time slices of 4d image driver output\nFor 4d variables, we can again use the xarray facet grid, now time is along the x axis and soil layer is along the y axis", "ds['OUT_SOIL_TEMP'].sel(time='1949-01-04').resample(\n '3h', dim='time', how='mean').plot(\n col='time', row='nlayer', levels=10)", "Plot 5: Using Cartopy to project VIC Image Driver Output\nOften, we want to plot maps of VIC output that are georeferenced and include things like coastlines and political boundaries. 
Here we use xarray plotting along with cartopy to plot the temporal mean evapotranspiration.", "fig, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.Mercator()))\n\nds['OUT_EVAP'].mean(dim='time').plot.pcolormesh('lon', 'lat', ax=ax,\n                                                levels=10, vmin=0, vmax=0.01,\n                                                transform=ccrs.PlateCarree())\n\n# Configure the map\ngl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,\n                  linewidth=2, color='gray', alpha=0.5, linestyle='--')\nax.set_extent([-125, -118, 47, 49], ccrs.Geodetic())\nax.coastlines('10m')\ngl.xlabels_top = False\ngl.ylabels_right = False", "Plot 6: Plotting domain mean timeseries from VIC Image Driver Output\nHere, we'll take the domain mean of the downward shortwave radiation and will use pandas to plot the data.", "ds['OUT_SWDOWN'].mean(dim=('lon', 'lat')).to_dataframe().plot()", "Plot 7: Plotting timeseries at a point from VIC Image Driver Output\nHere, we'll take a single point of the surface albedo variable and will use pandas to plot the data.", "ds['OUT_ALBEDO'].isel(lat=2, lon=2).plot()\nplt.ylim(0, 1)\n\nplt.close('all')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sbussmann/sensor-fusion
Code/Phone Position Classification Exercise.ipynb
mit
[ "Goal: Classify smartphone location as driver-side or passenger-side based on sensor data", "import pandas as pd\nfrom scipy.ndimage import gaussian_filter\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.metrics import accuracy_score\nimport numpy as np\n%matplotlib inline", "Load the processed sensor data for car trip (see \"Process Smartphone Sensor Data\" jupyter notebook). On this trip, I drove my car from home to Censio and back and used SensorLog on my iPhone to track the trip. The total time for the trip was about 15 minutes.", "df = pd.read_csv('../Data/shaneiphone_exp2_processed.csv', index_col='DateTime')\n\n# Use only userAcceleration and gyroscope data, since these features are expected to generalize well.\nxyz = ['X', 'Y', 'Z']\nmeasures = ['userAcceleration', 'gyroscope']\nbasefeatures = [i + j for i in measures for j in xyz]\nfeatures = [i + j for i in measures for j in xyz]\n\n# Add Gaussian smoothed features\nsmoothfeatures = []\nfor i in features:\n df[i + 'sm'] = gaussian_filter(df[i], 3)\n df[i + '2sm'] = gaussian_filter(df[i], 100)\n smoothfeatures.append(i + 'sm')\n smoothfeatures.append(i + '2sm')\nfeatures.extend(smoothfeatures)\n\n# Generate Jerk signal\njerkfeatures = []\nfor i in features:\n diffsignal = np.diff(df[i])\n df[i + 'jerk'] = np.append(0, diffsignal)\n jerkfeatures.append(i + 'jerk')\nfeatures.extend(jerkfeatures)\n\n# assign class labels\nclass0 = (df.index > '2015-08-25 14:35:00') & \\\n (df.index < '2015-08-25 14:42:00')\n\nclass1 = (df.index > '2015-08-25 14:43:00') & \\\n (df.index < '2015-08-25 14:48:00')\n\ndf['class'] = -1\ndf['class'][class0] = 0\ndf['class'][class1] = 1\n\n# separate into quarters for train and validation\nq1 = df[(df.index <= '2015-08-25 14:38:30') & \n (df.index > '2015-08-25 14:33:00')]\nq2 = df[(df.index > '2015-08-25 14:38:30') & \n (df.index <= '2015-08-25 14:42:00')]\nq3 = df[(df.index > '2015-08-25 14:43:00') & \n 
(df.index <= '2015-08-25 14:45:30')]\nq4 = df[(df.index > '2015-08-25 14:45:30') & \n (df.index <= '2015-08-25 14:48:00')]\ntraindf = pd.concat([q1, q3])\nvalidationdf = pd.concat([q2, q4])\n\n# check for NaNs in the dataframes\nprint(traindf.isnull().sum().sum())\nprint(validationdf.isnull().sum().sum())\n\n# drop NaNs\ntraindf = traindf.dropna()\nvalidationdf = validationdf.dropna()\n\n# Make the training and validation sets\nX_train = traindf[features].values\ny_train = traindf['class'].values\nX_test = validationdf[features].values\ny_test = validationdf['class'].values\n\n# train a random forest\nclf = RandomForestClassifier(n_estimators=200)\n\n# get the 5-fold cross-validation score\nscores = cross_val_score(clf, X_train, y_train, cv=5)\nprint(scores, scores.mean(), scores.std())\n\n# apply model to test set\nclf.fit(X_train, y_train)\npredict_y = clf.predict(X_test)\n\n# obtain accuracy score\ntestscore = accuracy_score(y_test, predict_y)\nprint(\"Accuracy score on test set: %6.3f\" % testscore)", "88% accuracy on the training set and 60% accuracy on the test set means we're overfitting the data. Accelerometer data should be the key here, since a phone on the driver side will experience a different centripetal acceleration than a phone on the passenger side during turns. But accelerometer data is super noisy!", "# Inspect feature importances\nfor i, ifeature in enumerate(features):\n print(ifeature + ': %6.4f' % clf.feature_importances_[i])", "The smoothed gyroscopeX data is the most useful feature. 
This is further confirmation of over-fitting, since this feature corresponds to pitch angle rotation which should be negligible in this dataset (the pitch of the car never changed significantly during the drive).", "# compare bus gyroscopeZ2sm and car gyroscopeZ2sm\n#q1['gyroscopeXsm'].plot(color='blue', figsize=(12,6), kind='hist', bins=40, alpha=0.4) # car\n#q3['gyroscopeXsm'].plot(color='green', kind='hist', bins=40, alpha=0.4) # bus\nq1['gyroscopeXsm'].plot(color='blue', figsize=(12,6)) # car\nq3['gyroscopeXsm'].plot(color='green') # bus", "This seems like an enormous difference in pitch angle and is really hard to understand. My guess is that it is somehow an artifact of the experimental setup. Perhaps the quaternion rotation of gyroscopeXYZ is not quite perfect.\nAnother interesting avenue to pursue is features in Fourier space", "# Generate Fourier Transform of features\nfftfeatures = []\nfor i in features:\n reals = np.real(np.fft.rfft(df[i]))\n imags = np.imag(np.fft.rfft(df[i]))\n complexs = [reals[0]]\n n = len(reals)\n if n % 2 == 0:\n complexs.append(imags[0])\n for j in range(1, n - 1):\n complexs.append(reals[j])\n complexs.append(imags[j])\n if len(df) > len(complexs):\n complexs.append(imags[j])\n df['f' + i] = complexs\n fftfeatures.append('f' + i)\nfeatures.extend(fftfeatures)\n\n# separate into quarters for train and validation\nq1 = df[(df.index <= '2015-08-25 14:38:30') & \n (df.index > '2015-08-25 14:33:00')]\nq2 = df[(df.index > '2015-08-25 14:38:30') & \n (df.index <= '2015-08-25 14:42:00')]\nq3 = df[(df.index > '2015-08-25 14:43:00') & \n (df.index <= '2015-08-25 14:45:30')]\nq4 = df[(df.index > '2015-08-25 14:45:30') & \n (df.index <= '2015-08-25 14:48:00')]\ntraindf = pd.concat([q1, q3])\nvalidationdf = pd.concat([q2, q4])\n\n# Make the training and validation sets\nX_train = traindf[fftfeatures].values\ny_train = traindf['class'].values\nX_test = validationdf[fftfeatures].values\ny_test = 
validationdf['class'].values\n\n# train a random forest\nclf = RandomForestClassifier(n_estimators=200)\n\n# get the 5-fold cross-validation score\nscores = cross_val_score(clf, X_train, y_train, cv=5)\nprint(scores, scores.mean(), scores.std())\n\n# apply model to test set\nclf.fit(X_train, y_train)\npredict_y = clf.predict(X_test)\n\n# obtain accuracy score\ntestscore = accuracy_score(y_test, predict_y)\nprint(\"Accuracy score on test set: %6.3f\" % testscore)", "Better accuracy on the test set: 73%. We are still overfitting here, since we got 92% accuracy on the training set. Using the Fourier domain seems to be helpful.", "# Inspect feature importances\nfor i, ifeature in enumerate(fftfeatures):\n print(ifeature + ': %6.4f' % clf.feature_importances_[i])", "There is no single feature that is particularly important. This helps me feel more confident that the results will generalize well to new samples. 73% accuracy in 5 minute ride samples isn't too bad!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mdeff/ntds_2017
projects/reports/movie_success/TreatGenreV2.ipynb
mit
[ "Treat the genres of the movies\nAttribute a new number to each new genre and replace in the dataframe", "%matplotlib inline\n\nimport configparser\nimport os\n\nimport requests\nfrom tqdm import tqdm\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy import sparse, stats, spatial\nimport scipy.sparse.linalg\nfrom sklearn import preprocessing, decomposition\nimport librosa\nimport IPython.display as ipd\nimport json\nfrom imdb import IMDb\nimport tmdbsimple as tmdb\nfrom pygsp import graphs, filters, plotting\n\nplt.rcParams['figure.figsize'] = (17, 5)\nplotting.BACKEND = 'matplotlib'\n\ndf = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv')\n\nprint('There are {} movies'.format(len(df)))\n\ndf['genres'][1]\n\ndf.head()\n#df.iloc[100:150]", "1. Parsing example", "df['genres'] = df['genres'].str.replace('|', ',') \n\ni = 1\nnewgenres = df['genres'][i].split(\",\")\n\nprint(newgenres)\nprint(len(newgenres))", "2. Determine different genres\nDetermine the number of different genres and what they are.", "Diffgenres = [];\ngenres = {}\nmovies_dic = {}\n\nfor i in range(0, len(df)):\n \n movies_dic[i] = df['id'][i]\n \n if df['genres'][i] == 'NaN':\n newgenres = []\n else:\n newgenres = df['genres'][i].split(\",\")\n \n genres.setdefault(i, [])\n \n for j in range (0, len(newgenres)):\n Diffgenres.append(newgenres[j])\n genres[i].append(newgenres[j])\n\nDiffgenres = set(Diffgenres)\nDiffgenres = list(Diffgenres)\n\nprint('There are {} different genres'.format(len(Diffgenres)))\nprint(Diffgenres)\n\ndf.head()", "3. Create vector of genres for each movie and a dataframe\nBinary vector where the elements are 1 if the film has the genre corresponding to the index of the film. 
Otherwise the elements are zero.\nQuick example:", "print(genres[0][0])\nlen(genres[0][0])\n\nvector = (genres[0][0] == np.array(Diffgenres)).astype(int)\nprint(vector)\n\ngenreArray = np.ndarray(shape=(len(df), len(Diffgenres)), dtype=int)\n\nfor i in range(0, len(df)):\n vector = np.zeros(len(Diffgenres))\n \n for j in range(0, len(genres[i])):\n vector += (genres[i][j] == np.array(Diffgenres)).astype(int)\n \n genreArray[i] = vector\n\nprint(genreArray[0])\nprint(genreArray.size)", "Observe the result in the dataframe", "Genres = pd.DataFrame(genreArray, columns=Diffgenres)\nGenres.head(10)", "Visual example of the genres", "#Genres.iloc[120:150]\n\nplt.spy(Genres[120:150])", "3.1 Determine most frequent genres", "freqGenre = np.ndarray(shape=(1, len(Diffgenres)), dtype=int)\n\nfor i in range(0, len(Diffgenres)):\n freqGenre[0][i] = sum(Genres[Diffgenres[i]] == 1)", "Display of the number of times a genre appears in the dataframe", "NbGenre = pd.DataFrame(freqGenre, columns=Diffgenres)\nNbGenre\n\nNbGenre.to_csv('Saved_Datasets/NbGenre.csv', index=False)\n\nplt.bar(Diffgenres, freqGenre[0], align='center');\nplt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right');\nplt.xlabel('Genres');\nplt.ylabel('Counts');\nplt.savefig('images/GenreFreq.png', dpi =300, bbox_inches='tight')", "3.2 Determine genres that are most commonly associated with each other\nWe only consider two genres together at the moment", "assosGenre = np.ndarray(shape=(len(Diffgenres), len(Diffgenres)), dtype=int)\n\nfor i in range(0, len(Diffgenres)):\n for j in range(0, len(Diffgenres)):\n if i != j:\n assosGenre[i][j] = sum((Genres[Diffgenres[i]] == 1) & (Genres[Diffgenres[j]] == 1))\n else:\n assosGenre[i][j] = 0\n\n#ensure the matrix is symmetric\nassosGenreSym = assosGenre.transpose() > assosGenre\nassosGenre = assosGenre - assosGenre*assosGenreSym + assosGenre.transpose()*assosGenreSym\n\nplt.spy(assosGenre)\n\nNbGenreAssos = pd.DataFrame(assosGenre, 
columns=Diffgenres, index = Diffgenres)\nNbGenreAssos\n\nNbGenreAssos.to_csv('Saved_Datasets/NbGenreAssos.csv', index=False)", "Determining ranking of genre associations\n1 indicates most frequently associated and 19 is least frequently associated. \n$\\textbf{Reminder:}$ This is only the case of our dataset and may not represent reality.", "assosRank = {} \nrank = np.argsort(-assosGenre, axis=1) #negative for ascending order\n\nDiffgenres[rank[0][1]]\n\nfor i in range(0, len(Diffgenres)):\n for j in range(0, len(Diffgenres)):\n assosRank.setdefault(Diffgenres[i], [])\n \n #Only if not comparing with the same genre\n if Diffgenres[i] != Diffgenres[rank[i][j]]:\n assosRank[Diffgenres[i]].append(Diffgenres[rank[i][j]])", "Display of the ranking of the other genres with which each genre is most often associated", "ranking = np.linspace(1, len(Diffgenres)-1, num=len(Diffgenres)-1, endpoint=True, retstep=False, dtype=int)\n\nRankdf = pd.DataFrame(assosRank, index=ranking)\nRankdf\n\nRankdf.to_csv('Saved_Datasets/GenreRanking.csv', index=False)", "3.3 Determine how many films are successful or are non-successful depending on the genre", "genreArray[0]\n\ngenreSuccess = np.zeros(shape=(1, len(Diffgenres)), dtype=float)\ngenreSuccessPc = np.zeros(shape=(1, len(Diffgenres)), dtype=float)\n\nfor i in range(0, len(Diffgenres)):\n for j in range(0, len(df)):\n if genreArray[j][i] == 1:\n if df['success'][j] == 1:\n genreSuccess[0][i] += 1\n \n genreSuccessPc[0][i] = (genreSuccess[0][i]/freqGenre[0][i])*100\n\nplt.bar(Diffgenres, genreSuccessPc[0], align='center');\nplt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right');\nplt.xlabel('Genres');\nplt.ylabel('Success rate [%]');", "Don't forget that the number of successful movies is not equal to the sum of the success rate of the genres since movies often have multiple genres.", "print(sum(sum(genreSuccess)))\nprint('The number of films that are successful: {}'.format(len(df[df['success'] == 
1])))\nprint('The number of films that are unsuccessful: {}'.format(len(df[df['success'] == 0])))", "4. Create a similarity graph between films depending on genre\nIn this case, the similarity is the number of genres that the movie has in common:\n$\\mathbf{W}(u,v) = sum(u \\cdot v) \\ \\in [0, 20]$", "weights = np.ndarray(shape=(len(df), len(df)), dtype=int)\n\nweights = genreArray @ genreArray.T\n\n#fill the diagonal values to zero, i.e. no self-connections\nnp.fill_diagonal(weights, 0)\n\nplt.spy(weights)\n\nplt.hist(weights[weights > 0].reshape(-1), bins=50);\n\nprint('There are {} weights equal to zero'.format(np.sum(weights == 0)))\nprint('There are {} weights equal to one'.format(np.sum(weights == 1)))\nprint('There are {} weights equal to seven'.format(np.sum(weights == 7)))\n\nmeanW = weights.mean()\nmaxW = weights.max()\nminW = weights.min()\n\nprint('The mean value of the similarity in terms of genre is: {}'.format(meanW))\nprint('The max value of the similarity is: {}'.format(maxW))\nprint('The min value of the similarity is: {}'.format(minW))", "4.1 Normalization of the matrix", "print(genreArray[1])\nprint(sum(genreArray[1]))\n\nweightsNorm = np.ndarray(shape=(len(df), len(df)), dtype=float)\nlengths = np.ndarray(shape=(1, 2), dtype=int)\nlenMax = 0;\n\nfor i in range(0, len(weights)):\n for j in range(0, len(weights)):\n if i!=j: \n lengths = [sum(genreArray[i]), sum(genreArray[j])]\n weightsNorm[i][j] = (weights[i][j])/max(lengths) \n\nnp.fill_diagonal(weightsNorm, 0)\n\nsigma = np.std(weights)\nprint(sigma)\nmu = np.mean(weights)\nprint(mu)\n#1/(sigma*math.sqrt(2*math.pi))*\nWgauss = np.exp(-((weights-mu)**2)/(2*sigma**2))\n\n#fill the diagonal values to zero, i.e. 
no self-connections\nnp.fill_diagonal(Wgauss, 0)", "Maximum normalization", "plt.spy(weightsNorm)\n\nplt.hist(weightsNorm.reshape(-1), bins=50);\n\nprint('The mean value is: {}'.format(weightsNorm.mean()))\nprint('The max value is: {}'.format(weightsNorm.max()))\nprint('The min value is: {}'.format(weightsNorm.min()))", "Plot the degree distribution", "degrees = np.zeros(len(weightsNorm)) \n\n#reminder: the degrees of a node for a weighted graph are the sum of its weights\n\nfor i in range(0, len(weightsNorm)):\n degrees[i] = sum(weightsNorm[i])\n\nplt.hist(degrees, bins=50);\n\nprint('The mean value is: {}'.format(degrees.mean()))\nprint('The max value is: {}'.format(degrees.max()))\nprint('The min value is: {}'.format(degrees.min()))", "Gaussian normalization", "plt.spy(Wgauss)", "4.3 Save the dataset", "NormW = pd.DataFrame(weightsNorm)\nNormW.head()\n\nNormW.to_csv('Saved_Datasets/NormalizedGenreW.csv', index=False)", "5. Graph Laplacian and Embedding for maximum normalization\n5.1 Compute the graph Laplacian\nWith pygsp", "G = graphs.Graph(weightsNorm)\nG.compute_laplacian('normalized')", "Normally", "#reminder: L = D - W for weighted graphs\nlaplacian = np.diag(degrees) - weightsNorm\n\n#computation of the normalized Laplacian\nlaplacian_norm = scipy.sparse.csgraph.laplacian(weightsNorm, normed = True)\n\nplt.spy(laplacian_norm);\n\nlaplacian_norm = sparse.csr_matrix(laplacian_norm)", "5.2 Compute the Fourier basis\nWith pygsp", "G.compute_fourier_basis(recompute=True)\nplt.plot(G.e[0:10]);", "Normally", "eigenvalues, eigenvectors = sparse.linalg.eigsh(laplacian_norm, k = 10, which = 'SM') \n\nplt.plot(eigenvalues, '.-', markersize=15);\nplt.xlabel('')\nplt.ylabel('Eigenvalues')\nplt.show()", "5.3 Graph embedding", "genres = preprocessing.LabelEncoder().fit_transform(df['success'])\n\nx = eigenvectors[:, 1] \ny = eigenvectors[:, 2] \nplt.scatter(x, y, c=genres, cmap='RdBu', alpha=0.5);\n\nG.set_coordinates(G.U[:, 1:3])\nG.plot()\n\nG.plot_signal(genres, 
vertex_size=20)", "6. Graph Laplacian and Embedding for Gaussian normalization\n6.1. Sparsification of the graph\nKeep only a certain number of the weights", "NEIGHBORS = 300\n\n#sort the order of the weights\nsort_order = np.argsort(Wgauss, axis = 1)\n\n#declaration of a sorted weight matrix\nsorted_weights = np.zeros((len(Wgauss), len(Wgauss)))\n\nfor i in range (0, len(Wgauss)): \n for j in range(0, len(Wgauss)):\n if (j >= len(Wgauss) - NEIGHBORS):\n #copy the k strongest edges for each node\n sorted_weights[i, sort_order[i,j]] = Wgauss[i,sort_order[i,j]]\n else:\n #set the other edges to zero\n sorted_weights[i, sort_order[i,j]] = 0\n\n#ensure the matrix is symmetric\nbigger = sorted_weights.transpose() > sorted_weights\nsorted_weights = sorted_weights - sorted_weights*bigger + sorted_weights.transpose()*bigger\n\nplt.spy(sorted_weights)\n\nplt.hist(sorted_weights.reshape(-1), bins=50);", "6.2. Save the sparsified dataset", "NormW = pd.DataFrame(sorted_weights)\nNormW.head()\n\nNormW.to_csv('Saved_Datasets/NormalizedGenreWSparse.csv', index=False)", "6.3. Laplacian and graph embedding\nWith pygsp", "G = graphs.Graph(sorted_weights)\nG.compute_laplacian('normalized')", "Other", "#reminder: L = D - W for weighted graphs\nlaplacian = np.diag(degrees) - sorted_weights\n\n#computation of the normalized Laplacian\nlaplacian_norm = scipy.sparse.csgraph.laplacian(sorted_weights, normed = True)\n\nplt.spy(laplacian_norm);\n\nG.compute_fourier_basis(recompute=True)\nplt.plot(G.e[0:10]);\n\nG.set_coordinates(G.U[:, 1:3])\nG.plot()\n\nG.plot_signal(genres, vertex_size=20)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
atlury/deep-opencl
DL0110EN/2.5.2_early_stopping_v2.ipynb
lgpl-3.0
[ "<a href=\"http://cocl.us/pytorch_link_top\">\n <img src=\"https://cocl.us/Pytorch_top\" width=\"750\" alt=\"IBM 10TB Storage\" />\n</a>\n<img src=\"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width=\"200\" alt=\"cognitiveclass.ai logo\" />\n<h1>Linear regression: Training and Validation Data</h1>\n\n<h2>Table of Contents</h2>\n<p>In this lab, you will perform early stopping and save the model that minimizes the total loss on the validation data for every iteration. <br><i>( <b>Note:</b> Early Stopping is a general term. We will focus on the variant where we use the validation data. You can also use a pre-determined number of iterations</i>. )</p>\n\n<ul>\n <li><a href=\"#Makeup_Data\">Make Some Data</a></li>\n <li><a href=\"#LR_Loader_Cost\">Create a Linear Regression Object, Data Loader and Criterion Function</a></li>\n <li><a href=\"#Stop\">Early Stopping and Saving the Model</a></li>\n <li><a href=\"#Result\">View Results</a></li>\n</ul>\n\n<p>Estimated Time Needed: <strong>15 min</strong></p>\n\n<hr>\n\n<h2>Preparation</h2>\n\nWe'll need the following libraries, and set the random seed.", "# Import the libraries and set random seed\n\nfrom torch import nn\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom torch import nn,optim\nfrom torch.utils.data import Dataset, DataLoader\n\ntorch.manual_seed(1)", "<!--Empty Space for separating topics-->\n\n<h2 id=\"Makeup_Data\">Make Some Data</h2>\n\nFirst let's create some artificial data, in a dataset class. The class will include the option to produce training data or validation data. 
The training data includes outliers.", "# Create Data Class\n\nclass Data(Dataset):\n \n # Constructor\n def __init__(self, train = True):\n if train == True:\n self.x = torch.arange(-3, 3, 0.1).view(-1, 1)\n self.f = -3 * self.x + 1\n self.y = self.f + 0.1 * torch.randn(self.x.size())\n self.len = self.x.shape[0]\n if train == True:\n self.y[50:] = 20\n else:\n self.x = torch.arange(-3, 3, 0.1).view(-1, 1)\n self.y = -3 * self.x + 1\n self.len = self.x.shape[0]\n \n # Getter\n def __getitem__(self, index): \n return self.x[index], self.y[index]\n \n # Get Length\n def __len__(self):\n return self.len", "We create two objects, one that contains training data and a second that contains validation data; we will assume the training data has the outliers.", "#Create train_data object and val_data object\n\ntrain_data = Data()\nval_data = Data(train = False)", "We overlay the training points in red over the function that generated the data. Notice the outliers are at x=-3 and around x=2", "# Plot the training data points\n\nplt.plot(train_data.x.numpy(), train_data.y.numpy(), 'xr')\nplt.plot(train_data.x.numpy(), train_data.f.numpy())\nplt.show()", "<!--Empty Space for separating topics-->\n\n<h2 id=\"LR_Loader_Cost\">Create a Linear Regression Class, Object, Data Loader, Criterion Function</h2>\n\nCreate linear regression model class.", "# Create linear regression model class\n\nfrom torch import nn\n\nclass linear_regression(nn.Module):\n \n # Constructor\n def __init__(self, input_size, output_size):\n super(linear_regression, self).__init__()\n self.linear = nn.Linear(input_size, output_size)\n \n # Prediction\n def forward(self, x):\n yhat = self.linear(x)\n return yhat", "Create the model object", "# Create the model object\n\nmodel = linear_regression(1, 1)", "We create the optimizer, the criterion function and a Data Loader object.", "# Create optimizer, cost function and data loader object\n\noptimizer = optim.SGD(model.parameters(), lr = 0.1)\ncriterion = 
nn.MSELoss()\ntrainloader = DataLoader(dataset = train_data, batch_size = 1)", "<!--Empty Space for separating topics-->\n\n<h2 id=\"Stop\">Early Stopping and Saving the Model</h2>\n\nRun several epochs of gradient descent and save the model that performs best on the validation data.", "# Train the model\n\nLOSS_TRAIN = []\nLOSS_VAL = []\nn=1;\nmin_loss = 1000\n\ndef train_model_early_stopping(epochs, min_loss):\n for epoch in range(epochs):\n for x, y in trainloader:\n yhat = model(x)\n loss = criterion(yhat, y)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n loss_train = criterion(model(train_data.x), train_data.y).data\n loss_val = criterion(model(val_data.x), val_data.y).data\n LOSS_TRAIN.append(loss_train)\n LOSS_VAL.append(loss_val)\n if loss_val < min_loss:\n value = epoch\n min_loss = loss_val\n torch.save(model.state_dict(), 'best_model.pt')\n\ntrain_model_early_stopping(20, min_loss)", "<!--Empty Space for separating topics-->\n\n<h2 id=\"Result\">View Results</h2>\n\nView the loss for every iteration on the training set and validation set.", "# Plot the loss\n\nplt.plot(LOSS_TRAIN, label = 'training loss')\nplt.plot(LOSS_VAL, label = 'validation loss')\nplt.xlabel(\"epochs\")\nplt.ylabel(\"Loss\")\nplt.legend(loc = 'upper right')\nplt.show()", "We will create a new linear regression object; we will use the parameters saved in the early stopping. 
The model must have the same input and output dimensions as the original model.", "# Create a new linear regression model object\n\nmodel_best = linear_regression(1, 1)", "Load the model parameters with <code>torch.load()</code>, then assign them to the object <code>model_best</code> using the method <code>load_state_dict</code>.", "# Assign the best model to model_best\n\nmodel_best.load_state_dict(torch.load('best_model.pt'))", "Let's compare the prediction from the model obtained using early stopping and the model derived from using the maximum number of iterations.", "plt.plot(model_best(val_data.x).data.numpy(), label = 'best model')\nplt.plot(model(val_data.x).data.numpy(), label = 'maximum iterations')\nplt.plot(val_data.y.numpy(), 'rx', label = 'true line')\nplt.legend()\nplt.show()", "We can see the model obtained via early stopping fits the data points much better. For more variations of early stopping see:\nPrechelt, Lutz.<i> \"Early stopping-but when?.\" Neural Networks: Tricks of the trade. Springer, Berlin, Heidelberg, 1998. 55-69</i>.\n<!--Empty Space for separating topics-->\n\n<a href=\"http://cocl.us/pytorch_link_bottom\">\n <img src=\"https://cocl.us/pytorch_image_bottom\" width=\"750\" alt=\"PyTorch Bottom\" />\n</a>\n<h2>About the Authors:</h2>\n\n<a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\">Joseph Santarcangelo</a> has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.\nOther contributors: <a href=\"https://www.linkedin.com/in/michelleccarey/\">Michelle Carey</a>, <a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>\n<hr>\n\nCopyright &copy; 2018 <a href=\"cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu\">cognitiveclass.ai</a>. 
This notebook and its source code are released under the terms of the <a href=\"https://bigdatauniversity.com/mit-license/\">MIT License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/pyquickhelper
_unittests/ut_helpgen/notebooks_svg/td1a_unit_test_ci.ipynb
mit
[ "1A.soft - Unit tests, setup and software engineering\nYou always check that a piece of code works when you write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything done around it to make sure it keeps running correctly.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\nfrom pyensae.graphhelper import draw_diagram", "A short story\nSuppose you have implemented three functions that depend on one another: function f3 uses functions f1 and f2.", "draw_diagram(\"blockdiag { f0 -> f1 -> f3; f2 -> f3;}\")", "Six months later, you create a function f5 that calls a function f4 and the function f2.", "draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color=\"red\"]; f4 -> f5 [color=\"red\"]; }')", "By the way, in doing so you modify function f2, and you have somewhat forgotten what function f3 did... In short, you do not know whether function f3 will be affected by the change introduced in function f2. This is the kind of problem you run into every day when writing software with several people over a long period of time. This notebook presents the classic building blocks used to ensure the robustness of a piece of software.\n\nunit tests\na source control tool\ncoverage computation\ncontinuous integration\nwriting a setup\nwriting the documentation\npublishing on PyPi\n\nWrite a function\nAny function that performs a computation, for example a function that solves a quadratic equation.", "def solve_polynom(a, b, c):\n # ....\n return None", "Write a unit test\nA unit test is a function that makes sure another function returns the expected result. The simplest approach is to use the standard unittest module and to leave notebooks in favor of files. 
Other alternatives include pytest and nose.\nCoverage\nCode coverage is the set of lines executed by the unit tests. It does not always mean those lines are correct, only that they were executed one or more times without raising an error. The simplest module is coverage. It produces reports such as: mlstatpy/coverage.\nCreate a GitHub account\nGitHub is a site that hosts the code of most open-source projects. If you do not have an account, create one; it is free for open-source projects. Then create a project and push your code to it. Your computer needs:\n\ngit\nGitHub Desktop\n\nYou can read GitHub Pour les Nuls : Pas de Panique, Lancez-Vous ! (Première Partie) and of course do plenty of internet searches.\nNote\nEverything you put on GitHub for an open-source project is publicly accessible. Make sure you do not put anything personal there. A GitHub account is also one of the first things a recruiter will look at.\nContinuous integration\nThe goal of continuous integration is to reduce the time between a modification and its move to production. Typically, a developer makes a change and a machine runs all the unit tests. From this we conclude that the software works from every angle, so it can safely be made available to users. To summarize, continuous integration consists in launching a battery of tests as soon as a modification is detected. If everything works, the software is built and ready to be shared, or deployed if it is a website.\nHere again, for open-source projects, some sites offer this service for free:\n\ntravis - Linux\nappveyor - Windows - one job at a time, no longer than one hour.\ncircle-ci - Linux and Mac OSX (paid)\nGitLab-ci\n\nExcept for GitLab-ci, these three services run the unit tests on machines hosted by each of these companies. 
You must register on the site, define a .travis.yml, .appveyor.yml or circle.yml file, then activate the project on the corresponding site. A few examples are available at pyquickhelper or scikit-learn. The file must be added to the project on GitHub and activated on the chosen continuous-integration site. The slightest modification will trigger a new build.\nMost sites allow you to insert a badge to signal that the build works.", "from IPython.display import Image\ntry:\n im = Image(\"https://travis-ci.com/sdpython/ensae_teaching_cs.png\")\nexcept TimeoutError:\n im = None\nim\n\nfrom IPython.display import SVG\ntry:\n im = SVG(\"https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg\")\nexcept TimeoutError:\n im = None\nim", "There are badges for just about everything.\nWrite a setup\nThe setup.py file determines how the python module must be installed for a user who did not develop it. How to build a setup: setup.\nWrite the documentation\nThe most commonly used tool is sphinx. Will you manage to use it?\nLast step: PyPi\nPyPi is a server that makes a module available to everyone. You just have to upload the module... Packaging and Distributing Projects or How to submit a package to PyPI. PyPi also allows badges.", "try:\n im = SVG(\"https://badge.fury.io/py/ensae_teaching_cs.svg\")\nexcept TimeoutError:\n im = None\nim" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/06_structured/labs/4_preproc.ipynb
apache-2.0
[ "<h1> Preprocessing using Dataflow </h1>\n\nThis notebook illustrates:\n<ol>\n<li> Creating datasets for Machine Learning using Dataflow\n</ol>\n<p>\nWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\npip install --user apache-beam[gcp]==2.16.0", "Run the command again if you get an oauth2client error.\nNote: You may ignore the following responses in the cell output above:\nERROR (in Red text) related to: witwidget-gpu, fairing\nWARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client\n<b>Restart</b> the kernel before proceeding further.\nMake sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.", "# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.1\n\nimport apache_beam as beam\nprint(beam.__version__)", "You may receive a UserWarning that the Apache Beam SDK for Python 3 is not yet fully supported. Don't worry about this.", "# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi", "<h2> Create ML dataset using Dataflow </h2>\nLet's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.\n\nIn this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. \nNote that after you launch this, the actual processing is happening on the cloud. 
Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.\n<p>\nIf you wish to continue without doing this step, you can copy my preprocessed output:\n<pre>\ngsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/\n</pre>\nBut if you do this, you also have to use my TensorFlow model since yours might expect the fields in a different order", "import datetime, os\n\ndef to_csv(rowdict):\n import hashlib\n import copy\n\n # TODO #1:\n # Pull columns from BQ and create line(s) of CSV input\n CSV_COLUMNS = None\n \n # Create synthetic data where we assume that no ultrasound has been performed\n # and so we don't know sex of the baby. Let's assume that we can tell the difference\n # between single and multiple, but that the errors rates in determining exact number\n # is difficult in the absence of an ultrasound.\n no_ultrasound = copy.deepcopy(rowdict)\n w_ultrasound = copy.deepcopy(rowdict)\n\n no_ultrasound['is_male'] = 'Unknown'\n if rowdict['plurality'] > 1:\n no_ultrasound['plurality'] = 'Multiple(2+)'\n else:\n no_ultrasound['plurality'] = 'Single(1)'\n\n # Change the plurality column to strings\n w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]\n\n # Write out two rows for each input row, one with ultrasound and one without\n for result in [no_ultrasound, w_ultrasound]:\n data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])\n key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key\n yield str('{},{}'.format(data, key))\n \ndef preprocess(in_test_mode):\n import shutil, os, subprocess\n job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')\n\n if in_test_mode:\n print('Launching local job ... 
hang on')\n OUTPUT_DIR = './preproc'\n shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n os.makedirs(OUTPUT_DIR)\n else:\n print('Launching Dataflow job {} ... hang on'.format(job_name))\n OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)\n try:\n subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())\n except:\n pass\n\n options = {\n 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),\n 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),\n 'job_name': job_name,\n 'project': PROJECT,\n 'region': REGION,\n 'teardown_policy': 'TEARDOWN_ALWAYS',\n 'no_save_main_session': True,\n 'num_workers': 4,\n 'max_num_workers': 5\n }\n opts = beam.pipeline.PipelineOptions(flags = [], **options)\n if in_test_mode:\n RUNNER = 'DirectRunner'\n else:\n RUNNER = 'DataflowRunner'\n p = beam.Pipeline(RUNNER, options = opts)\n \n query = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\nFROM\n publicdata.samples.natality\nWHERE year > 2000\nAND weight_pounds > 0\nAND mother_age > 0\nAND plurality > 0\nAND gestation_weeks > 0\nAND month > 0\n \"\"\"\n\n if in_test_mode:\n query = query + ' LIMIT 100' \n\n for step in ['train', 'eval']:\n if step == 'train':\n selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)\n else:\n selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)\n\n (p \n ## TODO Task #2: Modify the Apache Beam pipeline such that the first part of the pipe reads the data from BigQuery\n | '{}_read'.format(step) >> None \n | '{}_csv'.format(step) >> beam.FlatMap(to_csv)\n | '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))\n )\n\n job = p.run()\n if in_test_mode:\n job.wait_until_finish()\n print(\"Done!\")\n \n# TODO Task #3: Once you have verified that the files produced locally are correct, change in_test_mode 
to False\n# to execute this in Cloud Dataflow\npreprocess(in_test_mode = False)", "The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.", "%%bash\ngsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*", "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160517화_4일차_시각화 Visualization/1.시각화 패키지 matplotlib 소개.ipynb
mit
[ "시각화 패키지 matplotlib 소개\nmatplotlib는 파이썬에서 자료를 차트(chart)나 플롯(plot)으로 시각화(visulaization)하는 패키지이다.\nmatplotlib는 다음과 같은 정형화된 차트나 플롯 이외에도 저수준 api를 사용한 다양한 시각화 기능을 제공한다.\n\n라인 플롯 (line plot)\n스캐터 플롯 (scatter plot)\n컨투어 플롯 (contour plot)\n서피스 플롯 (surface plot)\n바 차트 (bar chart)\n히스토그램 (histogram)\n박스 플롯 (box plot)\n\nmatplotlib를 사용한 시각화 예제들을 보고 싶다면 다음 웹사이트를 방문한다.\n* http://matplotlib.org/1.5.1/gallery.html\npylab 서브패키지\nmatplotlib 패키지에는 pylab 라는 서브패키지가 존재한다. 이 pylab 서브패키지는 matlab 이라는 수치해석 소프트웨어의 시각화 명령을 거의 그대로 사용할 수 있도록 matplotlib 의 하위 API를 포장(wrapping)한 명령어 집합을 제공한다. 간단한 시각화 프로그램을 만드는 경우에는 pylab 서브패키지의 명령만으로도 충분하다. 다음에 설명할 명령어들도 별도의 설명이 없으면 pylab 패키지의 명령라고 생각하면 된다.\nmatplotlib 패키지를 사용할 때는 보통 다음과 같이 주 패키지는 mpl 이라는 alias로 임포트하고 pylab 서브패키지는 plt 라는 alias 로 별도 임포트하여 사용하는 것이 관례이므로 여기에서도 이러한 방법을 사용한다.", "import matplotlib as mpl\nimport matplotlib.pylab as plt", "라인 플롯\n가장 간단한 플롯은 선을 그리는 라인 플롯(line plot)이다. 라인 플롯은 데이터가 시간, 순서 등에 따라 어떻게 변화하는지 보여주기 위해 사용한다.\n명령은 pylab 서브패키지의 plot 명령을 사용한다.\n\nhttp://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.plot\n\n만약 데이터가 1, 4, 9, 16 으로 변화하였다면 다음과 같이 plot 명령에 데이터 리스트 혹은 ndarray 객체를 넘긴다.", "plt.plot([1, 4, 9, 16]);\nplt.show();", "이 때 x 축의 자료 위치 즉, 틱(tick)은 자동으로 0, 1, 2, 3 이 된다. 만약 이 x tick 위치를 별도로 명시하고 싶다면 다음과 같이 두 개의 같은 길이의 리스트 혹은 배열 자료를 넣는다.", "plt.plot([10, 20, 30, 40], [1, 4, 9, 16]);\nplt.show();", "show 명령은 시각화 명령을 실제로 차트로 렌더링(rendering)하고 마우스 움직임 등의 이벤트를 기다리라는 지시이다.\n만약 외부 렌더링을 하지 않고 IPython이나 Jupyter 노트북에서 내부 플롯(inline plot)을 사용하도록 다음과 같이 미리 설정하였다면 별도의 이벤트 처리를 할 수 없기 때문에 show 명령을 추가적으로 지시하지 않아도 자동으로 그림이 그려진다. 따라서 이 강의에서는 앞으로 모두 show 명령을 생략하도록 한다.\nJupyter 노트북은 서버측에서 가동되므로 반드시 내부 플롯을 사용해야 한다.", "%matplotlib inline", "만약 IPython Qt 콘솔을 사용하는 경우에는 다음 명령으로 내부 플롯을 해제하고 그림을 콘솔 내부가 아닌 외부 창(window)에 그릴 수도 있다.\n%matplotlib qt\n스타일 지정\n플롯 명령어는 보는 사람이 그림을 더 알아보기 쉽게 하기 위해 다양한 스타일(style)을 지원한다. 
plot 명령어에서는 다음과 같이 추가 문자열 인수를 사용하여 스타일을 지원한다.", "plt.plot([1, 4, 9, 16], 'rs--');", "스타일 문자열은 색깔(color), 마커(marker), 선 종류(line style)의 순서로 지정한다. 만약 이 중 일부가 생략되면 디폴트값이 적용된다.\n색깔\n색깔을 지정하는 방법은 색 이름 혹은 약자를 사용하거나 # 문자로 시작되는 RGB코드를 사용한다.\n자주 사용되는 색깔은 한글자 약자를 사용할 수 있으며 약자는 아래 표에 정리하였다. 전체 색깔 목록은 다음 웹사이트를 참조한다.\n\nhttp://matplotlib.org/examples/color/named_colors.html\n\n| 문자열 | 약자 |\n|-|-|\n| blue | b |\n| green | g |\n| red | r |\n| cyan | c |\n| magenta | m |\n| yellow | y |\n| black | k |\n| white | w |\n마커\n데이터 위치를 나타내는 기호를 마커(marker)라고 한다. 마커의 종류는 다음과 같다.\n| 마커 문자열 | 의미 |\n|-|-|\n| . | point marker |\n| , | pixel marker |\n| o | circle marker |\n| v | triangle_down marker |\n| ^ | triangle_up marker |\n| &lt; | triangle_left marker |\n| &gt; | triangle_right marker |\n| 1 | tri_down marker |\n| 2 | tri_up marker |\n| 3 | tri_left marker |\n| 4 | tri_right marker |\n| s | square marker |\n| p | pentagon marker |\n| * | star marker |\n| h | hexagon1 marker |\n| H | hexagon2 marker |\n| + | plus marker |\n| x | x marker |\n| D | diamond marker |\n| d | thin_diamond marker |\n선 스타일\n선 스타일에는 실선(solid), 대시선(dashed), 점선(dotted), 대시-점선(dash-dit) 이 있다. 지정 문자열은 다음과 같다.\n| 선 스타일 문자열 | 의미 |\n|-|-|\n| - | solid line style\n| -- | dashed line style\n| -. | dash-dot line style\n| : | dotted line style\n기타 스타일\n라인 플롯에서는 앞서 설명한 세 가지 스타일 이외에도 여러가지 스타일을 지정할 수 있지만 이 경우에는 인수 이름을 정확하게 지정해야 한다. 
사용할 수 있는 스타일 인수의 목록은 matplotlib.lines.Line2D 클래스에 대한 다음 웹사이트를 참조한다.\n\nhttp://matplotlib.org/1.5.1/api/lines_api.html#matplotlib.lines.Line2D\n\n라인 플롯에서 자주 사용되는 기타 스타일은 다음과 같다.\n| 스타일 문자열 | 약자 | 의미 |\n|-|-|-|\n| color | c | 선 색깔 |\n| linewidth | lw | 선 굵기 |\n| linestyle | ls | 선 스타일 |\n| marker | | 마커 종류 |\n| markersize | ms | 마커 크기 |\n| markeredgecolor | mec | 마커 선 색깔 |\n| markeredgewidth | mew | 마커 선 굵기 |\n| markerfacecolor | mfc | 마커 내부 색깔 |", "plt.plot([1,4,9,16], c=\"k\", lw=10, ls=\"-.\", marker=\"1\", ms=50, mec='y', mew=3, mfc=\"b\");\n\nplt.plot([1,4,9,16], c=\"c\", lw=5, ls=\"--\", marker=\"o\", ms=15, mec=\"g\", mew=5, mfc=\"r\");", "그림 범위 지정\n플롯 그림을 보면 몇몇 점들은 그림의 범위 경계선에 있어서 잘 보이지 않는 경우가 있을 수 있다. 그림의 범위를 수동으로 지정하려면 xlim 명령과 ylim 명령을 사용한다. 이 명령들은 그림의 범위가 되는 x축, y축의 최소값과 최대값을 지정한다.", "plt.plot([1, 4, 9, 16], c=\"b\", lw=2, ls=\"-\", marker=\"+\", ms=12, mec=\"y\", mew=6, mfc=\"g\");\nplt.xlim(-0.2, 3.2);\nplt.ylim(-1, 18);\n\nplt.plot([1,4,9,16], c=\"b\", lw=5, ls=\"--\", marker=\"o\", ms=15, mec=\"g\", mew=5, mfc=\"r\");\nplt.xlim(-0.2, 3.2);\nplt.ylim(-1, 18);", "틱 설정\n플롯이나 차트에서 축상의 위치 표시 지점을 틱(tick)이라고 하고 이 틱에 써진 숫자 혹은 글자를 틱 라벨(tick label)이라고 한다. 틱의 위치나 틱 라벨은 matplotlib가 자동으로 정해주지만 만약 수동으로 설정하고 싶다면 xticks 명령이나 yticks 명령을 사용한다.", "X = np.linspace(-np.pi, np.pi, 256)\nC = np.cos(X)\nplt.plot(X, C)\nplt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])\nplt.yticks([-1, 0, +1]);", "틱 라벨 문자열에는 $$ 사이에 LaTeX 수학 문자식을 넣을 수도 있다.", "X = np.linspace(-np.pi, np.pi, 256)\nC = np.cos(X)\nplt.plot(X, C)\nplt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [r'$-\\pi$', r'$-\\pi/2$', r'$0$', r'$+\\pi/2$', r'$+\\pi$']);\nplt.yticks([-1, 0, 1], [\"Low\", \"Zero\", \"High\"]);", "그리드 설정\n위 그림을 보면 틱 위치를 잘 보여주기 위해 그림 중간에 그리드 선(grid line)이 자동으로 그려진 것을 알 수 있다. 그리드를 사용하지 않으려면 grid(False) 명령을 사용한다. 
다시 그리드를 사용하려면 grid(True)를 사용한다.", "X = np.linspace(-np.pi, np.pi, 256)\nC = np.cos(X)\nplt.plot(X, C)\nplt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [r'$-\\pi$', r'$-\\pi/2$', r'$0$', r'$+\\pi/2$', r'$+\\pi$']);\nplt.yticks([-1, 0, 1], [\"Low\", \"Zero\", \"High\"]);\nplt.grid(False)", "여러개의 선을 그리기\n라인 플롯에서 선을 하나가 아니라 여러개를 그리고 싶은 경우에는 x 데이터, y 데이터, 스타일 문자열을 반복하여 인수로 넘긴다. 이 경우에는 하나의 선을 그릴 때 처럼 x 데이터나 스타일 문자열을 생략할 수 없다.", "t = np.arange(0., 5., 0.2)\nplt.plot(t, t, 'r--', t, 0.5*t**2, 'bs:', t, 0.2*t**3, 'g^-');", "홀드 명령\n하나의 plot 명령이 아니라 복수의 plot 명령을 하나의 그림에 겹쳐서 그릴 수도 있다. 기존의 그림 위에 겹쳐 그리도록 하는 명령은 hold(True) 이다. 겹치기를 종료하는 것은 hold(False) 이다.", "plt.plot([1, 4, 9, 16], c='b', lw=5, ls=\"--\", marker='o', ms=15, mec='g', mew=5, mfc='r');\nplt.hold(True)\nplt.plot([9, 16, 4, 1], c='c', lw=3, ls=':', marker='s', ms=10, mec='k', mew=5, mfc='m');\nplt.hold(False)", "범례\n여러개의 라인 플롯을 동시에 그리는 경우에는 각 선이 무슨 자료를 표시하는지를 보여주기 위해 legend 명령으로 범례(legend)를 추가할 수 있다. 범례의 위치는 자동으로 정해지지만 수동으로 설정하고 싶으면 loc 인수를 사용한다. 인수에는 문자열 혹은 숫자가 들어가며 가능한 코드는 다음과 같다.\n| loc 문자열 | 숫자 |\n|-|-|\n| best | 0 | \n| upper right | 1 | \n| upper left | 2 | \n| lower left | 3 | \n| lower right | 4 | \n| right | 5 | \n| center left | 6 | \n| center right | 7 | \n| lower center | 8 | \n| upper center | 9 | \n| center | 10 |", "X = np.linspace(-np.pi, np.pi, 256)\nC, S = np.cos(X), np.sin(X)\nplt.plot(X, C, label=\"cosine\")\nplt.hold(True)\nplt.plot(X, S, label=\"sine\")\nplt.legend(loc=2);", "x축, y축 라벨, 타이틀\n플롯의 x축 위치와 y축 위치에는 각각 그 데이터가 의미하는 바를 표시하기 위해 라벨(label)를 추가할 수 있다. 라벨을 붙이려면 xlabel. ylabel 명령을 사용한다. 
또 플롯의 위에는 title 명령으로 제목(title)을 붙일 수 있다.", "X = np.linspace(-np.pi, np.pi, 256)\nC, S = np.cos(X), np.sin(X)\nplt.plot(X, C, label=\"cosine\")\nplt.xlabel(\"time\")\nplt.ylabel(\"amplitude\")\nplt.title(\"Cosine Plot\");", "부가설명\nannotate 명령을 사용하면 그림 내에 화살표를 포함한 부가 설명 문자열을 넣을 수 있다.", "plt.plot(X, S, label=\"sine\")\nplt.scatter([0], [0], color=\"r\", linewidth=10);\nplt.annotate(r'$(0,0)$', xy=(0, 0), xycoords='data', xytext=(-50, 50), \n textcoords='offset points', fontsize=16, \n arrowprops=dict(arrowstyle=\"->\", linewidth=3, color=\"g\"));", "그림의 구조\nmatplotlib가 그리는 그림은 사실 Figure, Axes, Axis 등으로 이어지는 구조를 가진다. 다음 그림은 이 구조를 설명하고 있다.\n<img src=\"https://datascienceschool.net/upfiles/4e20efe6352e4f4fac65c26cb660f522.png\" style=\"width: 50%\">\nFigure\n모든 그림은 Figure 라고 하는 부르는 matplotlib.figure.Figure 클래스 객체에 포함되어 있다. 내부 플롯(inline plot)이 아닌 경우에는 하나의 Figure는 하나의 아이디 숫자와 윈도우(Window)를 가진다. \nFigure 객체에 대한 자세한 설명은 다음 웹사이트를 참조한다.\n\nhttp://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure\n\n원래 Figure를 생성하려면 figure 명령을 사용하여 그 반환값으로 Figure 객체를 얻어야 한다. 그러나 일반적인 plot 명령 등을 실행하면 자동으로 Figure를 생성해주기 때문에 일반적으로는 figure 명령을 잘 사용하지 않는다. figure 명령을 명시적으로 사용하는 경우는 여러개의 윈도우를 동시에 띄워야 하거나(line plot이 아닌 경우), Jupyter 노트북 등에서(line plot의 경우) 그림의 크기를 설정하고 싶을 때이다. 그림의 크기는 figsize 인수로 설정한다.", "f1 = plt.figure(figsize=(10, 2))\nplt.plot(np.random.randn(100));", "만약 명시적으로 figure 명령을 사용하지 않은 경우에 Figure 객체를 얻으려면 gcf 명령을 사용한다.", "f1 = plt.figure(1)\nplt.plot([1, 2, 3, 4], 'ro:')\nf2 = plt.gcf()\nprint(f1, id(f1))\nprint(f2, id(f2))", "Axes와 Subplot\n때로는 다음과 같이 하나의 윈도우(Figure)안에 여러개의 플롯을 배열 형태로 보여야하는 경우도 있다. Figure 안에 있는 각각의 플롯은 Axes 라고 불리는 객체에 속한다.\nAxes 객체에 대한 자세한 설명은 다음 웹사이트를 참조한다.\n\nhttp://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes\n\nFigure 안에 Axes를 생성하려면 원래는 subplot 명령을 사용하여 명시적으로 Axes 객체를 얻어야 한다. 그러나 plot 명령을 바로 사용해도 자동으로 Axes를 생성해 준다.\nsubplot 명령은 그리드(grid) 형태의 Axes 객체들을 생성하는데 Figure가 행렬(matrix)이고 Axes가 행렬의 원소라고 생각하면 된다. 
예를 들어 \n위와 아래 두 개의 플롯이 있는 경우 행이 2 이고 열이 1인 2x1 행렬이다. \nsubplot 명령은 세개의 인수를 가지는데 처음 두개의 원소가 전체 그리드 행렬의 모양을 지시하는 두 숫자이고 세번째 인수가 네 개 중 어느것인지를 의미하는 숫자이다.", "x1 = np.linspace(0.0, 5.0)\nx2 = np.linspace(0.0, 2.0)\ny1 = np.cos(2 * np.pi * x1) * np.exp(-x1)\ny2 = np.cos(2 * np.pi * x2)\n\nax1 = plt.subplot(2, 1, 1);\nplt.plot(x1, y1, 'yo-');\nplt.title('A tale of 2 subplots');\nplt.ylabel('Damped oscillation');\nprint(ax1)\n\nax2 = plt.subplot(2, 1, 2);\nplt.plot(x2, y2, 'r.-');\nplt.xlabel('time (s)');\nplt.ylabel('Undamped');\nprint(ax2)", "만약 2x2 형태의 네 개의 플롯이라면 다음과 같이 그린다. 이 때 subplot 의 인수는 (2,2,1)를 줄여서 221 라는 하나의 숫자로 표시할 수도 있다.\nAxes의 위치는 왼쪽에서 오른쪽으로, 위에서 부터 아래로 카운트한다.", "plt.subplot(221); plt.plot([1, 2]); plt.title(1)\nplt.subplot(222); plt.plot([1, 2]); plt.title(2)\nplt.subplot(223); plt.plot([1, 2]); plt.title(3)\nplt.subplot(224); plt.plot([1, 2]); plt.title(4)\nplt.tight_layout()", "xkcd 스타일", "with plt.xkcd():\n plt.title('XKCD style plot!!!')\n plt.plot(X, C, label='cosine')\n t = 2 * np.pi / 3\n plt.scatter(t, np.cos(t), 50, color='blue')\n plt.annotate(r'0.5 Here!', xy=(t, np.cos(t)), xycoords='data', xytext=(-90, -50),\n textcoords='offset points', fontsize=16, arrowprops=dict(arrowstyle='->'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robblack007/clase-metodos-numericos
Practicas/P7/Practica 7 - Diferenciacion numerica.ipynb
mit
[ "Diferenciación numérica\nEn esta práctica exploraremos las formulas para la diferenciación numérica, así como un par de maneras de hacer los calculos de estas mas eficientes.\nEmpecemos con una función compleja de derivar:\n$$\nf(x) = \\tanh{\\left(\\ln{\\left(x^x\\right)}\\right)}\n$$\nestas funciones se encuentran en la liberria de numpy:", "from numpy import tanh, log, linspace", "y definimos la función de manera simple con la notacion lambda:", "f = lambda x: tanh(log(x**x))", "De manera preliminar, podemos graficar unos cuantos puntos para darnos una idea de la forma general de la función, para esto definimos unos cuantos puntos:", "datos_x = linspace(0.2, 1.8, 9)\ndatos_x", "y obtenemos el valor de estos datos en la función a derivar:", "datos_y = f(datos_x)\n\nfrom matplotlib.pyplot import plot\n%matplotlib inline\n\nplot(datos_x, datos_y)", "Por la gráfica podemos ver que para el valor de $x=1$, la grafica pasa aproximadamente por $0$, evaluando en este punto, vemos que realmente es:", "f(1)", "Pero mas importante aun, este es un ejercicio del libro de texto, por lo que sabemos que en ese punto la derivada de esta función tiene un valor $f'(1) = 1$, por lo que estaremos viendo aproximaciones de este valor.", "plot(datos_x, datos_y)\nplot([0.2,1.8], [-0.8,0.8])", "Una vez que tenemos claro nuestro objetivo, empezamos a escribir las funciones que calcularán el valor de la derivada en cierto punto, por ejemplo, podemos empezar con las funciones que calculan las derivadas de dos puntos:\n$$\nf'(x_0) = \\frac{f(x_1) - f(x_ 0)}{h}\n$$\n$$\nf'(x_0) = \\frac{f(x_0) - f(x_{-1})}{h}\n$$\nen donde el valor de $h$ es calculado con el paso de los datos, o bien la diferencia entre $x_0$ y $x_1$, por facilidad de implementación, estas las calcularemos con $h= \\left| x_1 - x_0 \\right|$\n\nNote que el conjunto de valores que hemos utilizado, tiene un valor constante de $h$, es importante que los datos en diferenciación numérica sean siempre calculados asi.", "def 
derivada_adelante_dos(func, x0, x1):\n    return (func(x1) - func(x0))/abs(x1-x0)\n\nderivada_adelante_dos(f, 1, 1.2)\n\ndef derivada_atras_dos(func, x0, x1):\n    return (func(x0) - func(x1))/abs(x1-x0)\n\nderivada_atras_dos(f, 1, 0.8)", "Lo mas evidente de estos resultados es que no son exactos, mas aún, ni siquiera estan perfectamente centrados en el valor principal, podemos ver esto aun mas evidente con las formulas para la derivación con 3 puntos:\n$$\nf'(x_0) = \\frac{-3f(x_0) + 4f(x_1) - f(x_2)}{2h}\n$$\n$$\nf'(x_0) = \\frac{3f(x_0) - 4f(x_{-1}) + f(x_{-2})}{2h}\n$$", "def derivada_adelante_tres(func, x0, x1, x2):\n    return (-3*func(x0) + 4*func(x1) - func(x2))/abs(x2-x0)\n\nderivada_adelante_tres(f, 1, 1.2, 1.4)\n\ndef derivada_atras_tres(func, x0, x1, x2):\n    return (3*func(x0) - 4*func(x1) + func(x2))/abs(x2-x0)\n\nderivada_atras_tres(f, 1, 0.8, 0.6)", "Por lo que si quisieramos calcular el error, restando el valor de uno con el otro, nos hubiera reportado un error menor al realmente ocurrido.\nCon esto podemos concluir que no hay una manera analítica de determinar el error obtenido, tan solo tenemos la garantía de que las formulas de 2, 3 y 5 puntos nos van a entregar errores del orden de magnitud $\\mathcal{O}\\left( h \\right)$, $\\mathcal{O}\\left( h^2 \\right)$ y $\\mathcal{O}\\left( h^4 \\right)$; es decir, al menos tenemos la garantía de que usar las formulas de 5 puntos nos entregarán un menor error, siempre y cuando $h<1$.\nDefinamos pues las formulas para las derivadas de 5 puntos:\n$$\nf'(x_0) = \\frac{-25f(x_0) + 48f(x_1) - 36f(x_2) + 16f(x_3) - 3f(x_4)}{12h}\n$$\n$$\nf'(x_0) = \\frac{25f(x_0) - 48f(x_{-1}) + 36f(x_{-2}) - 16f(x_{-3}) + 3f(x_{-4})}{12h}\n$$", "def derivada_adelante_cinco(func, x0, x1, x2, x3, x4):\n    return (-25*func(x0) + 48*func(x1) - 36*func(x2) + 16*func(x3) - 3*func(x4))/(3*abs(x4 - x0))\n\nderivada_adelante_cinco(f, 1, 1.2, 1.4, 1.6, 1.8)\n\ndef derivada_atras_cinco(func, x0, x1, x2, x3, x4):\n    return (25*func(x0) - 48*func(x1) + 
36*func(x2) - 16*func(x3) + 3*func(x4))/(3*abs(x4 - x0))\n\nderivada_atras_cinco(f, 1, 0.8, 0.6, 0.4, 0.2)", "Sin embargo, estas formulas utilizan una y otra vez la funcion original... y esto es adecuado si no te preocupa el poder de procesamiento del dispositivo en el que se calcula, esto es si tienes acceso a la función analítica.\nEn este ejemplo teniamos una formula para la función a derivar, sin embargo va a haber ocasiones en las que queremos saber la derivada de un conjunto de datos, en este caso, no sabremos la forma analítica de la función original y por lo tanto, este enfoque es inutil.\nIntroduciremos el concepto de memoización, el cual consiste en calcular todo lo posible de antemano y utilizarlo cuando nos sea necesario.\nEn este caso, ya tenemos un conjunto de datos que calculamos anteriormente, datos_x y datos_y fueron calculados tomando en cuenta $h$ y $f(x)$, por lo que realmente no necesitamos estos valores para nuestras funciones.\nSi definimos las funciones para las derivadas con dos puntos, tendremos lo siguiente:", "def dadel2(xs, ys, i):\n return (ys[i+1] - ys[i])/abs(xs[i+1]-xs[i])\n\ndadel2(datos_x, datos_y, 4)\n\ndef datr2(xs, ys, i):\n return (ys[i] - ys[i-1])/abs(xs[i] - xs[i-1])\n\ndatr2(datos_x, datos_y, 4)", "y por los resultados que nos dan, podemos ver que son equivalentes.\nProblemas\n\nObtenga funciones para la obtencion de las derivadas numéricas de 3 y 5 puntos, hacia adelante, centrales y hacia atras, utilizando la técnica de memoización (utiliza las formulas ubicadas en las páginas 456 y 457 de tu libro de texto.\nResuelva el ejercicio 5.41 del libro de texto.\nUtilizando el siguiente código para obtener el conjunto de datos, obtenga la derivada de la señal correspondiente a la posición de un servomotor para obtener su velocidad.", "import json\ndatos = None\nwith open('datos.json') as archivo:\n datos = json.load(archivo)\n \nposicion, tiempo = datos[0], datos[1]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/es-419/guide/keras/custom_callback.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Escribir callbacks de Keras personalizados\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/custom_callback\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Ver en TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es-419/guide/keras/custom_callback.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Ejecutar en Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/es-419/guide/keras/custom_callback.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Ver fuente en GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es-419/guide/keras/custom_callback.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Descargar notebook</a>\n </td>\n</table>\n\nNote: Nuestra comunidad de Tensorflow ha traducido estos documentos. 
Como las traducciones de la comunidad\nson basados en el \"mejor esfuerzo\", no hay ninguna garantia que esta sea un reflejo preciso y actual \nde la Documentacion Oficial en Ingles.\nSi tienen sugerencias sobre como mejorar esta traduccion, por favor envian un \"Pull request\"\nal siguiente repositorio tensorflow/docs.\nPara ofrecerse como voluntario o hacer revision de las traducciones de la Comunidad\npor favor contacten al siguiente grupo docs@tensorflow.org list.\nUn callback personalizado es una herramienta poderosa para personalizar el comportamiento de un modelo de Keras durante el entrenamiento, evaluacion o inferencia, incluyendo la lectura/cambio del modelo de Keras. Ejemplos incluyen tf.keras.callbacks.TensorBoard, donde se pueden exportar y visualizar el progreso del entrenamiento y los resultados con TensorBoard, o tf.keras.callbacks.ModelCheckpoint donde el modelo es automaticamente guardado durante el entrenamiento, entre otros. En esta guia aprenderas que es un callback de Keras, cuando se llama, que puede hacer y como puedes construir una propia. Al final de la guia habra demos para la creacion de aplicaciones simples de callback para ayudarte a empezar tu propio callback personalizados.\nSetup", "import tensorflow as tf", "Introduccion a los callbacks de Keras\nEn Keras 'Callback' es una clase de python destinada a ser subclase para proporcionar una funcionalidad específica, con un conjunto de métodos llamados en varias etapas de entrenamiento (incluyendo el inicio y fin de los batch/epoch), pruebas y predicciones. Los Callbacks son útiles para tener visibilidad de los estados internos y las estadísticas del modelo durante el entrenamiento. Puedes pasar una lista de callbacks (como argumento de palabra clave callbacks) a cualquiera de los siguientes metodos tf.keras.Model.fit (),tf.keras.Model.evaluate ()ytf.keras.Model .predict (). 
Los metodos de los callbacks se llamaran en diferentes etapas del entrenamiento/evaluación/inferencia.\nPara comenzar, importemos TensorFlow y definamos un modelo secuencial sencillo en Keras:", "# Definir el modelo de Keras al que se le agregaran los callbacks\ndef get_model():\n  model = tf.keras.Sequential()\n  model.add(tf.keras.layers.Dense(1, activation = 'linear', input_dim = 784))\n  model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.1), loss='mean_squared_error', metrics=['mae'])\n  return model", "Luego, carga el dataset de MNIST para entrenamiento y pruebas de la API de datasets de Keras:", "# Cargar los datos de ejemplo de MNIST y preprocesarlos\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train = x_train.reshape(60000, 784).astype('float32') / 255\nx_test = x_test.reshape(10000, 784).astype('float32') / 255", "Ahora, define un callback simple y personalizado para rastrear el inicio y fin de cada batch de datos. Durante esas llamadas, imprime el indice del batch actual.", "import datetime\n\nclass MyCustomCallback(tf.keras.callbacks.Callback):\n\n  def on_train_batch_begin(self, batch, logs=None):\n    print('Entrenamiento: batch {} comienza en {}'.format(batch, datetime.datetime.now().time()))\n\n  def on_train_batch_end(self, batch, logs=None):\n    print('Entrenamiento: batch {} termina en {}'.format(batch, datetime.datetime.now().time()))\n\n  def on_test_batch_begin(self, batch, logs=None):\n    print('Evaluacion: batch {} comienza en {}'.format(batch, datetime.datetime.now().time()))\n\n  def on_test_batch_end(self, batch, logs=None):\n    print('Evaluacion: batch {} termina en {}'.format(batch, datetime.datetime.now().time()))", "Dar un callback para los metodos del modelo tales como tf.keras.Model.fit() asegura que los metodos son llamados en dichas etapas:", "model = get_model()\n_ = model.fit(x_train, y_train,\n          batch_size=64,\n          epochs=1,\n          steps_per_epoch=5,\n          verbose=0,\n          callbacks=[MyCustomCallback()])", 
"Metodos del Modelo que aceptan callbacks\nLos usuarios pueden dar una lista de callbacks para los siguientes metodos de tf.keras.Model:\nfit(), fit_generator()\nEntrena el modelo por una cantidad determinada de epochs (iteraciones en un dataset, o para los datos determinados por un generador de Python que va batch-por-batch).\nevaluate(), evaluate_generator()\nEvalua el modelo para determinados datos o generador de datos. Regresa la perdida (loss) y valores metricos para la evaluacion.\npredict(), predict_generator()\nGenera las predicciones a regresar para los datos ingresados o el generador de datos.\nNOTA: Toda la documentacion esta en ingles.", "_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=5,\n callbacks=[MyCustomCallback()])", "Una revision de los metodos de callback\nMetodos comunes para entrenamiento/pruebas/prediccion\nPara entrenamiento, pruebas y prediccion, los siguientes metodos se han previsto para ser sobreescritos.\non_(train|test|predict)_begin(self, logs=None)\nLlamado al inicio de fit/evaluate/predict.\non_(train|test|predict)_end(self, logs=None)\nLlamado al fin de fit/evaluate/predict.\non_(train|test|predict)_batch_begin(self, batch, logs=None)\nLlamado justo antes de procesar un batch durante entrenamiento/pruebas/prediccion. Dentro de este metodo, logs es un diccionario con las llaves batch y size disponibles, representando el numero de batch actual y las dimensiones del mismo.\non_(train|test|predict)_batch_end(self, batch, logs=None)\nLlamado al final del entrenamiento/pruebas/prediccion de un batch. 
Dentro de este metodo, logs es un diccionario que contiene resultados metricos con estado.\nEntrenamiento de metodos especificos\nAdicionalmente, para el entrenamiento, los siguientes metodos son provistos.\non_epoch_begin(self, epoch, logs=None)\nLlamado al inicio de una epoch durante el entrenamiento.\non_epoch_end(self, epoch, logs=None)\nLlamado al final de una epoch durante el entrenamiento.\nUso del diccionario logs\nEl diccionario logs contiene el valor de perdida (loss), y todas las metricas pertinentes al final de un batch o epoch. El ejemplo a continuacion incluye la perdida (loss) y el MAE (Mean Absolute Error).", "class LossAndErrorPrintingCallback(tf.keras.callbacks.Callback):\n\n  def on_train_batch_end(self, batch, logs=None):\n    print('Para el batch {}, la perdida (loss) es {:7.2f}.'.format(batch, logs['loss']))\n\n  def on_test_batch_end(self, batch, logs=None):\n    print('Para el batch {}, la perdida (loss) es {:7.2f}.'.format(batch, logs['loss']))\n\n  def on_epoch_end(self, epoch, logs=None):\n    print('La perdida promedio para la epoch {} es {:7.2f} y el MAE es {:7.2f}.'.format(epoch, logs['loss'], logs['mae']))\n\nmodel = get_model()\n_ = model.fit(x_train, y_train,\n          batch_size=64,\n          steps_per_epoch=5,\n          epochs=3,\n          verbose=0,\n          callbacks=[LossAndErrorPrintingCallback()])", "De manera similar, uno puede proveer callbacks en las llamadas a evaluate().", "_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=20,\n          callbacks=[LossAndErrorPrintingCallback()])", "Ejemplos de aplicaciones de callbacks de Keras\nLa siguiente seccion te guiara en la creacion de una aplicacion de callback simple.\nDetencion anticipada con perdida minima.\nEl primer ejemplo muestra la creacion de un Callback que detiene el entrenamiento de Keras cuando se alcanza el minimo de perdida mutando el atributo model.stop_training (boolean). 
Opcionalmente, el usuario puede proporcionar el argumento patience para especificar cuantas epochs debe esperar el entrenamiento antes de detenerse.\ntf.keras.callbacks.EarlyStopping proporciona una implementación mas completa y general.", "import numpy as np\n\nclass EarlyStoppingAtMinLoss(tf.keras.callbacks.Callback):\n  \"\"\"Detener el entrenamiento cuando la perdida (loss) esta en su minimo, i.e. la perdida (loss) deja de disminuir.\n\n  Arguments:\n      patience: Numero de epochs a esperar despues de que el min ha sido alcanzado. Despues de este numero\n      de no mejoras, el entrenamiento para.\n  \"\"\"\n\n  def __init__(self, patience=0):\n    super(EarlyStoppingAtMinLoss, self).__init__()\n\n    self.patience = patience\n\n    # best_weights para almacenar los pesos en los cuales ocurre la perdida minima.\n    self.best_weights = None\n\n  def on_train_begin(self, logs=None):\n    # El numero de epoch que ha esperado cuando la perdida ya no es minima.\n    self.wait = 0\n    # El epoch en el que el entrenamiento se detiene.\n    self.stopped_epoch = 0\n    # Inicializar el best como infinito.\n    self.best = np.Inf\n\n  def on_epoch_end(self, epoch, logs=None):\n    current = logs.get('loss')\n    if np.less(current, self.best):\n      self.best = current\n      self.wait = 0\n      # Guardar los mejores pesos si el resultado actual es mejor (menos).\n      self.best_weights = self.model.get_weights()\n    else:\n      self.wait += 1\n      if self.wait >= self.patience:\n        self.stopped_epoch = epoch\n        self.model.stop_training = True\n        print('Restaurando los pesos del modelo del final de la mejor epoch.')\n        self.model.set_weights(self.best_weights)\n\n  def on_train_end(self, logs=None):\n    if self.stopped_epoch > 0:\n      print('Epoch %05d: Detencion anticipada' % (self.stopped_epoch + 1))\n\nmodel = get_model()\n_ = model.fit(x_train, y_train,\n          batch_size=64,\n          steps_per_epoch=5,\n          epochs=30,\n          verbose=0,\n          callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()])", "Programacion del Learning Rate\nAlgo que es hecho comunmente 
en el entrenamiento de un modelo es cambiar el learning rate conforme pasan mas epochs. El backend de Keras expone la API get_value la cual puede ser usada para definir las variables. En este ejemplo estamos mostrando como un Callback personalizado puede ser usado para cambiar dinamicamente el learning rate.\nNota: este es solo una implementacion de ejemplo, callbacks.LearningRateScheduler y keras.optimizers.schedules contienen implementaciones mas generales.", "class LearningRateScheduler(tf.keras.callbacks.Callback):\n \"\"\"Planificador de Learning rate que define el learning rate deacuerdo a lo programado.\n\n Arguments:\n schedule: una funcion que toma el indice del epoch\n (entero, indexado desde 0) y el learning rate actual\n como entradas y regresa un nuevo learning rate como salida (float).\n \"\"\"\n\n def __init__(self, schedule):\n super(LearningRateScheduler, self).__init__()\n self.schedule = schedule\n\n def on_epoch_begin(self, epoch, logs=None):\n if not hasattr(self.model.optimizer, 'lr'):\n raise ValueError('Optimizer must have a \"lr\" attribute.')\n # Obtener el learning rate actua del optimizer del modelo.\n lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))\n # Llamar la funcion schedule para obtener el learning rate programado.\n scheduled_lr = self.schedule(epoch, lr)\n # Definir el valor en el optimized antes de que la epoch comience\n tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)\n print('\\nEpoch %05d: Learning rate is %6.4f.' 
% (epoch, scheduled_lr))\n\nLR_SCHEDULE = [\n # (epoch a comenzar, learning rate) tupla\n (3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001)\n]\n\ndef lr_schedule(epoch, lr):\n \"\"\"Funcion de ayuda para recuperar el learning rate programado basado en la epoch.\"\"\"\n if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:\n return lr\n for i in range(len(LR_SCHEDULE)):\n if epoch == LR_SCHEDULE[i][0]:\n return LR_SCHEDULE[i][1]\n return lr\n\nmodel = get_model()\n_ = model.fit(x_train, y_train,\n batch_size=64,\n steps_per_epoch=5,\n epochs=15,\n verbose=0,\n callbacks=[LossAndErrorPrintingCallback(), LearningRateScheduler(lr_schedule)])", "Callbacks de Keras estandar\nAsegurate de revisar los callbacks de Keras preexistentes visitando la documentacion de la api. Las aplicaciones incluyen el registro a CSV, guardar el modelo, visualizar en TensorBoard y mucho mas.\nNOTA: La documentacion aun esta en ingles" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dedx/STAR2015
STAR2015Workshop.ipynb
mit
[ "Coding in the Classroom\nJennifer Klay jklay@calpoly.edu\nCalifornia Polytechnic State University, San Luis Obispo\nDescription\nCoding and computer programming are essential skills for 21st century learners but how can we provide opportunities to build such skills in the classroom? Open source tools and programming languages with simple and natural syntax provide one avenue. In this workshop I will introduce participants to the Python programming language using the IPython/Jupyter notebook. I will present ways to help students learn to practice algorithmic thinking, to work together to problem solve, and to apply the computer to solve problems that interest them. In addition, a variety of useful resources for developing lessons and integrating coding into existing curricula will be discussed.\nAudience: Aspiring or early-career K-12 STEM teachers hoping to include research/computing in the classroom.\nAssumed programming experience: None\nThe complete set of materials for this workshop are available online at Github.\n\n1. Introduction\nThis notebook provides a roadmap to help teachers bring coding and programming concepts to the classroom through project-based learning. With some basic programming skills you can tackle a wide array of interesting and instructive problems. \nWhat are some skills that we wish students to gain?\n\nPractice algorithmic thinking\nBreak complex problems into smaller, more manageable parts\nWork together to problem solve\nFigure out how to get unstuck/find help when things don't work as expected\nApply the computer to problems they want to solve\nFind/identify interesting problems the computer can help them solve\n\nThere are several methods for helping students build their programming skills. In particular, encouraging them to program in pairs or groups will help them brainstorm solutions while decomposing a problem into the sequence of steps taken to solve a problem - an algorithm. 
Once they have the steps of a solution outlined, they can attempt to put their process into the syntax of a computing language. \nThere are some best practices for writing effective programs that even seasoned coders don't always follow. Nevertheless most programmers would agree that the following set of practices are helpful and effective. One thing you can do to encourage students is to ask them to periodically evaluate their own work throughout the development process and demonstrate how they are applying these practices. Eventually they won't need to be prompted, as the practices will become second-nature. \nSome best practices for writing effective programs\n\nProgram together (in pairs or groups)\nDeconstruct a problem into simple steps and create a scaffold of the full program from the start - some parts will be empty or placeholders until their logic can be filled in.\nDevelop \"pseudo-code\" (an informal description of the algorithm that uses the structural conventions of a programming language, but is intended for human reading rather than machine reading) for more complex components to work out the logic\nFill in the \"guts\" of the program components\nTest each component individually to verify it gives expected results with a known input and output\nDocument the code as it is developed, describing the purpose of all functions, their inputs and outputs, and the purpose of the full program and how it all fits together\nDemonstrate the program works and (to the best of your ability to judge) gives meaningful results\n\nThese basic goals and guidelines are applicable to any programming language or platform. For beginning coders, it is critical to make the entry point as easily accessible as possible within a framework that also provides a comprehensive set of tools that will enable them to expand beyond the simplest concepts. The tools suggested here provide such a framework.\n\n2. 
Tools\nIPython/Jupyter\nThe IPython Notebook (the \"I\" in IPython stands for interactive) is an interactive computational environment, in which you can combine code execution, rich text, mathematics, plots and rich media. It is a great platform for learning to program and for solving and presenting complex problems. \nThe notebook runs inside a web browser and can be easily installed with the Python programming language and a comprehensive library of scientific computing tools using the \"Anaconda\" package provided FREE by Continuum Analytics.\nIn 2015, the IPython Notebook evolved into Project Jupyter, which now manages the language-agnostic parts of the notebook and provides one uniform platform for working in several different computing languages, including Python. \nIf you are viewing this notebook via a weblink, you won't be able to interact with it, but if you install Anaconda, you can code along with us by opening the notebook from the IPython Notebook server.\nPython\nThe Python programming language is an excellent tool for general-purpose programming, with a highly readable syntax, rich and powerful data types (strings, lists, sets, dictionaries, arbitrary length integers, etc) and a very comprehensive standard library. The language is easy to learn and because it is an interpreted language, commands are executed in real time without the need to compile a full program. \nThere is a whole ecosystem of additional libraries provided with Anaconda that can be used for mathematical and scientific computing, which will help you efficiently represent multidimensional datasets, solve linear algebra systems, perform general matrix manipulations (an essential building block of virtually all technical computing), and visually represent and interact with data of many different types.\nSciPy\nSciPy (pronounced \"Sigh-Pie\") is the premiere package for scientific computing in Python. 
It has many, many useful tools, which are central to scientific computing tasks you'll come across.\nThe NumPy library (pronounced \"Numb-Pie\") forms the base layer for the entire SciPy ecosystem.\nMatplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. When you need to view or present data, it is invaluable.\nThese libraries, or their individual components, can be imported and used in the IPython notebook. \nTo write and execute code within the notebook, we use a code cell (the default). We will look at the details of the syntax later. For now, here is our first code cell, in which we import some of these libraries for later use:", "#Comments begin with #\n\n#Allow graphics to render inside the notebook\n%pylab inline \n\n#import packages we might want to use\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy as sp", "To execute a cell:\n\nShift-Enter: run cell and move to new cell below.\nControl-Enter: run cell and stay in the cell.\n\n\n3. Notebook basics\nThere are two primary different kinds of cell types - \"code\" and \"markdown\". The cell in which this text was typed is a markdown cell. A code cell is one in which computer instructions or code can be typed. To execute a cell, hit Shift-Enter.\nMarkdown cells use the Markdown formatting system to allow you to include formatted text, such as italic and bold, or to create bulleted or numbered lists:\n\na list element\n\nanother list element\n\n\na numbered list element\n\nanother numbered list element\n\nThe view of these cells is determined by whether they have been executed or not. Double-clicking any executed markdown cell will bring it into \"edit\" mode. 
Try it with one of these cells.\nMathematical expressions can be rendered using LaTeX formatting (pronounced \"Lah-Tech\") to give full-featured symbolic representation:\n$$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$$\nThe LaTeX \"code\" to create that equation is\n$$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$$\nLaTeX equation typesetting within the IPython Notebook is very helpful for displaying equations and directly connecting them with the computer code that implements them numerically. \nImages can be included in markdown cells using HTML tags like\n&lt;img src=\"img/Cats.jpg\" width=200&gt;\n<img src=\"img/Cats.jpg\" width=200>\nor they can be included in code cells using the display features of IPython.", "from IPython.display import Image\nImage(filename='img/Cats.jpg')", "It is also possible to display images linked from sites around the web:", "Image(url='http://python.org/images/python-logo.gif')", "How about embedding Youtube videos in the notebook using the video tag?", "from IPython.display import YouTubeVideo\nYouTubeVideo('4disyKG7XtU')", "These features enable the notebook to provide a rich development environment for projects that incorporate computer code but are not (necessarily) all about the code. In fact, the notebook lets you expand the notion of a computer \"program\" to include context and commentary side-by-side with the code to help readers/users better understand the purpose and results of the computer code.\nNow that we have seen some of the notebook's capability in action, let's write some code.\n\n4. Writing Code\nA short workshop is not enough time to fully introduce all of programming to beginners, but we can learn some of the basics by trying them out. 
Here is a brief tour of simple code concepts implemented as code statements in Python.\n4.1 Basic coding syntax and logic\nPrint a message:", "print \"Hello World\"", "Create a variable and assign it a value:", "z = 5", "Print the value stored in the variable:", "print z", "Change the value of the variable with an arithmetic expression:", "z = z + 27\nprint z", "You can request input from a user that can be stored in memory and used for other purposes:", "name = raw_input(\"Hi, what's your name? \");\n\nprint \"Hi, my name is\",name", "In Python, you can define a function and pass it arguments, then execute the function:", "def intro(name):\n    print \"Hi, my name is\",name", "To call the function, you type the name, with the argument(s) in parentheses as a code statement:", "intro(\"Jennifer\")", "Here's a function that takes no arguments:", "def intro():\n    name = raw_input(\"Hi, my name is \");\n\nintro()", "You can control the execution of code statements with if ... else:", "if z > 10:\n    print \"Yay!\"\nelse:\n    print \"Boo. :-(\"\n\nprint z", "For repetitive tasks, you can use a for loop:", "print \"I can count to 10!\"\nfor i in range(10):\n    print i", "Oops. Not quite what you were expecting?\nNotice that in Python, indices start from 0, not 1, and count up to n-1.\nI can create a list of items and iterate over them with a for loop:", "shopping_list = [\"eggs\",\"milk\",\"bacon\",\"bread\",\"strawberries\",\"yogurt\",\"jam\"]\nfor item in shopping_list:\n    print item", "I can access individual items from their location in the list using an index to that location:", "print shopping_list[2]\n\nprint \"What's for breakfast?\"\nprint shopping_list[2] + shopping_list[0]", "Let's try that again:", "print \"What's for breakfast?\"\nprint shopping_list[0] + \" and \" + shopping_list[2]", "I can also slice through the list", "print shopping_list[1:4]", "This shows the \"1th\", \"2th\", and \"3th\" elements, or start to end-1.
Note the difference between this and traditional ordinal numbering: \n\n\"0th\" (Zero-th) = \"First\"\n\"1th\" (One-th) = \"Second\"\n\"2th\" (Two-th) = \"Third\"\n\"3th\" (Three-th) = \"Fourth\"\nand so on...\n\nWhen slicing a list (or array), you can leave one or the other index blank to tell it to start at the beginning or go to the end. The result will be similar to before:", "#Slice from 0th to the 1th element:\nprint shopping_list[:2]\n\n#Slice from 0th (first) to the n-1th (last) element:\nprint shopping_list[:]\n\n#Slice from 2th (third) to n-1th (last) element:\nprint shopping_list[2:]", "You can also access elements backward from the end using negative numbers:", "#Print the last element:\nprint shopping_list[-1]\n\n#Print the second-to-last element:\nprint shopping_list[-2]", "or designate a step size to jump over certain elements:", "#Print every other element:\nprint shopping_list[::2]\n\n#Print every third element (skip 2):\nprint shopping_list[::3]", "How about printing them backward?", "#Iterate from the last to the first element in steps of 1:\nprint shopping_list[-1::-1]", "Python lists can contain elements of any data type - strings of characters (e.g. words, sentences, etc.), whole numbers (called \"integers\" or ints), decimal numbers (called \"floating point numbers\" or floats), other lists, etc. \nThere are other kinds of containers in Python - dictionaries, deques, queues - each with different features and uses. We won't look into them further here, but if you are interested, consult the documentation or an introductory text such as ThinkPython by Allen Downey. 
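As a quick taste of one of them before we move on, a dictionary maps keys to values. The example below is made up for illustration (mapping small numbers to their squares) and is not used elsewhere in this workshop:

```python
#A dictionary maps keys to values -- here, small numbers to their squares
squares = {1: 1, 2: 4, 3: 9}

#Add a new entry and look an existing one up by its key
squares[4] = 16
three_squared = squares[3]

#Arithmetic over all the stored values
total = sum(squares.values())
```

Like lists, dictionaries can be iterated over with a for loop, and their values can be of any data type.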
\n4.2 NumPy Arrays\nNumPy arrays are better containers for purely numerical data.", "#Create an array of 10 values between 0 and 10-1\narr = np.arange(10)\n\nprint arr\n\n#Create an array of 10 evenly-spaced values between 0 and 10\narr2 = np.linspace(0,10,10)\n\nprint arr2", "The elements of the arrays can be accessed with the same indexing and slicing as we used before:", "#How many elements are in the array?\nprint len(arr)\n\n#Print the 5-th element (sixth in the list):\nprint arr[5]", "NumPy arrays can be used for simple or complex mathematical calculations.", "x = np.linspace(0, 2*np.pi, 300)\ny = np.sin(x**2)", "Here, x and y are just arrays of numbers that represent the value of the function $y(x)=\\sin(x^2)$ at each of the 300 values of $x$ between 0 and 2$\\pi$.\n4.3 Matplotlib\nYou can use matplotlib to create visual representations of such data:", "plt.plot(x, y)\nplt.title(\"A little chirp\");", "Graphs in matplotlib have many attributes that you can customize (see the documentation). The developers also provide a gallery of plots to showcase the wide variety of visual representations that are available. \nNo one can keep all of the functions and fine layout control commands in their brain. Often when I need to make a plot, I go to the gallery page and browse the images until I find one that is similar to what I want to create and then I copy the code and modify it to suit my needs.\nWhen you use the Matplotlib gallery to develop (or \"template\") a figure, you can very easily load the source code into your notebook and then modify it as needed to fit your specific needs using the Python \"magic\" command, %load. \nTry it now. 
After the code is loaded, just execute the cell to see the output.", "#Execute this cell to see the histogram example plot code, then execute again to make the plot.\n%load http://matplotlib.org/mpl_examples/statistics/histogram_demo_multihist.py", "4.4 IPython Widgets\nExploring data is much more fun when you can directly interact with it. IPython widgets provide a great way to do just that. Here is an example with two sine curves - one is a pure sine wave, the other is the superposition of two waves with different frequency but the same amplitude. You can interactively explore how the functions change when the parameters are changed.", "from IPython.html.widgets import interact\n\ndef sin_plot(A=5.0,f1=5.0,f2=10.):\n x = np.linspace(0,2*np.pi,1000)\n #pure sine curve\n y = A*np.sin(f1*x)\n #superposition of sine curves with different frequency\n #but same amplitude\n y2 = A*(np.sin(f1*x)+np.sin(f2*x))\n plt.plot(x,y,x,y2)\n plt.xlim(0.,2.*np.pi)\n plt.ylim(-10.,10.)\n plt.grid()\n plt.show()\n \nv3 = interact(sin_plot,A=(0.,10.), f1=(1.0,10.0), f2=(1.0,10.0))", "Here's another interactive plot that allows you to randomly sample (x,y) pairs within a circle of radius $r$. The interact object lets you increase or decrease the number of samples in the circle.", "def scatter_plot(r=0.5, n=27):\n t = np.random.uniform(0.0,2.0*np.pi,n)\n rad = r*np.sqrt(np.random.uniform(0.0,1.0,n))\n x = np.empty(n)\n y = np.empty(n)\n x = rad*np.cos(t)\n y = rad*np.sin(t)\n fig = plt.figure(figsize=(4,4),dpi=80)\n plt.scatter(x,y)\n plt.xlim(-1.,1.)\n plt.ylim(-1.,1.)\n plt.show()\n \nv2 = interact(scatter_plot,r=(0.0,1.0), n=(1,1000))", "A last fun example of using widgets can be found in the accompanying notebook on the Quantum double-slit experiment.\n4.5 When things go awry\nMaking mistakes when programming is an unavoidable part of the experience. It can be very frustrating. 
You might spend an hour trying to track down the source of a \"bug\" in a program, convinced that the computer is at fault. I know I have done that countless times. But computers are only as \"smart\" as their programmers. Like lemmings, they will blindly follow our instructions. It is pretty much always our fault. Get used to that. \nOccasionally, the computer will spit out an error message or give back a gibberish answer, alerting us to our mistake. Sometimes we're not so lucky. \nBug-hunting (or \"debugging\") is part of the job. And part of debugging is being able to understand error messages. Let's generate some errors to get a feel for dealing with them.", "pirnt \"Hello\nprint \"World \" + z-5", "This is an example of a syntax error. We forgot to close the quotes around our message. Python error messages are (usually) informative and tell you what the problem is and where it occurred. We can fix this problem by closing our quotes.", "pirnt \"Hello\"\nprint \"World \" + z-5", "Oops. We misspelled print, giving us another syntax error. In this case the message is not quite as clear about what the problem is, but it isn't too hard to spot the mistake. \nNotice that Python doesn't execute the second statement in the cell when it encounters an error in the first line. Execution stops at the first sign of an error. Keeping the number of lines of code in cells or functions short makes tracking errors much easier. \nNotice also that the Notebook provides syntax highlighting - Python keywords, like print, and numbers appear in <font color=green>green</font>, quoted strings appear in <font color=red>red</font>, operators like + and - appear in <font color=purple>purple</font> and variables are black. This can be helpful for spotting errors quickly. \nLet's correct our spelling.", "print \"Hello\"\nprint \"World \" + z-5", "Now what? Here is a type error. We tried to perform the \"+\" operation on a string and an integer, which is not allowed. 
To correct this we could force the integer object to be treated as a string by casting it as one:", "print \"Hello\"\nprint \"World \" + str(z-5)", "Let's move on to another error.", "ff = m*z + b", "This is an example of a runtime error. The syntax is correct, but in this case some of the variables have not yet been defined. Let's try that again.", "m = 0.5\nb = -3.\n\nprint ff", "Because of the previous error, ff was never defined. We have to either re-execute the cell above or re-define ff now that the other variables have been defined.", "ff = m*z + b\n\nprint ff", "Runtime errors are annoying but at least they announce themselves. What happens if we make a logical error?", "def cube_root(x):\n    '''This function takes a number, x, \n    computes the cube root, \n    and returns the result\n    '''\n    return np.power(x,1/3)\n\nprint cube_root(8)", "Hmmm... What went wrong here? Everything looks okay. This is an example of a logical error. The syntax is correct, the code executed, but the output is not what we expect it to be. (The cube-root of 8 is 2: 2*2*2=8.)\nHow come this function doesn't work? Python is a dynamically-typed language, which means it tries to guess what the data type of the object you are defining is and then applies the rules of those objects for subsequent calculations. Integers are whole numbers of type int. Decimal numbers are represented in the computer as floating point numbers or floats. When Python computes the fraction with the \"/\" operator on integers, it assumes you want an integer result. The \"/\" operator on integers rounds down to the nearest whole integer:", "5/4", "How do we avoid logical errors like this one? For mathematical calculations, you can explicitly define the data type of the numbers you use. If you want a floating-point result, use floating-point numbers by adding a decimal point to the end of the number:", "a = 3\nb = 4\nprint a/b\n\nc = 3.\nd = 4.\nprint c/d", "How do we avoid logical errors in general?
\nNever make a logical error.\nJust kidding. The only way to minimize logical errors is to always test your code with a known result whenever possible. In the case of the cube_root function, we knew something was wrong with the logic because the cube-root of 8 should be 2. When we got a different value than expected, we knew we had a problem to track down and correct. The purpose of testing as a programming \"best practice\" is to minimize logical errors like this one. \nFix the cube_root function and test that it gives sensible results.\n4.6 Getting help\nWhen IPython needs to display additional information it will automatically invoke a pager at the bottom of the screen. You can get help with IPython in general by typing a question mark in a code cell and executing it:", "?", "Or get help on a particular function with, e.g.", "np.arange?", "There is extensive documentation for Python, IPython, NumPy, SciPy, Matplotlib, etc. on the web. Chances are, your question has been asked and answered somewhere already. Google is your friend. A few places to look when you get stuck:\n\nThe IPython website\nStackoverflow\nReddit\n\n\n5. Applications\nNow that we we have seen some of the basics, let's try using what we learned to solve a simple problem.\n5.1 Project Euler\nProject Euler is an online repository of simple and hard math puzzles that can be solved with the computer. These are great for practicing problem-solving, algorithmic thinking, and coding. Let's try a simple one together.\nMultiples of 3 and 5\nProblem 1\nIf we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.\nFind the sum of all the multiples of 3 or 5 below 1000.\nhttps://projecteuler.net/problem=1\nWork with a partner to brainstorm a solution to this problem by breaking it into logical steps. Type your set of steps into a markdown cell as a bulleted list. Then try to implement the code for these steps in a code cell. 
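If you and your partner get stuck, here is one possible sketch for the smaller N = 10 case. It is only one of many valid approaches (and ours, not Project Euler's) -- try writing your own steps before peeking:

```python
#One possible approach for the smaller case: check every natural number below N
N = 10
total = 0
for n in range(N):
    #Keep n if it is a multiple of 3 or a multiple of 5
    if n % 3 == 0 or n % 5 == 0:
        total = total + n

#For N = 10 the multiples are 3, 5, 6 and 9, so total should be 23
```

Once this matches the expected answer of 23, changing N to 1000 solves the full problem.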
Verify your solution works for the case where N = 10 before trying N = 1000.\nSteps:\n*\n\n*", "#Your code here", "5.2 Project Ideas\nMuch of scientific computing involves reading and understanding existing programs written by others so that they can be adapted to solve different problems. Learning to recognize the underlying structure and logic of a program regardless of the language that the program is written in is an essential skill. Once you have the ability to deconstruct and understand programs you can adapt them to your needs.\nI have included a few project examples here that don't require complex domain-specific knowledge (e.g. biology, chemistry, physics) to grasp and understand. Yet these projects can be adapted or applied to interesting problems in various domains.\nSome of the project notebooks include code while others are outlines of projects with varying degrees of detailed instructions. You (and your students) probably won't be ready to really try the project notebooks until you have a good basic set of Python coding skills. You can obtain these by a variety of means, but I can suggest three different resources that I have either personally used or had students successfully use to build code skills. These are:\n\nCodecademy Python Track\nLearn Python The Hard Way\nThink Python!\n\nFor notebooks with example/tutorial code, I recommend creating a separate notebook and typing the commands by hand. This forces you to think about each command as you go and will also force you to deal with syntax errors and mistakes, which is a good thing. Do NOT copy/paste. If you do copy/paste, you might as well not even bother, because you won't be learning how the program works. This style of learning is the method employed by Learn Python The Hard Way. Don't be fooled by the name. It is actually a really good book that is very easy to follow.
It is only considered \"hard\" because you do a lot of practice work - like scales on the piano or free-throws in basketball. The book is also pretty entertaining.\nOn to the projects...\n5.2.1 Battleship\nBattleship is a simple game of hunt and destroy that you may have already played at some point. It can be easily adapted to the computer. The Codecademy course on Python walks you step-by-step through the development of such a game with text-based graphics. This version builds on that using a module called ipythonblocks to represent the game visually.\nBattleship Project Notebook\n5.2.2 Schelling Model\nThe Schelling Model is an easy-to-understand agent-based model of racial segregation that also has analogues in biology, chemistry, and physics. It can be implemented with a simple set of logical rules applied to a collection of \"agents\" in a series of rounds. The model lends itself well to visual representation with ipythonblocks and interactive widgets.\nSchelling Model Project Notebook\n5.2.3 Counting Stars\nImage processing, such as feature identification and extraction, is a cross-disciplinary research method. How does it work? This project presents a way to learn a simple underlying algorithm for counting features (stars in this case) in an image using basic Python. A second implementation that relies on the NumPy library is also provided.\n\nCounting Stars Project Notebook\nCounting Stars with NumPy Project Notebook\n\n5.2.4 Brownian Motion and Avogadro's Number\nBrownian motion is the random erratic motion of small particles immersed in a medium (like water) that is caused by the millions of tiny water molecules colliding with the larger particles. Albert Einstein formulated a quantitative theory of Brownian motion that was confirmed experimentally by Jean Baptiste Perrin, providing the first direct evidence supporting the atomic nature of matter. An estimate of Avogadro's number can be obtained from Einstein's equations applied to Perrin's experiment. 
\nThis project involves using image processing (like that from the Counting Stars project) to observe and quantify these tiny motions from a sequence of movie frames of a system undergoing Brownian motion. A USB microscope can be used to capture video of milkfat globules suspended in water, providing both a programming project and an experimental data collection project.\nAvogadro's Number Project Notebook\n5.2.5 Data mining with Quandl\nData science is an emerging discipline that has taken off in the last few years. We have mountains of information waiting to be queried, sorted, understood. Marketing data, scientific data, sociological, political, demographic data. What can we learn by mining this data for patterns?\nQuandl is an online data platform hosting data from hundreds of publishers on a single easy-to-use website. They make all the numerical data in the world available on their website in the exact format users want. They support open data, meaning that public data must be free, open, and accessible to all. They are an open platform - anybody can buy, sell, store or share data on Quandl.\nThere are many different kinds of projects that students could develop by imagining questions that could be answered with data. For example:\n\nHow do home prices in my zip code correlate with California drought cycles over the last 20 years?\n\nWhat kinds of datasets are there? What kinds of questions can we ask them? You decide.\n5.2.6 Other Projects\nThere are online repositories of interesting programming problems, such as the Stanford Nifty assignments, which you can search for other project ideas. \nAfter your experience as a STAR fellow this summer, you may have interesting projects of your own to share with your class. The IPython notebook could help you get your students involved with those projects.\n\n6.
Suggestions and Resources\nIn case you are thinking...\n6.1 How do I help my students if I don't know what I am doing?\nProgramming is a tool to be used for solving interesting problems. The easiest way to get excited about learning to use a new tool is having a problem to which you wish to apply it. Find something that fits you or your class' interests and help them develop a solution. \nYou don't need to know the answers or have solved the problem already to help them learn how to attack it. Let them be creative and propose challenging projects. If a problem appears too complicated to solve, help them refactor the problem into something that CAN be solved. Once they solve the simple problem, they can add complexity. \nThis is research - trying to find solutions to unsolved (unsolvable?) problems. Your research mentor this summer won't know the answer to the research problem you are here to help them with (although they probably have a good starting point and a basic direction for you to follow). Likewise, you don't need to know all the answers for your students either. \nShow your students what research is really like. Try to solve a complicated problem together.\n6.2 Getting started\nWhere to begin for yourself and your students?\n\nInstall Anaconda's IPython distribution and start working with the notebook\nLearn to program in Python, possibly with one of these three resources\nCodecademy Python Track\nLearn Python The Hard Way\n\nThink Python!\n\n\nPractice programming with Project Euler (independently, in pairs, in groups) on a regular basis. 
\n\nFind a project that interests you and your students\nFollow best practices for writing effective programs\nDiscuss as a group, try to decompose it into simple logical steps\nDevelop code for each step, testing and documenting as you go\n\n\nPut it all together and answer a research question with your code\nCreate a complete record of your work in a notebook and share it (online, at a conference, science fair, etc.)\nRepeat!\n\n6.3 One-stop shop for resources linked in this notebook\nLearning to Program\n* Codecademy\n* Think Python, by Allen Downey\n* Learn Python The Hard Way, by Zed Shaw\n* Software Carpentry\nTools and Software\n* Continuum Analytics\n * Anaconda\n* IPython\n * Notebook Viewer\n* Project Jupyter\n* Python\n* SciPy\n * Lecture Notes\n* NumPy\n * Tutorials\n* Matplotlib \n * Gallery\nPracticing Programming\n* Project Euler\n* Stanford Nifty Assignments\nOther\n* Github - for social coding\n * STAR 2015 Workshop materials\n * Computing 4 Physics\n* Quandl - open data\n\n7. Conclusion\nThis notebook and workshop provide a basic entry to using the IPython Notebook to introduce and reinforce programming skills in the classroom through project-based learning. There are so many ways to incorporate computing into learning, some general, some specific to particular subjects. In addition to the resources included in this notebook, there are lots of great examples all over the web. Find what interests you and your students, then develop the skills you need as you tackle a project together.\nHappy coding!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jhconning/Dev-II
notebooks/consume_opt1.ipynb
bsd-3-clause
[ "Consumer Choice and Intertemporal Choice\nWe setup and solve a very simple generic one-period consumer choice problem over two goods. We later specialize to the case of intertemporal trade over two periods and choice over lotteries. The consumer is assumed to have time-consistent preofereces. Later, in a separate notebook, we look at a a three period model where consumers have time-inconsistent (quasi-hyperbolic) preferences.\nThe code to generate the static and interactive figures is at the end of this notebook (and should be run first to make this interactive).\nChoice over two goods\nA consumer chooses a consumption bundle to maximize Cobb-Douglas utility \n$$U(c_1,c_2) = c_1^\\alpha c_2^{\\beta}$$\nsubject to the budget constraint\n$$p_1 c_1 + p_2 c_2 \\leq I $$\nTo plot the budget constraint we rearrange to get:\n$$c_2 = \\frac{I}{p_2} - \\frac{p_1}{p_2} c_1$$\nLikewise, to draw the indifference curve defined by $u(c_1,c_2) = \\bar u$ we solve for $c_2$ to get:\n$$c_2 = \\left( \\frac{\\bar u}{c_1^\\alpha}\\right)^\\frac{1}{\\beta}$$\nFor Cobb-Douglas utility the marginal rate of substitution (MRS) between $c_2$ and $c_1$ is:\n$$MRS = \\frac{U_1}{U_2} = \\frac{\\alpha}{\\beta} \\frac{c_2}{c_1}$$\nwhere $U_1 =\\frac{\\partial U}{\\partial c_1}$ and $U_2 =\\frac{\\partial U}{\\partial c_2}$", "consume_plot()", "The consumer's optimum\n$$L(c_1,c_2) = U(c_1,c_2) + \\lambda (I - p_1 c_1 - p_2 c_2) $$\nDifferentiate with respect to $c_1$ and $c_2$ and $\\lambda$ to get:\n$$ U_1 = \\lambda{p_1}$$\n$$ U_2 = \\lambda{p_2}$$\n$$ I = p_1 c_1 + p_2 c_2$$\nDividing the first equation by the second we get the familiar necessary tangency condition for an interior optimum:\n$$MRS = \\frac{U_1}{U_2} =\\frac{p_1}{p_2}$$\nUsing our earlier expression for the MRS of a Cobb-Douglas indifference curve, substituting this into the budget constraint and rearranging then allows us to solve for the Marshallian demand functions:\n$$c_1(p_1,p_2,I)=\\frac{\\alpha}{\\alpha+\\beta} 
\\frac{I}{p_1}$$\n$$c_2(p_1,p_2,I)=\\frac{\\beta}{\\alpha+\\beta} \\frac{I}{p_2}$$\nInteractive plot with sliders (visible if running on a notebook server):", "interact(consume_plot,p1=(pmin,pmax,0.1),p2=(pmin,pmax,0.1), I=(Imin,Imax,10),alpha=(0.05,0.95,0.05));", "The expenditure function\nThe indirect utility function:\n$$v(p_1,p_2,I) = u(c_1(p_1,p_2,I),c_2(p_1,p_2,I))$$\nWith $\\beta = 1-\\alpha$ the Marshallian demands are:\n$$c_1(p_1,p_2,I)=\\alpha \\frac{I}{p_1}$$\n$$c_2(p_1,p_2,I)=(1-\\alpha)\\frac{I}{p_2}$$\n$$v(p_1,p_2,I) = I \\cdot \\alpha^\\alpha (1-\\alpha)^{1-\\alpha} \\cdot \n\\left [ \\frac{p_2}{p_1} \\right ]^\\alpha \\frac{1}{p_2} $$\n$$E(p_1,p_2,\\bar u) = \\frac{\\bar u}{\\alpha^\\alpha (1-\\alpha)^{1-\\alpha} \\cdot \n\\left [ \\frac{p_2}{p_1} \\right ]^\\alpha \\frac{1}{p_2}}$$\nTo minimize the expenditure needed to achieve utility level $\\bar u$ we solve:\n$$\\min_{c_1,c_2} p_1 c_1 + p_2 c_2 $$\ns.t.\n$$u(c_1, c_2) =\\bar u$$\nThe first order conditions are identical to those we got for the utility maximization problem, and hence we have the same tangency condition:\n$$\\frac{U_1}{U_2} = \\frac{\\alpha}{1-\\alpha} \\frac{c_2}{c_1} = \\frac{p_1}{p_2}$$\nFrom this we can solve: $$c_2 = \\frac{p_1}{p_2} \\frac{1-\\alpha}{\\alpha} c_1 $$\nNow substitute this into the constraint to get:\n$$ u \\left ( c_1, \\frac{p_1}{p_2} \\frac{1-\\alpha}{\\alpha} c_1 \\right )=\\bar u$$\nIntertemporal Consumption choices\nWe now look at the special case of intertemporal consumption, or consumption of the same good (say corn) over two periods. As modeled below the consumer's income is now given by the market value of an endowment bundle $(y_1,y_2)$.\nThe variables $c_1$ and $c_2$ now refer to consumption of (say corn) in period 1 and period 2. \nAs is typical of intertemporal maximization problems we will use a time-additive utility function. 
The consumer who has access to a competitive financial market (they can borrow or save at interest rate $r$) maximizes:\n$$U(c_1,c_2) = u(c_1) + \\delta u(c_2)$$\nsubject to the intertemporal budget constraint:\n$$ c_1 + \\frac{c_2}{1+r} = y_1 + \\frac{y_2}{1+r} $$\nThis is just like an ordinary utility maximization problem with prices $p_1 = 1$ and $p_2 =\\frac{1}{1+r}$. Think of it this way: the price of corn is \\$1 per unit in each period, but \\$1 in period 1 can be placed into savings that will grow to $(1+r)$ dollars in period 2. That means that from the standpoint of period 1 owning one unit of corn (or one dollar worth of corn) in period 2 is the equivalent of owning $\\frac{1}{1+r}$ units of corn today (because placed in savings that amount of period 1 corn would grow to $\\frac{1+r}{1+r} = 1$ units of corn in period 2).\nThe first order necessary condition for an interior optimum is:\n$$u'(c_1^*) = \\delta (1+r) u'(c_2^*)$$\nLet's adopt the widely used Constant Relative Risk Aversion (CRRA) felicity function of the form:\n$$\n\\begin{equation}\nu\\left(c_{t}\\right)=\\begin{cases}\n\\frac{c^{1-\\rho}}{1-\\rho}, & \\text{if } \\rho>0 \\text{ and } \\rho \\neq 1 \\\\\nln\\left(c\\right) & \\text{if } \\rho=1\n\\end{cases}\n\\end{equation}\n$$\nThe first order condition then becomes simply\n$${c_1^*}^{-\\rho} = \\delta (1+r) {c_2^*}^{-\\rho}$$\nor \n$$c_2^* = \\left [\\delta (1+r) \\right]^\\frac{1}{\\rho}c_1^* $$\nFrom the binding budget constraint we also have\n$$c_2^* = (E[y]-c_1^*)(1+r)$$\nwhere $E[y] = y_1 + \\frac{y_2}{1+r}$\nSolving for $c_1^*$ (from the FOC and this binding budget):\n$$c_1^* = \\frac{E[y]}{1+\\frac{\\left [\\delta (1+r) \\right]^\\frac{1}{\\rho}}{1+r}}$$\nIn the special case where $\\delta =\\frac{1}{1+r}$, so that the consumer discounts future consumption at the same rate as the market interest rate, the consumer will keep their consumption flat at $c_2^* = c_1^*$. 
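These closed-form expressions are easy to check numerically. The short sketch below recomputes the optimum directly from the formulas above (it mirrors the `find_opt2` function defined in the code section of this notebook):

```python
# Sketch: closed-form CRRA intertemporal optimum (assumes rho != 0)
def crra_optimum(r, rho, delta, y1, y2):
    Ey = y1 + y2 / (1 + r)               # present value of the endowment
    A = (delta * (1 + r)) ** (1 / rho)   # ratio c2*/c1* implied by the first order condition
    c1 = Ey / (1 + A / (1 + r))          # period-1 consumption
    c2 = A * c1                          # period-2 consumption
    return c1, c2

print(crra_optimum(r=0.0, rho=0.5, delta=1.0, y1=80, y2=20))  # (50.0, 50.0)
```

With $r=0$ and $\delta = 1$ the consumer smooths completely, consuming $(y_1+y_2)/2 = 50$ in each period, as derived above.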
If we specialize further and assume that $r=0$ then the consumer will set $c_2^* = c_1^* =\\frac{y_1+y_2}{2}$\nSaving and borrowing visualized\nLet us visualize the situation. The consumer has CRRA preferences as described above (summarized by parameters $\\delta$ and $\\rho$) and starts with an income endowment $(y_1, y_2)$. The market cost of funds is $r$. \nA savings case\nIn the diagram below the consumer is seen to be saving, i.e. $s_1^* = y_1 - c_1^* >0$.", "consume_plot2(r, delta, rho, y1, y2)", "Interactive plot with sliders (visible if running on a notebook server):", "interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));\n\nc1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)", "In this particular case the consumer consumes:", "c1e, c2e", "Her endowment is", "y1,y2", "And she therefore saves", "y1-c1e", "in period 1. \nA borrowing case\nIn the diagram below the consumer is seen to be borrowing, i.e. $s_1^* = y_1 - c_1^* <0$.", "y1,y2 = 20,80\nconsume_plot2(r, delta, rho, y1, y2)", "<a id='codesection'></a>\nCode section\nRun the code below first to re-generate the figures and interactions above.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact, fixed", "Code for simple consumer choice", "def U(c1, c2, alpha):\n    return (c1**alpha)*(c2**(1-alpha))\n\ndef budgetc(c1, p1, p2, I):\n    return (I/p2)-(p1/p2)*c1\n\ndef indif(c1, ubar, alpha):\n    return (ubar/(c1**alpha))**(1/(1-alpha))\n\ndef find_opt(p1,p2,I,alpha):\n    c1 = alpha * I/p1\n    c2 = (1-alpha)*I/p2\n    u = U(c1,c2,alpha)\n    return c1, c2, u", "Parameters for default plot", "alpha = 0.5\np1, p2 = 1, 1\nI = 100\n\npmin, pmax = 1, 4\nImin, Imax = 10, 200\ncmax = (3/4)*Imax/pmin\n\ndef consume_plot(p1=p1, p2=p2, I=I, alpha=alpha):\n    \n    c1 = np.linspace(0.1,cmax,num=100)\n    c1e, c2e, uebar = find_opt(p1, p2 ,I, alpha)\n    idfc = indif(c1, uebar, alpha)\n    budg = budgetc(c1, p1, p2, I)\n    \n    fig, ax = 
plt.subplots(figsize=(8,8))\n    ax.plot(c1, budg, lw=2.5)\n    ax.plot(c1, idfc, lw=2.5)\n    ax.vlines(c1e,0,c2e, linestyles=\"dashed\")\n    ax.hlines(c2e,0,c1e, linestyles=\"dashed\")\n    ax.plot(c1e,c2e,'ob')\n    ax.set_xlim(0, cmax)\n    ax.set_ylim(0, cmax)\n    ax.set_xlabel(r'$c_1$', fontsize=16)\n    ax.set_ylabel('$c_2$', fontsize=16)\n    ax.spines['right'].set_visible(False)\n    ax.spines['top'].set_visible(False)\n    ax.grid()\n    plt.show()", "Code for intertemporal choice model", "def u(c, rho):\n    # CRRA felicity u(c) = c**(1-rho)/(1-rho), as in the text above (assumes rho != 1)\n    return c**(1-rho)/(1-rho)\n\ndef U2(c1, c2, rho, delta):\n    return u(c1, rho) + delta*u(c2, rho)\n\ndef budget2(c1, r, y1, y2):\n    Ey = y1 + y2/(1+r)\n    return Ey*(1+r) - c1*(1+r)\n\ndef indif2(c1, ubar, rho, delta):\n    return ( ((1-rho)/delta)*(ubar - u(c1, rho)) )**(1/(1-rho))\n\ndef find_opt2(r, rho, delta, y1, y2):\n    Ey = y1 + y2/(1+r)\n    A = (delta*(1+r))**(1/rho)\n    c1 = Ey/(1+A/(1+r))\n    c2 = c1*A\n    u = U2(c1, c2, rho, delta)\n    return c1, c2, u", "Parameters for default plot", "rho = 0.5\ndelta = 1\nr = 0\ny1, y2 = 80, 20\n\nrmin, rmax = 0, 1\ncmax = 150\n\ndef consume_plot2(r, delta, rho, y1, y2):\n    \n    c1 = np.linspace(0.1,cmax,num=100)\n    c1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)\n    idfc = indif2(c1, uebar, rho, delta)\n    budg = budget2(c1, r, y1, y2)\n    \n    fig, ax = plt.subplots(figsize=(8,8))\n    ax.plot(c1, budg, lw=2.5)\n    ax.plot(c1, idfc, lw=2.5)\n    ax.vlines(c1e,0,c2e, linestyles=\"dashed\")\n    ax.hlines(c2e,0,c1e, linestyles=\"dashed\")\n    ax.plot(c1e,c2e,'ob')\n    ax.vlines(y1,0,y2, linestyles=\"dashed\")\n    ax.hlines(y2,0,y1, linestyles=\"dashed\")\n    ax.plot(y1,y2,'ob')\n    ax.text(y1-6,y2-6,r'$y^*$',fontsize=16)\n    ax.set_xlim(0, cmax)\n    ax.set_ylim(0, cmax)\n    ax.set_xlabel(r'$c_1$', fontsize=16)\n    ax.set_ylabel('$c_2$', fontsize=16)\n    ax.spines['right'].set_visible(False)\n    ax.spines['top'].set_visible(False)\n    ax.grid()\n    plt.show()", "Interactive plot", "interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NEONScience/NEON-Data-Skills
tutorials/Python/Lidar/intro-lidar/merge_lidar_geotiff_files_py/merge_lidar_geotiff_files_py.ipynb
agpl-3.0
[ "syncID: 52f863b138b14d79a97e91422fc17b4f\ntitle: \"Merging GeoTIFF Files to Create a Mosaic\"\ndescription: \"Learn to merge multiple GeoTIFF files to great a larger area of interest.\" \ndateCreated: 2018-07-05 \nauthors: Bridget Hass, Donal O'Leary\ncontributors: \nestimatedTime: 30 minutes\npackagesLibraries: subprocess, gdal, osgeo, glob, numpy, matplotlib\ntopics: lidar, data-analysis, remote-sensing\nlanguagesTool: python\ndataProduct: NEON.DP3.30015, NEON.DP3.30024, NEON.DP3.30025\ncode1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/intro-lidar/merge_lidar_geotiff_files_py/merge_lidar_geotiff_files_py.py\ntutorialSeries: intro-lidar-py-series\nurlTitle: merge-lidar-geotiff-py\n\nIn your analysis you will likely want to work with an area larger than a single file, from a few tiles to an entire NEON field site. In this tutorial, we will demonstrate how to use the gdal_merge utility to mosaic multiple tiles together. You will need to install GDAL and the GDAL-python bindings on your machine in order to use the code below.\n<div id=\"ds-objectives\" markdown=\"1\">\n\n### Objectives\nAfter completing this tutorial, you will be able to:\n\n* Merge multiple geotif raster tiles into a single mosaicked raster\n* Use the functions `raster2array` to read a tif raster into a Python array\n\n### Install Python Packages\n\n* **subprocess**\n* **glob**\n* **gdal**\n* **osgeo** \n* **matplotlib** \n* **numpy**\n\n\n### Download Data\n\n<h3> NEON Teaching Data Subset: Data Institute 2018</h3> \n\nTo complete these materials, you will use data available from the NEON 2018 Data\nInstitute teaching datasets available for download. 
\n\nThe LiDAR and imagery data used to create this raster teaching data subset \nwere collected over the \n<a href=\"http://www.neonscience.org/\" target=\"_blank\"> National Ecological Observatory Network's</a> \n<a href=\"http://www.neonscience.org/science-design/field-sites/\" target=\"_blank\" >field sites</a>\nand processed at NEON headquarters.\nAll NEON data products can be accessed on the \n<a href=\"http://data.neonscience.org\" target=\"_blank\"> NEON data portal</a>.\n\n<a href=\"https://ndownloader.figshare.com/files/27535799\" target=\"_blank\"class=\"link--button link--arrow\">\nDownload the TEAK Aspect Tiles from FigShare</a>\n</div>\n\nIn your analysis you will likely want to work with an area larger than a single file, from a few tiles to an entire NEON field site. In this tutorial, we will demonstrate how to use the gdal_merge utility to mosaic multiple tiles together. \nThis can be done in command line, or as a system command through Python as shown in this lesson. If you installed Python using Anaconda, you should have gdal_merge.py downloaded into your folder, in a path similar to C:\\Users\\user\\AppData\\Local\\Continuum\\Anaconda3\\Scripts. You can also download it here and save it to your working directory. For details on gdal_merge refer to the <a href=\"http://www.gdal.org/gdal_merge.html\" target=\"_blank\">gdal website</a>.\nWe'll start by importing the following packages:", "import numpy as np\nimport matplotlib.pyplot as plt\nimport os, glob\nfrom osgeo import gdal", "Make a list of files to mosaic using glob.glob, and print the result. In this example, we are selecting all files ending with _aspect.tif in the folder TEAK_Aspect_Tiles. Note that you will need to change this filepath according to your local machine.", "files_to_mosaic = glob.glob('/Users/olearyd/Git/data/TEAK_Aspect_Tiles/*_aspect.tif')\nfiles_to_mosaic", "In order to run the gdal_merge function, we need these files as a series of strings. 
We can get them in the correct format using join:", "files_string = \" \".join(files_to_mosaic)\nprint(files_string)", "Now that we have the list of files we want to mosaic, we can run a system command to combine them into one raster.", "command = \"gdal_merge.py -o /Users/olearyd/Git/data/TEAK_Aspect_Tiles/TEAK_Aspect_Mosaic.tif -of gtiff \" + files_string\nprint(os.popen(command).read())\n\nprint(os.popen('ls /Users/olearyd/Git/data/TEAK_Aspect_Tiles/').read())", "Great! It looks like GDAL merged the files together into the TEAK_Aspect_Mosaic.tif file. Worth pointing out here is that the gdal_merge function has a LOT of options and is extremely powerful and flexible. We suggest that you <a href=\"https://gdal.org/programs/gdal_merge.html\" target=\"_blank\">read the GDAL function documentation here</a> and experiment with your own commands. This may be easier to practice first in the command line, but integrating python scripts and command line functions here (as when using the os.system() function) is incredibly useful for processing large datasets.\nNow we can define and then use the function raster2array to read in the mosaicked array. This function converts the geotif file into an array, and also stores relevant metadata (e.g. spatial information) into the dictionary metadata. Load or import this function into your cell with %load raster2array. 
Note that this function requires the imported packages at the beginning of this notebook in order to run.", "def raster2array(geotif_file):\n    metadata = {}\n    dataset = gdal.Open(geotif_file)\n    metadata['array_rows'] = dataset.RasterYSize\n    metadata['array_cols'] = dataset.RasterXSize\n    metadata['bands'] = dataset.RasterCount\n    metadata['driver'] = dataset.GetDriver().LongName\n    metadata['projection'] = dataset.GetProjection()\n    metadata['geotransform'] = dataset.GetGeoTransform()\n    \n    mapinfo = dataset.GetGeoTransform()\n    metadata['pixelWidth'] = mapinfo[1]\n    metadata['pixelHeight'] = mapinfo[5]\n\n    # extent = origin + (number of pixels * pixel size); mapinfo[5] is negative for north-up rasters\n    xMin = mapinfo[0]\n    xMax = mapinfo[0] + dataset.RasterXSize*mapinfo[1]\n    yMin = mapinfo[3] + dataset.RasterYSize*mapinfo[5]\n    yMax = mapinfo[3]\n    \n    metadata['extent'] = (xMin,xMax,yMin,yMax)\n    \n    raster = dataset.GetRasterBand(1)\n    array_shape = raster.ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(float).shape\n    metadata['noDataValue'] = raster.GetNoDataValue()\n    metadata['scaleFactor'] = raster.GetScale()\n    \n    array = np.zeros((array_shape[0],array_shape[1],dataset.RasterCount),'uint8') #pre-allocate stackedArray matrix\n    \n    if metadata['bands'] == 1:\n        raster = dataset.GetRasterBand(1)\n        metadata['noDataValue'] = raster.GetNoDataValue()\n        metadata['scaleFactor'] = raster.GetScale()\n        \n        array = dataset.GetRasterBand(1).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(float)\n        #array[np.where(array==metadata['noDataValue'])]=np.nan\n        array = array/metadata['scaleFactor']\n    \n    elif metadata['bands'] > 1:    \n        for i in range(1, dataset.RasterCount+1):\n            band = dataset.GetRasterBand(i).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(float)\n            #band[np.where(band==metadata['noDataValue'])]=np.nan\n            band = band/metadata['scaleFactor']\n            array[...,i-1] = band\n\n    return array, metadata", "We can call this function as follows:", "TEAK_aspect_array, TEAK_aspect_metadata = 
raster2array('/Users/olearyd/Git/Data/TEAK_Aspect_Tiles/TEAK_Aspect_Mosaic.tif')", "Look at the size of the mosaicked tile using .shape. Since we created a mosaic of four 1000m x 1000m tiles, we expect the new tile to be 2000m x 2000m", "TEAK_aspect_array.shape", "Let's take a look at the contents of the metadata dictionary:", "#print metadata in alphabetical order\nfor item in sorted(TEAK_aspect_metadata):\n print(item + ':', TEAK_aspect_metadata[item])", "Load the function plot_array to plot the array:", "def plot_array(array,spatial_extent,colorlimit,ax=plt.gca(),title='',cmap_title='',colormap=''):\n plot = plt.imshow(array,extent=spatial_extent,clim=colorlimit); \n cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap); \n cbar.set_label(cmap_title,rotation=90,labelpad=20);\n plt.title(title); ax = plt.gca(); \n ax.ticklabel_format(useOffset=False, style='plain'); \n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); ", "Finally, let's take a look at a plot of the tile mosaic:", "plot_array(TEAK_aspect_array,\n TEAK_aspect_metadata['extent'],\n (0,360),\n title='TEAK Aspect',\n cmap_title='Aspect, degrees',\n colormap='jet')", "Challenges\n\nUse the function raster2array to read in and plot each tile separately. Confirm that the mosaicked raster looks reasonable. \nDownload 9 adjacent tiles of another LiDAR L3 data product of your choice and use gdal_merge to combine them. You can find NEON data on the <a href=\"http://data.neonscience.org/home\" target=\"_blank\">NEON Data Portal</a> or NEON's Citrix FileShare system." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bhermanmit/openmc
docs/source/examples/mdgxs-part-ii.ipynb
mit
[ "This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:\n\nCalculation of multi-energy-group and multi-delayed-group cross sections for a fuel assembly\nAutomated creation, manipulation and storage of MGXS with openmc.mgxs.Library\nSteady-state pin-by-pin delayed neutron fractions (beta) for each delayed group.\nGeneration of surface currents on the interfaces and surfaces of a Mesh.\n\nGenerate Input Files", "import math\nimport pickle\n\nfrom IPython.display import Image\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport openmc\nimport openmc.mgxs\n\n%matplotlib inline", "First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.", "# Instantiate some Nuclides\nh1 = openmc.Nuclide('H1')\nb10 = openmc.Nuclide('B10')\no16 = openmc.Nuclide('O16')\nu235 = openmc.Nuclide('U235')\nu238 = openmc.Nuclide('U238')\nzr90 = openmc.Nuclide('Zr90')", "With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pins.", "# 1.6 enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide(u235, 3.7503e-4)\nfuel.add_nuclide(u238, 2.2625e-2)\nfuel.add_nuclide(o16, 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide(h1, 4.9457e-2)\nwater.add_nuclide(o16, 2.4732e-2)\nwater.add_nuclide(b10, 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide(zr90, 7.2758e-3)", "With our three materials, we can now create a Materials object that can be exported to an actual XML file.", "# 
Instantiate a Materials object\nmaterials_file = openmc.Materials((fuel, water, zircaloy))\nmaterials_file.default_xs = '71c'\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()", "Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.", "# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\nmin_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')\nmin_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')\nmax_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-10., boundary_type='reflective')\nmax_z = openmc.ZPlane(z0=+10., boundary_type='reflective')", "With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.", "# Create a Universe to encapsulate a fuel pin\nfuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\nfuel_pin_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\nfuel_pin_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius\nfuel_pin_universe.add_cell(moderator_cell)", 
"Likewise, we can construct a control rod guide tube with the same surfaces.", "# Create a Universe to encapsulate a control rod guide tube\nguide_tube_universe = openmc.Universe(name='Guide Tube')\n\n# Create guide tube Cell\nguide_tube_cell = openmc.Cell(name='Guide Tube Water')\nguide_tube_cell.fill = water\nguide_tube_cell.region = -fuel_outer_radius\nguide_tube_universe.add_cell(guide_tube_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='Guide Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\nguide_tube_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='Guide Tube Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius\nguide_tube_universe.add_cell(moderator_cell)", "Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.", "# Create fuel assembly Lattice\nassembly = openmc.RectLattice(name='1.6% Fuel Assembly')\nassembly.pitch = (1.26, 1.26)\nassembly.lower_left = [-1.26 * 17. / 2.0] * 2", "Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.", "# Create array indices for guide tube locations in lattice\ntemplate_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,\n 11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])\ntemplate_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,\n 8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])\n\n# Initialize an empty 17x17 array of the lattice universes\nuniverses = np.empty((17, 17), dtype=openmc.Universe)\n\n# Fill the array with the fuel pin and guide tube universes\nuniverses[:,:] = fuel_pin_universe\nuniverses[template_x, template_y] = guide_tube_universe\n\n# Store the array of universes in the lattice\nassembly.universes = universes", "OpenMC requires that there is a \"root\" universe. 
Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.", "# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = assembly\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(root_cell)", "We now must create a geometry that is assigned a root universe and export it to XML.", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry()\ngeometry.root_universe = root_universe\n\n# Export to \"geometry.xml\"\ngeometry.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.", "# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 2500\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': False}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.source.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully.", "# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.pixels = [250, 250]\nplot.width = [-10.71*2, -10.71*2]\nplot.color = 'mat'\n\n# Instantiate a Plots object, add Plot, and export to \"plots.xml\"\nplot_file = openmc.Plots([plot])\nplot_file.export_to_xml()", "With the plots.xml file, we can now 
generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.", "# Run openmc in plotting mode\nopenmc.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')", "As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!\nCreate an MGXS Library\nNow we are ready to generate multi-group cross sections! First, let's define 20-energy-group, 1-energy-group, and 6-delayed-group structures.", "# Instantiate a 20-group EnergyGroups object\nenergy_groups = openmc.mgxs.EnergyGroups()\nenergy_groups.group_edges = np.logspace(-3, 7.3, 21)\n\n# Instantiate a 1-group EnergyGroups object\none_group = openmc.mgxs.EnergyGroups()\none_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])\n\n# Instantiate a 6-delayed-group list\ndelayed_groups = list(range(1,7))", "Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.", "# Instantiate a tally mesh \nmesh = openmc.Mesh(mesh_id=1)\nmesh.type = 'regular'\nmesh.dimension = [17, 17, 1]\nmesh.lower_left = [-10.71, -10.71, -10000.]\nmesh.width = [1.26, 1.26, 20000.]\n\n# Initialize an 20-energy-group and 6-delayed-group MGXS Library\nmgxs_lib = openmc.mgxs.Library(geometry)\nmgxs_lib.energy_groups = energy_groups\nmgxs_lib.delayed_groups = delayed_groups\n\n# Specify multi-group cross section types to compute\nmgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',\n 'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']\n\n# Specify a \"mesh\" domain type for the cross section tally filters\nmgxs_lib.domain_type = 'mesh'\n\n# Specify the mesh domain over which to compute multi-group cross 
sections\nmgxs_lib.domains = [mesh]\n\n# Construct all tallies needed for the multi-group cross section library\nmgxs_lib.build_library()\n\n# Create a \"tallies.xml\" file for the MGXS Library\ntallies_file = openmc.Tallies()\nmgxs_lib.add_to_tallies_file(tallies_file, merge=True)\n\n# Instantiate a current tally\nmesh_filter = openmc.MeshFilter(mesh)\ncurrent_tally = openmc.Tally(name='current tally')\ncurrent_tally.scores = ['current']\ncurrent_tally.filters = [mesh_filter]\n\n# Add current tally to the tallies file\ntallies_file.append(current_tally)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now, we can run OpenMC to generate the cross sections.", "# Run OpenMC\nopenmc.run()", "Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.", "# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.50.h5')", "The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under the hood.", "# Initialize MGXS Library with OpenMC statepoint data\nmgxs_lib.load_from_statepoint(sp)\n\n# Extract the current tally separately\ncurrent_tally = sp.get_tally(name='current tally')", "Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations\nFinally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a \"derived\" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. 
For example, we can use tally arithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations:\n$$\\frac{\\partial}{\\partial t} C_{k,d} (t) = \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t)\\Phi(\\mathbf{r},E',t) - \\lambda_{d} C_{k,d} (t) $$\n$$C_{k,d} (t=0) = \\frac{1}{\\lambda_{d}} \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t=0) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t=0)\\Phi(\\mathbf{r},E',t=0) $$", "# Set the half-lives (in seconds) of the delayed precursor groups; the decay\n# constants (in s^-1) follow as lambda = ln(2) / t_half\nprecursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])\nprecursor_lambda = -np.log(0.5) / precursor_halflife\n\nbeta = mgxs_lib.get_mgxs(mesh, 'beta')\n\n# Create a tally object with only the delayed group filter for the time constants\nbeta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]\nlambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)\nfor f in beta_filters:\n lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. 
+ 1.\n\n# Set the mean of the lambda tally and reshape to account for nuclides and scores\nlambda_tally._mean = precursor_lambda\nlambda_tally._mean.shape = lambda_tally.std_dev.shape\n\n# Set a total nuclide and lambda score\nlambda_tally.nuclides = [openmc.Nuclide(name='total')]\nlambda_tally.scores = ['lambda']\n\ndelayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')\n\n# Use tally arithmetic to compute the precursor concentrations\nprecursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \\\n delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally\n \n# The difference is a derived tally which can generate Pandas DataFrames for inspection\nprecursor_conc.get_pandas_dataframe().head(10)", "Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.", "current_tally.get_pandas_dataframe().head(10)", "Cross Section Visualizations\nIn addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. 
Below is the delayed neutron fraction tallied in each mesh cell for each delayed group.", "# Extract the energy-condensed delayed neutron fraction tally\nbeta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)\nbeta_by_group.mean.shape = (17, 17, 6)\nbeta_by_group.mean[beta_by_group.mean == 0] = np.nan\n\n# Plot the betas\nplt.figure(figsize=(18,9))\nfig = plt.subplot(231)\nplt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 1')\n\nfig = plt.subplot(232)\nplt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 2')\n\nfig = plt.subplot(233)\nplt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 3')\n\nfig = plt.subplot(234)\nplt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 4')\n\nfig = plt.subplot(235)\nplt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 5')\n\nfig = plt.subplot(236)\nplt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')\nplt.colorbar()\nplt.title('Beta - delayed group 6')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kidpixo/multibinner
examples/example_multibinner.ipynb
mit
[ "import matplotlib\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport collections\nimport multibinner as mb\n\nfrom skimage import io\nimage = np.flipud(io.imread('https://media4.giphy.com/media/S3mBspMr0r5HW/200_s.gif'))", "Dataset\nInitial data are read from an image, then n_data samples will be extracted from the data.\nThe image contains 200x200 = 40k pixels\nWe will extract 400k random points from the image and build a pandas.DataFrame\nThis mimics the sampling process of a spacecraft for example : looking at a target (Earth or another body) and getting way more data points you need to reconstruct a coherent representation.\nMoreover, visualize 400k x 3 columns of point is difficult, thus we will multibin the DataFrame to 200 bins on the x and 200 on the y direction, calculate the average for each bin and return 200x200 array of data in output.\nThe multibin.MultiBinnedDataFrame could generate as many dimension as one like, the 2D example here is for the sake of representation.", "image_df = pd.DataFrame(image.reshape(-1,image.shape[-1]),columns=['red','green','blue'])\nimage_df.describe()\n\nn_data = image.reshape(-1,image.shape[-1]).shape[0]*10 # 10 times the original number of pixels : overkill!\nx = np.random.random_sample(n_data)*image.shape[1]\ny = np.random.random_sample(n_data)*image.shape[0]\n\ndata = pd.DataFrame({'x' : x, 'y' : y })\n\n# extract the random point from the original image and add some noise\nfor index,name in zip(*(range(image.shape[-1]),['red','green','blue'])):\n data[name] = image[data.y.astype(int),data.x.astype(int),index]+np.random.rand(n_data)*.1\n\ndata.describe().T", "Data Visualization\n[It is a downsampled version of the dataset, the full version would take around 1 minute per plot to visualize...]\nDoes this dataset make sense for you? 
Can you guess the original image?", "pd.tools.plotting.scatter_matrix(data.sample(n=1000), alpha=0.5 , lw=0, figsize=(12, 12), diagonal='hist');\n\n# Let's do some multibinning!\n\n# functions we want to apply on the data in a single multidimensional bin:\naggregated_functions = {\n 'red' : {'elements' : len ,'average' : np.average},\n 'green' : {'average' : np.average},\n 'blue' : {'average' : np.average}\n }\n\n# the columns we want to have in output:\nout_columns = ['red','green','blue']\n\n# define the bins for the y and x coordinates\ngroup_variables = collections.OrderedDict([\n ('y',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[0], 'n_bins' : image.shape[0]})),\n ('x',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[1], 'n_bins' : image.shape[1]}))\n ])\n# I use OrderedDict to have fixed order, a normal dict is fine too.\n\n# that is the object collecting all the data that define the multi binning\nmbdf = mb.MultiBinnedDataFrame(binstocolumns = True,\n dataframe = data,\n group_variables = group_variables,\n aggregated_functions = aggregated_functions,\n out_columns = out_columns)\n\nmbdf.MBDataFrame.describe().T\n\n# reconstruct the multidimensional array defined by group_variables\noutstring = []\n\nfor key,val in mbdf.group_variables.iteritems():\n outstring.append('{} bins ({})'.format(val['n_bins'],key))\n\nkey = 'red_average'\n\nprint '{} array = {}'.format(key,' x '.join(outstring))\nprint \nprint mbdf.col_df_to_array(key)\n\nfig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(figsize=[16,10], ncols=2, nrows=2)\n\ncm = plt.get_cmap('jet')\n\nkey = 'red_elements'\nimgplot = ax1.imshow(mbdf.col_df_to_array(key), cmap = cm, \n interpolation='none',origin='lower')\nplt.colorbar(imgplot, orientation='vertical', ax = ax1)\nax1.set_title('elements per bin')\nax1.grid(False) \n\nkey = 'red_average'\nimgplot = ax2.imshow(mbdf.col_df_to_array(key), cmap = cm,\n interpolation='none',origin='lower')\nplt.colorbar(imgplot, orientation='vertical', ax = 
ax2)\nax2.set_title(key)\nax2.grid(False) \n\nkey = 'green_average'\nimgplot = ax3.imshow(mbdf.col_df_to_array(key), cmap = cm, \n interpolation='none',origin='lower')\nplt.colorbar(imgplot, orientation='vertical', ax = ax3)\nax3.set_title(key)\nax3.grid(False) \n\nkey = 'blue_average'\nimgplot = ax4.imshow(mbdf.col_df_to_array(key), cmap = cm, \n interpolation='none',origin='lower')\nplt.colorbar(imgplot, orientation='vertical', ax = ax4)\nax4.set_title(key)\nax4.grid(False) \n\nrgb_image_dict = mbdf.all_df_to_array()\n\nrgb_image = rgb_image_dict['red_average']\n\nfor name in ['green_average','blue_average']:\n rgb_image = np.dstack((rgb_image,rgb_image_dict[name]))\n\nfig, (ax1,ax2) = plt.subplots(figsize=[16,10], ncols=2)\nax1.imshow(255-rgb_image,interpolation='bicubic',origin='lower')\nax1.set_title('MultiBinnedDataFrame')\n\nax2.imshow(image ,interpolation='bicubic',origin='lower')\nax2.set_title('Original Image')", "In the images above, the original is on the right and on the left is the result of picking 400k random points from the image, rebinning to 200x200 on the (x, y) columns, and calculating the average in each of the resulting 40k bins.\nEach bin contains from 1 to 29 points (10 on average).\nThanks from me and Mario!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.19/_downloads/ef405502e399e876a3a866ae1539bb30/plot_object_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "The :class:Epochs &lt;mne.Epochs&gt; data structure: epoched data\n:class:Epochs &lt;mne.Epochs&gt; objects are a way of representing continuous\ndata as a collection of time-locked trials, stored in an array of shape\n(n_events, n_channels, n_times). They are useful for many statistical\nmethods in neuroscience, and make it easy to quickly overview what occurs\nduring a trial.", "import mne\nimport os.path as op\nimport numpy as np\nfrom matplotlib import pyplot as plt", ":class:Epochs &lt;mne.Epochs&gt; objects can be created in three ways:\n 1. From a :class:Raw &lt;mne.io.Raw&gt; object, along with event times\n 2. From an :class:Epochs &lt;mne.Epochs&gt; object that has been saved as a\n .fif file\n 3. From scratch using :class:EpochsArray &lt;mne.EpochsArray&gt;. See\n tut_creating_data_structures", "data_path = mne.datasets.sample.data_path()\n# Load a dataset that contains events\nraw = mne.io.read_raw_fif(\n op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))\n\n# If your raw object has a stim channel, you can construct an event array\n# easily\nevents = mne.find_events(raw, stim_channel='STI 014')\n\n# Show the number of events (number of rows)\nprint('Number of events:', len(events))\n\n# Show all unique event codes (3rd column)\nprint('Unique event codes:', np.unique(events[:, 2]))\n\n# Specify event codes of interest with descriptive labels.\n# This dataset also has visual left (3) and right (4) events, but\n# to save time and memory we'll just look at the auditory conditions\n# for now.\nevent_id = {'Auditory/Left': 1, 'Auditory/Right': 2}", "Now, we can create an :class:mne.Epochs object with the events we've\nextracted. Note that epochs constructed in this manner will not have their\ndata available until explicitly read into memory, which you can do with\n:func:get_data &lt;mne.Epochs.get_data&gt;. 
Alternatively, you can use\npreload=True.\nExpose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event\nonsets", "epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,\n baseline=(None, 0), preload=True)\nprint(epochs)", "Epochs behave similarly to :class:mne.io.Raw objects. They have an\n:class:info &lt;mne.Info&gt; attribute that has all of the same\ninformation, as well as a number of attributes unique to the events contained\nwithin the object.", "print(epochs.events[:3])\nprint(epochs.event_id)", "You can select subsets of epochs by indexing the :class:Epochs &lt;mne.Epochs&gt;\nobject directly. Alternatively, if you have epoch names specified in\nevent_id then you may index with strings instead.", "print(epochs[1:5])\nprint(epochs['Auditory/Right'])", "Note the '/'s in the event code labels. These separators allow tag-based\nselection of epoch sets; every string separated by '/' can be entered, and\nreturns the subset of epochs matching any of the strings. E.g.,", "print(epochs['Right'])\nprint(epochs['Right', 'Left'])", "Note that MNE will not complain if you ask for tags not present in the\nobject, as long as it can find some match: the below example is parsed as\n(inclusive) 'Right' OR 'Left'. However, if no match is found, an error is\nreturned.", "epochs_r = epochs['Right']\nepochs_still_only_r = epochs_r[['Right', 'Left']]\nprint(epochs_still_only_r)\n\ntry:\n epochs_still_only_r[\"Left\"]\nexcept KeyError:\n print(\"Tag-based selection without any matches raises a KeyError!\")", "It is also possible to iterate through :class:Epochs &lt;mne.Epochs&gt; objects\nin this way. 
Note that behavior is different if you iterate on Epochs\ndirectly rather than indexing:", "# These will be epochs objects\nfor i in range(3):\n print(epochs[i])\n\n# These will be arrays\nfor ep in epochs[:2]:\n print(ep)", "You can manually remove epochs from the Epochs object by using\n:func:epochs.drop(idx) &lt;mne.Epochs.drop&gt;, or by using rejection or flat\nthresholds with :func:epochs.drop_bad(reject, flat) &lt;mne.Epochs.drop_bad&gt;.\nYou can also inspect the reason why epochs were dropped by looking at the\nlist stored in epochs.drop_log or plot them with\n:func:epochs.plot_drop_log() &lt;mne.Epochs.plot_drop_log&gt;. The indices\nfrom the original set of events are stored in epochs.selection.", "epochs.drop([0], reason='User reason')\nepochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)\nprint(epochs.drop_log)\nepochs.plot_drop_log()\nprint('Selection from original events:\\n%s' % epochs.selection)\nprint('Removed events (from numpy setdiff1d):\\n%s'\n % (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))\nprint('Removed events (from list comprehension -- should match!):\\n%s'\n % ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))", "If you wish to save the epochs as a file, you can do it with\n:func:mne.Epochs.save. To conform to MNE naming conventions, the\nepochs file names should end with '-epo.fif'.", "epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')\nepochs.save(epochs_fname, overwrite=True)", "Later on you can read the epochs with :func:mne.read_epochs. For reading\nEEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use\npreload=False to save memory, loading the epochs from disk on demand.", "epochs = mne.read_epochs(epochs_fname, preload=False)", "If you wish to look at the average across trial types, then you may do so,\ncreating an :class:Evoked &lt;mne.Evoked&gt; object in the process. 
Instances\nof Evoked are usually created by calling :func:mne.Epochs.average. For\ncreating Evoked from other data structures see :class:mne.EvokedArray and\ntut_creating_data_structures.", "ev_left = epochs['Auditory/Left'].average()\nev_right = epochs['Auditory/Right'].average()\n\nf, axs = plt.subplots(3, 2, figsize=(10, 5))\n_ = f.suptitle('Left / Right auditory', fontsize=20)\n_ = ev_left.plot(axes=axs[:, 0], show=False, time_unit='s')\n_ = ev_right.plot(axes=axs[:, 1], show=False, time_unit='s')\nplt.tight_layout()", "To export and manipulate Epochs using Pandas see\ntut-epochs-dataframe,\nor to work directly with metadata in MNE-Python see\ntut-epochs-metadata." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eshlykov/mipt-day-after-day
labs/term-6/lab-4-1.ipynb
unlicense
[ "Изучение спектров атома водорода и молекулы йода", "import numpy as np; import scipy as sps; import matplotlib.pyplot as plt; import pandas as pd\n%matplotlib inline", "Неон и ртуть", "table_1 = pd.read_excel('lab-4-1.xlsx', '1'); table_1.iloc[:, :4]", "Водород", "table_2 = pd.read_excel('lab-4-1.xlsx', '2'); table_2.iloc[:, :]\n\ndegrees = table_1.values[:, 0].tolist()[::-1]; len_waves = table_1.values[:, 3][::-1]; red, blue, violet = table_2.values[:, 0]\n\nplt.figure(figsize=(16, 8)); plt.title('Длина волны в зависимости от поворота', fontsize=18); plt.grid(ls='-')\nplt.plot(degrees, len_waves, lw=2, label='Длина волны', color='black')\nplt.xlabel('Градусы на барабане, у. е.', fontsize=15); plt.ylabel('Длина волны, Å', fontsize=15)\nplt.xlim((750, 2600)); plt.ylim((4000, 7100))\nplt.vlines(red, 4000, 7100, color='red'); plt.vlines(blue, 4000, 7100, color='blue'); plt.vlines(violet, 4000, 7100, color='violet')\nplt.errorbar(degrees, len_waves, xerr=[15] * 30, fmt='o', color='black')\nplt.show()\n\ndef max_lt(seq, val):\n res = 0\n while seq[res] < val:\n res += 1\n if res > 0:\n res -= 1\n return res\ndef count_len(x):\n l = max_lt(degrees, x)\n r = l + 1\n coef = (len_waves[r] - len_waves[l]) / (degrees[r] - degrees[l])\n return len_waves[l] + coef * (x - degrees[l])\n\nlred, lblue, lviolet = count_len(red), count_len(blue), count_len(violet)\n\nprint('Длина волны красного света, Å -', round(lred))\nprint('Длина волны синего света, Å -', round(lblue))\nprint('Длина волны фиолетового света, Å -', round(lviolet))", "Погрешность измерения барабана 12 градусов. 
From this we find the uncertainties of the measured wavelengths:\n$$H_{\\alpha} = 4033 \\pm 120~Å,$$ $$H_{\\beta} = 4334 \\pm 60~Å,$$ $$H_{\\delta} = 6554 \\pm 36~Å$$\nLet us compare with reality, using the formula $\\frac{1}{\\lambda} = R(\\frac{1}{4} - \\frac{1}{n^2})$, $n = 3, 5, 6$, $R = 109737 ~ cm^{-1}$", "R_real = 109737\nlred_real = (R_real * (0.25 - 1 / 9)) ** (-1) * 10 ** 8\nlblue_real = (R_real * (0.25 - 1 / 25)) ** (-1) * 10 ** 8\nlviolet_real = (R_real * (0.25 - 1 / 36)) ** (-1) * 10 ** 8\nprint(\"Red line wavelength, Å -\", round(lred_real))\nprint(\"Blue line wavelength, Å -\", round(lblue_real))\nprint(\"Violet line wavelength, Å -\", round(lviolet_real))", "Determining the Rydberg constant: from the formula above, $R = \\frac{4n^2}{\\lambda (n^2 - 4)}$", "Rred = 4 * 9 / (lred * 5) * 10 ** 8\nRblue = 4 * 25 / (lblue * 21) * 10 ** 8\nRviolet = 4 * 36 / (lviolet * 32) * 10 ** 8\nR = (Rred + Rblue + Rviolet) / 3\nprint('Constant for red light, 1/cm -', round(Rred))\nprint('Constant for blue light, 1/cm -', round(Rblue))\nprint('Constant for violet light, 1/cm -', round(Rviolet))\nprint('Mean value, 1/cm -', round(R))\nprint('True value, 1/cm -', round(R_real))", "The uncertainty of the Rydberg constant depends only on the drum reading uncertainty, whence we obtain:\n$$R_{\\text{red}} = (1.09 \\pm 0.04) \\cdot 10^5 ~ cm^{-1}$$\n$$R_{\\text{blue}} = (1.09 \\pm 0.04) \\cdot 10^5 ~ cm^{-1}$$\n$$R_{\\text{violet}} = (1.11 \\pm 0.06) \\cdot 10^5 ~ cm^{-1}$$\n$$R = (1.09 \\pm 0.04) \\cdot 10^5 ~ cm^{-1}$$\nIodine", "l10 = 5401 + (5852 - 5401) * (2320 - 2248) / (2505 - 2248)\nl15 = 5401 + (5852 - 5401) * (2245 - 2248) / (2505 - 2248)\nlgr = 5401 + (5852 - 5401) * (2176 - 2248) / (2505 - 2248)\nprint('Wavelength for n_10, Å -', round(l10))\nprint('Wavelength for n_15, Å -', round(l15))\nprint('Wavelength for n_gr, Å -', round(lgr))", "The energy of the vibrational quantum of the excited state of the iodine molecule: $h\\nu_2 = (h\\nu_{1,5} - 
h\\nu_{1,0}) / 5$, $\\nu=\\frac{c}{\\lambda}$, $c = 3 \\cdot 10^8 ~ m/s$, $h = 4.13 \\cdot 10^{-15} ~ eV \\cdot s$", "h = 4.13 * 10 ** -15; c = 3 * 10 ** 8\nnu10 = c / (l10 * 10 ** -10); nu15 = c / (l15 * 10 ** -10); nugr = c / (lgr * 10 ** -10); hnu2 = h * (nu15 - nu10) / 5\nprint('Energy, eV -', round(hnu2, 3))", "a) Electronic transition energy: $h\\nu_{el} = h\\nu_{1,0} + h\\nu_1$\nb) Dissociation energy of the molecule in the ground state: $D_2 = h\\nu_{gr} - h\\nu_1$\nc) Dissociation energy of the molecule in the excited state: $D_1 = D_2 - E_A$\nHere $h\\nu_1 = 0.027 ~ eV$ and $E_A = 0.94 ~ eV$", "hnu1 = 0.027; E_A = 0.94\nhnuel = h * nu10 + hnu1; D2 = h * nugr - hnu1; D1 = D2 - E_A \nprint('Electronic transition energy, eV -', round(hnuel, 2))\nprint('Dissociation energy of the molecule in the ground state, eV -', round(D2, 2))\nprint('Dissociation energy of the molecule in the excited state, eV -', round(D1, 2))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bjodah/pyodesys
examples/van_der_pol_formulations.ipynb
bsd-2-clause
[ "Van der Pol oscillator\nWe will look at the second order differentual equation (see https://en.wikipedia.org/wiki/Van_der_Pol_oscillator):\n$$\n{d^2y_0 \\over dx^2}-\\mu(1-y_0^2){dy_0 \\over dx}+y_0= 0\n$$", "from __future__ import division, print_function\nimport itertools\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\n#from pyodesys.native.gsl import NativeGSLSys as SymbolicSys\nfrom pyodesys.native.cvode import NativeCvodeSys as SymbolicSys\nsp.init_printing()\n%matplotlib inline\nprint(sp.__version__)", "Note that we imported NativeCvodeSys as SymbolicSys, this speed up the time of integration by more than an order of magnitude due to using compiled C++ code for our mathematical expressions.\nOne way to reduce the order of our second order differential equation is to formulate a system of first order ODEs, using:\n$$ y_1 = \\dot y_0 $$\nwhich gives us:\n$$\n\\begin{cases}\n\\dot y_0 = y_1 \\\n\\dot y_1 = \\mu(1-y_0^2) y_1-y_0\n\\end{cases}\n$$\nLet's call this system of ordinary differential equations vdp1:", "vdp1 = lambda x, y, p: [y[1], -y[0] + p[0]*y[1]*(1 - y[0]**2)]\nmu_val = 2.5\ny0_1 = [0.0, 1.0]\ny0_1, (y0_1[0], vdp1(0, y0_1, [mu_val])[0])", "An alternative would be to use use the Liénard transformation:\n$$ y = x - x^3/3 - \\dot x/\\mu $$", "transf = lambda y, dydx, p: [y[0], y[0] - y[0]**3/3 - dydx[0]/p[0]]\nx, mu = sp.symbols('x mu', real=True)\ny = [yi(x) for yi in sp.symbols('y:2', cls=sp.Function)]\ndydx = [yi.diff(x) for yi in y]\n[sp.Eq(yi, expr, evaluate=False) for yi, expr in zip(y, transf(y, dydx, [mu]))] # Just for displaying", "which gives us (we could generate this result using SymPy):\n$$\n\\begin{cases}\n\\dot y_0 = \\mu \\left(y_0-\\frac{1}{3}y_0^3-y_1\\right) \\\n\\dot y_1 = \\frac{1}{\\mu} y_0\n\\end{cases}\n$$", "vdp2 = lambda x, y, p: [p[0]*(y[0] - y[0]**3/3 - y[1]), y[0]/p[0]]\ncalc_y0_2 = lambda y0, mu: transf(y0, vdp1(0, y0, [mu]), [mu])\ny0_2 = calc_y0_2(y0_1, mu_val)\n(y0_2, y0_2[0], vdp2(0, y0_2, 
[mu_val])[0])\n\ndef solve_and_plot(odesys, y0, tout, mu, indices=None, integrator='native', **kwargs):\n plt.figure(figsize=(16, 4))\n xout, yout, info = odesys.integrate(tout, y0, [mu], integrator=integrator, **kwargs)\n plt.subplot(1, 2, 1)\n odesys.plot_result(indices=indices, ls=('-',), c=('k', 'r'))\n plt.legend(loc='best')\n plt.subplot(1, 2, 2)\n odesys.plot_phase_plane()\n info.pop('internal_xout') # too much output\n info.pop('internal_yout')\n return len(xout), info\n\ntend = 25\n\nodesys1 = SymbolicSys.from_callback(vdp1, 2, 1, names='y0 y1'.split())\nodesys1.exprs\n\nfor mu in [0, 3, 9]:\n solve_and_plot(odesys1, y0_1, np.linspace(0, tend, 500), mu)", "As we see, the period ($\\tau$) varies with $\\mu$, in 1952 Mary Cartwright derived an approximate formula for $\\tau$ (valid for large $\\mu$):\n$$\n\\tau = (3 - 2 \\ln 2)\\mu + 2 \\alpha \\mu^{-1/3}\n$$\nwhere $\\alpha \\approx 2.338$", "tau = lambda mu: 1.6137056388801094*mu + 4.676*mu**(-1./3)\nfor mu in [20, 40, 60]:\n solve_and_plot(odesys1, y0_1, np.linspace(0, 5*tau(mu), 500), mu)", "For larger values of $\\mu$ we run into trouble (the numerical solver fails).\nThe phase portrait is not well resolved due to rapid variations in y1.\nLet us look at our alternative formulation:", "odesys2 = SymbolicSys.from_callback(vdp2, 2, 1, names='y0 y1'.split())\nodesys2.exprs\n\nsolve_and_plot(odesys2, y0_2, tend, mu_val, nsteps=2000)", "This looks much better. 
Let's see if the solver has an easier time dealing with this formulation of y2 for large values of $\\mu$:", "ls = itertools.cycle(('-', '--', ':'))\nfor mu in [84, 160, 320]:\n y0_2 = calc_y0_2(y0_1, mu)\n print(y0_2)\n solve_and_plot(odesys2, y0_2, np.linspace(0, 5*tau(mu), 500), mu)", "Indeed it has.\nStiffness\nLet us compare the performance of explicit and implicit steppers (Adams and BDF respectively) for varying values of $\\mu$", "solve_and_plot(odesys2, calc_y0_2(y0_1, mu_val), tend, mu_val, nsteps=2000)\nJ = odesys2.get_jac()\nJ", "For this simple system we can afford to calculate the eigenvalues analytically", "odesys2._NativeCode._written_files\n\nsymbs = odesys2.dep + tuple(odesys2.params)\nsymbs\n\nJeig = J.eigenvals().keys()\neig_cbs = [sp.lambdify(symbs, eig, modules='numpy') for eig in Jeig]\nJeig\n\neigvals = np.array([(eig_cbs[0](*(tuple(yvals)+(mu_val + 0j,))),\n eig_cbs[1](*(tuple(yvals)+(mu_val + 0j,)))) for yvals in odesys2._internal[1]])\n\nplt.plot(odesys2._internal[0], odesys2.stiffness(), label='from SVD')\nplt.plot(odesys2._internal[0], np.abs(eigvals[:,0])/np.abs(eigvals[:,1]), label='analytic')\nplt.legend()", "Audio\nPlotting is instructive from a mathematical standpoint, but these equations were often investigated by listening to audio amplified by electrical circuits modelled by the equation. 
So let us generate some audio.", "def arr_to_wav(arr, rate=44100):\n from IPython.display import Audio\n from scipy.io.wavfile import write\n scaled = np.int16(arr/np.max(np.abs(arr)) * 32767)\n write('test.wav', rate, scaled)\n return Audio('test.wav')\n\nxout, yout, info = odesys2.integrate(np.linspace(0, 500*tau(40.0), 2*44100), y0_1, [40.0], integrator='native')\narr_to_wav(yout[:, 0])\n\ndef overlay(tend_mu, odesys=odesys2, time=3, rate=44100, plot=False):\n yout_tot = None\n for tend, mu in tend_mu:\n xout, yout, info = odesys.integrate(np.linspace(0, tend*tau(mu[0]), time*rate), y0_1, mu, integrator='native')\n print(tend, mu, tend*tau(mu[0]))\n if yout_tot is None:\n yout_tot = yout[:, 0]\n else:\n yout_tot += yout[:, 0]\n if plot:\n plt.figure(figsize=(16,4))\n plt.plot(yout_tot[slice(None) if plot is True else slice(0, plot)])\n return arr_to_wav(yout_tot, rate=rate)\n\noverlay([\n (400, [2.0]),\n (410, [2.1]),\n], plot=10000)", "Forced van der pol oscillator", "vdp_forced = lambda x, y, p: [y[1], p[1]*sp.sin(p[2]*x) - y[0] + p[0]*y[1]*(1 - y[0]**2)]\nodesys_forced = SymbolicSys.from_callback(vdp_forced, 2, 3)\noverlay([(700, [8, 1, 0.5])], odesys_forced, plot=5000) # Non-chaotic behavior\n\noverlay([(700, [8, 1.2, 0.6])], odesys_forced, plot=5000) # Chaotic behavior", "Transient $\\mu$", "vdp_transient = lambda x, y, p: [y[1], - y[0] + p[0]*sp.exp(-p[1]*x)*y[1]*(1 - y[0]**2)]\nodesys_transient = SymbolicSys.from_callback(vdp_transient, 2, 2)\nodesys_transient.exprs\n\noverlay([\n (440, [0.1, 1/2500.]),\n (445, [0.5, 1/1000.]),\n (890, [0.1, 2/2500.]),\n (896, [0.5, 2/1000.]),\n], odesys_transient, plot=-1)\n\nodesys2._native._written_files" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
HazyResearch/snorkel
tutorials/workshop/Workshop_7_Advanced_BRAT_Annotator.ipynb
apache-2.0
[ "Creating Gold Annotation Labels with BRAT\nThis is a short tutorial on how to use BRAT (Brat Rapid Annotation Tool), an\nonline environment for collaborative text annotation. \nhttp://brat.nlplab.org/", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\nimport numpy as np\n\n# Connect to the database backend and initalize a Snorkel session\nfrom lib.init import *", "Step 1: Define a Candidate Type", "Spouse = candidate_subclass('Spouse', ['person1', 'person2'])", "a) Select an example Candidate and Document\nCandidates are divided into 3 splits mapping to a unique integer id:\n- 0: training\n- 1: development\n- 2: testing \nIn this tutorial, we'll load our training set candidates and create gold labels for a document using the BRAT interface\nStep 2: Launching BRAT\nBRAT runs as as seperate server application. When you first initialize this server, you need to provide your applications Candidate type. For this tutorial, we use the Spouse relation defined above, which consists of a pair of PERSON named entities connected by marriage. \nCurrently, we only support 1 relation type per-application.", "from snorkel.contrib.brat import BratAnnotator\n\nbrat = BratAnnotator(session, Spouse, encoding='utf-8')", "a) Initialize our document collection\nBRAT creates a local copy of all the documents and annotations found in a split set. We initialize or document collection by passing in a set of candidates via the split id. Annotations are stored as plain text files in standoff format.\n<img align=\"left\" src=\"imgs/brat-login.jpg\" width=\"200px\" style=\"margin-right:50px\">\nAfter launching the BRAT annotator for the first time, you will need to login to begin editing annotations. Navigate your mouse to the upper right-hand corner of the BRAT interface (see Fig. 
1) click 'login' and enter the following information:\n\nlogin: brat\npassword: brat\n\nAdvanced BRAT users can set up multiple annotator accounts by adding USER/PASSWORD key pairs to the USER_PASSWORD dictionary found in snorkel/contrib/brat/brat-v1.3_Crunchy_Frog/config.py. This is useful if you would like to keep track of multiple annotator judgements for later adjudication or use as labeling functions as per our tutorial on using Snorkel for Crowdsourcing.", "brat.init_collection(\"spouse/train\", split=0)", "We've already generated some BRAT annotations, so import an existing collection for the purposes of this tutorial.", "brat.import_collection(\"data/brat-spouse.zip\", overwrite=True)", "b) Launch BRAT Interface in a New Window\nOnce our collection is initialized, we can view specific documents for annotation. The default mode is to generate an HTML link to a new BRAT browser window. Click this link to launch the annotation editor.", "doc_name = '5ede8912-59c9-4ba9-93df-c58cebb542b7'\ndoc = session.query(Document).filter(Document.name==doc_name).one()\n\nbrat.view(\"spouse/train\", doc)", "If you do not have a specific document to edit, you can optionally launch BRAT and use their file browser to navigate through all files found in the target collection.", "brat.view(\"spouse/train\")", "Step 3: Creating Gold Label Annotations\na) Annotating Named Entities\nSpouse relations consist of 2 PERSON named entities. When annotating our validation documents, \nthe first task is to identify our target entities. In this tutorial, we will annotate all PERSON \nmentions found in our example document, though for your application you may choose to only label \nthose that participate in a true relation. \n<img align=\"right\" src=\"imgs/brat-anno-dialog.jpg\" width=\"400px\" style=\"margin-left:50px\">\nBegin by selecting and highlighting the text corresponding to a PERSON entity. 
Once highlighted, an annotation dialog will appear on your screen (see image of the BRAT Annotation Dialog Window to the right). If this is correct, click ok. Repeat this for every entity you find in the document.\nAnnotation Guidelines\nWhen developing gold label annotations, you should always discuss and agree on a set of annotator guidelines to share with human labelers. These are the guidelines we used to label the Spouse relation:\n\n<span style=\"color:red\">Do not</span> include formal titles associated with professional roles, e.g., Pastor Jeff, Prime Minister Prayut Chan-O-Cha.\nDo include English honorifics unrelated to a professional role, e.g., Mr. John Cleese.\n<span style=\"color:red\">Do not</span> include family names/surnames that do not reference a single individual, e.g., the Duggar family.\nDo include informal titles, stage names, fictional characters, and nicknames, e.g., Dog the Bounty Hunter.\nInclude possessives, e.g., Anna's.\n\nb) Annotating Relations\nTo annotate Spouse relations, we look through all pairs of PERSON entities found within a single sentence. BRAT identifies the bounds of each sentence and renders a numbered row in the annotation window (see the left-most column in the image below). \n<img align=\"right\" src=\"imgs/brat-relation.jpg\" width=\"500px\" style=\"margin-left:50px\">\nAnnotating relations is done through simple drag and drop. Begin by clicking and holding on a single PERSON entity and then drag that entity to its corresponding spouse entity. 
That is it!\nAnnotation Guidelines\n\nRestrict PERSON pairs to those found in the same sentence.\nThe order of PERSON arguments does not matter in this application.\n<span style=\"color:red\">Do not</span> include relations where a PERSON argument is wrong or otherwise incomplete.\n\nStep 4: Scoring Models using BRAT Labels\na) Evaluating System Recall\nCreating gold validation data with BRAT is a critical evaluation step because it allows us to compute an estimate of our model's true recall. When we create labeled data over a candidate set created by Snorkel, we miss mentions of relations that our candidate extraction step misses. This causes us to overestimate the system's true recall.\nIn the code below, we show how to map BRAT annotations to an existing set of Snorkel candidates and compute some associated metrics.", "train_cands = session.query(Candidate).filter(Candidate.split==0).all()", "b) Mapping BRAT Annotations to Snorkel Candidates\nWe annotated a single document using BRAT to illustrate the difference in scores when we factor in the effects of candidate generation.", "%time brat.import_gold_labels(session, \"spouse/train\", train_cands)", "Our candidate extractor only captures 7/14 (50%) of true mentions in this document. Our real system's recall is likely even worse, since we won't correctly predict the label for all true candidates. \nc) Re-loading the Trained LSTM\nWe'll load the LSTM model we trained in Workshop_4_Discriminative_Model_Training.ipynb and use it to predict marginals for our test candidates.", "test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()\n\nfrom snorkel.learning.disc_models.rnn import reRNN\n\nlstm = reRNN(seed=1701, n_threads=None)\nlstm.load(\"spouse.lstm\")\n\nmarginals = lstm.marginals(test_cands)", "d) Create a Subset of Test for Evaluation\nOur measures assume BRAT annotations are complete for the given set of documents!
Rather than manually annotating the entire test set, we define a small subset of 10 test documents for hand labeling. We'll then compute the full, recall-corrected metrics for this subset.\nFirst, let's build a query to initialize this candidate collection.", "doc_ids = set(open(\"data/brat_test_docs.tsv\",\"rb\").read().splitlines())\ncid_query = [c.id for c in test_cands if c.get_parent().document.name in doc_ids]\n\nbrat.init_collection(\"spouse/test-subset\", cid_query=cid_query)\n\nbrat.view(\"spouse/test-subset\")", "e) Comparing Unadjusted vs. Adjusted Scores", "import matplotlib.pyplot as plt\nplt.hist(marginals, bins=20)\nplt.show()\n\nfrom snorkel.annotations import load_gold_labels\n\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, load_as_array=True, zero_one=True)\nL_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True)\n\ntp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)\n\nbrat.score(session, test_cands, marginals, \"spouse/test-subset\")\n\nbrat.score(session, test_cands, marginals, \"spouse/test-subset\", recall_correction=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
biothings/biothings_explorer
jupyter notebooks/Multiomics + Service.ipynb
apache-2.0
[ "Use case\nDescription: For a patient with disease X, what are some factors (such as genetic features, comorbidities, etc) that could cause sensitivity or resistance to drug Y?", "from biothings_explorer.query.predict import Predict\nfrom biothings_explorer.query.visualize import display_graph\nimport nest_asyncio\nnest_asyncio.apply()\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "Use Case 1: Mutations in what genes cause sensitivity to drug Y in patients with Lung Adenocarcinoma\nStep 1: Retrieve representation of Lung Adenocarcinoma in BTE", "from biothings_explorer.hint import Hint\nht = Hint()\nluad = ht.query(\"lung adenocarcinoma\")['Disease'][0]\nluad", "Step 2: Use BTE to perform query", "query_config = {\n \"filters\": [\n {\n \"frequency\": {\n \">\": 0.1\n },\n }, \n {\n \"disease_context\": {\n \"=\": \"MONDO:0005061\"\n },\n \"pvalue\": {\n \"<\": 0.05\n },\n \"effect_size\": {\n \"<\": 0.00\n }\n }\n ],\n \"predicates\": [None, None, \"physically_interacts_with\"]\n}\n\npd = Predict(\n input_objs=[luad],\n intermediate_nodes =['Gene', 'ChemicalSubstance'], \n output_types =['Gene'], \n config=query_config\n)\npd.connect(verbose=True)\n\ndf = pd.display_table_view()\ndf\n\ndf[[\"input_label\", \"pred1\", \"pred1_source\", \"node1_label\", \"pred2\", \"pred2_source\", \"node2_label\", \"pred3\", \"pred3_source\", \"output_label\"]]\n\ndf1 = df.query(\"node1_id == output_id\").drop_duplicates()\ndf1.head()\n\nres = display_graph(df1)", "Use Case 2: Mutations in what genes cause sensitivity to drug Y in patients with pancreatic adenocarcinoma", "paad = ht.query(\"MONDO:0006047\")['Disease'][0]\npaad\n\nquery_config = {\n \"filters\": [\n {\n \"frequency\": {\n \">\": 0.1\n },\n }, \n {\n \"disease_context\": {\n \"=\": \"MONDO:0006047\"\n },\n \"pvalue\": {\n \"<\": 0.05\n },\n \"effect_size\": {\n \"<\": 0.00\n }\n }\n ],\n \"predicates\": [None, None, \"physically_interacts_with\"]\n}\n\npd = Predict([paad], ['Gene', 
'ChemicalSubstance'], ['Gene'], config=query_config)\npd.connect(verbose=True)\n\ndf1 = pd.display_table_view()\ndf1\n\ndf2 = df1.query(\"node1_id == output_id\").drop_duplicates()\ndf2[[\"input_label\", \"pred1\", \"pred1_source\", \"node1_label\", \"pred2\", \"pred2_source\", \"node2_label\", \"pred3\", \"pred3_source\", \"output_label\"]]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jtwhite79/pyemu
autotest/smoother/.ipynb_checkpoints/verify_null_space_proj-checkpoint.ipynb
bsd-3-clause
[ "verify pyEMU null space projection with the freyberg problem", "%matplotlib inline\nimport os\nimport shutil\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport pyemu", "instantiate pyemu object and drop prior info. Then reorder the Jacobian and save as binary. This is needed because the PEST utilities require strict order between the control file and Jacobian", "mc = pyemu.MonteCarlo(jco=\"freyberg.jcb\",verbose=False)\nmc.drop_prior_information()\njco_ord = mc.jco.get(mc.pst.obs_names,mc.pst.par_names)\nord_base = \"freyberg_ord\"\njco_ord.to_binary(ord_base + \".jco\") \nmc.pst.control_data.parsaverun = ' '\nmc.pst.write(ord_base+\".pst\")", "Draw some vectors from the prior and write the vectors to par files", "# setup the dirs to hold all this stuff\npar_dir = \"prior_par_draws\"\nproj_dir = \"proj_par_draws\"\nparfile_base = os.path.join(par_dir,\"draw_\")\nprojparfile_base = os.path.join(proj_dir,\"draw_\")\nif os.path.exists(par_dir):\n shutil.rmtree(par_dir)\nos.mkdir(par_dir)\nif os.path.exists(proj_dir):\n shutil.rmtree(proj_dir)\nos.mkdir(proj_dir)\n\n# make some draws\nmc.draw(10)\n\n#write them to files\nmc.parensemble.to_parfiles(parfile_base)", "Run pnulpar", "exe = os.path.join(\"exe\",\"pnulpar.exe\")\nargs = [ord_base+\".pst\",\"y\",\"5\",\"y\",\"pnulpar_qhalfx.mat\",parfile_base,projparfile_base]\nin_file = os.path.join(\"misc\",\"pnulpar.in\")\nwith open(in_file,'w') as f:\n f.write('\\n'.join(args)+'\\n') \nos.system(exe + ' <'+in_file)\n\n\npnul_en = pyemu.ParameterEnsemble(mc.pst)\nparfiles =[os.path.join(proj_dir,f) for f in os.listdir(proj_dir) if f.endswith(\".par\")]\npnul_en.read_parfiles(parfiles)\n\npnul_en.loc[:,\"fname\"] = pnul_en.index\npnul_en.index = pnul_en.fname.apply(lambda x:str(int(x.split('.')[0].split('_')[-1])))\nf = pnul_en.pop(\"fname\")\n\n\npnul_en.sort(axis=1)\npnul_en", "Now for pyemu", "print(mc.parensemble.islog)\n\nen = 
mc.project_parensemble(nsing=5,inplace=False)\nprint(mc.parensemble.islog)\n\nen.sort(axis=1)\nen\n\npnul_en.sort(inplace=True)\nen.sort(inplace=True)\ndiff = 100.0 * np.abs(pnul_en - en) / en\ndmax = diff.max(axis=0)\ndmax.sort(ascending=False,inplace=True)\ndmax.plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
YuriyGuts/kaggle-quora-question-pairs
notebooks/feature-oofp-nn-siamese-lstm-attention.ipynb
mit
[ "Feature: Out-Of-Fold Predictions from a Siamese LSTM with Attention\n<img src=\"assets/siamese-lstm-attention.png\" alt=\"Network Architecture\" style=\"height: 700px;\" />\nImports\nThis utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.", "from pygoose import *\n\nimport gc\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.metrics import *\n\nkg.gpu.cuda_use_gpus(gpu_ids=0)\n\nfrom keras import backend as K\nfrom keras.models import Model, Sequential\nfrom keras.layers import *\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint", "Config\nAutomatically discover the paths to various data folders and compose the project structure.", "project = kg.Project.discover()", "Identifier for storing these features on disk and referring to them later.", "feature_list_id = 'oofp_nn_siamese_lstm_attention'", "Make subsequent NN runs reproducible.", "RANDOM_SEED = 42\n\nnp.random.seed(RANDOM_SEED)", "Read data\nWord embedding lookup matrix.", "embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')", "Padded sequences of word indices for every question.", "X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')\nX_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')\n\nX_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')\nX_test_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')\n\ny_train = kg.io.load(project.features_dir + 'y_train.pickle')", "Word embedding properties.", "EMBEDDING_DIM = embedding_matrix.shape[-1]\nVOCAB_LENGTH = embedding_matrix.shape[0]\nMAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]\n\nprint(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)", "Define models", "def contrastive_loss(y_true, y_pred):\n \"\"\"\n Contrastive loss from Hadsell-et-al.'06\n 
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf\n \"\"\" \n margin = 1\n return K.mean((1 - y_true) * K.square(y_pred) +\n y_true * K.square(K.maximum(margin - y_pred, 0)))\n\nclass AttentionWithContext(Layer):\n \"\"\"\n Attention operation, with a context/query vector, for temporal data.\n Supports Masking.\n \n Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]\n \"Hierarchical Attention Networks for Document Classification\" by using a context\n vector to assist the attention.\n \n # Input shape\n 3D tensor with shape: `(samples, steps, features)`.\n # Output shape\n 2D tensor with shape: `(samples, features)`.\n\n Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True.\n \n The dimensions are inferred based on the output shape of the RNN.\n Example:\n model.add(LSTM(64, return_sequences=True))\n model.add(AttentionWithContext())\n \"\"\"\n\n def __init__(self, init='glorot_uniform',\n kernel_regularizer=None, bias_regularizer=None,\n kernel_constraint=None, bias_constraint=None, **kwargs):\n \n self.supports_masking = True\n self.init = initializers.get(init)\n self.kernel_initializer = initializers.get('glorot_uniform')\n\n self.kernel_regularizer = regularizers.get(kernel_regularizer)\n self.bias_regularizer = regularizers.get(bias_regularizer)\n\n self.kernel_constraint = constraints.get(kernel_constraint)\n self.bias_constraint = constraints.get(bias_constraint)\n\n super(AttentionWithContext, self).__init__(**kwargs)\n\n def build(self, input_shape):\n self.kernel = self.add_weight(\n (input_shape[-1], 1),\n initializer=self.kernel_initializer,\n name='{}_W'.format(self.name),\n regularizer=self.kernel_regularizer,\n constraint=self.kernel_constraint\n )\n self.b = self.add_weight(\n (input_shape[1],),\n initializer='zero',\n name='{}_b'.format(self.name),\n regularizer=self.bias_regularizer,\n constraint=self.bias_constraint\n )\n self.u = self.add_weight(\n (input_shape[1],),\n 
initializer=self.kernel_initializer,\n name='{}_u'.format(self.name),\n regularizer=self.kernel_regularizer,\n constraint=self.kernel_constraint\n )\n self.built = True\n\n def compute_mask(self, input, mask):\n return None\n\n def call(self, x, mask=None):\n multdata = K.dot(x, self.kernel) # (x, 40, 300) * (300, 1) => (x, 40, 1)\n multdata = K.squeeze(multdata, -1) # (x, 40)\n multdata = multdata + self.b # (x, 40) + (40,)\n\n multdata = K.tanh(multdata) # (x, 40)\n\n multdata = multdata * self.u # (x, 40) * (40, 1) => (x, 1)\n multdata = K.exp(multdata) # (x, 1)\n\n # Apply mask after the exp. will be re-normalized next.\n if mask is not None:\n mask = K.cast(mask, K.floatx()) # (x, 40)\n multdata = mask * multdata # (x, 40) * (x, 40, )\n\n # In some cases, especially in the early stages of training, the sum may be almost zero\n # and this results in NaN's. A workaround is to add a very small positive number ε to the sum.\n # a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())\n multdata /= K.cast(K.sum(multdata, axis=1, keepdims=True) + K.epsilon(), K.floatx())\n multdata = K.expand_dims(multdata)\n weighted_input = x * multdata\n return K.sum(weighted_input, axis=1)\n\n def compute_output_shape(self, input_shape):\n return (input_shape[0], input_shape[-1],)\n\ndef create_model(params):\n embedding_layer = Embedding(\n VOCAB_LENGTH,\n EMBEDDING_DIM,\n weights=[embedding_matrix],\n input_length=MAX_SEQUENCE_LENGTH,\n trainable=False,\n )\n lstm_layer = LSTM(\n params['num_lstm'],\n dropout=params['lstm_dropout_rate'],\n recurrent_dropout=params['lstm_dropout_rate'],\n return_sequences=True,\n )\n attention_layer = AttentionWithContext()\n\n sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')\n embedded_sequences_1 = embedding_layer(sequence_1_input)\n x1 = attention_layer(lstm_layer(embedded_sequences_1))\n\n sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')\n embedded_sequences_2 = embedding_layer(sequence_2_input)\n 
y1 = attention_layer(lstm_layer(embedded_sequences_2))\n\n merged = concatenate([x1, y1])\n merged = Dropout(params['dense_dropout_rate'])(merged)\n merged = BatchNormalization()(merged)\n\n merged = Dense(params['num_dense'], activation='relu')(merged)\n merged = Dropout(params['dense_dropout_rate'])(merged)\n merged = BatchNormalization()(merged)\n\n output = Dense(1, activation='sigmoid')(merged)\n\n model = Model(\n inputs=[sequence_1_input, sequence_2_input],\n outputs=output\n )\n\n model.compile(\n loss=contrastive_loss,\n optimizer='nadam',\n metrics=['accuracy']\n )\n\n return model\n\ndef predict(model, X_q1, X_q2):\n \"\"\"\n Mirror the pairs, compute two separate predictions, and average them.\n \"\"\"\n \n y1 = model.predict([X_q1, X_q2], batch_size=1024, verbose=1).reshape(-1) \n y2 = model.predict([X_q2, X_q1], batch_size=1024, verbose=1).reshape(-1) \n return (y1 + y2) / 2", "Partition the data", "NUM_FOLDS = 5\n\nkfold = StratifiedKFold(\n n_splits=NUM_FOLDS,\n shuffle=True,\n random_state=RANDOM_SEED\n)", "Create placeholders for out-of-fold predictions.", "y_train_oofp = np.zeros_like(y_train, dtype='float64')\n\ny_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS))", "Define hyperparameters", "BATCH_SIZE = 2048\n\nMAX_EPOCHS = 200", "Best values picked by Bayesian optimization.", "model_params = {\n 'dense_dropout_rate': 0.164,\n 'lstm_dropout_rate': 0.324,\n 'num_dense': 132,\n 'num_lstm': 254,\n}", "The path where the best weights of the current model will be saved.", "model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'", "Fit the folds and compute out-of-fold predictions", "%%time\n\n# Iterate through folds.\nfor fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):\n \n # Augment the training set by mirroring the pairs.\n X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])\n X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix_train]])\n\n 
X_fold_val_q1 = np.vstack([X_train_q1[ix_val], X_train_q2[ix_val]])\n X_fold_val_q2 = np.vstack([X_train_q2[ix_val], X_train_q1[ix_val]])\n\n # Ground truth should also be \"mirrored\".\n y_fold_train = np.concatenate([y_train[ix_train], y_train[ix_train]])\n y_fold_val = np.concatenate([y_train[ix_val], y_train[ix_val]])\n \n print()\n print(f'Fitting fold {fold_num + 1} of {kfold.n_splits}')\n print()\n \n # Compile a new model.\n model = create_model(model_params)\n\n # Train.\n model.fit(\n [X_fold_train_q1, X_fold_train_q2], y_fold_train,\n validation_data=([X_fold_val_q1, X_fold_val_q2], y_fold_val),\n\n batch_size=BATCH_SIZE,\n epochs=MAX_EPOCHS,\n verbose=1,\n \n callbacks=[\n # Stop training when the validation loss stops improving.\n EarlyStopping(\n monitor='val_loss',\n min_delta=0.001,\n patience=3,\n verbose=1,\n mode='auto',\n ),\n # Save the weights of the best epoch.\n ModelCheckpoint(\n model_checkpoint_path,\n monitor='val_loss',\n save_best_only=True,\n verbose=2,\n ),\n ],\n )\n \n # Restore the best epoch.\n model.load_weights(model_checkpoint_path)\n \n # Compute out-of-fold predictions.\n y_train_oofp[ix_val] = predict(model, X_train_q1[ix_val], X_train_q2[ix_val])\n y_test_oofp[:, fold_num] = predict(model, X_test_q1, X_test_q2)\n \n # Clear GPU memory.\n K.clear_session()\n del X_fold_train_q1\n del X_fold_train_q2\n del X_fold_val_q1\n del X_fold_val_q2\n del model\n gc.collect()\n\ncv_score = log_loss(y_train, y_train_oofp)\nprint('CV score:', cv_score)", "Save features", "features_train = y_train_oofp.reshape((-1, 1))\n\nfeatures_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))\n\nprint('X train:', features_train.shape)\nprint('X test: ', features_test.shape)\n\nfeature_names = [feature_list_id]\n\nproject.save_features(features_train, features_test, feature_names, feature_list_id)", "Explore", "pd.DataFrame(features_test).plot.hist()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/1_training_at_scale.ipynb
apache-2.0
[ "Training at scale with AI Platform Training Service\nLearning Objectives:\n 1. Learn how to organize your training code into a Python package\n 1. Train your model using cloud infrastructure via Google Cloud AI Platform Training Service\n 1. (optional) Learn how to run your training package using Docker containers and push training Docker images to a Docker registry\nIntroduction\nIn this notebook we'll make the jump from training locally to training in the cloud. We'll take advantage of Google Cloud's AI Platform Training Service. \nAI Platform Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.\nSpecify your project name and bucket name in the cell below.", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\nimport os\n\nfrom google.cloud import bigquery", "Change the following cell as necessary:", "# Change to your own bucket and project below:\nBUCKET = \"<BUCKET>\"\nPROJECT = \"<PROJECT>\"\nREGION = \"<YOUR REGION>\"\n\nOUTDIR = \"gs://{bucket}/taxifare/data\".format(bucket=BUCKET)\n\nos.environ['BUCKET'] = BUCKET\nos.environ['OUTDIR'] = OUTDIR\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\nos.environ['TFVERSION'] = \"2.1\"", "Confirm below that the bucket is regional and its region matches the specified region:", "%%bash\ngsutil ls -Lb gs://$BUCKET | grep \"gs://\\|Location\"\necho $REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "Create BigQuery tables\nIf you have not already created a BigQuery dataset for our data, run the following cell:", "bq = bigquery.Client(project = PROJECT)\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\ntry:\n 
bq.create_dataset(dataset)\n print(\"Dataset created\")\nexcept:\n print(\"Dataset already exists\")", "Let's create a table with 1 million examples.\nNote that the order of columns is exactly what was in our CSV files.", "%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0", "Make the validation dataset be 1/10 the size of the training dataset.", "%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_valid_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0", "Export the tables as CSV files", "%%bash\n\necho \"Deleting current contents of $OUTDIR\"\ngsutil -m -q rm -rf $OUTDIR\n\necho \"Extracting training data to $OUTDIR\"\nbq --location=US extract \\\n 
--destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_training_data \\\n $OUTDIR/taxi-train-*.csv\n\necho \"Extracting validation data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_valid_data \\\n $OUTDIR/taxi-valid-*.csv\n\ngsutil ls -l $OUTDIR\n\n!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2", "Make code compatible with AI Platform Training Service\nIn order to make our code compatible with AI Platform Training Service we need to make the following changes:\n\nUpload data to Google Cloud Storage \nMove code into a trainer Python package\nSubmit training job with gcloud to train on AI Platform\n\nUpload data to Google Cloud Storage (GCS)\nCloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.", "!gsutil ls gs://$BUCKET/taxifare/data", "Move code into a python package\nThe first thing to do is to convert your training code snippets into a regular Python package that we will then pip install into the Docker container. \nA Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.\nCreate the package directory\nOur package directory contains 3 files:", "ls ./taxifare/trainer/", "Paste existing code into model.py\nA Python package requires our code to be in a .py file, as opposed to notebook cells. 
So, we simply copy and paste our existing code from the previous notebook into a single file.\nIn the cell below, we write the contents of the cell into model.py, packaging the model we \ndeveloped in the previous labs so that we can deploy it to AI Platform Training Service.", "%%writefile ./taxifare/trainer/model.py\nimport datetime\nimport logging\nimport os\nimport shutil\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow.keras import activations\nfrom tensorflow.keras import callbacks\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import models\n\nfrom tensorflow import feature_column as fc\n\nlogging.info(tf.version.VERSION)\n\n\nCSV_COLUMNS = [\n 'fare_amount',\n 'pickup_datetime',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'passenger_count',\n 'key',\n]\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\n\n\ndef features_and_labels(row_data):\n for unwanted_col in ['key']:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label\n\n\ndef load_dataset(pattern, batch_size, num_repeat):\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=pattern,\n batch_size=batch_size,\n column_names=CSV_COLUMNS,\n column_defaults=DEFAULTS,\n num_epochs=num_repeat,\n )\n return dataset.map(features_and_labels)\n\n\ndef create_train_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=None)\n return dataset.prefetch(1)\n\n\ndef create_eval_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=1)\n return dataset.prefetch(1)\n\n\ndef parse_datetime(s):\n if type(s) is not str:\n s = s.numpy().decode('utf-8')\n return datetime.datetime.strptime(s, \"%Y-%m-%d %H:%M:%S %Z\")\n\n\ndef euclidean(params):\n lon1, lat1, lon2, lat2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n 
return tf.sqrt(londiff*londiff + latdiff*latdiff)\n\n\ndef get_dayofweek(s):\n ts = parse_datetime(s)\n return DAYS[ts.weekday()]\n\n\n@tf.function\ndef dayofweek(ts_in):\n return tf.map_fn(\n lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),\n ts_in\n )\n\n\n@tf.function\ndef fare_thresh(x):\n return 60 * activations.relu(x)\n\n\ndef transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):\n # Pass-through columns\n transformed = inputs.copy()\n del transformed['pickup_datetime']\n\n feature_columns = {\n colname: fc.numeric_column(colname)\n for colname in NUMERIC_COLS\n }\n\n # Scaling longitude from range [-70, -78] to [0, 1]\n for lon_col in ['pickup_longitude', 'dropoff_longitude']:\n transformed[lon_col] = layers.Lambda(\n lambda x: (x + 78)/8.0,\n name='scale_{}'.format(lon_col)\n )(inputs[lon_col])\n\n # Scaling latitude from range [37, 45] to [0, 1]\n for lat_col in ['pickup_latitude', 'dropoff_latitude']:\n transformed[lat_col] = layers.Lambda(\n lambda x: (x - 37)/8.0,\n name='scale_{}'.format(lat_col)\n )(inputs[lat_col])\n\n # Adding Euclidean dist (no need to be accurate: NN will calibrate it)\n transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([\n inputs['pickup_longitude'],\n inputs['pickup_latitude'],\n inputs['dropoff_longitude'],\n inputs['dropoff_latitude']\n ])\n feature_columns['euclidean'] = fc.numeric_column('euclidean')\n\n # hour of day from timestamp of form '2010-02-08 09:17:00+00:00'\n transformed['hourofday'] = layers.Lambda(\n lambda x: tf.strings.to_number(\n tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),\n name='hourofday'\n )(inputs['pickup_datetime'])\n feature_columns['hourofday'] = fc.indicator_column(\n fc.categorical_column_with_identity(\n 'hourofday', num_buckets=24))\n\n latbuckets = np.linspace(0, 1, nbuckets).tolist()\n lonbuckets = np.linspace(0, 1, nbuckets).tolist()\n b_plat = fc.bucketized_column(\n feature_columns['pickup_latitude'], latbuckets)\n b_dlat = 
fc.bucketized_column(\n feature_columns['dropoff_latitude'], latbuckets)\n b_plon = fc.bucketized_column(\n feature_columns['pickup_longitude'], lonbuckets)\n b_dlon = fc.bucketized_column(\n feature_columns['dropoff_longitude'], lonbuckets)\n ploc = fc.crossed_column(\n [b_plat, b_plon], nbuckets * nbuckets)\n dloc = fc.crossed_column(\n [b_dlat, b_dlon], nbuckets * nbuckets)\n pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)\n feature_columns['pickup_and_dropoff'] = fc.embedding_column(\n pd_pair, 100)\n\n return transformed, feature_columns\n\n\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model(nbuckets, nnsize, lr):\n # input layer is all float except for pickup_datetime which is a string\n STRING_COLS = ['pickup_datetime']\n NUMERIC_COLS = (\n set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)\n )\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n inputs.update({\n colname: layers.Input(name=colname, shape=(), dtype='string')\n for colname in STRING_COLS\n })\n\n # transforms\n transformed, feature_columns = transform(\n inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)\n\n x = dnn_inputs\n for layer, nodes in enumerate(nnsize):\n x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)\n output = layers.Dense(1, name='fare')(x)\n\n model = models.Model(inputs, output)\n #TODO 1a\n lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)\n model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])\n\n return model\n\n\ndef train_and_evaluate(hparams):\n #TODO 1b\n batch_size = hparams['batch_size']\n nbuckets = hparams['nbuckets']\n lr = hparams['lr']\n nnsize = hparams['nnsize']\n eval_data_path = hparams['eval_data_path']\n num_evals = hparams['num_evals']\n num_examples_to_train_on = 
hparams['num_examples_to_train_on']\n output_dir = hparams['output_dir']\n train_data_path = hparams['train_data_path']\n\n timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n savedmodel_dir = os.path.join(output_dir, 'export/savedmodel')\n model_export_path = os.path.join(savedmodel_dir, timestamp)\n checkpoint_path = os.path.join(output_dir, 'checkpoints')\n tensorboard_path = os.path.join(output_dir, 'tensorboard')\n\n if tf.io.gfile.exists(output_dir):\n tf.io.gfile.rmtree(output_dir)\n\n model = build_dnn_model(nbuckets, nnsize, lr)\n logging.info(model.summary())\n\n trainds = create_train_dataset(train_data_path, batch_size)\n evalds = create_eval_dataset(eval_data_path, batch_size)\n\n steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)\n\n checkpoint_cb = callbacks.ModelCheckpoint(\n checkpoint_path,\n save_weights_only=True,\n verbose=1\n )\n tensorboard_cb = callbacks.TensorBoard(tensorboard_path)\n\n history = model.fit(\n trainds,\n validation_data=evalds,\n epochs=num_evals,\n steps_per_epoch=max(1, steps_per_epoch),\n verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch\n callbacks=[checkpoint_cb, tensorboard_cb]\n )\n\n # Exporting the model with default serving function.\n tf.saved_model.save(model, model_export_path)\n return history\n", "Modify code to read data from and write checkpoint files to GCS\nIf you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)\nThis is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.\nWe specify these parameters at run time via the command line. 
Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.", "%%writefile taxifare/trainer/task.py\nimport argparse\n\nfrom trainer import model\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--batch_size\",\n help=\"Batch size for training steps\",\n type=int,\n default=32\n )\n parser.add_argument(\n \"--eval_data_path\",\n help=\"GCS location pattern of eval files\",\n required=True\n )\n parser.add_argument(\n \"--nnsize\",\n help=\"Hidden layer sizes (provide space-separated sizes)\",\n nargs=\"+\",\n type=int,\n default=[32, 8]\n )\n parser.add_argument(\n \"--nbuckets\",\n help=\"Number of buckets to divide lat and lon with\",\n type=int,\n default=10\n )\n parser.add_argument(\n \"--lr\",\n help = \"learning rate for optimizer\",\n type = float,\n default = 0.001\n )\n parser.add_argument(\n \"--num_evals\",\n help=\"Number of times to evaluate model on eval data training.\",\n type=int,\n default=5\n )\n parser.add_argument(\n \"--num_examples_to_train_on\",\n help=\"Number of examples to train on.\",\n type=int,\n default=100\n )\n parser.add_argument(\n \"--output_dir\",\n help=\"GCS location to write checkpoints and export models\",\n required=True\n )\n parser.add_argument(\n \"--train_data_path\",\n help=\"GCS location pattern of train files containing eval URLs\",\n required=True\n )\n parser.add_argument(\n \"--job-dir\",\n help=\"this model ignores this field, but it is required by gcloud\",\n default=\"junk\"\n )\n args = parser.parse_args()\n hparams = args.__dict__\n hparams.pop(\"job-dir\", None)\n\n model.train_and_evaluate(hparams)\n", "Run trainer module package locally\nNow we can test our training code locally as follows using the local test data. 
We'll run a very small training job over a single file with a small batch size and one eval step.", "%%bash\n\nEVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*\nTRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*\nOUTPUT_DIR=./taxifare-model\n\ntest ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR}\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n \npython3 -m trainer.task \\\n--eval_data_path $EVAL_DATA_PATH \\\n--output_dir $OUTPUT_DIR \\\n--train_data_path $TRAIN_DATA_PATH \\\n--batch_size 5 \\\n--num_examples_to_train_on 100 \\\n--num_evals 1 \\\n--nbuckets 10 \\\n--lr 0.001 \\\n--nnsize 32 8", "Run your training package on Cloud AI Platform\nOnce the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:\n- jobid: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness\n- region: Cloud region to train in. See here for supported AI Platform Training Service regions\nThe arguments before -- \\ are for AI Platform Training Service.\nThe arguments after -- \\ are sent to our task.py.\nBecause this is on the entire dataset, it will take a while. 
You can monitor the job from the GCP console in the Cloud AI Platform section.", "%%bash\n\n# Output directory and jobID\nOUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)\nJOBID=taxifare_$(date -u +%y%m%d_%H%M%S)\necho ${OUTDIR} ${REGION} ${JOBID}\ngsutil -m rm -rf ${OUTDIR}\n\n# Model and training hyperparameters\nBATCH_SIZE=50\nNUM_EXAMPLES_TO_TRAIN_ON=100\nNUM_EVALS=100\nNBUCKETS=10\nLR=0.001\nNNSIZE=\"32 8\"\n\n# GCS paths\nGCS_PROJECT_PATH=gs://$BUCKET/taxifare\nDATA_PATH=$GCS_PROJECT_PATH/data\nTRAIN_DATA_PATH=$DATA_PATH/taxi-train*\nEVAL_DATA_PATH=$DATA_PATH/taxi-valid*\n\n#TODO 2\ngcloud ai-platform jobs submit training $JOBID \\\n --module-name=trainer.task \\\n --package-path=taxifare/trainer \\\n --staging-bucket=gs://${BUCKET} \\\n --python-version=3.7 \\\n --runtime-version=${TFVERSION} \\\n --region=${REGION} \\\n -- \\\n --eval_data_path $EVAL_DATA_PATH \\\n --output_dir $OUTDIR \\\n --train_data_path $TRAIN_DATA_PATH \\\n --batch_size $BATCH_SIZE \\\n --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \\\n --num_evals $NUM_EVALS \\\n --nbuckets $NBUCKETS \\\n --lr $LR \\\n --nnsize $NNSIZE ", "(Optional) Run your training package using Docker container\nAI Platform Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on AI Platform Training. \nIn this last section, we'll see how to submit a Cloud training job using a customized Docker image. \nContainerizing our ./taxifare/trainer package involves 3 steps:\n\nWriting a Dockerfile in ./taxifare\nBuilding the Docker image\nPushing it to the Google Cloud container registry in our GCP project\n\nThe Dockerfile specifies\n1. How the container needs to be provisioned so that all the dependencies in our code are satisfied\n2. Where to copy our trainer Package in the container and how to install it (pip install /trainer)\n3. 
What command to run when the container is run (the ENTRYPOINT line)", "%%writefile ./taxifare/Dockerfile\nFROM gcr.io/deeplearning-platform-release/tf2-cpu\n# TODO 3\n\nCOPY . /code\n\nWORKDIR /code\n\nENTRYPOINT [\"python3\", \"-m\", \"trainer.task\"]\n\n!gcloud auth configure-docker\n\n%%bash \n\nPROJECT_DIR=$(cd ./taxifare && pwd)\nPROJECT_ID=$(gcloud config list project --format \"value(core.project)\")\nIMAGE_NAME=taxifare_training_container\nDOCKERFILE=$PROJECT_DIR/Dockerfile\nIMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_NAME\n\ndocker build $PROJECT_DIR -f $DOCKERFILE -t $IMAGE_URI\n\ndocker push $IMAGE_URI", "Remark: If you prefer to build the container image from the command line, we have written a script for that: ./taxifare/scripts/build.sh. This script reads its configuration from the file ./taxifare/scripts/env.sh. You can configure these arguments the way you want in that file. You can also simply type make build from within ./taxifare to build the image (which will invoke the build script). Similarly, we wrote the script ./taxifare/scripts/push.sh to push the Docker image, which you can also trigger by typing make push from within ./taxifare.\nTrain using a custom container on AI Platform\nTo submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:\n- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness\n- master-image-uri: The URI of the Docker image we pushed to the Google Cloud Container Registry\n- region: Cloud region to train in. 
See here for supported AI Platform Training Service regions\nThe arguments before -- \\ are for AI Platform Training Service.\nThe arguments after -- \\ are sent to our task.py.\nYou can track your job and view logs using cloud console.", "%%bash\n\nPROJECT_ID=$(gcloud config list project --format \"value(core.project)\")\nBUCKET=$PROJECT_ID\nREGION=\"us-central1\"\n\n# Output directory and jobID\nOUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)\nJOBID=taxifare_container_$(date -u +%y%m%d_%H%M%S)\necho ${OUTDIR} ${REGION} ${JOBID}\ngsutil -m rm -rf ${OUTDIR}\n\n# Model and training hyperparameters\nBATCH_SIZE=50\nNUM_EXAMPLES_TO_TRAIN_ON=100\nNUM_EVALS=100\nNBUCKETS=10\nNNSIZE=\"32 8\"\n\n# AI-Platform machines to use for training\nMACHINE_TYPE=n1-standard-4\nSCALE_TIER=CUSTOM\n\n# GCS paths.\nGCS_PROJECT_PATH=gs://$BUCKET/taxifare\nDATA_PATH=$GCS_PROJECT_PATH/data\nTRAIN_DATA_PATH=$DATA_PATH/taxi-train*\nEVAL_DATA_PATH=$DATA_PATH/taxi-valid*\n\nIMAGE_NAME=taxifare_training_container\nIMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_NAME\n\ngcloud beta ai-platform jobs submit training $JOBID \\\n --staging-bucket=gs://$BUCKET \\\n --region=$REGION \\\n --master-image-uri=$IMAGE_URI \\\n --master-machine-type=$MACHINE_TYPE \\\n --scale-tier=$SCALE_TIER \\\n -- \\\n --eval_data_path $EVAL_DATA_PATH \\\n --output_dir $OUTDIR \\\n --train_data_path $TRAIN_DATA_PATH \\\n --batch_size $BATCH_SIZE \\\n --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \\\n --num_evals $NUM_EVALS \\\n --nbuckets $NBUCKETS \\\n --nnsize $NNSIZE \n", "Remark: If you prefer submitting your jobs for training on the AI-platform using the command line, we have written the ./taxifare/scripts/submit.sh for you (that you can also invoke using make submit from within ./taxifare). As the other scripts, it reads it configuration variables from ./taxifare/scripts/env.sh.\nCopyright 2020 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
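The bucketize-then-cross transform used for the pickup and dropoff coordinates in model.py above can be mimicked in plain Python. This is only a hedged sketch: the boundary values and helper names below are my own illustrative assumptions, and a real fc.crossed_column hashes the combined id into a fixed table rather than enumerating it.

```python
# Plain-Python stand-in (an assumption for illustration, not the
# tf.feature_column API) for bucketized_column + crossed_column.
import numpy as np

def bucketize(value, boundaries):
    # n sorted boundaries split the real line into n + 1 buckets,
    # like fc.bucketized_column does for a numeric feature.
    return int(np.searchsorted(boundaries, value))

def cross(bucket_a, bucket_b, n_buckets):
    # Combine two bucket ids into one crossed-feature id, analogous to
    # fc.crossed_column (which additionally hashes into a fixed table).
    return bucket_a * n_buckets + bucket_b

nbuckets = 10
latbuckets = np.linspace(40.5, 41.0, nbuckets).tolist()    # NYC-ish latitudes
lonbuckets = np.linspace(-74.3, -73.7, nbuckets).tolist()  # NYC-ish longitudes

b_plat = bucketize(40.75, latbuckets)       # pickup latitude bucket
b_plon = bucketize(-73.98, lonbuckets)      # pickup longitude bucket
ploc = cross(b_plat, b_plon, nbuckets + 1)  # pickup location grid-cell id
```

An analogous dropoff cell id, crossed once more with `ploc`, gives the `pickup_and_dropoff` pair feature that the notebook then feeds into an embedding column.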
wanderer2/pymc3
docs/source/notebooks/lda-advi-aevb.ipynb
apache-2.0
[ "Automatic autoencoding variational Bayes for latent dirichlet allocation with PyMC3\nFor probabilistic models with latent variables, autoencoding variational Bayes (AEVB; Kingma and Welling, 2014) is an algorithm which allows us to perform inference efficiently for large datasets with an encoder. In AEVB, the encoder is used to infer variational parameters of approximate posterior on latent variables from given samples. By using tunable and flexible encoders such as multilayer perceptrons (MLPs), AEVB approximates complex variational posterior based on mean-field approximation, which does not utilize analytic representations of the true posterior. Combining AEVB with ADVI (Kucukelbir et al., 2015), we can perform posterior inference on almost arbitrary probabilistic models involving continuous latent variables. \nI have implemented AEVB for ADVI with mini-batch on PyMC3. To demonstrate flexibility of this approach, we will apply this to latent dirichlet allocation (LDA; Blei et al., 2003) for modeling documents. In the LDA model, each document is assumed to be generated from a multinomial distribution, whose parameters are treated as latent variables. By using AEVB with an MLP as an encoder, we will fit the LDA model to the 20-newsgroups dataset. \nIn this example, extracted topics by AEVB seem to be qualitatively comparable to those with a standard LDA implementation, i.e., online VB implemented on scikit-learn. Unfortunately, the predictive accuracy of unseen words is less than the standard implementation of LDA, it might be due to the mean-field approximation. However, the combination of AEVB and ADVI allows us to quickly apply more complex probabilistic models than LDA to big data with the help of mini-batches. 
I hope this notebook will attract readers, especially practitioners working on a variety of machine learning tasks, to probabilistic programming and PyMC3.", "%matplotlib inline\nimport sys, os\n\nimport theano\ntheano.config.floatX = 'float64'\n\nfrom collections import OrderedDict\nfrom copy import deepcopy\nimport numpy as np\nfrom time import time\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.datasets import fetch_20newsgroups\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom theano import shared\nimport theano.tensor as tt\nfrom theano.sandbox.rng_mrg import MRG_RandomStreams\n\nimport pymc3 as pm\nfrom pymc3 import Dirichlet\nfrom pymc3.distributions.transforms import t_stick_breaking\nfrom pymc3.variational.advi import advi, sample_vp", "Dataset\nHere, we will use the 20-newsgroups dataset. This dataset can be obtained by using functions of scikit-learn. The below code is partially adopted from an example of scikit-learn (http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf_lda.html). We set the number of words in the vocabulary to 1000.", "# The number of words in the vocaburary\nn_words = 1000\n\nprint(\"Loading dataset...\")\nt0 = time()\ndataset = fetch_20newsgroups(shuffle=True, random_state=1,\n remove=('headers', 'footers', 'quotes'))\ndata_samples = dataset.data\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf (raw term count) features for LDA.\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_words,\n stop_words='english')\n\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nfeature_names = tf_vectorizer.get_feature_names()\nprint(\"done in %0.3fs.\" % (time() - t0))", "Each document is represented by 1000-dimensional term-frequency vector. Let's check the data.", "plt.plot(tf[:10, :].toarray().T);", "We split the whole documents into training and test sets. 
The number of tokens in the training set is 480K. Sparsity of the term-frequency document matrix is 0.025%, which implies that almost all components in the term-frequency matrix are zero.", "n_samples_tr = 10000\nn_samples_te = tf.shape[0] - n_samples_tr\ndocs_tr = tf[:n_samples_tr, :]\ndocs_te = tf[n_samples_tr:, :]\nprint('Number of docs for training = {}'.format(docs_tr.shape[0]))\nprint('Number of docs for test = {}'.format(docs_te.shape[0]))\n\nn_tokens = np.sum(docs_tr[docs_tr.nonzero()])\nprint('Number of tokens in training set = {}'.format(n_tokens))\nprint('Sparsity = {}'.format(\n len(docs_tr.nonzero()[0]) / float(docs_tr.shape[0] * docs_tr.shape[1])))", "Log-likelihood of documents for LDA\nFor a document $d$ consisting of tokens $w$, the log-likelihood of the LDA model with $K$ topics is given as\n\\begin{eqnarray}\n \\log p\\left(d|\\theta_{d},\\beta\\right) & = & \\sum_{w\\in d}\\log\\left[\\sum_{k=1}^{K}\\exp\\left(\\log\\theta_{d,k} + \\log \\beta_{k,w}\\right)\\right]+const, \n\\end{eqnarray}\nwhere $\\theta_{d}$ is the topic distribution for document $d$ and $\\beta$ is the word distribution for the $K$ topics. We define a function that returns a tensor of the log-likelihood of documents given $\\theta_{d}$ and $\\beta$.", "def logp_lda_doc(beta, theta):\n \"\"\"Returns the log-likelihood function for given documents. \n \n K : number of topics in the model\n V : number of words (size of vocabulary)\n D : number of documents (in a mini-batch)\n \n Parameters\n ----------\n beta : tensor (K x V)\n Word distributions. \n theta : tensor (D x K)\n Topic distributions for documents. 
\n \"\"\"\n def ll_docs_f(docs):\n dixs, vixs = docs.nonzero()\n vfreqs = docs[dixs, vixs]\n ll_docs = vfreqs * pm.math.logsumexp(\n tt.log(theta[dixs]) + tt.log(beta.T[vixs]), axis=1).ravel()\n \n # Per-word log-likelihood times num of tokens in the whole dataset\n return tt.sum(ll_docs) / tt.sum(vfreqs) * n_tokens \n \n return ll_docs_f", "In the inner function, the log-likelihood is scaled for mini-batches by the number of tokens in the dataset. \nLDA model\nWith the log-likelihood function, we can construct the probabilistic model for LDA. doc_t works as a placeholder to which documents in a mini-batch are set. \nFor ADVI, each of the random variables $\\theta$ and $\\beta$, drawn from Dirichlet distributions, is transformed into an unconstrained real coordinate space. To do this, by default, PyMC3 uses a centered stick-breaking transformation. Since these random variables are on a simplex, the dimension of the unconstrained coordinate space is the original dimension minus 1. For example, the dimension of $\\theta_{d}$ is the number of topics (n_topics) in the LDA model, thus the transformed space has dimension (n_topics - 1). It should be noted that, in this example, we use t_stick_breaking, which is a numerically stable version of the default stick_breaking transform. This is required for ADVI to work on the LDA model. \nThe variational posterior on these transformed parameters is represented by spherical Gaussian distributions (mean-field approximation). Thus, the number of variational parameters of $\\theta_{d}$, the latent variable for each document, is 2 * (n_topics - 1) for means and standard deviations. \nIn the last line of the cell below, the DensityDist class is used to define the log-likelihood function of the model. The second argument is a Python function which takes observations (a document matrix in this example) and returns the log-likelihood value. 
This function is given as a return value of logp_lda_doc(beta, theta), which has been defined above.", "n_topics = 10\nminibatch_size = 128\n\n# Tensor for documents\ndoc_t = shared(np.zeros((minibatch_size, n_words)), name='doc_t')\n\nwith pm.Model() as model:\n theta = Dirichlet('theta', a=(1.0 / n_topics) * np.ones((minibatch_size, n_topics)), \n shape=(minibatch_size, n_topics), transform=t_stick_breaking(1e-9))\n beta = Dirichlet('beta', a=(1.0 / n_topics) * np.ones((n_topics, n_words)), \n shape=(n_topics, n_words), transform=t_stick_breaking(1e-9))\n doc = pm.DensityDist('doc', logp_lda_doc(beta, theta), observed=doc_t)", "Mini-batch\nTo perform ADVI with stochastic variational inference for large datasets, the whole set of training samples is split into mini-batches. PyMC3's ADVI function accepts a Python generator which sends a list of mini-batches to the algorithm. Here is an example of how to make such a generator. \nTODO: replace the code using the new interface", "def create_minibatch(data):\n rng = np.random.RandomState(0)\n \n while True:\n # Return random data samples of a size 'minibatch_size' at each iteration\n ixs = rng.randint(data.shape[0], size=minibatch_size)\n yield [data[ixs]]\n \nminibatches = create_minibatch(docs_tr.toarray())", "The ADVI function replaces the values of Theano tensors with samples given by generators. We need to specify those tensors in a list. The order of the list should be the same as that of the mini-batches sent from the generator. Note that doc_t has been used in the model creation as the observation of the random variable named doc.", "# The value of doc_t will be replaced with mini-batches\nminibatch_tensors = [doc_t]", "To tell the algorithm that the random variable doc is observed, we need to pass it as an OrderedDict. The key of the OrderedDict is an observed random variable and the value is a scalar representing the scaling factor. 
Since the likelihood of the documents in mini-batches has already been scaled in the likelihood function, we set the scaling factor to 1.", "# observed_RVs = OrderedDict([(doc, n_samples_tr / minibatch_size)])\nobserved_RVs = OrderedDict([(doc, 1)])", "Encoder\nGiven a document, the encoder calculates variational parameters of the (transformed) latent variables, more specifically, parameters of Gaussian distributions in the unconstrained real coordinate space. The encode() method is required to output variational means and stds as a tuple, as shown in the following code. As explained above, the number of variational parameters is 2 * (n_topics - 1). Specifically, the shape of zs_mean (or zs_std) in the method is (minibatch_size, n_topics - 1). It should be noted that zs_std is defined as the log-transformed standard deviation, which is automatically exponentiated (and thus bounded to be positive) in advi_minibatch(), the estimation function. \nTo enhance the generalization ability to unseen words, a Bernoulli corruption process is applied to the input documents. Unfortunately, I have never seen any significant improvement with this.", "class LDAEncoder:\n \"\"\"Encode (term-frequency) document vectors to variational means and (log-transformed) stds. 
\n \"\"\"\n def __init__(self, n_words, n_hidden, n_topics, p_corruption=0, random_seed=1):\n rng = np.random.RandomState(random_seed)\n self.n_words = n_words\n self.n_hidden = n_hidden\n self.n_topics = n_topics\n self.w0 = shared(0.01 * rng.randn(n_words, n_hidden).ravel(), name='w0')\n self.b0 = shared(0.01 * rng.randn(n_hidden), name='b0')\n self.w1 = shared(0.01 * rng.randn(n_hidden, 2 * (n_topics - 1)).ravel(), name='w1')\n self.b1 = shared(0.01 * rng.randn(2 * (n_topics - 1)), name='b1')\n self.rng = MRG_RandomStreams(seed=random_seed)\n self.p_corruption = p_corruption\n \n def encode(self, xs):\n if 0 < self.p_corruption:\n dixs, vixs = xs.nonzero()\n mask = tt.set_subtensor(\n tt.zeros_like(xs)[dixs, vixs], \n self.rng.binomial(size=dixs.shape, n=1, p=1-self.p_corruption)\n )\n xs_ = xs * mask\n else:\n xs_ = xs\n\n w0 = self.w0.reshape((self.n_words, self.n_hidden))\n w1 = self.w1.reshape((self.n_hidden, 2 * (self.n_topics - 1)))\n hs = tt.tanh(xs_.dot(w0) + self.b0)\n zs = hs.dot(w1) + self.b1\n zs_mean = zs[:, :(self.n_topics - 1)]\n zs_std = zs[:, (self.n_topics - 1):]\n return zs_mean, zs_std\n \n def get_params(self):\n return [self.w0, self.b0, self.w1, self.b1]", "To feed the output of the encoder to the variational parameters of $\\theta$, we set an OrderedDict of tuples as below.", "encoder = LDAEncoder(n_words=n_words, n_hidden=100, n_topics=n_topics, p_corruption=0.0)\nlocal_RVs = OrderedDict([(theta, (encoder.encode(doc_t), n_samples_tr / minibatch_size))])", "theta is the random variable defined in the model creation and is a key of an entry of the OrderedDict. The value (encoder.encode(doc_t), n_samples_tr / minibatch_size) is a tuple of a theano expression and a scalar. The theano expression encoder.encode(doc_t) is the output of the encoder given inputs (documents). The scalar n_samples_tr / minibatch_size specifies the scaling factor for mini-batches. \nADVI optimizes the parameters of the encoder. 
They are passed to the function for ADVI.", "encoder_params = encoder.get_params()", "AEVB with ADVI\nadvi_minibatch() can be used to run AEVB with ADVI on the LDA model.", "def run_advi():\n with model:\n v_params = pm.variational.advi_minibatch(\n n=3000, minibatch_tensors=minibatch_tensors, minibatches=minibatches, \n local_RVs=local_RVs, observed_RVs=observed_RVs, encoder_params=encoder_params, \n learning_rate=2e-2, epsilon=0.1, n_mcsamples=1 \n )\n \n return v_params\n\n%time v_params = run_advi()\nplt.plot(v_params.elbo_vals)", "We can see that the ELBO increases as optimization proceeds. The trace of the ELBO looks jagged because at each iteration the documents in the mini-batch are replaced. \nExtraction of characteristic words of topics based on posterior samples\nBy using the estimated variational parameters, we can draw samples from the variational posterior. To do this, we use the function sample_vp(). Here we use this function to obtain the posterior mean of the word-topic distribution $\\beta$ and show the top 10 words that appear most frequently in each of the 10 topics.", "def print_top_words(beta, feature_names, n_top_words=10):\n for i in range(len(beta)):\n print((\"Topic #%d: \" % i) + \" \".join([feature_names[j]\n for j in beta[i].argsort()[:-n_top_words - 1:-1]]))\n\ndoc_t.set_value(docs_te.toarray()[:minibatch_size, :])\n\nwith model:\n samples = sample_vp(v_params, draws=100, local_RVs=local_RVs)\n beta_pymc3 = samples['beta'].mean(axis=0)\n\nprint_top_words(beta_pymc3, feature_names)", "We compare these topics to those obtained by a standard LDA implementation on scikit-learn, which is based on online stochastic variational inference (Hoffman et al., 2013). 
We can see that estimated words in the topics are qualitatively similar.", "from sklearn.decomposition import LatentDirichletAllocation\n\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,\n learning_method='online', learning_offset=50.,\n random_state=0)\n%time lda.fit(docs_tr)\nbeta_sklearn = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]\n\nprint_top_words(beta_sklearn, feature_names)", "Predictive distribution\nIn some papers (e.g., Hoffman et al. 2013), the predictive distribution of held-out words was proposed as a quantitative measure for goodness of the model fitness. The log-likelihood function for tokens of the held-out word can be calculated with posterior means of $\\theta$ and $\\beta$. The validity of this is explained in (Hoffman et al. 2013).", "def calc_pp(ws, thetas, beta, wix):\n \"\"\"\n Parameters\n ----------\n ws: ndarray (N,)\n Number of times the held-out word appeared in N documents. \n thetas: ndarray, shape=(N, K)\n Topic distributions for N documents. \n beta: ndarray, shape=(K, V)\n Word distributions for K topics. \n wix: int\n Index of the held-out word\n \n Return\n ------\n Log probability of held-out words.\n \"\"\"\n return ws * np.log(thetas.dot(beta[:, wix]))\n\ndef eval_lda(transform, beta, docs_te, wixs):\n \"\"\"Evaluate LDA model by log predictive probability. \n \n Parameters\n ----------\n transform: Python function\n Transform document vectors to posterior mean of topic proportions. \n wixs: iterable of int\n Word indices to be held-out. 
\n \"\"\"\n lpss = []\n docs_ = deepcopy(docs_te)\n thetass = []\n wss = []\n total_words = 0\n for wix in wixs:\n ws = docs_te[:, wix].ravel()\n if 0 < ws.sum():\n # Hold-out\n docs_[:, wix] = 0\n \n # Topic distributions\n thetas = transform(docs_)\n \n # Predictive log probability\n lpss.append(calc_pp(ws, thetas, beta, wix))\n \n docs_[:, wix] = ws\n thetass.append(thetas)\n wss.append(ws)\n total_words += ws.sum()\n else:\n thetass.append(None)\n wss.append(None)\n \n # Log-probability\n lp = np.sum(np.hstack(lpss)) / total_words\n \n return {\n 'lp': lp, \n 'thetass': thetass, \n 'beta': beta, \n 'wss': wss\n }", "To apply the above function for the LDA model, we redefine the probabilistic model because the number of documents to be tested changes. Since variational parameters have already been obtained, we can reuse them for sampling from the approximate posterior distribution.", "n_docs_te = docs_te.shape[0]\ndoc_t = shared(docs_te.toarray(), name='doc_t')\n\nwith pm.Model() as model:\n theta = Dirichlet('theta', a=(1.0 / n_topics) * np.ones((n_docs_te, n_topics)), \n shape=(n_docs_te, n_topics), transform=t_stick_breaking(1e-9))\n beta = Dirichlet('beta', a=(1.0 / n_topics) * np.ones((n_topics, n_words)), \n shape=(n_topics, n_words), transform=t_stick_breaking(1e-9))\n doc = pm.DensityDist('doc', logp_lda_doc(beta, theta), observed=doc_t)\n\n# Encoder has already been trained\nencoder.p_corruption = 0\nlocal_RVs = OrderedDict([(theta, (encoder.encode(doc_t), 1))])", "transform() function is defined with sample_vp() function. 
This function is an argument to the function for calculating log predictive probabilities.", "def transform_pymc3(docs):\n with model:\n doc_t.set_value(docs)\n samples = sample_vp(v_params, draws=100, local_RVs=local_RVs)\n \n return samples['theta'].mean(axis=0)", "The mean of the log predictive probability is about -7.00.", "%time result_pymc3 = eval_lda(transform_pymc3, beta_pymc3, docs_te.toarray(), np.arange(100))\nprint('Predictive log prob (pm3) = {}'.format(result_pymc3['lp']))", "We compare the result with the LDA implementation in scikit-learn. Its log predictive probability is significantly higher (-6.04) than that of AEVB-ADVI, though it shows similar words in the estimated topics. This may be because the mean-field approximation to distributions on the simplex (topic and/or word distributions) is less accurate. See https://gist.github.com/taku-y/f724392bc0ad633deac45ffa135414d3.", "def transform_sklearn(docs):\n thetas = lda.transform(docs)\n return thetas / thetas.sum(axis=1)[:, np.newaxis]\n\n%time result_sklearn = eval_lda(transform_sklearn, beta_sklearn, docs_te.toarray(), np.arange(100))\nprint('Predictive log prob (sklearn) = {}'.format(result_sklearn['lp']))", "Summary\nWe have seen that PyMC3 allows us to estimate the random variables of LDA, a probabilistic model with latent variables, based on automatic variational inference. Variational parameters of the local latent variables in the probabilistic model are encoded from observations. The parameters of the encoding model, an MLP in this example, are optimized together with the variational parameters of the global latent variables. Once the probabilistic and encoding models are defined, parameter optimization is done just by invoking a function (advi_minibatch()) without the need to derive complex update equations. \nUnfortunately, the estimation result was not accurate compared to LDA in sklearn, which is based on the conjugate priors and thus does not rely on the mean-field approximation. 
To improve the estimation accuracy, some researchers have proposed post-processing methods that move Monte Carlo samples so as to improve the variational lower bound (e.g., Rezende and Mohamed, 2015; Salimans et al., 2015). By implementing such methods on PyMC3, we may achieve more accurate estimation while keeping the inference automated, as shown in this notebook. \nReferences\n\nKingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. stat, 1050, 1.\nKucukelbir, A., Ranganath, R., Gelman, A., & Blei, D. (2015). Automatic variational inference in Stan. In Advances in Neural Information Processing Systems (pp. 568-576).\nBlei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993-1022.\nHoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. W. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1), 1303-1347.\nRezende, D. J., & Mohamed, S. (2015). Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770.\nSalimans, T., Kingma, D. P., & Welling, M. (2015). Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning (pp. 1218-1226)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
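The per-document log-likelihood that logp_lda_doc builds with Theano tensors in the notebook above can be checked on a toy example in plain NumPy. This is a hedged sketch with made-up toy numbers; it omits the notebook's mini-batch rescaling by n_tokens.

```python
# NumPy version of the LDA document log-likelihood:
#   log p(d | theta_d, beta) = sum_w n_w * log( sum_k theta_{d,k} * beta_{k,w} )
import numpy as np

def logp_doc(doc_counts, theta_d, beta):
    # doc_counts: (V,) term frequencies of one document
    # theta_d:    (K,) topic proportions of the document
    # beta:       (K, V) per-topic word distributions
    word_probs = theta_d @ beta  # (V,) mixture probability of each word
    nz = doc_counts > 0          # skip absent words to avoid 0 * log(0)
    return float(np.sum(doc_counts[nz] * np.log(word_probs[nz])))

theta_d = np.array([0.5, 0.5])        # K = 2 topics
beta = np.array([[0.7, 0.2, 0.1],     # topic 0 over a V = 3 word vocabulary
                 [0.1, 0.2, 0.7]])    # topic 1
doc = np.array([2, 1, 0])             # word counts of one document
# word_probs = [0.4, 0.2, 0.4], so the result is 2*log(0.4) + 1*log(0.2)
```

The Theano version applies the same sum, via logsumexp in log space, to every nonzero entry of a sparse document matrix at once.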
tjctw/PythonNote
6.041/A Trivial A Day - Bayes's rule - 1 -zh-TW.ipynb
cc0-1.0
[ "Preface\nBack in college I never managed to learn this subject well, even though the professors kept calling it trivial.\nIn this series of articles we will review some probability concepts, keeping things as simple and shallow as possible.\nProblem 1: Should I go see a doctor?\nThis is a simple probability problem. Suppose a medical nerd test is 95% accurate: a true nerd tests positive with probability 0.95, and a normal person tests negative with probability 0.95. Meanwhile, 99.9% of the current population are normal people.\nQuestion:\n1.) If we pick a person at random and test them, what is the probability of a positive result?\n$${ P}(B)={ P}(A){ P}(B\\mid A)+{ P}(A^ c){ P}(B\\mid A^ c) =0.001 \\cdot 0.95 + 0.999 \\cdot 0.05 =0.0509.$$", "pa = 0.001\npbga = 0.95\npac = 1-pa\npbgac = 0.05\nprint(\"Total probability of P(B) is \" +\n str(pa*pbga + pbgac*pac))", "2.) You tested positive, but is it real?\n$${ P}(A\\mid B)=\\frac{{ P}(A){ P}(B\\mid A)}{{ P}(B)} =\\frac{0.001 \\cdot 0.95}{0.0509} \\approx 0.01866.$$", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport mpld3\n\nmpld3.enable_notebook()\n\nfig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))\nax.grid(color='white', linestyle='solid')\n\nN = 50\nscatter = ax.scatter(np.random.normal(size=N),\n np.random.normal(size=N),\n c=np.random.random(size=N),\n s = 1000 * np.random.random(size=N),\n alpha=0.3,\n cmap=plt.cm.jet)\n\nax.set_title(\"D3 Scatter Plot\", size=18);", "A test for a certain rare disease is assumed to be correct 95% of the time: if a person has the disease, the test result is positive with probability 0.95, and if the person does not have the disease, the test result is negative with probability 0.95. A person drawn at random from a certain population has probability 0.001 of having the disease.\nFind the probability that a random person tests positive.\nGiven that the person just tested positive, what is the probability he actually has the disease?" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
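The two Bayes computations in the notebook above translate directly into a few lines of Python 3; the variable names here are my own illustrative choices.

```python
# Rare-disease (or "nerd test") numbers from the notebook above.
p_a = 0.001          # P(A): the person has the disease
p_b_given_a = 0.95   # P(B|A): positive test given the disease
p_b_given_ac = 0.05  # P(B|A^c): positive test without the disease

# Total probability: P(B) = P(A)P(B|A) + P(A^c)P(B|A^c)
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_ac

# Bayes's rule: P(A|B) = P(A)P(B|A) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b

print(p_b)          # about 0.0509
print(p_a_given_b)  # about 0.01866: a positive result is still weak evidence
```

The second number is the point of the exercise: because the disease is so rare, fewer than 2% of the people who test positive actually have it.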
tensorflow/docs
site/en/guide/migrate/multi_worker_cpu_gpu_training.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Migrate multi-worker CPU/GPU training\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/migrate/multi_worker_cpu_gpu_training\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/multi_worker_cpu_gpu_training.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/multi_worker_cpu_gpu_training.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/multi_worker_cpu_gpu_training.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis guide demonstrates how to migrate your multi-worker distributed training workflow from TensorFlow 1 to TensorFlow 2.\nTo perform multi-worker training with CPUs/GPUs:\n\nIn TensorFlow 1, you traditionally use the tf.estimator.train_and_evaluate and tf.estimator.Estimator APIs.\nIn TensorFlow 2, use 
the Keras APIs for writing the model, the loss function, the optimizer, and metrics. Then, distribute the training with Keras Model.fit API or a custom training loop (with tf.GradientTape) across multiple workers with tf.distribute.experimental.ParameterServerStrategy or tf.distribute.MultiWorkerMirroredStrategy. For more details, refer to the following tutorials:\nDistributed training with TensorFlow\nParameter server training with Keras Model.fit/a custom training loop\nMultiWorkerMirroredStrategy with Keras Model.fit\nMultiWorkerMirroredStrategy with a custom training loop.\n\nSetup\nStart with some necessary imports and a simple dataset for demonstration purposes:", "# The notebook uses a dataset instance for `Model.fit` with\n# `ParameterServerStrategy`, which depends on symbols in TF 2.7.\n# Install a utility needed for this demonstration\n!pip install portpicker\n\nimport tensorflow as tf\nimport tensorflow.compat.v1 as tf1\n\nfeatures = [[1., 1.5], [2., 2.5], [3., 3.5]]\nlabels = [[0.3], [0.5], [0.7]]\neval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]\neval_labels = [[0.8], [0.9], [1.]]", "You will need the 'TF_CONFIG' configuration environment variable for training on multiple machines in TensorFlow. Use 'TF_CONFIG' to specify the 'cluster' and the 'task's' addresses. (Learn more in the Distributed_training guide.)", "import json\nimport os\n\ntf_config = {\n 'cluster': {\n 'chief': ['localhost:11111'],\n 'worker': ['localhost:12345', 'localhost:23456', 'localhost:21212'],\n 'ps': ['localhost:12121', 'localhost:13131'],\n },\n 'task': {'type': 'chief', 'index': 0}\n}\n\nos.environ['TF_CONFIG'] = json.dumps(tf_config)", "Note: Unfortunately, since multi-worker training with tf.estimator APIs in TensorFlow 1 requires multiple clients (which would be especially tricky to be done here in this Colab notebook), you will make the notebook runnable without a 'TF_CONFIG' environment variable, so it falls back to local training. 
(Learn more in the Setting up the 'TF_CONFIG' environment variable section in the Distributed training with TensorFlow guide.)\nUse the del statement to remove the variable (but in real-world multi-worker training in TensorFlow 1, you won't have to do this):", "del os.environ['TF_CONFIG']", "TensorFlow 1: Multi-worker distributed training with tf.estimator APIs\nThe following code snippet demonstrates the canonical workflow of multi-worker training in TF1: you will use a tf.estimator.Estimator, a tf.estimator.TrainSpec, a tf.estimator.EvalSpec, and the tf.estimator.train_and_evaluate API to distribute the training:", "def _input_fn():\n return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)\n\ndef _eval_input_fn():\n return tf1.data.Dataset.from_tensor_slices(\n (eval_features, eval_labels)).batch(1)\n\ndef _model_fn(features, labels, mode):\n logits = tf1.layers.Dense(1)(features)\n loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)\n optimizer = tf1.train.AdagradOptimizer(0.05)\n train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())\n return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)\n\nestimator = tf1.estimator.Estimator(model_fn=_model_fn)\ntrain_spec = tf1.estimator.TrainSpec(input_fn=_input_fn)\neval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)\ntf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)", "TensorFlow 2: Multi-worker training with distribution strategies\nIn TensorFlow 2, distributed training across multiple workers with CPUs, GPUs, and TPUs is done via tf.distribute.Strategys.\nThe following example demonstrates how to use two such strategies: tf.distribute.experimental.ParameterServerStrategy and tf.distribute.MultiWorkerMirroredStrategy, both of which are designed for CPU/GPU training with multiple workers.\nParameterServerStrategy employs a coordinator ('chief'), which makes it more friendly with the environment in this Colab notebook. 
You will be using some utilities here to set up the supporting elements essential for a runnable experience here: you will create an in-process cluster, where threads are used to simulate the parameter servers ('ps') and workers ('worker'). For more information about parameter server training, refer to the Parameter server training with ParameterServerStrategy tutorial.\nIn this example, first define the 'TF_CONFIG' environment variable with a tf.distribute.cluster_resolver.TFConfigClusterResolver to provide the cluster information. If you are using a cluster management system for your distributed training, check if it provides 'TF_CONFIG' for you already, in which case you don't need to explicitly set this environment variable. (Learn more in the Setting up the 'TF_CONFIG' environment variable section in the Distributed training with TensorFlow guide.)", "# Find ports that are available for the `'chief'` (the coordinator),\n# `'worker'`s, and `'ps'` (parameter servers).\nimport portpicker\n\nchief_port = portpicker.pick_unused_port()\nworker_ports = [portpicker.pick_unused_port() for _ in range(3)]\nps_ports = [portpicker.pick_unused_port() for _ in range(2)]\n\n# Dump the cluster information to `'TF_CONFIG'`.\ntf_config = {\n 'cluster': {\n 'chief': [\"localhost:%s\" % chief_port],\n 'worker': [\"localhost:%s\" % port for port in worker_ports],\n 'ps': [\"localhost:%s\" % port for port in ps_ports],\n },\n 'task': {'type': 'chief', 'index': 0}\n}\nos.environ['TF_CONFIG'] = json.dumps(tf_config)\n\n# Use a cluster resolver to bridge the information to the strategy created below.\ncluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()", "Then, create tf.distribute.Servers for the workers and parameter servers one-by-one:", "# Workers need some inter_ops threads to work properly.\n# This is only needed for this notebook to demo. 
Real servers\n# should not need this.\nworker_config = tf.compat.v1.ConfigProto()\nworker_config.inter_op_parallelism_threads = 4\n\nfor i in range(3):\n tf.distribute.Server(\n cluster_resolver.cluster_spec(),\n job_name=\"worker\",\n task_index=i,\n config=worker_config)\n\nfor i in range(2):\n tf.distribute.Server(\n cluster_resolver.cluster_spec(),\n job_name=\"ps\",\n task_index=i)", "In real-world distributed training, instead of starting all the tf.distribute.Servers on the coordinator, you will be using multiple machines, and the ones that are designated as \"worker\"s and \"ps\" (parameter servers) will each run a tf.distribute.Server. Refer to Clusters in the real world section in the Parameter server training tutorial for more details.\nWith everything ready, create the ParameterServerStrategy object:", "strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver)", "Once you have created a strategy object, define the model, the optimizer, and other variables, and call the Keras Model.compile within the Strategy.scope API to distribute the training. 
(Refer to the Strategy.scope API docs for more information.)\nIf you prefer to customize your training by, for instance, defining the forward and backward passes, refer to the Training with a custom training loop section in the Parameter server training tutorial for more details.", "dataset = tf.data.Dataset.from_tensor_slices(\n (features, labels)).shuffle(10).repeat().batch(64)\n\neval_dataset = tf.data.Dataset.from_tensor_slices(\n (eval_features, eval_labels)).repeat().batch(1)\n\nwith strategy.scope():\n model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])\n optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)\n model.compile(optimizer, \"mse\")\n\nmodel.fit(dataset, epochs=5, steps_per_epoch=10)\n\nmodel.evaluate(eval_dataset, steps=10, return_dict=True)", "Partitioners (tf.distribute.experimental.partitioners)\nParameterServerStrategy in TensorFlow 2 supports variable partitioning and offers the same partitioners as TensorFlow 1, with less confusing names:\n- tf.compat.v1.variable_axis_size_partitioner -> tf.distribute.experimental.partitioners.MaxSizePartitioner: a partitioner that keeps shards under a maximum size.\n- tf.compat.v1.min_max_variable_partitioner -> tf.distribute.experimental.partitioners.MinSizePartitioner: a partitioner that allocates a minimum size per shard.\n- tf.compat.v1.fixed_size_partitioner -> tf.distribute.experimental.partitioners.FixedShardsPartitioner: a partitioner that allocates a fixed number of shards.\n\nAlternatively, you can use a MultiWorkerMirroredStrategy object:", "# To clean up the `TF_CONFIG` used for `ParameterServerStrategy`.\ndel os.environ['TF_CONFIG']\nstrategy = tf.distribute.MultiWorkerMirroredStrategy()", "You can replace the strategy used above with a MultiWorkerMirroredStrategy object to perform training with this strategy.\nAs with the tf.estimator APIs, since MultiWorkerMirroredStrategy is a multi-client strategy, there is no easy way to run distributed training in this Colab notebook. 
Therefore, replacing the code above with this strategy ends up running things locally. The Multi-worker training with Keras Model.fit/a custom training loop tutorials demonstrate how to run multi-worker training with\n the 'TF_CONFIG' variable set up, with two workers on a localhost in Colab. In practice, you would create multiple workers on external IP addresses/ports, and use the 'TF_CONFIG' variable to specify the cluster configuration for each worker.\nNext steps\nTo learn more about multi-worker distributed training with tf.distribute.experimental.ParameterServerStrategy and tf.distribute.MultiWorkerMirroredStrategy in TensorFlow 2, consider the following resources:\n\nTutorial: Parameter server training with ParameterServerStrategy and Keras Model.fit/a custom training loop\nTutorial: Multi-worker training with MultiWorkerMirroredStrategy and Keras Model.fit\nTutorial: Multi-worker training with MultiWorkerMirroredStrategy and a custom training loop\nGuide: Distributed training with TensorFlow\nGuide: Optimize TensorFlow GPU performance with the TensorFlow Profiler\nGuide: Use a GPU (the Using multiple GPUs section)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fastai/course-v3
nbs/dl2/06_cuda_cnn_hooks_init.ipynb
apache-2.0
[ "%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline\n\n#export\nfrom exp.nb_05b import *\ntorch.set_num_threads(2)", "ConvNet\nJump_to lesson 10 video", "x_train,y_train,x_valid,y_valid = get_data()", "Helper function to quickly normalize with the mean and standard deviation from our training set:", "#export\ndef normalize_to(train, valid):\n m,s = train.mean(),train.std()\n return normalize(train, m, s), normalize(valid, m, s)\n\nx_train,x_valid = normalize_to(x_train,x_valid)\ntrain_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)", "Let's check it behaved properly.", "x_train.mean(),x_train.std()\n\nnh,bs = 50,512\nc = y_train.max().item()+1\nloss_func = F.cross_entropy\n\ndata = DataBunch(*get_dls(train_ds, valid_ds, bs), c)", "To refactor layers, it's useful to have a Lambda layer that can take a basic function and convert it to a layer you can put in nn.Sequential.\nNB: if you use a Lambda layer with a lambda function, your model won't pickle so you won't be able to save it with PyTorch. 
So it's best to give a name to the function you're using inside your Lambda (like flatten below).", "#export\nclass Lambda(nn.Module):\n def __init__(self, func):\n super().__init__()\n self.func = func\n\n def forward(self, x): return self.func(x)\n\ndef flatten(x): return x.view(x.shape[0], -1)", "This one takes the flat vector of size bs x 784 and puts it back as a batch of images of 28 by 28 pixels:", "def mnist_resize(x): return x.view(-1, 1, 28, 28)", "We can now define a simple CNN.", "def get_cnn_model(data):\n return nn.Sequential(\n Lambda(mnist_resize),\n nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14\n nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7\n nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4\n nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2\n nn.AdaptiveAvgPool2d(1),\n Lambda(flatten),\n nn.Linear(32,data.c)\n )\n\nmodel = get_cnn_model(data)", "Basic callbacks from the previous notebook:", "cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]\n\nopt = optim.SGD(model.parameters(), lr=0.4)\nlearn = Learner(model, opt, loss_func, data)\nrun = Runner(cb_funcs=cbfs)\n\n%time run.fit(1, learn)", "CUDA\nThis took a long time to run, so it's time to use a GPU. 
A simple Callback can make sure the model, inputs and targets are all on the same device.\nJump_to lesson 10 video", "# Somewhat more flexible way\ndevice = torch.device('cuda',0)\n\nclass CudaCallback(Callback):\n def __init__(self,device): self.device=device\n def begin_fit(self): self.model.to(self.device)\n def begin_batch(self): self.run.xb,self.run.yb = self.xb.to(self.device),self.yb.to(self.device)\n\n# Somewhat less flexible, but quite convenient\ntorch.cuda.set_device(device)\n\n#export\nclass CudaCallback(Callback):\n def begin_fit(self): self.model.cuda()\n def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()\n\ncbfs.append(CudaCallback)\n\nmodel = get_cnn_model(data)\n\nopt = optim.SGD(model.parameters(), lr=0.4)\nlearn = Learner(model, opt, loss_func, data)\nrun = Runner(cb_funcs=cbfs)\n\n%time run.fit(3, learn)", "Now, that's definitely faster!\nRefactor model\nFirst we can regroup all the conv/relu in a single function:\nJump_to lesson 10 video", "def conv2d(ni, nf, ks=3, stride=2):\n return nn.Sequential(\n nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())", "Another thing is that we can do the mnist resize in a batch transform, that we can do with a Callback.", "#export\nclass BatchTransformXCallback(Callback):\n _order=2\n def __init__(self, tfm): self.tfm = tfm\n def begin_batch(self): self.run.xb = self.tfm(self.xb)\n\ndef view_tfm(*size):\n def _inner(x): return x.view(*((-1,)+size))\n return _inner\n\nmnist_view = view_tfm(1,28,28)\ncbfs.append(partial(BatchTransformXCallback, mnist_view))", "With the AdaptiveAvgPool, this model can now work on any size input:", "nfs = [8,16,32,32]\n\ndef get_cnn_layers(data, nfs):\n nfs = [1] + nfs\n return [\n conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)\n for i in range(len(nfs)-1)\n ] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]\n\ndef get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))", "And this helper function 
will quickly give us everything needed to run the training.", "#export\ndef get_runner(model, data, lr=0.6, cbs=None, opt_func=None, loss_func = F.cross_entropy):\n if opt_func is None: opt_func = optim.SGD\n opt = opt_func(model.parameters(), lr=lr)\n learn = Learner(model, opt, loss_func, data)\n return learn, Runner(cb_funcs=listify(cbs))\n\nmodel = get_cnn_model(data, nfs)\nlearn,run = get_runner(model, data, lr=0.4, cbs=cbfs)\n\nmodel\n\nrun.fit(3, learn)", "Hooks\nManual insertion\nLet's say we want to do some telemetry, and want the mean and standard deviation of each activation in the model. First we can do it manually like this:\nJump_to lesson 10 video", "class SequentialModel(nn.Module):\n def __init__(self, *layers):\n super().__init__()\n self.layers = nn.ModuleList(layers)\n self.act_means = [[] for _ in layers]\n self.act_stds = [[] for _ in layers]\n \n def __call__(self, x):\n for i,l in enumerate(self.layers):\n x = l(x)\n self.act_means[i].append(x.data.mean())\n self.act_stds [i].append(x.data.std ())\n return x\n \n def __iter__(self): return iter(self.layers)\n\nmodel = SequentialModel(*get_cnn_layers(data, nfs))\nlearn,run = get_runner(model, data, lr=0.9, cbs=cbfs)\n\nrun.fit(2, learn)", "Now we can have a look at the means and stds of the activations at the beginning of training.", "for l in model.act_means: plt.plot(l)\nplt.legend(range(6));\n\nfor l in model.act_stds: plt.plot(l)\nplt.legend(range(6));\n\nfor l in model.act_means: plt.plot(l[:10])\nplt.legend(range(6));\n\nfor l in model.act_stds: plt.plot(l[:10])\nplt.legend(range(6));", "PyTorch hooks\nHooks are PyTorch objects you can add to any nn.Module. 
A hook will be called when the layer it is registered to is executed during the forward pass (forward hook) or the backward pass (backward hook).\nHooks don't require us to rewrite the model.\nJump_to lesson 10 video", "model = get_cnn_model(data, nfs)\nlearn,run = get_runner(model, data, lr=0.5, cbs=cbfs)\n\nact_means = [[] for _ in model]\nact_stds = [[] for _ in model]", "A hook is attached to a layer, and needs to have a function that takes three arguments: module, input, output. Here we store the mean and std of the output in the correct position of our list.", "def append_stats(i, mod, inp, outp):\n act_means[i].append(outp.data.mean())\n act_stds [i].append(outp.data.std())\n\nfor i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))\n\nrun.fit(1, learn)\n\nfor o in act_means: plt.plot(o)\nplt.legend(range(5));", "Hook class\nWe can refactor this into a Hook class. It's very important to remove the hooks when they are deleted, otherwise there will be references kept and the memory won't be properly released when your model is deleted.\nJump_to lesson 10 video", "#export\ndef children(m): return list(m.children())\n\nclass Hook():\n def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))\n def remove(self): self.hook.remove()\n def __del__(self): self.remove()\n\ndef append_stats(hook, mod, inp, outp):\n if not hasattr(hook,'stats'): hook.stats = ([],[])\n means,stds = hook.stats\n means.append(outp.data.mean())\n stds .append(outp.data.std())", "NB: In fastai we use a bool param to choose whether to make it a forward or backward hook. 
In the above version we're only supporting forward hooks.", "model = get_cnn_model(data, nfs)\nlearn,run = get_runner(model, data, lr=0.5, cbs=cbfs)\n\nhooks = [Hook(l, append_stats) for l in children(model[:4])]\n\nrun.fit(1, learn)\n\nfor h in hooks:\n plt.plot(h.stats[0])\n h.remove()\nplt.legend(range(4));", "A Hooks class\nLet's design our own class that can contain a list of objects. It will behave a bit like a numpy array in the sense that we can index into it via:\n- a single index\n- a slice (like 1:5)\n- a list of indices\n- a mask of indices ([True,False,False,True,...])\nThe __iter__ method is there to be able to do things like for x in ....\nJump_to lesson 10 video", "#export\nclass ListContainer():\n def __init__(self, items): self.items = listify(items)\n def __getitem__(self, idx):\n if isinstance(idx, (int,slice)): return self.items[idx]\n if isinstance(idx[0],bool):\n assert len(idx)==len(self) # bool mask\n return [o for m,o in zip(idx,self.items) if m]\n return [self.items[i] for i in idx]\n def __len__(self): return len(self.items)\n def __iter__(self): return iter(self.items)\n def __setitem__(self, i, o): self.items[i] = o\n def __delitem__(self, i): del(self.items[i])\n def __repr__(self):\n res = f'{self.__class__.__name__} ({len(self)} items)\\n{self.items[:10]}'\n if len(self)>10: res = res[:-1]+ '...]'\n return res\n\nListContainer(range(10))\n\nListContainer(range(100))\n\nt = ListContainer(range(10))\nt[[1,2]], t[[False]*8 + [True,False]]", "We can use it to write a Hooks class that contains several hooks. 
We will also use it in the next notebook as a container for our objects in the data block API.", "#export\nfrom torch.nn import init\n\nclass Hooks(ListContainer):\n def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])\n def __enter__(self, *args): return self\n def __exit__ (self, *args): self.remove()\n def __del__(self): self.remove()\n\n def __delitem__(self, i):\n self[i].remove()\n super().__delitem__(i)\n \n def remove(self):\n for h in self: h.remove()\n\nmodel = get_cnn_model(data, nfs).cuda()\nlearn,run = get_runner(model, data, lr=0.9, cbs=cbfs)\n\nhooks = Hooks(model, append_stats)\nhooks\n\nhooks.remove()\n\nx,y = next(iter(data.train_dl))\nx = mnist_resize(x).cuda()\n\nx.mean(),x.std()\n\np = model[0](x)\np.mean(),p.std()\n\nfor l in model:\n if isinstance(l, nn.Sequential):\n init.kaiming_normal_(l[0].weight)\n l[0].bias.data.zero_()\n\np = model[0](x)\np.mean(),p.std()", "Having given an __enter__ and __exit__ method to our Hooks class, we can use it as a context manager. 
This makes sure that once we are out of the with block, all the hooks have been removed and aren't there to pollute our memory.", "with Hooks(model, append_stats) as hooks:\n run.fit(2, learn)\n fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))\n for h in hooks:\n ms,ss = h.stats\n ax0.plot(ms[:10])\n ax1.plot(ss[:10])\n plt.legend(range(6));\n \n fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))\n for h in hooks:\n ms,ss = h.stats\n ax0.plot(ms)\n ax1.plot(ss)\n plt.legend(range(6));", "Other statistics\nLet's store more than the means and stds and plot histograms of our activations now.\nJump_to lesson 10 video", "def append_stats(hook, mod, inp, outp):\n if not hasattr(hook,'stats'): hook.stats = ([],[],[])\n means,stds,hists = hook.stats\n means.append(outp.data.mean().cpu())\n stds .append(outp.data.std().cpu())\n hists.append(outp.data.cpu().histc(40,0,10)) #histc isn't implemented on the GPU\n\nmodel = get_cnn_model(data, nfs).cuda()\nlearn,run = get_runner(model, data, lr=0.9, cbs=cbfs)\n\nfor l in model:\n if isinstance(l, nn.Sequential):\n init.kaiming_normal_(l[0].weight)\n l[0].bias.data.zero_()\n\nwith Hooks(model, append_stats) as hooks: run.fit(1, learn)\n\n# Thanks to @ste for initial version of histogram plotting code\ndef get_hist(h): return torch.stack(h.stats[2]).t().float().log1p()", "Jump_to lesson 10 video", "fig,axes = plt.subplots(2,2, figsize=(15,6))\nfor ax,h in zip(axes.flatten(), hooks[:4]):\n ax.imshow(get_hist(h), origin='lower')\n ax.axis('off')\nplt.tight_layout()", "From the histograms, we can easily get more information, like the min or max of the activations.", "def get_min(h):\n h1 = torch.stack(h.stats[2]).t().float()\n return h1[:2].sum(0)/h1.sum(0)\n\nfig,axes = plt.subplots(2,2, figsize=(15,6))\nfor ax,h in zip(axes.flatten(), hooks[:4]):\n ax.plot(get_min(h))\n ax.set_ylim(0,1)\nplt.tight_layout()", "Generalized ReLU\nNow let's use our model with a generalized ReLU that can be shifted and with a maximum value.\nJump_to lesson 
10 video", "#export\ndef get_cnn_layers(data, nfs, layer, **kwargs):\n nfs = [1] + nfs\n return [layer(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)\n for i in range(len(nfs)-1)] + [\n nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]\n\ndef conv_layer(ni, nf, ks=3, stride=2, **kwargs):\n return nn.Sequential(\n nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))\n\nclass GeneralRelu(nn.Module):\n def __init__(self, leak=None, sub=None, maxv=None):\n super().__init__()\n self.leak,self.sub,self.maxv = leak,sub,maxv\n\n def forward(self, x): \n x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)\n if self.sub is not None: x.sub_(self.sub)\n if self.maxv is not None: x.clamp_max_(self.maxv)\n return x\n\ndef init_cnn(m, uniform=False):\n f = init.kaiming_uniform_ if uniform else init.kaiming_normal_\n for l in m:\n if isinstance(l, nn.Sequential):\n f(l[0].weight, a=0.1)\n l[0].bias.data.zero_()\n\ndef get_cnn_model(data, nfs, layer, **kwargs):\n return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))\n\ndef append_stats(hook, mod, inp, outp):\n if not hasattr(hook,'stats'): hook.stats = ([],[],[])\n means,stds,hists = hook.stats\n means.append(outp.data.mean().cpu())\n stds .append(outp.data.std().cpu())\n hists.append(outp.data.cpu().histc(40,-7,7))\n\nmodel = get_cnn_model(data, nfs, conv_layer, leak=0.1, sub=0.4, maxv=6.)\ninit_cnn(model)\nlearn,run = get_runner(model, data, lr=0.9, cbs=cbfs)\n\nwith Hooks(model, append_stats) as hooks:\n run.fit(1, learn)\n fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))\n for h in hooks:\n ms,ss,hi = h.stats\n ax0.plot(ms[:10])\n ax1.plot(ss[:10])\n h.remove()\n plt.legend(range(5));\n \n fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))\n for h in hooks:\n ms,ss,hi = h.stats\n ax0.plot(ms)\n ax1.plot(ss)\n plt.legend(range(5));\n\nfig,axes = plt.subplots(2,2, figsize=(15,6))\nfor ax,h in zip(axes.flatten(), hooks[:4]):\n ax.imshow(get_hist(h), 
origin='lower')\n ax.axis('off')\nplt.tight_layout()\n\ndef get_min(h):\n h1 = torch.stack(h.stats[2]).t().float()\n return h1[19:22].sum(0)/h1.sum(0)\n\nfig,axes = plt.subplots(2,2, figsize=(15,6))\nfor ax,h in zip(axes.flatten(), hooks[:4]):\n ax.plot(get_min(h))\n ax.set_ylim(0,1)\nplt.tight_layout()", "Jump_to lesson 10 video", "#export\ndef get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):\n model = get_cnn_model(data, nfs, layer, **kwargs)\n init_cnn(model, uniform=uniform)\n return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)\n\nsched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)]) \n\nlearn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs+[partial(ParamScheduler,'lr', sched)])\n\nrun.fit(8, learn)", "Uniform init may provide more useful initial weights (normal distribution puts a lot of them at 0).", "learn,run = get_learn_run(nfs, data, 1., conv_layer, uniform=True,\n cbs=cbfs+[partial(ParamScheduler,'lr', sched)])\n\nrun.fit(8, learn)", "Export\nHere's a handy way to export our module without needing to update the file name - after we define this, we can just use nb_auto_export() in the future (h/t Stas Bekman):", "#export\nfrom IPython.display import display, Javascript\ndef nb_auto_export():\n display(Javascript(\"\"\"{\nconst ip = IPython.notebook\nif (ip) {\n ip.save_notebook()\n console.log('a')\n const s = `!python notebook2script.py ${ip.notebook_name}`\n if (ip.kernel) { ip.kernel.execute(s) }\n}\n}\"\"\"))\n\nnb_auto_export()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
haraldschilly/amnesty-germany-jahresberichte-2015
amnesty-jahresberichte.ipynb
apache-2.0
[ "Amnesty International Deutschland: Jahresberichte 2015\nThe following steps download and extract all the Jahresberichte for the year 2015.\nIt runs in Python 3, and uses the requests and bs4 ($ conda install beautiful-soup) libraries.\nStep 1: Index page has 4 pages of up to 50 links each:", "import bs4\nimport requests\n\njbindexurl = lambda page: \"http://www.amnesty.de/laenderbericht/australien?page=%d&country=&topic=&node_type=ai_annual_report&from_month=0&from_year=&to_month=0&to_year=&submit_x=103&submit_y=13&submit=Auswahl+anzeigen&result_limit=50&form_id=ai_core_search_form\" % page\njbindices = [bs4.BeautifulSoup(requests.get(jbindexurl(i)).text) for i in range(4)]", "Step 2: downloading each linked HTML page\nFor those 4 index pages, download all linked pages where the link itself matches the given RegEx.", "import re\nar2015 = re.compile(\"Amnesty Report 2015\")\n\nreports = {}\nfor jbindex in jbindices:\n a_reports = jbindex.find_all(\"a\", text=ar2015)\n for a in a_reports:\n country = ' '.join(a.contents[0].split()[3:])\n reports[country] = requests.get(\"http://www.amnesty.de\" + a.get(\"href\")).text\n print(country, end=\", \")", "Step 3: HTML template and write to HTML file\nOnly the actual HTML of the report is written to a small HTML file. 
It's the parent of the parent of the &lt;h3&gt; header \"Amnesty Report 2015\" … and it also removes the remaining link-bar at the top.", "TMPL = \"\"\"\\\n<!DOCTYPE html>\n<html>\n<head>\n<title>Amnesty Report 2015 {country}</title>\n</head>\n<body>\n{content}\n</body>\n</html>\n\"\"\"\n\nfrom codecs import open\n\nfor country, report in reports.items():\n bs = bs4.BeautifulSoup(report)\n h3 = bs.find(\"h3\", text=ar2015)\n \n # parent of parent contains the main content\n content = h3.parent.parent\n \n # changing the h3 header to a proper h1 header\n h3.name = \"h1\"\n \n # we neither want the top bar nor the bar at the bottom for \"zurück\"\n # (extract() removes it from the DOM)\n for bar in content.find_all(\"ul\", class_ = \"ai_core_service_bar\"):\n bar.extract()\n \n # writing to html file\n with open(country.lower().replace(\" \", \"_\") + \".html\", \"w\", \"utf8\") as f:\n f.write(TMPL.format(country = country, content = str(content)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Aniruddha-Tapas/Applied-Machine-Learning
Miscellaneous/Leaf Classification.ipynb
mit
[ "Leaf Classification\n\nThe dataset we will be considering in this example is the Leaf dataset, which consists of a collection of shape and texture features extracted from digital images of leaf specimens originating from a total of 40 different plant species.\nIt can be downloaded from: https://archive.ics.uci.edu/ml/datasets/Leaf\nAttribute Information:\n\nClass (Species) \nSpecimen Number \nEccentricity \nAspect Ratio \nElongation \nSolidity \nStochastic Convexity \nIsoperimetric Factor \nMaximal Indentation Depth \nLobedness \nAverage Intensity \nAverage Contrast \nSmoothness \nThird moment \nUniformity \nEntropy", "import os\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nimport pandas as pd\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import cross_validation, metrics\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom time import time\nfrom sklearn import preprocessing\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import roc_auc_score , classification_report\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report\n\n\n# read .csv from provided dataset\ncsv_filename=\"leaf/leaf.csv\"\n\n# df=pd.read_csv(csv_filename,index_col=0)\ndf=pd.read_csv(csv_filename,\n names=[\"Class\", \"No\" , \"Eccentricity\" , \"Aspect-Ratio\" , \"Elongation\" , \"Solidity\",\n \"Stochastic-Convexity\", \"Isoperimetric-Factor\" , \"Max-Indentation-Depth\" ,\n \"Lobedness\" , \"Avg-Intensity\" , \"Avg-Contrast\" , \"Smoothness\" ,\n \"Third-Moment\" , \"Uniformity\" , \"Entropy\"])\n\ndf.head()\n\ndf.describe()\n\n#Convert species labels to numbers\nle = preprocessing.LabelEncoder()\ndf['Class'] = 
le.fit_transform(df.Class)\n\ndf['Class'].unique()\n\nfeatures = list(df.columns)\n\nfeatures.remove('Class')\n\ndf.shape\n\nfor f in features:\n\n #Get binarized columns\n df[f] = pd.get_dummies(df[f])\n \n # Build new array\n# train_data = pd.concat([hour, days, district], axis=1)\n# train_data['crime']=crime\n\ndf.head()\n\nX = df[features]\ny = df['Class']\n\nX.describe()\n\n# split dataset to 60% training and 40% testing\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)\n\nprint X_train.shape, y_train.shape", "Feature importances with forests of trees\nThis examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.ensemble import ExtraTreesClassifier\n\n# Build a classification task using 3 informative features\n\n# Build a forest and compute the feature importances\nforest = ExtraTreesClassifier(n_estimators=250,\n random_state=0)\n\nforest.fit(X, y)\nimportances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X.shape[1]):\n print(\"%d. feature %d - %s (%f) \" % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))\n\n# Plot the feature importances of the forest\nplt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')\nplt.title(\"Feature importances\")\nplt.bar(range(X.shape[1]), importances[indices],\n color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(X.shape[1]), indices)\nplt.xlim([-1, X.shape[1]])\nplt.show()\n\nimportances[indices[:5]]\n\nfor f in range(5):\n print(\"%d. 
feature %d - %s (%f)\" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))\n\nbest_features = []\nfor i in indices[:5]:\n best_features.append(features[i])\n\n# Plot the top 5 feature importances of the forest\nplt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')\nplt.title(\"Feature importances\")\nplt.bar(range(5), importances[indices][:5], \n color=\"r\", yerr=std[indices][:5], align=\"center\")\nplt.xticks(range(5), best_features)\nplt.xlim([-1, 5])\nplt.show()", "Decision Tree accuracy and time elapsed calculation", "t0=time()\nprint \"DecisionTree\"\n\ndt = DecisionTreeClassifier(min_samples_split=20,random_state=99)\n# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)\n\nclf_dt=dt.fit(X_train,y_train)\n\nprint \"Accuracy: \", clf_dt.score(X_test,y_test)\nt1=time()\nprint \"time elapsed: \", t1-t0", "cross validation for DT", "tt0=time()\nprint \"cross result========\"\nscores = cross_validation.cross_val_score(dt, X, y, cv=3)\nprint scores\nprint scores.mean()\ntt1=time()\nprint \"time elapsed: \", tt1-tt0\nprint \"\\n\"", "Tuning our hyperparameters using GridSearch", "from sklearn.metrics import classification_report\n\npipeline = Pipeline([\n ('clf', DecisionTreeClassifier(criterion='entropy'))\n])\n\nparameters = {\n 'clf__max_depth': (5, 25 , 50),\n 'clf__min_samples_split': (1, 5, 10),\n 'clf__min_samples_leaf': (1, 2, 3)\n}\n\ngrid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\nprint 'Best parameters set:'\n\nbest_parameters = grid_search.best_estimator_.get_params()\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n\npredictions = grid_search.predict(X_test)\n\nprint classification_report(y_test, predictions)", "Random Forest accuracy and time elapsed calculation", "t2=time()\nprint \"RandomForest\"\nrf 
= RandomForestClassifier(n_estimators=100,n_jobs=-1)\nclf_rf = rf.fit(X_train,y_train)\nprint \"Accuracy: \", clf_rf.score(X_test,y_test)\nt3=time()\nprint \"time elapsed: \", t3-t2", "cross validation for RF", "tt2=time()\nprint \"cross result========\"\nscores = cross_validation.cross_val_score(rf, X, y, cv=3)\nprint scores\nprint scores.mean()\ntt3=time()\nprint \"time elapsed: \", tt3-tt2\nprint \"\\n\"\n", "Tuning Models using GridSearch", "pipeline2 = Pipeline([\n('clf', RandomForestClassifier(criterion='entropy'))\n])\n\nparameters = {\n 'clf__n_estimators': (5, 25, 50, 100),\n 'clf__max_depth': (5, 25 , 50),\n 'clf__min_samples_split': (1, 5, 10),\n 'clf__min_samples_leaf': (1, 2, 3)\n}\n\ngrid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)\n\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid_search.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n\npredictions = grid_search.predict(X_test)\nprint 'Accuracy:', accuracy_score(y_test, predictions)\nprint classification_report(y_test, predictions)\n ", "Naive Bayes accuracy and time elapsed calculation", "t4=time()\nprint \"NaiveBayes\"\nnb = BernoulliNB()\nclf_nb=nb.fit(X_train,y_train)\nprint \"Accuracy: \", clf_nb.score(X_test,y_test)\nt5=time()\nprint \"time elapsed: \", t5-t4", "cross-validation for NB", "tt4=time()\nprint \"cross result========\"\nscores = cross_validation.cross_val_score(nb, X,y, cv=3)\nprint scores\nprint scores.mean()\ntt5=time()\nprint \"time elapsed: \", tt5-tt4\nprint \"\\n\"", "KNN accuracy and time elapsed calculation", "t6=time()\nprint \"KNN\"\n# knn = KNeighborsClassifier(n_neighbors=3)\nknn = KNeighborsClassifier(n_neighbors=3)\nclf_knn=knn.fit(X_train, y_train)\nprint \"Accuracy: \", clf_knn.score(X_test,y_test) \nt7=time()\nprint \"time elapsed: 
\", t7-t6", "cross validation for KNN", "tt6=time()\nprint \"cross result========\"\nscores = cross_validation.cross_val_score(knn, X,y, cv=5)\nprint scores\nprint scores.mean()\ntt7=time()\nprint \"time elapsed: \", tt7-tt6\nprint \"\\n\"", "Fine tuning the model using GridSearch", "from sklearn.svm import SVC\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import grid_search\n\nknn = KNeighborsClassifier()\n\nparameters = {'n_neighbors':[1,10]}\n\ngrid = grid_search.GridSearchCV(knn, parameters, n_jobs=-1, verbose=1, scoring='accuracy')\n\n\ngrid.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n \npredictions = grid.predict(X_test)\nprint classification_report(y_test, predictions)", "SVM accuracy and time elapsed calculation", "t7=time()\nprint \"SVM\"\n\nsvc = SVC()\nclf_svc=svc.fit(X_train, y_train)\nprint \"Accuracy: \", clf_svc.score(X_test,y_test) \nt8=time()\nprint \"time elapsed: \", t8-t7", "cross validation for SVM", "tt7=time()\nprint \"cross result========\"\nscores = cross_validation.cross_val_score(svc,X,y, cv=5)\nprint scores\nprint scores.mean()\ntt8=time()\nprint \"time elapsed: \", tt8-tt7\nprint \"\\n\"\n\nfrom sklearn.svm import SVC\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import grid_search\n\nsvc = SVC()\n\nparameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}\n\ngrid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')\n\n\ngrid.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, 
best_parameters[param_name])\n \npredictions = grid.predict(X_test)\nprint classification_report(y_test, predictions)\n\npipeline = Pipeline([\n ('clf', SVC(kernel='linear', gamma=0.01, C=10))\n])\n\nparameters = {\n 'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),\n 'clf__C': (0.1, 0.3, 1, 3, 10, 30),\n}\n\ngrid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')\n\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid_search.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n \npredictions = grid_search.predict(X_test)\nprint classification_report(y_test, predictions)", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pagutierrez/tutorial-sklearn
notebooks-spanish/15-encadenando_con_tuberias.ipynb
cc0-1.0
[ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "Chaining estimators with pipelines\nIn this section, we study how to chain estimators.\nA simple example: feature extraction and selection before applying an estimator\nFeature extraction: vectorizer\nFor some types of data, for example text data, we must apply a feature extraction step to convert them into numerical features. To demonstrate this, let's load the spam dataset that we used previously.", "import os\n\nwith open(os.path.join(\"datasets\", \"smsspam\", \"SMSSpamCollection\")) as f:\n lines = [line.strip().split(\"\\t\") for line in f.readlines()]\ntext = [x[1] for x in lines]\ny = [x[0] == \"ham\" for x in lines]\n\nfrom sklearn.model_selection import train_test_split\n\ntext_train, text_test, y_train, y_test = train_test_split(text, y)", "We applied feature extraction manually as follows:", "from sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\nvectorizer = TfidfVectorizer()\nvectorizer.fit(text_train)\n\nX_train = vectorizer.transform(text_train)\nX_test = vectorizer.transform(text_test)\n\nclf = LogisticRegression()\nclf.fit(X_train, y_train)\n\nclf.score(X_test, y_test)", "Learning a transformation and then applying it to the test data is very common in machine learning. Therefore, scikit-learn has a convenient way of doing this, called pipelines:", "from sklearn.pipeline import make_pipeline\n\npipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())\npipeline.fit(text_train, y_train)\npipeline.score(text_test, y_test)", "As you can see, this makes the code much shorter and easier to handle. Under the hood, exactly the same thing is being executed. 
When we call fit on the pipeline, each method will be called in turn.\nAfter the first fit, the transform method will be used to create a new representation. This will be used as input to the fit method of the next step, and so on. In the last step, transform is not called.\n\nIf we call score, only transform will be called at each step, generating what would be the test data. At the end, score is applied to the final representation obtained. The same goes for predict.\nBuilding pipelines not only lets us simplify the code, it is also very important for parameter tuning. Imagine we want to tune the parameter $C$ of the logistic regression above.\nWe could do it in the following (incorrect) way:", "# This code illustrates a common mistake, do not use it!\nfrom sklearn.model_selection import GridSearchCV\n\nvectorizer = TfidfVectorizer()\nvectorizer.fit(text_train)\n\nX_train = vectorizer.transform(text_train)\nX_test = vectorizer.transform(text_test)\n\nclf = LogisticRegression()\ngrid = GridSearchCV(clf, param_grid={'C': [.01, .1, 1, 10, 100]}, cv=5)\ngrid.fit(X_train, y_train)\ngrid.score(X_test, y_test)", "2.1.2 What did we do wrong?\nWe applied grid search with cross-validation using X_train. However, when we apply TfidfVectorizer we are using the whole X_train set, not just the training folds. Therefore, we could be using knowledge about the word frequencies in the test folds. This is called \"contamination\" of the test set, and it leads to overly optimistic estimates of generalization performance or to incorrectly selected parameters. 
However, we can fix this with a pipeline:", "from sklearn.model_selection import GridSearchCV\n\npipeline = make_pipeline(TfidfVectorizer(), \n LogisticRegression())\n\ngrid = GridSearchCV(pipeline,\n param_grid={'logisticregression__C': [.01, .1, 1, 10, 100]}, cv=5)\n\ngrid.fit(text_train, y_train)\ngrid.score(text_test, y_test)", "Note that we have to indicate in which part of the pipeline the parameter $C$ appears. We do this with the special __ syntax. The name before the __ is simply the lowercase name of the class, and the part after the __ is the parameter we want to tune with grid search.\n<img src=\"figures/pipeline_cross_validation.svg\" width=\"50%\">\nAnother benefit of using pipelines is that we can also search over the parameters of the feature extraction algorithms themselves using GridSearchCV:", "# This code may take quite a long time to run\nfrom sklearn.model_selection import GridSearchCV\n\npipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())\n\nparams = {'logisticregression__C': [.1, 1, 10, 100],\n \"tfidfvectorizer__ngram_range\": [(1, 1), (1, 2), (2, 2)]}\ngrid = GridSearchCV(pipeline, param_grid=params, cv=5)\ngrid.fit(text_train, y_train)\nprint(grid.best_params_)\ngrid.score(text_test, y_test)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>\n Create a pipeline using a ``StandardScaler`` and a ``RidgeRegression`` regression and apply it to the Boston housing dataset (which can be loaded with ``sklearn.datasets.load_boston``). Add the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing stage, and run a *grid* search over the degree of the polynomials (try degrees 1, 2 and 3).\n </li>\n </ul>\n</div>" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericmjl/Network-Analysis-Made-Simple
archive/2-networkx-basics-instructor.ipynb
mit
[ "import networkx as nx\nfrom datetime import datetime\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport warnings\nfrom custom import load_data as cf\n\nwarnings.filterwarnings('ignore')\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'", "Nodes and Edges: How do we represent relationships between individuals using NetworkX?\nAs mentioned earlier, networks, also known as graphs, are comprised of individual entities and their connections. The technical terms for these are nodes and edges, and when we draw them we typically use circles (nodes) and lines (edges). \nIn this notebook, we will work with a social network of seventh graders, in which nodes are individual students, and edges represent their relationships. Edges between individuals show how often the seventh graders indicated other seventh graders as their favourite.\nData credit: http://konect.cc/networks/moreno_seventh\nData Representation\nIn the networkx implementation, graph objects store their data in dictionaries. \nNodes are part of the attribute Graph.node, which is a dictionary where the key is the node ID and the values are a dictionary of attributes. \nEdges are part of the attribute Graph.edge, which is a nested dictionary. Data are accessed as such: G.edge[node1, node2]['attr_name'].\nBecause of the dictionary implementation of the graph, any hashable object can be a node. This means strings and tuples, but not lists and sets.\nLoad Data\nLet's load some real network data to get a feel for the NetworkX API. This dataset comes from a study of 7th grade students.\n\nThis directed network contains proximity ratings between students from 29 seventh grade students from a school in Victoria. Among other questions the students were asked to nominate their preferred classmates for three different activities. A node represents a student. An edge between two nodes shows that the left student picked the right student as his answer. 
The edge weights are between 1 and 3 and show how often the left student chose the right student as his favourite.", "G = cf.load_seventh_grader_network()", "Basic Network Statistics\nLet's first understand how many students and friendships are represented in the network.", "len(G.nodes())\n\n# Who are represented in the network?\nlist(G.nodes())[0:5]", "Exercise\nCan you write a single line of code that returns the number of nodes in the graph? (1 min.)", "len(G.nodes())\n# len(G)", "Let's now figure out who is connected to who in the network", "# Who is connected to who in the network?\n# list(G.edges())[0:5]\nlist(G.edges())[0:5]", "Exercise\nCan you write a single line of code that returns the number of relationships represented? (1 min)", "len(G.edges()) ", "Concept\nA network, more technically known as a graph, is comprised of:\n\na set of nodes\njoined by a set of edges\n\nThey can be represented as two lists:\n\nA node list: a list of 2-tuples where the first element of each tuple is the representation of the node, and the second element is a dictionary of metadata associated with the node.\nAn edge list: a list of 3-tuples where the first two elements are the nodes that are connected together, and the third element is a dictionary of metadata associated with the edge.\n\nSince this is a social network of people, there'll be attributes for each individual, such as a student's gender. We can grab that data off from the attributes that are stored with each node.", "# Let's get a list of nodes with their attributes.\nlist(G.nodes(data=True))[0:5]\n# G.nodes(data=True)\n\n# NetworkX will return a list of tuples in the form (node_id, attribute_dictionary) ", "Exercise\nCan you count how many males and females are represented in the graph? 
(3 min.)\nHint: You may want to use the Counter object from the collections module.", "from collections import Counter\nmf_counts = Counter([d['gender'] \n for n, d in G.nodes(data=True)])\n\ndef test_answer(mf_counts):\n assert mf_counts['female'] == 17\n assert mf_counts['male'] == 12\n \ntest_answer(mf_counts)", "Edges can also store attributes in their attribute dictionary.", "list(G.edges(data=True))[0:5]", "In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the \"count\" variable.\nExercise\nCan you figure out the maximum times any student rated another student as their favourite? (3 min.)", "# Answer\ncounts = [d['count'] for n1, n2, d in G.edges(data=True)]\nmaxcount = max(counts)\n\ndef test_maxcount(maxcount):\n assert maxcount == 3\n \ntest_maxcount(maxcount)", "Exercise\nWe found out that there are two individuals that we left out of the network, individual no. 30 and 31. They are one male (30) and one female (31), and they are a pair that just love hanging out with one another and with individual 7 (count=3), in both directions per pair. Add this information to the graph. 
(5 min.)\nIf you need more help, check out https://networkx.github.io/documentation/stable/tutorial.html", "# Answer\nG.add_node(30, gender='male')\nG.add_node(31, gender='female')\nG.add_edge(30, 31, count=3)\nG.add_edge(31, 30, count=3) # reverse is optional in undirected network\nG.add_edge(30, 7, count=3) # but this network is directed\nG.add_edge(7, 30, count=3)\nG.add_edge(31, 7, count=3)\nG.add_edge(7, 31, count=3)", "Verify that you have added in the edges and nodes correctly by running the following cell.", "def test_graph_integrity(G):\n assert 30 in G.nodes()\n assert 31 in G.nodes()\n assert G.nodes[30]['gender'] == 'male'\n assert G.nodes[31]['gender'] == 'female'\n assert G.has_edge(30, 31)\n assert G.has_edge(30, 7)\n assert G.has_edge(31, 7)\n assert G.edges[30, 7]['count'] == 3\n assert G.edges[7, 30]['count'] == 3\n assert G.edges[31, 7]['count'] == 3\n assert G.edges[7, 31]['count'] == 3\n assert G.edges[30, 31]['count'] == 3\n assert G.edges[31, 30]['count'] == 3\n print('All tests passed.')\n \ntest_graph_integrity(G)", "Exercise (break-time)\nIf you would like a challenge during the break, try figuring out which students have \"unrequited\" friendships, that is, they have rated another student as their favourite at least once, but that other student has not rated them as their favourite at least once.\nSpecifically, get a list of edges for which the reverse edge is not present.\nHint: You may need the class method G.has_edge(n1, n2). 
This returns whether a graph has an edge between the nodes n1 and n2.", "unrequitted_friendships = []\nfor n1, n2 in G.edges():\n if not G.has_edge(n2, n1):\n unrequitted_friendships.append((n1, n2))\nassert len(unrequitted_friendships) == 124", "In a previous session at ODSC East 2018, a few other class participants provided the following solutions.\nThis one by @schwanne is the list comprehension version of the above solution:", "len([(n1, n2) for n1, n2 in G.edges() if not G.has_edge(n2, n1)])", "This one by @end0 is a unique one involving sets.", "links = ((n1, n2) for n1, n2, d in G.edges(data=True))\nreverse_links = ((n2, n1) for n1, n2, d in G.edges(data=True))\n\nlen(list(set(links) - set(reverse_links)))", "Tests\nA note about the tests: Testing is good practice when writing code. Well-crafted assertion statements help you program defensively, by forcing you to explicitly state your assumptions about the code or data.\nFor more references on defensive programming, check out Software Carpentry's website: http://swcarpentry.github.io/python-novice-inflammation/08-defensive/\nFor more information on writing tests for your data, check out these slides from a lightning talk I gave at Boston Python and SciPy 2015: http://j.mp/data-test\nCoding Patterns\nThese are some recommended coding patterns when doing network analysis using NetworkX, which stem from my roughly two years of experience with the package.\nIterating using List Comprehensions\nI would recommend that you use the following for compactness: \n[d['attr'] for n, d in G.nodes(data=True)]\n\nAnd if the node is unimportant, you can do:\n[d['attr'] for _, d in G.nodes(data=True)]\n\nIterating over Edges using List Comprehensions\nA similar pattern can be used for edges:\n[n2 for n1, n2, d in G.edges(data=True)]\n\nor\n[n2 for _, n2, d in G.edges(data=True)]\n\nIf the graph you are constructing is a directed graph, with a \"source\" and \"sink\" available, then I would recommend the following 
pattern:\n[(sc, sk) for sc, sk, d in G.edges(data=True)]\n\nor \n[d['attr'] for sc, sk, d in G.edges(data=True)]\n\nDrawing Graphs\nAs illustrated above, we can draw graphs using the nx.draw() function. The most popular format for drawing graphs is the node-link diagram.\nHairballs\nNodes are circles and lines are edges. Nodes more tightly connected with one another are clustered together. Large graphs end up looking like hairballs.", "nx.draw(G)", "If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.", "nx.draw(G, with_labels=True)", "However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.\nMatrix Plot\nInstead, we can use a matrix to represent them. The nodes are on the x- and y- axes, and a filled square represents an edge between the nodes. This is done by using the MatrixPlot object from nxviz.", "from nxviz import MatrixPlot\n\nm = MatrixPlot(G)\nm.draw()\nplt.show()", "Arc Plot\nThe Arc Plot is the basis of the next set of rational network visualizations.", "from nxviz import ArcPlot\n\na = ArcPlot(G, node_color='gender', node_grouping='gender')\na.draw()", "Circos Plot\nLet's try another visualization, the Circos plot. We can order the nodes in the Circos plot according to the node ID, but any other ordering is possible as well. Edges are drawn between two nodes.\nCredit goes to Justin Zabilansky (MIT) for the implementation, Jon Charest for subsequent improvements, and nxviz contributors for further development.", "from nxviz import CircosPlot\n\nc = CircosPlot(G, node_color='gender', node_grouping='gender')\nc.draw()\nplt.savefig('images/seventh.png', dpi=300)", "This visualization helps us highlight nodes that are poorly connected, and others that are strongly connected.\nHive Plot\nNext up, let's try Hive Plots. 
HivePlots are not yet implemented in nxviz, so we're going to be using the old hiveplot API for this. When HivePlots have been migrated over to nxviz, its API will resemble that of the CircosPlot's.", "from hiveplot import HivePlot\n\nnodes = dict()\nnodes['male'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'male']\nnodes['female'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'female']\n\nedges = dict()\nedges['group1'] = G.edges(data=True)\n\nnodes_cmap = dict()\nnodes_cmap['male'] = 'blue'\nnodes_cmap['female'] = 'red'\n\nedges_cmap = dict()\nedges_cmap['group1'] = 'black'\n\nh = HivePlot(nodes, edges, nodes_cmap, edges_cmap)\nh.draw()", "Hive plots allow us to divide our nodes into sub-groups, and visualize the within- and between-group connectivity." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
myfunprograms/machine-learning
finding_donors/finding_donors_original.ipynb
apache-2.0
[ "Machine Learning Engineer Nanodegree\nSupervised Learning\nProject: Finding Donors for CharityML\nWelcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Please specify WHICH VERSION OF PYTHON you are using when submitting this notebook. Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. 
Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features. \nThe dataset for this project originates from the UCI Machine Learning Repository. The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article \"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid\". You can find the article by Ron Kohavi online. The data we investigate here consists of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries.\n\nExploring the Data\nRun the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, 'income', will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom time import time\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualization code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the Census dataset\ndata = pd.read_csv(\"census.csv\")\n\n# Success - Display the first record\ndisplay(data.head(n=1))", "Implementation: Data Exploration\nA cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \\$50,000. 
In the code cell below, you will need to compute the following:\n- The total number of records, 'n_records'\n- The number of individuals making more than \\$50,000 annually, 'n_greater_50k'.\n- The number of individuals making at most \\$50,000 annually, 'n_at_most_50k'.\n- The percentage of individuals making more than \\$50,000 annually, 'greater_percent'.\nHint: You may need to look at the table above to understand how the 'income' entries are formatted.", "# TODO: Total number of records\nn_records = None\n\n# TODO: Number of records where individual's income is more than $50,000\nn_greater_50k = None\n\n# TODO: Number of records where individual's income is at most $50,000\nn_at_most_50k = None\n\n# TODO: Percentage of individuals whose income is more than $50,000\ngreater_percent = None\n\n# Print the results\nprint \"Total number of records: {}\".format(n_records)\nprint \"Individuals making more than $50,000: {}\".format(n_greater_50k)\nprint \"Individuals making at most $50,000: {}\".format(n_at_most_50k)\nprint \"Percentage of individuals making more than $50,000: {:.2f}%\".format(greater_percent)", "Preparing the Data\nBefore data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.\nTransforming Skewed Continuous Features\nA dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. 
With the census dataset two features fit this description: 'capital-gain' and 'capital-loss'. \nRun the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.", "# Split the data into features and target label\nincome_raw = data['income']\nfeatures_raw = data.drop('income', axis = 1)\n\n# Visualize skewed continuous features of original data\nvs.distribution(data)", "For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href=\"https://en.wikipedia.org/wiki/Data_transformation_(statistics)\">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation however: The logarithm of 0 is undefined, so we must translate the values by a small amount above 0 to apply the logarithm successfully.\nRun the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.", "# Log-transform the skewed features\nskewed = ['capital-gain', 'capital-loss']\nfeatures_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))\n\n# Visualize the new log distributions\nvs.distribution(features_raw, transformed = True)", "Normalizing Numerical Features\nIn addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. 
Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exemplified below.\nRun the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this.", "# Import sklearn.preprocessing.MinMaxScaler\nfrom sklearn.preprocessing import MinMaxScaler\n\n# Initialize a scaler, then apply it to the features\nscaler = MinMaxScaler()\nnumerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']\nfeatures_raw[numerical] = scaler.fit_transform(data[numerical])\n\n# Show an example of a record with scaling applied\ndisplay(features_raw.head(n = 1))", "Implementation: Data Preprocessing\nFrom the table in Exploring the Data above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is by using the one-hot encoding scheme. One-hot encoding creates a \"dummy\" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible entries: A, B, or C. We then encode this feature into someFeature_A, someFeature_B and someFeature_C.\n| | someFeature | | someFeature_A | someFeature_B | someFeature_C |\n| :-: | :-: | | :-: | :-: | :-: |\n| 0 | B | | 0 | 1 | 0 |\n| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |\n| 2 | A | | 1 | 0 | 0 |\nAdditionally, as with the non-numeric features, we need to convert the non-numeric target label, 'income' to numerical values for the learning algorithm to work. Since there are only two possible categories for this label (\"<=50K\" and \">50K\"), we can avoid using one-hot encoding and simply encode these two categories as 0 and 1, respectively. 
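The one-hot scheme in the table above can be sketched by hand, without pandas, using the hypothetical someFeature values (the real cell uses pandas.get_dummies on the census columns):

```python
# One-hot encode the hypothetical 'someFeature' column from the table:
# one indicator column per category, a single 1 per row.
values = ["B", "C", "A"]
categories = sorted(set(values))  # ['A', 'B', 'C']

one_hot = [[1 if v == c else 0 for c in categories] for v in values]
```

Each row of one_hot reproduces the corresponding row of the table.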
In the code cell below, you will need to implement the following:\n - Use pandas.get_dummies() to perform one-hot encoding on the 'features_raw' data.\n - Convert the target label 'income_raw' to numerical entries.\n - Set records with \"<=50K\" to 0 and records with \">50K\" to 1.", "# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()\nfeatures = None\n\n# TODO: Encode the 'income_raw' data to numerical values\nincome = None\n\n# Print the number of features after one-hot encoding\nencoded = list(features.columns)\nprint \"{} total features after one-hot encoding.\".format(len(encoded))\n\n# Uncomment the following line to see the encoded feature names\n#print encoded", "Shuffle and Split Data\nNow all categorical variables have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.\nRun the code cell below to perform this split.", "# Import train_test_split\nfrom sklearn.cross_validation import train_test_split\n\n# Split the 'features' and 'income' data into training and testing sets\nX_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)\n\n# Show the results of the split\nprint \"Training set has {} samples.\".format(X_train.shape[0])\nprint \"Testing set has {} samples.\".format(X_test.shape[0])", "Evaluating Model Performance\nIn this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a naive predictor.\nMetrics and the Naive Predictor\nCharityML, equipped with their research, knows individuals that make more than \\$50,000 are most likely to donate to their charity. 
Because of this, CharityML is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using accuracy as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that does not make more than \$50,000 as someone who does would be detrimental to CharityML, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is more important than the model's ability to recall those individuals. We can use F-beta score as a metric that considers both precision and recall:\n$$ F_{\\beta} = (1 + \\beta^2) \\cdot \\frac{precision \\cdot recall}{\\left( \\beta^2 \\cdot precision \\right) + recall} $$\nIn particular, when $\\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or F-score for simplicity).\nLooking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect accuracy, since we could simply say \"this person does not make more than \$50,000\" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: If we predicted all people made less than \$50,000, CharityML would identify no one as donors. 
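The always-positive benchmark can be sketched on a toy label vector (made up for illustration; the real cell must use the census counts): with every record predicted positive, recall is 1 and precision equals accuracy, so the F-beta formula above collapses nicely.

```python
# Naive "everyone earns >$50K" benchmark on toy labels (1 = '>50K').
labels = [1, 1, 0, 0, 0, 0, 0, 0]  # 2 positives out of 8

tp = sum(labels)                   # every record is predicted positive
fp = len(labels) - tp
accuracy = tp / float(len(labels))
precision = tp / float(tp + fp)    # same as accuracy for this predictor
recall = 1.0                       # no positive is ever missed

beta = 0.5
fscore = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```

The same two lines at the bottom apply unchanged once accuracy is computed from the real 'n_greater_50k' and 'n_records'.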
\nQuestion 1 - Naive Predictor Performance\nIf we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset?\nNote: You must use the code cell below and assign your results to 'accuracy' and 'fscore' to be used later.", "# TODO: Calculate accuracy\naccuracy = None\n\n# TODO: Calculate F-score using the formula above for beta = 0.5\nfscore = None\n\n# Print the results \nprint \"Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]\".format(accuracy, fscore)", "Supervised Learning Models\nThe following supervised learning models are currently available in scikit-learn that you may choose from:\n- Gaussian Naive Bayes (GaussianNB)\n- Decision Trees\n- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)\n- K-Nearest Neighbors (KNeighbors)\n- Stochastic Gradient Descent Classifier (SGDC)\n- Support Vector Machines (SVM)\n- Logistic Regression\nQuestion 2 - Model Application\nList three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen:\n- Describe one real-world application in industry where the model can be applied. (You may need to do research for this — give references!)\n- What are the strengths of the model; when does it perform well?\n- What are the weaknesses of the model; when does it perform poorly?\n- What makes this model a good candidate for the problem, given what you know about the data?\nAnswer: \nImplementation - Creating a Training and Predicting Pipeline\nTo properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. 
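The start/end timing pattern the pipeline asks for can be sketched with time.time() alone (the timed helper and the sum call below are stand-ins for illustration, not part of the project code):

```python
import time

# Wrap any slow call in two clock readings and store the difference,
# exactly as train_predict does for fitting and predicting.
def timed(fn, *args):
    start = time.time()          # get start time
    out = fn(*args)
    elapsed = time.time() - start  # seconds spent inside fn
    return out, elapsed

result, train_time = timed(sum, range(100000))
```

In train_predict the same pattern fills results['train_time'] and results['pred_time'].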
Your implementation here will be used in the following section.\nIn the code block below, you will need to implement the following:\n - Import fbeta_score and accuracy_score from sklearn.metrics.\n - Fit the learner to the sampled training data and record the training time.\n - Perform predictions on the test data X_test, and also on the first 300 training points X_train[:300].\n - Record the total prediction time.\n - Calculate the accuracy score for both the training subset and testing set.\n - Calculate the F-score for both the training subset and testing set.\n - Make sure that you set the beta parameter!", "# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score\n\ndef train_predict(learner, sample_size, X_train, y_train, X_test, y_test): \n '''\n inputs:\n - learner: the learning algorithm to be trained and predicted on\n - sample_size: the size of samples (number) to be drawn from training set\n - X_train: features training set\n - y_train: income training set\n - X_test: features testing set\n - y_test: income testing set\n '''\n \n results = {}\n \n # TODO: Fit the learner to the training data using slicing with 'sample_size'\n start = time() # Get start time\n learner = None\n end = time() # Get end time\n \n # TODO: Calculate the training time\n results['train_time'] = None\n \n # TODO: Get the predictions on the test set,\n # then get predictions on the first 300 training samples\n start = time() # Get start time\n predictions_test = None\n predictions_train = None\n end = time() # Get end time\n \n # TODO: Calculate the total prediction time\n results['pred_time'] = None\n \n # TODO: Compute accuracy on the first 300 training samples\n results['acc_train'] = None\n \n # TODO: Compute accuracy on test set\n results['acc_test'] = None\n \n # TODO: Compute F-score on the the first 300 training samples\n results['f_train'] = None\n \n # TODO: Compute F-score on the test set\n results['f_test'] = None\n \n # Success\n print \"{} trained on 
{} samples.\".format(learner.__class__.__name__, sample_size)\n \n # Return the results\n return results", "Implementation: Initial Model Evaluation\nIn the code cell, you will need to implement the following:\n- Import the three supervised learning models you've discussed in the previous section.\n- Initialize the three models and store them in 'clf_A', 'clf_B', and 'clf_C'.\n - Use a 'random_state' for each model you use, if provided.\n - Note: Use the default settings for each model — you will tune one specific model in a later section.\n- Calculate the number of records equal to 1%, 10%, and 100% of the training data.\n - Store those values in 'samples_1', 'samples_10', and 'samples_100' respectively.\nNote: Depending on which algorithms you chose, the following implementation may take some time to run!", "# TODO: Import the three supervised learning models from sklearn\n\n# TODO: Initialize the three models\nclf_A = None\nclf_B = None\nclf_C = None\n\n# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data\nsamples_1 = None\nsamples_10 = None\nsamples_100 = None\n\n# Collect results on the learners\nresults = {}\nfor clf in [clf_A, clf_B, clf_C]:\n clf_name = clf.__class__.__name__\n results[clf_name] = {}\n for i, samples in enumerate([samples_1, samples_10, samples_100]):\n results[clf_name][i] = \\\n train_predict(clf, samples, X_train, y_train, X_test, y_test)\n\n# Run metrics visualization for the three supervised learning models chosen\nvs.evaluate(results, accuracy, fscore)", "Improving Results\nIn this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F-score. 
\nQuestion 3 - Choosing the Best Model\nBased on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \\$50,000.\nHint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data.\nAnswer: \nQuestion 4 - Describing the Model in Layman's Terms\nIn one to two paragraphs, explain to CharityML, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.\nAnswer: \nImplementation: Model Tuning\nFine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. 
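Before wiring up GridSearchCV, the search it automates can be sketched by hand: try every value in the parameter grid, score each candidate, keep the best. The scoring function below is a made-up stand-in for "fit then evaluate with the F-score scorer", not a real model.

```python
# Pretend scorer: in reality this would fit a classifier with the given
# max_depth and return its F-score on held-out data.
def score(depth):
    return 1.0 - abs(depth - 4) / 10.0  # made-up: depth=4 is "optimal"

parameters = {'max_depth': [2, 4, 8]}

best_value, best_score = None, -1.0
for depth in parameters['max_depth']:
    s = score(depth)
    if s > best_score:
        best_value, best_score = depth, s
```

GridSearchCV does this loop (with cross-validation) for you and exposes the winner as best_estimator_.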
In the code cell below, you will need to implement the following:\n- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.\n- Initialize the classifier you've chosen and store it in clf.\n - Set a random_state if one is available to the same state you set before.\n- Create a dictionary of parameters you wish to tune for the chosen model.\n - Example: parameters = {'parameter' : [list of values]}.\n - Note: Avoid tuning the max_features parameter of your learner if that parameter is available!\n- Use make_scorer to create an fbeta_score scoring object (with $\\beta = 0.5$).\n- Perform grid search on the classifier clf using the 'scorer', and store it in grid_obj.\n- Fit the grid search object to the training data (X_train, y_train), and store it in grid_fit.\nNote: Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!", "# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries\n\n# TODO: Initialize the classifier\nclf = None\n\n# TODO: Create the parameters list you wish to tune\nparameters = None\n\n# TODO: Make an fbeta_score scoring object\nscorer = None\n\n# TODO: Perform grid search on the classifier using 'scorer' as the scoring method\ngrid_obj = None\n\n# TODO: Fit the grid search object to the training data and find the optimal parameters\ngrid_fit = None\n\n# Get the estimator\nbest_clf = grid_fit.best_estimator_\n\n# Make predictions using the unoptimized and optimized models\npredictions = (clf.fit(X_train, y_train)).predict(X_test)\nbest_predictions = best_clf.predict(X_test)\n\n# Report the before-and-after scores\nprint \"Unoptimized model\\n------\"\nprint \"Accuracy score on testing data: {:.4f}\".format(accuracy_score(y_test, predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, predictions, beta = 0.5))\nprint \"\\nOptimized Model\\n------\"\nprint \"Final accuracy score on the testing data: {:.4f}\".format(accuracy_score(y_test, 
best_predictions))\nprint \"Final F-score on the testing data: {:.4f}\".format(fbeta_score(y_test, best_predictions, beta = 0.5))", "Question 5 - Final Model Evaluation\nWhat is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in Question 1?\nNote: Fill in the table below with your results, and then provide discussion in the Answer box.\nResults:\n| Metric | Benchmark Predictor | Unoptimized Model | Optimized Model |\n| :------------: | :-----------------: | :---------------: | :-------------: | \n| Accuracy Score | | | |\n| F-score | | | EXAMPLE |\nAnswer: \n\nFeature Importance\nAn important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.\nChoose a scikit-learn classifier (e.g., adaboost, random forests) that has a feature_importances_ attribute, which is a function that ranks the importance of features according to the chosen classifier. 
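As a small, library-free sketch of what ranking a fitted model's feature_importances_ vector involves (the feature names and weights below are made up for illustration, not real census importances):

```python
# Sort feature names by their importance weight, largest first, then keep
# the top entries -- the same idea as np.argsort(importances)[::-1][:5].
importances = {'age': 0.05, 'capital-gain': 0.30, 'hours-per-week': 0.10,
               'education-num': 0.25, 'occupation': 0.02}

ranked = sorted(importances, key=importances.get, reverse=True)
top_two = ranked[:2]
```

The later Feature Selection cell applies exactly this ranking to pick its five reduced columns.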
In the next python cell fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.\nQuestion 6 - Feature Relevance Observation\nWhen Exploring the Data, it was shown there are thirteen available features for each individual on record in the census data.\nOf these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why?\nAnswer:\nImplementation - Extracting Feature Importance\nChoose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available for it. This attribute is a function that ranks the importance of each feature when making predictions based on the chosen algorithm.\nIn the code cell below, you will need to implement the following:\n - Import a supervised learning model from sklearn if it is different from the three used earlier.\n - Train the supervised model on the entire training set.\n - Extract the feature importances using '.feature_importances_'.", "# TODO: Import a supervised learning model that has 'feature_importances_'\n\n# TODO: Train the supervised model on the training set \nmodel = None\n\n# TODO: Extract the feature importances\nimportances = None\n\n# Plot\nvs.feature_plot(importances, X_train, y_train)", "Question 7 - Extracting Feature Importance\nObserve the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.\nHow do these five features compare to the five features you discussed in Question 6? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant?\nAnswer:\nFeature Selection\nHow does a model perform if we only use a subset of all the available features in the data? 
With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of all features present in the data. This hints that we can attempt to reduce the feature space and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set with only the top five important features.", "# Import functionality for cloning a model\nfrom sklearn.base import clone\n\n# Reduce the feature space\nX_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]\nX_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]\n\n# Train on the \"best\" model found from grid search earlier\nclf = (clone(best_clf)).fit(X_train_reduced, y_train)\n\n# Make new predictions\nreduced_predictions = clf.predict(X_test_reduced)\n\n# Report scores from the final model using both versions of data\nprint \"Final Model trained on full data\\n------\"\nprint \"Accuracy on testing data: {:.4f}\".format(accuracy_score(y_test, best_predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, best_predictions, beta = 0.5))\nprint \"\\nFinal Model trained on reduced data\\n------\"\nprint \"Accuracy on testing data: {:.4f}\".format(accuracy_score(y_test, reduced_predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))", "Question 8 - Effects of Feature Selection\nHow does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?\nIf training time was a factor, would you consider using the reduced data as your training set?\nAnswer:\n\nNote: Once you have completed all of the code 
implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
justiceamoh/ENGS108
vagrant/solutions/hw1.ipynb
apache-2.0
[ "HW #1\nImports and dependencies", "import numpy as np\n\nfrom scipy.stats import norm\nfrom scipy.io import loadmat", "Problem #1\nLet $r_j$ be the $j$th row of $A$, and let $x_j$ be the solution to \n$$Ax_j = r_j$$", "# Load the matrix into memory, assuming for now that it is stored in the home directory\nA = loadmat(\"../dataset/hw1/A11F17108.mat\")['A']\n\n# Obtain x, where Ax[:,i] = A[i,:].T\nx = np.linalg.lstsq(A,A.T)[0]\n\n# Test\nfor i in range(200):\n if not np.allclose(np.dot(A,x[:,i]), A[i,:]):\n print(\"Fails on index %d\" % i)\n break\n if i == 199:\n print(\"Succeeds!\")", "Part (a)\nSolve\n$$\\textrm{argmin}_{j \\in [199]} \\|x_j\\|_2$$", "norms = np.linalg.norm(x,axis=0)\n\n# Test\nfor i in range(200):\n if not np.allclose(np.sqrt(np.sum(np.power(x[:,i],2))), norms[i]):\n print(\"Fails on index %d\" % i)\n break\n if i == 199:\n print(\"Succeeds!\")\n\nnp.argmin(norms)\n\n# Test\nm = np.inf\nind = 0\nfor i in range(200):\n if norms[i] < m:\n m = norms[i]\n ind = i\nprint(ind)", "Part (b)\nSolve\n$$\\textrm{argmax}_{j \\in [199]} \\|x_j\\|_2$$", "np.argmax(norms)\n\n# Test\nm = 0\nind = 0\nfor i in range(200):\n if norms[i] > m:\n m = norms[i]\n ind = i\nprint(ind)", "Part (c)\nWhat is the average value of $\\|x_j\\|_2$ over all $j \\in [199]$?", "np.mean(norms)", "Problem #2\nPart (a)\nLet $y$ be a vector orthogonal to $r_1^T, \\dots, r_{n-1}^T$ with $\\|y\\|_2 = 1$ and $y_1 > 0$. 
What are $y_3$, $y_{12}$, and $y_{37}$?", "# Compute the SVD of A with the last row clipped, and use the last row of V\nu,s,v = np.linalg.svd(A[:-1,:])\nns = v[-1,:]\n\n# Test\ntarget = np.zeros((199,1))\nfor i in range(199):\n if not np.allclose(np.dot(A[i,:],ns),target):\n print(\"Fails on index %d\" %i)\n break\n if i == 198:\n print(\"Succeeds!\")\n\nnp.linalg.norm(ns)\n\n# check orientation\nns[0]\n\n# return relevant indices\n(ns[2],ns[11],ns[36])\n\n# Just in case of an oopsie\n(ns[3],ns[12],ns[37])\n\n# Test\nfor i in range(0,199):\n if not np.allclose(np.dot(ns,A[i,:]),0):\n print(\"Failed at index %d\" %i)\n break\n if i == 198:\n print(\"Succeeds!\")\n\nq,r = np.linalg.qr(A[:-1,:].T, mode=\"complete\")\n\ny = q[:,-1]\n\ny[0]\n\ny = -y\n\n(y[2],y[11],y[36])\n\n(y[3],y[12],y[37])\n\n# Test\nfor i in range(0,199):\n if not np.allclose(np.dot(y,A[i,:]),0):\n print(\"Failed at index %d\" %i)\n break\n if i == 198:\n print(\"Succeeds!\")", "Part (b)\nWhat is the ratio $\\frac{\\sigma_1}{\\sigma_n}$, where $\\sigma_i$ is the $i$th largest singular value of $A$?", "u,s,v = np.linalg.svd(A)\ns[0]/s[-1]", "Problem #3\nWhat are the first six terms of the Taylor series expansion at a point $(\\alpha, \\beta)$ for the function\n$$f(x,y) = x^2 + (y-2)^4 + x^2 y^4$$\n$$\\alpha^2 + (\\beta-2)^4 + \\alpha^2\\beta^4 + 2\\alpha(1+\\beta^4)(x-\\alpha) + 4(\\alpha^2\\beta^3 + (\\beta-2)^3)(y-\\beta)$$\nProblem #4\nFor the same $f(x,y)$ as above, what is $\\nabla f (3,1)$?\ngradient\nevaluation\nProblem #5\nConsider the curves\n$$C_1 = \\{(x,y) \\mid y = 1/x, x > 0 \\}$$\n$$C_2 = \\{(w,z) \\mid (w-5)^2 + (z-2)^2 = 1 \\}$$\nWhat points on $C_1$ and $C_2$ are closest to each other?\nConsider solving\n$$\\min_{\\substack{(x,y) \\in C_1 \\\\ (w,z) \\in C_2}} d((x,y),(w,z))$$\nThis is equivalent to\n$$\\min_{\\substack{(x,y) \\in C_1 \\\\ (w,z) \\in C_2}} (x - w)^2 + (y - z)^2$$\nSince $C_2$ defines a circle centered at $(5,2)$, a simple argument suffices to show that this is equivalent to 
solving\n$$\\min_{(x,y) \\in C_1} (x - 5)^2 + (y - 2)^2$$\nBy substitution of $y$, using the definition of $C_1$, this is equivalent to \n$$\\min_{x > 0} \\quad (x - 5)^2 + (x^{-1} - 2 )^2$$\nThe derivative of this function is \n$$2(-x^{-3} + 2x^{-2} + x -5)$$\nSolve for x when set equal to zero", "# solve for y\nx = 4.92594\ny = 1./x\n\ny\n\n(x,y)\n\n# solve for the vector connecting (x,y) to (5,2)\ns = np.array([x-5,y-2])\ns\n\n# normalize and scale\nnm = np.linalg.norm(s)\nw = s[0]/nm+5\nz = s[1]/nm+2\n(w,z)\n\n# Test\n(w-5)**2 + (z-2)**2\n\n# Print the distance\nnp.linalg.norm([x-w,y-z])", "Of course, this solution can also be obtained by way of stochastic gradient descent (not implemented here).\nProblem #6\nShuffle a normal deck of 52 playing cards and draw three cards.\nPart (a)\nWhat is the probability that they are the same suit in increasing rank?", "def choose(n, k):\n \"\"\"\n A fast way to calculate binomial coefficients by Andrew Dalke (contrib).\n \"\"\"\n if 0 <= k <= n:\n ntok = 1\n ktok = 1\n for t in xrange(1, min(k, n - k) + 1):\n ntok *= n\n ktok *= t\n n -= 1\n return ntok // ktok\n else:\n return 0\n\n# the total number of possible triples\ntot = choose(52,3)*6\n\n# the total number of acceptable triples over the total number of triples\nfloat(choose(13,3)*4)/tot\n\n# Test\n# Count all possible acceptable triples and divide by all possible triples\ntrips = [(i,j,k) for i in range(1,12) for j in range(i+1,13) for k in range(j+1,14)]\nfloat(len(trips)*4)/tot\n\n# Examine the acceptable triples. 
Bear in mind there are four of each type\ntrips[:20]", "Part (b)\nGiven that the first two cards you drew from the deck are of different suits, what is the probability that they are all of different suits?", "# Much simpler.\n26./50", "Problem #7\nThe random variables $X$ and $Y$ have a joint pdf $p(X,Y)$ with distribution $(X,Y) \\sim \\mathcal{N}((\\mu_X,\\mu_Y)),\\Sigma)$, where $(\\mu_X,\\mu_Y) = (2, -1)$ and \n$$ \\Sigma = \n\\begin{bmatrix}\n1 & 0 \\\n0 & 2\n\\end{bmatrix}$$\nAssume throughout that $(x,y) \\sim p(X,Y)$\nPart (a)\nWhat is $\\mathbb{P} ( x \\in [0,2])$?\nNote that the covariance matrix of $p(X,Y)$ has zeros on the cross-diagonal entries. This implies that $X$ and $Y$ are independent of one another. Hence, we have that $X \\sim \\mathcal{N}(2,1)$ and $Y \\sim \\mathcal{N}(-1,2)$. \nThus, $\\mathbb{P} ( x \\in [0,2]) = \\int_0^2 \\int_{-\\infty}^\\infty p(X,Y) dY dX = \\int_0^2 p(X) dX$", "# Create given normal variables\nX = norm(loc=2,scale=1)\nY = norm(loc=-1,scale=np.sqrt(2))\n\nX.cdf(2) - X.cdf(0)\n\nY.cdf(2) - Y.cdf(0)", "Part (b)\nWhat is $\\mathbb{P} ( x \\in [0,2] \\vee y \\in [0,2])$?", "(X.cdf(2) - X.cdf(0)) + (Y.cdf(2) - Y.cdf(0)) - ((X.cdf(2) - X.cdf(0))*(Y.cdf(2) - Y.cdf(0)))", "Part (c)\nWhat is $\\mathbb{P} ( x \\in [0,2] \\wedge y \\in [0,2])$?", "(X.cdf(2) - X.cdf(0))*(Y.cdf(2) - Y.cdf(0))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marc-moreaux/Deep-Learning-classes
notebooks/Intro_to_convolutions.ipynb
mit
[ "Use of convolutions with tensorflow\nIn this notebook, you'll be using tensorflow to build a Convolutional Neural Network (CNN). \nConvolution\nBoth this notebook and this wikipedia page might help you understand what a convolution is.\nNow, if we consider two functions $f$ and $g$ taking values from $\\mathbb{Z} \\to \\mathbb{R}$ then:\n$ (f * g)[n] = \\sum_{m = -\\infty}^{+\\infty} f[m] \\cdot g[n - m] $\nIn our case, we consider the two vectors $x$ and $w$ :\n$ x = (x_1, x_2, ..., x_{n-1}, x_n) $\n$ w = (w_1, w_2) $\nAnd get : \n$ x * w = (w_1 x_1 + w_2 x_2, w_1 x_2 + w_2 x_3, ..., w_1 x_{n-1} + w_2 x_n)$\nDeep learning subtlety :\nIn most deep learning frameworks, you'll get to choose between three paddings:\n- Same: $(f*g)$ has the same shape as x (we pad the entry with zeros)\n- Valid: $(f*g)$ has the shape of x minus the shape of w plus 1 (no padding on x)\n- Causal: $(f*g)(n_t)$ does not depend on any $(n_{t+1})$\nTensorflow\n\"TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library, and also used for machine learning applications such as neural networks.[3] It is used for both research and production at Google often replacing its closed-source predecessor, DistBelief.\" - Wikipedia\nWe'll be using tensorflow to build the models we want to use. 
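The "valid" case of the sliding product written out above can be implemented directly in plain Python (a sketch for intuition; as in most deep-learning frameworks, the kernel is applied without flipping):

```python
# 'Valid' 1-D convolution: slide w across x, one dot product per position.
# Output length is len(x) - len(w) + 1, matching the 'valid' padding rule.
def conv1d_valid(x, w):
    n, k = len(x), len(w)
    return [sum(w[j] * x[i + j] for j in range(k)) for i in range(n - k + 1)]

x = [1, 2, 3, 4]
w = [1, -1]
out = conv1d_valid(x, w)
```

With this w the output is a discrete difference of x, which is exactly why small kernels act as local feature detectors.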
\nHere below, we build an AND gate with a very simple neural network :", "import tensorflow as tf\nimport numpy as np\n\ntf.reset_default_graph()\n\n# Define our Dataset\nX = np.array([[0,0],[0,1],[1,0],[1,1]])\nY = np.array([0,0,0,1]).reshape(-1,1)\n\n\n# Define the tensorflow tensors\nx = tf.placeholder(tf.float32, [None, 2], name='X') # inputs\ny = tf.placeholder(tf.float32, [None, 1], name='Y') # outputs\nW = tf.Variable(tf.zeros([2, 1]), name='W')\nb = tf.Variable(tf.zeros([1,]), name='b')\n\n# Define the model\npred = tf.nn.sigmoid(tf.matmul(x, W) + b) # Model\n\n# Define the loss\nwith tf.name_scope(\"loss\"):\n loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred) + (1-y) * tf.log(1-pred), reduction_indices=1))\n\n# Define the optimizer method you want to use\nwith tf.name_scope(\"optimizer\"):\n optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)\n\n# Include some Tensorboard visualization\nwriter_train = tf.summary.FileWriter(\"./my_model/\")\n\n\n# Start training session\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n writer_train.add_graph(sess.graph)\n \n for epoch in range(1000):\n _, c, p = sess.run([optimizer, loss, pred], feed_dict={x: X,\n y: Y})\nprint p, Y", "To visualize the graph you just created, launch tensorboard.\n$ tensorboard --logdir=./ on linux (with corresponding logdir)\n\nGet inspiration from the preceding code to build an XOR gate\nDesign a neural network with 2 layers.\n- layer1 has 2 neurons (sigmoid or tanh activation)\n- Layer2 has 1 neuron (it outputs the prediction)\nAnd train it\nIt's mandatory that you get a tensorboard visualization of your graph, try to make it look good, plz :)\nHere below I put a graph of the model you want to have (yet your weights won't be the same)", "### Code here", "Print the weights of your model\nAnd give an interpretation on what they are doing", "### Code here", "Build a CNN to predict the MNIST digits\nYou can now move to CNNs. 
You'll have to train a convolutional neural network to predict the digits from MNIST.\nYou might want to reuse some pieces of code from SNN\nYour model should have 3 layers:\n- 1st layer : 6 convolutional kernels with shape (3,3)\n- 2nd layer : 6 convolutional kernels with shape (3,3)\n- 3rd layer : Softmax layer\nTrain your model.\nExplain all you do, and why, make it lovely to read, plz o:)", "### code here", "Print the weights of your model\nAnd give an interpretation on what they are doing", "### code here", "Chose one (tell me what you chose...)\n\nShow how the gradients (show only one kernel) evolve for good and wrong prediction. (hard)\nInitialize the kernels with values that make sense for you and show how they evolve. (easy) \nWhen training is finished, show the 6+6=12 results of some convolved immages. (easy)", "### Code here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hschh86/usersong-extractor
documents/ShallowTimeExperiment.ipynb
mit
[ "I don't know where I'm going with this, but here goes.\nLet's do some experimenting!", "# module faffery\nimport sys\nsys.path.append('..')\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\n\nimport asyncio as aio\nimport asyncio.subprocess as asp\n\nimport signal\nimport time\nimport contextlib\n\nimport logging\nlogging.basicConfig()\n\nimport mido\n\nplt.rcParams['figure.figsize'] = (15.0, 5.0)\n\nlogging.getLogger('asyncio').setLevel(logging.WARNING)\n\ndef triple_open(outport):\n return aio.gather(aio.create_subprocess_exec('python', '../slurp.py', '-p', outport.name,\n stdout=asp.PIPE, stderr=asp.PIPE),\n aio.create_subprocess_exec('python', '../slurp_rtmidi.py', '-gp', outport.name,\n stdout=asp.PIPE, stderr=asp.PIPE),\n aio.create_subprocess_exec('python', '../slurp_rtmidi.py', '-P', '-gp', outport.name,\n stdout=asp.PIPE, stderr=asp.PIPE),\n )\n \n\ndef send_messages(outport, afunc):\n loop = aio.new_event_loop()\n aio.set_event_loop(loop)\n loop.set_debug(True)\n \n async def wtask():\n srp = await triple_open(outport)\n try:\n comm = aio.gather(*(p.communicate() for p in srp))\n aio.ensure_future(comm)\n await aio.sleep(0.5)\n await afunc(outport)\n await aio.sleep(0.5)\n for p in srp:\n with contextlib.suppress(ProcessLookupError):\n p.send_signal(signal.SIGINT)\n return await comm\n except aio.CancelledError:\n for p in srp:\n with contextlib.suppress(ProcessLookupError):\n p.terminate()\n raise\n\n\n try:\n coms = loop.run_until_complete(wtask())\n return coms\n except KeyboardInterrupt:\n # This cleanup code doesn't actually work 100% properly, but hey.\n tasks = aio.gather(*aio.Task.all_tasks())\n tasks.cancel()\n with contextlib.suppress(aio.CancelledError):\n loop.run_until_complete(tasks)\n finally:\n loop.close()\n\n\n\n\neport = mido.open_output('experimental', virtual=True)", "midi beat clock is 24 pulses per quarter note.\nA delay of 0.02 seconds works out to 125 quarter notes per minute, which is a reasonable 
tempo.", "async def port_send(outport):\n for x in range(500):\n outport.send(mido.Message('clock'))\n await aio.sleep(0.02)\n \n\nouts = send_messages(eport, port_send)\n\n[x[1] for x in outs]\n\nmessage_lists = [list(mido.Message.from_str(l) for l in x[0].decode().splitlines()) for x in outs]\n\n[len(l) for l in message_lists]\n\nmessage_times = [np.fromiter((m.time for m in l), float, count=500) for l in message_lists]\nst, rt, pt = message_times\n\nplt.plot(st, 'c,', rt, 'r,', pt, 'b,')\nplt.show()\n\n# rt and pt start counting at first message, st starts when process starts reading.\n# we normalise so st starts at zero\nst_norm = st - st[0]\n\n\nplt.plot(st_norm, 'c,', rt, 'r,', pt, 'b,')\nplt.show()\n\nideal = np.arange(0, 500*0.02, 0.02)\n# Ideally, messages would have been sent exactly every 0.02 seconds.\n# however, we've been sending with gaps of 0.02 seconds, and sending takes time.\n\nsrpn = np.stack([st_norm, rt, pt])\n\nfor x, c in zip(srpn-ideal, 'crb'):\n plt.plot(x, c)\nplt.show()\n\nbaserange = np.arange(500)\npolyfits = np.polyfit(baserange, srpn.T, 1).T\n\n\nplt.plot(np.polyval(polyfits[0], baserange), 'g', srpn[0], 'c')\nplt.show()\n\n# you know what, let's just combine everything into one big set and get some sort of average fit.\navfit = np.polyfit(np.tile(baserange, 3), srpn.flatten(), 1)\navline = np.polyval(avfit, baserange)\n\n\nplt.plot(st_norm, 'c.', rt, 'r.', pt, 'b.', avline, 'y', ideal, 'k')\nplt.show()\n\nsrpf = srpn - avline\n\nplt.axhline(c='grey', lw=1)\nfor x, c in zip(srpf, 'crb'):\n plt.plot(x, c)\nplt.show()\n\n# the differences.\nsrpd = np.diff(srpn)\n\nfor x, c in zip(srpd, 'crb'):\n plt.plot(x, c)\nplt.show()\n\n# Histograms!\nrge = (srpd.min(), srpd.max())\nfig, axes = plt.subplots(3, 1, sharex=True)\nfor x, c, a in zip(srpd, 'crb', axes):\n a.hist(x, color=c, bins=200, range=rge)\nplt.show()\n\nplt.hist(srpd.T, color='crb', bins=200, range=rge, histtype='step')\nplt.show()\n\nfig, axes = plt.subplots(3, 1, 
sharex=True)\nfor x, c, a in zip(srpd, 'crb', axes):\n    a.hist(x, color=c, bins=200, range=(0.0195, 0.024))\nplt.show()\n\nrpdif = rt - pt\navgt = np.mean(srpn[1:], axis=0)\nplt.axhline(c='grey', lw=1)\nplt.axhline(rpdif.mean(), ls='--', c='k')\nplt.plot(avgt, rpdif, 'm')\nplt.show()\n\n# Let's try sending a lot of messages really quickly\nQUIET_NOTE = mido.Message('note_on', channel=0, note=60, velocity=0)\nasync def rapid_send(outport):\n    for x in range(500):\n        outport.send(QUIET_NOTE)\n        await aio.sleep(0)\n\nrouts = send_messages(eport, rapid_send)\n\n[x[1] for x in routs]\n\nr_message_lists = [list(mido.Message.from_str(l) for l in x[0].decode().splitlines()) for x in routs]\n\n[len(l) for l in r_message_lists]\n\nr_message_times = [np.fromiter((m.time for m in l), float, count=500) for l in r_message_lists]\nfor x, c in zip(r_message_times, 'crb'):\n    plt.plot(x, c)\nplt.show()\n\nrdif = np.diff(r_message_times)\n\nfor x, c in zip(rdif, 'crb'):\n    plt.plot(x, c)\nplt.show()\n\nr_rpdif = r_message_times[1] - r_message_times[2]\nr_avgt = np.mean(r_message_times[1:], axis=0)\nplt.axhline(c='grey', lw=1)\nplt.axhline(r_rpdif.mean(), ls='--', c='k')\nplt.plot(r_avgt, r_rpdif, 'm.-')\nplt.show()\n\n# Histograms!\nr_rge = (rdif.min(), rdif.max())\nfig, axes = plt.subplots(3, 1, sharex=True)\nfor x, c, a in zip(rdif, 'crb', axes):\n    a.hist(x, color=c, bins=200, range=r_rge)\nplt.show()\n\nplt.hist(rdif.T, color='crb', bins=200, range=r_rge, histtype='step')\nplt.show()\n\n# what if we dump them out REALLY FAST\nasync def really_rapid_send(outport):\n    for x in range(500):\n        outport.send(QUIET_NOTE)\n        await aio.sleep(0)\n\n\nrrouts = send_messages(eport, really_rapid_send)\n\n[x[1] for x in rrouts]", "Looks like we overwhelmed the callback buffers?", "rr_message_lists = [list(mido.Message.from_str(l) for l in x[0].decode().splitlines()) for x in rrouts]\n[len(l) for l in rr_message_lists]", "... 
how did we get more than 500 messages??", "[all(x.type == 'note_on' and x.channel == 0 and x.note == 60 for x in l) for l in rr_message_lists]\n\n# what if we dump them out REALLY FAST\nasync def ascend_send(outport):\n    for x in range(5):\n        for y in range(100):\n            outport.send(mido.Message('note_on', channel=x, note=y, velocity=0))\n            aio.sleep(0)\n\n\na_outs = send_messages(eport, ascend_send)\n\n[x[1] for x in a_outs]\n\na_message_lists = [list(mido.Message.from_str(l) for l in x[0].decode().splitlines()) for x in a_outs]\n[len(l) for l in a_message_lists]\n\na_message_times = [np.fromiter((m.time for m in l), float) for l in a_message_lists]\nfor x, c in zip(a_message_times, 'crb'):\n    plt.plot(x, c)\nplt.show()\n\n\nfor x, c in zip(a_message_times, 'crb'):\n    plt.plot(np.diff(x), c)\nplt.show()", "Pretty, but I have no idea what this really means", "a_message_ctr = [np.fromiter((100*m.channel + m.note for m in l), float) for l in a_message_lists]\nfor x, c in zip(a_message_ctr, 'crb'):\n    plt.plot(x, c+',-')\nplt.show()", "Looks like the callback dropped a bunch of messages and the poll and mido-queue powered one repeated a bunch of messages.\nI guess unpredictable things happen when you overload the midi buffer", "async def pulse_send(outport):\n    for x in range(50):\n        for y in range(10):\n            outport.send(QUIET_NOTE)\n        await aio.sleep(0.01)\n\np_outs = send_messages(eport, pulse_send)\n\n[x[1] for x in p_outs]\n\np_message_lists = [list(mido.Message.from_str(l) for l in x[0].decode().splitlines()) for x in p_outs]\n[len(l) for l in p_message_lists]\n\np_times = [np.fromiter((m.time for m in l), float) for l in p_message_lists]\np_diff = np.diff(p_times)\np_rge = (p_diff.min(), p_diff.max())\nfig, axes = plt.subplots(3, 1, sharex=True)\nfor x, c, a in zip(p_diff, 'crb', axes):\n    a.hist(x, color=c, bins=200, range=p_rge)\nplt.show()\n\nfig, axes = plt.subplots(3, 1, sharex=True)\nfor x, c, a in zip(p_diff, 'crb', axes):\n    a.hist(x, color=c, bins=200, range=(6e-5, 
10e-5))\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NlGG/MachineLearning
.ipynb_checkpoints/auto_encorder_and_rnn-checkpoint.ipynb
mit
[ "In this report, we attempt two things: (1) building an autoencoder, and (2) building a recurrent neural network.\nThe goal of (1) is to build an autoencoder that can reproduce a Cobb-Douglas production function.", "%matplotlib inline\n\nimport numpy as np\nimport pylab as pl\nimport math \nfrom sympy import *\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom NN import NN", "The domain is 0≤x≤1.\n<P>The Cobb-Douglas production function is as follows.</P>\n<P>z = x_1**0.5*x_2*0.5</P>", "def example1(x_1, x_2):\n    z = x_1**0.5*x_2*0.5\n    return z\n\nfig = pl.figure()\nax = Axes3D(fig)\n\nX = np.arange(0, 1, 0.1)\nY = np.arange(0, 1, 0.1)\n\nX, Y = np.meshgrid(X, Y)\nZ = example1(X, Y)\n\nax.plot_surface(X, Y, Z, rstride=1, cstride=1)\n\npl.show()", "The NN class has already been imported from NN.py.", "nn = NN()", "Its usage is explained below.\nFirst, we use this Cobb-Douglas production function.", "x_1 = Symbol('x_1')\nx_2 = Symbol('x_2')\nf = x_1**0.5*x_2*0.5", "We run the functions that create the input, hidden, and output layers. The argument is the number of units in the layer.", "nn.set_input_layer(2)\n\nnn.set_hidden_layer(2)\n\nnn.set_output_layer(2)", "<p>nn.set_hidden_layer() also creates the hidden layer before the sigmoid transformation.</p>\n<p>set_output_layer() also creates the output layer before the sigmoid transformation, as well as the array that holds the training data.</p>\n\nnn.setup() creates the arrays that hold the weights between the input and hidden layers and between the hidden and output layers.\nnn.initialize() initializes the weights. The weights are drawn from a uniform distribution over -1/√d ≤ w ≤ 1/√d (where d is the number of units in the input or hidden layer).", "nn.setup()\nnn.initialize()", "nn.supervised_function(f, idata) creates the training data. It takes a function and sample data as arguments.", "idata = [1, 2]\n\nnn.supervised_function(f, idata)", "nn.simulate(N, eta) takes the number of updates and the learning rate as arguments. Normally one would probably use N=1, but this was built in as an extra feature. It returns the output layer after learning N times.", "nn.simulate(1, 0.1)", "nn.calculation() computes the forward pass from the input layer to the output layer without learning. It is also used inside nn.simulate().\nNext, we actually train the network. The sample data are,", "X = np.arange(0, 1, 0.2)\nY = np.arange(0, 1, 0.2)\nprint X, Y", "and all combinations of these values.", "X = np.arange(0, 1, 0.2)\nY = np.arange(0, 1, 0.2)\n\na = np.array([])\nb = np.array([])\nc = np.array([])\n\nnn = NN()\nnn.set_network()\n\nfor x in X:\n    for y in Y:\n        a = np.append(a, x)\n        b = np.append(b, y)\n    \nfor i in range(100):\n    l = np.random.choice([i for i in range(len(a))])\n    m = nn.main2(1, f, [a[l], b[l]], 0.5)\n\nfor x in X:\n    for y in Y:\n        idata = [x, y]\n        c = np.append(c, 
nn.realize(f, idata))\n\na\n\nb\n\nc", "For example, inputting (0, 0) returns 0.52328635 (that is, inputting a[0] and b[0] returns the value of c[0]).\nCross-validation is not used here.", "fig = pl.figure()\nax = Axes3D(fig)\n\nax.scatter(a, b, c)\n\npl.show()", "After repeating stochastic gradient descent 100 times, the output already looks close to the target. Let us increase the number of iterations to 10000.", "X = np.arange(0, 1, 0.2)\nY = np.arange(0, 1, 0.2)\n\na = np.array([])\nb = np.array([])\nc = np.array([])\n\nnn = NN()\nnn.set_network()\n\nfor x in X:\n    for y in Y:\n        a = np.append(a, x)\n        b = np.append(b, y)\n    \nfor i in range(10000):\n    l = np.random.choice([i for i in range(len(a))])\n    m = nn.main2(1, f, [a[l], b[l]], 0.5)\n\nfor x in X:\n    for y in Y:\n        idata = [x, y]\n        c = np.append(c, nn.realize(f, idata))\n\nfig = pl.figure()\nax = Axes3D(fig)\n\nax.scatter(a, b, c)\n\npl.show()", "Visually, it looks much closer now.\nFinally, we perform cross-validation.\nFirst, an NN with an extremely small number of training iterations.", "X = np.arange(0, 1, 0.2)\nY = np.arange(0, 1, 0.2)\n\na = np.array([])\nb = np.array([])\nc = np.array([])\n\nfor x in X:\n    for y in Y:\n        a = np.append(a, x)\n        b = np.append(b, y)\n\nevl = np.array([])\n\nfor i in range(len(a)):\n    nn = NN()\n    nn.set_network()\n    for j in range(1):\n        l = np.random.choice([i for i in range(len(a))])\n        if l != i:\n            nn.main2(1, f, [a[l], b[l]], 0.5)\n    idata = [a[i], b[i]]\n    est = nn.realize(f, idata)\n    evl = np.append(evl, math.fabs(est - nn.supervised_data))\n\nnp.average(evl)", "Next, we make the number of iterations sufficiently large (100).", "X = np.arange(0, 1, 0.2)\nY = np.arange(0, 1, 0.2)\n\na = np.array([])\nb = np.array([])\nc = np.array([])\n\nnn = NN()\nnn.set_network(h=7)\n\nfor x in X:\n    for y in Y:\n        a = np.append(a, x)\n        b = np.append(b, y)\n\nevl = np.array([])\n\nfor i in range(len(a)):\n    nn = NN()\n    nn.set_network()\n    for j in range(100):\n        l = np.random.choice([i for i in range(len(a))])\n        if l != i:\n            nn.main2(1, f, [a[l], b[l]], 0.5)\n    idata = [a[i], b[i]]\n    evl = np.append(evl, math.fabs(nn.realize(f, idata) - nn.supervised_data))\n\nnp.average(evl)", "Since this is the average error, smaller is better.\nIncreasing the number of training iterations improved the accuracy.\nFinally, we build the autoencoder. Since we found that more iterations work better, we train it 10000 times.", "nn = 
NN()\nnn.set_network()\n\nX = np.arange(0, 1, 0.05)\nY = np.arange(0, 1, 0.05)\n\na = np.array([])\nb = np.array([])\nc = np.array([])\n\nfor x in X:\n    for y in Y:\n        a = np.append(a, x)\n        b = np.append(b, y)\n\nevl = np.array([])\n\ns = [i for i in range(len(a))]\n\nfor j in range(1000):\n    l = np.random.choice(s)\n    nn.main2(1, f, [a[l], b[l]], 0.5)\n\nc = np.array([])\n\nfor i in range(len(a)):\n    idata = [a[i], b[i]]\n    c = np.append(c, nn.realize(f, idata))\n\nfig = pl.figure()\nax = Axes3D(fig)\n\nax.scatter(a, b, c)\n\npl.show()", "We can see that the function is reproduced well.\n(2) Next, we reproduce Tit for Tat as used in game theory. Two players each predict the opponent's action with an RNN and choose an action based on Tit for Tat in response to the opponent's action.", "from NN import RNN", "The first action cannot be specified by the RNN, so it is given. This initial value and the sensitivity to defection determine how the play converges.\nCooperation is denoted by 1 and defection by 0. The RNN's prediction is not an integer value, but a player cooperates in the next period with probability p=(RNN output value).\nExample 1: In period 1, player 1 cooperates and player 2 defects.", "nn1 = RNN()\nnn1.set_network()\nnn2 = RNN()\nnn2.set_network()\n\nidata1 = [[1, 0]]\nidata2 = [[0, 1]]\nsdata1 = [[0]]\nsdata2 = [[1]]\n\nfor t in range(20):\n    \n    for i in range(10):\n        nn1.main2(idata1, sdata2, 0.9)\n        nn2.main2(idata2, sdata1, 0.9)\n    \n    idata1.append([sdata1[-1][0], sdata2[-1][0]])\n    idata2.append([idata1[-1][1], idata1[-1][0]])\n    \n    n1r = nn1.realize(idata1)\n    n2r = nn2.realize(idata1)\n    sdata1.append([np.random.choice([1, 0], p=[n1r, 1-n1r])])\n    \n    sdata2.append([np.random.choice([1, 0], p=[n2r, 1-n2r])])\n    \nidata.append([sdata1[-1][0], sdata2[-1][0]])\nprint nn1.realize(idata1), nn2.realize(idata), idata1", "As the figure below shows, at first the players take turns retaliating against each other, but eventually the play converges to a state in which both defect.", "p1 = []\np2 = []\nfor i in range(len(idata1)):\n    p1.append(idata1[i][0])\nfor i in range(len(idata2)):\n    p2.append(idata2[i][0])\nplt.plot(p1, label='player1')\nplt.plot(p2, label='player2')", "Example 2: In period 1, player 1 cooperates and player 2 cooperates. However, player 2 is quite wary of being betrayed by the opponent.\nTo express this wariness we set p=(RNN output value - 0.2). If p<0, we set p=0.", "nn1 = RNN()\nnn1.set_network()\nnn2 = RNN()\nnn2.set_network()\n\nidata1 = [[1, 1]]\nidata2 = [[1, 1]]\nsdata1 = [[1]]\nsdata2 = [[1]]\n\nfor t in range(20):\n    \n    for i in range(10):\n        
nn1.main2(idata1, sdata2, 0.9)\n        nn2.main2(idata2, sdata1, 0.9)\n    \n    idata1.append([sdata1[-1][0], sdata2[-1][0]])\n    idata2.append([idata1[-1][1], idata1[-1][0]])\n    \n    n1r = nn1.realize(idata1)\n    n2r = nn2.realize(idata1)\n    \n    prob1 = n1r \n    prob2 = n2r - 0.3\n    \n    if prob2 < 0:\n        prob2 = 0\n    \n    sdata1.append([np.random.choice([1, 0], p=[prob1, 1-prob1])])\n    \n    sdata2.append([np.random.choice([1, 0], p=[prob2, 1-prob2])])\n    \nidata.append([sdata1[-1][0], sdata2[-1][0]])\nprint nn1.realize(idata1), nn2.realize(idata), idata1\n\np1 = []\np2 = []\nfor i in range(len(idata1)):\n    p1.append(idata1[i][0])\nfor i in range(len(idata2)):\n    p2.append(idata2[i][0])\nplt.plot(p1, label='player1')\nplt.plot(p2, label='player2')", "Example 3: Next, consider the case where the opponent's action cannot be observed perfectly: the opponent's action in period t becomes known in period t+1 with noise added. For example, the opponent's cooperation in period 1 is learned correctly in period 2 with probability 90%, but with probability 10% it is mistakenly reported as defection.\nHere the noise is added with probability 20%. The other conditions are the same as in Example 1.", "nn1 = RNN()\nnn1.set_network()\nnn2 = RNN()\nnn2.set_network()\n\nidata1 = [[1, 0]]\nidata2 = [[0, 1]]\nsdata1 = [[0]]\nsdata2 = [[1]]\n\nfor t in range(20):\n    \n    for i in range(10):\n        nn1.main2(idata1, sdata2, 0.9)\n        nn2.main2(idata2, sdata1, 0.9)\n    \n    idata1.append([sdata1[-1][0], np.random.choice([sdata2[-1][0], 1-sdata2[-1][0]], p=[0.8, 0.2])])\n    idata2.append([sdata2[-1][0], np.random.choice([sdata1[-1][0], 1-sdata1[-1][0]], p=[0.8, 0.2])])\n    \n    n1r = nn1.realize(idata1)\n    n2r = nn2.realize(idata1)\n    \n    prob1 = n1r \n    prob2 = n2r \n    \n    sdata1.append([np.random.choice([1, 0], p=[prob1, 1-prob1])])\n    \n    sdata2.append([np.random.choice([1, 0], p=[prob2, 1-prob2])])\n    \nidata.append([sdata1[-1][0], sdata2[-1][0]])\nprint nn1.realize(idata1), nn2.realize(idata), idata1\n\np1 = []\np2 = []\nfor i in range(len(idata1)):\n    p1.append(idata1[i][0])\nfor i in range(len(idata2)):\n    p2.append(idata2[i][0])\nplt.plot(p1, label='player1')\nplt.plot(p2, label='player2')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
srnas/barnaba
examples/example_09_cluster.ipynb
gpl-3.0
[ "%autosave 0\nfrom __future__ import print_function", "Clustering RNA structures\nWe start by clustering the structures obtained from the previous example \"example_08_snippet.ipynb\", where we extracted all fragments with sequence GNRA from the PDB of the large ribosomal subunit. \nFirst, we calculate the g-vectors for all PDB files", "import glob\nimport barnaba as bb\nimport numpy as np\n\nflist = glob.glob(\"snippet/*.pdb\")\nif(len(flist)==0):\n    print(\"# You need to run the example example8_snippet.ipynb\")\n    exit()\n    \n# calculate G-VECTORS for all files\ngvecs = []\nfor f in flist:\n    gvec,seq = bb.dump_gvec(f)\n    assert len(seq)==4\n    gvecs.extend(gvec)\n\n", "Then, we reshape the array so that it has the dimension $(N,n \ast 4\ast 4)$, where N is the number of frames and n is the number of nucleotides", "gvecs = np.array(gvecs)\ngvecs = gvecs.reshape(149,-1)\nprint(gvecs.shape)\n", "C. We project the data using a simple principal component analysis on the g-vectors", "import barnaba.cluster as cc\n# calculate PCA\nv,w = cc.pca(gvecs,nevecs=3)\nprint(\"# Cumulative explained variance of component: 1=%5.1f 2=%5.1f 3=%5.1f\" % (v[0]*100,v[1]*100,v[2]*100))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"white\")\n\nplt.scatter(w[:,0],w[:,1])\nplt.xlabel(\"PC1\")\nplt.ylabel(\"PC2\")", "D. We make use of DBSCAN in sklearn to perform clustering. The function cc.dbscan takes four arguments:\n    i. the list of G-vectors gvec\n    ii. the list of labels for each point\n    iii. the eps value\n    iv. min_samples \n    v. 
(optional) the weight of the samples for non-uniform clustering\nThe function outputs some information on the clustering: the number of clusters, the number of samples assigned to clusters (non-noise), and the silhouette score.\nFor each cluster it reports the size, the maximum eRMSD distance between samples in a cluster (IC=intra-cluster), the median intra-cluster eRMSD, and the maximum and median distance from the centroid.", "new_labels, center_idx = cc.dbscan(gvecs,range(gvecs.shape[0]),eps=0.35,min_samples=8)\n", "We can now color the PCA according to the different clusters and display each centroid as a label:", "cp = sns.color_palette(\"hls\",len(center_idx)+1)\ncolors = [cp[j-1] if(j!=0) else (0.77,0.77,0.77) for j in new_labels]\nsize = [40 if(j!=0) else 10 for j in new_labels]\n#do scatterplot\nplt.scatter(w[:,0],w[:,1],s=size,c=colors)\nfor i,k in enumerate(center_idx):\n    plt.text(w[k,0],w[k,1],str(i),ha='center',va='center',fontsize=25)", "E. We finally visualise the 4 centroids:", "import py3Dmol\n\ncluster_0 = open(flist[center_idx[0]],'r').read()\ncluster_1 = open(flist[center_idx[1]],'r').read()\ncluster_2 = open(flist[center_idx[2]],'r').read()\ncluster_3 = open(flist[center_idx[3]],'r').read()\n\np = py3Dmol.view(width=900,height=600,viewergrid=(2,2))\n#p = py3Dmol.view(width=900,height=600)\n#p.addModel(query_s,'pdb')\np.addModel(cluster_0,'pdb',viewer=(0,0))\np.addModel(cluster_1,'pdb',viewer=(0,1))\np.addModel(cluster_2,'pdb',viewer=(1,0))\np.addModel(cluster_3,'pdb',viewer=(1,1))\n\n\n#p.addModel(hit_0,'pdb',viewer=(0,1))\np.setStyle({'stick':{}})\np.setBackgroundColor('0xeeeeee')\np.zoomTo()\np.show()", "It is interesting to observe that clusters 0 and 1 (in close proximity in the PCA projection) correspond to A-form-like structures. Cluster 2 corresponds to the classic GNRA fold." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vdragan1993/serbian-document-network
SLN.ipynb
apache-2.0
[ "Serbian Legislation Network\nDescribing the Serbian legislation system as a complex network.\nData\nThe latest version of each current legal document is available at the <a href=\"http://www.pravno-informacioni-sistem.rs/SlGlasnikPortal/reg/advancedSearch\">Serbian Legal Information System website</a>. A <a href=\"https://github.com/vdragan1993/serbian-document-network/blob/master/src/crawler.py\">Crawler</a> and a <a href=\"https://github.com/vdragan1993/serbian-document-network/blob/master/src/scraper.py\">Scraper</a> were developed in order to collect all republican legislations with their ID card and list of related regulations. Only legislations with an ID card were scraped. The original collected data can be found in <a href=\"https://github.com/vdragan1993/serbian-document-network/tree/master/dataset/original_data\">dataset/original_data/</a>.\nNetwork\nIn order to create the legislation network, nodes and their links need to be created. \nNodes\nEvery collected document is a node. A list of document names extracted from their ID cards, together with custom-made id numbers, can be found in <a href=\"https://github.com/vdragan1993/serbian-document-network/blob/master/dataset/new_data/new_num_doc_sorted.txt\">dataset/new_data/</a>.\nLinks\nLinks between nodes are full-explicit references. The references extraction process for every document:\n\nConverting Serbian Cyrillic to Latin, and replacing special characters with their ASCII pairs (č -> c, š -> s...)\nTokenization\nStemming using the <a href=\"https://github.com/vdragan1993/serbian-stemmer\">Serbian Stemmer</a>\nJoining stemmed words into space-separated text\nSearching for collected document names in the joined text\nSaving each detected reference in <b>this_document_name\\t\\t\\tfound_document_name</b> format\n\nAfter this, all detected references were aggregated into one document. 
Also, another document was created by replacing document names with their custom-made id numbers.\nThe results of this process are the references by <a href=\"https://github.com/vdragan1993/serbian-document-network/blob/master/dataset/new_data/all_text_lines.txt\">document name</a> and by <a href=\"https://github.com/vdragan1993/serbian-document-network/blob/master/dataset/new_data/all_num_lines.txt\">document id</a>, and these documents form our legislation network. Also, the references detected in every document can be found in <a href=\"https://github.com/vdragan1993/serbian-document-network/tree/master/dataset/new_data/graph\">dataset/new_data/graph/</a>.\nIn order to evaluate the accuracy of this process, full-explicit references were manually detected in 10 randomly selected documents. After validation, the obtained accuracy of the references extraction process was: %.", "# imports\nimport pandas as pd\nimport codecs\nimport graphistry\nimport warnings\nimport networkx as nx\nfrom networkx.algorithms import assortativity as assort\nfrom networkx.algorithms import centrality as central\nimport matplotlib.pyplot as plt\nfrom collections import OrderedDict\nfrom operator import itemgetter\nfrom collections import Counter\nfrom itertools import islice\nimport numpy as np\nimport igraph as ig\n# setup\nwarnings.filterwarnings('ignore')\napi_key = open('API_key.txt').read()\ngraphistry.register(key=api_key)\n%matplotlib inline\n\ndef load_num_doc(file_path):\n    \"\"\"\n    Reading the num - doc dictionary for mapping document names and ids\n    \"\"\"\n    f = codecs.open(file_path, 'r', 'utf8')\n    lines = f.readlines()\n    f.close()\n    num_doc_mapper = {}\n    clean_lines = [line[:-2] for line in lines if line.endswith('\\r\\n')]\n    clean_lines.append(lines[-1])\n    for line in clean_lines:\n        number = int(line.split(',')[0])\n        text = line[len(str(number))+1:]\n        num_doc_mapper[number] = text\n    return num_doc_mapper\n\ndef sort_dictionary_by_value_asc(input_dict):\n    output_dict = OrderedDict(sorted(input_dict.items(), 
key=itemgetter(1)))\n    return output_dict\n\ndef sort_dictionary_by_value_desc(input_dict):\n    output_dict = OrderedDict(sorted(input_dict.items(), key=itemgetter(1), reverse=True))\n    return output_dict\n\n# reading graph\nedges = pd.read_csv('dataset/new_data/all_text_lines.txt', sep='\\t\\t\\t', names=['src', 'dest'])\nprint(edges.head())\n# reading igraph from file\nthis_igraph = ig.Graph.Read_Ncol('dataset/new_data/all_num_lines.txt', directed=True)\n\n# visualization using graphistry\ngraphistry.bind(source='src', destination='dest').plot(edges)", "Network Science Measures", "# reading num - name dictionary\nnum_doc_mapper = load_num_doc('dataset/new_data/new_num_doc_sorted.txt')\n\n# reading and creating network using networkx\ngraph = nx.read_edgelist('dataset/new_data/all_num_lines.txt', create_using=nx.DiGraph(), nodetype=int)\nprint(nx.info(graph))\n\n# highest degrees\nprint(\"Nodes with highest degrees: (in + out)\\n\")\ndegrees_high = sort_dictionary_by_value_desc(graph.degree())\ndegrees_high_count = Counter(degrees_high)\nfor k, v in degrees_high_count.most_common(5):\n    print('%s: %i (%i + %i)\\n' % (num_doc_mapper[k], v, graph.in_degree(k), graph.out_degree(k)))\n\n# lowest degrees\nprint(\"Nodes with lowest degrees: (in + out)\\n\")\ndeegrees_low = sort_dictionary_by_value_asc(graph.degree())\ndeegrees_low_count = islice(deegrees_low.items(), 0, 5)\nfor k, v in deegrees_low_count:\n    print('%s: %i (%i + %i)\\n' % (num_doc_mapper[k], v, graph.in_degree(k), graph.out_degree(k)))\n\n# highest in_degrees\nprint(\"Nodes with highest in degrees:\\n\")\nin_degrees_high = sort_dictionary_by_value_desc(graph.in_degree())\nin_degrees_high_count = Counter(in_degrees_high)\nfor k, v in in_degrees_high_count.most_common(5):\n    print('%s: %i\\n' % (num_doc_mapper[k], v))\n\n# highest out_degrees\nprint(\"Nodes with highest out degrees:\\n\")\nout_degrees_high = sort_dictionary_by_value_desc(graph.out_degree())\nout_degrees_high_count = Counter(out_degrees_high)\nfor k, v in 
out_degrees_high_count.most_common(5):\n print('%s: %i\\n' % (num_doc_mapper[k], v))", "The <b>diameter</b> of a graph is the maximum eccentricity of any vertex in the graph. That is, d is the greatest distance between any pair of vertices.<br/>\nFor directed graphs, the <b>graph density</b> is defined as D = |E| / (|V| (|V| - 1)) where E is the number of edges and V is the number of vertices in the graph.", "print(\"Graph diameter: {0}\".format(this_igraph.diameter()))\nprint(\"Average path length: {0}\".format(this_igraph.average_path_length()))\nprint(\"Graph density: {0}\".format(this_igraph.density()))", "Cycles\nCycle found via a directed, depth-first traversal.", "print(\"Cycles: \\n\")\ncycles = nx.algorithms.find_cycle(graph)\nfor cycle in cycles:\n print('%s --> %s\\n' % (num_doc_mapper[cycle[0]], num_doc_mapper[cycle[1]]))", "Assortativity\nThe average degree connectivity is the average nearest neighbor degree of nodes with degree k.", "print(\"Nodes with highest average degree connectivity: \\n\")\nadc_high = sort_dictionary_by_value_desc(assort.average_degree_connectivity(graph))\nadc_high_count = Counter(adc_high)\nfor k, v in adc_high_count.most_common(5):\n print('k=%i: %f\\n' % (k, v))", "Centrality\nIn graph theory and network analysis, indicators of centrality identify the most important vertices within a graph.\nDegree\nThe <b>degree centrality</b> for a node v is the fraction of nodes it is connected to.<br/>\nThe <b>in-degree centrality</b> for a node v is the fraction of nodes its incoming edges are connected to.<br/>\nThe <b>out-degree centrality</b> for a node v is the fraction of nodes its outgoing edges are connected to.", "print(\"Nodes with highest degree centrality: \\n\")\ndc_high = sort_dictionary_by_value_desc(central.degree_centrality(graph))\ndc_high_count = Counter(dc_high)\nfor k, v in dc_high_count.most_common(5):\n print('%s: %f\\n' % (num_doc_mapper[k], v))\n\nprint(\"Nodes with highest in-degree centrality: \\n\")\nidc_high 
= sort_dictionary_by_value_desc(central.in_degree_centrality(graph))\nidc_high_count = Counter(idc_high)\nfor k, v in idc_high_count.most_common(5):\n    print('%s: %f\\n' % (num_doc_mapper[k], v))\n    \nprint(\"\\n\\nNodes with highest out-degree centrality: \\n\")\nodc_high = sort_dictionary_by_value_desc(central.out_degree_centrality(graph))\nodc_high_count = Counter(odc_high)\nfor k, v in odc_high_count.most_common(5):\n    print('%s: %f\\n' % (num_doc_mapper[k], v))", "Closeness\nCloseness centrality of a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Thus the more central a node is, the closer it is to all other nodes.", "print(\"Nodes with highest closeness centrality: \\n\")\ncc_high = sort_dictionary_by_value_desc(central.closeness_centrality(graph))\ncc_high_count = Counter(cc_high)\nfor k, v in cc_high_count.most_common(5):\n    print('%s: %f\\n' % (num_doc_mapper[k], v))", "Betweenness\nComputing the shortest-path betweenness centrality for nodes. Betweenness centrality of a node v is the sum of the fraction of all-pairs shortest paths that pass through v. 
<br/>\nBetweenness centrality quantifies the number of times a node acts as a bridge along the shortest path between two other nodes.", "print(\"Nodes with lowest betweenness centrality: \\n\")\nbc_low = sort_dictionary_by_value_asc(central.betweenness_centrality(graph))\nbc_low_count = islice(bc_low.items(), 0, 5)\nfor k, v in bc_low_count:\n    print('%s: %f\\n' % (num_doc_mapper[k], v))", "Page Rank\nPageRank computes a ranking of the nodes in the graph based on the structure of the incoming links.", "# highest PR\nprint(\"Nodes with highest PageRank values: \\n\")\npr_high = sort_dictionary_by_value_desc(nx.algorithms.pagerank(graph))\npr_high_count = Counter(pr_high)\nfor k, v in pr_high_count.most_common(10):\n    print('%s: %f\\n' % (num_doc_mapper[k], v))\n\n# lowest PR\nprint(\"Nodes with lowest PageRank values: \\n\")\npr_low = sort_dictionary_by_value_asc(nx.algorithms.pagerank(graph))\npr_low_count = islice(pr_low.items(), 0, 10)\nfor k, v in pr_low_count:\n    print('%s: %f\\n' % (num_doc_mapper[k], v))", "Katz\nKatz centrality computes the centrality for a node based on the centrality of its neighbors. It is a generalization of the eigenvector centrality.", "# highest K\nprint(\"Nodes with highest Katz centrality: \\n\")\nk_high = sort_dictionary_by_value_desc(nx.katz_centrality_numpy(graph))\nk_high_count = Counter(k_high)\nfor k, v in k_high_count.most_common(10):\n    print ('%s: %f\\n' % (num_doc_mapper[k], v))\n\n# lowest Katz\nprint(\"Nodes with lowest Katz centrality: \\n\")\nk_low = sort_dictionary_by_value_asc(nx.katz_centrality_numpy(graph))\nk_low_count = islice(k_low.items(), 0, 10)\nfor k, v in k_low_count:\n    print('%s: %f\\n' % (num_doc_mapper[k], v))", "Eigenvector\nEigenvector centrality computes the centrality for a node based on the centrality of its neighbors. 
It assigns relative scores to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes.", "# highest ev\nprint(\"Nodes with highest Eigenvector centrality: \\n\")\nev_high = sort_dictionary_by_value_desc(nx.eigenvector_centrality(graph))\nev_high_count = Counter(ev_high)\nfor k, v in ev_high_count.most_common(10):\n    print('%s: %f\\n' % (num_doc_mapper[k], v))\n\n# lowest ev\nprint(\"Nodes with lowest Eigenvector centrality: \\n\")\nev_low = sort_dictionary_by_value_asc(nx.eigenvector_centrality(graph))\nev_low_count = islice(ev_low.items(),0, 10)\nfor k, v in ev_low_count:\n    print ('%s: %f\\n' % (num_doc_mapper[k], v))", "Clustering\nIn graph theory, a <b>clustering coefficient</b> is a measure of the degree to which nodes in a graph tend to cluster together. The global version of this measure was designed to give an overall indication of the clustering in the network.<br/>\nThe <b>global clustering coefficient</b> is based on triplets of nodes. A triplet consists of three connected nodes. A triangle therefore includes three closed triplets, one centered on each of the nodes. The global clustering coefficient is the number of closed triplets (or 3 x triangles) over the total number of triplets (both open and closed):<br/>\n<b>C = 3 x number of triangles / number of connected triplets of vertices = number of closed triplets / number of connected triplets of vertices</b>.", "print(\"Global Clustering Coefficient: {0}\".format(nx.transitivity(graph)))", "Community Detection\n<a href=\"https://www-complexnetworks.lip6.fr/~latapy/Publis/communities.pdf\">Latapy and Pons algorithm</a> for community detection based on random walks. 
This algorithm (Walktrap) can be adapted to directed edges and weights, and can alter resolution by walk length.\n<b>Basic idea:</b> Simulate many short random walks on the network and compute pairwise similarity measures based on these walks. Use these similarity values to aggregate vertices into communities. \n<b>Time Complexity:</b> depends on walk length, O(|V|^2 log |V|) typically.", "dendogram = this_igraph.community_walktrap(steps=10)\nclusters = dendogram.as_clustering()\nmembership = clusters.membership\nmemberships_dict = {}\nfor name, membership in zip(this_igraph.vs[\"name\"], membership):\n    if membership not in memberships_dict:\n        memberships_dict[membership] = []\n    memberships_dict[membership].append(num_doc_mapper[int(name)])\nprint(\"Total number of communities: {0}\".format(len(memberships_dict)))\n# number of members \nmemership_nums = {}\nfor k in memberships_dict:\n    memership_nums[k] = len(memberships_dict[k])\n\n# top communities\nprint(\"Communities with most members: \\n\")\nmem_num_high = sort_dictionary_by_value_desc(memership_nums)\nmem_num_high_count = Counter(mem_num_high)\nfor k, v in mem_num_high_count.most_common(5):\n    print('\\nCommunity %i with %i members:' % (k, v))\n    #print(memberships_dict[k])\n\n# bottom communities\nprint(\"Communities with least members: \\n\")\nmem_num_low = sort_dictionary_by_value_asc(memership_nums)\nmem_num_low_count = islice(mem_num_low.items(), 0, 5)\nfor k, v in mem_num_low_count:\n    print('\\nCommunity %i with %i members:' % (k, v))\n    print(memberships_dict[k])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
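The global clustering coefficient defined in the notebook above can be checked by hand. The sketch below is illustrative (pure Python, no networkx; `global_clustering` and the toy adjacency dicts are names invented here, not part of the notebook) and counts closed versus all connected triplets directly:

```python
from itertools import combinations

def global_clustering(adj):
    """Closed triplets / all connected triplets (= 3 x triangles / triplets).

    adj: dict mapping node -> set of neighbours (undirected graph).
    """
    closed = 0
    total = 0
    for node in adj:
        for u, v in combinations(adj[node], 2):
            total += 1            # a connected triplet centred on `node`
            if v in adj[u]:
                closed += 1       # triplet is closed, i.e. part of a triangle
    return closed / total if total else 0.0

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # one triangle -> C = 1.0
path = {0: {1}, 1: {0, 2}, 2: {1}}             # one open triplet -> C = 0.0
print(global_clustering(triangle), global_clustering(path))
```

For simple undirected graphs this should agree with `nx.transitivity(graph)` used in the notebook, since each triangle is counted once as a closed triplet per centre node.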
eco32i/biodata
sessions/examples/CE PCA.ipynb
mit
[ "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport csv\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA as sklearnPCA\nfrom plotnine import *", "Read in expression matrix\nmRNA-Seq from 10 individual C.elegans worms. Processed with CEL-Seq-pipeline (https://github.com/eco32i/CEL-Seq-pipeline)", "!head ../../data/CE_exp.umi.tab\n\n!tail ../../data/CE_exp.umi.tab", "Expression matrix contains read counts in genes. Columns are worms, rows are genes.", "ce = pd.read_csv('../../data/CE_exp.umi.tab', sep='\\t', skipfooter=5, engine='python')\nce", "PCA is sensitive to variable scaling. Therefore before performing the analysis we need to normalize the data. StandardScaler will transform every variable to unit scale (mean 0, variance 1). Note also that sklearn expects columns to be genes (features) and rows to be worms (samples, or observations). Therefore we transpose the matrix before doing anything.", "#ce = ce.ix[ce.ix[:,1:].mean(axis=1)>500,:]\n\nX_std = StandardScaler().fit_transform(ce.iloc[:,1:].values.T)\nX_std\n\nsklearn_pca = sklearnPCA(n_components=10)\nY_sklearn = sklearn_pca.fit_transform(X_std)\nY_sklearn", "Y_sklearn is a numpy array of the shape (num_samples, n_components) where the original X data is projected onto the number of extracted principal components\nPlot explained variance", "sklearn_pca.explained_variance_\n\nsklearn_pca.explained_variance_ratio_\n\nvdf = pd.DataFrame()\nvdf['PC'] = [(i+1) for i,x in enumerate(sklearn_pca.explained_variance_ratio_)]\nvdf['var'] = sklearn_pca.explained_variance_ratio_\n\n(ggplot(vdf, aes(x='PC', y='var'))\n + geom_point(size=5, alpha=0.4)\n + ylab('Explained variance')\n + theme(figure_size=(12,10))\n)\n\npca_df = pd.DataFrame()\npca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]\npca_df['PC1'] = Y_sklearn[:,0]\npca_df['PC2'] = Y_sklearn[:,1]\n\n(ggplot(pca_df, aes(x='PC1', y='PC2', color='sample'))\n + geom_point(size=5, alpha=0.5)\n + 
theme(figure_size=(12,10))\n)\n\npca_df = pd.DataFrame()\npca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]\npca_df['PC1'] = Y_sklearn[:,0]\npca_df['PC3'] = Y_sklearn[:,2]\n\n(ggplot(pca_df, aes(x='PC1', y='PC3', color='sample'))\n + geom_point(size=5, alpha=0.5)\n + theme(figure_size=(12,10))\n)\n\npca_df = pd.DataFrame()\npca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]\npca_df['PCA2'] = Y_sklearn[:,1]\npca_df['PCA4'] = Y_sklearn[:,3]\n\n(ggplot(pca_df, aes(x='PCA2', y='PCA4', color='sample'))\n + geom_point(size=5, alpha=0.5)\n + theme(figure_size=(12,10))\n)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
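The "standardize, then project" pipeline used in the PCA notebook above can also be sketched with plain NumPy via the SVD. This is an illustrative re-derivation on random data (not the notebook's expression matrix), assuming only NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))                   # 10 "worms" x 4 "genes"
X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # StandardScaler by hand

# SVD of the standardized data: rows of Vt are the principal axes
U, S, Vt = np.linalg.svd(X_std, full_matrices=False)
Y = X_std @ Vt.T                               # scores, the Y_sklearn analogue
var_ratio = S**2 / np.sum(S**2)                # explained variance ratios

print(Y.shape)                                 # (10, 4)
```

The ratios sum to 1 and the principal axes are orthonormal, which is what `explained_variance_ratio_` and the fitted components express in sklearn.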
jorisroovers/machinelearning-playground
datascience/NumPy.ipynb
apache-2.0
[ "NumPy introduction\nNumPy provides low-level and fast features to manipulate arrays of data (main implementation is in C).\nWhile it has some relatively advanced features like linear algebraic calculations and more, in many cases Pandas provides a more convenient high level interface to do the same things (and even more).\nIf you just want a quick overview, the following cheatsheet provides one:\n\nNumpy Arrays\nThe basic building block of numpy is the array which has a number of operations defined on it. Because of this, you don't need to write for loops to manipulate them. This is often called vectorization.", "# This is a regular python list\nrange(1,4)\n\n# If you multiply or add to it, it extends the list\n\na = range(1, 10)\na * 2\n\na = range(1,11)\na + [ 11 ]\n\n# Compare this to np.array:\nimport numpy as np\nnp.array(range(1,10))\n\n# Multiplication is defined as multiplying each element in the array\na = np.array(range(1, 10))\na * 2\n\na + 5 # Adding to it works as well, this just adds 5 to each element (note that this operation is undefined in regular python)", "ndarray is actually a multi-dimensional array", "np.array([[1,2],[3,4],[5,6]])\n\na = np.array([[1,2],[3,4],[5,6]])\na.shape, a.dtype, a.size, a.ndim # shape -> dimension sizes, dtype -> datatype, size -> total number of elems, ndim -> number of dimensions\n\n# You can use comma-separated indexing like so:\na[1,1] # same as a[1][1]\n\n# Note that 1,1 is really a tuple (the parentheses are just omitted), so this works too:\nindices = (1,1)\na[indices]\n\n# Note that regular python doesn't support this\nmylist = [[1,2],[3,4]]\n# mylist[1,1] # error!\n\n# As always, use ? 
to get details\na?", "Generating arrays\nNumpy has a number of convenience functions to initialize arrays with zeros, ones or empty values.\nOthers are identity for the identity array, and arange which is the equivalent of python's range.", "np.zeros(5)\n\nnp.ones(10)\n\nnp.empty(7) # Empty returns uninitialized garbage values (not zeroes!)\n\nnp.identity(5) # identity array\n\nnp.arange(11) # same as np.array(range(11))\n\nnp.array(range(11))", "Datatypes\nEach np.array has a datatype associated with it. You can also cast between types.", "np.array([1,2,3], dtype='float64')\n\n# Show all available types\nnp.sctypes\n\n# Consider strings\na = np.array(['12', '999', '432536'])\na.dtype", "The datatype S5 stands for Fixed String with length 5, because the longest string in the array is of length 5. Compare this to:", "np.array(['123', '21345312312'])\n\n# You can also cast between types\na.astype(np.int32) # This copies the data into a new array, it does not change the array itself!", "Slicing\nIndex manipulation (slicing) with np.arrays is actually pretty similar to how it works with regular python lists", "a = np.array(range(10, 20))\na[3:]\n\na[4:6]", "However, slices in Numpy are actually views on the original np.array which means that if you manipulate them, \nthe array changes as well.", "a[3:6] = 33\na", "Compare this to regular python:", "b = range(1, 10)\n# b[2:7] = 10 # this will raise an error\n\n# Copies need to be explicit in numpy\nb = a[3:6].copy()\nb[:] = 22 # change all values to 22\nb, a # print b and a, see that a is not modified\n\n# You can also slice multi-dimensionally\nc = np.array([[1,2,3], [4,5,6], [7,8,9]])\nc[1:,:1] # Only keep the last 2 arrays, and from them, only keep the first elements\n\n# Note how this is different from using c[1:][:1]\n# This is really doing 2 operations: first slice to keep the last 2 arrays. 
\n# This returns a new array: array([[4, 5, 6],[7, 8, 9]])\n# Then from this new array, return the first element.\nc[1:][:1]", "This picture explains NumPy's array slicing pretty well.\n\nBoolean indexing\nThere are 2 parts to boolean indexing: \n1. Apply a boolean mask to an np.array. Boolean masks are just arrays of booleans:\n[True, False, True]\n2. Creating boolean masks using boolean conditions\nApplying a boolean mask", "# A boolean mask is just a boolean array\nmask = np.array([ True, False, True ])\nmask\n\n# To apply the mask against a target, just pass it like an index.\n# The result is an array with the elements from 'target' that had True on their corresponding index in 'mask'.\ntarget = np.array([7,8,9])\ntarget[mask]\n\n# This works for multi-dimensional arrays too, but the result will obviously be a single dimensional array\n# Also, you need to make sure that the dimensions of your target and mask arrays match\ntarget2 = np.array([['a','b','c'], ['d','e','f'],['g','h','i']])\nmask2 = np.array([[False,True,False], [True, True, False], [True, False, True]])\ntarget2[mask2]", "Creating a boolean mask\nThe easiest way to create a boolean mask is to just create an array with booleans in it. However, you can also create boolean masks by applying a boolean expression to a existing array.", "numbers = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\nnumbers > 5\n\nnumbers % 2 == 0 # Even numbers", "Strings work too!", "names = np.array([\"John\", \"Mary\", \"Joe\", \"Jane\", \"Marc\", \"Jorge\", \"Adele\" ])\n\nnames == \"Joe\"", "You can combine filters using the boolean arithmetic operations | and &amp;. Note that you have to but the individual boolean expressions between parentheses at this point.", "(names == \"Joe\") | (names == \"Mary\")", "Once you have boolean mask, you can apply it to an array of the same length as a boolean mask. 
This is often useful if you want to select certain values in an array like so:", "names[names == \"Joe\"], numbers[numbers > 5]", "Universal functions\nA universal function, or ufunc, is a function that performs elementwise operations on data in ndarrays. You can think of them as fast vectorized wrappers for simple functions that take one or more scalar values and produce one or more scalar results.", "numbers = np.array([-1, -9, 18.2, 3, 4.3, 0, 5.3, -12.2])\nnumbers\n\nnp.sum(numbers), np.mean(numbers)\n\nnp.square(numbers)\n\nnp.abs(numbers)\n\nnp.sqrt(np.abs(numbers)) # Can't take sqrt of negative number, so let's get the abs values first\n\nnp.max(numbers), np.min(numbers)\n\nnp.ceil(numbers), np.floor(numbers)", "The boolean expressions that create boolean masks (see prev section) can also be expressed explicitely", "np.greater(numbers, 3)\n\n# combining with boolean arithmetic\nnp.logical_or(np.less_equal(numbers, 4), np.greater(numbers, 0))\n\nnp.sort(numbers)\n\nnp.unique(np.array([1, 2, 4, 2, 5, 1]))", "Some of these operations are also directly available on the array", "numbers.sum(), numbers.mean(), numbers.min(), numbers.max()", "File IO ##\nYou can easily store/retrieve numpy arrays from files.", "np.save(\"/tmp/myarray\", np.arange(10))\n\n# The .npy extension is automatically added\n!cat /tmp/myarray.npy\n\nnp.load(\"/tmp/myarray.npy\") # You DO need to specify the .npy extension when loading", "You can also save/load as a zip file using savez and loadz.", "np.savez(\"/tmp/myarray2\", a=np.arange(2000)) \n\nnp.load(\"/tmp/myarray2.npz\")['a'] # Loading from a npz file is lazy, you need to specify which array to load", "You can also load other file formats using loadtxt.", "!echo \"1,2,3,4\" > /tmp/numpytxtsample.txt\n!cat /tmp/numpytxtsample.txt\n\nnp.loadtxt(\"/tmp/numpytxtsample.txt\", delimiter=\",\")", "Linear Algebra\nNumpy also supports linear algebra, e.g.: matrix multiplication, determinants, etc", "x = np.array([[1,2,3],[4,5,6], 
[7,8,9]])\ny = np.array([[9,8,7],[6,5,4],[3,2,1]])\nx,y\n\n# Matrix multiplication\nnp.dot(x,y) # same as: x.dot(y)\n\n# The numpy.linalg package has a bunch of extra linear algebra functions\n# For example, the determinant (https://en.wikipedia.org/wiki/Determinant)\nfrom numpy.linalg import det\ndet(x)", "Other commonly used functions from numpy.linalg\nFunction | Description\n--------------|-------------\ndiag | Return diagonal of matrix as 1D array\ndot | Matrix multiplication\ntrace | Sum of diagonal elements\ndet | Determinant\neig | Eigenvalues and eigenvectors\ninv | Inverse of square matrix\nqr | [QR Decomposition](https://en.wikipedia.org/wiki/QR_decomposition)\nsvd | [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition)\nsolve | Solve linear system Ax=b for x, where A is a square matrix\nlstsq | Compute the least square solution to Ax=b" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
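The linear algebra table at the end of the NumPy notebook above mentions `solve` and `lstsq` without showing them; a minimal sketch of both (values chosen here for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solve Ax = b for a square, invertible A
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]

# lstsq handles the overdetermined / non-square case; on a consistent
# square system it recovers the same solution
x_ls, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ls))  # True
```

`solve` raises `LinAlgError` for singular matrices, while `lstsq` still returns the minimum-norm least-squares solution.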
JohnPHogan/FantasyFootball
rb_analysis.ipynb
bsd-2-clause
[ "| Stat Category | Point Value |\n|---------------------|---------------------------|\n|Rushing Yards | 1 point for every 10 yards|\n|Rushing TDs | 6 points |\n|Receiving Yards | 1 point for every 10 yards|\n|Receiving TDs | 6 points |\n|Fumbles Lost | -2 points |", "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib as mp\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\nimport seaborn as sns\n\nrb_games = pd.read_csv('rb_games.csv')\nrb_games.columns.values\n\nrb_games['Fantasy Points'] = ((rb_games['Rush Yds'] + rb_games['Rec Yds']) / 10) + ((rb_games['Rush TD'] + rb_games['Rec TD']) *6)\nrb_fantasy = rb_games[['Name','Career Year', 'Year', 'Game Count', 'Career Games', 'Date', 'Rec Rec', 'Rec Yds', 'Rec TD', 'Rush Att', 'Rush Yds', 'Rush TD', 'Fantasy Points']]\n\nrb_fantasy.head(10)\n\nx = rb_fantasy['Fantasy Points']\nsns.set_context('poster')\nsns.set_style(\"ticks\")\n\ng=sns.distplot(x,\n kde_kws={\"color\":\"g\",\"lw\":4,\"label\":\"KDE Estim\",\"alpha\":0.5},\n hist_kws={\"color\":\"r\",\"alpha\":0.3,\"label\":\"Freq\"})\n\n\n# remove the top and right line in graph\nsns.despine()\n\n# Set the size of the graph from here\ng.figure.set_size_inches(12,7)\n# Set the Title of the graph from here\ng.axes.set_title('RB Fantasy Point Distribution', fontsize=34,color=\"b\",alpha=0.3)\n# Set the xlabel of the graph from here\ng.set_xlabel(\"Fantasy Points\",size = 67,color=\"g\",alpha=0.5)\n# Set the ylabel of the graph from here\ng.set_ylabel(\"Density\",size = 67,color=\"r\",alpha=0.5)\n# Set the ticklabel size and color of the graph from here\ng.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nHeavily skewed distribution where the fantasy points provided by running backs are generally near 0. 
Initial thought is that this is due to generally perceived high volatility of running back careers and running backs who do not participate a great deal during a game.", "rush_att = rb_fantasy['Rush Att'].mean()\nprint('Shifting data to only include Fantasy Points when greater than %d average rushing attempts' %(rush_att))\nrb_mid_level = rb_fantasy.loc[rb_fantasy['Rush Att'] > rush_att]\nx = rb_mid_level['Fantasy Points']\nsns.set_context('poster')\nsns.set_style(\"ticks\")\n\ng=sns.distplot(x,\n kde_kws={\"color\":\"g\",\"lw\":4,\"label\":\"KDE Estim\",\"alpha\":0.5},\n hist_kws={\"color\":\"r\",\"alpha\":0.3,\"label\":\"Freq\"})\n\n\n# remove the top and right line in graph\nsns.despine()\n\n# Set the size of the graph from here\ng.figure.set_size_inches(12,7)\n# Set the Title of the graph from here\ng.axes.set_title('RB Fantasy Point Distribution \\n Shifted by Average Rushes', fontsize=34,color=\"b\",alpha=0.3)\n# Set the xlabel of the graph from here\ng.set_xlabel(\"Fantasy Points\",size = 67,color=\"g\",alpha=0.5)\n# Set the ylabel of the graph from here\ng.set_ylabel(\"Density\",size = 67,color=\"r\",alpha=0.5)\n# Set the ticklabel size and color of the graph from here\ng.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nBy restricting the data set to games where the running back averaged more than 8 rushing attempts per game, the distribution of the data is less skewed.", "rush_att = rb_mid_level['Rush Att'].mean()\nprint('Shifting data to only include Fantasy Points when greater than %d average rushing attempts' %(rush_att))\nrb_high_level = rb_mid_level.loc[rb_fantasy['Rush Att'] > rush_att]\nx = rb_high_level['Fantasy Points']\nsns.set_context('poster')\nsns.set_style(\"ticks\")\n\ng=sns.distplot(x,\n kde_kws={\"color\":\"g\",\"lw\":4,\"label\":\"KDE Estim\",\"alpha\":0.5},\n hist_kws={\"color\":\"r\",\"alpha\":0.3,\"label\":\"Freq\"})\n\n\n# remove the top and right line in 
graph\nsns.despine()\n\n# Set the size of the graph from here\ng.figure.set_size_inches(12,7)\n# Set the Title of the graph from here\ng.axes.set_title('RB Fantasy Point Distribution\\n Shifted by Average Rushes', fontsize=34,color=\"b\",alpha=0.3)\n# Set the xlabel of the graph from here\ng.set_xlabel(\"Fantasy Points\",size = 67,color=\"g\",alpha=0.5)\n# Set the ylabel of the graph from here\ng.set_ylabel(\"Density\",size = 67,color=\"r\",alpha=0.5)\n# Set the ticklabel size and color of the graph from here\ng.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nBy increasing the shift of the data so that only data for running backs that averaged 16 or more rushing attempts is included, the data is more normalized. The thought is that this is an indicator that very sparsely used players who would not generate many fantasy points are being eliminated and a truer view of what a running back can contribute is being seen.", "low_rush_att = rb_high_level['Rush Att'].mean()\nprint('Shifting data to only include Fantasy Points when greater than %d average rushing attempts' %(low_rush_att))\n\nrb_higher_level = rb_high_level.loc[rb_fantasy['Rush Att'] > low_rush_att]\nx = rb_higher_level['Fantasy Points']\nsns.set_context('poster')\nsns.set_style(\"ticks\")\n\ng=sns.distplot(x,\n kde_kws={\"color\":\"g\",\"lw\":4,\"label\":\"KDE Estim\",\"alpha\":0.5},\n hist_kws={\"color\":\"r\",\"alpha\":0.3,\"label\":\"Freq\"})\n\n\n# remove the top and right line in graph\nsns.despine()\n\n# Set the size of the graph from here\ng.figure.set_size_inches(12,7)\n# Set the Title of the graph from here\ng.axes.set_title('RB Fantasy Point Distribution by Rushes\\n Shifted by Average Rushes', fontsize=34,color=\"b\",alpha=0.3)\n# Set the xlabel of the graph from here\ng.set_xlabel(\"Fantasy Points\",size = 67,color=\"g\",alpha=0.5)\n# Set the ylabel of the graph from here\ng.set_ylabel(\"Density\",size = 67,color=\"r\",alpha=0.5)\n# Set the ticklabel size and color of the graph from 
here\ng.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nAgain, filtering the data so that only fantasy points for running backs that averaged over 21 carries a game created an even more normalized distribution of data. One data point to consider here is that averaging 21 carries a game for a 16 game season would result in 336 carries. Anecdotally, carrying more than 300 times per season is generally considered a warning flag that a player will have a shorter career.", "rush_att = rb_higher_level['Rush Att'].mean()\nprint('Shifting data to only include Fantasy Points when greater than %d average rushing attempts' %(rush_att))\n\nrb_highest_level = rb_higher_level.loc[rb_fantasy['Rush Att'] > rush_att]\nx = rb_highest_level['Fantasy Points']\nsns.set_context('poster')\nsns.set_style(\"ticks\")\n\ng=sns.distplot(x,\n kde_kws={\"color\":\"g\",\"lw\":4,\"label\":\"KDE Estim\",\"alpha\":0.5},\n hist_kws={\"color\":\"r\",\"alpha\":0.3,\"label\":\"Freq\"})\n\n\n# remove the top and right line in graph\nsns.despine()\n\n# Set the size of the graph from here\ng.figure.set_size_inches(12,7)\n# Set the Title of the graph from here\ng.axes.set_title('RB Fantasy Point Distribution by Rushes\\n Shifted by Average Rushes', fontsize=34,color=\"b\",alpha=0.3)\n# Set the xlabel of the graph from here\ng.set_xlabel(\"Fantasy Points\",size = 67,color=\"g\",alpha=0.5)\n# Set the ylabel of the graph from here\ng.set_ylabel(\"Density\",size = 67,color=\"r\",alpha=0.5)\n# Set the ticklabel size and color of the graph from here\ng.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nNow that the data is filtered to only include Fantasy Points for backs averaging 25 carries per game, there is an obvious shift in the skewing of data toward the higher end. 25 rushes per game would total 400 rushes for a year. 
This is probably not a sustainable effort for a single player.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nprint(len(rb_fantasy))\nyearly_fantasy_points = rb_fantasy.groupby(['Career Year'], as_index=False).mean()\nyearly_fantasy_points[['Career Year', 'Rush Att', 'Rush Yds', 'Rush TD', 'Rec Yds', 'Rec TD', 'Fantasy Points']]\n", "Data Analysis by year", "color = ['red']\nax = sns.barplot(x=yearly_fantasy_points['Career Year'], y=yearly_fantasy_points['Fantasy Points'], palette=color)\nsns.despine()\n# Set the size of the graph from here\nax.figure.set_size_inches(12,7)\nax.axes.set_title('Fantasy Points by Year', \n fontsize=34,color=\"b\",alpha=0.3)\nax.set_xlabel(\"Career Year\",size = 67,color=\"g\",alpha=0.5)\nax.set_ylabel(\"Fantasy Points\",size = 67,color=\"r\",alpha=0.5)\nax.tick_params(labelsize=14,labelcolor=\"black\")", "Observation\nAn unfiltered view of fantasy points by year for a running back shows them to be far less important than quarterbacks in terms of the fantasy points they can generate. This does show a trend of improvement over the first 6 years of a career, followed by a 3 year plateau and then a decline in points after that. The sharpest increase in performance is between years 1 and 2.", "color = ['blue']\nax = sns.barplot(x=yearly_fantasy_points['Career Year'], y=yearly_fantasy_points['Rush Yds'], palette=color)\nsns.despine()\n# Set the size of the graph from here\nax.figure.set_size_inches(12,7)\nax.axes.set_title('Rush Yards by Year', \n fontsize=34,color=\"b\",alpha=0.3)\nax.set_xlabel(\"Career Year\",size = 67,color=\"g\",alpha=0.5)\nax.set_ylabel(\"Rush Yds\",size = 67,color=\"r\",alpha=0.5)\nax.tick_params(labelsize=14,labelcolor=\"black\")\n\n", "Observation\nSimilar to Fantasy Point production, the sharpest increase in rushing yardage happens between the first and second years. 
After that there is a slight increase to year 6, a bit of a plateau between years 6 and 8, and then a sharp decrease after 8 years with an anomaly in year 12. Need to investigate to see if this is an outlier performer as opposed to a general trend.", "color = ['Green']\n\nax = sns.barplot(x=yearly_fantasy_points['Career Year'], y=yearly_fantasy_points['Rec Yds'], palette=color)\nsns.despine()\n# Set the size of the graph from here\nax.figure.set_size_inches(12,7)\nax.axes.set_title('Rec Yards by Year', \n fontsize=34,color=\"b\",alpha=0.3)\nax.set_xlabel(\"Career Year\",size = 67,color=\"g\",alpha=0.5)\nax.set_ylabel(\"Rec Yds\",size = 67,color=\"r\",alpha=0.5)\nax.tick_params(labelsize=14,labelcolor=\"black\")\n\n", "Observation\nThe number of yards generated by running backs by receiving the football is not necessarily significant, but there are a couple of interesting traits. The initial growth period extends from year 1 through 3, which is longer than rushing increases, and the plateau period seems to be years 3 through 8. After a steep decline after year 8, there is almost a bit of a secondary growth period. This might be interesting when coupled with the general decline in rushing yardage after year 8." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
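The scoring table at the top of the fantasy football notebook maps directly to a small helper. This is an illustrative function, not part of the notebook (note the notebook's own `Fantasy Points` column omits the -2 fumble term, presumably because the data set has no fumbles column):

```python
def fantasy_points(rush_yds, rush_td, rec_yds, rec_td, fumbles_lost=0):
    """Scoring from the table: 1 pt per 10 yards, 6 pts per TD, -2 per lost fumble."""
    return (rush_yds + rec_yds) / 10 + (rush_td + rec_td) * 6 - 2 * fumbles_lost

# 150 total yards -> 15 pts, 3 TDs -> 18 pts, 1 lost fumble -> -2 pts
print(fantasy_points(rush_yds=120, rush_td=2, rec_yds=30, rec_td=1, fumbles_lost=1))  # 31.0
```

The same expression vectorizes over pandas columns, which is exactly what the notebook's `rb_games['Fantasy Points'] = ...` line does for the non-fumble terms.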
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_04/Final/Universal.ipynb
bsd-3-clause
[ "NumPy Universal Functions\nIf the data within a DataFrame are numeric, NumPy's universal functions can be used on/with the DataFrame.", "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])\ndf2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])\nsum_df = df + df2\nsum_df", "NaNs are handled correctly by universal functions", "np.exp(sum_df)", "Transpose is available via the T attribute", "sum_df.T\n\nnp.transpose(sum_df.values)", "The dot method on DataFrame implements matrix multiplication\nNote: row and column headers", "A_df = pd.DataFrame(np.arange(15).reshape((3,5)))\nB_df = pd.DataFrame(np.arange(10).reshape((5,2)))\nA_df.dot(B_df)", "The dot method on Series implements the dot product", "C_Series = pd.Series(np.arange(5,10))\nC_Series.dot(C_Series)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
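The DataFrame `dot` examples in the notebook above delegate to the same matrix multiplication as the underlying NumPy arrays; a quick sketch of what `A_df.dot(B_df)` and `C_Series.dot(C_Series)` compute, using plain NumPy with the same values:

```python
import numpy as np

A = np.arange(15).reshape(3, 5)   # same values as A_df
B = np.arange(10).reshape(5, 2)   # same values as B_df
C = A.dot(B)                      # (3,5) @ (5,2) -> (3,2)
print(C)
# [[ 60  70]
#  [160 195]
#  [260 320]]

# the Series dot-product analogue: arange(5,10) . arange(5,10)
v = np.arange(5, 10)
print(v.dot(v))                   # 25 + 36 + 49 + 64 + 81 = 255
```

pandas adds row/column labels on top of this result; shape rules (inner dimensions must match) are identical.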
ViralLeadership/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Chapter2_MorePyMC/Chapter2.ipynb
mit
[ "Chapter 2\n\nThis chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.\nA little more on PyMC\nParent and Child relationships\nTo assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables. \n\n\nparent variables are variables that influence another variable. \n\n\nchild variables are variables that are affected by other variables, i.e. are the subject of parent variables. \n\n\nA variable can be both a parent and child. For example, consider the PyMC code below.", "import pymc as pm\n\n\nparameter = pm.Exponential(\"poisson_param\", 1)\ndata_generator = pm.Poisson(\"data_generator\", parameter)\ndata_plus_one = data_generator + 1", "parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.\nLikewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.\nThis nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.", "print \"Children of `parameter`: \"\nprint parameter.children\nprint \"\\nParents of `data_generator`: \"\nprint data_generator.parents\nprint \"\\nChildren of `data_generator`: \"\nprint data_generator.children", "Of course a child can have more than one parent, and a parent can have many children.\nPyMC Variables\nAll PyMC variables also expose a value attribute. 
This method produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:", "print \"parameter.value =\", parameter.value\nprint \"data_generator.value =\", data_generator.value\nprint \"data_plus_one.value =\", data_plus_one.value", "PyMC is concerned with two types of programming variables: stochastic and deterministic.\n\n\nstochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.\n\n\ndeterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is. \n\n\nWe will detail each below.\nInitializing Stochastic variables\nInitializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:\nsome_variable = pm.DiscreteUniform(\"discrete_uni_var\", 0, 4)\nwhere 0, 4 are the DiscreteUniform-specific lower and upper bound on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)\nThe name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.\nFor multivariable problems, rather than creating a Python array of stochastic variables, addressing the size keyword in the call to a Stochastic variable creates multivariate array of (independent) stochastic variables. 
The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays. \nThe size argument also solves the annoying case where you may have many variables $\\beta_i, \\; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:\nbeta_1 = pm.Uniform(\"beta_1\", 0, 1)\nbeta_2 = pm.Uniform(\"beta_2\", 0, 1)\n...\n\nwe can instead wrap them into a single variable:\nbetas = pm.Uniform(\"betas\", 0, 1, size=N)\n\nCalling random()\nWe can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.", "lambda_1 = pm.Exponential(\"lambda_1\", 1) # prior on first behaviour\nlambda_2 = pm.Exponential(\"lambda_2\", 1) # prior on second behaviour\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=10) # prior on behaviour change\n\nprint \"lambda_1.value = %.3f\" % lambda_1.value\nprint \"lambda_2.value = %.3f\" % lambda_2.value\nprint \"tau.value = %.3f\" % tau.value\nprint\n\nlambda_1.random(), lambda_2.random(), tau.random()\n\nprint \"After calling random() on the variables...\"\nprint \"lambda_1.value = %.3f\" % lambda_1.value\nprint \"lambda_2.value = %.3f\" % lambda_2.value\nprint \"tau.value = %.3f\" % tau.value", "The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.\nWarning: Don't update stochastic variables' values in-place.\nStraight from the PyMC docs, we quote [4]:\n\nStochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... 
The only way a stochastic variable's value should be updated is using statements of the following form:\n\n A.value = new_value\n\n\nThe following are in-place updates and should never be used:\n\n A.value += 3\n A.value[2,1] = 5\n A.value.attribute = new_attribute_value\n\nDeterministic variables\nSince most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:\n@pm.deterministic\ndef some_deterministic_var(v1=v1,):\n #jelly goes here.\n\nFor all purposes, we can treat the object some_deterministic_var as a variable and not a Python function. \nPrepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:", "type(lambda_1 + lambda_2)", "The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\\lambda$ looked like: \n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\nAnd in PyMC code:", "import numpy as np\nn_data_points = 5 # in CH1 we had ~70 data points\n\n\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_data_points)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after tau is lambda2\n return out", "Clearly, if $\\tau, \\lambda_1$ and $\\lambda_2$ are known, then $\\lambda$ is known completely, hence it is a deterministic variable. 
\nInside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:\n@pm.deterministic\ndef some_deterministic(stoch=some_stochastic_var):\n return stoch.value**2\n\nwill return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable. \nNotice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values. \nIncluding observations in the Model\nAt this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like \"What does my prior distribution of $\\lambda_1$ look like?\"", "%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nfrom matplotlib import pyplot as plt\nfigsize(12.5, 4)\n\n\nsamples = [lambda_1.random() for i in range(20000)]\nplt.hist(samples, bins=70, normed=True, histtype=\"stepfilled\")\nplt.title(\"Prior distribution for $\\lambda_1$\")\nplt.xlim(0, 8);", "To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model. \nPyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an Numpy array for speed). 
For example:", "data = np.array([10, 5])\nfixed_variable = pm.Poisson(\"fxd\", 1, value=data, observed=True)\nprint \"value: \", fixed_variable.value\nprint \"calling .random()\"\nfixed_variable.random()\nprint \"value: \", fixed_variable.value", "This is how we include data into our models: initializing a stochastic variable to have a fixed value. \nTo complete our text message example, we fix the PyMC variable observations to the observed dataset.", "# We're using some fake data here\ndata = np.array([10, 25, 15, 20, 35])\nobs = pm.Poisson(\"obs\", lambda_, value=data, observed=True)\nprint obs.value", "Finally...\nWe wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)", "model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])", "Modeling approaches\nA good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset. \nIn the last chapter we investigated text message data. We begin by asking how our observations may have been generated:\n\n\nWe started by thinking \"what is the best random variable to describe this count data?\" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.\n\n\nNext, we think, \"Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?\" Well, the Poisson distribution has a parameter $\\lambda$. \n\n\nDo we know $\\lambda$? No. In fact, we have a suspicion that there are two $\\lambda$ values, one for the earlier behaviour and one for the latter behaviour. 
We don't know when the behaviour switches, though; call the switchpoint $\\tau$.\n\n\nWhat is a good distribution for the two $\\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well, the exponential distribution has a parameter too, call it $\\alpha$.\n\n\nDo we know what the parameter $\\alpha$ might be? No. At this point, we could continue and assign a distribution to $\\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\\lambda$, (\"it probably changes over time\", \"it's likely between 10 and 30\", etc.), we don't really have any strong beliefs about $\\alpha$. So it's best to stop here. \nWhat is a good value for $\\alpha$ then? We think that the $\\lambda$s are between 10-30, so if we set $\\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high $\\alpha$ misses our prior belief as well. A good way for $\\alpha$ to reflect our belief is to set its value so that the mean of $\\lambda$, given $\\alpha$, is equal to our observed mean. This was shown in the last chapter.\n\n\nWe have no expert opinion of when $\\tau$ might have occurred. So we will suppose $\\tau$ is from a discrete uniform distribution over the entire timespan.\n\n\nBelow we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library)\n<img src=\"http://i.imgur.com/7J30oCG.png\" width = 700/>\nPyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:\n\nProbabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. 
But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.\n\nSame story; different ending.\nInterestingly, we can create new datasets by retelling the story.\nFor example, if we reverse the above steps, we can simulate a possible realization of the dataset.\n1. Specify when the user's behaviour switches by sampling from $\\text{DiscreteUniform}(0, 80)$:", "tau = pm.rdiscrete_uniform(0, 80)\nprint tau", "2. Draw $\\lambda_1$ and $\\lambda_2$ from an $\\text{Exp}(\\alpha)$ distribution:", "alpha = 1. / 20.\nlambda_1, lambda_2 = pm.rexponential(alpha, 2)\nprint lambda_1, lambda_2", "3. For days before $\\tau$, represent the user's received SMS count by sampling from $\\text{Poi}(\\lambda_1)$, and sample from $\\text{Poi}(\\lambda_2)$ for days after $\\tau$. For example:", "data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]", "4. Plot the artificial dataset:", "plt.bar(np.arange(80), data, color=\"#348ABD\")\nplt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Artificial dataset\")\nplt.xlim(0, 80)\nplt.legend();", "It is okay that our fictional dataset does not look like our observed dataset: the probability that it would is incredibly small. PyMC's engine is designed to find good parameters, $\\lambda_i, \\tau$, that maximize this probability. \nThe ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:", "def plot_artificial_sms_dataset():\n tau = pm.rdiscrete_uniform(0, 80)\n alpha = 1. 
/ 20.\n lambda_1, lambda_2 = pm.rexponential(alpha, 2)\n data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]\n plt.bar(np.arange(80), data, color=\"#348ABD\")\n plt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\n plt.xlim(0, 80)\n\nfigsize(12.5, 5)\nplt.title(\"More example of artificial datasets\")\nfor i in range(1, 5):\n plt.subplot(4, 1, i)\n plot_artificial_sms_dataset()", "Later we will see how we use this to make predictions and test the appropriateness of our models.\nExample: Bayesian A/B testing\nA/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results. \nSimilarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards. \nOften, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a \"Z-score\" and even more confusing \"p-values\" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural. 
\nA Simple Case\nAs this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \\lt p_A \\lt 1$ probability that users who are shown site A eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us. \nSuppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \\frac{n}{N}$. Unfortunately, the observed frequency $\\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\\frac{1}{6}$. Knowing the true frequency of events like:\n\nfraction of users who make purchases, \nfrequency of social attributes, \npercent of internet users with cats etc. \n\nis a common request we make of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.\nThe observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.\nWith respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. \nTo set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? 
For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:", "import pymc as pm\n\n# The parameters are the bounds of the Uniform.\np = pm.Uniform('p', lower=0, upper=1)", "Had we had stronger beliefs, we could have expressed them in the prior above.\nFor this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\\ \\sim \\text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.", "# set constants\np_true = 0.05 # remember, this is unknown.\nN = 1500\n\n# sample N Bernoulli random variables from Ber(0.05).\n# each random variable has a 0.05 chance of being a 1.\n# this is the data-generation step\noccurrences = pm.rbernoulli(p_true, N)\n\nprint occurrences # Remember: Python treats True == 1, and False == 0\nprint occurrences.sum()", "The observed frequency is:", "# Occurrences.mean is equal to n/N.\nprint \"What is the observed frequency in Group A? %.4f\" % occurrences.mean()\nprint \"Does this equal the true frequency? 
%s\" % (occurrences.mean() == p_true)", "We combine the observations into the PyMC observed variable, and run our inference algorithm:", "# include the observations, which are Bernoulli\nobs = pm.Bernoulli(\"obs\", p, value=occurrences, observed=True)\n\n# To be explained in chapter 3\nmcmc = pm.MCMC([p, obs])\nmcmc.sample(18000, 1000)", "We plot the posterior distribution of the unknown $p_A$ below:", "figsize(12.5, 4)\nplt.title(\"Posterior distribution of $p_A$, the true effectiveness of site A\")\nplt.vlines(p_true, 0, 90, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.hist(mcmc.trace(\"p\")[:], bins=25, histtype=\"stepfilled\", normed=True)\nplt.legend()", "Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.\nA and B Together\nA similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. 
(We'll assume for this exercise that $p_B = 0.04$, so $\\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )", "import pymc as pm\nfigsize(12, 4)\n\n# these two quantities are unknown to us.\ntrue_p_A = 0.05\ntrue_p_B = 0.04\n\n# notice the unequal sample sizes -- no problem in Bayesian analysis.\nN_A = 1500\nN_B = 750\n\n# generate some observations\nobservations_A = pm.rbernoulli(true_p_A, N_A)\nobservations_B = pm.rbernoulli(true_p_B, N_B)\nprint \"Obs from Site A: \", observations_A[:30].astype(int), \"...\"\nprint \"Obs from Site B: \", observations_B[:30].astype(int), \"...\"\n\nprint observations_A.mean()\nprint observations_B.mean()\n\n# Set up the pymc model. Again assume Uniform priors for p_A and p_B.\np_A = pm.Uniform(\"p_A\", 0, 1)\np_B = pm.Uniform(\"p_B\", 0, 1)\n\n\n# Define the deterministic delta function. This is our unknown of interest.\n@pm.deterministic\ndef delta(p_A=p_A, p_B=p_B):\n return p_A - p_B\n\n# Set of observations, in this case we have two observation datasets.\nobs_A = pm.Bernoulli(\"obs_A\", p_A, value=observations_A, observed=True)\nobs_B = pm.Bernoulli(\"obs_B\", p_B, value=observations_B, observed=True)\n\n# To be explained in chapter 3.\nmcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])\nmcmc.sample(20000, 1000)", "Below we plot the posterior distributions for the three unknowns:", "p_A_samples = mcmc.trace(\"p_A\")[:]\np_B_samples = mcmc.trace(\"p_B\")[:]\ndelta_samples = mcmc.trace(\"delta\")[:]\n\nfigsize(12.5, 10)\n\n# histogram of posteriors\n\nax = plt.subplot(311)\n\nplt.xlim(0, .1)\nplt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_A$\", color=\"#A60628\", normed=True)\nplt.vlines(true_p_A, 0, 80, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Posterior distributions of $p_A$, $p_B$, and delta unknowns\")\n\nax = plt.subplot(312)\n\nplt.xlim(0, 
.1)\nplt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_B$\", color=\"#467821\", normed=True)\nplt.vlines(true_p_B, 0, 80, linestyle=\"--\", label=\"true $p_B$ (unknown)\")\nplt.legend(loc=\"upper right\")\n\nax = plt.subplot(313)\nplt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of delta\", color=\"#7A68A6\", normed=True)\nplt.vlines(true_p_A - true_p_B, 0, 60, linestyle=\"--\",\n label=\"true delta (unknown)\")\nplt.vlines(0, 0, 60, color=\"black\", alpha=0.2)\nplt.legend(loc=\"upper right\");", "Notice that as a result of N_B &lt; N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$. \nWith respect to the posterior distribution of $\\text{delta}$, we can see that the majority of the distribution is above $\\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:", "# Count the number of samples less than 0, i.e. the area under the curve\n# before 0, which represents the probability that site A is worse than site B.\nprint \"Probability site A is WORSE than site B: %.3f\" % \\\n (delta_samples < 0).mean()\n\nprint \"Probability site A is BETTER than site B: %.3f\" % \\\n (delta_samples > 0).mean()
Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.\nI hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation. \nAn algorithm for human deceit\nSocial data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals \"Have you ever cheated on a test?\" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is less than your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit \"Yes\" to cheating when in fact they hadn't cheated). \nTo present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.\nThe Binomial Distribution\nThe binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. 
The mass distribution looks like:\n$$P( X = k ) = {{N}\\choose{k}} p^k(1-p)^{N-k}$$\nIf $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \\sim \\text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \\le X \\le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.", "figsize(12.5, 4)\n\nimport scipy.stats as stats\nbinomial = stats.binom\n\nparameters = [(10, .4), (10, .9)]\ncolors = [\"#348ABD\", \"#A60628\"]\n\nfor i in range(2):\n N, p = parameters[i]\n _x = np.arange(N + 1)\n plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],\n edgecolor=colors[i],\n alpha=0.6,\n label=\"$N$: %d, $p$: %.1f\" % (N, p),\n linewidth=3)\n\nplt.legend(loc=\"upper left\")\nplt.xlim(0, 10.5)\nplt.xlabel(\"$k$\")\nplt.ylabel(\"$P(X = k)$\")\nplt.title(\"Probability mass distributions of binomial random variables\");", "The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \\sim \\text{Binomial}(N, p )$.\nThe expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.\nExample: Cheating among students\nWe will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ \"Yes I did cheat\" answers. 
We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$. \nThis is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:\n\nIn the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n\nI call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars. \nSuppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. 
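Before building either PyMC version, the mechanism can be sanity-checked with a plain NumPy simulation (the true cheating rate below is a made-up value): half the students answer honestly and half say "Yes" with probability 1/2, so for a true cheating proportion $p$ the "Yes" proportion should settle near $p/2 + 1/4$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_yes_proportion(p_true, n_students=200_000):
    cheated = rng.random(n_students) < p_true   # hidden truth per student
    first = rng.random(n_students) < 0.5        # first coin heads: answer honestly
    second = rng.random(n_students) < 0.5       # second coin heads: say "Yes" regardless
    answered_yes = np.where(first, cheated, second)
    return answered_yes.mean()

p = 0.3  # hypothetical true cheating rate
print(simulate_yes_proportion(p))  # close to 0.3/2 + 0.25 = 0.40
```

The many students here are just to make the Monte Carlo average tight; the interview study itself only has 100.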
In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\\text{Uniform}(0,1)$ prior.", "import pymc as pm\n\nN = 100\np = pm.Uniform(\"freq_cheating\", 0, 1)", "Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.", "true_answers = pm.Bernoulli(\"truths\", p, size=N)", "If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.", "first_coin_flips = pm.Bernoulli(\"first_flips\", 0.5, size=N)\nprint first_coin_flips.value", "Although not everyone flips a second time, we can still model the possible realization of second coin-flips:", "second_coin_flips = pm.Bernoulli(\"second_flips\", 0.5, size=N)", "Using these variables, we can return a possible realization of the observed proportion of \"Yes\" responses. We do this using a PyMC deterministic variable:", "@pm.deterministic\ndef observed_proportion(t_a=true_answers,\n fc=first_coin_flips,\n sc=second_coin_flips):\n\n observed = fc * t_a + (1 - fc) * sc\n return observed.sum() / float(N)", "The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), produces a proportion.", "observed_proportion.value", "Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 \"Yes\" responses. 
To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a \"Yes\" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be \"Yes\". \nThe researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:", "X = 35\n\nobservations = pm.Binomial(\"obs\", N, observed_proportion, observed=True,\n value=X)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([p, true_answers, first_coin_flips,\n second_coin_flips, observed_proportion, observations])\n\n# To be explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 15000)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)\nplt.xlim(0, 1)\nplt.legend();", "With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency? \nI would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. 
Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters. \nThis kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful. \nAlternative PyMC Model\nGiven a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes: \n\\begin{align}\nP(\\text{\"Yes\"}) &= P( \\text{Heads on first coin} )P( \\text{cheater} ) + P( \\text{Tails on first coin} )P( \\text{Heads on second coin} ) \\\\\n& = \\frac{1}{2}p + \\frac{1}{2}\\frac{1}{2}\\\\\n& = \\frac{p}{2} + \\frac{1}{4}\n\\end{align}\nThus, knowing $p$ we know the probability a student will respond \"Yes\". In PyMC, we can create a deterministic function to evaluate the probability of responding \"Yes\", given $p$:", "p = pm.Uniform(\"freq_cheating\", 0, 1)\n\n\n@pm.deterministic\ndef p_skewed(p=p):\n return 0.5 * p + 0.25", "I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake. \nIf we know the probability of respondents saying \"Yes\", which is p_skewed, and we have $N=100$ students, the number of \"Yes\" responses is a binomial random variable with parameters N and p_skewed.\nThis is where we include our observed 35 \"Yes\" responses. 
In the declaration of the pm.Binomial, we include value = 35 and observed = True.", "yes_responses = pm.Binomial(\"number_cheaters\", 100, p_skewed,\n value=35, observed=True)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([yes_responses, p_skewed, p])\n\n# To Be Explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(25000, 2500)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)\nplt.xlim(0, 1)\nplt.legend();", "More PyMC Tricks\nProtip: Lighter deterministic variables with Lambda class\nSometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example, \nbeta = pm.Normal(\"coefficients\", 0, 1, size=(N, 1))\nx = np.random.randn(N, 1)\nlinear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))\n\nProtip: Arrays of PyMC variables\nThere is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:", "N = 10\nx = np.empty(N, dtype=object)\nfor i in range(0, N):\n x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)", "The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:\nExample: Challenger Space Shuttle Disaster <span id=\"challenger\"/>\nOn January 28, 1986, the twenty-fifth flight of the U.S. 
space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):", "figsize(12.5, 3.5)\nnp.set_printoptions(precision=3, suppress=True)\nchallenger_data = np.genfromtxt(\"data/challenger_data.csv\", skip_header=1,\n usecols=[1, 2], missing_values=\"NA\",\n delimiter=\",\")\n# drop the NA values\nchallenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]\n\n# plot it, as a function of temperature (the first column)\nprint \"Temp (F), O-Ring failure?\"\nprint challenger_data\n\nplt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color=\"k\",\n alpha=0.5)\nplt.yticks([0, 1])\nplt.ylabel(\"Damage Incident?\")\nplt.xlabel(\"Outside temperature (Fahrenheit)\")\nplt.title(\"Defects of the Space Shuttle O-Rings vs temperature\")", "It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask \"At temperature $t$, what is the probability of a damage incident?\". 
The goal of this example is to answer that question.\nWe need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.\n$$p(t) = \\frac{1}{ 1 + e^{ \\;\\beta t } } $$\nIn this model, $\\beta$ is the variable we are uncertain about. Below is the function plotted for $\\beta = 1, 3, -5$.", "figsize(12, 3)\n\n\ndef logistic(x, beta):\n return 1.0 / (1.0 + np.exp(beta * x))\n\nx = np.linspace(-4, 4, 100)\nplt.plot(x, logistic(x, 1), label=r\"$\\beta = 1$\")\nplt.plot(x, logistic(x, 3), label=r\"$\\beta = 3$\")\nplt.plot(x, logistic(x, -5), label=r\"$\\beta = -5$\")\nplt.legend();", "But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:\n$$p(t) = \\frac{1}{ 1 + e^{ \\;\\beta t + \\alpha } } $$\nSome plots are below, with differing $\\alpha$.", "def logistic(x, beta, alpha=0):\n return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))\n\nx = np.linspace(-4, 4, 100)\n\nplt.plot(x, logistic(x, 1), label=r\"$\\beta = 1$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, 3), label=r\"$\\beta = 3$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, -5), label=r\"$\\beta = -5$\", ls=\"--\", lw=1)\n\nplt.plot(x, logistic(x, 1, 1), label=r\"$\\beta = 1, \\alpha = 1$\",\n color=\"#348ABD\")\nplt.plot(x, logistic(x, 3, -2), label=r\"$\\beta = 3, \\alpha = -2$\",\n color=\"#A60628\")\nplt.plot(x, logistic(x, -5, 7), label=r\"$\\beta = -5, \\alpha = 7$\",\n color=\"#7A68A6\")\n\nplt.legend(loc=\"lower left\");", "Adding a constant term $\\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).\nLet's start modeling this in PyMC. 
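A quick numerical check (with made-up coefficients) confirms the shift interpretation: the biased curve $1/(1 + e^{\beta t + \alpha})$ is exactly the unbiased logistic evaluated at $t + \alpha/\beta$, which is why $\alpha$ acts as a horizontal shift.

```python
import numpy as np

def logistic(x, beta, alpha=0):
    # same form as the logistic above, for scalar beta
    return 1.0 / (1.0 + np.exp(beta * x + alpha))

x = np.linspace(-4, 4, 100)
beta, alpha = 3.0, -2.0  # hypothetical coefficients
shifted = logistic(x + alpha / beta, beta)  # shift the input instead of adding a bias
assert np.allclose(logistic(x, beta, alpha), shifted)
```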
The $\\beta, \\alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.\nNormal distributions\nA Normal random variable, denoted $X \\sim N(\\mu, 1/\\tau)$, has a distribution with two parameters: the mean, $\\mu$, and the precision, $\\tau$. Those familiar with the Normal distribution have probably already seen $\\sigma^2$ instead of $\\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\\tau$ is always positive. \nThe probability density function of a $N( \\mu, 1/\\tau)$ random variable is:\n$$ f(x | \\mu, \\tau) = \\sqrt{\\frac{\\tau}{2\\pi}} \\exp\\left( -\\frac{\\tau}{2} (x-\\mu)^2 \\right) $$\nWe plot some different density functions below.", "import scipy.stats as stats\n\nnor = stats.norm\nx = np.linspace(-8, 7, 150)\nmu = (-2, 0, 3)\ntau = (.7, 1, 2.8)\ncolors = [\"#348ABD\", \"#A60628\", \"#7A68A6\"]\nparameters = zip(mu, tau, colors)\n\n# scipy's scale parameter is the standard deviation, i.e. 1/sqrt(tau)\nfor _mu, _tau, _color in parameters:\n    plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),\n             label=\"$\\mu = %d,\\;\\\\tau = %.1f$\" % (_mu, _tau), color=_color)\n    plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,\n                     alpha=.33)\n\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"density function at $x$\")\nplt.title(\"Probability distribution of three different Normal random \\\nvariables\");", "A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\\mu$. 
In fact, the expected value of a Normal is equal to its $\\mu$ parameter:\n$$ E[ X | \\mu, \\tau] = \\mu$$\nand its variance is equal to the inverse of $\\tau$:\n$$Var( X | \\mu, \\tau ) = \\frac{1}{\\tau}$$\nBelow we continue our modeling of the Challenger spacecraft:", "import pymc as pm\n\ntemperature = challenger_data[:, 0]\nD = challenger_data[:, 1]  # defect or not?\n\n# notice the `value` here. We explain why below.\nbeta = pm.Normal(\"beta\", 0, 0.001, value=0)\nalpha = pm.Normal(\"alpha\", 0, 0.001, value=0)\n\n\n@pm.deterministic\ndef p(t=temperature, alpha=alpha, beta=beta):\n    return 1.0 / (1. + np.exp(beta * t + alpha))", "We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:\n$$ \\text{Defect Incident, $D_i$} \\sim \\text{Ber}( \\;p(t_i)\\; ), \\;\\; i=1..N$$\nwhere $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. 
It is simply a computational caveat in PyMC.", "p.value\n\n# connect the probabilities in `p` with our observations through a\n# Bernoulli random variable.\nobserved = pm.Bernoulli(\"bernoulli_obs\", p, value=D, observed=True)\n\nmodel = pm.Model([observed, beta, alpha])\n\n# Mysterious code to be explained in Chapter 3\nmap_ = pm.MAP(model)\nmap_.fit()\nmcmc = pm.MCMC(model)\nmcmc.sample(120000, 100000, 2)", "We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\\alpha$ and $\\beta$:", "alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d\nbeta_samples = mcmc.trace('beta')[:, None]\n\nfigsize(12.5, 6)\n\n# histogram of the samples:\nplt.subplot(211)\nplt.title(r\"Posterior distributions of the variables $\\alpha, \\beta$\")\nplt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\beta$\", color=\"#7A68A6\", normed=True)\nplt.legend()\n\nplt.subplot(212)\nplt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\alpha$\", color=\"#A60628\", normed=True)\nplt.legend();", "All samples of $\\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\\beta = 0$, implying that temperature has no effect on the probability of defect. \nSimilarly, all $\\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\\alpha$ is significantly less than 0. \nRegarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected). \nNext, let's look at the expected probability for a specific value of the temperature. 
That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.", "t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]\np_t = logistic(t.T, beta_samples, alpha_samples)\n\nmean_prob_t = p_t.mean(axis=0)\n\nfigsize(12.5, 4)\n\nplt.plot(t, mean_prob_t, lw=3, label=\"average posterior \\nprobability \\\nof defect\")\nplt.plot(t, p_t[0, :], ls=\"--\", label=\"realization from posterior\")\nplt.plot(t, p_t[-2, :], ls=\"--\", label=\"realization from posterior\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.title(\"Posterior expected value of probability of defect; \\\nplus realizations\")\nplt.legend(loc=\"lower left\")\nplt.ylim(-0.1, 1.1)\nplt.xlim(t.min(), t.max())\nplt.ylabel(\"probability\")\nplt.xlabel(\"temperature\");", "Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.\nAn interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.", "from scipy.stats.mstats import mquantiles\n\n# vectorized bottom and top 2.5% quantiles for \"confidence interval\"\nqs = mquantiles(p_t, [0.025, 0.975], axis=0)\nplt.fill_between(t[:, 0], *qs, alpha=0.7,\n color=\"#7A68A6\")\n\nplt.plot(t[:, 0], qs[0], label=\"95% CI\", color=\"#7A68A6\", alpha=0.7)\n\nplt.plot(t, mean_prob_t, lw=1, ls=\"--\", color=\"k\",\n label=\"average posterior \\nprobability of defect\")\n\nplt.xlim(t.min(), t.max())\nplt.ylim(-0.02, 1.02)\nplt.legend(loc=\"lower left\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.xlabel(\"temp, $t$\")\n\nplt.ylabel(\"probability estimate\")\nplt.title(\"Posterior probability estimates given temp. 
$t$\");", "The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.\nMore generally, we can see that as the temperature nears 60 degrees, the CIs spread out over [0,1] quickly. As we pass 70 degrees, the CIs tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 degrees to get a better estimate of probabilities in that range. Similarly, when reporting your estimates to scientists, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.\nWhat about the day of the Challenger disaster?\nOn the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.", "figsize(12.5, 2.5)\n\nprob_31 = logistic(31, beta_samples, alpha_samples)\n\nplt.xlim(0.995, 1)\nplt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')\nplt.title(\"Posterior distribution of probability of defect, given $t = 31$\")\nplt.xlabel(\"probability of defect occurring in O-ring\");", "Is our model appropriate?\nThe skeptical reader will say \"You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?\" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\\; \\forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. 
On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.\nWe can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which if we recall is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then likely our model does not accurately represent the observed data. \nPreviously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable that is exactly the same as our variable that stored the observations, but minus the observations themselves. 
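Stripped of the PyMC machinery, the posterior-predictive idea is simply: for each posterior draw of $(\\alpha, \\beta)$, compute the per-temperature defect probabilities and flip the corresponding Bernoulli coins. A schematic, self-contained NumPy sketch of that mechanism (the $(\\alpha, \\beta)$ draws and temperatures below are made-up stand-ins, not values taken from the MCMC traces):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in posterior draws -- made-up values, NOT the fitted MCMC traces
alpha_draws = rng.normal(-15.0, 1.0, size=500)
beta_draws = rng.normal(0.25, 0.02, size=500)
temps = np.array([66.0, 70.0, 75.0])

# probs has shape (n_draws, n_temperatures); same logistic form as the text
probs = 1.0 / (1.0 + np.exp(beta_draws[:, None] * temps + alpha_draws[:, None]))

# one simulated defect/no-defect dataset per posterior draw
fake_data = rng.random(probs.shape) < probs
assert fake_data.shape == (500, 3)
```

Each row of `fake_data` is one plausible dataset; comparing the spread of such rows to the observed defects is exactly what the PyMC code below does.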
If you recall, our Stochastic variable that stored our observed data was:\nobserved = pm.Bernoulli( \"bernoulli_obs\", p, value=D, observed=True)\n\nHence we create:\nsimulated = pm.Bernoulli(\"bernoulli_sim\", p)\n\nLet's simulate 10,000 samples:", "simulated = pm.Bernoulli(\"bernoulli_sim\", p)\nN = 10000\n\nmcmc = pm.MCMC([simulated, alpha, beta, observed])\nmcmc.sample(N)\n\nfigsize(12.5, 5)\n\nsimulations = mcmc.trace(\"bernoulli_sim\")[:]\nprint simulations.shape\n\nplt.title(\"Simulated dataset using posterior parameters\")\nfigsize(12.5, 6)\nfor i in range(4):\n    ax = plt.subplot(4, 1, i + 1)\n    plt.scatter(temperature, simulations[1000 * i, :], color=\"k\",\n                s=50, alpha=0.6)", "Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).\nWe wish to assess how good our model is. \"Good\" is a subjective term of course, so results must be relative to other models. \nWe will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating than p-value tests [7]. We agree.\nThe following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots [8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.\nFor each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. 
For example, for the model we used above:", "posterior_probability = simulations.mean(axis=0)\nprint \"posterior prob of defect | realized defect \"\nfor i in range(len(D)):\n    print \"%.2f | %d\" % (posterior_probability[i], D[i])", "Next we sort the values by the posterior probabilities:", "ix = np.argsort(posterior_probability)\nprint \"prob | defect \"\nfor i in range(len(D)):\n    print \"%.2f | %d\" % (posterior_probability[ix[i]], D[ix[i]])", "We can present the above data better in a figure: I've wrapped this up into a separation_plot function.", "from separation_plot import separation_plot\n\n\nfigsize(11., 1.5)\nseparation_plot(posterior_probability, D)", "The snaking line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denotes non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions. \nThe black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.\nIt is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:\n\nthe perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.\na completely random model, which predicts random probabilities regardless of temperature.\na constant model: where $P(D = 1 \\; | \\; t) = c, \\;\\; \\forall t$. 
The best choice for $c$ is the observed frequency of defects, in this case 7/23.", "figsize(11., 1.25)\n\n# Our temperature-dependent model\nseparation_plot(posterior_probability, D)\nplt.title(\"Temperature-dependent model\")\n\n# Perfect model\n# i.e. the probability of defect is equal to whether a defect occurred or not.\np = D\nseparation_plot(p, D)\nplt.title(\"Perfect model\")\n\n# random predictions\np = np.random.rand(23)\nseparation_plot(p, D)\nplt.title(\"Random model\")\n\n# constant model\nconstant_prob = 7. / 23 * np.ones(23)\nseparation_plot(constant_prob, D)\nplt.title(\"Constant-prediction model\")", "In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.\nIn the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.\nExercises\n1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50? \n2. Try plotting $\\alpha$ samples versus $\\beta$ samples. Why might the resulting plot look like this?", "# type your code here.\nfigsize(12.5, 4)\n\nplt.scatter(alpha_samples, beta_samples, alpha=0.1)\nplt.title(\"Why does the plot look like this?\")\nplt.xlabel(r\"$\\alpha$\")\nplt.ylabel(r\"$\\beta$\")", "References\n\n[1] Dalal, Fowlkes and Hoadley (1989), JASA, 84, 945-957.\n[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.\n[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.\n[4] Fonnesbeck, Christopher. \"Building Models.\" PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.\n[5] Cronin, Beau. 
\"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.\n[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000\n[7] Gelman, Andrew. \"Philosophy and the practice of Bayesian statistics.\" British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.\n[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. \"The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models.\" American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.", "from IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
theandygross/HIV_Methylation
Validation/Unsupervised_both.ipynb
mit
[ "Process Normalized Cross-Study Dataset\nFor differential expression analysis, we processed all of the data together on a high memory node. This was done using the minfi package in R, and all data were normalized using the preprocessQuantile function. For a full script of the R pipeline see the Methylation_Normalization_MINFI notebook.\nImports and helper functions from Imports notebook.", "import os \nif os.getcwd().endswith('Validation'):\n os.chdir('..')\n\nimport NotebookImport\nfrom Setup.Imports import *", "Read in Validation Data", "path = '/cellar/users/agross/TCGA_Code/Methlation/data/Validation/'\n\ndf_hiv = pd.read_csv(path + 'BMIQ_Hannum2.csv', index_col=0)\n\ncell_type = pd.Series(df_hiv.columns, index=df_hiv.columns)\ncell_type = cell_type.map(lambda s: s.strip()[-3:])\n\ndf_hiv.shape\n\ncell_type.value_counts()", "Read in annotations", "hiv_ann = pd.read_excel(path + 'Methylome human NEUvs WB.xlsx', \n sheetname='HIV', skiprows=2, index_col=0)\ncontrol_ann = pd.read_excel(path + 'Methylome human NEUvs WB.xlsx',\n sheetname='Negative', skiprows=2, index_col=0)\nann = pd.concat([hiv_ann, control_ann], keys=['HIV+','HIV-'])\nann = ann.reset_index()\nann = ann.rename(columns={'Age (years)': 'age', 'level_0':'HIV'})\nann = ann.set_index('PATID')\n\nhiv_ann = pd.read_excel(path + 'Methylome human NEUvs WB.xlsx', \n sheetname='HIV', skiprows=2, index_col=0)\ncontrol_ann = pd.read_excel(path + 'Methylome human NEUvs WB.xlsx',\n sheetname='Negative', skiprows=2, index_col=0)\nann = pd.concat([hiv_ann, control_ann], keys=['HIV+','HIV-'])\nann = ann.reset_index()\nann = ann.rename(columns={'Age (years)': 'age', 'level_0':'HIV'})\nann = ann.set_index('PATID')\n\nm2 = pd.read_excel('/cellar/users/agross/Downloads_Old/Methylome Study Demographic Data.xlsx',\n skiprows=5, index_col=0)\nage2 = m2['Age (years)']\nann = ann.ix[ann.index.union(age2.index)]\nann['age'] = age2.combine_first(ann.age)\nann['HIV'] = ann.HIV.fillna('HIV+')\n\nage_n = 
pd.Series({'{}_Neu'.format(i[3:]): v for i,v in ann.age.iteritems()})\nage_t = pd.Series({'{}_CD4'.format(i[3:]): v for i,v in ann.age.iteritems()})\nage = pd.concat([age_n, age_t])\n\nhiv_n = pd.Series({'{}_Neu'.format(i[3:]): v for i,v in ann.HIV.iteritems()})\nhiv_t = pd.Series({'{}_CD4'.format(i[3:]): v for i,v in ann.HIV.iteritems()})\nhiv = pd.concat([hiv_n, hiv_t])\nhiv = hiv == 'HIV+'\n\nimport Setup.DX_Imports as dx\nimport Parallel.Age_HIV_Features as fx\n\nneu_corr = pd.read_csv('/cellar/users/agross/Data/Methylation_Controls/neutrophils_age_corr.csv',\n index_col=0, squeeze=True)\ncd4_corr = pd.read_csv('/cellar/users/agross/Data/Methylation_Controls/CD4T_age_corr.csv',\n index_col=0, squeeze=True)\n\nr2 = ti(fx.rr > 1)\nr4 = r2.intersection(ti(cd4_corr.abs() > .2))\n#r4 = cd4_corr.abs().ix[r2].dropna().order()[-10000:].index\nlen(r4)\n\ndf = df_hiv.ix[r4, ti(cell_type=='CD4')].dropna(1)\ndd = logit_adj(df)\nm = dd.mean(1)\ns = dd.std(1)\ndf_norm = dd.subtract(m, axis=0).divide(s, axis=0)\n\nU,S,vH = frame_svd(df_norm)\n\np = S ** 2 / sum(S ** 2)\np[:5]\n\nfig, ax = subplots(1,1, figsize=(4,3))\nrr1 = -1*vH[0]\n\nsns.regplot(*match_series(age, rr1.ix[ti(hiv==0)]),\n ax=ax, label='HIV+', ci=None)\nsns.regplot(*match_series(age, rr1.ix[ti(hiv>0)]),\n ax=ax, label='Control', ci=None)\nax.set_ylabel('First PC', size=12)\nax.set_xlabel('Chronological age (years)', size=14)\n\nax.set_yticks([0])\nax.axhline(0, ls='--', lw=2.5, color='grey', zorder=-1)\nax.set_xbound(23,70)\nprettify_ax(ax)\nfig.tight_layout()\n\nr2 = ti(fx.rr > 1)\nr4 = r2.intersection(ti(neu_corr.abs() > .2))\n#r4 = neu_corr.abs().ix[r2].dropna().order()[-10000:].index\nlen(r4)\n\ndf = df_hiv.ix[r4, ti(cell_type=='Neu')].dropna(1)\ndd = logit_adj(df)\nm = dd.mean(1)\ns = dd.std(1)\ndf_norm = dd.subtract(m, axis=0).divide(s, axis=0)\n\nU,S,vH = frame_svd(df_norm)\np = S ** 2 / sum(S ** 2)\np[:5]\n\n((vH[0] - vH[0].mean()) / vH[0].std()).abs().order().tail()\n\nU,S,vH = frame_svd(df_norm[[p 
for p in df_norm.columns if p != '383_Neu']])\np = S ** 2 / sum(S ** 2)\np[:5]\n\nfig, ax = subplots(1,1, figsize=(4,3))\nrr2 = -1*vH[0]\n\nsns.regplot(*match_series(age, rr2.ix[ti(hiv==0)]),\n ax=ax, label='HIV+', ci=None)\nsns.regplot(*match_series(age, rr2.ix[ti(hiv>0)]),\n ax=ax, label='Control', ci=None)\nax.set_ylabel('First PC', size=12)\nax.set_xlabel('Chronological age (years)', size=14)\n\nax.set_yticks([0])\nax.axhline(0, ls='--', lw=2.5, color='grey', zorder=-1)\nax.set_xbound(23,70)\n#ax.set_ybound(-.25,.25)\nprettify_ax(ax)\nfig.tight_layout()\n\nfig, axs = subplots(1,2, figsize=(8,3))\n\nax = axs[0]\nsns.regplot(*match_series(age, rr1.ix[ti(hiv==0)]),\n ax=ax, label='HIV+', ci=None)\nsns.regplot(*match_series(age, rr1.ix[ti(hiv>0)]),\n ax=ax, label='Control', ci=None)\nax.set_ylabel('First PC\\n(CD4+)', size=12)\n\nax = axs[1]\nsns.regplot(*match_series(age, rr2.ix[ti(hiv==0)]),\n ax=ax, label='HIV+', ci=None)\nsns.regplot(*match_series(age, rr2.ix[ti(hiv>0)]),\n ax=ax, label='Control', ci=None)\nax.set_ylabel('First PC\\n(Neutrophil)', size=12)\n\nfor ax in axs:\n ax.legend(loc='lower right', frameon=True, fancybox=True)\n ax.set_xlabel('Chronological age (years)', size=14)\n\n ax.set_yticks([0])\n ax.axhline(0, ls='--', lw=2.5, color='grey', zorder=-1)\n ax.set_xbound(23,70)\n prettify_ax(ax)\n \nfig.tight_layout()\nfig.savefig(FIGDIR + 'PCA_sorted.png', dpi=200)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
thewtex/SimpleITK-Notebooks
65_Registration_FFD.ipynb
apache-2.0
[ "<h1 align=\"center\">Non-Rigid Registration: Free Form Deformation</h1>\n\nThis notebook illustrates the use of the Free Form Deformation (FFD) based non-rigid registration algorithm in SimpleITK.\nThe data we work with is a 4D (3D+time) thoracic-abdominal CT, the Point-validated Pixel-based Breathing Thorax Model (POPI) model. This data consists of a set of temporal CT volumes, a set of masks segmenting each of the CTs to air/body/lung, and a set of corresponding points across the CT volumes. \nThe POPI model is provided by the Léon Bérard Cancer Center & CREATIS Laboratory, Lyon, France. The relevant publication is:\nJ. Vandemeulebroucke, D. Sarrut, P. Clarysse, \"The POPI-model, a point-validated pixel-based breathing thorax model\",\nProc. XVth International Conference on the Use of Computers in Radiation Therapy (ICCR), Toronto, Canada, 2007.\nThe POPI data, and additional 4D CT data sets with reference points are available from the CREATIS Laboratory <a href=\"http://www.creatis.insa-lyon.fr/rio/popi-model?action=show&redirect=popi\">here</a>.", "import SimpleITK as sitk\nimport registration_utilities as ru\nimport registration_callbacks as rc\n\nfrom __future__ import print_function\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom ipywidgets import interact, fixed\n\n#utility method that either downloads data from the MIDAS repository or\n#if already downloaded returns the file name for reading from disk (cached data)\nfrom downloaddata import fetch_data as fdata", "Utilities\nLoad utilities that are specific to the POPI data, functions for loading ground truth data, display and the labels for masks.", "%run popi_utilities_setup.py", "Loading Data\nLoad all of the images, masks and point data into corresponding lists. If the data is not available locally it will be downloaded from the original remote repository. \nTake a look at the images. 
According to the documentation on the POPI site, volume number one corresponds to end inspiration (maximal air volume).", "images = []\nmasks = []\npoints = []\nfor i in range(0,10):\n    image_file_name = 'POPI/meta/{0}0-P.mhd'.format(i)\n    mask_file_name = 'POPI/masks/{0}0-air-body-lungs.mhd'.format(i)\n    points_file_name = 'POPI/landmarks/{0}0-Landmarks.pts'.format(i)\n    images.append(sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)) #read and cast to format required for registration\n    masks.append(sitk.ReadImage(fdata(mask_file_name)))\n    points.append(read_POPI_points(fdata(points_file_name)))\n    \ninteract(display_coronal_with_overlay, temporal_slice=(0,len(images)-1), \n         coronal_slice = (0, images[0].GetSize()[1]-1), \n         images = fixed(images), masks = fixed(masks), \n         label=fixed(lung_label), window_min = fixed(-1024), window_max=fixed(976));", "Getting to know your data\nWhile the POPI site states that image number 1 is end inspiration, and visual inspection seems to suggest this is correct, we should probably take a look at the lung volumes to ensure that what we expect is indeed what is happening.\nWhich image is end inspiration and which is end expiration?", "label_shape_statistics_filter = sitk.LabelShapeStatisticsImageFilter()\n\nfor i, mask in enumerate(masks):\n    label_shape_statistics_filter.Execute(mask)\n    print('Lung volume in image {0} is {1} liters.'.format(i,0.000001*label_shape_statistics_filter.GetPhysicalSize(lung_label)))", "Free Form Deformation\nThis function will align the fixed and moving images using an FFD. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed during registration. 
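For reference, the target registration error (TRE) reported during registration is conceptually just the distance between paired landmarks after mapping one set through the transform (which set gets mapped depends on the transform's direction convention). A minimal NumPy sketch of that idea, not the actual registration_utilities implementation:

```python
import numpy as np

def tre(transform, fixed_points, moving_points):
    # distance between each fixed point and the transformed moving point
    return [float(np.linalg.norm(np.asarray(transform(m)) - np.asarray(f)))
            for f, m in zip(fixed_points, moving_points)]

# with an identity "transform", the TRE is just the point-to-point distance
identity = lambda pt: pt
errors = tre(identity,
             [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
             [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)])
assert errors == [1.0, 0.0]
```

With a real SimpleITK transform one would pass a callable such as `tx.TransformPoint` in place of the identity lambda.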
\nAs this notebook performs intra-modal registration, we use the MeanSquares similarity metric (simple to compute and appropriate for the task).", "def bspline_intra_modal_registration(fixed_image, moving_image, fixed_image_mask=None, fixed_points=None, moving_points=None):\n\n registration_method = sitk.ImageRegistrationMethod()\n \n # Determine the number of Bspline control points using the physical spacing we want for the control grid. \n grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm\n image_physical_size = [size*spacing for size,spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())]\n mesh_size = [int(image_size/grid_spacing + 0.5) \\\n for image_size,grid_spacing in zip(image_physical_size,grid_physical_spacing)]\n\n initial_transform = sitk.BSplineTransformInitializer(image1 = fixed_image, \n transformDomainMeshSize = mesh_size, order=3) \n registration_method.SetInitialTransform(initial_transform)\n \n registration_method.SetMetricAsMeanSquares()\n # Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be \n # generated inside that region. Also, this implicitly speeds things up as the mask is smaller than the\n # whole image.\n registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n registration_method.SetMetricSamplingPercentage(0.01)\n if fixed_image_mask:\n registration_method.SetMetricFixedMask(fixed_image_mask)\n \n # Multi-resolution framework. 
\n    registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\n    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])\n    registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n\n    registration_method.SetInterpolator(sitk.sitkLinear)\n    registration_method.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)\n    \n\n    # If corresponding points in the fixed and moving image are given then we display the similarity metric\n    # and the TRE during the registration.\n    if fixed_points and moving_points:\n        registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)\n        registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)\n        registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))\n    \n    return registration_method.Execute(fixed_image, moving_image)", "Perform Registration\nThe following cell allows you to select the images used for registration, runs the registration, and afterwards computes statistics comparing the target registration errors before and after registration and displays a histogram of the TREs.\nTo time the registration, uncomment the timeit magic. \n<b>Note</b>: this creates a separate scope for the cell. 
Variables set inside the cell, specifically tx, will become local variables and thus their value is not available in other cells.", "#%%timeit -r1 -n1\n\n# Select the fixed and moving images, valid entries are in [0,9].\nfixed_image_index = 0\nmoving_image_index = 7\n\n\ntx = bspline_intra_modal_registration(fixed_image = images[fixed_image_index], \n moving_image = images[moving_image_index],\n fixed_image_mask = (masks[fixed_image_index] == lung_label),\n fixed_points = points[fixed_image_index], \n moving_points = points[moving_image_index]\n )\ninitial_errors_mean, initial_errors_std, _, initial_errors_max, initial_errors = ru.registration_errors(sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index])\nfinal_errors_mean, final_errors_std, _, final_errors_max, final_errors = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])\n\nplt.hist(initial_errors, bins=20, alpha=0.5, label='before registration', color='blue')\nplt.hist(final_errors, bins=20, alpha=0.5, label='after registration', color='green')\nplt.legend()\nplt.title('TRE histogram');\nprint('Initial alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max))\nprint('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))", "Another option for evaluating the registration is to use segmentation. In this case, we transfer the segmentation from one image to the other and compare the overlaps, both visually, and quantitatively.", "# Transfer the segmentation via the estimated transformation. 
Use Nearest Neighbor interpolation to retain the labels.\ntransformed_labels = sitk.Resample(masks[moving_image_index],\n                                   images[fixed_image_index],\n                                   tx, \n                                   sitk.sitkNearestNeighbor,\n                                   0.0, \n                                   masks[moving_image_index].GetPixelIDValue())\n\nsegmentations_before_and_after = [masks[moving_image_index], transformed_labels]\ninteract(display_coronal_with_label_maps_overlay, coronal_slice = (0, images[0].GetSize()[1]-1),\n         mask_index=(0,len(segmentations_before_and_after)-1),\n         image = fixed(images[fixed_image_index]), masks = fixed(segmentations_before_and_after), \n         label=fixed(lung_label), window_min = fixed(-1024), window_max=fixed(976));", "# Compute the Dice coefficient and Hausdorff distance between the segmentations before and after registration.\nground_truth = masks[fixed_image_index] == lung_label\nbefore_registration = masks[moving_image_index] == lung_label\nafter_registration = transformed_labels == lung_label\n\nlabel_overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()\nlabel_overlap_measures_filter.Execute(ground_truth, before_registration)\nprint(\"Dice coefficient before registration: {:.2f}\".format(label_overlap_measures_filter.GetDiceCoefficient()))\nlabel_overlap_measures_filter.Execute(ground_truth, after_registration)\nprint(\"Dice coefficient after registration: {:.2f}\".format(label_overlap_measures_filter.GetDiceCoefficient()))\n\nhausdorff_distance_image_filter = sitk.HausdorffDistanceImageFilter()\nhausdorff_distance_image_filter.Execute(ground_truth, before_registration)\nprint(\"Hausdorff distance before registration: {:.2f}\".format(hausdorff_distance_image_filter.GetHausdorffDistance()))\nhausdorff_distance_image_filter.Execute(ground_truth, after_registration)\nprint(\"Hausdorff distance after registration: {:.2f}\".format(hausdorff_distance_image_filter.GetHausdorffDistance()))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
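The record above evaluates registration with the Dice coefficient via SimpleITK's `LabelOverlapMeasuresImageFilter`. A minimal NumPy sketch of the same overlap measure on binary masks (the helper name `dice_coefficient` is an illustration, not a SimpleITK API):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# identical masks overlap perfectly, disjoint masks not at all
full = np.array([[1, 1], [0, 0]])
print(dice_coefficient(full, full))  # → 1.0
```

A Dice value of 1.0 after registration would indicate perfect label overlap; values well below 1.0 flag residual misalignment.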
hainm/mdtraj
examples/centroids.ipynb
lgpl-2.1
[ "Finding centroids\nIn this example, we're going to find a \"centroid\" (representative structure) for a group of conformations. This group might potentially come from clustering, using a method like Ward hierarchical clustering.\nNote that there are many possible ways to define the centroids. This is just one.", "from __future__ import print_function\n%matplotlib inline\nimport mdtraj as md\nimport numpy as np", "Load up a trajectory to use for the example.", "traj = md.load('ala2.h5')\nprint(traj)", "Let's compute all pairwise RMSDs between conformations.", "atom_indices = [a.index for a in traj.topology.atoms if a.element.symbol != 'H']\ndistances = np.empty((traj.n_frames, traj.n_frames))\nfor i in range(traj.n_frames):\n    distances[i] = md.rmsd(traj, traj, i, atom_indices=atom_indices)", "The algorithm we're going to use is relatively simple:\n- Compute all of the pairwise RMSDs between the conformations. This is O(N^2), so it's not going to\n  scale extremely well to large datasets.\n- Transform these distances into similarity scores. Our similarities will be calculated as\n  $$ s_{ij} = e^{-\\beta \\cdot d_{ij} / d_\\text{scale}} $$\n  where $s_{ij}$ is the pairwise similarity, $d_{ij}$ is the pairwise distance, and $d_\\text{scale}$ is the standard deviation of\n  the values of $d$, to make the computation scale invariant.\n- Then, we define the centroid as\n  $$ \\text{argmax}_i \\sum_j s_{ij} $$\nUsing $\\beta=1$, this is implemented with the following code:", "beta = 1\nindex = np.exp(-beta*distances / distances.std()).sum(axis=1).argmax()\nprint(index)\n\ncentroid = traj[index]\nprint(centroid)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
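The centroid rule in the record above — exponentiate the scaled pairwise distances and pick the frame with the largest similarity row-sum — can be checked on a toy distance matrix without mdtraj (the function name `centroid_index` is an illustration):

```python
import numpy as np

def centroid_index(distances, beta=1.0):
    """Index of the most central item: argmax_i sum_j exp(-beta * d_ij / std(d))."""
    similarities = np.exp(-beta * distances / distances.std())
    return int(similarities.sum(axis=1).argmax())

# item 1 sits between items 0 and 2, so it should be the centroid
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0],
              [4.0, 1.0, 0.0]])
print(centroid_index(D))  # → 1
```

Dividing by `distances.std()` makes the result invariant to a uniform rescaling of all distances, as the notebook notes.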
Timmy-Oh/Generating-Visual-Explanation
XAI.ipynb
mit
[ "Import", "import tensorflow as tf\nfrom PIL import Image\nimport numpy as np\nfrom scipy.misc import imread, imresize\nfrom imagenet_classes import class_names\nimport os", "file_path", "#File Path\n# filepath_input = \"./data/run/\" #input csv file path\nfilepath_ckpt = \"./ckpt/model_weight.ckpt\" #weight saver check point file path\nfilepath_pred = \"./output/predicted.csv\" #predicted value file path\nfilename_queue_description = tf.train.string_input_producer(['./data/description/raw_data.csv'])\nnum_record = 50", "LSTM - Hyper Params", "label_vec_size = 5\ninput_vec_size = 27\nbatch_size = 50\nstate_size_1 = 100\nstate_size_2 = 4096 + state_size_1\nhidden = 15\nlearning_rate = 0.01", "vgg16", "class vgg16:\n def __init__(self, imgs, weights=None, sess=None):\n self.imgs = imgs\n self.convlayers()\n self.fc_layers()\n self.probs = tf.nn.softmax(self.fc3l)\n if weights is not None and sess is not None:\n self.load_weights(weights, sess)\n\n\n def convlayers(self):\n self.parameters = []\n\n # zero-mean input\n with tf.name_scope('preprocess') as scope:\n mean = tf.constant([123.68, 116.779, 103.939], dtype=tf.float32, shape=[1, 1, 1, 3], name='img_mean')\n images = self.imgs-mean\n\n # conv1_1\n with tf.name_scope('conv1_1') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv1_1 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv1_2\n with tf.name_scope('conv1_2') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv1_1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),\n trainable=True, 
name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv1_2 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # pool1\n self.pool1 = tf.nn.max_pool(self.conv1_2,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME',\n name='pool1')\n\n # conv2_1\n with tf.name_scope('conv2_1') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.pool1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv2_1 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv2_2\n with tf.name_scope('conv2_2') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv2_1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv2_2 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # pool2\n self.pool2 = tf.nn.max_pool(self.conv2_2,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME',\n name='pool2')\n\n # conv3_1\n with tf.name_scope('conv3_1') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.pool2, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv3_1 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv3_2\n with tf.name_scope('conv3_2') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = 
tf.nn.conv2d(self.conv3_1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv3_2 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv3_3\n with tf.name_scope('conv3_3') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv3_2, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv3_3 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # pool3\n self.pool3 = tf.nn.max_pool(self.conv3_3,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME',\n name='pool3')\n\n # conv4_1\n with tf.name_scope('conv4_1') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.pool3, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv4_1 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv4_2\n with tf.name_scope('conv4_2') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv4_1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv4_2 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv4_3\n with tf.name_scope('conv4_3') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = 
tf.nn.conv2d(self.conv4_2, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv4_3 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # pool4\n self.pool4 = tf.nn.max_pool(self.conv4_3,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME',\n name='pool4')\n\n # conv5_1\n with tf.name_scope('conv5_1') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.pool4, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv5_1 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv5_2\n with tf.name_scope('conv5_2') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv5_1, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv5_2 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # conv5_3\n with tf.name_scope('conv5_3') as scope:\n kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,\n stddev=1e-1), name='weights')\n conv = tf.nn.conv2d(self.conv5_2, kernel, [1, 1, 1, 1], padding='SAME')\n biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),\n trainable=True, name='biases')\n out = tf.nn.bias_add(conv, biases)\n self.conv5_3 = tf.nn.relu(out, name=scope)\n self.parameters += [kernel, biases]\n\n # pool5\n self.pool5 = tf.nn.max_pool(self.conv5_3,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME',\n name='pool4')\n\n def fc_layers(self):\n # fc1\n with 
tf.name_scope('fc1') as scope:\n shape = int(np.prod(self.pool5.get_shape()[1:]))\n fc1w = tf.Variable(tf.truncated_normal([shape, 4096],\n dtype=tf.float32,\n stddev=1e-1), name='weights')\n fc1b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),\n trainable=True, name='biases')\n pool5_flat = tf.reshape(self.pool5, [-1, shape])\n fc1l = tf.nn.bias_add(tf.matmul(pool5_flat, fc1w), fc1b)\n self.fc1 = tf.nn.relu(fc1l)\n self.parameters += [fc1w, fc1b]\n\n # fc2\n with tf.name_scope('fc2') as scope:\n fc2w = tf.Variable(tf.truncated_normal([4096, 4096],\n dtype=tf.float32,\n stddev=1e-1), name='weights')\n fc2b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),\n trainable=True, name='biases')\n fc2l = tf.nn.bias_add(tf.matmul(self.fc1, fc2w), fc2b)\n self.fc2 = tf.nn.relu(fc2l)\n self.parameters += [fc2w, fc2b]\n\n # fc3\n with tf.name_scope('fc3') as scope:\n fc3w = tf.Variable(tf.truncated_normal([4096, 1000],\n dtype=tf.float32,\n stddev=1e-1), name='weights')\n fc3b = tf.Variable(tf.constant(1.0, shape=[1000], dtype=tf.float32),\n trainable=True, name='biases')\n self.fc3l = tf.nn.bias_add(tf.matmul(self.fc2, fc3w), fc3b)\n self.parameters += [fc3w, fc3b]\n\n def load_weights(self, weight_file, sess):\n weights = np.load(weight_file)\n keys = sorted(weights.keys())\n for i, k in enumerate(keys):\n print(i, k, np.shape(weights[k]))\n sess.run(self.parameters[i].assign(weights[k]))", "load_vgg16", "with tf.Session() as sess_vgg:\n imgs = tf.placeholder(tf.float32, [None, 200, 200, 3])\n vgg = vgg16(imgs, 'vgg16_weights.npz', sess_vgg)\n img_files = ['./data/img/cropped/' + i for i in os.listdir('./data/img/cropped')]\n imgs = [imread(file, mode='RGB') for file in img_files]\n temps = [sess_vgg.run(vgg.fc1, feed_dict={vgg.imgs: [imgs[i]]})[0] for i in range(50)]\n reimgs= np.reshape(a=temps, newshape=[50,-1])\n sess_vgg.close()", "File Info", "reader = tf.TextLineReader()\nkey,value = 
reader.read(filename_queue_description)\nrecord_defaults =[[-1], [-1], [-1], [-1], [-1], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2]]\nlab1, lab2, lab3, lab4, lab5, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15 = tf.decode_csv(value, record_defaults) \n\nfeature_label = tf.stack([lab1, lab2, lab3, lab4, lab5])\nfeature_word = tf.stack([w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15])\n\nwith tf.Session() as sess_data:\n coord = tf.train.Coordinator()\n threads = tf.train.start_queue_runners(coord=coord)\n img_queue = []\n for i in range(num_record):\n# image = sess.run(images)\n \n label, raw_word = sess_data.run([feature_label, feature_word])\n onehot = tf.one_hot(indices=raw_word, depth=27)\n if i == 0:\n full_input = onehot\n full_label = label\n else:\n full_input = tf.concat([full_input, onehot], 0)\n full_label = tf.concat([full_label, label], 0)\n# print(sess.run(tf.shape(image)))\n# batch = tf.train.batch([image, label], 1)\n# print(sess.run(batch))\n \n coord.request_stop()\n coord.join(threads)\n sess_data.close()", "Text Reader\ndef input_pipeline(filenames, batch_size, num_epochs=None):\n filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs, shuffle=False)\n images = tf.image.decode_png(value, channels=3, dtype=tf.uint8)\n reader = tf.TextLineReader()\n key,value = reader.read(filename_queue_description)\n record_defaults =[[-1], [-1], [-1], [-1], [-1], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2]]\n lab1, lab2, lab3, lab4, lab5, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15 = tf.decode_csv(value, record_defaults) \nfeature_label = tf.stack([lab1, lab2, lab3, lab4, lab5])\nfeature_word = tf.stack([w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15])\nexample_batch, label_batch = tf.train.batch([images, feature_label], batch_size=batch_size)\nreturn example_batch, 
label_batch\n\nwith tf.Session() as sess:\n coord = tf.train.Coordinator()\n threads = tf.train.start_queue_runners(coord=coord)\ninput_pipeline(filename_queue_description, num_epochs=1, batch_size=10)\ncoord.request_stop()\ncoord.join(threads)\nsess.close()\n\nBatching", "with tf.name_scope('batch') as scope:\n # full_label = tf.reshape(full_label, [batch_size, hidden, label_vec_size])\n full_input = tf.reshape(full_input, [batch_size, hidden, input_vec_size])\n input_batch, label_batch = tf.train.batch([full_input, full_input], batch_size=1)", "LSTM First Layer", "with tf.name_scope('lstm_layer_1') as scope:\n with tf.variable_scope('lstm_layer_1'):\n rnn_cell_1 = tf.contrib.rnn.BasicLSTMCell(state_size_1, reuse=None)\n output_1, _ = tf.contrib.rnn.static_rnn(rnn_cell_1, tf.unstack(full_input, axis=1), dtype=tf.float32)\n# output_w_1 = tf.Variable(tf.truncated_normal([hidden, state_size_1, input_vec_size]))\n# output_b_1 = tf.Variable(tf.zeros([input_vec_size]))\n# pred_temp = tf.matmul(output_1, output_w_1) + output_b_1\n\nwith tf.Session() as sess_temp:\n print(sess_temp.run(tf.shape(output_1)))", "matrix_concat", "input_2 = [tf.concat([out, reimgs], axis=1) for out in output_1]", "LSTM Second Layer", "with tf.name_scope('lstm_layer_2') as scope:\n with tf.variable_scope('lstm_layer_2'):\n rnn_cell_2 = tf.contrib.rnn.BasicLSTMCell(state_size_2, reuse=None)\n output_2, _ = tf.contrib.rnn.static_rnn(rnn_cell_2, tf.unstack(input_2, axis=0), dtype=tf.float32)\n output_w_2 = tf.Variable(tf.truncated_normal([hidden, state_size_2, input_vec_size]))\n output_b_2 = tf.Variable(tf.zeros([input_vec_size]))\n pred = tf.nn.softmax(tf.matmul(output_2, output_w_2) + output_b_2)\n\nwith tf.name_scope('loss') as scope:\n loss = tf.constant(0, tf.float32)\n for i in range(hidden):\n loss += tf.losses.softmax_cross_entropy(tf.unstack(full_input, axis=1)[i], tf.unstack(pred, axis=0)[i])\n train = tf.train.AdamOptimizer(learning_rate).minimize(loss)\n\nwith tf.Session() as 
sess_train:\n sess_train.run(tf.global_variables_initializer())\n saver = tf.train.Saver()\n save_path = saver.save(sess_train, filepath_ckpt)\n \n for i in range(31):\n sess_train.run(train)\n if i % 5 == 0:\n print(\"loss : \", sess_train.run(loss))\n# print(\"pred : \", sess.run(pred))\n save_path = saver.save(sess_train, filepath_ckpt)\n print(\"= Weigths are saved in \" + filepath_ckpt)\n sess_train.close()", "Test", "with tf.Session() as sess_vgg_test:\n imgs = tf.placeholder(tf.float32, [None, 200, 200, 3])\n vgg = vgg16(imgs, 'vgg16_weights.npz', sess_vgg_test)\n test_img_files = ['./data/img/cropped/001.png']\n test_imgs = [imread(file, mode='RGB') for file in test_img_files]\n# bilinear_test_imgs = [imresize(arr=img,interp='bilinear') for img in test_imgs]\n temps = [sess_vgg_test.run(vgg.fc1, feed_dict={vgg.imgs: [img]})[0] for img in test_imgs]\n test_reimgs= np.reshape(a=temps, newshape=[1,-1])\n sess_vgg_test.close()\n\nstart_input = tf.zeros([1,15,27])\nwith tf.Session() as sess_init_generator:\n input_init = sess_init_generator.run(start_input)\nsos = [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]\ninput_init[0][0] = sos\n\nwith tf.name_scope('lstm_layer_1') as scope:\n with tf.variable_scope('lstm_layer_1'):\n rnn_cell_1 = tf.contrib.rnn.BasicLSTMCell(state_size_1, reuse=True)\n output_test_1, _ = tf.contrib.rnn.static_rnn(rnn_cell_1, tf.unstack(input_init, axis=1), dtype=tf.float32)\n# output_t_1 = tf.contrib.rnn.static_rnn(rnn_cell, tf.unstack(full_input, axis=1), dtype=tf.float32)\n# pred = tf.nn.softmax(tf.matmul(output1, output_w[0]) + output_b[0])\n\ninput_2 = [tf.concat([out, test_reimgs], axis=1) for out in output_test_1]\n\nwith tf.name_scope('lstm_layer_2') as scope:\n with tf.variable_scope('lstm_layer_2'):\n rnn_cell_2 = tf.contrib.rnn.BasicLSTMCell(state_size_2, reuse=None)\n output_2, _ = tf.contrib.rnn.static_rnn(rnn_cell_2, tf.unstack(input_2, axis=0), dtype=tf.float32)\n output_w_2 = 
tf.Variable(tf.truncated_normal([hidden, state_size_2, input_vec_size]))\n output_b_2 = tf.Variable(tf.zeros([input_vec_size]))\n pred = tf.nn.softmax(tf.matmul(output_2, output_w_2) + output_b_2)\n\nsess_model = tf.Session()\nsaver = tf.train.Saver(allow_empty=True)\nsaver.restore(sess_model, filepath_ckpt)\n\nfor i in range(hidden):\n result = sess_model.run(pred)\n result_temp = result[i]\n if i == hidden -1:\n pass\n else:\n input_init[0][i+1] = result_temp", "Result Check", "print(result.shape)\n\ndecoded_result = np.argmax(a=result, axis=2)\n\nprint(result)\n\nprint(decoded_result)", "Code Storage" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
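The description pipeline in the record above one-hot encodes word indices with `tf.one_hot(indices=raw_word, depth=27)`. A minimal NumPy equivalent, for illustration only:

```python
import numpy as np

def one_hot(indices, depth):
    """Encode integer indices as rows of a (len(indices), depth) one-hot matrix."""
    out = np.zeros((len(indices), depth), dtype=np.float32)
    out[np.arange(len(indices)), indices] = 1.0
    return out

print(one_hot([0, 2], 3))
```

Each row contains a single 1.0 at the position given by the corresponding index, matching what `tf.one_hot` produces for rank-1 input.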
sdpython/pyquickhelper
_unittests/ut_helpgen/notebooks_svg/seance4_projection_population_correction.ipynb
mit
[ "Evolution of a population (solution)", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Exercise 1: population pyramids", "from actuariat_python.data import population_france_2015\npopulation = population_france_2015()\ndf = population\ndf.head(n=3)\n\nhommes = df[\"hommes\"]\nfemmes = df[\"femmes\"]\nsomme = hommes - femmes", "Here I reuse the code presented on Damien Vergnaud's Homepage, adapting it slightly with the matplotlib functions through the pyplot interface. Then I add the difference by age. One often starts with the gallery to see whether a plot, or just part of one, is similar to what one wants to obtain.", "from matplotlib import pyplot as plt\nfrom numpy import arange\nplt.style.use('ggplot')\nfig, ax = plt.subplots(figsize=(8,8))\nValH = ax.barh(arange(len(hommes)),hommes,1.0,label=\"Hommes\",color='b',linewidth=0,align='center')\nValF = ax.barh(arange(len(femmes)),-femmes,1.0,label=\"Femmes\",color='r',linewidth=0,align='center')\ndiff, = ax.plot(somme,arange(len(femmes)),'y',linewidth=2)\nax.set_title(\"Pyramide des âges\")\nax.set_ylabel(\"Ages\")\nax.set_xlabel(\"Habitants\")\nax.set_ylim([0,110])\nax.legend((ValH[0],ValF[0],diff),('Hommes','Femmes','différence'))", "The same plot, using the function included in the actuariat_python module.", "from actuariat_python.plots import plot_population_pyramid\nplot_population_pyramid(df[\"hommes\"], df[\"femmes\"], figsize=(8,4))", "Exercise 2: computing life expectancy\nThe first objective is to compute the life expectancy at age $t$ from the mortality table. We retrieve this table.", "from actuariat_python.data import table_mortalite_france_00_02\ndf=table_mortalite_france_00_02()\nimport pandas\npandas.concat([ df.head(n=3), df.tail(n=3) ])", "We denote by $P_t$ the population at age $t$. The probability of dying at time $t+d$ for someone aged $t$ corresponds to the probability of staying alive until age $t+d$ and then dying within the following year:\n$$m_{t+d} = \\frac{P_{t+d}}{P_t}\\frac{P_{t+d} - P_{t+d+1}}{P_{t+d}} $$.\nLife expectancy is expressed as:\n$$\\mathbb{E}(t) = \\sum_{d=1}^\\infty d m_{t+d} = \\sum_{d=1}^\\infty d \\frac{P_{t+d}}{P_t}\\frac{P_{t+d} - P_{t+d+1}}{P_{t+d}} = \\sum_{d=1}^\\infty d \\frac{P_{t+d} - P_{t+d+1}}{P_{t}} $$\nWe create a matrix going from 0 to 120 years and set $\\mathbb{E}(120)=0$. We use the numpy module.", "import numpy\nhf = df[[\"Homme\", \"Femme\"]].as_matrix()\nhf = numpy.vstack([hf, numpy.zeros((8,2))])\nhf.shape\n\nnb = hf.shape[0]\nesp = numpy.zeros ((nb,2))\nfor t in range(0,nb):\n    for i in (0,1):\n        if hf[t,i] == 0:\n            esp[t,i] = 0\n        else:\n            somme = 0.0\n            for d in range(1,nb-t):\n                if hf[t+d,i] > 0:\n                    somme += d * (hf[t+d,i] - hf[t+d+1,i]) / hf[t,i]\n            esp[t,i] = somme\nesp[:1] ", "Finally, we draw the result with matplotlib:", "from matplotlib import pyplot as plt\nplt.style.use('ggplot')\nh = plt.plot(esp)\nplt.legend(h, [\"Homme\", \"Femme\"])\nplt.title(\"Espérance de vie\")", "The computation implemented above is not the most efficient one. It uses two nested loops with an overall cost of $O(n^2)$, and above all it performs the same computations several times. To bring the cost down to linear $O(n)$, one has to look at the quantity:\n$$P_{t+1} \\mathbb{E}(t+1) - P_t \\mathbb{E}(t) = \\sum_{d=1}^\\infty d (P_{t+d+1} - P_{t+d+2}) - \\sum_{d=1}^\\infty d (P_{t+d} - P_{t+d+1})$$\nThe implementation will have to use the numpy.cumsum function and this trick: Pandas Dataframe cumsum by row in reverse column order?.", "# to be continued", "numpy and pandas have several functions in common when it comes to traversing data. There is also the DataFrame.cumsum function.\nExercise 3: simulating the 2016 pyramid\nThe objective is to estimate the French population in 2016. If $P(a,2015)$ denotes the number of people aged $a$ in 2015, we can estimate $P(a,2016)$ using the probability of dying $m(a)$:\n$$ P(a+1, 2016) = P(a,2015) * (1 - m(a))$$\nWe start by computing the coefficients $m(a)$ with the table hf obtained in the previous exercise, while keeping the same dimension (we will need the nan_to_num function):", "mortalite = (hf[:-1] - hf[1:]) / hf[:-1]\nmortalite = numpy.nan_to_num(mortalite) # divisions by zero become nan; we replace them with 0\nmortalite = numpy.vstack([mortalite, numpy.zeros((1,2))])\nm = mortalite", "The population was obtained in exercise 1; we convert it into a numpy object:", "pop = population[[\"hommes\",\"femmes\"]].as_matrix()\npop = numpy.vstack( [pop, numpy.zeros((m.shape[0] - pop.shape[0],2))])\npop.shape", "Then we compute the population in 2016:", "pop_next = pop * (1-m)\npop_next = numpy.vstack([numpy.zeros((1,2)), pop_next[:-1]])\npop_next[:5]\n\npop[:5]\n\nfrom actuariat_python.plots import plot_population_pyramid\nplot_population_pyramid(pop_next[:,0], pop_next[:,1])", "Exercise 4: simulation until 2100\nThe idea is to repeat the iteration performed in the previous exercise. The simplest approach is to copy the code into a function and call it many times.", "def iteration(pop, mortalite):\n    pop_next = pop * (1-mortalite)\n    pop_next = numpy.vstack([numpy.zeros((1,2)), pop_next[:-1]]) # no births\n    return pop_next\n\npopt = pop\nfor year in range(2016, 2051):\n    popt = iteration(popt, mortalite)\n    \nplot_population_pyramid(popt[:,0], popt[:,1], title=\"Pyramide des âges en 2050\")", "Exercise 5: simulation with births\nIn the previous exercise, the second line of the iteration function corresponds to the case where there are no births. We want to replace this line with something closer to reality:\n\n- births are computed from the female population and the fertility table\n- we keep the same male/female ratio as the one currently observed", "ratio = pop[0,0] / (pop[0,1] + pop[0,0])\nratio", "Slightly more boys than girls are born each year.", "from actuariat_python.data import fecondite_france\ndf=fecondite_france()\ndf.head()\n\nfrom matplotlib import pyplot as plt\ndf.plot(x=\"age\", y=[\"2004\",\"2014\"])", "We convert these data into a 120-row numpy matrix like the previous ones. We use the fillna and merge methods.", "ages = pandas.DataFrame(dict(age=range(0,120)))\nmerge = ages.merge(df, left_on=\"age\", right_on=\"age\", how=\"outer\")\nfecondite = merge.fillna(0.0)\nfecondite[13:17]\n\nmat_fec = fecondite[[\"2014\"]].as_matrix() / 1000 # the figures are per 1000 women\nmat_fec.shape", "We now need to write a function that computes the births for the following year.", "def naissances(pop, fec):\n    # we assume that pop is a matrix with two columns (male, female)\n    # and that fec is a matrix with one fertility column\n    n = pop[:,1] * fec[:,0]\n    return n.sum()\n\nnais = naissances(pop, mat_fec)\nnais", "And we take up the iteration function and the code from the previous exercise again:", "def iteration(pop, mortalite, fec, ratio):\n    pop_next = pop * (1-mortalite)\n    nais = naissances(pop, fec)\n    row = numpy.array([[nais*ratio, nais*(1-ratio)]])\n    pop_next = numpy.vstack([row, pop_next[:-1]]) # prepend the newborns row\n    return pop_next\n\npopt = pop\nfor year in range(2016, 2051):\n    popt = iteration(popt, m, mat_fec, ratio)\n    \nplot_population_pyramid(popt[:,0], popt[:,1], title=\"Pyramide des âges en 2050\")", "We go further and store the population totals in a vector:", "total = [[2015, pop[:,0].sum(),pop[:,1].sum()]]\npopt = pop\nfor year in range(2016, 2101):\n    popt = iteration(popt, m, mat_fec, ratio)\n    total.append([year, popt[:,0].sum(),popt[:,1].sum()])\n    \nplot_population_pyramid(popt[:,0], popt[:,1], title=\"Pyramide des âges en 2101\")\n\ndf = pandas.DataFrame(data=total, columns=[\"année\",\"hommes\",\"femmes\"])\ndf.plot(x=\"année\", y=[\"hommes\", \"femmes\"], title=\"projection population française\")", "The following code combines the two plots on the same row with the subplots function:", "from matplotlib import pyplot as plt\nfig, ax = plt.subplots(1,2,figsize=(14,6))\nplot_population_pyramid(popt[:,0], popt[:,1], title=\"Pyramide des âges en 2050\", ax=ax[0])\ndf.plot(x=\"année\", y=[\"hommes\", \"femmes\"], title=\"projection population française\", ax=ax[1])", "Another representation of a pyramid with the pygal module\nThe following code uses the pygal module and the notebook Interactive plots in IPython notebook using pygal, or this one: pygal.ipynb.", "from actuariat_python.data import population_france_2015\npopulation = population_france_2015()\npopulation.head(n=3)\n\nfrom IPython.display import SVG\nimport pygal\npyramid_chart = pygal.Pyramid(human_readable=True, legend_at_bottom=True)\npyramid_chart.title = 'Population française en 2015'\npyramid_chart.x_labels = map(lambda x: str(x) if not x % 5 else '', population[\"age\"])\npyramid_chart.add(\"hommes\", population[\"hommes\"])\npyramid_chart.add(\"femmes\", population[\"femmes\"])\nSVG(pyramid_chart.render())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
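The yearly update used throughout the population notebook above — survivors move up one age class, newborns fill age 0 — can be sketched in isolation (the function name `advance_one_year` is an illustration, not part of actuariat_python):

```python
import numpy as np

def advance_one_year(pop, mortality, births):
    """One cohort step: pop[a+1, next year] = pop[a] * (1 - m[a]), newborns at age 0."""
    survivors = pop * (1.0 - mortality)
    return np.concatenate(([births], survivors[:-1]))

pop = np.array([100.0, 50.0])
print(advance_one_year(pop, np.array([0.1, 0.2]), births=10.0))  # → [10. 90.]
```

Note that the oldest age class is dropped at each step, which is why the notebook pads its arrays up to 120 years.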
OceanPARCELS/parcels
parcels/examples/tutorial_particle_field_interaction.ipynb
mit
[ "Particle-Field interaction example\nThis notebook illustrates a simple way to make particles interact with a Field object and modify it. The Field will thus change at each step of the simulation, and will be written using the same time resolution as the particle outputs, in as many netCDF files.\nThe concept is similar to that of Field sampling: here instead, on top of reading the field value at their location, particles are able to alter it as defined in the Kernel. To do this, it is important to keep in mind that:\n\nParticles have to be defined as ScipyParticles\nField writing at each outputdt is not default and has to be enabled\nThe time of the Field to be saved has to be updated within a Kernel\n\nIn this example, particles will carry a tracer and release it into a clean Field during their advection by surface currents. To show how can particles interact with a Field and alter it, the exchange of such tracer is modelled here with a discretized version of the mass transfer equation, defined as follows:\n\\begin{equation}\n\\Delta c_{particle}(t) = aC_{field}(t-1) - bc_{particle}(t-1)\n\\tag{1}\n\\label{1}\n\\end{equation}\nIn Eq.1, $c_{particle}$ is the tracer concentration associated with the particle, $C_{field}$ is the tracer concentration in seawater at particle location, and $a$ and $b$ are weights that modulate the sorption of tracer from seawater and its desorption, respectively. \nAdditionally to a relevant Kernel, we will define a suitable particle class to store $c_{particle}$, as it is needed to solve Eq.1. \nParticle altering a Field during advection", "%matplotlib inline\nfrom parcels import Variable, Field, FieldSet, ParticleSet, ScipyParticle, AdvectionRK4, plotTrajectoriesFile\n\nimport numpy as np\nfrom datetime import timedelta as delta\nimport netCDF4\nimport matplotlib.pyplot as plt", "In this specific example, particles will be advected by surface ocean velocities stored in netCDF files in the folder GlobCurrent_example_data. 
We will store these in a FieldSet object, and then add a Field to it to represent the tracer field. This latter field will be initialized with zeroes, as we assume that this tracer is absent on the ocean surface and released by particles only. Note that, in order to conserve mass, it is important to set interp_method='nearest' for the tracer Field.\nAs we are interested in storing this new field during the simulation, we will set its .to_write method to True. Note that this only works for Fields that consist of only one snapshot in time.", "# Velocity fields\nfname = r'GlobCurrent_example_data/*.nc'\nfilenames = {'U': fname, 'V': fname}\nvariables = {'U': 'eastward_eulerian_current_velocity', 'V': 'northward_eulerian_current_velocity'}\ndimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},\n 'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},\n } \nfieldset = FieldSet.from_netcdf(filenames, variables, dimensions)\n\n# In order to assign the same grid to the tracer field, it is convenient to load a single velocity file\nfname1 = r'GlobCurrent_example_data/20030101000000-GLOBCURRENT-L4-CUReul_hs-ALT_SUM-v02.0-fv01.0.nc'\nfilenames1 = {'U': fname1, 'V': fname1}\n\nfield_for_size = FieldSet.from_netcdf(filenames1, variables, dimensions) # this field has the same variables and dimensions as the other velocity fields\n\n# Adding the tracer field to the FieldSet\ndimsC = [len(field_for_size.U.lat),len(field_for_size.U.lon)] # it has to have the same dimentions as the velocity fields\ndataC = np.zeros([dimsC[0],dimsC[1]])\n\nfieldC = Field('C', dataC, grid=field_for_size.U.grid, interp_method='nearest') # the new Field will be called C, for tracer Concentration. 
For mass conservation, interp_method='nearest'\n\nfieldset.add_field(fieldC) # C field added to the velocity FieldSet\n\nfieldset.C.to_write = True # enabling the writing of Field C during execution \n\nfieldset.C.show() # our new C field has been added to the FieldSet", "Some global parameters have to be defined, such as $a$ and $b$ of Eq.1, and a weight that works as a conversion factor from $\\Delta c_{particle}$ to $C_{field}$. \nWe will add these parameters to the FieldSet.", "fieldset.add_constant('a', 10) \nfieldset.add_constant('b', .2) \nfieldset.add_constant('weight', .01) ", "We will now define a new particle class. A VectorParticle is a ScipyParticle having a Variable to store the current tracer concentration c associated with it. As in this case we want our particles to release a tracer into a clean field, we will initialize c with an arbitrary value of 100.\nWe also need to define the Kernel that performs the particle-field interaction. In this Kernel, we will implement Eq.1, so that $\\Delta c_{particle}$ can be used to update $c_{particle}$ and $C_{field}$ at the particle location, and thus get their values at the current time $t$.\nAdditionally, the time of fieldset.C is updated within the Interaction Kernel, which is an important step to properly write its value.", "class VectorParticle(ScipyParticle):\n c = Variable('c', dtype=np.float32, initial=100.) 
# particle concentration c is initialized with a non-zero value\n\ndef Interaction(particle, fieldset, time):\n deltaC = (fieldset.a*fieldset.C[particle]-fieldset.b*particle.c) # the exchange is obtained as a discretized mass transfer equation\n \n xi, yi = particle.xi[fieldset.C.igrid], particle.yi[fieldset.C.igrid], \n if abs(particle.lon - fieldset.C.grid.lon[xi+1]) < abs(particle.lon - fieldset.C.grid.lon[xi]):\n xi += 1\n if abs(particle.lat - fieldset.C.grid.lat[yi+1]) < abs(particle.lat - fieldset.C.grid.lat[yi]):\n yi += 1\n \n particle.c += deltaC\n fieldset.C.data[0, yi, xi] += -deltaC*fieldset.weight # weight, defined as a constant for the FieldSet, acts here as a conversion factor between c_particle and C_field\n fieldset.C.grid.time[0] = time # updating Field C time\n\ndef WriteInitial(particle, fieldset, time): # will be used to store the initial conditions of fieldset.C\n fieldset.C.grid.time[0] = time\n\npset = ParticleSet(fieldset=fieldset, pclass=VectorParticle, lon=[24.5], lat=[-34.8]) # for simplicity, we'll track a single particle here", "Three things are worth noticing in the code above:\n- The use of fieldset.C[particle] \n- The computation of the relevant grid cell (xi, yi)\n- Writing $C_{field}$ through fieldset.C.data[0, yi, xi] \nBecause fieldset.C[particle] interpolates the $C_{field}$ value from the nearest grid cell to the particle, it is important to also write to that same grid cell. That is not completely trivial to do in Parcels, which is why lines 7-11 in the cell above calculate which xi and yi are closest to the particle longitude and latitude (this extends trivially to depth too). 
Our first guess is particle.xi[fieldset.C.igrid], the location in the particular grid, but it could be that xi+1 is closer, which is what the if-statements are for.\nThe new indices are then used in fieldset.C.data[0, yi, xi] to access the field value in the cell where the particle is found, that can be different from the result of the interpolation at the particle's coordinates. Note that here we need to use fieldset.C.data[0, yi, xi] both for calculating deltaC and for the consequent update of the cell value for consistency between the forcing (the seawater-particle gradient) and its effect (the exchange, and consequent alteration of the field).\nRemember that reading and writing Fields at particle location through particle indices is only possible for ScipyParticles (and returns an error if JITParticles are used).", "pset.show(field=fieldset.C) # Initial particle location and the tracer field C", "Now we are going to execute the advection of the particle and the simultaneous release of the tracer it carries. We will thus add the interactionKernel defined above to the built-in Kernel AdvectionRK4.\nBefore running the advection, we will execute the pset with the WriteInitial for dt=0: this will write the initial condition of fieldset.C to a netCDF file.\nWhile particle outputs will be written in a file named interaction.nc at every outputdt, the field will be automatically written in netCDF files named interaction_wxyzC.nc, with wxyz being the number of the output and C the FieldSet variable of our interest. 
Note that you can use tools like ncrcat (on linux/macOS) to combine these separate files into one large netCDF file after the simulation.", "output_file = pset.ParticleFile(name=r'interaction.nc', outputdt=delta(days=1))\n\npset.execute(WriteInitial, dt=0., output_file=output_file)\n\npset.execute(AdvectionRK4 + pset.Kernel(Interaction), # the particle will FIRST be transported by currents and THEN interact with the field\n dt=delta(days=1),\n runtime=delta(days=24), # we are going to track the particle and save its trajectory and tracer concentration for 24 days\n output_file=output_file)\n\noutput_file.close()", "We can see that $c_{particle}$ has been saved along with particle trajectory, as expected.", "pset_traj = netCDF4.Dataset(r'interaction.nc')\n\nprint(pset_traj['c'][:])\n\nplotTrajectoriesFile('interaction.nc');", "But what about fieldset.C? We can see that it has been accordingly modified during particle motion. Using fieldset.C we can access the field as it is at the end of the run, with no information about the previous time steps.", "c_results = fieldset.C.data[0,:,:].copy() # Copying the final field data in a new array\nc_results[[field_for_size.U.data==0][0][0]]= np.nan # using a mask for fieldset.C.data on land\nc_results[c_results==0] = np.nan # masking the field where its value is zero -- areas that have not been modified by the particle, for clearer plotting\n\ntry: # Works if Cartopy is installed\n import cartopy\n import cartopy.crs as ccrs\n extent = [10, 33, -37, -29]\n \n X = fieldset.U.lon\n Y = fieldset.U.lat\n\n plt.figure(figsize=(12, 6))\n ax = plt.axes(projection=ccrs.Mercator())\n ax.set_extent(extent)\n\n ax.add_feature(cartopy.feature.OCEAN, facecolor='lightgrey')\n ax.add_feature(cartopy.feature.LAND, edgecolor='black', facecolor='floralwhite')\n gl=ax.gridlines(xlocs = np.linspace(10,34,13) , ylocs=np.linspace(-29,-37,9),draw_labels=True)\n gl.right_labels = False\n gl.bottom_labels = False\n\n xx, yy = np.meshgrid(X,Y)\n\n 
results = ax.pcolormesh(xx,yy,(c_results),transform=ccrs.PlateCarree(),vmin=0,)\n cbar=plt.colorbar(mappable = results, ax=ax)\n cbar.ax.text(.8,.070,'$C_{field}$ concentration', rotation=270, fontsize=12)\n \nexcept:\n print('Please install the Cartopy package.')", "When looking at tracer concentrations, we see that $c_{particle}$ decreases along its trajectory (right to left), as it is releasing the tracer it carries. Accordingly, values of $C_{field}$ provided by particle interaction progressively reduce along the particle's route.\nNotice that the first particle-field interaction occurs at time $t = 1$ day, and namely after the execution of the first step of AdvectionRK4, as shown by the unaltered field value at the particle's starting location. \nIn order to let the particle interact before being advected, we would have to change the order in which the two Kernels are added together in pset.execute, i.e. pset.execute(interactionKernel + AdvectionRK4, ...). In this latter case, the interaction would not occur at the particle's final position instead.", "x_centers, y_centers = np.meshgrid(fieldset.U.lon-np.diff(fieldset.U.lon[:2])/2, fieldset.U.lat-np.diff(fieldset.U.lat[:2])/2)\n\nfig,ax = plt.subplots(1,1,figsize=(10,7),constrained_layout=True)\nax.set_facecolor('lightgrey') # For visual coherence with the plot above\n\nfieldplot=ax.pcolormesh(x_centers[-28:-17,22:41],y_centers[-28:-17,22:41],c_results[-28:-18,22:40], vmin=0, vmax=0.2,cmap='viridis') \n# Zoom on the area of interest\nfield_cbar = plt.colorbar(fieldplot,ax=ax)\nfield_cbar.ax.text(.6,.070,'$C_{field}$ concentration', rotation=270, fontsize=12)\n\nparticle = plt.scatter(pset_traj['lon'][:].data[0,:],pset_traj['lat'][:].data[0,:], c=pset_traj['c'][:].data[0,:],vmin=0, s=100, edgecolor='white') \nparticle_cbar = plt.colorbar(particle,ax=ax, location = 'top')\nparticle_cbar.ax.text(40,300,'$c_{particle}$ concentration', fontsize=12);", "Finally, to see the C field in time we have to load the .nc 
files produced during the run. In the following plots, particle location and field values are shown at each time step.", "fig, ax = plt.subplots(5,5, figsize=(30,20))\n\ndaycounter = 1\n\nfor i in range(len(ax)):\n for j in range(len(ax)):\n data = netCDF4.Dataset(r'interaction_00'+ '%02d' % daycounter+'C.nc')\n \n c_results = data['C'][0,0,:,:].data.copy() # copying the final field data in a new array\n c_results[[field_for_size.U.data==0][0][0]]= np.nan # using a mask for fieldset.C.data on land\n c_results[c_results==0] = np.nan # masking the field where its value is zero -- areas that have not been modified by the particle, for clearer plotting\n\n \n ax[i,j].set_facecolor('lightgrey') # For visual coherence with the plots above\n\n fieldplot=ax[i,j].pcolormesh(x_centers[-28:-17,22:41],y_centers[-28:-17,22:41],c_results[-28:-18,22:40], vmin=0, vmax=0.2,cmap='viridis') \n \n particle = ax[i,j].scatter(pset_traj['lon'][:].data[0,daycounter-1],pset_traj['lat'][:].data[0,daycounter-1], c=pset_traj['c'][:].data[0,daycounter-1],vmin=0, vmax=100, s=100, edgecolor='white') \n # plotting particle location at current time step -- daycounter-1 due to different indexing\n \n ax[i,j].set_title('Day '+ str(daycounter-1))\n \n daycounter +=1 # next day\n\nfig.subplots_adjust(right=0.8)\nfig.subplots_adjust(top=0.8)\n \ncbar_ax = fig.add_axes([0.82, 0.12, 0.03, 0.7])\nfig.colorbar(fieldplot, cax=cbar_ax)\ncbar_ax.tick_params(labelsize=18)\ncbar_ax.text(.4,.08,'$C_{field}$ concentration', fontsize=25, rotation=270)\n\ncbar_ax1 = fig.add_axes([0.1, .85, .7, 0.04])\nfig.colorbar(particle, cax=cbar_ax1, orientation = 'horizontal')\ncbar_ax1.tick_params(labelsize=18)\ncbar_ax1.text(42,170,'$c_{particle}$ concentration', fontsize=25);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Automating-GIS-processes/2017
source/codes/Lesson3-projections.ipynb
mit
[ "Projections: Converting from one projection to another\nA map projection is a systematic transformation of the latitudes and longitudes onto a plane surface. As map projections of GIS layers are fairly often defined differently (i.e. they do not match), it is a common procedure to redefine the map projections to be identical in both layers. It is important that the layers have the same projection as it makes it possible to analyze the spatial relationships between layers, such as conducting the Point in Polygon spatial query (which we will try next). \nChanging the coordinate reference system of a layer in Geopandas\nDefining a projection and changing it is easy in Geopandas. Let's continue working with our address points, and change the Coordinate Reference System (CRS) from WGS84 into a projection called ETRS GK-25 (EPSG:3879) which uses a Gauss-Krüger projection that is (sometimes) used in Finland. \n\nLet's first read the data from the Shapefile that we created previously", "import geopandas as gpd\n\n# Filepath to the addresses Shapefile\nfp = r\"/home/geo/addresses.shp\"\nfp = r\"D:\\KOODIT\\Opetus\\Automating-GIS-processes\\AutoGIS-Sphinx\\source\\data\\addresses.shp\"\n\n# Read data\ndata = gpd.read_file(fp)", "Let's check what is the current CRS of our layer", "data.crs", "Okay, so it is WGS84 (i.e. EPSG: 4326). \n\nLet's also check the values in our geometry column", "data['geometry'].head()", "Okay, so they indeed look like lat-lon values. \n\n\nLet's convert those geometries into ETRS GK-25 projection (EPSG: 3879). Changing the projection is really easy to do in Geopandas with the .to_crs() function. As an input for the function, you should define the column containing the geometries, i.e. geometry in this case, and an epsg value of the projection that you want to use. 
\n\n\nNote: it is also possible to pass the projection information as proj4 strings or dictionaries, see more here", "# Let's take a copy of our layer\ndata_proj = data.copy()\n\n# Reproject the geometries by replacing the values with projected ones\ndata_proj['geometry'] = data_proj['geometry'].to_crs(epsg=3879)", "Let's see how they look now", "data_proj['geometry'].head()", "And here we go, the numbers have changed! Now we have successfully changed the projection of our layer into a new one. \n\nLet's still compare the layers visually", "# Plot the WGS84 \ndata.plot()\n\n# Plot the one with ETRS GK-25 projection\ndata_proj.plot()", "Indeed, they look different and our re-projected one looks much better in Finland (not so stretched as in WGS84). \n\nNow we still need to change the CRS of our GeoDataFrame into EPSG 3879 as now we only modified the values of the geometry column. We can make use of fiona's from_epsg function.", "from fiona.crs import from_epsg\n\n# Determine the CRS of the GeoDataFrame\ndata_proj.crs = from_epsg(3879)\n\n# Let's see what we have\nprint(data_proj.crs)", "Notice: The above works for most EPSG codes but ETRS GK-25 projection is rather rare, so we still need to specify and make sure that the .prj file has the correct coordinate system information by passing a proj4 dictionary below into it (otherwise the .prj file would be empty):", "# Pass the coordinate information\ndata_proj.crs = {'y_0': 0, 'no_defs': True, 'x_0': 25500000, 'k': 1, 'lat_0': 0, 'units': 'm', 'lon_0': 25, 'ellps': 'GRS80', 'proj': 'tmerc'}\n\n# Check it\nprint(data_proj.crs)", "Finally, let's save our projected layer into a Shapefile so that we can use it later.", "# Output file path\noutfp = r\"/home/geo/addresses_epsg3879.shp\"\n\n# Save to disk\ndata_proj.to_file(outfp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mozilla-services/data-pipeline
reports/update-orphaning/Update orphaning analysis using longitudinal dataset.ipynb
mpl-2.0
[ "Update orphaning", "import datetime as dt\nimport urllib2\nimport ujson as json\nfrom os import environ\n\n%pylab inline", "Get the time when this job was started (for debugging purposes).", "starttime = dt.datetime.now()\nstarttime", "Declare the channel to look at.", "channel_to_process = \"release\"\n\nsc.defaultParallelism\n\n# Uncomment the next line and adjust |today_env_str| as necessary to run manually\n#today_env_str = \"20161105\"\ntoday_env_str = environ.get(\"date\", None)\n\nassert (today_env_str is not None), \"The date environment parameter is missing.\"\ntoday = dt.datetime.strptime(today_env_str, \"%Y%m%d\").date()\n\n# Find the date of last Wednesday to get the proper 7 day range\nlast_wednesday = today\ncurrent_weekday = today.weekday()\nif (current_weekday < 2):\n last_wednesday -= (dt.timedelta(days=5) + dt.timedelta(days=current_weekday))\nif (current_weekday > 2):\n last_wednesday -= (dt.timedelta(days=current_weekday) - dt.timedelta(days=2))\n\nmin_range = last_wednesday - dt.timedelta(days=17)\nreport_date_str = last_wednesday.strftime(\"%Y%m%d\")\nmin_range_str = min_range.strftime(\"%Y%m%d\")\nmin_range_dash_str = min_range.strftime(\"%Y-%m-%d\")\nlist([last_wednesday, min_range_str, report_date_str])", "The longitudinal dataset can be accessed as a Spark DataFrame, which is a distributed collection of data organized into named columns. 
It is conceptually equivalent to a table in a relational database or a data frame in R/Python.", "sql_str = \"SELECT * FROM longitudinal_v\" + today_env_str\nframe = sqlContext.sql(sql_str)\nsql_str", "Restrict the dataframe to the desired channel.", "channel_subset = frame.filter(frame.normalized_channel == channel_to_process)", "Restrict the dataframe to the desired data.", "data_subset = channel_subset.select(\"subsession_start_date\",\n \"subsession_length\",\n \"update_check_code_notify\",\n \"update_check_no_update_notify\",\n \"build.version\",\n \"settings.update.enabled\")", "Restrict the data to the proper 7 day range, starting at least 17 days before the creation date of the\nlongitudinal dataset.", "def start_date_filter(d):\n try:\n date = dt.datetime.strptime(d.subsession_start_date[0][:10], \"%Y-%m-%d\").date()\n return min_range <= date\n except ValueError:\n return False\n except TypeError:\n return False\n\ndate_filtered = data_subset.rdd.filter(start_date_filter)", "Analyze the data to determine the number of users on a current version of Firefox vs. a version that's out of date. A \"user on a current version\" is defined as being either on the latest version as of the beginning of the 7 day range, according to the firefox_history_major_releases.json file on product-details.mozilla.org, or the two versions just prior to it. 
Versions prior to FF 42 are ignored since unified telemetry was not turned on by default on earlier versions.", "def latest_version_on_date(date, major_releases):\n latest_date, latest_version = u\"1900-01-01\", 0\n for version, release_date in major_releases.iteritems():\n version_int = int(version.split(\".\")[0])\n if release_date <= date and release_date >= latest_date and version_int >= latest_version:\n latest_date = release_date\n latest_version = version_int\n \n return latest_version\n \nmajor_releases_json = urllib2.urlopen(\"https://product-details.mozilla.org/1.0/firefox_history_major_releases.json\").read()\nmajor_releases = json.loads(major_releases_json)\nlatest_version = latest_version_on_date(min_range_dash_str, major_releases)\n\ndef status_mapper(d):\n try:\n if d.version is None or d.version[0] is None:\n return (\"none-version\", d)\n curr_version = int(d.version[0].split(\".\")[0])\n if curr_version < 42:\n return (\"ignore-version-too-low\", d)\n if curr_version < latest_version - 2:\n # Check if the user ran a particular orphaned version of Firefox for at least 2 hours in\n # the last 12 weeks. An orphaned user is running a version of Firefox that's at least 3\n # versions behind the current version. This means that an update has been available for\n # at least 12 weeks. 
2 hours so most systems have had a chance to perform an update\n # check, download the update, and restart Firefox after the update has been downloaded.\n seconds = 0\n curr_version = d.version[0]\n index = 0\n twelve_weeks_ago = last_wednesday - dt.timedelta(weeks=12)\n while seconds < 7200 and index < len(d.version) and d.version[index] == curr_version:\n try:\n date = dt.datetime.strptime(d.subsession_start_date[index][:10], \"%Y-%m-%d\").date()\n if date < twelve_weeks_ago:\n return (\"out-of-date-not-run-long-enough\", d)\n seconds += d.subsession_length[index]\n index += 1\n except ValueError:\n index += 1\n except TypeError:\n index += 1\n if seconds >= 7200:\n return (\"out-of-date\", d)\n return (\"out-of-date-not-run-long-enough\", d)\n return (\"up-to-date\", d)\n except ValueError:\n return (\"value-error\", d)\n \nstatuses = date_filtered.map(status_mapper).cache()\nup_to_date_results = statuses.countByKey()\nup_to_date_json_results = json.dumps(up_to_date_results, ensure_ascii=False)\nup_to_date_json_results", "For people who are out-of-date, determine how many of them have updates disabled:", "out_of_date_statuses = statuses.filter(lambda p: \"out-of-date\" in p)\n\ndef update_disabled_mapper(d):\n status, ping = d\n if ping is None or ping.enabled is None or ping.enabled[0] is None:\n return (\"none-update-enabled\", ping)\n if ping.enabled[0] == True:\n return (\"update-enabled\", ping)\n return (\"update-disabled\", ping)\n \nupdate_enabled_disabled_statuses = out_of_date_statuses.map(update_disabled_mapper)\nupdate_enabled_disabled_results = update_enabled_disabled_statuses.countByKey()\nupdate_enabled_disabled_json_results = json.dumps(update_enabled_disabled_results, ensure_ascii=False)\nupdate_enabled_disabled_json_results", "Focus on orphaned users who have updates enabled.", "update_enabled_statuses = update_enabled_disabled_statuses.filter(lambda p: \"update-enabled\" in p).cache()", "For people who are out-of-date and have updates enabled, 
determine the distribution across Firefox versions.", "def version_mapper(d):\n status, ping = d\n return (ping.version[0], ping)\n \norphaned_by_versions = update_enabled_statuses.map(version_mapper)\norphaned_by_versions_results = orphaned_by_versions.countByKey()\norphaned_by_versions_json_results = json.dumps(orphaned_by_versions_results, ensure_ascii=False)\norphaned_by_versions_json_results", "For people who are out-of-date and have updates enabled, determine what the update check returns.", "def update_check_code_notify_mapper(d):\n status, ping = d\n if ping is None or ping.update_check_code_notify is None:\n return -1\n for check_code in ping.update_check_code_notify:\n counter = -1\n for i in check_code:\n counter += 1\n if i != 0:\n return counter\n if ping.update_check_no_update_notify is not None and ping.update_check_no_update_notify[0] > 0:\n return 0;\n return -1\n\nupdate_check_code_notify_statuses = update_enabled_statuses.map(update_check_code_notify_mapper)\nupdate_check_code_notify_results = update_check_code_notify_statuses.countByValue()\nupdate_check_code_notify_json_results = json.dumps(update_check_code_notify_results, ensure_ascii=False)\nupdate_check_code_notify_json_results", "Write results to JSON.", "latest_version_object = {\"latest-version\": latest_version}\nup_to_date_object = {\"up-to-date\": up_to_date_results}\nupdate_enabled_disabled_object = {\"update-enabled-disabled\": update_enabled_disabled_results}\nupdate_check_code_notify_object = {\"update-check-code-notify\": update_check_code_notify_results}\norphaned_by_versions_object = {\"orphaned-by-versions\": orphaned_by_versions_results}\n\nfinal_results = [up_to_date_object, update_enabled_disabled_object, update_check_code_notify_object, latest_version_object, orphaned_by_versions_object]\nfinal_results_json = json.dumps(final_results, ensure_ascii=False)\nfinal_results_json", "Finally, store the output in the local directory to be uploaded automatically once the job 
completes. The file will be stored at:\nhttps://analysis-output.telemetry.mozilla.org/SPARKJOBNAME/data/FILENAME", "filename = \"./output/\" + report_date_str + \".json\"\n\nwith open(filename, 'w') as f:\n f.write(final_results_json)\n\nfilename", "Get the time when this job ended (for debugging purposes):", "endtime = dt.datetime.now()\nendtime\n\ndifference = endtime - starttime\ndifference" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session05/Day4/WavelengthSolution.ipynb
mit
[ "import os\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib notebook \n%config InlineBackend.figure_format = 'retina'\n\n#%matplotlib qt\n#%gui qt\n\ndataDir = \"/Users/rhl/TeX/Talks/DSFP/2018-01/Exercises/Detectors\"", "Let's start by looking at some extracted spectra\nRead the data (and don't worry about the details of how this cell works!).\n\nThe reference lines (a dict arcLines indexed by fiberId containing numpy arrays):\npixelPos Measured centroid of an arc line (pixels)\npixelPosErr Error in pixelPos\nrefWavelength Nominal wavelength (from NIST)\nmodelFitWavelength Wavelength corresponding to pixelPos, based on the instrument model\nstatus bitwise OR of flags (see statusFlags)\nA dict statusFlags giving the meaning of the status bits\n\nwavelength0 and nmPerPix A couple of scalars giving an approximate wavelength solution\n\n\nThe arc spectra (a dict spectra indexed by fiberId containing numpy arrays):\n\nflux The flux level of each pixel\nfluxVar The error in flux\nmodelFitLambda The wavelength at each pixel according to my instrument model\nAn array pixels giving the pixel values associated with the wavelengths, fluxes, and fluxVars arrays", "dataDump = np.load(os.path.join(dataDir, \"spectro.npz\"))\n\npacked = dataDump[\"packed\"]\nstatusFlags = dataDump[\"statusFlags\"].reshape(1)[0]\nfiberIds = dataDump[\"fiberIds\"]\ndataFields = dataDump[\"dataFields\"]\nwavelength0 = dataDump[\"wavelength0\"]\nnmPerPix = dataDump[\"nmPerPix\"]\npixels = dataDump[\"pixels\"]\nfluxes = dataDump[\"fluxes\"]\nfluxVars = dataDump[\"fluxVars\"]\nmodelFitLambdas = dataDump[\"modelFitLambdas\"]\n\narcLines = {}\nspectra = {}\nfor i, fiberId in enumerate(fiberIds):\n arcLines[fiberId] = {}\n spectra[fiberId] = {}\n\n for j, n in enumerate(dataFields):\n arcLines[fiberId][n] = packed[i][j]\n arcLines[fiberId]['status'] = arcLines[fiberId]['status'].astype('int')\n\n spectra[fiberId][\"modelFitLambda\"] = modelFitLambdas[i]\n spectra[fiberId][\"flux\"] = 
fluxes[i]\n spectra[fiberId][\"fluxVar\"] = fluxVars[i]\n \ndel packed; del dataFields\ndel modelFitLambdas; del fluxes; del fluxVars", "Plot the spectrum. Do the lines from the different fibres line up?", "plt.clf()\nfor fiberId in fiberIds:\n\n# ...", "I think that the answer is, \"No\". That's why we need a wavelength solution!\nFortunately I have measured the positions of the brighter arclines (the dict arcLines). Choose a fibre and plot the measured lines on top of the spectrum. You should find that I did a good job with the centroiding.\nWe have good pixel centroids and we know the true wavelengths, so we can measure our wavelength solution.\nLet's concentrate on just one fiber for now; choose a fibre, any fibre. \nPlot some of the fitted lines. A good place to start would be the pixel position (pixelPos) and the reference wavelength (refWavelength) for a fibre, then use wavelength0 and nmPerPix to construct an approximate (linear) wavelength solution and look at the residuals from the linear fit.", "fiberId = 5\nassert fiberId in arcLines, \"Unknown fiberId: %d\" % fiberId\npixelPos = arcLines[fiberId]['pixelPos']\nrefWavelength = arcLines[fiberId]['refWavelength']\n\nwavelength = wavelength0 + nmPerPix*pixelPos\n\n#...", "Take a look at the statusFlags and the values of status from your fibre. You probably want to ignore some of the data. 
The FIT bit is good; the SATURATED, INTERPOLATED, and CR bits are bad (and the other bits aren't set in this dataset).\nRemake your residual plot with the bad lines removed; is that better?", "fiberId = 5\nassert fiberId in arcLines, \"Unknown fiberId: %d\" % fiberId\n \npixelPos = arcLines[fiberId]['pixelPos']\nrefWavelength = arcLines[fiberId]['refWavelength']\nstatus = arcLines[fiberId][\"status\"]\n\nwavelength = wavelength0 + nmPerPix*pixelPos\n\ngood = np.logical_and.reduce([(status & statusFlags[\"FIT\"]) != 0,\n status & (statusFlags[\"INTERPOLATED\"]) == 0])\n\nplt.plot(pixelPos[good], (refWavelength - wavelength)[good], 'o', label=str(fiberId))\n\nplt.legend()\nplt.xlabel(\"pixels\")\nplt.ylabel(r\"$\\lambda_{ref} - \\lambda_{linear}$\");", "Here's the plot for all the fibres:", "plt.clf()\nfor fiberId, fiberData in arcLines.items():\n pixelPos = fiberData[\"pixelPos\"]\n pixelPosErr = fiberData[\"pixelPosErr\"]\n refWavelength = fiberData[\"refWavelength\"]\n modelFitWavelength = fiberData[\"modelFitWavelength\"]\n status = fiberData[\"status\"]\n \n wavelength = wavelength0 + nmPerPix*(pixelPos)\n\n good = np.logical_and.reduce([(status & statusFlags[\"FIT\"]) != 0,\n status & (statusFlags[\"INTERPOLATED\"]) == 0])\n \n plt.plot(pixelPos[good], (refWavelength - wavelength)[good], 'o', label=str(fiberId))\n \nplt.xlabel(\"pixels\")\nplt.ylabel(r\"$\\lambda_{ref} - \\lambda_{linear}$\");\nplt.legend(loc=9, bbox_to_anchor=(1.0, 1.1));", "Fit a curve to those residuals (it's generally better to fit to residuals, especially when you have a more informative model than our current linear one).\nUse Chebyshev polynomials; I recommend using np.polynomial.chebyshev.Chebyshev.fit. You should set the domain of the fit so that it'll be usable over all the CCD's rows.\nExperiment with a range of order of fitter, and look at the rms error in the wavelength solution. 
You can look at $\\chi^2/\\nu$ too, if you like, but I think you'll find that the centroiding errors are wrong.\nYou probably want to look at the fit (to the residuals!) and at the residuals from the fit.", "import numpy.polynomial.chebyshev\nmyFiberId = 315\n\npixelPosErrorFloor = 1 # 1e-4\n\nplt.clf()\nplotResiduals = True\nuseLinearApproximation = False\n\nfitOrder = 5 if useLinearApproximation else 2\n\nfor fiberId in arcLines:\n if False and fiberId != myFiberId:\n continue\n\n fiberData = arcLines[fiberId]\n #\n # Unpack data\n #\n pixelPos = fiberData[\"pixelPos\"]\n pixelPosErr = fiberData[\"pixelPosErr\"]\n refWavelength = fiberData[\"refWavelength\"]\n modelFitWavelength = fiberData[\"modelFitWavelength\"]\n status = fiberData[\"status\"]\n \n good = np.logical_and.reduce([(status & statusFlags[\"FIT\"]) != 0,\n status & (statusFlags[\"SATURATED\"]) == 0])\n #\n # OK, on to work\n #\n linearWavelength = wavelength0 + nmPerPix*(pixelPos)\n if useLinearApproximation:\n wavelength = linearWavelength\n else:\n wavelength = modelFitWavelength\n \n nominalPixelPos = (refWavelength - wavelength0)/nmPerPix\n fitWavelengthErr = pixelPosErr*nmPerPix\n\n x = nominalPixelPos[good]\n y = (refWavelength - wavelength)[good]\n yerr = np.hypot(fitWavelengthErr, pixelPosErrorFloor*nmPerPix)[good]\n \n used = np.ones_like(good[good])\n\n wavelengthCorr = np.polynomial.chebyshev.Chebyshev.fit(\n x[used], y[used], fitOrder, domain=[pixels[0], pixels[-1]], w=1/yerr[used])\n arcLines[fiberId]['wavelengthCorr'] = wavelengthCorr\n yfit = wavelengthCorr(x)\n\n if plotResiduals:\n plt.plot(x, y - yfit, 'o', label=str(fiberId))\n plt.axhline(0, ls=':', color='black')\n else:\n ax = plt.plot(x, y, 'o', label=str(fiberId))[0]\n plt.plot(pixels, wavelengthCorr(pixels), color=ax.get_color())\n \n print(\"%-3d rms = %.2f mpix\" % \n (fiberId, 1e3*np.sqrt(np.sum((y - yfit)**2)/(len(y) - fitOrder))))\n \nplt.legend(loc=9, bbox_to_anchor=(1.0, 1.1));", "Use your wavelength solution to plot 
the arc spectra against wavelength (you started out by plotting against pixels).\nDo the Ne lines for the different fibres agree now?", "plt.clf()\nfor fiberId in fiberIds:\n xlab = r\"$\\lambda$ (nm)\"\n if useLinearApproximation:\n x = wavelength0 + nmPerPix*(pixels)\n else:\n x = spectra[fiberId][\"modelFitLambda\"]\n\n x += arcLines[fiberId]['wavelengthCorr'](pixels)\n\n plt.plot(x, spectra[fiberId][\"flux\"], label=str(fiberId))\n \nplt.legend()\nplt.xlabel(xlab)\n\nif True:\n plt.xlim(625, 675)\n plt.ylim(-10000, 250000)", "Now repeat the preceeding exercise using the model of the spectrograph (i.e. spectra[fiberId][\"modelFitLambda\"] not your linear approximation). What order of polynomial is needed now?\nZoom in on a single strong (but not saturated) line and compare the solution derived from the linear wavelength model to that from the model based on knowledge of the instrument.\n\nIs that rms error honest, or are we overfitting? Modify your code to hold back some number of arclines from the fit and measure the rms only of those ones.\n\nIf this was all too easy:\nI was nice and gave you clean (but real) data. In the real world you'd probably want to do an n-sigma clip on the residuals and iterate. Implement this." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kit-cel/wt
nt2_ce2/vorlesung/ch_5_synchronization/phase_data_aided.ipynb
gpl-2.0
[ "Description\n\nSimulation of data-aided phase synchronization \nQPSK symbols are sampled, pulse shaped and transmitted\nUniformly distributed phase distortion is added, before the phase is estimated using the algorithm discussed in the lecture\n\nImport", "# importing\nimport sys\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 20}\nplt.rc('font', **font)\nplt.rc('text', usetex=True)\n\nmatplotlib.rc('figure', figsize=(28, 8) )", "Initialization\nParameters", "# number of symbols per sequence/packet\nn_symb = 32\n\n\n# constellation points for modulation scheme\nconstellation = np.array( [ 1+1j, -1+1j, -1-1j, +1-1j ] ) / np.sqrt(2)\n\n\n# snr range for simulation\nEsN0_dB_min = -15 \nEsN0_dB_max = 30\nEsN0_dB_step = 5\nEsN0_dB = np.arange( EsN0_dB_min, EsN0_dB_max + EsN0_dB_step, EsN0_dB_step)\n\n# parameters of the filter\nbeta = 0.5\nn_sps = 4 # samples per symbol\nsyms_per_filt = 4 # symbols per filter (plus minus in both directions)\n\nK_filt = 2 * syms_per_filt * n_sps + 1 # length of the fir filter\n\n\n# set symbol time and sample time\nt_symb = 1.0 \nt_sample = t_symb / n_sps", "Function for Getting RRC Impulse Response", "########################\n# find impulse response of an RRC filter\n########################\ndef get_rrc_ir(K, n_up, t_symbol, beta):\n \n ''' \n Determines coefficients of an RRC filter \n \n Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 
15\n \n NOTE: Length of the IR has to be an odd number\n \n IN: length of IR, upsampling factor, symbol time, roll-off factor\n OUT: filter coefficients\n '''\n \n K = int(K) \n \n if ( K%2 == 0):\n raise ValueError('Length of the impulse response has to be an odd number')\n\n # initialize np.array \n rrc = np.zeros( K )\n \n # find sample time and initialize index vector\n t_sample = t_symbol / n_up\n time_ind = np.linspace( -(K-1)/2, (K-1)/2, K)\n \n # assign values of rrc\n # if roll-off factor equals 0 use rc\n if beta != 0:\n \n # loop for times and assign according values\n for t_i in time_ind:\n t = (t_i)* t_sample \n \n if t_i==0:\n rrc[ int( t_i+(K-1)/2 ) ] = (1-beta+4*beta/np.pi)\n \n elif np.abs(t)==t_symbol/(4*beta):\n # apply l'Hospital \n rrc[ int( t_i+(K-1)/2 ) ] = beta*np.sin(np.pi/(4*beta)*(1+beta)) - 2*beta/np.pi*np.cos(np.pi/(4*beta)*(1+beta)) \n \n else:\n rrc[ int( t_i+(K-1)/2 ) ] = (4*beta*t/t_symbol * np.cos(np.pi*(1+beta)*t/t_symbol) + np.sin(np.pi*(1-beta)*t/t_symbol) ) / (np.pi*t/t_symbol*(1-(4*beta*t/t_symbol)**2) )\n \n rrc = rrc / np.sqrt(t_symbol)\n \n\n else:\n for t_i in time_ind:\n t = t_i * t_sample\n \n if np.abs(t)<t_sample/20:\n rrc[ int( t_i + (K-1)/2 ) ] = 1\n \n else:\n rrc[ int( t_i + (K-1)/2 ) ] = np.sin(np.pi*t/t_symbol)/(np.pi*t/t_symbol)\n \n return rrc ", "Simulation\nGet Tx signal", "# find rrc response and normalize to energy 1\nrrc = get_rrc_ir( K_filt, n_sps, t_symb, beta)\nrrc = rrc / np.linalg.norm( rrc )\n\n\n# generate random data symbols and map them to the specified modulation scheme\ndata = np.random.randint( 4, size=n_symb )\ns = [ constellation[ d ] for d in data ]\n\n \n# prepare sequence to be filtered by upsampling \ns_up = np.zeros( n_symb * n_sps, dtype = complex ) \ns_up[ : : n_sps ] = s\n\n# apply RRC filtering \ns_Tx = np.convolve( rrc, s_up )\n\n# vector of time samples for Tx signal\nt_Tx = np.arange( len(s_Tx) ) * t_sample\nt_symbol = np.arange( n_symb ) * t_symb", "Visualize Tx signal", 
"plt.subplot(121)\nplt.plot( np.real( s_Tx ), label='Inphase' )\nplt.plot( np.imag( s_Tx ), label='Quadrature' )\n\nplt.xlabel('$t$')\nplt.ylabel('$s(t)$')\nplt.grid(True)\nplt.legend( loc='upper right' )\n\nplt.subplot(122)\nplt.psd( s_Tx, Fs=1/t_sample);", "Parameters of estimation", "# vector for storing variances of estimation\nvar_delta_phase = np.zeros( len(EsN0_dB) )\n\n# number of trials per simulation point\nN_trials_phase = int( 1e3 )\n\n\n# delta phi max (taken in both directions, i.e. [-delta_phi_max, delta_phi_max])\ndelta_phi_max = -np.pi/8", "Loop for SNRs and perform simulation", "# loop for SNRs\nfor ind_esn0, esn0 in enumerate(EsN0_dB):\n\n # determine variance of the noise\n sigma2 = 10**( -esn0/10 )\n \n # initialize error vector\n delta_phase = np.zeros( N_trials_phase )\n\n # loop for trials with different f_off\n for n in range( N_trials_phase ):\n \n # apply phase offset\n phi_off = np.random.uniform( -delta_phi_max, delta_phi_max )\n \n s_Rx = np.exp( 1j*phi_off ) * s_Tx \n \n # add noise\n noise = np.sqrt(sigma2/2) * ( np.random.randn(len(s_Rx)) + 1j * np.random.randn(len(s_Rx)) )\n \n # apply noise and insert asynchronity with respect to time\n delta_tau = 0 #np.random.randint( 0, n_up ) \n r = s_Rx + noise\n \n # signal after MF\n y_mf = np.convolve( rrc, r )\n \n # down-sampling to symbol time\n y_down = y_mf[ K_filt-1 : K_filt-1 + len(s) * n_sps : n_sps ]\n \n # determine phase estimation according to the approximated ML rule out of [Mengali]\n corr_out = np.sum( np.conjugate( s ) * y_down )\n phi_est = np.angle( corr_out ) \n \n # determining deviation\n delta_phase[ n ] = phi_est - phi_off \n \n # find mean and mse of estimation\n var_delta_phase[ ind_esn0 ] = np.var( delta_phase )\n \n \n # show progress\n print('SNR: {}'.format( esn0 ) )", "Show Results", "# determine modified Cramer Rao bound according to (5.2.12) in [Mengali]\nmcrb = 1 / ( 2 * n_symb * 10**(EsN0_dB/10) )\n\n\n# plot phase error\nplt.figure()\nplt.plot( EsN0_dB, 
var_delta_phase, 'o-', label='MSE', linewidth=2.0 )\nplt.plot( EsN0_dB, mcrb, label='MCRB', linewidth=2.0 )\nplt.grid(True)\nplt.legend(loc='upper right')\nplt.xlabel('$E_s/N_0 \\; (dB)$')\nplt.ylabel('$E( (\\hat{\\phi}-\\phi_{off})^2)$')\nplt.semilogy()", "Correct phase offset and show symbols", "# correct frequency deviation and resample \ny_mf_corrected = y_mf * np.exp( -1j * phi_est )\n\ny_down_corrected = y_mf_corrected[ K_filt-1 : K_filt-1 + len(s) * n_sps : n_sps ]\n\n# show symbols\nmarkerline, stemlines, baseline = plt.stem( np.arange(len(s)), np.real(s), use_line_collection='True', label='syms Tx')\nplt.setp(markerline, 'markersize', 8, 'markerfacecolor', 'b')\n\nmarkerline, stemlines, baseline = plt.stem( np.arange(len(y_down)), np.real(y_down), use_line_collection='True', label='syms Rx')\nplt.setp(markerline, 'markersize', 12, 'markerfacecolor', 'r',)\n\nmarkerline, stemlines, baseline = plt.stem( np.arange(len(y_down_corrected)), np.real(y_down_corrected), use_line_collection='True', label='$y_{corr}(t)$')\nplt.setp(markerline, 'markersize', 12, 'markerfacecolor', 'g',)\n\nplt.title('Symbols in Tx and Rx')\nplt.legend(loc='upper right')\nplt.grid(True)\nplt.xlabel('$n$')\nplt.ylabel('Re$\\{I_n\\}$')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
prodicus/dabble
nltk/naive_bayes.ipynb
mit
[ "import nltk\nimport random\nfrom nltk.corpus import movie_reviews\nimport pprint\nfrom nltk.corpus import stopwords\nstop_words = stopwords.words(\"english\")\nimport pickle", "There are a thousand movie reviews for both\n\npositive and\nnegetive\n\nreviews", "movie_reviews.categories()", "Now I need to store it as\npython\ndocuments = [\n ('pos', ['good', 'awesome', ....]), \n ('neg', ['ridiculous', 'horrible', ...])\n]\nOR \nStoring it in a dictionary would also be a better idea, will try out with both\npython\ndocuments = {\n 'pos': ['good', 'awesome', ....],\n 'neg': ['ridiculous', 'horrible', ...]\n}", "documents = [(list(movie_reviews.words(fileid)), category)\n for category in movie_reviews.categories()\n for fileid in movie_reviews.fileids(category)\n ]\nrandom.shuffle(documents)", "Other way to do it would be the normal way instead of this one liner", "document_dict = {\n 'pos': [],\n 'neg': []\n}\nfor category in movie_reviews.categories():\n for fileid in movie_reviews.fileids(category):\n # this will store the list of words read from the particular file in fileid\n raw_list = movie_reviews.words(fileid)\n # cleaning the list using stopwords\n word_list = [word for word in raw_list if word not in stop_words]\n if category == 'pos':\n document_dict['pos'].extend(word_list)\n elif category == 'neg':\n document_dict['neg'].extend(word_list)\n", "Cleaning it up using the stopwords", "print(len(document_dict['pos']))\nprint(len(document_dict['neg']))", "Getting the list of all words to store the most frequently occuring ones", "all_words = []\nfor w in movie_reviews.words():\n all_words.append(w.lower())", "Making a frequency distribution of the words", "all_words = nltk.FreqDist(all_words)\nall_words.most_common(20)\n\nall_words[\"hate\"] ## counting the occurences of a single word", "will train only for the first 5000 top words in the list", "feature_words = list(all_words.keys())[:5000]", "Finding these feature words in documents, making our function 
would ease it out!", "def find_features(document):\n words = set(document)\n feature = {}\n for w in feature_words:\n feature[w] = (w in words)\n return feature", "Earlier we only had the words of each review and its category. Now, for each review, we have a feature set (a boolean for every frequent word indicating whether it appears in the review) together with the category.", "feature_sets = [(find_features(rev), category) for (rev, category) in documents]\n\nprint(feature_sets[:1])", "Training the classifier", "training_set = feature_sets[:1900]\ntesting_set = feature_sets[1900:]", "We won't be telling the machine the category, i.e. whether the document is a positive one or a negative one. We ask it to tell that to us. Then we compare its answer to the known category and calculate how accurate it is.\nNaive bayes algorithm\nIt states that\n\\begin{equation}\nP(\\mathrm{class} \\mid \\mathrm{features}) = \\frac{P(\\mathrm{class}) \\times P(\\mathrm{features} \\mid \\mathrm{class})}{P(\\mathrm{features})}\n\\end{equation}\nHere the posterior $P(\\mathrm{class} \\mid \\mathrm{features})$ is the probability of the class given the observed features.", "## TO-DO: To build own naive bayes algorithm\nclassifier = nltk.NaiveBayesClassifier.train(training_set)\n\n## Testing its accuracy\nprint(\"Naive bayes classifier accuracy percentage : \", (nltk.classify.accuracy(classifier, testing_set))*100)\n\nclassifier.show_most_informative_features(20)", "To read the feature list above, let's take abysmal:\n\nneg : pos = 6.3 : 1.0\n\nmeans that it appears 6.3 times more often in neg reviews than in pos reviews\nSaving the trained algorithm using Pickle\nWe will be saving Python objects so that we can quickly load them again.\nImporting pickle at the top", "save_classifier = open(\"naivebayes.pickle\", \"wb\") ## 'wb' means write in binary mode\npickle.dump(classifier, save_classifier)\nsave_classifier.close()", "We will now use this classifier in the next file to classify documents" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ankdesh/Udacity-MachineLearning-Nanodegree
projects/titanic_survival_exploration/titanic_survival_exploration.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject 0: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. 
Markdown allows you to write easy-to-read plain text that can be converted to HTML.", "import numpy as np\nimport pandas as pd\n\n# RMS Titanic data visualization code \nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code cell below to remove Survived as a feature of the dataset and store it in outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. 
That means for any passenger data.loc[i], they have the survival outcome outcome[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. 
This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.\nThe predictions_0 function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)", "Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\n\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.", "survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. 
For example, passenger['Sex'] is the sex of the passenger.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)", "Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\n\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. 
Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)", "Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\n\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to examine various survival statistics.\nHint: To use multiple filter conditions, put each condition in the list passed as the last argument. 
Example: [\"Sex == 'male'\", \"Age &lt; 18\"]", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Age < 18\"])", "After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.", "def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with your answer to the question above.\nConclusion\nAfter several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the decision tree. A decision tree splits a set of data into smaller and smaller groups (called nodes), by one feature at a time. 
Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups are more homogeneous (contain similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction into machine learning using a decision tree.\nA decision tree is just one of many models that come from supervised learning. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like 'Survived', or a numerical, continuous value like predicting the price of a house.\nQuestion 5\nThink of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions. \nAnswer: Replace this text with your answer to the question above.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jaduimstra/nilmtk
docs/manual/user_guide/loading_data_into_memory.ipynb
apache-2.0
[ "Loading data into memory\nLoading API is central to a lot of nilmtk operations and provides a great deal of flexibility. Let's look at ways in which we can load data from a NILMTK DataStore into memory. To see the full range of possible queries, we'll use the iAWE data set (whose HDF5 file can be downloaded here).\nThe load function returns a generator of DataFrames loaded from the DataStore based on the conditions specified. If no conditions are specified, then all data from all the columns is loaded. (If you have not come across Python generators, it might be worth reading this quick guide to Python generators.)", "from nilmtk import DataSet\n\niawe = DataSet('/data/iawe/iawe.h5')\nelec = iawe.buildings[1].elec\nelec", "Let us see what measurements we have for the fridge:", "fridge = elec['fridge']\nfridge.available_columns()", "Loading data\nLoad all columns (default)", "df = fridge.load().next()\ndf.head()", "Load a single column of power data\nUse fridge.power_series() which returns a generator of 1-dimensional pandas.Series objects, each containing power data using the most 'sensible' AC type:", "series = fridge.power_series().next()\nseries.head()", "or, to get reactive power:", "series = fridge.power_series(ac_type='reactive').next()\nseries.head()", "Specify physical_quantity or AC type", "df = fridge.load(physical_quantity='power', ac_type='reactive').next()\ndf.head()", "To load voltage data:", "df = fridge.load(physical_quantity='voltage').next()\ndf.head()\n\ndf = fridge.load(physical_quantity = 'power').next()\ndf.head()", "Loading by specifying AC type", "df = fridge.load(ac_type = 'active').next()\ndf.head()", "Loading by resampling to a specified period", "# resample to minutely (i.e. with a sample period of 60 secs)\ndf = fridge.load(ac_type = 'active', sample_period=60).next()\ndf.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
4DGenome/Chromosomal-Conformation-Course
Notebooks/01-Mapping.ipynb
gpl-3.0
[ "Table of Contents\n\nIterative vs fragment-based mapping\nAdvantages of iterative mapping\nAdvantages of fragment-based mapping\n\n\nMapping\nIterative mapping\nFragment-based mapping\n\n\n\nIterative vs fragment-based mapping\nIterative mapping first proposed by <a name=\"ref-1\"/>(Imakaev et al., 2012), allows to map usually a high number of reads. However other methodologies, less \"brute-force\" can be used to take into account the chimeric nature of Hi-C reads.\nA simple alternative is to allow split mapping, just as with RNA-seq data.\nAnother way consists in pre-truncating <a name=\"ref-1\"/>(Ay and Noble, 2015) reads that contain a ligation site and map only the longest part of the read <a name=\"ref-2\"/>(Wingett et al., 2015).\nFinally, an intermediate approach, fragment-based, consists in mapping full length reads first, and than splitting unmapped reads at the ligation sites <a name=\"ref-1\"/>(Serra, Ba`{u, Filion and Marti-Renom, 2016).\nAdvantages of iterative mapping\n\nIt's the only solution when no restriction enzyme has been used (i.e. micro-C)\nCan be faster when few windows (2 or 3) are used\n\nAdvantages of fragment-based mapping\n\nGenerally faster\nSafer: mapped reads are generally larger than 25-30 nm (the largest window used in iterative mapping). Less reads are mapped, but the difference is usually canceled or reversed when looking for \"valid-pairs\".\n\n Note: We use GEM <a name=\"ref-1\"/>(Marco-Sola, Sammeth, Guig\\'{o and Ribeca, 2012), performance are very similar to Bowtie2, perhaps a bit better. \nFor now TADbit is only compatible with GEM.\nMapping", "from pytadbit.mapping.full_mapper import full_mapping", "The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.\nIterative mapping\nHere an example of use as iterative mapping:", "r_enz = 'MboI'\n\n! mkdir -p results/iterativ/$r_enz\n! 
mkdir -p results/iterativ/$r_enz/01_mapping\n\n# for the first side of the reads\nfull_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',\n out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),\n fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),\n r_enz='hindIII', frag_map=False, clean=True, nthreads=20,\n windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)), \n temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))", "And for the second side of the read:", "# for the second side of the reads\nfull_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',\n out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),\n fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),\n r_enz=r_enz, frag_map=False, clean=True, nthreads=20,\n windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)),\n temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))", "Fragment-based mapping\nWith fragment based mapping it would be:", "! mkdir -p results/fragment/$r_enz\n! 
mkdir -p results/fragment/$r_enz/01_mapping\n\n# for the first side of the reads \nfull_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',\n out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),\n fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),\n r_enz=r_enz, frag_map=True, clean=True, nthreads=20, \n temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))\n\n# for the second side of the reads\nfull_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',\n out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),\n fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),\n r_enz=r_enz, frag_map=True, clean=True, nthreads=20, \n temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))", "<!--bibtex\n@article{Imakaev2012a,\nabstract = {Extracting biologically meaningful information from chromosomal interactions obtained with genome-wide chromosome conformation capture (3C) analyses requires the elimination of systematic biases. We present a computational pipeline that integrates a strategy to map sequencing reads with a data-driven method for iterative correction of biases, yielding genome-wide maps of relative contact probabilities. 
We validate this ICE (iterative correction and eigenvector decomposition) technique on published data obtained by the high-throughput 3C method Hi-C, and we demonstrate that eigenvector decomposition of the obtained maps provides insights into local chromatin states, global patterns of chromosomal interactions, and the conserved organization of human and mouse chromosomes.},\nauthor = {Imakaev, Maxim V and Fudenberg, Geoffrey and McCord, Rachel Patton and Naumova, Natalia and Goloborodko, Anton and Lajoie, Bryan R and Dekker, Job and Mirny, Leonid A},\ndoi = {10.1038/nmeth.2148},\nfile = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Imakaev et al. - 2012 - Iterative correction of Hi-C data reveals hallmarks of chromosome organization.pdf:pdf},\nissn = {1548-7105},\njournal = {Nature methods},\nkeywords = {Hi-C},\nmendeley-groups = {stats/Hi-C,Research articles},\nmendeley-tags = {Hi-C},\nmonth = {oct},\nnumber = {10},\npages = {999--1003},\npmid = {22941365},\ntitle = {{Iterative correction of Hi-C data reveals hallmarks of chromosome organization.}},\nurl = {http://www.ncbi.nlm.nih.gov/pubmed/22941365},\nvolume = {9},\nyear = {2012}\n}\n@article{Ay2015a,\nauthor = {Ay, Ferhat and Noble, William Stafford},\ndoi = {10.1186/s13059-015-0745-7},\nfile = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Ay, Noble - 2015 - Analysis methods for studying the 3D architecture of the genome.pdf:pdf},\nissn = {1474-760X},\njournal = {Genome Biology},\nkeywords = {Chromatin conformation capture,Genome architecture,Hi-C,Three-dimensional genome,Three-dimensional modeling,chromatin,conformation capture,genome architecture,in many other applications,ranging from genome assem-,review,three-dimensional genome,three-dimensional modeling},\nmendeley-groups = {Research articles},\nmendeley-tags = {Hi-C,review},\nnumber = {1},\npages = {183},\npublisher = {Genome Biology},\ntitle = {{Analysis methods for studying the 3D architecture 
of the genome}},\nurl = {http://genomebiology.com/2015/16/1/183},\nvolume = {16},\nyear = {2015}\n}\n@article{Wingett2015,\nabstract = {HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies.},\nauthor = {Wingett, Steven and Ewels, Philip and Furlan-Magaril, Mayra and Nagano, Takashi and Schoenfelder, Stefan and Fraser, Peter and Andrews, Simon},\ndoi = {10.12688/f1000research.7334.1},\nfile = {:home/fransua/Downloads/f1000research-4-7903.pdf:pdf},\nissn = {2046-1402},\njournal = {F1000Research},\nmendeley-groups = {Computer programs/Hi-C/Hi-C processing},\npages = {1310},\npmid = {26835000},\ntitle = {{HiCUP: pipeline for mapping and processing Hi-C data.}},\nurl = {http://f1000research.com/articles/4-1310/v1},\nvolume = {4},\nyear = {2015}\n}\n@article{Serra2016,\nabstract = {The sequence of a genome is insufficient to understand all genomic processes carried out in the cell nucleus. To achieve this, the knowledge of its three- dimensional architecture is necessary. Advances in genomic technologies and the development of new analytical methods, such as Chromosome Conformation Capture (3C) and its derivatives, now permit to investigate the spatial organization of genomes. However, inferring structures from raw contact data is a tedious process for shortage of available tools. Here we present TADbit, a computational framework to analyze and model the chromatin fiber in three dimensions. 
To illustrate the use of TADbit, we automatically modeled 50 genomic domains from the fly genome revealing differential structural features of the previously defined chromatin colors, establishing a link between the conformation of the genome and the local chromatin composition. More generally, TADbit allows to obtain three-dimensional models ready for visualization from 3C-based experiments and to characterize their relation to gene expression and epigenetic states. TADbit is open-source and available for download from http://www.3DGenomes.org.},\nauthor = {Serra, Fran{\\c{c}}ois and Ba{\\`{u}}, Davide and Filion, Guillaume and Marti-Renom, Marc A.},\ndoi = {10.1101/036764},\nfile = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Serra et al. - 2016 - Structural features of the fly chromatin colors revealed by automatic three-dimensional modeling.pdf:pdf},\njournal = {bioRxiv},\nkeywords = {3d,Hi-C,capture,genome architecture,genome reconstruction chromosome conformation,modeling,optimization,resampling,restraint-based,three-dimensional genome reconstruction},\nmendeley-groups = {Research articles,projects/WIREs{\\_}review/modeling{\\_}tools,Computer programs/Hi-C/Hi-C modeling},\nmendeley-tags = {Hi-C,modeling,optimization,resampling},\npages = {1--29},\ntitle = {{Structural features of the fly chromatin colors revealed by automatic three-dimensional modeling.}},\nurl = {http://biorxiv.org/content/early/2016/01/15/036764},\nyear = {2016}\n}\n@misc{Marco-Sola2012,\nabstract = {Because of ever-increasing throughput requirements of sequencing data, most existing short-read aligners have been designed to focus on speed at the expense of accuracy. 
The Genome Multitool (GEM) mapper can leverage string matching by filtration to search the alignment space more efficiently, simultaneously delivering precision (performing fully tunable exhaustive searches that return all existing matches, including gapped ones) and speed (being several times faster than comparable state-of-the-art tools).},\nauthor = {Marco-Sola, Santiago and Sammeth, Michael and Guig{\\'{o}}, Roderic and Ribeca, Paolo},\nbooktitle = {Nature Methods},\ndoi = {10.1038/nmeth.2221},\nisbn = {1548-7105 (Electronic)$\\backslash$r1548-7091 (Linking)},\nissn = {1548-7091},\nmendeley-groups = {Research articles},\npmid = {23103880},\ntitle = {{The GEM mapper: fast, accurate and versatile alignment by filtration}},\nyear = {2012}\n}\n\n-->\n\nReferences\n<a name=\"cite-Imakaev2012a\"/><sup>^ </sup>Imakaev, Maxim V and Fudenberg, Geoffrey and McCord, Rachel Patton and Naumova, Natalia and Goloborodko, Anton and Lajoie, Bryan R and Dekker, Job and Mirny, Leonid A. 2012. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. URL\n<a name=\"cite-Ay2015a\"/><sup>^ </sup>Ay, Ferhat and Noble, William Stafford. 2015. Analysis methods for studying the 3D architecture of the genome. URL\n<a name=\"cite-Wingett2015\"/><sup>^ </sup>Wingett, Steven and Ewels, Philip and Furlan-Magaril, Mayra and Nagano, Takashi and Schoenfelder, Stefan and Fraser, Peter and Andrews, Simon. 2015. HiCUP: pipeline for mapping and processing Hi-C data. URL\n<a name=\"cite-Serra2016\"/><sup>^ </sup>Serra, François and Baù, Davide and Filion, Guillaume and Marti-Renom, Marc A. 2016. Structural features of the fly chromatin colors revealed by automatic three-dimensional modeling. URL\n<a name=\"cite-Marco-Sola2012\"/><sup>^ </sup>Marco-Sola, Santiago and Sammeth, Michael and Guigó, Roderic and Ribeca, Paolo. 2012. The GEM mapper: fast, accurate and versatile alignment by filtration." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bigdata-i523/hid335
project/BDA-Analytics-Classifier-Heroin.ipynb
gpl-3.0
[ "Big Data Applications and Analytics - Term Project\nSean M. Shiverick Fall 2017\nClassification of Opioid Use: Heroin\n\nLogistic Regression Classifier, Decision Tree Classifier, Random Forests\nfrom Introduction to Machine Learning by Andreas Mueller and Sarah Guido\nCh. 2 Supervised Learning, Classification Models\n\nDataset: NSDUH-2015\n\nNational Survey of Drug Use and Health 2015\nSubstance Abuse and Mental Health Data Archive\n\nImport packages for data handling, modeling, and plotting", "import sklearn\nimport mglearn\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Part 1. Load Project dataset\n\nDelete first two columns and SUICATT; examine dataframe keys\nIdentify features and target variable HEROINEVR\nSplit data into Training and Test sets", "file = pd.read_csv('project-data.csv')\nopioids = pd.DataFrame(file)\n\nopioids.drop(opioids.columns[[0,1]], axis=1, inplace=True)\ndel opioids['SUICATT']\n\nopioids.shape\n\nprint(opioids.keys())\n\nopioids['HEROINEVR'].value_counts()\n\nfeatures = ['AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18', \n    'CTYMETRO', 'HEALTH','MENTHLTH', 'PRLMISEVR', 'PRLMISAB', 'PRLANY', \n    'TRQLZRS', 'SEDATVS', 'COCAINE', 'AMPHETMN', 'TRTMENT','MHTRTMT']\n\nopioids.data = pd.DataFrame(opioids, columns=['AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18', \n            'CTYMETRO', 'HEALTH','MENTHLTH', 'PRLMISEVR', \n            'PRLMISAB','PRLANY','TRQLZRS', 'SEDATVS', \n            'COCAINE', 'AMPHETMN', 'TRTMENT','MHTRTMT'])\nopioids.target = opioids['HEROINEVR']\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n    opioids.data, opioids.target, stratify=opioids.target, random_state=42)", "Part 2. 
Logistic Regression Classifier\n\nDecision boundary is a linear function of input\nBinary linear classifier separates two classes using a line, plane, or hyperplane\n\nRegularization Parameter C\nLow values of C:\n\nCorrespond to more regularization\nCause models to adjust to the 'majority' of data points\n\nHigh values of C:\n\nCorrespond to less regularization, models will fit training set as best as possible\nStresses importance of each individual data point to be classified correctly\nModel tries to classify all points correctly with a straight line; may not capture overall layout of classes well, so the model is likely OVERFITTING! \n\n2.1 Build Classifier Model on Training set", "from sklearn.linear_model import LogisticRegression\nlogreg = LogisticRegression().fit(X_train, y_train)", "2.2 Evaluate Classifier Model on Test set", "print(\"Training set score: {:.3f}\".format(logreg.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg.score(X_test, y_test)))", "2.3 Adjust Model Parameter settings\n\nDefault setting C=1 provides good performance for train and test sets\nVery likely UNDERFITTING the test data\n\nHigher value of C fits more 'Flexible' model\n\nC=100 generally gives higher training set accuracy and slightly higher Test set accuracy", "logreg100 = LogisticRegression(C=100).fit(X_train, y_train)\nprint(\"Training set score: {:.3f}\".format(logreg100.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg100.score(X_test, y_test)))", "Lower value of C fits more 'regularized' model\n\nSetting C=0.01 leads model to try to adjust to 'majority' of data points", "logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)\nprint(\"Training set score: {:.3f}\".format(logreg001.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg001.score(X_test, y_test)))", "2.4 Plot Coefficients of Logistic Regression Classifier\n\nMain difference between linear models for classification is penalty parameter\nLogisticRegression applies 
L2 (Ridge) regularization by default\n\nPenalty Parameter\n\nL2 penalty (Ridge) uses all available features, regularization C pushes toward zero \nL1 penalty (Lasso) sets coefficients for most features to zero, uses only a subset\nImproved interpretability with L1 penalty (Lasso)\n\n\n\nL1 Regularization (Lasso)\n\nModel is limited to using only a few features, more interpretable\n\nLegend: Different values of Parameter C\n\nStronger regularization pushes coefficients closer to zero\nParameter values can influence values of Coefficients", "for C, marker in zip([0.01, 1, 100], ['v', 'o', '^']):\n    lr_l1 = LogisticRegression(C=C, penalty=\"l1\", solver=\"liblinear\").fit(X_train, y_train)\n    print(\"Training accuracy of L1 logreg with C={:.3f}: {:.3f}\".format(\n        C, lr_l1.score(X_train, y_train)))\n    print(\"Test accuracy of L1 logreg with C={:.3f}: {:.3f}\".format(\n        C, lr_l1.score(X_test, y_test)))\n    plt.plot(lr_l1.coef_.T, marker, label=\"C={:.3f}\".format(C))\n    \nplt.xticks(range(opioids.data.shape[1]), features, rotation=90)\nplt.hlines(0,0, opioids.data.shape[1])\nplt.xlabel(\"Coefficient Index\")\nplt.ylabel(\"Coefficient Magnitude\")\n\nplt.ylim(-2, 2)\nplt.legend()", "Part 3. 
Decision Trees Classifier\nBuilding decision tree\n\nContinuing until all leaves are pure leads to models that are complex and overfit\nPresence of pure leaves means the tree is 100% accurate on the training set\nEach data point in training set is in a leaf that has the correct majority class\n\nPre-pruning: to Prevent Overfitting\n\nStopping creation of tree early\nLimiting the maximum depth of the tree, or limiting maximum number of leaves\nRequiring a minimum number of points in a node to keep splitting it\n\n3.1 Build Decision Trees Classifier Model for Heroin\n\nImport package: DecisionTreeClassifier\nBuild model using default setting that fully develops tree until all leaves are pure\nFix the random_state in the tree, for breaking ties internally", "from IPython.display import Image, display\nfrom sklearn.tree import DecisionTreeClassifier\n\ntree = DecisionTreeClassifier(random_state=0)\ntree.fit(X_train, y_train)", "3.2 Evaluate Tree Classifier model on Test set", "print(\"Accuracy on the training: {:.3f}\".format(tree.score(X_train, y_train)))\nprint(\"Accuracy on the test set: {:.3f}\".format(tree.score(X_test, y_test)))", "3.3 Adjust Parameter Settings\n\nTraining set accuracy is 100% because leaves are pure\nTrees can become arbitrarily deep and complex if depth of the tree is not limited\nUnpruned trees are prone to overfitting and not generalizing well to new data\n\nPruning: Set max_depth=4\n\nTree depth is limited to 4 branches\nLimiting depth of the tree decreases overfitting \nResults in lower accuracy on training set, but improvement on test set", "tree = DecisionTreeClassifier(max_depth=4, random_state=0)\ntree.fit(X_train, y_train)\n\nprint(\"Accuracy on the training: {:.3f}\".format(tree.score(X_train, y_train)))\nprint(\"Accuracy on the test set: {:.3f}\".format(tree.score(X_test, y_test)))", "3.4 Visualizing Decision Tree Classifier\n\nVisualize the tree using export_graphviz function from trees module\nSet an option to color the nodes to 
reflect majority class in each node\nFirst, install graphviz at terminal using brew install graphviz", "from sklearn.tree import export_graphviz\n\nexport_graphviz(tree, out_file=\"tree.dot\", class_names=[\"Yes\", \"No\"],\n                feature_names=features, impurity=False, filled=True)\n\nfrom IPython.display import display\n\nimport graphviz\n\nwith open('tree.dot') as f:\n    dot_graph = f.read()\n\ndisplay(graphviz.Source(dot_graph))", "Feature Importance in Trees\n\nRelates how important each feature is for the decision a tree makes\nValues range from 0 = \"not at all\" to 1 = \"perfectly predicts target\"\nFeature importances always sum to a total of 1.\n\n3.5 Visualize Feature Importance\n\nSimilar to how we visualized coefficients in linear model\nFeature used in the top split is the most important feature\nFeatures with low importance may be redundant with another feature that encodes same info", "print(\"Feature importances:\\n{}\".format(tree.feature_importances_))\n\ndef plot_feature_importances_heroin(model):\n    n_features = opioids.data.shape[1]\n    plt.barh(range(n_features), model.feature_importances_, align='center')\n    plt.yticks(np.arange(n_features), features)\n    plt.xlabel(\"Feature importance\")\n    plt.ylabel(\"Feature\")\n\nplot_feature_importances_heroin(tree)", "Part 4. 
Random Forests Classifier\n\nRandom forest gives better accuracy than linear models or single decision tree, without tuning any parameters\n\nBuilding Random Forests\n\nFirst, need to decide how many trees to build: n_estimators parameter\nTrees are built independently of each other, based on \"bootstrap\" samples: n_samples\nAlgorithm randomly selects subset of features, number determined by max_features parameter\nEach node of tree makes decision involving different subset of features\n\nBootstrap Sampling and Subsets of Features\n\nEach decision tree in random forest is built on slightly different dataset\nEach split in each tree operates on different subset of features\n\nCritical Parameter: max_features\n\nIf max_features set to n_features, each split can look at all features in the dataset, no randomness in selection\nHigh max_features means the trees in a random forest will be quite similar\nLow max_features means trees in random forest will be quite different\n\nPrediction with Random Forests\n\nAlgorithm first makes prediction for every tree in the forest\nFor classification, \"soft voting\" is applied: probabilities for all trees are averaged, and the class with highest probability is predicted\n\n4.1 Build Random Forests Classifier: Heroin\n\nSplit data into train and test sets; \nset n_estimators to 100 trees; build model on the training set", "from sklearn.ensemble import RandomForestClassifier\n\nforest = RandomForestClassifier(n_estimators=100, random_state=0)\nforest.fit(X_train, y_train)", "4.2 Evaluate Random Forests Classifier on Test set", "print(\"Accuracy on training set: {:.3f}\".format(forest.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(forest.score(X_test, y_test)))", "4.3 Feature Importance for Random Forest\n\nComputed by aggregating the feature importances over the trees in the forest\nFeature importances provided by Random Forest are more reliable than those provided by a single tree\nMany more features have 
non-zero importance than with a single tree, and the forest chooses similar features", "plot_feature_importances_heroin(forest)", "Part 5. Gradient Boosted Classifier Trees\n\nWorks by building trees in a serial manner, where each tree tries to correct for mistakes of previous ones\nMain idea: combine many simple models, shallow trees ('weak learners'); adding more trees iteratively improves performance\n\nParameters\n\nPre-pruning, and number of trees in ensemble\nlearning_rate parameter controls how strongly each tree tries to correct mistakes of previous trees\nAdd more trees to model with n_estimators, increases model complexity\n\n5.1 Build Gradient Boosting Classifier for Heroin\nWith 100 trees, of maximum depth 3, and learning rate of 0.01", "from sklearn.ensemble import GradientBoostingClassifier\n\ngbrt = GradientBoostingClassifier(random_state=0, n_estimators=100, max_depth=3, learning_rate=0.01)\ngbrt.fit(X_train, y_train)", "5.2 Feature Importance\n\nWith 100 trees, cannot inspect them all, even if maximum depth is only 1", "gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)\ngbrt.fit(X_train, y_train)\n\nplot_feature_importances_heroin(gbrt)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
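The L1-versus-L2 contrast discussed in the notebook above (Lasso-style sparsity versus Ridge-style shrinkage in `LogisticRegression`) can be sketched on synthetic data. This is an illustration only: the data below is randomly generated, not the NSDUH survey data, and the choice of `C=0.1` is an arbitrary assumption for the sketch.

```python
# Sketch: L1 vs L2 penalty in logistic regression on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification problem: 20 features, only 5 informative.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# L1 (Lasso-style) penalty drives many coefficients exactly to zero;
# liblinear is a solver that supports the l1 penalty.
lr_l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
# L2 (Ridge-style) penalty shrinks coefficients but keeps them non-zero.
lr_l2 = LogisticRegression(penalty="l2", C=0.1).fit(X, y)

n_zero_l1 = int(np.sum(lr_l1.coef_ == 0))
n_zero_l2 = int(np.sum(lr_l2.coef_ == 0))
print(n_zero_l1, n_zero_l2)  # L1 typically zeroes out many more features
```

Comparing the zero counts makes the interpretability argument concrete: the L1 model effectively selects a subset of features, while the L2 model keeps all of them with small weights.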
xray/xray
doc/examples/ROMS_ocean_model.ipynb
apache-2.0
[ "ROMS Ocean Model Example\nThe Regional Ocean Modeling System (ROMS) is an open source hydrodynamic model that is used for simulating currents and water properties in coastal and estuarine regions. ROMS is one of a few standard ocean models, and it has an active user community.\nROMS uses a regular C-Grid in the horizontal, similar to other structured grid ocean and atmospheric models, and a stretched vertical coordinate (see the ROMS documentation for more details). Both of these require special treatment when using xarray to analyze ROMS ocean model output. This example notebook shows how to create a lazily evaluated vertical coordinate, and make some basic plots. The xgcm package is required to do analysis that is aware of the horizontal C-Grid.", "import numpy as np\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport xarray as xr", "Load a sample ROMS file. This is a subset of a full model available at \nhttp://barataria.tamu.edu/thredds/catalog.html?dataset=txla_hindcast_agg\n\nThe subsetting was done using the following command on one of the output files:\n#open dataset\nds = xr.open_dataset('/d2/shared/TXLA_ROMS/output_20yr_obc/2001/ocean_his_0015.nc')\n\n# Turn on chunking to activate dask and parallelize read/write.\nds = ds.chunk({'ocean_time': 1})\n\n# Pick out some of the variables that will be included as coordinates\nds = ds.set_coords(['Cs_r', 'Cs_w', 'hc', 'h', 'Vtransform'])\n\n# Select a subset of variables. 
Salt will be visualized, zeta is used to \n# calculate the vertical coordinate\nvariables = ['salt', 'zeta']\nds[variables].isel(ocean_time=slice(47, None, 7*24), \n                   xi_rho=slice(300, None)).to_netcdf('ROMS_example.nc', mode='w')\n\nSo, the ROMS_example.nc file contains a subset of the grid, one 3D variable, and two time steps.\nLoad in ROMS dataset as an xarray object", "# load in the file\nds = xr.tutorial.open_dataset('ROMS_example.nc', chunks={'ocean_time': 1})\n\n# This is a way to turn on chunking and lazy evaluation. Opening with mfdataset, or \n# setting the chunking in the open_dataset would also achieve this.\nds", "Add a lazily calculated vertical coordinate\nWrite equations to calculate the vertical coordinate. These will only be evaluated when data is requested. Information about the ROMS vertical coordinate can be found [here](https://www.myroms.org/wiki/Vertical_S-coordinate)\nIn short, for Vtransform==2 as used in this example, \n$Z_0 = (h_c \\, S + h \\,C) / (h_c + h)$\n$z = Z_0 (\\zeta + h) + \\zeta$\nwhere the variables are defined as in the link above.", "if ds.Vtransform == 1:\n    Zo_rho = ds.hc * (ds.s_rho - ds.Cs_r) + ds.Cs_r * ds.h\n    z_rho = Zo_rho + ds.zeta * (1 + Zo_rho/ds.h)\nelif ds.Vtransform == 2:\n    Zo_rho = (ds.hc * ds.s_rho + ds.Cs_r * ds.h) / (ds.hc + ds.h)\n    z_rho = ds.zeta + (ds.zeta + ds.h) * Zo_rho\n\nds.coords['z_rho'] = z_rho.transpose()  # needing transpose seems to be an xarray bug\nds.salt", "A naive vertical slice\nCreating a slice that uses the s-coordinate as the vertical dimension is typically not very informative.", "ds.salt.isel(xi_rho=50, ocean_time=0).plot()", "We can feed coordinate information to the plot method to give a more informative cross-section that uses the depths. 
Note that we did not need to slice the depth or longitude information separately; this was done automatically as the variable was sliced.", "section = ds.salt.isel(xi_rho=50, eta_rho=slice(0, 167), ocean_time=0)\nsection.plot(x='lon_rho', y='z_rho', figsize=(15, 6), clim=(25, 35))\nplt.ylim([-100, 1]);", "A plan view\nNow make a naive plan view, without any projection information, just using lon/lat as x/y. This looks OK, but will appear compressed because lon and lat do not have an aspect constrained by the projection.", "ds.salt.isel(s_rho=-1, ocean_time=0).plot(x='lon_rho', y='lat_rho')", "And let's use a projection to make it nicer, and add a coast.", "proj = ccrs.LambertConformal(central_longitude=-92, central_latitude=29)\nfig = plt.figure(figsize=(15, 5))\nax = plt.axes(projection=proj)\nds.salt.isel(s_rho=-1, ocean_time=0).plot(x='lon_rho', y='lat_rho', \n                                          transform=ccrs.PlateCarree())\n\ncoast_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m',\n                                         edgecolor='k', facecolor='0.8')\nax.add_feature(coast_10m)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
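The `Vtransform == 2` stretching formula used in the ROMS notebook above can be checked in plain NumPy. The values of `h`, `hc`, and `zeta` below are made up, and the idealized stretching curve `C(s) = s` is an assumption for the sketch (real ROMS output supplies `Cs_r`); the point is only that the formula pins `z` to the free surface at `s = 0` and to the bathymetry at `s = -1`.

```python
# Sketch: sanity-checking the Vtransform == 2 vertical coordinate formula
# with made-up values, independent of any ROMS output file.
import numpy as np

h, hc, zeta = 100.0, 20.0, 0.5      # depth, critical depth, free surface (made up)
s = np.linspace(-1.0, 0.0, 11)      # s-coordinate: -1 at the bottom, 0 at the surface
C = s                               # idealized stretching curve: C(-1) = -1, C(0) = 0

Zo = (hc * s + C * h) / (hc + h)    # Z_0 = (hc*S + h*C) / (hc + h)
z = zeta + (zeta + h) * Zo          # z = zeta + (zeta + h) * Z_0

# The formula guarantees z follows the free surface at the top
# and the bathymetry at the bottom, for any stretching curve.
assert np.isclose(z[-1], zeta)      # s = 0  ->  z = zeta
assert np.isclose(z[0], -h)         # s = -1 ->  z = -h
```

The two assertions hold algebraically for any `C` with `C(0) = 0` and `C(-1) = -1`, which is why the lazily computed `z_rho` tracks both `zeta` and `h` without further adjustment.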
jonathf/chaospy
docs/user_guide/main_usage/monte_carlo_integration.ipynb
mit
[ "Monte Carlo integration\nMonte Carlo is the simplest of all collocation methods.\nIt consists of the following steps:\n\nGenerate (pseudo-)random samples $Q_1, ..., Q_N$.\nEvaluate model solver $U_1=u(Q_1), ..., U_N=u(Q_N)$ for each sample.\nUse empirical metrics to assess statistics on the evaluations.\n\nThis was the approach introduced in problem\nformulation, and we shall go through it again,\nbut in a bit more detail, and leveraging some of the features introduced in\nthe section quasi_random_samples.\nGenerating samples\nThe samples that shall be used in Monte Carlo must be assumed to behave as if\ndrawn from the probability distribution of the model parameters one wants to\nmodel. In the case of problem formulation, the\nsamples drawn were random. Here we shall replace the random samples with\nvariance reduced samples from the following three schemes:\n\nSobol\nAntithetic variate\nHalton\n\nWe start by generating samples from the distribution of interest:", "from problem_formulation import joint\n\njoint", "Then we generate samples from the three schemes:", "sobol_samples = joint.sample(10000, rule=\"sobol\")\nantithetic_samples = joint.sample(10000, antithetic=True, seed=1234)\nhalton_samples = joint.sample(10000, rule=\"halton\")\n\nfrom matplotlib import pyplot\n\npyplot.rc(\"figure\", figsize=[16, 4])\n\npyplot.subplot(131)\npyplot.scatter(*sobol_samples[:, :1000])\npyplot.title(\"sobol\")\n\npyplot.subplot(132)\npyplot.scatter(*antithetic_samples[:, :1000])\npyplot.title(\"antithetic variates\")\n\npyplot.subplot(133)\npyplot.scatter(*halton_samples[:, :1000])\npyplot.title(\"halton\")\n\n\npyplot.show()", "From the three plots above it is easy to see both how the Sobol sequence has\nmore structure, and how the antithetic variates have observable symmetries.\nEvaluating model solver\nLike in the case of problem formulation again,\nevaluation is straightforward:", "from problem_formulation import model_solver, coordinates\nimport numpy\n\nsobol_evals = 
numpy.array([\n    model_solver(sample) for sample in sobol_samples.T])\n\nantithetic_evals = numpy.array([\n    model_solver(sample) for sample in antithetic_samples.T])\n\nhalton_evals = numpy.array([\n    model_solver(sample) for sample in halton_samples.T])\n\npyplot.subplot(131)\npyplot.plot(coordinates, sobol_evals[:100].T, alpha=0.3)\npyplot.title(\"sobol\")\n\npyplot.subplot(132)\npyplot.plot(coordinates, antithetic_evals[:100].T, alpha=0.3)\npyplot.title(\"antithetic variate\")\n\npyplot.subplot(133)\npyplot.plot(coordinates, halton_evals[:100].T, alpha=0.3)\npyplot.title(\"halton\")\n\npyplot.show()", "Error analysis\nHaving a good estimate of the statistical properties allows us to assess the\nproperties of the uncertainty in the model. However, it does not allow us to\nassess the accuracy of the methods used. To do that we need to compare the\nstatistical metrics with their analytical counterparts. To do so, we use the\nreference analytical solution and error function as defined in problem\nformulation.", "from problem_formulation import error_in_mean, indices, eps_mean\n\neps_sobol_mean = [error_in_mean(\n    numpy.mean(sobol_evals[:idx], 0)) for idx in indices]\n\neps_antithetic_mean = [error_in_mean(\n    numpy.mean(antithetic_evals[:idx], 0)) for idx in indices]\n\neps_halton_mean = [error_in_mean(\n    numpy.mean(halton_evals[:idx], 0)) for idx in indices]\n\npyplot.rc(\"figure\", figsize=[6, 4])\n\npyplot.semilogy(indices, eps_mean, \"r\", label=\"random\")\npyplot.semilogy(indices, eps_sobol_mean, \"-\", label=\"sobol\")\npyplot.semilogy(indices, eps_antithetic_mean, \":\", label=\"antithetic\")\npyplot.semilogy(indices, eps_halton_mean, \"--\", label=\"halton\")\n\npyplot.legend()\npyplot.show()", "Here we see that for our little problem, all new schemes outperform\nclassical random samples, with Sobol on top, followed by Halton and antithetic\nvariate.\nFor the error in variance estimation we have:", "from problem_formulation import error_in_variance, 
eps_variance\n\neps_halton_variance = [error_in_variance(\n numpy.var(halton_evals[:idx], 0)) for idx in indices]\n\neps_sobol_variance = [error_in_variance(\n numpy.var(sobol_evals[:idx], 0)) for idx in indices]\n\neps_antithetic_variance = [error_in_variance(\n numpy.var(antithetic_evals[:idx], 0)) for idx in indices]\n\npyplot.semilogy(indices, eps_variance, \"r\", label=\"random\")\npyplot.semilogy(indices, eps_sobol_variance, \"-\", label=\"sobol\")\npyplot.semilogy(indices, eps_antithetic_variance, \":\", label=\"antithetic\")\npyplot.semilogy(indices, eps_halton_variance, \"--\", label=\"halton\")\n\npyplot.legend()\npyplot.show()", "In this case Sobol and Halton er quite comparable as the best performers.\nAntithetic variate seems to now work out, with performance lower than\nclassical random samples.\nNote that the conclusion here is not that antithetic variate don't work, but\nrather that it is perhaps not the right tool for this job." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/examples
courses/udacity_intro_to_tensorflow_for_deep_learning/l06c01_tensorflow_hub_and_transfer_learning.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td>\n    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c01_tensorflow_hub_and_transfer_learning.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c01_tensorflow_hub_and_transfer_learning.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n  </td>\n</table>\n\nTensorFlow Hub and Transfer Learning\nTensorFlow Hub is an online repository of already trained TensorFlow models that you can use.\nThese models can either be used as is, or they can be used for Transfer Learning.\nTransfer learning is a process where you take an existing trained model, and extend it to do additional work. 
This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.\nIn this Colab we will do both.\nHere, you can see all the models available in TensorFlow Module Hub.\nConcepts that will be covered in this Colab\n\nUse a TensorFlow Hub model for prediction.\nUse a TensorFlow Hub model for Dogs vs. Cats dataset.\nDo simple transfer learning with TensorFlow Hub.\n\nBefore starting this Colab, you should reset the Colab environment by selecting Runtime -&gt; Reset all runtimes... from menu above.\nImports\nSome normal imports we've seen before. The new one is importing tensorflow_hub which was installed above, and which this Colab will make heavy use of.", "import tensorflow as tf\n\nimport matplotlib.pylab as plt\n\nimport tensorflow_hub as hub\nimport tensorflow_datasets as tfds\n\nfrom tensorflow.keras import layers\n\nimport logging\nlogger = tf.get_logger()\nlogger.setLevel(logging.ERROR)", "Part 1: Use a TensorFlow Hub MobileNet for prediction\nIn this part of the Colab, we'll take a trained model, load it into to Keras, and try it out.\nThe model that we'll use is MobileNet v2 (but any model from tf2 compatible image classifier URL from tfhub.dev would work).\nDownload the classifier\nDownload the MobileNet model and create a Keras model from it.\nMobileNet is expecting images of 224 $\\times$ 224 pixels, in 3 color channels (RGB).", "CLASSIFIER_URL =\"https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2\"\nIMAGE_RES = 224\n\nmodel = tf.keras.Sequential([\n hub.KerasLayer(CLASSIFIER_URL, input_shape=(IMAGE_RES, IMAGE_RES, 3))\n])", "Run it on a single image\nMobileNet has been trained on the ImageNet dataset. 
ImageNet has 1000 different output classes, and one of them is military uniforms.\nLet's get an image containing a military uniform that is not part of ImageNet, and see if our model can predict that it is a military uniform.", "import numpy as np\nimport PIL.Image as Image\n\ngrace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')\ngrace_hopper = Image.open(grace_hopper).resize((IMAGE_RES, IMAGE_RES))\ngrace_hopper \n\ngrace_hopper = np.array(grace_hopper)/255.0\ngrace_hopper.shape", "Remember, models always want a batch of images to process. So here, we add a batch dimension, and pass the image to the model for prediction.", "result = model.predict(grace_hopper[np.newaxis, ...])\nresult.shape", "The result is a 1001 element vector of logits, rating the probability of each class for the image.\nSo the top class ID can be found with argmax. But how can we know what class this actually is and in particular if that class ID in the ImageNet dataset denotes a military uniform or something else?", "predicted_class = np.argmax(result[0], axis=-1)\npredicted_class", "Decode the predictions\nTo see what our predicted_class is in the ImageNet dataset, download the ImageNet labels and fetch the row that the model predicted.", "labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')\nimagenet_labels = np.array(open(labels_path).read().splitlines())\n\nplt.imshow(grace_hopper)\nplt.axis('off')\npredicted_class_name = imagenet_labels[predicted_class]\n_ = plt.title(\"Prediction: \" + predicted_class_name.title())", "Bingo. Our model correctly predicted military uniform!\nPart 2: Use a TensorFlow Hub model for the Cats vs. Dogs dataset\nNow we'll use the full MobileNet model and see how it can perform on the Dogs vs. 
Cats dataset.\nDataset\nWe can use TensorFlow Datasets to load the Dogs vs Cats dataset.", "(train_examples, validation_examples), info = tfds.load(\n 'cats_vs_dogs', \n with_info=True, \n as_supervised=True, \n split=['train[:80%]', 'train[80%:]'],\n)\n\nnum_examples = info.splits['train'].num_examples\nnum_classes = info.features['label'].num_classes", "The images in the Dogs vs. Cats dataset are not all the same size.", "for i, example_image in enumerate(train_examples.take(3)):\n print(\"Image {} shape: {}\".format(i+1, example_image[0].shape))", "So we need to reformat all images to the resolution expected by MobileNet (224, 224).\nThe .repeat() and steps_per_epoch here are not required, but save ~15s per epoch, since the shuffle-buffer only has to cold-start once.", "def format_image(image, label):\n image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0\n return image, label\n\nBATCH_SIZE = 32\n\ntrain_batches = train_examples.shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)\nvalidation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)", "Run the classifier on a batch of images\nRemember our model object is still the full MobileNet model trained on ImageNet, so it has 1000 possible output classes.\nImageNet has a lot of dogs and cats in it, so let's see if it can predict the images in our Dogs vs. Cats dataset.", "image_batch, label_batch = next(iter(train_batches.take(1)))\nimage_batch = image_batch.numpy()\nlabel_batch = label_batch.numpy()\n\nresult_batch = model.predict(image_batch)\n\npredicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]\npredicted_class_names", "The labels seem to match names of Dogs and Cats. 
Let's now plot the images from our Dogs vs Cats dataset and put the ImageNet labels next to them.", "plt.figure(figsize=(10,9))\nfor n in range(30):\n plt.subplot(6,5,n+1)\n plt.subplots_adjust(hspace = 0.3)\n plt.imshow(image_batch[n])\n plt.title(predicted_class_names[n])\n plt.axis('off')\n_ = plt.suptitle(\"ImageNet predictions\")", "Part 3: Do simple transfer learning with TensorFlow Hub\nLet's now use TensorFlow Hub to do Transfer Learning.\nWith transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset.\nIn addition to complete models, TensorFlow Hub also distributes models without the last classification layer. These can be used to easily do transfer learning. We will continue using MobileNet v2 because in later parts of this course, we will take this model and deploy it on a mobile device using TensorFlow Lite. Any image feature vector URL from tfhub.dev would work here.\nWe'll also continue to use the Dogs vs Cats dataset, so we will be able to compare the performance of this model against the ones we created from scratch earlier.\nNote that we're calling the partial model from TensorFlow Hub (without the final classification layer) a feature_extractor. The reasoning for this term is that it will take the input all the way to a layer containing a number of features. So it has done the bulk of the work in identifying the content of an image, except for creating the final probability distribution. That is, it has extracted the features of the image.", "URL = \"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2\"\nfeature_extractor = hub.KerasLayer(URL,\n input_shape=(IMAGE_RES, IMAGE_RES,3))", "Let's run a batch of images through this, and see the final shape. 
32 is the number of images, and 1280 is the number of neurons in the last layer of the partial model from TensorFlow Hub.", "feature_batch = feature_extractor(image_batch)\nprint(feature_batch.shape)", "Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.", "feature_extractor.trainable = False", "Attach a classification head\nNow wrap the hub layer in a tf.keras.Sequential model, and add a new classification layer.", "model = tf.keras.Sequential([\n feature_extractor,\n layers.Dense(2)\n])\n\nmodel.summary()", "Train the model\nWe now train this model like any other, by first calling compile followed by fit.", "model.compile(\n optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nEPOCHS = 6\nhistory = model.fit(train_batches,\n epochs=EPOCHS,\n validation_data=validation_batches)", "You can see we get ~97% validation accuracy, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~83% accuracy. 
The reason for this difference is that MobileNet was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet).\nAlthough not equivalent to TensorFlow Hub, you can check out how to create MobileNet in Keras here.\nLet's plot the training and validation accuracy/loss graphs.", "acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\n\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs_range = range(EPOCHS)\n\nplt.figure(figsize=(8, 8))\nplt.subplot(1, 2, 1)\nplt.plot(epochs_range, acc, label='Training Accuracy')\nplt.plot(epochs_range, val_acc, label='Validation Accuracy')\nplt.legend(loc='lower right')\nplt.title('Training and Validation Accuracy')\n\nplt.subplot(1, 2, 2)\nplt.plot(epochs_range, loss, label='Training Loss')\nplt.plot(epochs_range, val_loss, label='Validation Loss')\nplt.legend(loc='upper right')\nplt.title('Training and Validation Loss')\nplt.show()", "What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.\nOne reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.\nThe bigger reason though is that we're reusing a large part of MobileNet which is already trained on Dogs and Cats images. While doing training, the network is still performing image augmentation on the training images, but not on the validation dataset. 
This means the training images may be harder to classify compared to the normal images in the validation dataset.\nCheck the predictions\nTo redo the plot from before, first get the ordered list of class names.", "class_names = np.array(info.features['label'].names)\nclass_names", "Run the image batch through the model and convert the indices to class names.", "predicted_batch = model.predict(image_batch)\npredicted_batch = tf.squeeze(predicted_batch).numpy()\npredicted_ids = np.argmax(predicted_batch, axis=-1)\npredicted_class_names = class_names[predicted_ids]\npredicted_class_names", "Let's look at the true labels and predicted ones.", "print(\"Labels: \", label_batch)\nprint(\"Predicted labels: \", predicted_ids)\n\nplt.figure(figsize=(10,9))\nfor n in range(30):\n plt.subplot(6,5,n+1)\n plt.subplots_adjust(hspace = 0.3)\n plt.imshow(image_batch[n])\n color = \"blue\" if predicted_ids[n] == label_batch[n] else \"red\"\n plt.title(predicted_class_names[n].title(), color=color)\n plt.axis('off')\n_ = plt.suptitle(\"Model predictions (blue: correct, red: incorrect)\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Vvkmnn/books
AutomateTheBoringStuffWithPython/lesson22.ipynb
gpl-3.0
[ "Lesson 22:\nLaunching Python in Other Programs\nThe first line of any Pthon Script should be the Shebang Line.\nOSX: \n #! /usr/bin/env python3\n\nLinux: \n #! usr/bin/python3\n\nWindows: \n python3\n\nThis lets you run scrips in Terminal/CMD Prompt. \npython3 /path/to/script.py\n\nBatch files, or shell scripts, can run multiple seperate programs/scripts.\nOSX/Linux: \n .sh\n\nWindows: \n .bat\n\nA shell script includes references to multiple programs:", "#! usr/bin/env bash\n# This is a shell script\n#python3 runthisscript.py\n#echo 'I'm running a python script'", "You can then use CMD/Terminal to run this script\n$sh /path/to/shellscript.sh\n\nYou can skip adding the absolute path to scripts by adding folders to the PATH environment variable.\nThis lets the operating system check these folders before looking anywhere else. Editing the PATH gives the entire OS access to those folders from any location. \nTemporarily edit the PATH in OSX\nShow PATH in Terminal\n$ echo $PATH\n\nTemporarily add to $PATH for that terminal session\n$ PATH=/usr/bin:/bin:/usr/sbin:/newpathtofolder/\n\nPermanently edit the PATH in OSX\nMove to home folder\n$ cd\n\nEdit the .bash_profile (use whatever editor you want instead of 'nano')\n$nano .bash_profile #\n\nAdd the new value to the PATH and include exiting folders (with :PATH)\n$ export PATH=\"/usr/local/mysql/bin:$PATH\"\n\nShould now show the new PATH with the added folder. \n$ echo $PATH\n\nScripts can also take command line arguments:\nsh script.sh arg1 arg2\n\nThese are called system arguments, and can be accessed in a Python program via the sys.argv command.", "import sys\nprint('Hello world')\nprint(sys.argv)", "This is useful for allowing your program to take additional parameters when incorporated into batch files.\nTypically, you will need to add a %* to forward those arguments to the Python script:", "#! 
/usr/bin/env bash\n# This is a shell script\n#python3 runthisscript.py %*\n#echo 'I'm running a python script with system arguments'", "Recap\n\nThe shebang line tells your computer that you want to run the script using Python 3.\nOn Windows, you can bring up the Run dialog by pressing &lt;Win&gt;+R. OSX/Unix uses Terminal.\nA batch file can save you a lot of typing by running multiple commands.\nYou can add a folder to the $PATH environment variable to skip absolute paths.\nCommand line arguments can be read via the sys.argv list." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
INM-6/python-odmltables
tutorials/tutorial-2_experimental_data/demo_complex_experiment.ipynb
bsd-3-clause
[ "Handling of experimental metadata using odMLtables\nThis notebook presents the interaction with metadata from an experiment using odMLtables. Here we use the metadata of a complex electrophysiology experiment published in Brochier T. et al. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Sci. Data 5:180055. In the first step, we show how to extract a subset of information from the complete metadata collection for visualization. In the second step, we compare numerous similar values available with in one metadata file and generate an overview sheet, eg. for addition into a lab book.\nDownload of published metadata\nThe published datasets are hosted on GIN and can be accessed directly via the webinterface or the local gin client. Here we download the metadata files of the two datasets directly.", "# helper function by Ali Faki, https://stackoverflow.com/questions/7243750/download-file-from-web-in-python-3\nfrom requests import get\ndef download(url, file_name):\n # open in binary mode\n with open(file_name, \"wb\") as file:\n # get request\n response = get(url)\n # write to file\n file.write(response.content)\n\n# location of the repository and metadata files on GIN\ngin_repo = \"https://web.gin.g-node.org/INT/multielectrode_grasp/raw/master/\"\nfilenames = [\"i140703-001.odml\", \"l101210-001.odml\"]\nfilepaths = [\"datasets/\" + f for f in filenames]\n\n# download metadata files from GIN\nfor filepath, filename in zip(filepaths, filenames):\n download(gin_repo + filepath, filename)", "Extraction of a subset of metadata\nThe metadata collections of the electrophysiological experiment contain due to the complexity of the experiment thousands of entries organized in a hierarchical odML structure. This is of advantage for the storage of the metadata in an organized fashion, but for human interaction and visualization the amout of data needs to be reduced to a subset relevant for the current question. 
This can be achieved using the filter functionality of odMLtables. Here we extract all metadata related to the subject performing the task, which includes information such as the species, name, birthdate and handedness.\nSince the publication of the datasets, odML has been developed further, so an updated odML version exists. In a first step, we convert the metadata files from odML file version 1.0 to file version 1.1 using the version conversion tool provided by odML.\nIn a second step, we use odMLtables to select only information related to the subject of the experiment and store this in a separate metadata file.", "# in-place conversion of odML files to latest odML version\nfrom odml.tools.version_converter import VersionConverter\nfor filename in filenames:\n converter = VersionConverter(filename)\n converter.convert()\n converter.write_to_file(filename)\n\n\n# load odml file using odmltables and extract subset of information\nimport odmltables as odt\n\n# extract only directly related subject information\nfor filename in filenames:\n odmlfile = odt.OdmlTable(filename)\n odmlfile.filter(SectionName='Subject')\n new_filename = filename.split('.')[0] + '_filtered.odml'\n odmlfile.write2odml(new_filename)", "The generated odML files, which contain only a subset of the metadata, can now be visualized in a browser using the odML style sheet. For this, just place the style sheet in the same folder as your odML file and open the odML file in your browser. 
Here we use a helper function to display the odML content using the odML style sheet.", "# This is utility code for displaying the odML file as an html representation here.\n# You can also just open the odML file in your browser, with the style sheet in the same location as your odML file, and\n# you will get the same result\nfrom IPython.display import display, HTML\nimport lxml.etree as ET\n\ndef display_odML_as_html(odML_file, xsl_file='odml.xsl'):\n # generate html representation from odML file and style sheet\n dom = ET.parse(odML_file)\n xslt = ET.parse(xsl_file)\n transform = ET.XSLT(xslt)\n newdom = transform(dom)\n # display html\n display(HTML(ET.tostring(newdom, pretty_print=True).decode()))\n\n# display the extracted subsection for one of the datasets\ndisplay_odML_as_html(filenames[0].split('.')[0] + '_filtered.odml')", "Comparison of metadata\nIn many cases metadata collections contain repeating structures, containing similar information for different contexts. Comparing this information, located in different parts of the odML structure, in a single table helps to gain an overview of the metadata and is sometimes required for tracking in lab book tables. odMLtables offers a comparison function within an odML document, which generates such an overview table for a selected number of subsections.\nThe published metadata collection contains information about 96 active recording electrodes, characterizing each in a separate odML section. The properties in these sections are always the same and only the structure of the subsections differs depending on the preprocessing (spikesorting) of the data. 
The basic electrode properties are a variety of different IDs associated with each electrode, its impedance and length.\nThe odMLtables comparison generates an xls or csv table listing the shared properties against the sections to compare.", "for filename in filenames:\n # create an odMLtables table for comparison with functionality to write to xls\n comparetable = odt.CompareSectionXlsTable()\n comparetable.load_from_file(filename)\n # specify which sections to compare, here the first 5 electrodes\n comparetable.choose_sections(*['Electrode_{:03d}'.format(i) for i in range(1,6)])\n # write comparison table to xls format\n comparetable.write2file(filename.split('.')[0] + '_electrode_comparison.xls')", "The resulting overview table looks as shown below, listing the shared properties in different rows and the selected electrodes in columns. Comparison tables only contain a subset of the metadata from the original document and lack the corresponding metadata needed to reconstruct the original odML file. Therefore they are intended only for visualization and documentation purposes, not for further enrichment or storage of metadata." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stevetjoa/stanford-music-364
genre_classification.ipynb
mit
[ "%matplotlib inline\nimport seaborn\nimport numpy, scipy, matplotlib.pyplot as plt, sklearn, pandas, librosa, urllib, IPython.display, os.path\nplt.rcParams['figure.figsize'] = (14,5)", "Homework Part 2: Genre Classification\nGoals\n\nExtract features from an audio signal.\nTrain a genre classifier.\nUse the classifier to classify genre in a song.\n\nStep 1: Retrieve Audio\nDownload an audio file onto your local machine.", "filename1 = 'brahms_hungarian_dance_5.mp3'\nurl = \"http://audio.musicinformationretrieval.com/\" + filename1\nif not os.path.exists(filename1):\n urllib.urlretrieve(url, filename=filename1)", "Load 120 seconds of an audio file:", "librosa.load?\n\nx1, fs1 = librosa.load(filename1, duration=120)", "Plot the time-domain waveform of the audio signal:", "plt.plot?\n\n# Your code here:", "Play the audio file:", "IPython.display.Audio?\n\n# Your code here:", "Step 2: Extract Features\nFor each segment, compute the MFCCs. Experiment with n_mfcc to select a different number of coefficients, e.g. 12.", "librosa.feature.mfcc?\n\nmfcc1 = librosa.feature.mfcc(x1, sr=fs1, n_mfcc=12).T\n\nmfcc1.shape", "Scale the features to have zero mean and unit variance:", "scaler = sklearn.preprocessing.StandardScaler()\n\nmfcc1_scaled = scaler.fit_transform(mfcc1)", "Verify that the scaling worked:", "mfcc1_scaled.mean(axis=0)\n\nmfcc1_scaled.std(axis=0)", "Repeat steps 1 and 2 for another audio file:", "filename2 = 'busta_rhymes_hits_for_days.mp3'\nurl = \"http://audio.musicinformationretrieval.com/\" + filename2\n\nurllib.urlretrieve?\n\n# Your code here. Download the second audio file in the same manner as the first audio file above.", "Load 120 seconds of an audio file:", "librosa.load?\n\n# Your code here. Load the second audio file in the same manner as the first audio file.", "Listen to the second audio file.", "IPython.display.Audio?", "Plot the time-domain waveform and spectrogram of the second audio file. 
In what ways does the time-domain waveform look different from that of the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.", "plt.plot?\n\n# See http://musicinformationretrieval.com/stft.html for more details on displaying spectrograms.\nlibrosa.feature.melspectrogram?\n\nlibrosa.logamplitude?\n\nlibrosa.display.specshow?", "[Please share your answer in this editable text cell.]\nExtract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.", "librosa.feature.mfcc?\n\nmfcc2.shape", "Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.", "scaler.transform?", "Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.", "mfcc2_scaled.mean?\n\nmfcc2_scaled.std?", "Step 3: Train a Classifier\nConcatenate all of the scaled feature vectors into one feature table.", "features = numpy.vstack((mfcc1_scaled, mfcc2_scaled))\n\nfeatures.shape", "Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.", "labels = numpy.concatenate((numpy.zeros(len(mfcc1_scaled)), numpy.ones(len(mfcc2_scaled))))", "Create a classifier model object:", "# Support Vector Machine\nmodel = sklearn.svm.SVC()", "Train the classifier:", "model.fit?\n\n# Your code here", "Step 4: Run the Classifier\nTo test the classifier, we will extract an unused 10-second segment from the earlier audio files as test excerpts:", "x1_test, fs1 = librosa.load(filename1, duration=10, offset=120)\n\nx2_test, fs2 = librosa.load(filename2, duration=10, offset=120)", "Listen to both of the test audio excerpts:", 
"IPython.display.Audio?\n\nIPython.display.Audio?", "Compute MFCCs from both of the test audio excerpts:", "librosa.feature.mfcc?\n\nlibrosa.feature.mfcc?", "Scale the MFCCs using the previous scaler:", "scaler.transform?\n\nscaler.transform?", "Concatenate all test features together:", "numpy.vstack?", "Concatenate all test labels together:", "numpy.concatenate?", "Compute the predicted labels:", "model.predict?", "Finally, compute the accuracy score of the classifier on the test data:", "score = model.score(test_features, test_labels)\n\nscore", "Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?", "# Your code here.", "[Explain your approach in this editable text cell.]\nStep 5: Analysis in Pandas\nRead the MFCC features from the first test audio excerpt into a data frame:", "df1 = pandas.DataFrame(mfcc1_test_scaled)\n\ndf1.shape\n\ndf1.head()\n\ndf2 = pandas.DataFrame(mfcc2_test_scaled)", "Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?", "df1.corr()\n\ndf2.corr()", "[Explain your answer in this editable text cell.]\nDisplay a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.", "df1.plot.scatter?", "Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.", "df2.plot.scatter?", "Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. 
Repeat for a few different MFCC numbers:", "df1[0].plot.hist()\n\ndf1[11].plot.hist()", "Extra Credit\nCreate a new genre classifier by repeating the steps above, but this time use training data and test data from your own audio collection representing two or more different genres. For what genres and audio data styles does the classifier work well, and for which (pairs of) genres does the classifier fail?\nCreate a new genre classifier by repeating the steps above, but this time use a different machine learning classifier, e.g. random forest, Gaussian mixture model, Naive Bayes, k-nearest neighbor, etc. Adjust the parameters. How well do they perform?\nCreate a new genre classifier by repeating the steps above, but this time use different features. Consult the librosa documentation on feature extraction for different choices of features. Which features work well? not well?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gururajl/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)\n\nlen(source_text)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n #from IPython.core.debugger import Tracer; Tracer()()\n source_id_text = [[source_vocab_to_int[w] for w in sen.split()] for sen in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[w] for w in sen.split()] + [target_vocab_to_int['<EOS>']] for sen in target_text.split('\\n')]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n # TODO: Implement Function\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n # strip the last col\n stripped = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n # prepend with 
<GO>\n processed = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), stripped], 1)\n\n return processed\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # TODO: Implement Function\n embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n \n #LSTM cell\n def build_cell(rnn_size):\n lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=123))\n dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n return dropout\n \n cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(cell, embed, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state\n \n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a 
tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # TODO: Implement Function\n helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, \n sequence_length=target_sequence_length, \n time_major=False)\n \n decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n helper,\n encoder_state,\n output_layer)\n \n decoder_output = tf.contrib.seq2seq.dynamic_decode(decoder,\n impute_finished=True,\n maximum_iterations=max_summary_length)[0]\n \n return decoder_output \n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param 
end_of_sequence_id: EOS id\n    :param max_target_sequence_length: Maximum length of target sequences\n    :param vocab_size: Size of decoder/target vocabulary\n    :param output_layer: Function to apply the output layer\n    :param batch_size: Batch size\n    :param keep_prob: Dropout keep probability\n    :return: BasicDecoderOutput containing inference logits and sample_id\n    \"\"\"\n    # TODO: Implement Function\n    \n    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), \n                           [batch_size], name='start_tokens')\n    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n                                                                start_tokens,\n                                                                end_of_sequence_id)\n\n    # Basic decoder\n    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n                                                        inference_helper,\n                                                        encoder_state,\n                                                        output_layer)\n\n    # Perform dynamic decoding using the decoder\n    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n                                                                 impute_finished=True,\n                                                                 maximum_iterations=max_target_sequence_length)[0]\n\n\n    return inference_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables 
between training and inference.", "# Dense is needed for the output projection layer below (TF 1.x path)\nfrom tensorflow.python.layers.core import Dense\n\ndef decoding_layer(dec_input, encoder_state,\n                   target_sequence_length, max_target_sequence_length,\n                   rnn_size,\n                   num_layers, target_vocab_to_int, target_vocab_size,\n                   batch_size, keep_prob, decoding_embedding_size):\n    \"\"\"\n    Create decoding layer\n    :param dec_input: Decoder input\n    :param encoder_state: Encoder state\n    :param target_sequence_length: The lengths of each sequence in the target batch\n    :param max_target_sequence_length: Maximum length of target sequences\n    :param rnn_size: RNN Size\n    :param num_layers: Number of layers\n    :param target_vocab_to_int: Dictionary to go from the target words to an id\n    :param target_vocab_size: Size of target vocabulary\n    :param batch_size: The size of the batch\n    :param keep_prob: Dropout keep probability\n    :param decoding_embedding_size: Decoding embedding size\n    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n    \"\"\"\n    # TODO: Implement Function\n    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n    \n    #LSTM cell\n    def build_cell(rnn_size):\n        lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=123))\n        dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n        return dropout\n    \n    dec_cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size) for _ in range(num_layers)])\n    \n    output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))\n    \n    with tf.variable_scope(\"decode\"):\n        decoder_output = decoding_layer_train(encoder_state, dec_cell, \n                                              dec_embed_input, target_sequence_length, \n                                              max_target_sequence_length, output_layer, keep_prob)\n    \n    with tf.variable_scope(\"decode\", reuse=True):\n        decoder_infer = decoding_layer_infer(encoder_state, dec_cell, \n                                             dec_embeddings, target_vocab_to_int['<GO>'], \n                                             target_vocab_to_int['<EOS>'], 
max_target_sequence_length, \n                                             target_vocab_size, output_layer, batch_size, keep_prob)\n    \n    return decoder_output, decoder_infer\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n                  source_sequence_length, target_sequence_length,\n                  max_target_sentence_length,\n                  source_vocab_size, target_vocab_size,\n                  enc_embedding_size, dec_embedding_size,\n                  rnn_size, num_layers, target_vocab_to_int):\n    \"\"\"\n    Build the Sequence-to-Sequence part of the neural network\n    :param input_data: Input placeholder\n    :param target_data: Target placeholder\n    :param keep_prob: Dropout keep probability placeholder\n    :param batch_size: Batch Size\n    :param source_sequence_length: Sequence Lengths of source sequences in the batch\n    :param target_sequence_length: Sequence Lengths of target sequences in the batch\n    :param source_vocab_size: Source vocabulary size\n    :param target_vocab_size: Target vocabulary size\n    :param enc_embedding_size: Encoder embedding size\n    :param dec_embedding_size: Decoder embedding size\n    :param rnn_size: RNN Size\n    :param num_layers: Number of layers\n    :param target_vocab_to_int: Dictionary to go from the target words to an id\n    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n    \"\"\"\n    # TODO: Implement Function\n    _, 
enc_state = encoding_layer(input_data, rnn_size, num_layers, \n keep_prob, source_sequence_length, \n source_vocab_size, enc_embedding_size)\n \n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n \n \n train_decoder_output, infer_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, \n max_target_sentence_length, rnn_size, \n num_layers, target_vocab_to_int, \n target_vocab_size, batch_size, keep_prob, \n dec_embedding_size)\n return train_decoder_output, infer_decoder_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement", "# Number of Epochs\nepochs = 8\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 128\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 200\ndecoding_embedding_size = 200\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.7\ndisplay_step = 200", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, 
target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount 
for the batch\n        sources_batch = sources[start_i:start_i + batch_size]\n        targets_batch = targets[start_i:start_i + batch_size]\n\n        # Pad\n        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n        # Need the lengths for the _lengths parameters\n        pad_targets_lengths = []\n        for target in pad_targets_batch:\n            pad_targets_lengths.append(len(target))\n\n        pad_source_lengths = []\n        for source in pad_sources_batch:\n            pad_source_lengths.append(len(source))\n\n        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n    \"\"\"\n    Calculate accuracy\n    \"\"\"\n    max_seq = max(target.shape[1], logits.shape[1])\n    if max_seq - target.shape[1]:\n        target = np.pad(\n            target,\n            [(0,0),(0,max_seq - target.shape[1])],\n            'constant')\n    if max_seq - logits.shape[1]:\n        logits = np.pad(\n            logits,\n            [(0,0),(0,max_seq - logits.shape[1])],\n            'constant')\n\n    return np.mean(np.equal(target, logits))\n\n# Split data into training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n                                                                                                             valid_target,\n                                                                                                             batch_size,\n                                                                                                             source_vocab_to_int['<PAD>'],\n                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  \nwith tf.Session(graph=train_graph) as sess:\n    sess.run(tf.global_variables_initializer())\n\n    for epoch_i in range(epochs):\n        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n                get_batches(train_source, train_target, batch_size,\n                            
source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
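For example, with a toy vocabulary (hypothetical ids, not the real source_vocab_to_int), the behavior you're after looks like this:

```python
# Lowercase the sentence, map known words to their ids, and fall back to
# the <UNK> id for anything outside the vocabulary.
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}

def to_ids(sentence, vocab):
    return [vocab.get(word, vocab['<UNK>']) for word in sentence.lower().split()]

print(to_ids('He saw a YELLOW truck', vocab_to_int))  # [1, 2, 3, 0, 4]
```

Here 'yellow' is not in the toy vocabulary, so it maps to the `<UNK>` id.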
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n    \"\"\"\n    Convert a sentence to a sequence of ids\n    :param sentence: String\n    :param vocab_to_int: Dictionary to go from the words to an id\n    :return: List of word ids\n    \"\"\"\n    # TODO: Implement Function\n    # avoid shadowing the built-in name `input`\n    word_ids = [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()]\n    return word_ids\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n    # Load saved model\n    loader = tf.train.import_meta_graph(load_path + '.meta')\n    loader.restore(sess, load_path)\n\n    input_data = loaded_graph.get_tensor_by_name('input:0')\n    logits = loaded_graph.get_tensor_by_name('predictions:0')\n    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,\n                                         source_sequence_length: [len(translate_sentence)]*batch_size,\n                                         keep_prob: 1.0})[0]\n\nprint('Input')\nprint('  Word Ids:      {}'.format([i for i in translate_sentence]))\nprint('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint('  Word Ids:      
{}'.format([i for i in translate_logits]))\nprint('  French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words out of the thousands in everyday use, you're only going to see good results when you stick to those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and covers a richer range of topics. However, it will take you days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xaibeing/cn-deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. 
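As a tiny illustration of the word-to-id lookup you'll build below (toy word list, not the project's data or helper):

```python
# Build the two lookup dictionaries from a made-up list of words.
words = ['moe', 'homer', 'moe', 'beer']

int_to_vocab = dict(enumerate(sorted(set(words))))              # id -> word
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}  # word -> id

print(vocab_to_int)  # {'beer': 0, 'homer': 1, 'moe': 2}
```

Sorting the set here just makes the ids reproducible for the example; the project's implementation doesn't need to sort.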
Implement the preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n    \"\"\"\n    Create lookup tables for vocabulary\n    :param text: The text of tv scripts split into words\n    :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n    \"\"\"\n    # TODO: Implement Function\n    word_set = set(text)\n    int_to_vocab = {ii: word for ii, word in enumerate(word_set)}\n    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}\n\n    return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and the value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. 
Make sure you don't use a token that could be confused with a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n    \"\"\"\n    Generate a dict to turn punctuation into a token.\n    :return: Tokenize dictionary where the key is the punctuation and the value is the token\n    \"\"\"\n    # TODO: Implement Function\n    return {'.' : '||Period||',\n            ',' : '||Comma||',\n            '\"' : '||QuotationMark||',\n            ';' : '||Semicolon||',\n            '!' : '||ExclamationMark||',\n            '?' : '||QuestionMark||',\n            '(' : '||LeftParentheses||',\n            ')' : '||RightParentheses||',\n            '--' : '||Dash||',\n            '\\n' : '||Return||'}\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "#The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
\n#Here the \"2\" means the input and the target\n#Each batch contains two elements:\n# The first element is a single batch of input with the shape [batch size, sequence length]\n# The second element is a single batch of targets with the shape [batch size, sequence length]\n\ndef get_inputs():\n    \"\"\"\n    Create TF Placeholders for input, targets, and learning rate.\n    :return: Tuple (input, targets, learning rate)\n    \"\"\"\n    # TODO: Implement Function\n    \n    # The input shape should be (batch_size, seq_length)\n    inputs = tf.placeholder(tf.int32, shape=(None, None), name='input')\n    # The targets shape should also be (batch_size, seq_length)\n    targets = tf.placeholder(tf.int32, shape=(None, None), name='output')\n    learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')\n    return inputs, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The RNN size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n    - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "# num LSTM layers\nnum_layers = 1\n\ndef get_init_cell(batch_size, rnn_size):\n    \"\"\"\n    Create an RNN Cell and initialize it.\n    :param batch_size: Size of batches\n    :param rnn_size: Size of RNNs\n    :return: Tuple (cell, initialize state)\n    \"\"\"\n    # TODO: Implement Function\n    # Build a fresh cell per layer; reusing one cell object across all layers\n    # breaks variable sharing when num_layers > 1\n    cells = tf.contrib.rnn.MultiRNNCell(\n        [tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(num_layers)])\n    initial_state = cells.zero_state(batch_size, tf.float32)\n    initial_state = tf.identity(initial_state, name='initial_state')\n    return cells, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply 
embedding to input_data using TensorFlow. Return the embedded sequence.", "# calculate from input_data to embed output\ndef get_embed(input_data, vocab_size, embed_dim):\n    \"\"\"\n    Create embedding for <input_data>.\n    :param input_data: TF placeholder for text input.\n    :param vocab_size: Number of words in vocabulary.\n    :param embed_dim: Number of embedding dimensions\n    :return: Embedded input.\n    \"\"\"\n    # TODO: Implement Function\n    embedding = tf.Variable(tf.truncated_normal(shape=[vocab_size, embed_dim], mean=0, stddev=1)) # create embedding weight matrix here\n    embed = tf.nn.embedding_lookup(embedding, input_data) # use tf.nn.embedding_lookup to get the hidden layer output\n    return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.\n- Build the RNN using tf.nn.dynamic_rnn()\n    - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final state in the following tuple (Outputs, FinalState)", "# calculate from embed output to LSTM output, fully dynamic unrolling of sequence steps\ndef build_rnn(cell, inputs):\n    \"\"\"\n    Create an RNN using an RNN Cell\n    :param cell: RNN Cell\n    :param inputs: Input text data\n    :return: Tuple (Outputs, Final State)\n    \"\"\"\n    # TODO: Implement Function\n    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n    final_state = tf.identity(final_state, name='final_state')\n    return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear 
activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n    \"\"\"\n    Build part of the neural network\n    :param cell: RNN cell\n    :param rnn_size: Size of rnns\n    :param input_data: Input data\n    :param vocab_size: Vocabulary size\n    :param embed_dim: Number of embedding dimensions\n    :return: Tuple (Logits, FinalState)\n    \"\"\"\n    # TODO: Implement Function\n    embed = get_embed(input_data, vocab_size, embed_dim)\n    outputs, final_state = build_rnn(cell, embed)\n    logits = tf.layers.dense(outputs, vocab_size)\n    return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n  # First Batch\n  [\n    # Batch of Input\n    [[ 1  2  3], [ 7  8  9]],\n    # Batch of targets\n    [[ 2  3  4], [ 8  9 10]]\n  ],\n# Second Batch\n  [\n    # Batch of Input\n    [[ 4  5  6], [10 11 12]],\n    # Batch of targets\n    [[ 5  6  7], [11 12 13]]\n  ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n    \"\"\"\n    Return batches of input and target\n    :param int_text: Text with the words replaced by their ids\n    :param batch_size: The size of batch\n    :param seq_length: The length of sequence\n    :return: Batches as a Numpy array\n    \"\"\"\n    # TODO: Implement Function\n    segment_len = (len(int_text) - 
1) // batch_size\n num_seqs = segment_len // seq_length\n segment_len = num_seqs * seq_length\n# use_text_len = segment_len * batch_size + 1\n \n batches = np.zeros(shape=(num_seqs, 2, batch_size, seq_length))\n for s in range(num_seqs):\n# for j in range(2):\n for b in range(batch_size):\n batches[s, 0, b, :] = int_text[b*segment_len+s*seq_length : b*segment_len+s*seq_length+seq_length]\n batches[s, 1, b, :] = int_text[b*segment_len+s*seq_length+1 : b*segment_len+s*seq_length+seq_length+1]\n return batches\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 80\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 128\n# Embedding Dimension Size\nembed_dim = 200\n# Sequence Length\nseq_length = 20\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 26\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = 
tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate 
Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return loaded_graph.get_tensor_by_name('input:0'), \\\n loaded_graph.get_tensor_by_name('initial_state:0'), \\\n loaded_graph.get_tensor_by_name('final_state:0'), \\\n loaded_graph.get_tensor_by_name('probs:0')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilities of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n rnd_idx = np.random.choice(len(probabilities), p=probabilities)\n return int_to_vocab[rnd_idx]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. 
After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ioos/notebooks_demos
notebooks/2020-10-10-GTS.ipynb
mit
[ "IOOS GTS Statistics\nThe Global Telecommunication System (GTS) is a coordinated effort for rapid distribution of observations.\nThe GTS monthly reports show the number of messages released to GTS for each station.\nThe reports contain the following fields:\n\nlocation ID: Identifier under which station messages are released to the GTS;\nregion: Designated IOOS Regional Association (only for IOOS regional report);\nsponsor: Organization that owns and maintains the station;\nMet: Total number of met messages released to the GTS\nWave: Total number of wave messages released to the GTS\n\nIn this notebook we will explore the statistics of the messages IOOS is releasing to GTS.\nThe first step is to download the data. We will use an ERDDAP server that hosts the CSV files with the ingest data.", "from datetime import date\n\nfrom erddapy import ERDDAP\n\nserver = \"http://osmc.noaa.gov/erddap\"\ne = ERDDAP(server=server, protocol=\"tabledap\")\n\ne.dataset_id = \"ioos_obs_counts\"\ne.variables = [\"time\", \"locationID\", \"region\", \"sponsor\", \"met\", \"wave\"]\ne.constraints = {\n \"time>=\": \"2019-09\",\n \"time<\": \"2020-11\",\n}\n\ndf = e.to_pandas(parse_dates=True)\n\ndf[\"locationID\"] = df[\"locationID\"].str.lower()\n\ndf.tail()", "The table has all the ingest data from September 2019 through October 2020, matching the time constraints in the request above. 
We can now explore it by grouping the data by IOOS Regional Association (RA).", "groups = df.groupby(\"region\")\n\nax = groups.sum().plot(kind=\"bar\", figsize=(11, 3.75))\nax.yaxis.get_major_formatter().set_scientific(False)\nax.set_ylabel(\"# observations\");", "Let us check the monthly sum of data released, both for the individual met and wave messages and for the totals.", "import pandas as pd\n\ndf[\"time (UTC)\"] = pd.to_datetime(df[\"time (UTC)\"])\n# Remove time-zone info for easier plotting, it is all UTC.\ndf[\"time (UTC)\"] = df[\"time (UTC)\"].dt.tz_localize(None)\n\ngroups = df.groupby(pd.Grouper(key=\"time (UTC)\", freq=\"M\"))", "We can create a table of observations per month,", "s = groups.sum()\ntotals = s.assign(total=s[\"met\"] + s[\"wave\"])\ntotals.index = totals.index.to_period(\"M\")\n\ntotals", "and visualize it in a bar plot.", "%matplotlib inline\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(11, 3.75))\n\ns.plot(ax=ax, kind=\"bar\")\nax.set_xticklabels(\n labels=s.index.to_series().dt.strftime(\"%Y-%b\"),\n rotation=70,\n rotation_mode=\"anchor\",\n ha=\"right\",\n)\nax.yaxis.get_major_formatter().set_scientific(False)\nax.set_ylabel(\"# observations\")", "Those plots are interesting for understanding the RAs' role in the GTS ingest and how much data is being released over time. It would be nice to see those per buoy on a map.\nFor that we need to get the position of the NDBC buoys. 
Let's get a table of all the buoys and match it with what we have in the GTS data.", "import xml.etree.ElementTree as et\n\nimport pandas as pd\nimport requests\n\n\ndef make_ndbc_table():\n url = \"https://www.ndbc.noaa.gov/activestations.xml\"\n with requests.get(url) as r:\n elems = et.fromstring(r.content)\n df = pd.DataFrame([elem.attrib for elem in list(elems)])\n df[\"id\"] = df[\"id\"].str.lower()\n return df.set_index(\"id\")\n\n\nbuoys = make_ndbc_table()\nbuoys[\"lon\"] = buoys[\"lon\"].astype(float)\nbuoys[\"lat\"] = buoys[\"lat\"].astype(float)\n\nbuoys.head()", "For simplicity we will plot the total of observations per buoy.", "groups = df.groupby(\"locationID\")\nlocation_sum = groups.sum()\n\nbuoys = buoys.T\n\nextra_cols = pd.DataFrame({k: buoys.get(k) for k, row in location_sum.iterrows()}).T\nextra_cols = extra_cols[[\"lat\", \"lon\", \"type\", \"pgm\", \"name\"]]\n\nmap_df = pd.concat([location_sum, extra_cols], axis=1)\nmap_df = map_df.loc[map_df[\"met\"] + map_df[\"wave\"] > 0]", "And now we can overlay an HTML table with the buoy information and ingest data totals.", "from ipyleaflet import AwesomeIcon, Marker, Map, LegendControl, FullScreenControl, Popup\nfrom ipywidgets import HTML\n\n\nm = Map(center=(35, -95), zoom=4)\nm.add_control(FullScreenControl())\n\nlegend = LegendControl(\n {\n \"wave\": \"#FF0000\",\n \"met\": \"#FFA500\",\n \"both\": \"#008000\"\n },\n name=\"GTS\",\n position=\"bottomright\",\n)\nm.add_control(legend)\n\n\ndef make_popup(row):\n classes = \"table table-striped table-hover table-condensed table-responsive\"\n return pd.DataFrame(row[[\"met\", \"wave\", \"type\", \"name\", \"pgm\"]]).to_html(\n classes=classes\n )\n\nfor k, row in map_df.iterrows():\n if (row[\"met\"] + row[\"wave\"]) > 0:\n location = row[\"lat\"], row[\"lon\"]\n if row[\"met\"] == 0:\n color = \"red\"\n elif row[\"wave\"] == 0:\n color = \"orange\"\n else:\n color = \"green\"\n marker = Marker(\n draggable=False,\n 
icon=AwesomeIcon(name=\"life-ring\", marker_color=color),\n location=location,\n )\n msg = HTML()\n msg.value = make_popup(row)\n marker.popup = msg\n m.add_layer(marker)\nm" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mauriciogtec/PropedeuticoDataScience2017
Alumnos/JuanPabloDeBotton/Tarea2_JuanPabloDeBotton.ipynb
mit
[ "Tarea 2: Linear Algebra and SVD Decomposition\nLinear Algebra and Optimization Theory\n1. Why does a matrix correspond to a linear transformation between vector spaces?\nMultiplying by a matrix implies scalings, dimension reductions, and rotations that can be observed in geometric space. All of these transformations are linear insofar as they can be represented as linear combinations of the form $\alpha*x + \beta$, where $\alpha$ and $\beta$ are parameters determined by the matrix.\n2. What is the linear-transformation effect of a diagonal matrix, and that of an orthogonal matrix?\nA diagonal matrix rescales each of the columns of the vector or matrix to which the linear transformation is applied, while an orthogonal matrix generates an isometric transformation that can be any of three types: a rotation, a translation, or a reflection.\n3. What is the singular value decomposition (SVD) of a matrix?\nIt is the factorization of a matrix into the product of three matrices: the matrix of eigenvectors, the singular components, and the transpose of the eigenvectors. The decomposition is carried out as follows: $ A = U \Sigma V^{T} $ where U = the matrix of eigenvectors, $\Sigma$ = the matrix of singular values, i.e. the square roots of the eigenvalues of $A^{T}A$, and $V^{T}$ = the transposed matrix of right eigenvectors. The SVD decomposition has many applications in areas such as principal component analysis, dimensionality reduction, image compression, etc.\n4. What does it mean to diagonalize a matrix, and what do the eigenvectors represent?\nIt is the factorization of the matrix into the three basic matrices mentioned above (eigenvectors, eigenvalues, and the inverse of the eigenvectors). This is written as follows: $A = PDP^{-1}$ where P = the matrix of eigenvectors, D = the diagonal matrix of eigenvalues, and $P^{-1}$ = the inverse matrix of eigenvectors. The eigenvectors of a linear transformation are vectors for which multiplying them by the transformation matrix is equivalent to multiplying them by a scalar that we call an 'eigenvalue' ($ Ax = \lambda x $). This implies that when the linear transformation is applied to these vectors (multiplying them by the transformation matrix) they do not change direction; they only change their length or their sense (when the eigenvalue is negative). \n5. Intuitively, what are eigenvectors?\nThey can be interpreted as Cartesian axes of the linear transformation.\n6. How do you interpret the singular value decomposition as a composition of three simple types of linear transformations?\nThe three matrices that make up the SVD can be seen as simple transformations: an initial rotation, a scaling along the principal axes (the singular values), and a final rotation. \n7. What is the relationship between the singular value decomposition and diagonalization?\nThe SVD decomposition is a generalization of matrix diagonalization. When a matrix is not square it is not diagonalizable; nevertheless, it can still be decomposed via SVD. Likewise, using the SVD decomposition of a matrix it is possible to solve systems of linear equations that do not have a unique solution; in that case the returned solution is the least-squares fit (and, obviously, if the system has a solution, it returns the solution to the system). \n8. How is the singular value decomposition used to give a lower-rank approximation of a matrix?\nIn the SVD decomposition of an mxn matrix, one obtains a decomposition into three matrices whose product yields the complete original matrix; however, given the properties of the decomposition, one can use only the most relevant principal components, discarding columns of the matrix $U_{mxr}$, rows or columns of the matrix of principal components $\Sigma_{rxr}$, and rows of the matrix $V_{rxn}^{T}$. Multiplying these still yields the mxn size of the original matrix, but with a certain error; the more principal components, columns, and vectors of the matrices U, S, VT are considered, the better the reconstruction of the original matrix will be. This decomposition is very useful for compressing information and for principal component analysis (PCA).\n9. Describe the gradient descent minimization method\nIt is a first-order iterative method for minimizing a function given its gradient. Knowing the function to be optimized, the gradient (the vector of partial derivatives) is computed; then a random starting point is chosen, its coordinates are substituted into the gradient vector, the component of the gradient vector (x, y, or z) with the most negative value is identified, and a small step is taken in that direction so as to move closer to the local minimum, and so on until reaching a point of convergence (a local minimum). Among its applications are: finding local minima of functions, solving systems of linear equations, and solving systems of nonlinear equations. \n10. Mention 4 examples of optimization problems (two with constraints and two without) that you find interesting as a data scientist\nWith constraints: economic growth models, optimal taxation models. Without constraints: intertemporal maximization of resource extraction from a mine, minimization of variance.\nApplications in Python\nExercise 1\nReceive the path of an image file and convert it into a numeric matrix that represents the black-and-white version of the image", "from PIL import Image\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n#url = sys.argv[1]\nurl = 'pikachu.png'\n\nimg = Image.open(url)\n\nimggray = img.convert('LA')", "Perform and verify the SVD decomposition", "imggrayArray = np.array(list(imggray.getdata(band=0)), float)\nimggrayArray.shape = (imggray.size[1], imggray.size[0])\nimggrayArray = np.matrix(imggrayArray)\n\nplt.imshow(imggray)\n\nplt.show()\n\nu, s, v = np.linalg.svd(imggrayArray)\n\nprint(\"U: \")\nprint(u)\nprint(\"S: \")\nprint(s)\nprint(\"V: \")\nprint(v)", "For an image of your choice, pick different approximation levels for the original image", "for i in range(1,60,15):\n reconstimg = np.matrix(u[:, :i]) * np.diag(s[:i]) * np.matrix(v[:i,:])\n plt.imshow(reconstimg, cmap='gray')\n plt.show()", "Exercise 2", "def lin_solve_pseudo(A,b): \n pseudoinv = pseudoinverse(A)\n return np.matmul(pseudoinv,b)\n\ndef pseudoinverse(A): \n u,s,v = np.linalg.svd(A) \n diagonal = np.diag(s)\n if v.shape[0] > diagonal.shape[1]: \n print(\"Agregando columnas a la sigma\")\n vector = np.array([[0 for x in range(v.shape[0] - diagonal.shape[1])] for y in range(diagonal.shape[0])]) \n diagonal = np.concatenate((diagonal, vector), axis=1)\n\n elif u.shape[1] > diagonal.shape[0]:\n print(\"Agregando renglones a la sigma\")\n vector = np.array([[0 for x in range(diagonal.shape[0])] for y in range(u.shape[1]-diagonal.shape[0])])\n diagonal = np.concatenate((diagonal, vector), axis=0)\n \n for a in range(diagonal.shape[0]):\n for b in range(diagonal.shape[1]):\n if diagonal[a][b] != 0:\n diagonal[a][b] = 1/diagonal[a][b]\n \n resultante = np.dot(np.transpose(v),np.transpose(diagonal))\n resultante = np.dot(resultante,np.transpose(u))\n return resultante", "Write a function that, given any matrix, returns its pseudoinverse using the SVD decomposition. Write another function that solves any system of equations of the form Ax=b using this pseudoinverse.", "A = np.array([[1,1,1],[1,1,3],[2,4,4]])\nb = np.array([[18,30,68]])\n\nsolve = lin_solve_pseudo(A,np.transpose(b))\nprint(solve)", "2. Play with the system Ax=b where A=[[1,1],[0,0]] and b can take different values.\n(a) Observe what happens if b is in the image of A (state what the image is) and if it is not (e.g. b = [1,1]).", "print(\"La imagen de A es cualquier vector de dos coordenadas en donde la segunda componente siempre sea cero\")\nprint(\"Vector b en imagen de A\")\nA = np.array([[1,1],[0,0]])\nb = np.array([[12,0]])\nsolve = lin_solve_pseudo(A, np.transpose(b))\nprint(solve)\nprint(\"Cuando b esta en la imagen, la funcion lin_solve_pseudo devuelve la solucion unica a su sistema\")\nprint(\"Vector b no en imagen de A\")\nb = np.array([[12,8]])\nsolve = lin_solve_pseudo(A, np.transpose(b))\nprint(solve)\nprint(\"Cuando b no esta en la imagen, devuelve la solucion mas aproximada a su sistema\")", "(b) Answer: is the resulting solution unique? If there is more than one solution, investigate what characterizes the returned solution.", "print(\"Vector b no en imagen de A\")\nb = np.array([[12,8]])\nsolve = lin_solve_pseudo(A, np.transpose(b))\nprint(solve)\nprint(\"Cuando b no esta en la imagen, devuelve la solucion mas aproximada a su sistema\")", "(c) Repeat with A=[[1,1],[0,1e-32]]. Is the solution unique in this case? Does the returned value of x change for each of the possible values of b from the previous point?", "A = np.array([[1,1],[0,1e-32]])\nb = np.array([[12,9]])\nsolve = lin_solve_pseudo(A, np.transpose(b))\nprint(solve)\ncadena = \"\"\"En este caso, la solucion devuelta siempre es el valor de la segunda coordenada del vector b por e+32\\\n y es el valor de ambas incognitas, solo que con signos contrarios ej(x1=-9.0e+32, x2=9.0e+32) \\\n esto debido a que cualquier numero entre un numero muy pequenio tiende a infinito, de manera que la \\\n coordenada dos del vector tiene mucho peso con referencia a la coordenada uno del vector\n \"\"\"\nprint(cadena)", "Exercise 3\nRead the file study_vs_sat.csv and store it as a pandas data frame.", "import pandas as pd\nimport matplotlib.pyplot as plt\n\ndata = pd.read_csv(\"./study_vs_sat.csv\", sep=',')\nprint(data)", "Pose this as an optimization problem that tries to make an approximation of the form sat_score ~ alpha + beta*study_hours by minimizing the sum of squared prediction errors. What is the gradient of the function we want to optimize (hint: the variables we want to optimize are alpha and beta)?", "hrs_studio = np.array(data[\"study_hours\"])\nsat_score = np.array(data[\"sat_score\"])\n\nA = np.vstack([hrs_studio, np.ones(len(hrs_studio))]).T\n\nm,c = np.linalg.lstsq(A,sat_score)[0]\n\nprint(\"Beta y alfa: \")\nprint(m,c)", "Write a function that receives values of alpha, beta, and the vector study_hours and returns a numpy array of predictions alpha + beta*study_hours_i, with one value for each individual", "def predict(alfa, beta, study_hours):\n study_hours_i=[]\n for a in range(len(study_hours)):\n study_hours_i.append(alfa + beta*np.array(study_hours[a]))\n return study_hours_i\n\nprint(\"prediccion\")\nprint(predict(353.165, 25.326, hrs_studio))", "Define a numpy array X with two columns, the first with ones in all of its entries and the second with the variable study_hours. Note that X[alpha,beta] returns alpha + beta study_hours_i in each entry, and that the problem then becomes sat_score ~ X*[alpha,beta]", "unos = np.ones((len(hrs_studio),1))\nhrs_studio = [hrs_studio]\nhrs_studio = np.transpose(hrs_studio)\nx = np.hstack((unos, hrs_studio))\nprint(\"La prediccion es: \")\nprint(np.matmul(x,np.array([[353.165],[25.326]])))", "Compute the pseudoinverse X^+ of X and compute (X^+)*sat_score to obtain the alpha and beta solutions.", "X_pseudo = pseudoinverse(x)\nprint(\"Las alfas y betas son: \")\nprint(np.matmul(X_pseudo,sat_score))", "Compare the previous solution with that of the direct exact-solution formula (alpha,beta)=(X^tX)^(-1)X^t*sat_score", "def comparacion(X, sat_score):\n x_transpose = np.transpose(X) \n return np.matmul(np.linalg.inv(np.matmul(x_transpose,X)), np.matmul(x_transpose,sat_score))", "Use the matplotlib library to visualize the predictions with the solution alpha and beta against the real sat_score values.", "plt.plot(hrs_studio, sat_score, 'x', label='Datos', markersize=20)\nplt.plot(hrs_studio, m*hrs_studio + c, 'r', label='Línea de regresión')\nplt.legend()\nplt.show()", "Implement the gradient descent method to obtain alpha and beta via a numerical optimization method. Experiment with different learning rates (step sizes).\nI did not get very far with this exercise." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
alexandrnikitin/algorithm-sandbox
courses/DAT256x/Module02/02 - 02 - Limits.ipynb
mit
[ "Limits\nYou can use algebraic methods to calculate the rate of change over a function interval by joining two points on the function with a secant line and measuring its slope. For example, a function might return the distance travelled by a cyclist in a period of time, and you can use a secant line to measure the average velocity between two points in time. However, this doesn't tell you the cyclist's velocity at any single point in time - just the average speed over an interval.\nTo find the cyclist's velocity at a specific point in time, you need the ability to find the slope of a curve at a given point. Differential Calculus enables us to do this through the use of derivatives. We can use derivatives to find the slope at a specific x value by calculating a delta for x<sub>1</sub> and x<sub>2</sub> values that are infinitesimally close together - so you can think of it as measuring the slope of a tiny straight line that comprises part of the curve.\nIntroduction to Limits\nHowever, before we can jump straight into derivatives, we need to examine another aspect of differential calculus - the limit of a function, which helps us measure how a function's value changes as the x<sub>2</sub> value approaches x<sub>1</sub>\nTo better understand limits, let's take a closer look at our function, and note that although we graph the function as a line, it is in fact made up of individual points. 
Run the following cell to show the points that we've plotted for integer values of x - the line is created by interpolating the points in between:", "%matplotlib inline\n\n# Here's the function\ndef f(x):\n return x**2 + x\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10 to plot\nx = list(range(0, 11))\n\n# Get the corresponding y values from the function\ny = [f(i) for i in x] \n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')\n\nplt.show()", "We know from the function that the f(x) values are calculated by squaring the x value and adding x, so we can easily calculate points in between and show them - run the following code to see this:", "%matplotlib inline\n\n# Here's the function\ndef f(x):\n return x**2 + x\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10 to plot\nx = list(range(0,5))\nx.append(4.25)\nx.append(4.5)\nx.append(4.75)\nx.append(5)\nx.append(5.25)\nx.append(5.5)\nx.append(5.75)\nx = x + list(range(6,11))\n\n# Get the corresponding y values from the function\ny = [f(i) for i in x] \n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')\n\nplt.show()", "Now we can see more clearly that this function line is formed of a continuous series of points, so theoretically for any given value of x there is a point on the line, and there is an adjacent point on either side with a value that is as close to x as possible, but not actually x.\nRun the following code to visualize a specific point for x = 5, and try to identify the closest point either side of it:", "%matplotlib inline\n\n# Here's the function\ndef f(x):\n return x**2 + x\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10 
to plot\nx = list(range(0,5))\nx.append(4.25)\nx.append(4.5)\nx.append(4.75)\nx.append(5)\nx.append(5.25)\nx.append(5.5)\nx.append(5.75)\nx = x + list(range(6,11))\n\n# Get the corresponding y values from the function\ny = [f(i) for i in x] \n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')\n\nzx = 5\nzy = f(zx)\nplt.plot(zx, zy, color='red', marker='o', markersize=10)\nplt.annotate('x=' + str(zx),(zx, zy), xytext=(zx - 0.5, zy + 5))\n\n# Plot f(x) when x = 5.25\nposx = 5.25\nposy = f(posx)\nplt.plot(posx, posy, color='blue', marker='<', markersize=10)\nplt.annotate('x=' + str(posx),(posx, posy), xytext=(posx + 0.5, posy - 1))\n\n# Plot f(x) when x = 4.75\nnegx = 4.75\nnegy = f(negx)\nplt.plot(negx, negy, color='orange', marker='>', markersize=10)\nplt.annotate('x=' + str(negx),(negx, negy), xytext=(negx - 1.5, negy - 1))\n\nplt.show()", "You can see the point where x is 5, and you can see that there are points shown on the graph that appear to be right next to this point (at x=4.75 and x=5.25). However, if we zoomed in we'd see that there are still gaps that could be filled by other values of x that are even closer to 5; for example, 4.9 and 5.1, or 4.999 and 5.001. If we could zoom infinitely close to the line we'd see that no matter how close a value you use (for example, 4.999999999999), there is always a value that's fractionally closer (for example, 4.9999999999999).\nSo what we can say is that there is a hypothetical number that's as close as possible to our desired value of x without actually being x, but we can't express it as a real number. Instead, we express it symbolically as a limit, like this:\n\\begin{equation}\\lim_{x \\to 5} f(x)\\end{equation}\nThis is interpreted as the limit of function f(x) as x approaches 5.\nLimits and Continuity\nThe function f(x) is continuous for all real-numbered values of x. 
Put simply, this means that you can draw the line created by the function without lifting your pen (we'll look at a more formal definition later in this course).\nHowever, this isn't necessarily true of all functions. Consider function g(x) below: \n\\begin{equation}g(x) = -(\\frac{12}{2x})^{2}\\end{equation}\nThis function is a little more complex than the previous one, but the key thing to note is that it requires a division by 2x. Now, ask yourself: what would happen if you applied this function to an x value of 0?\nWell, 2 x 0 is 0, and anything divided by 0 is undefined. So the domain of this function does not include 0; in other words, the function is defined when x is any real number such that x is not equal to 0. The function should therefore be written like this:\n\\begin{equation}g(x) = -(\\frac{12}{2x})^{2},\\;\\; x \\ne 0\\end{equation}\nSo why is this important? Let's investigate by running the following Python code to define the function and plot it for a set of arbitrary values:", "%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 0:\n return -(12/(2*x))**2\n \n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-20, 21)\n\n# Get the corresponding y values from the function\ny = [g(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='green')\n\nplt.show()", "Look closely at the plot, and note the gap in the line where x = 0. This indicates that the function is not defined here. The domain of the function (its set of possible input values) does not include 0, and its range (the set of possible output values) does not include a value for x=0.\nThis is a non-continuous function - in other words, it includes at least one gap when plotted (so you couldn't plot it by hand without lifting your pen). 
Specifically, the function is non-continuous at x=0.\nBy convention, when a non-continuous function is plotted, the points that form a continuous line (or interval) are shown as a line, and the end of each line where there is a discontinuity is shown as a circle, which is filled if the value at that point is included in the line and empty if the value is not included in the line.\nIn this case, the function produces two intervals with a gap between them where the function is not defined, so we can show the discontinuous point as an unfilled circle - run the following code to visualize this with Python:", "%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 0:\n return -(12/(2*x))**2\n \n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-20, 21)\n\n\n# Get the corresponding y values from the function\ny = [g(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='green')\n\n# plot a circle at the gap (or close enough anyway!)\nxy = (0,g(1))\nplt.annotate('O',xy, xytext=(-0.7, -37),fontsize=14,color='green')\n\nplt.show()", "There are a number of reasons a function might be non-continuous. 
For example, consider the following function:\n\\begin{equation}h(x) = 2\\sqrt{x},\\;\\; x \\ge 0\\end{equation}\nApplying this function to a non-negative x value returns a valid output; but for any value where x is negative, the output is undefined, because the square root of a negative value is not a real number.\nHere's the Python to plot function h:", "%matplotlib inline\n\ndef h(x):\n if x >= 0:\n import numpy as np\n return 2 * np.sqrt(x)\n\n# Plot output from function h\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-20, 21)\n\n# Get the corresponding y values from the function\ny = [h(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='green')\n\n# plot a circle close enough to the h(-x) limit for our purposes!\nplt.plot(0, h(0), color='green', marker='o', markerfacecolor='green', markersize=10)\n\nplt.show()", "Now, suppose we have a function like this:\n\\begin{equation}\nk(x) = \\begin{cases}\n x + 20, & \\text{if } x \\le 0, \\\n x - 100, & \\text{otherwise }\n\\end{cases}\n\\end{equation}\nIn this case, the function's domain includes all real numbers, but its output is still non-continuous because of the way different values are returned depending on the value of x. 
The range of possible outputs for k(x &le; 0) is k(x) &le; 20, and the range of output values for k(x > 0) is k(x) > -100.\nLet's use Python to plot function k:", "%matplotlib inline\n\ndef k(x):\n import numpy as np\n if x <= 0:\n return x + 20\n else:\n return x - 100\n\n# Plot output from function k\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values for each non-continuous interval\nx1 = range(-20, 1)\nx2 = range(1, 20)\n\n# Get the corresponding y values from the function\ny1 = [k(i) for i in x1]\ny2 = [k(i) for i in x2]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('k(x)')\nplt.grid()\n\n# Plot x against k(x)\nplt.plot(x1,y1, color='green')\nplt.plot(x2,y2, color='green')\n\n# plot a circle at the interval ends\nplt.plot(0, k(0), color='green', marker='o', markerfacecolor='green', markersize=10)\nplt.plot(0, k(0.0001), color='green', marker='o', markerfacecolor='w', markersize=10)\n\nplt.show()", "Finding Limits of Functions Graphically\nSo the question arises, how do we find a value for the limit of a function at a specific point?\nLet's explore this function, a:\n\\begin{equation}a(x) = x^{2} + 1\\end{equation}\nWe can start by plotting it:", "%matplotlib inline\n\n# Define function a\ndef a(x):\n return x**2 + 1\n\n\n# Plot output from function a\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [a(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('a(x)')\nplt.grid()\n\n# Plot x against a(x)\nplt.plot(x,y, color='purple')\n\nplt.show()", "Note that this function is continuous at all points; there are no gaps in its range. However, the range of the function is {a(x) &ge; 1} (in other words, all real numbers that are greater than or equal to 1). 
For negative values of x, the function appears to return ever-decreasing values as x gets closer to 0, and for positive values of x, the function appears to return ever-increasing values as x gets further from 0; but it never returns a value below 1.\nLet's plot the function for an x value of 0 and find out what value a(0) returns:", "%matplotlib inline\n\n# Define function a\ndef a(x):\n return x**2 + 1\n\n\n# Plot output from function a\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [a(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('a(x)')\nplt.grid()\n\n# Plot x against a(x)\nplt.plot(x,y, color='purple')\n\n# Plot a(x) when x = 0\nzx = 0\nzy = a(zx)\nplt.plot(zx, zy, color='red', marker='o', markersize=10)\nplt.annotate(str(zy),(zx, zy), xytext=(zx, zy + 5))\n\nplt.show()", "OK, so a(0) returns 1.\nWhat happens if we use x values that are very slightly higher or lower than 0?", "%matplotlib inline\n\n# Define function a\ndef a(x):\n return x**2 + 1\n\n\n# Plot output from function a\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [a(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('a(x)')\nplt.grid()\n\n# Plot x against a(x)\nplt.plot(x,y, color='purple')\n\n# Plot a(x) when x = 0.1\nposx = 0.1\nposy = a(posx)\nplt.plot(posx, posy, color='blue', marker='<', markersize=10)\nplt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))\n\n# Plot a(x) when x = -0.1\nnegx = -0.1\nnegy = a(negx)\nplt.plot(negx, negy, color='orange', marker='>', markersize=10)\nplt.annotate(str(negy),(negx, negy), xytext=(negx - 2, negy))\n\nplt.show()", "These x values return a(x) values that are just slightly above 1, and if we were to keep plotting numbers that are increasingly close to 0, for example 0.0000000001 or -0.0000000001, the function would 
still return a value that is just slightly greater than 1. The limit of function a(x) as x approaches 0, is 1; and the notation to indicate this is:\n\\begin{equation}\\lim_{x \\to 0} a(x) = 1 \\end{equation}\nThis reflects a more formal definition of function continuity. Previously, we stated that a function is continuous at a point if you can draw it at that point without lifting your pen. The more mathematical definition is that a function is continuous at a point if the limit of the function as it approaches that point from both directions is equal to the function's value at that point. In this case, as we approach x = 0 from both sides, the limit is 1; and the value of a(0) is also 1; so the function is continuous at x = 0.\nLimits at Non-Continuous Points\nLet's try another function, which we'll call b:\n\\begin{equation}b(x) = -2x^{2} \\cdot \\frac{1}{x},\\;\\;x\\ne0\\end{equation}\nNote that this function has a domain that includes all real number values of x such that x does not equal 0. In other words, the function will return a valid output for any number other than 0.\nLet's create it and plot it with Python:", "%matplotlib inline\n\n# Define function b\ndef b(x):\n if x != 0:\n return (-2*x**2) * 1/x\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [b(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('b(x)')\nplt.grid()\n\n# Plot x against b(x)\nplt.plot(x,y, color='purple')\n\nplt.show()", "The output from this function contains a gap in the line where x = 0. 
It seems that not only does the domain of the function (the values that can be passed in as x) exclude 0; but the range of the function (the set of values that can be returned from it) also excludes 0.\nWe can't evaluate the function for an x value of 0, but we can see what it returns for a value that is just very slightly less than 0:", "%matplotlib inline\n\n# Define function b\ndef b(x):\n if x != 0:\n return (-2*x**2) * 1/x\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [b(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('b(x)')\nplt.grid()\n\n# Plot x against b(x)\nplt.plot(x,y, color='purple')\n\n# Plot b(x) for x = -0.1\nnegx = -0.1\nnegy = b(negx)\nplt.plot(negx, negy, color='orange', marker='>', markersize=10)\nplt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))\n\nplt.show()", "We can even try a negative x value that's a little closer to 0.", "%matplotlib inline\n\n# Define function b\ndef b(x):\n if x != 0:\n return (-2*x**2) * 1/x\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [b(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('b(x)')\nplt.grid()\n\n# Plot x against b(x)\nplt.plot(x,y, color='purple')\n\n# Plot b(x) for x = -0.0001\nnegx = -0.0001\nnegy = b(negx)\nplt.plot(negx, negy, color='orange', marker='>', markersize=10)\nplt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))\n\nplt.show()", "So as the value of x gets closer to 0 from the left (negative), the value of b(x) is decreasing towards 0. 
We can show this with the following notation:\n\\begin{equation}\\lim_{x \\to 0^{-}} b(x) = 0 \\end{equation}\nNote that the arrow points to 0<sup>-</sup> (with a minus sign) to indicate that we're describing the limit as we approach 0 from the negative side.\nSo what about the positive side?\nLet's see what the function value is when x is 0.1:", "%matplotlib inline\n\n# Define function b\ndef b(x):\n if x != 0:\n return (-2*x**2) * 1/x\n\n\n# Plot output from function b\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [b(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('b(x)')\nplt.grid()\n\n# Plot x against b(x)\nplt.plot(x,y, color='purple')\n\n# Plot b(x) for x = 0.1\nposx = 0.1\nposy = b(posx)\nplt.plot(posx, posy, color='blue', marker='<', markersize=10)\nplt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))\n\nplt.show()", "What happens if we decrease the value of x so that it's even closer to 0?", "%matplotlib inline\n\n# Define function b\ndef b(x):\n if x != 0:\n return (-2*x**2) * 1/x\n\n\n# Plot output from function b\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the function\ny = [b(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('b(x)')\nplt.grid()\n\n# Plot x against b(x)\nplt.plot(x,y, color='purple')\n\n# Plot b(x) for x = 0.0001\nposx = 0.0001\nposy = b(posx)\nplt.plot(posx, posy, color='blue', marker='<', markersize=10)\nplt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))\n\nplt.show()", "As with the negative side, as x approaches 0 from the positive side, the value of b(x) gets closer to 0; and we can show that like this:\n\\begin{equation}\\lim_{x \\to 0^{+}} b(x) = 0 \\end{equation}\nNow, even though the function is not defined at x = 0; since the limit as we approach x = 0 from the negative side is 0, and 
the limit when we approach x = 0 from the positive side is also 0; we can say that the overall, or two-sided limit for the function at x = 0 is 0:\n\\begin{equation}\\lim_{x \\to 0} b(x) = 0 \\end{equation}\nSo can we therefore just ignore the gap and say that the function is continuous at x = 0? Well, recall that the formal definition for continuity is that to be continuous at a point, the function's limit as we approach the point in both directions must be equal to the function's value at that point. In this case, the two-sided limit as we approach x = 0 is 0, but b(0) is not defined; so the function is non-continuous at x = 0.\nOne-Sided Limits\nLet's take a look at a different function. We'll call this one c:\n\\begin{equation}\nc(x) = \\begin{cases}\n x + 20, & \\text{if } x \\le 5, \\\n x - 100, & \\text{otherwise }\n\\end{cases}\n\\end{equation}\nIn this case, the function's domain includes all real numbers, but its range is still non-continuous because of the way different values are returned depending on the value of x. 
The range of possible outputs for c(x &le; 5) is c(x) &le; 25, and the range of output values for c(x > 5) is c(x) > -95.\nLet's use Python to plot function c with some values for c(x) marked on the line:", "%matplotlib inline\n\ndef c(x):\n import numpy as np\n if x <= 5:\n return x + 20\n else:\n return x - 100\n\n# Plot output from function c\nfrom matplotlib import pyplot as plt\n\n# Create arrays of x values\nx1 = range(-20, 6)\nx2 = range(6, 21)\n\n# Get the corresponding y values from the function\ny1 = [c(i) for i in x1]\ny2 = [c(i) for i in x2]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('c(x)')\nplt.grid()\n\n# Plot x against c(x)\nplt.plot(x1,y1, color='purple')\nplt.plot(x2,y2, color='purple')\n\n# plot a circle close enough to the c limits for our purposes!\nplt.plot(5, c(5), color='purple', marker='o', markerfacecolor='purple', markersize=10)\nplt.plot(5, c(5.001), color='purple', marker='o', markerfacecolor='w', markersize=10)\n\n# plot some points from the +ve direction\nposx = [20, 15, 10, 6]\nposy = [c(i) for i in posx]\nplt.scatter(posx, posy, color='blue', marker='<', s=70)\nfor p in posx:\n plt.annotate(str(c(p)),(p, c(p)),xytext=(p, c(p) + 5))\n \n# plot some points from the -ve direction\nnegx = [-15, -10, -5, 0, 4]\nnegy = [c(i) for i in negx]\nplt.scatter(negx, negy, color='orange', marker='>', s=70)\nfor n in negx:\n plt.annotate(str(c(n)),(n, c(n)),xytext=(n, c(n) + 5))\n\nplt.show()", "The plot of the function shows a line in which the c(x) value increases towards 25 as x approaches 5 from the negative side:\n\\begin{equation}\\lim_{x \\to 5^{-}} c(x) = 25 \\end{equation}\nHowever, the c(x) value decreases towards -95 as x approaches 5 from the positive side:\n\\begin{equation}\\lim_{x \\to 5^{+}} c(x) = -95 \\end{equation}\nSo what can we say about the two-sided limit of this function at x = 5?\nThe limit as we approach x = 5 from the negative side is not equal to the limit as we approach x = 5 from the positive side, so no two-sided 
limit exists for this function at that point:\n\\begin{equation}\\lim_{x \\to 5} c(x) \\text{ does not exist} \\end{equation}\nAsymptotes and Infinity\nOK, time to look at another function:\n\\begin{equation}d(x) = \\frac{4}{x - 25},\\;\\; x \\ne 25\\end{equation}", "%matplotlib inline\n\n# Define function d\ndef d(x):\n if x != 25:\n return 4 / (x - 25)\n\n\n# Plot output from function d\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = list(range(-100, 24))\nx.append(24.9) # Add some fractional x\nx.append(25) # values around\nx.append(25.1) # 25 for finer-grain results\nx = x + list(range(26, 101))\n# Get the corresponding y values from the function\ny = [d(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('d(x)')\nplt.grid()\n\n# Plot x against d(x)\nplt.plot(x,y, color='purple')\n\nplt.show()", "What's the limit of d as x approaches 25?\nWe can plot a few points to help us:", "%matplotlib inline\n\n# Define function d\ndef d(x):\n if x != 25:\n return 4 / (x - 25)\n\n\n# Plot output from function d\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = list(range(-100, 24))\nx.append(24.9) # Add some fractional x\nx.append(25) # values around\nx.append(25.1) # 25 for finer-grain results\nx = x + list(range(26, 101))\n# Get the corresponding y values from the function\ny = [d(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('d(x)')\nplt.grid()\n\n# Plot x against d(x)\nplt.plot(x,y, color='purple')\n\n# plot some points from the +ve direction\nposx = [75, 50, 30, 25.5, 25.2, 25.1]\nposy = [d(i) for i in posx]\nplt.scatter(posx, posy, color='blue', marker='<')\nfor p in posx:\n plt.annotate(str(d(p)),(p, d(p)))\n \n# plot some points from the -ve direction\nnegx = [-55, 0, 23, 24.5, 24.8, 24.9]\nnegy = [d(i) for i in negx]\nplt.scatter(negx, negy, color='orange', marker='>')\nfor n in negx:\n plt.annotate(str(d(n)),(n, d(n)))\n\nplt.show()", "From these plotted values, we can see that as x 
approaches 25 from the negative side, d(x) is decreasing, and as x approaches 25 from the positive side, d(x) is increasing. As x gets closer to 25, d(x) increases or decreases more significantly.\nIf we were to plot every fractional value of d(x) for x values between 24.9 and 25, we'd see a line that decreases indefinitely, getting closer and closer to the x = 25 vertical line, but never actually reaching it. Similarly, plotting every x value between 25 and 25.1 would result in a line going up indefinitely, but always staying to the right of the vertical x = 25 line.\nThe x = 25 line in this case is an asymptote - a line to which a curve moves ever closer but never actually reaches. The positive limit for x = 25 in this case is not a real-numbered value, but infinity:\n\\begin{equation}\\lim_{x \\to 25^{+}} d(x) = \\infty \\end{equation}\nConversely, the negative limit for x = 25 is negative infinity:\n\\begin{equation}\\lim_{x \\to 25^{-}} d(x) = -\\infty \\end{equation}\nFinding Limits Numerically Using a Table\nUp to now, we've estimated limits for a point graphically by examining a graph of a function. You can also approximate limits by creating a table of x values and the corresponding function values either side of the point for which you want to find the limits.\nFor example, let's return to our a function:\n\\begin{equation}a(x) = x^{2} + 1\\end{equation}\nIf we want to find the limits as x is approaching 0, we can apply the function to some values either side of 0 and view them as a table. 
Here's some Python code to do that:", "# Define function a\ndef a(x):\n return x**2 + 1\n\n\nimport pandas as pd\n\n# Create a dataframe with an x column containing values either side of 0\ndf = pd.DataFrame ({'x': [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]})\n\n# Add an a(x) column by applying the function to x\ndf['a(x)'] = a(df['x'])\n\n#Display the dataframe\ndf", "Looking at the output, you can see that the function values are getting closer to 1 as x approaches 0 from both sides, so:\n\\begin{equation}\\lim_{x \\to 0} a(x) = 1 \\end{equation}\nAdditionally, you can see that the actual value of the function when x = 0 is also 1, so:\n\\begin{equation}\\lim_{x \\to 0} a(x) = a(0) \\end{equation}\nWhich, according to our earlier definition, means that the function is continuous at 0.\nHowever, you should be careful not to assume that the limit when x is approaching 0 will always be the same as the value when x = 0; even when the function is defined for x = 0.\nFor example, consider the following function:\n\\begin{equation}\ne(x) = \\begin{cases}\n 5, & \\text{if } x = 0, \\\n 1 + x^{2}, & \\text{otherwise }\n\\end{cases}\n\\end{equation}\nLet's see what the function returns for x values either side of 0 in a table:", "# Define function e\ndef e(x):\n if x == 0:\n return 5\n else:\n return 1 + x**2\n\nimport pandas as pd\n# Create a dataframe with an x column containing values either side of 0\nx= [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]\ny =[e(i) for i in x]\ndf = pd.DataFrame ({'x':x, 'e(x)': y })\ndf", "As before, you can see that as the x values approach 0 from both sides, the value of the function gets closer to 1, so:\n\\begin{equation}\\lim_{x \\to 0} e(x) = 1 \\end{equation}\nHowever, the actual value of the function when x = 0 is 5, not 1; so:\n\\begin{equation}\\lim_{x \\to 0} e(x) \\ne e(0) \\end{equation}\nWhich, according to our earlier definition, means that the function is non-continuous at 0.\nRun the following 
cell to see what this looks like as a graph:", "%matplotlib inline\n\n# Define function e\ndef e(x):\n if x == 0:\n return 5\n else:\n return 1 + x**2\n\nfrom matplotlib import pyplot as plt\n\nx= [-1, -0.5, -0.2, -0.1, -0.01, 0.01, 0.1, 0.2, 0.5, 1]\ny =[e(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('e(x)')\nplt.grid()\n\n# Plot x against e(x)\nplt.plot(x, y, color='purple')\n# (we're cheating slightly - we'll manually plot the discontinuous point...)\nplt.scatter(0, e(0), color='purple')\n# (... and overplot the gap)\nplt.plot(0, 1, color='purple', marker='o', markerfacecolor='w', markersize=10)\nplt.show()", "Determining Limits Analytically\nWe've seen how to estimate limits visually on a graph, and by creating a table of x and f(x) values either side of a point. There are also some mathematical techniques we can use to calculate limits.\nDirect Substitution\nRecall that our definition for a function to be continuous at a point is that the two-directional limit must exist and that it must be equal to the function value at that point. It therefore follows that if we know that a function is continuous at a given point, we can determine the limit simply by evaluating the function for that point.\nFor example, let's consider the following function g:\n\\begin{equation}g(x) = \\frac{x^{2} - 1}{x - 1}, x \\ne 1\\end{equation}\nRun the following code to see this function as a graph:", "%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 1:\n return (x**2 - 1) / (x - 1)\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx= range(-20, 21)\ny =[g(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\nplt.show()", "Now, suppose we need to find the limit of g(x) as x approaches 4. 
We can try to find this by simply substituting 4 for the x values in the function:\n\\begin{equation}g(4) = \\frac{4^{2} - 1}{4 - 1}\\end{equation}\nThis simplifies to:\n\\begin{equation}g(4) = \\frac{15}{3}\\end{equation}\nSo:\n\\begin{equation}\\lim_{x \\to 4} g(x) = 5\\end{equation}\nLet's take a look:", "%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 1:\n return (x**2 - 1) / (x - 1)\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx= range(-20, 21)\ny =[g(i) for i in x]\n\n# Set the x point we're interested in\nzx = 4\n\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\n# Plot g(x) when x = 4\nzy = g(zx)\nplt.plot(zx, zy, color='red', marker='o', markersize=10)\nplt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))\n\nplt.show()\n\nprint ('Limit as x -> ' + str(zx) + ' = ' + str(zy))", "Factorization\nOK, now let's try to find the limit of g(x) as x approaches 1.\nWe know from the function definition that the function is not defined at x = 1, but we're not trying to find the value of g(x) when x equals 1; we're trying to find the limit of g(x) as x approaches 1.\nThe direct substitution approach won't work in this case:\n\\begin{equation}g(1) = \\frac{1^{2} - 1}{1 - 1}\\end{equation}\nSimplifies to:\n\\begin{equation}g(1) = \\frac{0}{0}\\end{equation}\nAnything divided by 0 is undefined; so all we've done is to confirm that the function is not defined at this point. 
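Before resolving this algebraically, it's worth a quick numerical sanity check (a sketch added here, not part of the original lesson): tabulate g for x values closing in on 1 from both sides and see where the outputs head.

```python
# g as defined in the cells above: undefined at x = 1
def g(x):
    if x != 1:
        return (x**2 - 1) / (x - 1)

# Close in on x = 1 from below and from above
for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(x, g(x))   # the outputs head toward 2 from both sides
```

The 0/0 result at x = 1 says nothing on its own about whether a limit exists; this table suggests one does.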
You might be tempted to assume that this means the limit does not exist, but <sup>0</sup>/<sub>0</sub> is a special case; it's what's known as the indeterminate form; and there may be a way to solve this problem another way.\nWe can factor the x<sup>2</sup> - 1 numerator in the definition of g as (x - 1)(x + 1), so the limit equation can be rewritten like this:\n\\begin{equation}\\lim_{x \\to 1} g(x) = \\frac{(x-1)(x+1)}{x - 1}\\end{equation}\nThe x - 1 in the numerator and the x - 1 in the denominator cancel each other out:\n\\begin{equation}\\lim_{x \\to 1} g(x)= x+1\\end{equation}\nSo we can now use substitution for x = 1 to calculate the limit as 1 + 1:\n\\begin{equation}\\lim_{x \\to 1} g(x) = 2\\end{equation}\nLet's see what that looks like:", "%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 1:\n return (x**2 - 1) / (x - 1)\n\n\n# Plot output from function g\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx= range(-20, 21)\ny =[g(i) for i in x]\n\n# Set the x point we're interested in\nzx = 1\n\n# Calculate the limit of g(x) when x->zx using the factored equation\nzy = zx + 1\n\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\n# Plot the limit of g(x)\nplt.plot(zx, zy, color='red', marker='o', markersize=10)\nplt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))\n\nplt.show()\n\nprint ('Limit as x -> ' + str(zx) + ' = ' + str(zy))", "Rationalization\nLet's look at another function:\n\\begin{equation}h(x) = \\frac{\\sqrt{x} - 2}{x - 4}, x \\ne 4 \\text{ and } x \\ge 0\\end{equation}\nRun the following cell to plot this function as a graph:", "%matplotlib inline\n\n# Define function h\ndef h(x):\n import math\n if x >= 0 and x != 4:\n return (math.sqrt(x) - 2) / (x - 4)\n\n\n# Plot output from function h\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx= range(-20, 21)\ny =[h(i) for i in x]\n\n# Set up the 
graph\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='purple')\n\nplt.show()", "To find the limit of h(x) as x approaches 4, we can't use the direct substitution method because the function is not defined at that point. However, we can take an alternative approach by multiplying both the numerator and denominator in the function by the conjugate of the numerator to rationalize the square root term (a conjugate is a binomial formed by reversing the sign of the second term of a binomial):\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{\\sqrt{x} - 2}{x - 4}\\cdot\\frac{\\sqrt{x} + 2}{\\sqrt{x} + 2}\\end{equation}\nThis simplifies to:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{(\\sqrt{x})^{2} - 2^{2}}{(x - 4)({\\sqrt{x} + 2})}\\end{equation}\nThe (&radic;x)<sup>2</sup> is x, and 2<sup>2</sup> is 4, so we can simplify the numerator as follows:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{x - 4}{(x - 4)({\\sqrt{x} + 2})}\\end{equation}\nNow we can cancel out the x - 4 in both the numerator and denominator:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{1}{{\\sqrt{x} + 2}}\\end{equation}\nSo for x approaching 4, this is:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{1}{{\\sqrt{4} + 2}}\\end{equation}\nThis simplifies to:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{1}{2 + 2}\\end{equation}\nWhich is of course:\n\\begin{equation}\\lim_{x \\to 4}h(x) = \\frac{1}{4}\\end{equation}\nSo the limit of h(x) as x approaches 4 is <sup>1</sup>/<sub>4</sub> or 0.25.\nLet's calculate and plot this with Python:", "%matplotlib inline\n\n# Define function h\ndef h(x):\n import math\n if x >= 0 and x != 4:\n return (math.sqrt(x) - 2) / (x - 4)\n\n\n# Plot output from function h\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx= range(-20, 21)\ny =[h(i) for i in x]\n\n# Specify the point we're interested in\nzx = 4\n\n# Calculate the limit of h(x) when x->zx using the rationalized equation\nimport math\nzy = 1 / 
((math.sqrt(zx)) + 2)\n\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='purple')\n\n# Plot the limit of h(x) when x->zx \nplt.plot(zx, zy, color='red', marker='o', markersize=10)\nplt.annotate(str(zy),(zx, zy), xytext=(zx + 2, zy))\n\nplt.show()\n\nprint ('Limit as x -> ' + str(zx) + ' = ' + str(zy))", "Rules for Limit Operations\nWhen you are working with functions and limits, you may want to combine limits using arithmetic operations. There are some intuitive rules for doing this.\nLet's define two simple functions, j:\n\\begin{equation}j(x) = 2x - 2\\end{equation}\nand l:\n\\begin{equation}l(x) = -2x + 4\\end{equation}\nRun the cell below to plot these functions:", "%matplotlib inline\n\n# Define function j\ndef j(x):\n return x * 2 - 2\n\n# Define function l\ndef l(x):\n return -x * 2 + 4\n\n\n# Plot output from functions j and l\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values\nx = range(-10, 11)\n\n# Get the corresponding y values from the functions\njy = [j(i) for i in x]\nly = [l(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.xticks(range(-10,11, 1))\nplt.ylabel('y')\nplt.yticks(range(-30,30, 2))\nplt.grid()\n\n# Plot x against j(x)\nplt.plot(x,jy, color='green', label='j(x)')\n\n# Plot x against l(x)\nplt.plot(x,ly, color='magenta', label='l(x)')\n\nplt.legend()\n\nplt.show()\n", "Addition of Limits\nFirst, let's look at the rule for addition:\n\\begin{equation}\\lim_{x \\to a} (j(x) + l(x)) = \\lim_{x \\to a} j(x) + \\lim_{x \\to a} l(x)\\end{equation}\nWhat we're saying here, is that the limit of j(x) + l(x) as x approaches a, is the same as the limit of j(x) as x approaches a added to the limit of l(x) as x approaches a.\nLooking at the graph for our functions j and l, let's apply this rule to an a value of 8.\nBy visually inspecting the graph, you can see that as x approaches 8 from either direction, j(x) gets closer to 14, so:\n\\begin{equation}\\lim_{x \\to 8} j(x) = 
14\\end{equation}\nSimilarly, as x approaches 8 from either direction, l(x) gets closer to -12, so:\n\\begin{equation}\\lim_{x \\to 8} l(x) = -12\\end{equation}\nSo based on the addition rule:\n\\begin{equation}\\lim_{x \\to 8} (j(x) + l(x)) = 14 + -12 = 2\\end{equation}\nSubtraction of Limits\nHere's the rule for subtraction:\n\\begin{equation}\\lim_{x \\to a} (j(x) - l(x)) = \\lim_{x \\to a} j(x) - \\lim_{x \\to a} l(x)\\end{equation}\nAs you've probably noticed, this is consistent with the rule of addition. Based on an a value of 8 (and the limits we identified for this a value above), we can apply this rule like this:\n\\begin{equation}\\lim_{x \\to 8} (j(x) - l(x)) = 14 - -12 = 26\\end{equation}\nMultiplication of Limits\nHere's the rule for multiplication:\n\\begin{equation}\\lim_{x \\to a} (j(x) \\cdot l(x)) = \\lim_{x \\to a} j(x) \\cdot \\lim_{x \\to a} l(x)\\end{equation}\nAgain, you can apply this to the limits as x approaches an a value of 8 we identified previously:\n\\begin{equation}\\lim_{x \\to 8} (j(x) \\cdot l(x)) = 14 \\cdot -12 = -168\\end{equation}\nThis rule also applies to multiplying a limit by a constant:\n\\begin{equation}\\lim_{x \\to a} c \\cdot l(x) = c \\cdot \\lim_{x \\to a} l(x)\\end{equation}\nSo for an a value of 8 and a constant c value of 3, this equates to:\n\\begin{equation}\\lim_{x \\to 8} 3 \\cdot l(x) = 3 \\cdot -12 = -36\\end{equation}\nDivision of Limits\nFor division, assuming the limit of l(x) when x is approaching a is not 0:\n\\begin{equation}\\lim_{x \\to a} \\frac{j(x)}{l(x)} = \\frac{\\lim_{x \\to a} j(x)}{\\lim_{x \\to a} l(x)}\\end{equation}\nSo, based on our limits for j(x) and l(x) when x approaches 8:\n\\begin{equation}\\lim_{x \\to 8} \\frac{j(x)}{l(x)} = \\frac{14}{-12} = -\\frac{7}{6}\\end{equation}\nLimit Exponentials and Roots\nAssuming n is an integer:\n\\begin{equation}\\lim_{x \\to a} (j(x))^{n} = \\Big(\\lim_{x \\to a} j(x)\\Big)^{n}\\end{equation}\nSo for example:\n\\begin{equation}\\lim_{x \\to 8} 
(j(x))^{2} = \\Big(\\lim_{x \\to 8} j(x)\\Big)^{2} = 14^{2} = 196\\end{equation}\nFor roots, again assuming n is an integer:\n\\begin{equation}\\lim_{x \\to a} \\sqrt[n]{j(x)} = \\sqrt[n]{\\lim_{x \\to a} j(x)}\\end{equation}\nSo:\n\\begin{equation}\\lim_{x \\to 8} \\sqrt[2]{j(x)} = \\sqrt[2]{\\lim_{x \\to 8} j(x)} = \\sqrt[2]{14} \\approx 3.74\\end{equation}" ]
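Because j and l are straight lines (continuous everywhere), each limit at x = 8 is simply the function value there, so all of these rules can be verified numerically. A minimal sketch (the rule checks themselves are new here; j and l match the definitions used above):

```python
import math

# The two linear functions defined earlier
def j(x):
    return 2 * x - 2

def l(x):
    return -2 * x + 4

# j and l are continuous, so their limits at x = 8 equal j(8) and l(8)
lim_j, lim_l = j(8), l(8)   # 14 and -12

print('sum:     ', lim_j + lim_l)     # 2
print('diff:    ', lim_j - lim_l)     # 26
print('product: ', lim_j * lim_l)     # -168
print('constant:', 3 * lim_l)         # -36
print('quotient:', lim_j / lim_l)     # -7/6
print('power:   ', lim_j ** 2)        # 196
print('root:    ', math.sqrt(lim_j))  # about 3.742

# Approaching x = 8 instead of evaluating at it gives the same answers;
# for example, the product creeps toward -168:
for x in [7.9, 7.99, 7.999]:
    print(x, j(x) * l(x))
```

Each printed value matches the corresponding result worked out by hand in the rules above.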
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
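The limit laws quoted in the notebook above can also be checked numerically. The sketch below is an addition, not part of the source notebook: the concrete functions j(x) = x + 6 and l(x) = 4 - 2x are hypothetical stand-ins, chosen only so that j(8) = 14 and l(8) = -12 match the limits used in the worked examples.

```python
# Numerical sanity check of the limit laws quoted above.
# j and l are hypothetical stand-ins (not from the notebook), chosen
# so that lim_{x->8} j(x) = 14 and lim_{x->8} l(x) = -12.
def j(x):
    return x + 6        # continuous, j(8) = 14

def l(x):
    return 4 - 2 * x    # continuous, l(8) = -12

x = 8 - 1e-9  # approach a = 8 from the left
assert abs((j(x) + l(x)) - 2) < 1e-6       # addition:       14 + -12 = 2
assert abs((j(x) - l(x)) - 26) < 1e-6      # subtraction:    14 - -12 = 26
assert abs(j(x) * l(x) - (-168)) < 1e-4    # multiplication: 14 * -12 = -168
assert abs(j(x) / l(x) - (-7 / 6)) < 1e-6  # division:       14 / -12 = -7/6
assert abs(j(x) ** 2 - 196) < 1e-4         # exponent rule:  14**2 = 196
print("all limit laws check out")
```

Replacing the approach value with 8 + 1e-9 confirms that the right-hand limits give the same results, as expected for continuous functions.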
turnerbw/JNBinder
Floridas_Water_Budget.ipynb
mit
[ "Florida's Water Budget\nThe Impact of Humans & Nature on Freshwater Sources\nThis activity displays the water use, rainfall and water levels in Florida counties. You can use the code to change the charts and graphs below. Try to find relationships between the data sets.\nSources include (waterdata.usgs.gov/fl/nwis/current/?type=precip)\nand (www.sjrwmd.com)", "# Import modules that contain functions we need\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Our data is a table and is defined as the word 'data'.\n# 'data' is set equal to the .csv file that is read by the pandas function.\n# The .csv file must be in the same directory as the program.\n#data = pd.read_csv(\"Public Water Supply FL 2010.csv\")\n\n# You can also use external links to .xls, .csv, or .txt files and would import using the same function but replacing the\n# file name with the webpage. For example:\ndata = pd.read_csv(\"https://gist.githubusercontent.com/GoodmanSciences/9d53d0874281a61354cc8a9a962cb926/raw/e379c22e667aa309cc02048bd2b7bb31ce540d60/Public%2520Water%2520Supply%2520FL%25202010.csv\")", "PART 1: Water Used by Florida Counties in 2010\nThis table displays the County name, its population, the public water supply for that county, and the total water used by that county.", "# displays the first few rows of the table\ndata.head(4)\n\n# Set variables for scatter plot\nx = data.Population\ny = data.WaterUsed\n\nfig = plt.figure(figsize=(15, 6))\nplt.scatter(x,y)\nplt.xlim(0,3000000)\nplt.ylim(0,350)\nplt.title('The Relationship Between Population and How Much Water a County Consumes Each Year')\nplt.xlabel('Population (individuals)')\nplt.ylabel('Water Used (million gallons)')\n\n# This actually shows the plot\nplt.show()\n\n# Creates a new dataset for County\n\nplace = data.groupby(\"County\", as_index = False).sum()", "This table shows the top 10 water consuming counties, the population, the amount of the population that is connected 
to public water (PublicSupply), and the total water used by each county.", "# Organizes by County with the highest water usage in descending order\n#Only displays the top 10 highest water consuming counties by putting .head(10)\n\nmostwater = place.sort_values(by=\"WaterUsed\", ascending = False).head(10)\nmostwater", "Most Water Consuming Counties in Florida", "# Displays a histogram of the top 10 water consuming counties in ascending order\n\nmostwater.sort_values(by=\"WaterUsed\", ascending=True).plot(x=\"County\", y=\"WaterUsed\", kind=\"barh\", title=\"Top 10 Water Consuming Counties\", legend=False);", "Try to change the histogram so that it displays the County and the Population Total. (Right now it is displaying County and Water Use Total)\nPART 2: Rainfall Additions to the Water Supply", "# Imports more csv files locally\n#feb = pd.read_csv(\"Feb2005_FL_rainfall.csv\")\n#july = pd.read_csv(\"July2005_FL_rainfall.csv\")\n\n# Imports more csv files from the web\njuly = pd.read_csv(\"https://gist.githubusercontent.com/GoodmanSciences/354fa30fb1e506c055621b893b26ebe8/raw/523e483ae4534c9432f91e5d5b7f9fb0356e95e1/Rainfall%2520FL%2520Jul2005.csv\")\nfeb = pd.read_csv(\"https://gist.githubusercontent.com/GoodmanSciences/7088ff6b7b8e915a87ee987f3b767641/raw/a76a0dd975f95e6c0c5e6ee810e6f6e66faeca9b/Rainfall%2520FL%2520Feb2005.csv\")", "Rainfall in February 2005 (Inches)", "feb.head()\n\n# Plots rainfall in ascending order\nfeb.sort_values(by=\"Monthly Total\", ascending=True).plot(x=\"County\", y=\"Monthly Total\", kind=\"barh\", title=\"Rainfall in February (Inches)\", legend=False);", "Try to change the histogram to display the data in descending order.\nRainfall in July 2005 (Inches)", "july.head()\n\njuly.sort_values(by=\"Monthly Total\", ascending=True).plot(x=\"County\", y=\"Monthly Total\", kind=\"barh\", title=\"Rainfall in July (Inches)\", legend=False);\n\nfrom IPython.display import Image\nfrom IPython.core.display import HTML \nImage(url= 
'https://preview.ibb.co/g7Z6sa/Average_Monthly_Water_Consumption.png')\n\nImage(url= 'https://floridamaps.files.wordpress.com/2015/03/florida-counties.jpg')\n\n#Double-click to make this image GINORMOUS", "PART 3: Monitoring Lake Apopka a Freshwater Source for Agriculture", "# Imports another csv file locally\n#level = pd.read_csv(\"Lake Apopka Waterlevel 2005.csv\")\n\n# Imports another csv file from the web\nlevel = pd.read_csv(\"https://gist.githubusercontent.com/GoodmanSciences/e63b6cb68cd6ef5235dc8c113ea9995a/raw/39139535f7ef05057ecce1126ea336ca7bcfb879/Lake%2520Apopka%2520Waterlevel%25202005.csv\")\n\n# Sets Date as index\nlev2 = level.set_index(\"Date\")", "Water Level in February", "# Displays only Feb 1st through the 28th\nlev2.loc[\"2/1/2005\":\"2/28/2005\", :]", "Water Level in July", "# Displays only July 1st through the 7th\nlev2.loc[\"7/1/2005\":\"7/7/2005\", :]\n\n# Plot of all values in level dataset\nlevel.plot('Date', 'Water Level')\n\nImage(url= 'http://www.floridacountiesmap.com/graphics/orange.gif')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
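The groupby/sort_values pipeline used throughout the Florida notebook can be demonstrated without downloading the gist CSVs. The data below is synthetic (the numbers are made up, not from the real 2010 survey); only the column names County, Population, and WaterUsed are taken from the notebook.

```python
import pandas as pd

# Tiny synthetic stand-in for the 2010 public-water-supply table.
# The figures are invented for illustration only.
data = pd.DataFrame({
    "County":     ["Orange", "Orange", "Duval", "Leon"],
    "Population": [100000, 50000, 80000, 30000],
    "WaterUsed":  [120.0, 60.0, 90.0, 25.0],
})

# Aggregate the numeric columns per county, as the notebook's groupby cell does.
place = data.groupby("County", as_index=False).sum()

# Rank counties by total water use, descending, keeping at most the top 10.
mostwater = place.sort_values(by="WaterUsed", ascending=False).head(10)
print(mostwater[["County", "WaterUsed"]].to_string(index=False))
# The two Orange rows combine to 180.0; Duval and Leon keep their single values.
```

Sorting the same frame with ascending=True before .plot(kind="barh") reproduces the notebook's histogram cells, and swapping y="WaterUsed" for y="Population" answers the "try to change the histogram" prompt.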
NeuPhysics/aNN
ipynb/test.ipynb
mit
[ "Artificial Neural Network Test\n<a id='toc'></a>\nTOC\n\nDocs\nPredefined Functions and Tests\nMinimization\nwithout Jacobian\nAnalytical Jacobian\nSummary\n\n<a id='docs'></a>\nDocs - Example\nThe problem to solve is the differential equation $$\\frac{d}{dt}y(t)= - y(t).$$ Using the network, this is $$y_i= 1+t_i v_k f(t_i w_k+u_k).$$\nThe procedure is as follows:\nDeal with the function first.\n\n\nThe cost is $$I=\\sum_i\\left( \\frac{dy_i}{dt}+y_i \\right)^2.$$ Our purpose is to minimize this cost.\n\n\nTo calculate the differential of y, we can write down the explicit expression for it. $$\\frac{dy}{dt} = v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k))w_k,$$ where the function f is defined as trigf().\n\n\nSo the cost becomes $$I = \\sum_i \\left( v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k + y \\right)^2.$$\n\n\n<a id='general'></a>\nGeneral Functions\nImport Modules used in this notebook", "# This line configures matplotlib to show figures embedded in the notebook, \n# instead of opening a new window for each figure. More about that later. 
\n# If you are using an old version of IPython, try using '%pylab inline' instead.\n%matplotlib inline\n%load_ext snakeviz\n\nimport numpy as np\nfrom scipy.optimize import minimize\nfrom scipy.special import expit\nimport matplotlib.pyplot as plt\n\nfrom matplotlib.lines import Line2D\n\nimport timeit\n\nimport pandas as pd\n\nimport plotly.plotly as py\nfrom plotly.graph_objs import *\nimport plotly.tools as tls", "Define the general part of cost function\nFor an first order ODE, $$ L \\equiv \\frac{d}{dt}y + f(t) =0 ,$$ in which $y(t)$ is the function to be solved.\nThe cost is defined as $$I = \\sum_{i} I_i^2 = I_i * I_i,$$ where \n$$ I_{i} = \\left( \\frac{d}{dt}y + f(t) \\right) $$\nUsing the following function", "def trigf(x):\n #return 1/(1+np.exp(-x)) # It's not bad to define this function here for people could use other functions other than expit(x).\n return expit(x)\n\n## Very important notes\n## fOft(x,ti) should be specified, here ti is a scalar value not a list. and it should return a value.\n## Here I use t as the variables list and ti as the variable.\n\n\n\ndef costODE(x,t,initialCondition,fOfArg): # x is a list of the order v,w,u. x will be splited to three equal parts.\n # initialCondition can only be constants.\n\n t = np.array(t)\n \n costODETotal = np.sum( costODETList(x,t,initialCondition,fOfArg) )\n \n return costODETotal\n \n\ndef costODETList(x,t,initialCondition,fOfArg): ## This is the function WITHOUT the square!!! \n \n v,w,u = np.split(x,3)[:3]\n \n t = np.array(t)\n \n costList = np.asarray([])\n \n for temp in t:\n tempElement = costODETi(x,temp,initialCondition,fOfArg)\n costList = np.append(costList, tempElement)\n \n return np.array(costList)\n\n \n\ndef costODETi(x,ti,initialCondition,fOfArg): # function for each t. 
here t is a single value \n # fOfArg is the function f(t) in the example\n\n v,w,u = np.split(x,3)[:3]\n \n args = np.array([x,ti,initialCondition])\n \n fvec = np.array(trigf(ti*w + u) ) # This is a vector!!!\n ft = fOfArg(args) ## fOfArg should be specified in a problem!!!!!!!!!!!! And it takes a whole array of all the arguments\n # For a given t, this calculates the value of y(t), given the parameters, v, w, u. Notice this initialCondition.\n \n return ( np.sum (v*fvec + ti * v* fvec * ( 1 - fvec ) * w ) + ft ) ** 2\n \n\n \n## The funNNi(x,ti,initialCondition) takes a time ti and exports the function value with input x.\n\n \ndef funNNi(x,ti,initialCondition): # for a single time stamp t\n \n v,w,u = np.split(x,3)[:3]\n \n return initialCondition + np.sum(ti * v * trigf( ti*w +u ) )\n\n## funNNList(x,t,initialCondition) takes a list of time and exports the function values of these times.\n\ndef funNNList(x,t,initialCondition):\n \n t = np.array(t)\n \n tempList = np.asarray([])\n \n for ti in t:\n tempElement = funNNi(x,ti,initialCondition)\n tempList = np.append(tempList,tempElement)\n \n return np.array(tempList)", "An example of the fOfArg(args) which corresponds to the equation $$\\frac{d}{dt}y + y(t)=0 .$$", "def yOft(args):\n \n return funNNi(args[0],args[1],args[2]) # As in our definition, args[0] is x, args[1] is ti, args[2] is initialCondition\n\n\n### This is a test based on the comparison between this notebook and the other one called Basics.ipynb.\n\ntestx = np.ones(30)\ntestt = np.linspace(0,1,2)\n\nprint costODE(testx,testt,1,yOft) ## This is not right, should be 455.812570558\nprint costODETi(np.ones(9),1,1,yOft) ## This should be 43.556874613889988", "<a id='minimization'></a>\nMinimization | toc\nThe example $$\\frac{d}{dt}y +y=0$$", "def funY(t):\n\n return np.exp(-t)", "<a id='withoutJac'></a>\nWithout Jac", "tlin = np.linspace(0,5,11)\ninitGuess = np.zeros(30)\n# initGuess = np.random.rand(1,30)+2\n\ncostODEF = lambda x: 
costODE(x,tlin,1,yOft)\n\n# %%snakeviz\nstartCG = timeit.default_timer()\ncostODEFResultCG = minimize(costODEF,initGuess,method=\"CG\")\nstopCG = timeit.default_timer()\n\nprint stopCG - startCG\n\nprint costODEFResultCG\n\n# %%snakeviz\nstartSLSQP = timeit.default_timer()\ncostODEFResultSLSQP = minimize(costODEF,initGuess,method=\"SLSQP\")\nstopSLSQP = timeit.default_timer()\n\nprint stopSLSQP - startSLSQP\n\nprint costODEFResultSLSQP\n\n# %%snakeviz\nstartLBFGSB = timeit.default_timer()\n# costODEFResult = minimize(costODEF,initGuess,method=\"Nelder-Mead\")\n# minimize(costTotalF,initGuess,method=\"TNC\")\ncostODEFResultLBFGSB = minimize(costODEF,initGuess,method=\"L-BFGS-B\")\nstopLBFGSB = timeit.default_timer()\n\nprint stopLBFGSB - startLBFGSB\n\nprint costODEFResultLBFGSB\n\n# %%snakeviz\nstartNM = timeit.default_timer()\ncostODEFResultNM = minimize(costODEF,initGuess,method=\"Nelder-Mead\")\nstopNM = timeit.default_timer()\n\nprint stopNM - startNM\n\nprint costODEFResultNM\n\n# %%snakeviz\nstartTNC = timeit.default_timer()\n# costODEFResult = minimize(costODEF,initGuess,method=\"Nelder-Mead\")\ncostODEFResultTNC = minimize(costODEF,initGuess,method=\"TNC\")\nstopTNC = timeit.default_timer()\n\nprint stopTNC - startTNC\n\nprint costODEFResultTNC\n\n### This method is extremely slow\n# %%snakeviz\n# startPowell = timeit.default_timer()\n# costODEFResultPowell = minimize(costODEF,initGuess,method=\"Powell\")\n# stopPowell = timeit.default_timer()\n\n# print stopPowell - startPowell\n\n# print costODEFResultPowell\n\n# for L-BFGS-B method\n\n# print \"Success is\", costODEFResult.get('success'), \"\\n\", \"# of function evaluations\", costODEFResult.get('nfev'), \"\\n\",\\\n# \"value of minimized function\", costODEFResult.get('fun'), \"\\n\",\"# of iter\", costODEFResult.get('nit')\n#\n# print \"x is\\n\", costODEFResult.get('x')\n\nprint funNNList(np.array( costODEFResultCG.get('x') ),tlin,1)\nprint funNNList(np.array( costODEFResultSLSQP.get('x') 
),tlin,1)\nprint funNNList(np.array( costODEFResultLBFGSB.get('x') ),tlin,1)\nprint funNNList(np.array( costODEFResultNM.get('x') ),tlin,1)\n\ntlinplt = np.linspace(0,5,20)\ntlinplt2 = np.linspace(0,40,50)\nplt.figure(figsize=(20,12.35))\nplt.plot(tlinplt,funY(tlinplt),'g-', label = \"pltAnalytical\")\nplt.plot(tlinplt,funNNList(np.array( costODEFResultCG.get('x') ),tlinplt,1),'c^',label = \"pltODEFResultCG\")\nplt.plot(tlinplt,funNNList(np.array( costODEFResultSLSQP.get('x') ),tlinplt,1),'r+',label=\"pltODEFResultSLSQP\")\nplt.plot(tlinplt,funNNList(np.array( costODEFResultLBFGSB.get('x') ),tlinplt,1),'bo',label=\"pltODEFResultLBFGSB\")\nplt.plot(tlinplt,funNNList(np.array( costODEFResultNM.get('x') ),tlinplt,1),'m1',label=\"pltODEFResultNM\")\n# plt.yscale('log')\n# plt.legend()\n# plt.show()\n\n\n##############\n# plot using plot.ly\n\n# figMethodTest = plt.gcf()\n\n# Send figure object to Plotly, show result in notebook\n# py.iplot_mpl(figMethodTest,filename=\"ANN-test-figMethodTest\")\npy.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethodTest\")\n\n# Let's not worry about that.\n\n# plt.figure(figsize=(20,12.35))\n# plt.plot(tlinplt2,funY(tlinplt2),'g-', label = \"pltAnalytical\")\n# plt.plot(tlinplt2,funNNList(np.array( costODEFResultSLSQP.get('x') ),tlinplt2,1),'r+',label=\"pltODEFResultSLSQP\")\n# plt.plot(tlinplt2,funNNList(np.array( costODEFResultLBFGSB.get('x') ),tlinplt2,1),'bo',label=\"pltODEFResultLBFGSB\")\n\n# plt.yscale('log')\n# plt.legent()\n# plt.show()\n\n# py.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethodTest2\")", "<a id='usingJac'></a>\nUsing Jac", "def mhelper(v,w,u,t): ## This function should output a result ## t is a number in this function not array!!\n v = np.array(v)\n w = np.array(w)\n u = np.array(u)\n \n return np.sum( v*trigf( t*w + u ) + t* v* trigf(t*w + u) * ( 1 - trigf( t*w +u) ) * w ) + ( 1 + np.sum( t * v * trigf( t*w +u ) ) ) \n # Checked # Pass\n \ndef vhelper(v,w,u,t):\n v = np.array(v)\n w = np.array(w)\n u = 
np.array(u)\n \n return trigf(t*w+u) + t*trigf(t*w+u)*( 1-trigf(t*w+u) )*w + t*trigf(t*w+u)\n\ndef whelper(v,w,u,t):\n v = np.array(v)\n w = np.array(w)\n u = np.array(u)\n \n return v*t*trigf(t*w+u)*( 1- trigf(t*w+u) ) + t*v*( trigf(t*w+u)*(1-trigf(t*w+u))*t* (1-trigf(t*w+u)) )*w - t*v*trigf(t*w+u)*trigf(t*w+u)*(1-trigf(t*w+u))*t*w + t*v*trigf(t*w+u)*(1-trigf(t*w+u)) + t*v*trigf(t*w+u)*(1-trigf(t*w+u))*t \n\ndef uhelper(v,w,u,t):\n v = np.array(v)\n w = np.array(w)\n u = np.array(u)\n \n return v*trigf(t*w+u)*( 1 - trigf(t*w+u)) + t* v * trigf(t*w+u) * (1-trigf(t*w+u))*(1-trigf(t*w+u))*w - t*v*trigf(t*w+u)*trigf(t*w+u)*(1-trigf(t*w+u))*w + t*v*trigf(t*w+u)*(1-trigf(t*w+u))\n\n\ndef costJac(v,w,u,t):\n v = np.array(v)\n w = np.array(w)\n u = np.array(u)\n \n vout = 0\n wout = 0\n uout = 0\n \n for temp in t:\n vout = vout + 2*mhelper(v,w,u,temp)*vhelper(v,w,u,temp)\n wout = wout + 2*mhelper(v,w,u,temp)*whelper(v,w,u,temp)\n uout = uout + 2*mhelper(v,w,u,temp)*uhelper(v,w,u,temp)\n \n out = np.hstack((vout,wout,uout))\n \n return np.array(out)\n\n\ncostODEJacF = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin)\ninitGuessJ = np.zeros(30)\n# initGuessJ = np.random.rand(1,30)+2\n\n# %%snakeviz\nstartJacNCG = timeit.default_timer()\ncostODEResultJacNCG = minimize(costODEF,initGuessJ,method=\"Newton-CG\",jac=costODEJacF)\nstopJacNCG = timeit.default_timer()\n\nprint stopJacNCG - startJacNCG\nprint costODEResultJacNCG", "A test of the results", "# for NCG method\n\nprint \"Success is\", costODEResultJacNCG.get('success'), \"\\n\", \"# of function evaluations\", costODEResultJacNCG.get('nfev'), \"\\n\",\\\n\"value of minimized function\", costODEResultJacNCG.get('fun'), \"\\n\",\"# of iter\", costODEResultJacNCG.get('nit')\n\nprint \"x is\\n\", costODEResultJacNCG.get('x')\n\n# %%snakeviz\nstartJacCG = timeit.default_timer()\ncostODEResultJacCG = minimize(costODEF,initGuessJ,method=\"CG\",jac=costODEJacF)\nstopJacCG = 
timeit.default_timer()\n\nprint stopJacCG - startJacCG\nprint costODEResultJacCG\n\n# %%snakeviz\nstartJacSLSQP = timeit.default_timer()\ncostODEResultJacSLSQP = minimize(costODEF,initGuessJ,method=\"SLSQP\",jac=costODEJacF)\nstopJacSLSQP = timeit.default_timer()\n\nprint stopJacSLSQP - startJacSLSQP\nprint costODEResultJacSLSQP\n\n# %%snakeviz\nstartJacLBFGSB = timeit.default_timer()\ncostODEResultJacLBFGSB = minimize(costODEF,initGuessJ,method=\"L-BFGS-B\",jac=costODEJacF)\nstopJacLBFGSB = timeit.default_timer()\n\nprint stopJacLBFGSB - startJacLBFGSB\nprint costODEResultJacLBFGSB\n\ninitGuessJ2 = np.zeros(30)\ntlin2 = np.linspace(0,5,100)\ncostODEJacF2 = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin2)\ncostODEF2 = lambda x: costODE(x,tlin2,1,yOft)\n\ninitGuessJ3 = np.zeros(30)\ntlin3 = np.linspace(0,5,5)\ncostODEJacF3 = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin3)\ncostODEF3 = lambda x: costODE(x,tlin3,1,yOft)\n\n\n# %%snakeviz\nstartJacLBFGSB2 = timeit.default_timer()\ncostODEResultJacLBFGSB2 = minimize(costODEF2,initGuessJ2,method=\"L-BFGS-B\",jac=costODEJacF2)\nstopJacLBFGSB2 = timeit.default_timer()\n\nprint stopJacLBFGSB2 - startJacLBFGSB2\nprint costODEResultJacLBFGSB2\n\n# %%snakeviz\nstartJacLBFGSB3 = timeit.default_timer()\ncostODEResultJacLBFGSB3 = minimize(costODEF3,initGuessJ3,method=\"L-BFGS-B\",jac=costODEJacF3)\nstopJacLBFGSB3 = timeit.default_timer()\n\nprint stopJacLBFGSB3 - startJacLBFGSB3\nprint costODEResultJacLBFGSB3\n\n# %%snakeviz\nstartJacSLSQP2 = timeit.default_timer()\ncostODEResultJacSLSQP2 = minimize(costODEF2,initGuessJ2,method=\"SLSQP\",jac=costODEJacF2)\nstopJacSLSQP2 = timeit.default_timer()\n\nprint stopJacSLSQP2 - startJacSLSQP2\nprint costODEResultJacSLSQP2\n\n# %%snakeviz\nstartJacSLSQP3 = timeit.default_timer()\ncostODEResultJacSLSQP3 = minimize(costODEF3,initGuessJ3,method=\"SLSQP\",jac=costODEJacF3)\nstopJacSLSQP3 = timeit.default_timer()\n\nprint stopJacSLSQP3 - 
startJacSLSQP3\nprint costODEResultJacSLSQP3\n\n# %%snakeviz\nstartJacTNC = timeit.default_timer()\ncostODEResultJacTNC = minimize(costODEF,initGuessJ,method=\"TNC\",jac=costODEJacF)\nstopJacTNC = timeit.default_timer()\n\nprint stopJacTNC - startJacTNC\nprint costODEResultJacTNC\n\nprint funNNList(np.array( costODEResultJacNCG.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacCG.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacLBFGSB.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacTNC.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacSLSQP.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacLBFGSB2.get('x') ),tlin,1)\nprint funNNList(np.array( costODEResultJacSLSQP2.get('x') ),tlin2,1)\n\nplt.figure(figsize=(20,12.36))\nplt.plot(tlinplt,funY(tlinplt),'g-')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacNCG.get('x') ),tlinplt,1),'c^')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacCG.get('x') ),tlinplt,1),'g:')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacSLSQP.get('x') ),tlinplt,1),'ro')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacLBFGSB.get('x') ),tlinplt,1),'bo')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacTNC.get('x') ),tlinplt,1),'m+')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacLBFGSB2.get('x') ),tlinplt,1),'b*')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacSLSQP2.get('x') ),tlinplt,1),'r*')\n# plt.yscale('log')\nplt.show()\n\nplt.figure(figsize=(20,12.36))\nplt.plot(tlinplt,funY(tlinplt),'g-')\nplt.plot(tlinplt,funNNList(np.array( costODEFResultLBFGSB.get('x') ),tlinplt,1),'b:')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacLBFGSB.get('x') ),tlinplt,1),'bo')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacLBFGSB2.get('x') ),tlinplt,1),'b*')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacLBFGSB3.get('x') ),tlinplt,1),'b^')\nplt.plot(tlinplt,funNNList(np.array( costODEFResultSLSQP.get('x') 
),tlinplt,1),'r:')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacSLSQP.get('x') ),tlinplt,1),'ro')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacSLSQP2.get('x') ),tlinplt,1),'r*')\nplt.plot(tlinplt,funNNList(np.array( costODEResultJacSLSQP3.get('x') ),tlinplt,1),'r^')\n# plt.yscale('log')\nplt.show()\n\n# plt.figure(figsize=(20,12.36))\n# plt.plot(tlinplt2,funY(tlinplt2),'g-')\n# plt.plot(tlinplt2,funNNList(np.array( costODEFResultLBFGSB.get('x') ),tlinplt2,1),'b:')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacLBFGSB.get('x') ),tlinplt2,1),'bo')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacLBFGSB2.get('x') ),tlinplt2,1),'b*')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacLBFGSB3.get('x') ),tlinplt2,1),'b^')\n# plt.plot(tlinplt2,funNNList(np.array( costODEFResultSLSQP.get('x') ),tlinplt2,1),'r:')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacSLSQP.get('x') ),tlinplt2,1),'ro')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacSLSQP2.get('x') ),tlinplt2,1),'r*')\n# plt.plot(tlinplt2,funNNList(np.array( costODEResultJacSLSQP3.get('x') ),tlinplt2,1),'r^')\n# plt.yscale('log')\n# plt.show()", "<a id='summary'></a>\nSummary | toc\nTo test how the time increases as the dimension increases.\n\nCompare the numerical Jac and analytical Jac trends.", "tlin4 = np.linspace(0,5,11)\n\ncostODEJacF4 = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin4)\ncostODEF4 = lambda x: costODE(x,tlin4,1,yOft)\n\ndim = np.linspace(10,300,11)*3\nprint dim\n\n# Play with the \n\n# %%snakeviz\nresultdim = np.asarray([])\nvwuDimTest = np.linspace(10,300,11)*3\n \nfor dimele in vwuDimTest:\n \n initGuessJ4 = np.zeros(dimele)\n \n startJacSLSQP4 = timeit.default_timer()\n costODEResultJacSLSQP4 = minimize(costODEF4,initGuessJ4,method=\"SLSQP\",jac=costODEJacF4)\n stopJacSLSQP4 = timeit.default_timer()\n \n resultdim = np.append(resultdim, stopJacSLSQP4 - startJacSLSQP4)\n \n print stopJacSLSQP4 - startJacSLSQP4, 
costODEResultJacSLSQP4.get(\"fun\"), costODEResultJacSLSQP4.get(\"success\")\n \nprint resultdim\n\n\nnp.savetxt('./assets/dataTest_ResultDim.txt', resultdim, delimiter = ',')\n\nresultdim = np.genfromtxt('./assets/dataTest_ResultDim.txt', delimiter = ',')\n\nplt.figure(figsize=(20,12.36))\nplt.ylabel('Time needed')\nplt.xlabel('Dimension of total uvw array')\nplt.plot(vwuDimTest,resultdim,\"b4-\",label=\"ANN-test-figMethod-TimeTest-InitGuess\")\npy.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethod-TimeTest-InitGuess\")\n\n# tls.embed(\"https://plot.ly/~emptymalei/73/\")\n\nplt.figure(figsize=(20,12.36))\nplt.ylabel('Time needed')\nplt.xlabel('Dimension of total uvw array')\nplt.plot(vwuDimTest,resultdim,\"b4-\",label=\"ANN-test-figMethod-TimeTest-InitGuess\")\nplt.yscale('log')\npy.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethod-TimeTest-InitGuess-log\")\n\n\nnp.linspace(10,120,12)\n\n# %%snakeviz\nresultdim2 = np.asarray([])\n\ninitGuessJ5 = np.zeros(30)\n\ntimeseg = np.linspace(10,120,12)\n\nfor dim5 in timeseg:\n \n tlin5 = np.linspace(0,5,dim5)\n \n costODEJacF5 = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin5)\n costODEF5 = lambda x: costODE(x,tlin5,1,yOft)\n\n startJacSLSQP5 = timeit.default_timer()\n costODEResultJacSLSQP5 = minimize(costODEF5,initGuessJ5,method=\"SLSQP\",jac=costODEJacF5)\n stopJacSLSQP5 = timeit.default_timer()\n \n resultdim2 = np.append(resultdim2, stopJacSLSQP5 - startJacSLSQP5)\n \n print stopJacSLSQP5 - startJacSLSQP5, costODEResultJacSLSQP5.get(\"fun\"), costODEResultJacSLSQP5.get(\"success\")\n \nprint resultdim2\n\n\n# np.savetxt('./assets/dataTest_ResultDim2.txt', resultdim2, delimiter = ',')\n\nresultdim2 = np.genfromtxt('./assets/dataTest_ResultDim2.txt', delimiter = ',')\n\nplt.figure(figsize=(20,12.36))\nplt.ylabel('Time needed')\nplt.xlabel('Dimension of time 
array')\nplt.plot(timeseg,resultdim2,\"b4-\",label=\"ANN-test-figMethod-TimeTest-TimeSeg\")\n\npy.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethod-TimeTest-TimeSeg\")\n\n# tls.embed(\"https://plot.ly/~emptymalei/\")\n\nplt.figure(figsize=(20,12.36))\nplt.ylabel('Time needed')\nplt.xlabel('Dimension of time array')\nplt.plot(timeseg,resultdim2,\"b4-\",label=\"ANN-test-figMethod-TimeTest-TimeSeg\")\nplt.yscale('log')\n\npy.iplot_mpl(plt.gcf(),filename=\"ANN-test-figMethod-TimeTest-TimeSeg-log\")\n", "To summarize, \n\nCG is extremely fast, however, it's not so accurate.\nSLSQP is also very fast, and it is very accurate\nL-BFGS-B gives us the best result and it's not slow.\n\n| Method | Time | Function | nfev/njev | nit |\n|:--------:|:----------------------------------:|:-------------------------------------------------:|:---------------------------------------:|:-----------------:|\n| CG | 2.95948886871(Jac: 0.341506004333) | 0.1920503701634634(Jac: 0.19205041475267956) | nfev: 492,njev: 15 (nfev: 27,njev: 15) | |\n| SLSQP | 10.7389249802(Jac:0.955240011215) | 2.5774431624374406e-05(Jac:1.080248266970841e-05) | nfev: 1649,njev: 51 (nfev: 66,njev: 51) | nit: 51(nit: 51) |\n| L-BFGS-B | 18.576128006(Jac:8.60380482674) | 1.6905137501642895e-07(Jac:0.093712249132977959) | nfev: 119(nfev: 466) | nit: 93(nit: 358) |\n| N-M | 30.9348311424 | 0.00031884090490975285 | nfev: 6001 | |\n| NCG | (Jac:37.292183876) | (0.09397294690879396) | (njev: 3149,nfev: 526) | |\n\nSmaller vwu dimensions, less time? True for SLSQP Jac. \nSeems that by solving an equation, we are solving a equation in a particular region with a specific initial condition. 
Be careful with extrapolation.\n\nPlotting\nHere in this section I define some functions used to plot the data.\nThe plot function outputs a plot with inputs of a list of ts and ys.", "startTemp = timeit.default_timer()\ntestArraySplit = np.zeros(30)\nnp.split(testArraySplit,3)[0]\nnp.split(testArraySplit,3)[1]\nnp.split(testArraySplit,3)[2]\nstopTemp = timeit.default_timer()\nprint stopTemp - startTemp\n\nprint tlin2\nprint tlin\n\nx1 = np.zeros(30)\nv1, w1, u1 = np.split(x1,3)[:3]\nprint v1, w1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
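The timing tables in the ANN notebook above hinge on supplying an analytical Jacobian to scipy.optimize.minimize via jac=. The sketch below isolates that comparison on a plain quadratic objective; the function is a placeholder, not the notebook's ODE cost, but the jac= mechanics are the same.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective: f(x) = sum((x - 3)^2), minimized at x = 3.
def f(x):
    return np.sum((x - 3.0) ** 2)

def jac(x):
    return 2.0 * (x - 3.0)  # analytical gradient of f

x0 = np.zeros(30)  # same parameter dimension as the v,w,u vector in the notebook

res_num = minimize(f, x0, method="L-BFGS-B")           # finite-difference gradient
res_ana = minimize(f, x0, method="L-BFGS-B", jac=jac)  # analytical gradient

# Without jac=, each gradient estimate costs roughly dim+1 extra objective
# calls, which is the overhead behind the timing differences measured above.
print("nfev without jac:", res_num.nfev)
print("nfev with jac:   ", res_ana.nfev)
```

The same substitution works for the other gradient-based methods the notebook times (CG, Newton-CG, SLSQP, TNC); Nelder-Mead ignores jac= entirely since it is derivative-free.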
Danghor/Formal-Languages
ANTLR4-Python/LR-Parser-Generator/LR-Table-Generator.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../../style.css', 'r') as file:\n css = file.read()\nHTML(css)", "Implementing an LR-Table-Generator\nA Grammar for Grammars\nAs the goal is to generate an LR-table-generator we first need to implement a parser for context free grammars.\nThe file arith.g contains an example grammar that describes arithmetic expressions.", "!cat Examples/c-fragment.g", "We use <span style=\"font-variant:small-caps;\">Antlr</span> to develop a parser for context free grammars. The pure grammar used to parse context free grammars is stored in the file Pure.g4. It is similar to the grammar that we have already used to implement Earley's algorithm, but allows additionally the use of the operator |, so that all grammar rules that define a variable can be combined in one rule.", "!cat Pure.g4", "The annotated grammar is stored in the file Grammar.g4.\nThe parser will return a list of grammar rules, where each rule of the form\n$$ a \\rightarrow \\beta $$\nis stored as the tuple (a,) + 𝛽.", "!cat -n Grammar.g4", "We start by generating both scanner and parser.", "!antlr4 -Dlanguage=Python3 Grammar.g4\n\nfrom GrammarLexer import GrammarLexer\nfrom GrammarParser import GrammarParser\nimport antlr4", "The Class GrammarRule\nThe class GrammarRule is used to store a single grammar rule. 
As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__.", "class GrammarRule:\n def __init__(self, variable, body):\n self.mVariable = variable\n self.mBody = body\n \n def __eq__(self, other):\n return isinstance(other, GrammarRule) and \\\n self.mVariable == other.mVariable and \\\n self.mBody == other.mBody\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n return f'{self.mVariable} → {\" \".join(self.mBody)}'", "The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.", "def parse_grammar(filename):\n input_stream = antlr4.FileStream(filename, encoding=\"utf-8\")\n lexer = GrammarLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = GrammarParser(token_stream)\n grammar = parser.start()\n return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]\n\ngrammar = parse_grammar('Examples/c-fragment.g')\ngrammar", "Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. 
The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character \"'\".", "def is_var(name):\n    return name[0] != \"'\" and name.islower()", "Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occurring in Rules.", "def collect_variables(Rules):\n    Variables = set()\n    for rule in Rules:\n        Variables.add(rule.mVariable)\n        for item in rule.mBody:\n            if is_var(item):\n                Variables.add(item)\n    return Variables", "Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occurring in Rules.", "def collect_tokens(Rules):\n    Tokens = set()\n    for rule in Rules:\n        for item in rule.mBody:\n            if not is_var(item):\n                Tokens.add(item)\n    return Tokens", "Extended Marked Rules\nThe class ExtendedMarkedRule stores a single marked rule of the form\n$$ v \\rightarrow \\alpha \\bullet \\beta : L $$\nwhere the variable $v$ is stored in the member variable mVariable, while $\\alpha$ and $\\beta$ are stored in the variables mAlpha and mBeta respectively. The set of follow tokens $L$ is stored in the variable mFollow. These variables are assumed to contain tuples of grammar symbols. A grammar symbol is either\n- a variable,\n- a token, or\n- a literal, i.e. a string enclosed in single quotes.\nLater, we need to maintain sets of marked rules to represent states. 
Therefore, we have to define the methods __eq__, __ne__, and __hash__.", "class ExtendedMarkedRule():\n def __init__(self, variable, alpha, beta, follow):\n self.mVariable = variable\n self.mAlpha = alpha\n self.mBeta = beta\n self.mFollow = follow\n \n def __eq__(self, other):\n return isinstance(other, ExtendedMarkedRule) and \\\n self.mVariable == other.mVariable and \\\n self.mAlpha == other.mAlpha and \\\n self.mBeta == other.mBeta and \\\n self.mFollow == other.mFollow\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.mVariable) + \\\n hash(self.mAlpha) + \\\n hash(self.mBeta) + \\\n hash(self.mFollow)\n \n def __repr__(self):\n alphaStr = ' '.join(self.mAlpha)\n betaStr = ' '.join(self.mBeta)\n if len(self.mFollow) > 1:\n followStr = '{' + ','.join(self.mFollow) + '}'\n else:\n followStr = ','.join(self.mFollow)\n return f'{self.mVariable} → {alphaStr} • {betaStr}: {followStr}'", "Given an extended marked rule self, the function is_complete checks, whether the extended marked rule self has the form\n$$ c \\rightarrow \\alpha\\; \\bullet: L,$$\ni.e. it checks, whether the $\\bullet$ is at the end of the grammar rule.", "def is_complete(self):\n return len(self.mBeta) == 0\n\nExtendedMarkedRule.is_complete = is_complete\ndel is_complete", "Given an extended marked rule self of the form\n$$ c \\rightarrow \\alpha \\bullet X\\, \\delta: L, $$\nthe function symbol_after_dot returns the symbol $X$. If there is no symbol after the $\\bullet$, the method returns None.", "def symbol_after_dot(self):\n if len(self.mBeta) > 0:\n return self.mBeta[0]\n return None\n\nExtendedMarkedRule.symbol_after_dot = symbol_after_dot\ndel symbol_after_dot", "Given a grammar symbol name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. 
The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character \"'\".", "def is_var(name):\n    return name[0] != \"'\" and name.islower()", "Given an extended marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None.", "def next_var(self):\n    if len(self.mBeta) > 0:\n        var = self.mBeta[0]\n        if is_var(var):\n            return var\n    return None\n\nExtendedMarkedRule.next_var = next_var\ndel next_var", "The function move_dot(self) transforms an extended marked rule of the form \n$$ c \\rightarrow \\alpha \\bullet X\\, \\beta: L $$\ninto an extended marked rule of the form\n$$ c \\rightarrow \\alpha\\, X \\bullet \\beta: L, $$\ni.e. the $\\bullet$ is moved over the next symbol. Invocation of this method assumes that there is a symbol\nfollowing the $\\bullet$.", "def move_dot(self):\n    return ExtendedMarkedRule(self.mVariable, \n                              self.mAlpha + (self.mBeta[0],), \n                              self.mBeta[1:],\n                              self.mFollow)\n\nExtendedMarkedRule.move_dot = move_dot\ndel move_dot", "The function to_rule(self) turns the extended marked rule self into a GrammarRule, i.e. the extended marked rule\n$$ c \\rightarrow \\alpha \\bullet \\beta: L $$\nis turned into the grammar rule\n$$ c \\rightarrow \\alpha\\, \\beta. $$", "def to_rule(self):\n    return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)\n\nExtendedMarkedRule.to_rule = to_rule\ndel to_rule", "The function to_marked_rule(self) turns the extended marked rule self into a MarkedRule, i.e. the extended marked rule\n$$ c \\rightarrow \\alpha \\bullet \\beta: L $$\nis turned into the marked rule\n$$ c \\rightarrow \\alpha\\bullet \\beta.
$$", "def to_marked_rule(self):\n    return MarkedRule(self.mVariable, self.mAlpha, self.mBeta)\n\nExtendedMarkedRule.to_marked_rule = to_marked_rule\ndel to_marked_rule", "The class MarkedRule is similar to the class ExtendedMarkedRule but does not have the follow set.", "class MarkedRule():\n    def __init__(self, variable, alpha, beta):\n        self.mVariable = variable\n        self.mAlpha = alpha\n        self.mBeta = beta\n    \n    def __eq__(self, other):\n        return isinstance(other, MarkedRule) and \\\n               self.mVariable == other.mVariable and \\\n               self.mAlpha == other.mAlpha and \\\n               self.mBeta == other.mBeta \n    \n    def __ne__(self, other):\n        return not self.__eq__(other)\n    \n    def __hash__(self):\n        return hash(self.mVariable) + \\\n               hash(self.mAlpha) + \\\n               hash(self.mBeta) \n    \n    def __repr__(self):\n        alphaStr = ' '.join(self.mAlpha)\n        betaStr = ' '.join(self.mBeta)\n        return f'{self.mVariable} → {alphaStr} • {betaStr}'", "Given a set of extended marked rules M, the function combine_rules combines those extended marked rules that have the same core: If \n$$ a \\rightarrow \\beta \\bullet \\gamma : L $$\nis an extended marked rule, then its core is defined as the marked rule\n$$ a \\rightarrow \\beta \\bullet \\gamma. $$\nIf $a \\rightarrow \\beta \\bullet \\gamma : L_1$ and $a \\rightarrow \\beta \\bullet \\gamma : L_2$ are two extended marked rules, then they can be combined into the rule\n$$ a \\rightarrow \\beta \\bullet \\gamma : L_1\\cup L_2 $$", "def combine_rules(M):\n    Result = set()\n    Core = set()\n    for emr1 in M:\n        Follow = set()\n        core1 = emr1.to_marked_rule()\n        if core1 in Core:\n            continue\n        Core.add(core1)\n        for emr2 in M:\n            core2 = emr2.to_marked_rule()\n            if core1 == core2:\n                Follow |= emr2.mFollow\n        new_emr = ExtendedMarkedRule(core1.mVariable, core1.mAlpha, core1.mBeta, frozenset(Follow))\n        Result.add(new_emr)\n    return frozenset(Result)", "LR-Table-Generation\nThe class Grammar represents a context-free grammar.
It stores a list of the GrammarRules of the given grammar.\nEach grammar rule has the form\n$$ a \\rightarrow \\beta. $$\nThe start symbol is assumed to be the variable on the left-hand side of the first rule. The grammar is augmented with the rule\n$$ \\widehat{s} \\rightarrow s. $$\nHere $s$ is the start variable of the given grammar and $\\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation:\n- mStates is the set of all states of the LR-parser. These states are sets of extended marked rules.\n- mStateNames is a dictionary assigning names of the form s0, s1, $\\cdots$, sn to the states stored in \n  mStates. The functions action and goto will be defined for state names, not for states, because \n  otherwise the table representing these functions would become both huge and unreadable.\n- mConflicts is a Boolean variable that will be set to true if the table generation discovers \n  shift/reduce conflicts or reduce/reduce conflicts.", "class Grammar():\n    def __init__(self, Rules):\n        self.mRules = Rules\n        self.mStart = Rules[0].mVariable\n        self.mVariables = collect_variables(Rules)\n        self.mTokens = collect_tokens(Rules)\n        self.mStates = set()\n        self.mStateNames = {}\n        self.mConflicts = False\n        self.mVariables.add('ŝ')\n        self.mTokens.add('$')\n        self.mRules.append(GrammarRule('ŝ', (self.mStart, ))) # augmenting\n        self.compute_tables()", "Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.", "def initialize_dictionary(Variables):\n    return { a: set() for a in Variables }", "Given a Grammar, the function compute_tables computes\n- the sets First(v) and Follow(v) for every variable v,\n- the set of all states of the LR-Parser,\n- the action table, and\n- the goto table.
\nGiven a grammar g,\n- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and\n- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a.", "def compute_tables(self):\n self.mFirst = initialize_dictionary(self.mVariables)\n self.mFollow = initialize_dictionary(self.mVariables)\n self.compute_first()\n self.compute_follow()\n self.compute_rule_names()\n self.all_states()\n self.compute_action_table()\n self.compute_goto_table()\n \nGrammar.compute_tables = compute_tables\ndel compute_tables", "The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later\nto represent reduce actions in the action table.", "def compute_rule_names(self):\n self.mRuleNames = {}\n counter = 0\n for rule in self.mRules:\n self.mRuleNames[rule] = 'r' + str(counter)\n counter += 1\n \nGrammar.compute_rule_names = compute_rule_names\ndel compute_rule_names", "The function compute_first(self) computes the sets $\\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. 
Abstractly, given a variable $c$ the function $\\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$:\n$$\\texttt{First}(\\texttt{c}) := \n  \\Bigl\\{ t \\in T \\Bigm| \\exists \\gamma \\in (V \\cup T)^*: \\texttt{c} \\Rightarrow^* t\\,\\gamma \\Bigr\\}.\n$$\nThe definition of the function $\\texttt{First}()$ is extended to strings from $(V \\cup T)^*$ as follows:\n- $\\texttt{FirstList}(\\varepsilon) = \\{\\}$.\n- $\\texttt{FirstList}(t \\beta) = \\{ t \\}$ if $t \\in T$.\n- $\\texttt{FirstList}(\\texttt{a} \\beta) = \\left\\{\n  \\begin{array}[c]{ll}\n  \\texttt{First}(\\texttt{a}) \\cup \\texttt{FirstList}(\\beta) & \\mbox{if $\\texttt{a} \\Rightarrow^* \\varepsilon$;} \\\\\n  \\texttt{First}(\\texttt{a}) & \\mbox{otherwise.}\n  \\end{array}\n  \\right.\n  $ \nIf $\\texttt{a}$ is a variable of $G$ and the rules defining $\\texttt{a}$ are given as \n$$\\texttt{a} \\rightarrow \\alpha_1 \\mid \\cdots \\mid \\alpha_n, $$\nthen we have\n$$\\texttt{First}(\\texttt{a}) = \\bigcup\\limits_{i=1}^n \\texttt{FirstList}(\\alpha_i). $$\nThe dictionary mFirst that stores this function is computed via a fixed-point iteration.", "def compute_first(self):\n    change = True\n    while change:\n        change = False\n        for rule in self.mRules:\n            a, body = rule.mVariable, rule.mBody\n            first_body = self.first_list(body)\n            if not (first_body <= self.mFirst[a]):\n                change = True\n                self.mFirst[a] |= first_body \n    print('First sets:')\n    for v in self.mVariables:\n        print(f'First({v}) = {self.mFirst[v]}')\n    \nGrammar.compute_first = compute_first\ndel compute_first", "Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\\texttt{FirstList}(\\alpha)$ that has been defined above.
If alpha is nullable, then the result will contain the empty string $\\varepsilon = \\texttt{''}$.", "def first_list(self, alpha):\n    if len(alpha) == 0:\n        return { '' }\n    elif is_var(alpha[0]): \n        v, *r = alpha\n        return eps_union(self.mFirst[v], self.first_list(r))\n    else:\n        t = alpha[0]\n        return { t }\n    \nGrammar.first_list = first_list\ndel first_list", "The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string.", "def eps_union(S, T):\n    if '' in S: \n        if '' in T: \n            return S | T\n        return (S - { '' }) | T\n    return S", "Given an augmented grammar $G = \\langle V,T,R\\cup\\{\\widehat{s} \\rightarrow s\\,\\$\\}, \\widehat{s}\\rangle$ \nand a variable $a$, the set of tokens that might follow $a$ is defined as:\n$$\\texttt{Follow}(a) := \n  \\bigl\\{ t \\in \\widehat{T} \\,\\bigm|\\, \\exists \\beta,\\gamma \\in (V \\cup \\widehat{T})^*: \n  \\widehat{s} \\Rightarrow^* \\beta \\,a\\, t\\, \\gamma \n  \\bigr\\}.\n$$\nThe function compute_follow computes the sets $\\texttt{Follow}(a)$ for all variables $a$ via a fixed-point iteration.", "def compute_follow(self):\n    self.mFollow[self.mStart] = { '$' }\n    change = True\n    while change:\n        change = False\n        for rule in self.mRules:\n            a, body = rule.mVariable, rule.mBody\n            for i in range(len(body)):\n                if is_var(body[i]):\n                    yi = body[i]\n                    Tail = self.first_list(body[i+1:])\n                    firstTail = eps_union(Tail, self.mFollow[a])\n                    if not (firstTail <= self.mFollow[yi]): \n                        change = True\n                        self.mFollow[yi] |= firstTail \n    print('Follow sets (note that \"$\" denotes the end of file):');\n    for v in self.mVariables:\n        print(f'Follow({v}) = {self.mFollow[v]}')\n    \nGrammar.compute_follow = compute_follow\ndel compute_follow", "If $\\mathcal{M}$ is a set of extended marked rules, then the closure of $\\mathcal{M}$ is the smallest set $\\mathcal{K}$ such that\nwe have the following:\n- $\\mathcal{M} \\subseteq \\mathcal{K}$,\n- If $a \\rightarrow \\beta \\bullet c\\, \\delta: L$ is an extended marked
rule from \n  $\\mathcal{K}$, $c$ is a variable, and $t\\in L$ and if, furthermore,\n  $c \\rightarrow \\gamma$ is a grammar rule,\n  then the marked rule $c \\rightarrow \\bullet \\gamma: \\texttt{First}(\\delta\\,t)$\n  is an element of $\\mathcal{K}$:\n  $$(a \\rightarrow \\beta \\bullet c\\, \\delta) \\in \\mathcal{K} \n    \\;\\wedge\\; \n    (c \\rightarrow \\gamma) \\in R\n    \\;\\Rightarrow\\; (c \\rightarrow \\bullet \\gamma: \\texttt{First}(\\delta\\,t)) \\in \\mathcal{K}\n  $$\nWe define $\\texttt{closure}(\\mathcal{M}) := \\mathcal{K}$. The function cmp_closure computes this closure for a given set of extended marked rules via a fixed-point iteration.", "def cmp_closure(self, Marked_Rules):\n    All_Rules = Marked_Rules\n    New_Rules = Marked_Rules\n    while True:\n        More_Rules = set()\n        for rule in New_Rules:\n            c = rule.next_var()\n            if c == None:\n                continue\n            delta = rule.mBeta[1:]\n            L = rule.mFollow\n            for rule in self.mRules:\n                head, alpha = rule.mVariable, rule.mBody\n                if c == head:\n                    newL = frozenset({ x for t in L for x in self.first_list(delta + (t,)) })\n                    More_Rules |= { ExtendedMarkedRule(head, (), alpha, newL) }\n        if More_Rules <= All_Rules:\n            return frozenset(All_Rules)\n        New_Rules = More_Rules - All_Rules\n        All_Rules |= New_Rules\n\nGrammar.cmp_closure = cmp_closure\ndel cmp_closure", "Given a set of extended marked rules $\\mathcal{M}$ and a grammar symbol $X$, the function $\\texttt{goto}(\\mathcal{M}, X)$ \nis defined as follows:\n$$\\texttt{goto}(\\mathcal{M}, X) := \\texttt{closure}\\Bigl( \\bigl\\{ \n   a \\rightarrow \\beta\\, X \\bullet \\delta:L \\bigm| (a \\rightarrow \\beta \\bullet X\\, \\delta:L) \\in \\mathcal{M} \n   \\bigr\\} \\Bigr).\n$$", "def goto(self, Marked_Rules, x):\n    Result = set()\n    for mr in Marked_Rules:\n        if mr.symbol_after_dot() == x:\n            Result.add(mr.move_dot())\n    return combine_rules(self.cmp_closure(Result))\n\nGrammar.goto = goto\ndel goto", "The function all_states computes the set of all states of an LR-parser.
The function starts with the state\n$$ \\texttt{closure}\\bigl({ \\widehat{s} \\rightarrow \\bullet s : {\\$} }\\bigr) $$\nand then tries to compute new states by using the function goto. This computation proceeds via a \nfixed-point iteration. Once all states have been computed, the function assigns names to these states.\nThis association is stored in the dictionary mStateNames.", "def all_states(self): \n start_state = self.cmp_closure({ ExtendedMarkedRule('ŝ', (), (self.mStart,), frozenset({'$'})) })\n start_state = combine_rules(start_state)\n self.mStates = { start_state }\n New_States = self.mStates\n while True:\n More_States = set()\n for Rule_Set in New_States:\n for mr in Rule_Set: \n if not mr.is_complete():\n x = mr.symbol_after_dot()\n next_state = self.goto(Rule_Set, x)\n if next_state not in self.mStates and next_state not in More_States:\n More_States.add(next_state)\n print('.', end='')\n if len(More_States) == 0:\n break\n New_States = More_States;\n self.mStates |= New_States\n print('\\n', len(self.mStates), sep='')\n print(\"All LR-states:\")\n counter = 1\n self.mStateNames[start_state] = 's0'\n print(f's0 = {set(start_state)}')\n for state in self.mStates - { start_state }:\n self.mStateNames[state] = f's{counter}'\n print(f's{counter} = {set(state)}')\n counter += 1\n\nGrammar.all_states = all_states\ndel all_states", "The following function computes the action table and is defined as follows:\n- If $\\mathcal{M}$ contains an extended marked rule of the form $a \\rightarrow \\beta \\bullet t\\, \\delta:L$\n then we have\n $$\\texttt{action}(\\mathcal{M},t) := \\langle \\texttt{shift}, \\texttt{goto}(\\mathcal{M},t) \\rangle.$$\n- If $\\mathcal{M}$ contains an extended marked rule of the form $a \\rightarrow \\beta\\, \\bullet:L$ and we have\n $t \\in L$, then we define\n $$\\texttt{action}(\\mathcal{M},t) := \\langle \\texttt{reduce}, a \\rightarrow \\beta \\rangle$$\n- If $\\mathcal{M}$ contains the extended marked rule $\\widehat{s} 
\\rightarrow s \\bullet:{\\$}$, then we define \n $$\\texttt{action}(\\mathcal{M},\\$) := \\texttt{accept}. $$\n- Otherwise, we have\n $$\\texttt{action}(\\mathcal{M},t) := \\texttt{error}. $$", "def compute_action_table(self):\n self.mActionTable = {}\n print('\\nAction Table:')\n for state in self.mStates:\n stateName = self.mStateNames[state]\n actionTable = {}\n # compute shift actions\n for token in self.mTokens: \n newState = self.goto(state, token)\n if newState != set():\n newName = self.mStateNames[newState]\n actionTable[token] = ('shift', newName)\n self.mActionTable[stateName, token] = ('shift', newName)\n print(f'action(\"{stateName}\", {token}) = (\"shift\", {newName})')\n # compute reduce actions\n for mr in state:\n if mr.is_complete():\n for token in mr.mFollow:\n action1 = actionTable.get(token)\n action2 = ('reduce', mr.to_rule())\n if action1 == None:\n actionTable[token] = action2 \n r = self.mRuleNames[mr.to_rule()]\n self.mActionTable[stateName, token] = ('reduce', r)\n print(f'action(\"{stateName}\", {token}) = {action2}')\n elif action1 != action2: \n self.mConflicts = True\n print('')\n print(f'conflict in state {stateName}:')\n print(f'{stateName} = {state}')\n print(f'action(\"{stateName}\", {token}) = {action1}') \n print(f'action(\"{stateName}\", {token}) = {action2}')\n print('')\n for mr in state:\n if mr == ExtendedMarkedRule('ŝ', (self.mStart,), (), frozenset({'$'})):\n actionTable['$'] = 'accept'\n self.mActionTable[stateName, '$'] = 'accept'\n print(f'action(\"{stateName}\", $) = accept')\n\nGrammar.compute_action_table = compute_action_table\ndel compute_action_table", "The function compute_goto_table computes the goto table.", "def compute_goto_table(self):\n self.mGotoTable = {}\n print('\\nGoto Table:')\n for state in self.mStates:\n for var in self.mVariables:\n newState = self.goto(state, var)\n if newState != set():\n stateName = self.mStateNames[state]\n newName = self.mStateNames[newState]\n self.mGotoTable[stateName, 
var] = newName\n print(f'goto({stateName}, {var}) = {newName}')\n\nGrammar.compute_goto_table = compute_goto_table\ndel compute_goto_table\n\n%%time\ng = Grammar(grammar)\n\ndef strip_quotes(t):\n if t[0] == \"'\" and t[-1] == \"'\":\n return t[1:-1]\n return t\n\ndef dump_parse_table(self, file):\n with open(file, 'w') as handle:\n handle.write('# Grammar rules:\\n')\n for rule in self.mRules:\n rule_name = self.mRuleNames[rule] \n handle.write(f'{rule_name} =(\"{rule.mVariable}\", {rule.mBody})\\n')\n handle.write('\\n# Action table:\\n')\n handle.write('actionTable = {}\\n')\n for s, t in self.mActionTable:\n action = self.mActionTable[s, t]\n t = strip_quotes(t)\n if action[0] == 'reduce':\n rule_name = action[1]\n handle.write(f\"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\\n\")\n elif action == 'accept':\n handle.write(f\"actionTable['{s}', '{t}'] = 'accept'\\n\")\n else:\n handle.write(f\"actionTable['{s}', '{t}'] = {action}\\n\")\n handle.write('\\n# Goto table:\\n')\n handle.write('gotoTable = {}\\n')\n for s, v in self.mGotoTable:\n state = self.mGotoTable[s, v]\n handle.write(f\"gotoTable['{s}', '{v}'] = '{state}'\\n\")\n \nGrammar.dump_parse_table = dump_parse_table\ndel dump_parse_table\n\ng.dump_parse_table('parse-table.py')\n\n!cat parse-table.py", "The command below cleans the directory. If you are running windows, you have to replace rm with del.", "!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp \n!rm -r __pycache__\n\n!ls" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xR86/ml-stuff
labs-digital-image-processing/Lab 3 - Morphological image processing.ipynb
mit
[ "Lab 3 - Morphological image processing <a class=\"tocSkip\">\nImport dependencies", "import numpy as np\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport matplotlib.image as mpimg\n\nimport cv2\n\n%%bash\nls -l | grep .tiff\n\nimg = mpimg.imread('Lab_3_DIP.tiff')\n\nplt.figure(figsize=(15,10))\nplt.imshow(img)", "Erosion / dilation steps", "plt.figure(figsize=(20,20))\n\n\nkernel = np.ones((5,5),np.uint8)\nerosion = cv2.erode(img,kernel,iterations = 1)\ndilation = cv2.dilate(erosion,kernel,iterations = 1)\n\n\nplt.subplot(1,3,1),\nplt.imshow(img),\nplt.title('img')\n\nplt.subplot(1,3,2),\nplt.imshow(erosion),\nplt.title('erosion(img, 1)')\n\nplt.subplot(1,3,3),\nplt.imshow(dilation),\nplt.title('dilate(erosion(img, 1), 1)')\n\nplt.figure(figsize=(20,30))\n\n\nkernel = np.ones((5,5),np.uint8)\nerosion = cv2.erode(img,kernel,iterations = 1)\ndilation = cv2.dilate(erosion,kernel,iterations = 1)\n\n\nplt.subplot(4,3,1),\nplt.imshow(img),\nplt.title('img')\n\nplt.subplot(4,3,2),\nplt.imshow(erosion),\nplt.title('erosion(img, 1)')\n\nplt.subplot(4,3,3),\nplt.imshow(dilation),\nplt.title('dilate(erosion(img, 1), 1)')\n\n\n\nerosion2 = cv2.erode(img,kernel,iterations = 2)\nerosion3 = cv2.erode(img,kernel,iterations = 3)\nerosion4 = cv2.erode(img,kernel,iterations = 4)\n\nplt.subplot(4,3,4),\nplt.imshow(erosion2),\nplt.title('erosion(img, 2)')\n\nplt.subplot(4,3,5),\nplt.imshow(erosion3),\nplt.title('erosion(img, 3)')\n\nplt.subplot(4,3,6),\nplt.imshow(erosion4),\nplt.title('erosion(img, 4)')\n\n\ndilation2 = cv2.dilate(img,kernel,iterations = 2)\ndilation3 = cv2.dilate(img,kernel,iterations = 3)\ndilation4 = cv2.dilate(img,kernel,iterations = 4)\n\nplt.subplot(4,3,7),\nplt.imshow(dilation2),\nplt.title('dilate(img, 2)')\n\nplt.subplot(4,3,8),\nplt.imshow(dilation3),\nplt.title('dilate(img, 3)')\n\nplt.subplot(4,3,9),\nplt.imshow(dilation4),\nplt.title('dilate(img, 4)')\n\n\ndil_1_ero_2 = cv2.dilate(\n cv2.erode(img,kernel,iterations = 2)\n 
,kernel,iterations = 1\n)\ndil_2_ero_1 = cv2.dilate(\n cv2.erode(img,kernel,iterations = 1)\n ,kernel,iterations = 2\n)\ndil_2_ero_2 = cv2.dilate(\n cv2.erode(img,kernel,iterations = 2)\n ,kernel,iterations = 2\n)\nplt.subplot(4,3,10),\nplt.imshow(dil_1_ero_2),\nplt.title('dilate(erosion(img, 2), 1)')\n\nplt.subplot(4,3,11),\nplt.imshow(dil_2_ero_1),\nplt.title('dilate(erosion(img, 1), 2)')\n\nplt.subplot(4,3,12),\nplt.imshow(dil_2_ero_2),\nplt.title('dilate(erosion(img, 2), 2)')\n\n\n# plt.tight_layout()\nplt.subplots_adjust(wspace=0, hspace=0.1)\nplt.show()\n\nplt.figure(figsize=(70,40))\n\nplt.subplot(1,3,1),\nplt.imshow(dilation),\nplt.title('dilate(erosion(img, 1), 1)')\n\nplt.subplot(1,3,2),\nplt.imshow(dil_2_ero_1),\nplt.title('dilate(erosion(img, 1), 2)')\n\nplt.subplot(1,3,3),\nplt.imshow(dil_2_ero_2),\nplt.title('dilate(erosion(img, 2), 2)')", "Bibliography\n\nhttp://www.homepages.ucl.ac.uk/~zceeg99/imagepro/ocr.pdf" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
opencobra/cobrapy
documentation_builder/media.ipynb
gpl-2.0
[ "Growth media\nThe availability of nutrients has a major impact on metabolic fluxes and cobrapy provides some helpers to manage the exchanges between the external environment and your metabolic model. In experimental settings the \"environment\" is usually constituted by the growth medium, ergo the concentrations of all metabolites and co-factors available to the modeled organism. However, constraint-based metabolic models only consider fluxes. Thus, you can not simply use concentrations since fluxes have the unit mmol / [gDW h] (concentration per gram dry weight of cells and hour). \nAlso, you are setting an upper bound for the particular import flux and not the flux itself. There are some crude approximations. For instance, if you supply 1 mol of glucose every 24h to 1 gram of bacteria you might set the upper exchange flux for glucose to 1 mol / [1 gDW * 24 h] since that is the nominal maximum that can be imported. There is no guarantee however that glucose will be consumed with that flux. Thus, the preferred data for exchange fluxes are direct flux measurements as the ones obtained from timecourse exo-metabolome measurements for instance. \nSo how does that look in COBRApy? The current growth medium of a model is managed by the medium attribute.", "from cobra.io import load_model\n\nmodel = load_model(\"textbook\")\nmodel.medium", "This will return a dictionary that contains the upper flux bounds for all active exchange fluxes (the ones having non-zero flux bounds). Right now we see that we have enabled aerobic growth. You can modify a growth medium of a model by assigning a dictionary to model.medium that maps exchange reactions to their respective upper import bounds.
For now let us enforce anaerobic growth by shutting off the oxygen import.", "medium = model.medium\nmedium[\"EX_o2_e\"] = 0.0\nmodel.medium = medium\n\nmodel.medium", "As we can see oxygen import is now removed from the list of active exchanges and we can verify that this also leads to a lower growth rate.", "model.slim_optimize()", "There is a small trap here. model.medium can not be assigned to directly. So the following will not work:", "model.medium[\"EX_co2_e\"] = 0.0\nmodel.medium", "As you can see EX_co2_e is not set to zero. This is because model.medium is just a copy of the current exchange fluxes. Assigning to it directly with model.medium[...] = ... will not change the model. You have to assign an entire dictionary with the changed import flux upper bounds:", "medium = model.medium\nmedium[\"EX_co2_e\"] = 0.0\nmodel.medium = medium\n\nmodel.medium # now it worked", "Setting the growth medium also connects to the context manager, so you can set a specific growth medium in a reversible manner.", "model = load_model(\"textbook\")\n\nwith model:\n medium = model.medium\n medium[\"EX_o2_e\"] = 0.0\n model.medium = medium\n print(model.slim_optimize())\nprint(model.slim_optimize())\nmodel.medium", "So the medium change is only applied within the with block and reverted automatically.\nMinimal media\nIn some cases you might be interested in the smallest growth medium that can maintain a specific growth rate, the so called \"minimal medium\". For this we provide the function minimal_medium which by default obtains the medium with the lowest total import flux. This function needs two arguments: the model and the minimum growth rate (or other objective) the model has to achieve.", "from cobra.medium import minimal_medium\n\nmax_growth = model.slim_optimize()\nminimal_medium(model, max_growth)", "So we see that growth is actually limited by glucose import.\nAlternatively you might be interested in a minimal medium with the smallest number of active imports. 
This can be achieved by using the minimize_components argument (note that this uses a MIP formulation and will therefore be much slower).", "minimal_medium(model, 0.1, minimize_components=True)", "When minimizing the number of import fluxes there may be many alternative solutions. To obtain several of those you can also pass a positive integer to minimize_components which will give you at most that many alternative solutions. Let us try that with our model and also use the open_exchanges argument which will assign a large upper bound to all import reactions in the model. The return type will be a pandas.DataFrame.", "minimal_medium(model, 0.8, minimize_components=8, open_exchanges=True)", "So there are 4 alternative solutions in total. One aerobic and three anaerobic ones using different carbon sources.\nBoundary reactions\nApart from exchange reactions there are other types of boundary reactions such as demand or sink reactions. cobrapy uses various heuristics to identify those and they can be accessed by using the appropriate attribute.\nFor exchange reactions:", "ecoli = load_model(\"iJO1366\")\necoli.exchanges[0:5]", "For demand reactions:", "ecoli.demands", "For sink reactions:", "ecoli.sinks", "All boundary reactions (any reaction that consumes or introduces mass into the system) can be obtained with the boundary attribute:", "ecoli.boundary[0:10]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/model_optimization/guide/pruning/comprehensive_guide.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "プルーニングの総合ガイド\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide\"> <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> TensorFlow.org で表示</a>\n</td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/pruning/comprehensive_guide.ipynb\"> <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Google Colab で実行</a>\n</td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/pruning/comprehensive_guide.ipynb\"> <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"> GitHubでソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/model_optimization/guide/pruning/comprehensive_guide.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\nKeras 重みプルーニングの総合ガイドへようこそ。\nこのページでは、さまざまなユースケースを示し、それぞれで API を使用する方法を説明します。どの API が必要であるかを特定したら、API ドキュメントでパラメータと詳細を確認してください。\n\nプルーニングのメリットとサポート対象を確認する場合は、概要をご覧ください。\n単一のエンドツーエンドの例については、プルーニングの例をご覧ください。\n\n次のユースケースについて説明しています。\n\nプルーニングされたモデルの定義とトレーニング\nSequential と Functional\nKeras model.fit 
and custom training loops\n\n\nCheckpoint and deserialize a pruned model\nDeploy a pruned model and see the compression benefits\n\nFor configuration of the pruning algorithm, see the tfmot.sparsity.keras.prune_low_magnitude API docs.\nSetup\nFor finding the APIs you need and understanding purposes, you can run but skip reading this section.", "! pip install -q tensorflow-model-optimization\n\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow_model_optimization as tfmot\n\n%load_ext tensorboard\n\nimport tempfile\n\ninput_shape = [20]\nx_train = np.random.randn(1, 20).astype(np.float32)\ny_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)\n\ndef setup_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(20, input_shape=input_shape),\n tf.keras.layers.Flatten()\n ])\n return model\n\ndef setup_pretrained_weights():\n model = setup_model()\n\n model.compile(\n loss=tf.keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy']\n )\n\n model.fit(x_train, y_train)\n\n _, pretrained_weights = tempfile.mkstemp('.tf')\n\n model.save_weights(pretrained_weights)\n\n return pretrained_weights\n\ndef get_gzipped_model_size(model):\n # Returns size of gzipped model, in bytes.\n import os\n import zipfile\n\n _, keras_file = tempfile.mkstemp('.h5')\n model.save(keras_file, include_optimizer=False)\n\n _, zipped_file = tempfile.mkstemp('.zip')\n with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:\n f.write(keras_file)\n\n return os.path.getsize(zipped_file)\n\nsetup_model()\npretrained_weights = setup_pretrained_weights()", "Define model\nPrune whole model (Sequential and Functional)\nTips for better model accuracy:\n\nTry \"Prune some layers\" to skip pruning the layers that reduce accuracy the most.\nIt's generally better to finetune with pruning as opposed to training from scratch.\n\nTo make the whole model train with pruning, apply tfmot.sparsity.keras.prune_low_magnitude to the model.", "base_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended.\n\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\nmodel_for_pruning.summary()", 
"Prune some layers (Sequential and Functional)\nPruning a model can have a negative effect on accuracy, so you can selectively prune layers of a model to explore the trade-off between accuracy, speed, and model size.\nTips for better model accuracy:\n\nIt's generally better to finetune with pruning as opposed to training from scratch.\nTry pruning the later layers instead of the first layers.\nAvoid pruning critical layers (e.g. attention mechanisms).\n\nMore:\n\nThe tfmot.sparsity.keras.prune_low_magnitude API docs provide details on how to vary the pruning configuration per layer.\n\nIn the example below, only the Dense layers are pruned.", "# Create a base model\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\n\n# Helper function uses `prune_low_magnitude` to make only the \n# Dense layers train with pruning.\ndef apply_pruning_to_dense(layer):\n if isinstance(layer, tf.keras.layers.Dense):\n return tfmot.sparsity.keras.prune_low_magnitude(layer)\n return layer\n\n# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense` \n# to the layers of the model.\nmodel_for_pruning = tf.keras.models.clone_model(\n base_model,\n clone_function=apply_pruning_to_dense,\n)\n\nmodel_for_pruning.summary()", "While this example used the type of the layer to decide what to prune, the easiest way to prune a particular layer is to set its name property and look for that name in the clone_function.", "print(base_model.layers[0].name)", "More readable but potentially lower model accuracy\nThis is not compatible with fine-tuning with pruning, which is why it may be less accurate than the examples above, which support fine-tuning.\nWhile prune_low_magnitude can be applied while defining the initial model, loading the weights afterwards does not work in the examples below.\nFunctional example", "# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.\ni = tf.keras.Input(shape=(20,))\nx = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)\no = tf.keras.layers.Flatten()(x)\nmodel_for_pruning = tf.keras.Model(inputs=i, outputs=o)\n\nmodel_for_pruning.summary()", "Sequential example", "# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.\nmodel_for_pruning = tf.keras.Sequential([\n tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),\n tf.keras.layers.Flatten()\n])\n\nmodel_for_pruning.summary()", 
"Prune a custom Keras layer or modify which parts of a layer to prune\nCommon mistake: pruning the bias usually harms model accuracy too much.\ntfmot.sparsity.keras.PrunableLayer serves two use cases:\n\nPrune a custom Keras layer\nModify which parts of a built-in Keras layer to prune\n\nFor example, the API defaults to only pruning the kernel of the Dense layer. The example below also prunes the bias.", "class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):\n\n def get_prunable_weights(self):\n # Prune bias also, though that usually harms model accuracy too much.\n return [self.kernel, self.bias]\n\n# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.\nmodel_for_pruning = tf.keras.Sequential([\n tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),\n tf.keras.layers.Flatten()\n])\n\nmodel_for_pruning.summary()\n", "Train model\nModel.fit\nCall the tfmot.sparsity.keras.UpdatePruningStep callback during training.\nTo help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\nlog_dir = tempfile.mkdtemp()\ncallbacks = [\n tfmot.sparsity.keras.UpdatePruningStep(),\n # Log sparsity and other metrics in Tensorboard.\n tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)\n]\n\nmodel_for_pruning.compile(\n loss=tf.keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy']\n)\n\nmodel_for_pruning.fit(\n x_train,\n y_train,\n callbacks=callbacks,\n epochs=2,\n)\n\n#docs_infra: no_execute\n%tensorboard --logdir={log_dir}", "For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.\nCustom training loop\nCall the tfmot.sparsity.keras.UpdatePruningStep callback during training.\nTo help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n# Boilerplate\nloss = tf.keras.losses.categorical_crossentropy\noptimizer = tf.keras.optimizers.Adam()\nlog_dir = tempfile.mkdtemp()\nunused_arg = -1\nepochs = 2\nbatches = 1 # example is hardcoded so that the number of batches cannot change.\n\n# Non-boilerplate.\nmodel_for_pruning.optimizer = optimizer\nstep_callback = tfmot.sparsity.keras.UpdatePruningStep()\nstep_callback.set_model(model_for_pruning)\nlog_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.\nlog_callback.set_model(model_for_pruning)\n\nstep_callback.on_train_begin() # run pruning callback\nfor _ in range(epochs):\n log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback\n for _ in range(batches):\n step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback\n\n with tf.GradientTape() as tape:\n logits = model_for_pruning(x_train, training=True)\n loss_value = loss(y_train, logits)\n grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)\n optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))\n\n step_callback.on_epoch_end(batch=unused_arg) # run pruning callback\n\n#docs_infra: no_execute\n%tensorboard --logdir={log_dir}", 
"For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.\nImprove pruned model accuracy\nFirst, look at the tfmot.sparsity.keras.prune_low_magnitude API docs to understand what a pruning schedule is and the math of each type of pruning schedule.\nTips:\n\n\nHave a learning rate that is not too high or too low while the model is pruning. Consider making the pruning schedule a hyperparameter.\n\n\nAs a quick test, try experimentally pruning a model to the final sparsity at the beginning of training by setting begin_step to 0 with a tfmot.sparsity.keras.ConstantSparsity schedule. You might get lucky with good results.\n\n\nDo not prune very frequently, to give the model time to recover. The pruning schedule provides a decent default frequency.\n\n\nFor general ideas to improve model accuracy, look for tips for your use case(s) under \"Define model\".\n\n\nCheckpoint and deserialize\nYou must preserve the optimizer step during checkpointing. This means that while you can use Keras HDF5 models for checkpointing, you cannot use Keras HDF5 weights.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n_, keras_model_file = tempfile.mkstemp('.h5')\n\n# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).\nmodel_for_pruning.save(keras_model_file, include_optimizer=True)", "The above applies generally. The code below is needed only for the HDF5 model format (not for HDF5 weights or other formats).", "# Deserialize model.\nwith tfmot.sparsity.keras.prune_scope():\n loaded_model = tf.keras.models.load_model(keras_model_file)\n\nloaded_model.summary()", "Deploy pruned model\nExport model with size compression\nCommon mistake: both strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of pruning.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n# Typically you train the model here.\n\nmodel_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)\n\nprint(\"final model\")\nmodel_for_export.summary()\n\nprint(\"\\n\")\nprint(\"Size of gzipped pruned model without stripping: %.2f bytes\" % (get_gzipped_model_size(model_for_pruning)))\nprint(\"Size of gzipped pruned model with stripping: %.2f bytes\" % (get_gzipped_model_size(model_for_export)))", "Hardware-specific optimizations\nOnce different backends enable pruning to improve latency, using block sparsity can improve latency for certain hardware.\nIncreasing the block size will decrease the peak sparsity that is achievable for a target model accuracy. Despite this, latency can still improve.\nFor details on what is supported for block sparsity, see the tfmot.sparsity.keras.prune_low_magnitude API docs.", "base_model = setup_model()\n\n# For using intrinsics on a CPU with 128-bit registers, together with 8-bit\n# quantized weights, a 1x16 block size is nice because the block perfectly\n# fits into the register.\npruning_params = {'block_size': [1, 16]}\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)\n\nmodel_for_pruning.summary()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
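The low-magnitude pruning that `prune_low_magnitude` wraps around a layer boils down to zeroing the smallest-magnitude weights until a target sparsity is reached. A minimal pure-Python sketch of that idea (illustrative only — the `magnitude_prune` helper here is hypothetical and is not part of the TFMOT API):

```python
def magnitude_prune(weights, target_sparsity):
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `target_sparsity` of them become exactly zero."""
    k = int(target_sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Threshold = magnitude of the k-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.3, -0.05, 0.8, 0.01, -0.6, 0.12, -0.02, 0.45]
pruned = magnitude_prune(weights, 0.5)
print(pruned)  # [0.3, 0.0, 0.8, 0.0, -0.6, 0.0, 0.0, 0.45]
```

The real API does this gradually over training steps according to a pruning schedule, which is why the model gets a chance to recover accuracy between pruning events.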
emiliom/stuff
DRB_vizer_json_services.ipynb
cc0-1.0
[ "DRB Vizer json services\n10/30/2015. Emilio Mayorga. \nRun with Emilio's \"IOOS_test1\" conda environment (recent versions of most packages)\nhttp://www.wikiwatershed-vs.org examples:\n- meta requests\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=meta&asset_type=siso\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=meta&asset_type=siso&asset_id=CRBCZO_WCC019\n- data & recent_values requests\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=data&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=all&units_mode=v1\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=data&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=H1_WaterTemp\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=recent_values&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=H1_WaterTemp&units_mode=v2\n- plot request\n - http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=plot&asset_type=siso&asset_id=USGS_01474500&var_id=H1_WaterTemp&range=30d&y_axis_mode=global\nImport packages and set up utility functions", "import json\nimport requests\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport datetime\nimport time\nimport calendar\nimport pytz\n#from matplotlib.dates import date2num, num2date\n\nutc_tz = pytz.utc\n\ndef epochsec_to_dt(epochsec):\n \"\"\" Return the datetime object for epoch seconds epochsec\n \"\"\"\n dtnaive_dt = datetime.datetime.utcfromtimestamp(epochsec)\n dtutc_dt = dtnaive_dt.replace(tzinfo=pytz.utc)\n return dtutc_dt\n\ndef get_measurement_byvarid(metaresult, var_id):\n return [e for e in metaresult['measurements'] if e['var_id'] == var_id][0]", "Service end point\nFor the vizer instance, which currently serves only the CRB and outlying areas but will be expanded to the DRB to support the William Penn project needs.", "vz_gai_url = \"http://www.wikiwatershed-vs.org/services/get_asset_info.php\"", "Meta info (metadata) requests\nsiso 
= \"stationary in-situ observations\" (e.g., a river gage, weather station, moored buoy, etc.)", "meta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta'})\nmeta = meta_r.json()\n\nmeta.keys(), meta['success']\n\ntype(meta['result']), len(meta['result'])\n\n# siso_id is the unique identifier (string type) for the station\nsiso_id_lst = [e['siso_id'] for e in meta['result']]\n\n# Examine the response for the first station (index 0) in the returned list\nmeta['result'][0]['siso_id']\n\nmeta['result'][0]", "Examine all stations (siso assets) by first importing into a pandas DataFrame", "stations_rec = []\nfor sta in meta['result']:\n sta_rec = {key:sta[key] for key in ['siso_id', 'name', 'lat', 'lon', \n 'platform_type', 'provider']}\n stations_rec.append(sta_rec)\n\nstations_df = pd.DataFrame.from_records(stations_rec)\nstations_df.set_index('siso_id', inplace=True, verify_integrity=True)\nstations_df.index.name = 'siso_id'\nprint len(stations_df)\n\nstations_df.head(10)", "Summaries (station counts) by platform_type and provider", "stations_df.platform_type.value_counts()\n\nstations_df.provider.value_counts()", "Request and examine one station", "# USGS Schuylkill River at Philadelphia\nsiso_id = 'USGS_01474500'", "Meta info request", "# (asset_id, a more generic descriptor for the unique id of any asset)\nmeta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta', \n 'asset_id':siso_id})\n\n# use [0] to pull out the dict from the single-element list\nmetaresult = meta_r.json()['result'][0] # ideally, should first test for success\n\n# var_id is the unique identifier for a \"measurement\" (or variable)\n[(d['var_id'], d['depth']) for d in metaresult['measurements']]\n\nmetaresult['name']", "Data request", "data_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'data', 'units_mode': 'v1',\n 'asset_id':siso_id, 'var_id':'all'})\ndata = data_r.json()\n\ndata['success'], len(data['result']), data['result'][0].keys(), 
len(data['result'][0]['data'])\n\n# Mapping of var_id string to 'result' list element index\nvar_ids = {e['var_id']:i for i,e in enumerate(data['result'])}\nvar_ids\n\nvar_id = 'H1_Discharge'\n\nget_measurement_byvarid(metaresult, var_id)\n\ndata['result'][var_ids[var_id]]['data'][-10:]\n\n# Pull out data time series for one variable based on var_id\n# returns a list of dicts\ndata_lst = data['result'][var_ids[var_id]]['data']\n\ndata_df = pd.DataFrame.from_records(data_lst)\n\ndata_df.head()", "Create dtutc column with parsed datetime. Also, it's safer to rename the \"value\" column to something unlikely to conflict with pandas method names.", "data_df['dtutc'] = data_df.time.map(lambda es: epochsec_to_dt(es))\ndata_df.set_index('dtutc', inplace=True, verify_integrity=True)\ndata_df.index.name = 'dtutc'\ndata_df = data_df.rename(columns={'value':var_id})\n\ndata_df.info()\n\ndata_df.head()\n\ndata_df.describe()\n\nvar_info = get_measurement_byvarid(metaresult, var_id)\ntitle = \"%s (%s) at %s\" % (var_info['name'], var_id, metaresult['name'])\ndata_df[var_id].plot(title=title, figsize=[11,5])\nplt.ylabel(var_info['units']);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
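The notebook's `epochsec_to_dt` helper pairs `utcfromtimestamp` with a `pytz` tz-replace because it targets Python 2; on Python 3 the same UTC-aware conversion can be written with the standard library alone. A hedged sketch of that equivalent (assumes nothing beyond the stdlib):

```python
import datetime

def epochsec_to_dt(epochsec):
    # POSIX epoch seconds -> timezone-aware UTC datetime,
    # equivalent in effect to the notebook's pytz-based helper.
    return datetime.datetime.fromtimestamp(epochsec, tz=datetime.timezone.utc)

print(epochsec_to_dt(0).isoformat())    # 1970-01-01T00:00:00+00:00
print(epochsec_to_dt(1446163200).date())  # 2015-10-30, around the notebook's run date
```

Keeping the result timezone-aware matters for the DataFrame index later: it makes comparisons against other UTC timestamps unambiguous.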
ES-DOC/esdoc-jupyterhub
notebooks/ncar/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: NCAR\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:22\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. 
Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
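Every property in the notebook follows the same pattern: `DOC.set_id(...)` selects a CMIP6 property, then `DOC.set_value(...)` records a value, and for ENUM or BOOLEAN properties the value must come from the listed choices. A toy stand-in illustrating that flow (the `MockDoc` class is hypothetical, not the real pyesdoc `NotebookOutput` class):

```python
class MockDoc:
    """Hypothetical sketch of the set_id/set_value pattern used in the notebook."""
    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, prop_id):
        # Select which property subsequent set_value calls record against.
        self._current_id = prop_id

    def set_value(self, value, choices=None):
        # Constrained properties only accept values from their declared choices.
        if choices is not None and value not in choices:
            raise ValueError("%r is not a valid choice" % (value,))
        self.values.setdefault(self._current_id, []).append(value)

doc = MockDoc()
doc.set_id('cmip6.landice.grid.adaptive_grid')
doc.set_value(False, choices=[True, False])
doc.set_id('cmip6.landice.ice.dynamics.approximation')
doc.set_value('SIA', choices=['SIA', 'SAA', 'full stokes'])
print(doc.values)
```

Properties with Cardinality 1.N append multiple values under one id, which is why the sketch stores a list per property.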
CELMA-project/CELMA
derivations/laplacianInversion/ForwardsBackwardsNonSymmetric.ipynb
lgpl-3.0
[ "Derivation of the inversion stencil using a non-symmetric forward-backward scheme\nDerivation of a non-symmetric stencil of \n$$b = \\nabla\\cdot(A\\nabla_\\perp f)+Bf$$\nusing a forward stencil on $\\nabla\\cdot(A\\nabla_\\perp f)$, and a backward stencil on $\\nabla_\\perp f$.\nThe stencil will not be symmetric as $f(x-h_x)$, $f(x)$ and $f(x+h_x)$ would be multiplied with $J(x,y)$. Symmetry requires that $f(x-h_x)$ is multiplied with $J(x-h_x)$, $f(x)$ with $J(x)$ and $f(x+h_x)$ with $J(x+h_x)$.\nFor the symmetric version, see ForwardsBackwards.ipynb and BackwardsForwards.ipynb", "from IPython.display import display\nfrom sympy import init_printing\nfrom sympy import symbols, expand, together, as_finite_diff, collect\nfrom sympy import Function, Eq, Subs\nfrom collections import deque\n\ninit_printing()\n\ndef finiteDifferenceOfOneTerm(factors, wrt, stencil):\n \"\"\"\n Finds the finite difference approximation of a term consisting of several factors\n \n Input:\n factors - An iterable containing the factors of the term\n wrt - Take the derivative of the term with respect to this variable\n stencil - An iterable containing the points to be used in the stencil\n \n Output\n term - The finite difference approximation of the term\n \"\"\"\n # Take the derivative\n factorsDiff = []\n for factor in factors:\n factorsDiff.append(as_finite_diff(factor.diff(wrt), stencil))\n \n # Putting together terms\n term = 0\n # Make object for cyclic permutation\n cyclPerm = deque(range(len(factors)))\n for perm in range(len(cyclPerm)):\n # Initialize a dummy term to store temporary variables in\n curTerm = factorsDiff[cyclPerm[0]]\n for permNr in range(1,len(factors)):\n curTerm *= factors[cyclPerm[permNr]]\n # Make a cyclic permutation\n cyclPerm.rotate(1)\n term += curTerm\n return term\n\ndef fromFunctionToGrid(expr, syms):\n \"\"\"\n Change from @(x,z) to @_xz, where @ represents a function\n \n Input:\n expr - The expression to change\n syms - symbols('@_xz, @_xp1z, @_xm1z, 
@_xzp1, @_xzm1')\n xp1 = x+hx\n zm1 = z-hz\n etc.\n \"\"\"\n curFun = str(syms[0]).split('_')[0]\n for sym in syms:\n curSuffix = str(sym).split('_')[1]\n if curSuffix == 'xz':\n expr = expr.subs(Function(curFun)(x,z), sym)\n elif curSuffix == 'xp1z':\n expr = expr.subs(Subs(Function(curFun)(x,z), x, x+hx).doit(), sym)\n elif curSuffix == 'xm1z':\n expr = expr.subs(Subs(Function(curFun)(x,z), x, x-hx).doit(), sym)\n elif curSuffix == 'xzp1':\n expr = expr.subs(Subs(Function(curFun)(x,z), z, z+hz).doit(), sym)\n elif curSuffix == 'xzm1':\n expr = expr.subs(Subs(Function(curFun)(x,z), z, z-hz).doit(), sym)\n\n return expr\n\nx, z, hx, hz = symbols('x, z, h_x, h_z')\nhx, hz = symbols('h_x, h_z', positive=True)\n\nf = Function('f')(x, z)\nA = Function('A')(x, z)\nB = Function('B')(x, z)\ngxx = Function('g^x^x')(x, z)\ngzz = Function('g^z^z')(x, z)\nJ = Function('J')(x, z)\n\n# Dummy function\ng = Function('g')(x,z)\n\n# Stencils\nbackwardX = [x-hx, x]\nforwardX = [x, x+hx]\nbackwardZ = [z-hz, z]\nforwardZ = [z, z+hz]", "We are here discretizing the equation\n$$ b =\n\\nabla\\cdot(A\\nabla_\\perp f)+Bf\n\\simeq\n\\frac{1}{J}\\partial_x \\left(JAg^{xx}\\partial_x f\\right)\n+ \\frac{1}{J}\\partial_z \\left(JAg^{zz}\\partial_z f\\right) + Bf$$\nwhere the derivatives in $y$ have been assumed small in non-orthogonal grids.\nWe will let $T$ denote \"term\", the superscript $^F$ denote a forward stencil, and the superscript $^B$ denote a backward stencil.\nNOTE:\nsympy has a built-in function as_finite_diff, which could do the derivation easily for us. 
However, it fails if\n\nNon-derivative terms or factors are present in the expression\nThe expression is a Subs object (for example unevaluated derivatives calculated at a point)\n\nWe therefore do this in a slightly tedious way.\nCalculating the first term\nCalculate the finite difference approximation of $\\partial_x f$", "fx = f.diff(x)\nfxB = as_finite_diff(fx, backwardX)\ndisplay(Eq(symbols('f_x'), fx))\ndisplay(Eq(symbols('f_x^B'), together(fxB)))", "Calculate the finite difference approximation of $\\frac{1}{J}\\partial_x \\left(JAg^{xx}\\partial_x f\\right)$\nWe start by making the substitution $\\partial_x f \\to g$ and calculate the first term of the equation under consideration", "# Define the factors\nfactors = [J, A, gxx, g]\nterm1 = finiteDifferenceOfOneTerm(factors, x, forwardX)\nterm1 /= J\ndisplay(Eq(symbols('T_1^F'), term1))", "We now back substitute $g\\to \\partial_x f$", "term1 = term1.subs(Subs(g,x,x+hx).doit(), Subs(fxB,x,x+hx).doit())\nterm1 = term1.subs(g, fxB)\ndisplay(Eq(symbols('T_1^F'), term1))", "Calculating the second term\nCalculate the finite difference approximation of $\\partial_z f$", "fz = f.diff(z)\nfzB = as_finite_diff(fz, backwardZ)\ndisplay(Eq(symbols('f_z'), fz))\ndisplay(Eq(symbols('f_z^B'), together(fzB)))", "Calculate the finite difference approximation of $\\frac{1}{J}\\partial_z \\left(JAg^{zz}\\partial_z f\\right)$\nWe start by making the substitution $\\partial_z f \\to g$ and calculate the second term of the equation under consideration", "# Define the factors\nfactors = [J, A, gzz, g]\nterm2 = finiteDifferenceOfOneTerm(factors, z, forwardZ)\nterm2 /= J\ndisplay(Eq(symbols('T_2^F'), term2))\n\nterm2 = term2.subs(Subs(g,z,z+hz).doit(), Subs(fzB,z,z+hz).doit())\nterm2 = term2.subs(g, fzB)\ndisplay(Eq(symbols('T_2'), term2))", "Calculating the third term", "term3 = B*f\ndisplay(Eq(symbols('T_3^F'), term3))", "Collecting terms", "b = term1 + term2 + term3\ndisplay(Eq(symbols('b'), b))\n\n# Converting to grid 
syntax\nfunctions = ['f', 'A', 'J', 'g^x^x', 'g^z^z', 'B']\nfor func in functions:\n curStr = '{0}_xz, {0}_xp1z, {0}_xm1z, {0}_xzp1, {0}_xzm1'.format(func)\n syms = symbols(curStr)\n b = fromFunctionToGrid(b, syms)\n\n# We must expand before we collect\nb = collect(expand(b), symbols('f_xz, f_xp1z, f_xm1z, f_xzp1, f_xzm1'), exact=True)\ndisplay(Eq(symbols('b'),b))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
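The notebook above derives backward-difference stencils symbolically with SymPy's `as_finite_diff`. As a numeric sanity check of the same idea, here is a plain-Python sketch, independent of the SymPy derivation; the function name `backward_diff` is illustrative and not part of the notebook:

```python
import math

def backward_diff(f, x, h):
    """First-order backward-difference approximation of f'(x)."""
    return (f(x) - f(x - h)) / h

# Check against the analytic derivative d/dx sin(x) = cos(x)
x0 = 1.3
err_coarse = abs(backward_diff(math.sin, x0, 1e-2) - math.cos(x0))
err_fine = abs(backward_diff(math.sin, x0, 1e-3) - math.cos(x0))

# For a first-order scheme, shrinking h by 10x shrinks the error by roughly 10x
print(err_coarse, err_fine)
```

This first-order convergence is exactly what makes the symbolic stencils above trustworthy once they are transcribed into grid notation.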
pligor/predicting-future-product-prices
04_time_series_prediction/30_price_history_dataset_per_mobile_phone-arima.ipynb
agpl-3.0
[ "# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2\n\nfrom __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\n#from models.price_history_21_seq2seq_dyn_dec_ins import PriceHistorySeq2SeqDynDecIns\nfrom data_providers.PriceHistoryMobileAttrsCombinator import PriceHistoryMobileAttrsCombinator\nfrom sklearn.neighbors import NearestNeighbors\nfrom datetime import datetime\nfrom data_providers.price_hist_with_relevant_deals import PriceHistWithRelevantDeals\nfrom data_providers.price_history_29_dataset_per_mobile_phone import PriceHistoryDatasetPerMobilePhone\nfrom arima.arima_estimator import ArimaEstimator\nimport warnings\nfrom collections import OrderedDict\nfrom mylibs.py_helper import cartesian_coord\nfrom arima.arima_cv import ArimaCV\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = 
np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline", "Step 0 - hyperparams\nvocab_size (all the potential words you could have, in the classification-for-translation case) and the max sequence length are the SAME thing.\nDecoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, it is not a priority right now", "input_len = 60\ntarget_len = 30\nbatch_size = 50\nwith_EOS = False\n\ncsv_in = '../price_history_03_seq_start_suddens_trimmed.csv'", "Actual Run", "data_path = '../../../../Dropbox/data'\nph_data_path = data_path + '/price_history'\nassert path.isdir(ph_data_path)\n\nnpz_full = ph_data_path + '/price_history_per_mobile_phone.npz'\n\n#dataset_gen = PriceHistoryDatasetPerMobilePhone(random_state=random_state)\n\ndic = np.load(npz_full)\ndic.keys()[:10]", "Arima", "parameters = OrderedDict([\n ('p_auto_regression_order', range(6)), #0-5\n ('d_integration_level', range(3)), #0-2\n ('q_moving_average', range(6)), #0-5\n])\n\ncart = cartesian_coord(*parameters.values())\ncart.shape\n\ncur_key = dic.keys()[0]\ncur_key\n\ncur_sku = dic[cur_key][()]\ncur_sku.keys()\n\ntrain_mat = cur_sku['train']\ntrain_mat.shape\n\ntarget_len\n\ninputs = train_mat[:, :-target_len]\ninputs.shape\n\ntargets = train_mat[:, -target_len:]\ntargets.shape\n\neasy_mode = False\n\nscore_dic_filepath = data_path + "/arima/scoredic_easy_mode_{}_{}.npy".format(easy_mode, cur_key)\npath.abspath(score_dic_filepath)\n\n%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings("ignore")\n scoredic = ArimaCV.cross_validate(inputs=inputs, targets=targets, cartesian_combinations=cart,\n score_dic_filepath=score_dic_filepath, easy_mode=easy_mode)\n\n#4h 4min 51s / 108 cases => ~= 136 seconds per case !\n\narr = np.array(list(scoredic.iteritems()))\narr.shape\n\n#np.isnan()\nfiltered_arr = arr[ np.logical_not(arr[:, 
1] != arr[:, 1]) ]\nfiltered_arr.shape\n\nplt.plot(filtered_arr[:, 1])\n\nminarg = np.argmin(filtered_arr[:, 1])\nminarg\n\nbest_params = filtered_arr[minarg, 0]\nbest_params\n\ntest_mat = cur_sku['test']\ntest_ins = test_mat[:-target_len]\ntest_ins.shape\n\ntest_tars = test_mat[-target_len:]\ntest_tars.shape\n\ntest_ins_vals = test_ins.values.reshape(1, -1)\ntest_ins_vals.shape\n\ntest_tars_vals = test_tars.values.reshape(1, -1)\ntest_tars_vals.shape", "Testing with easy mode on", "%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings("ignore")\n ae = ArimaEstimator(p_auto_regression_order=best_params[0],\n d_integration_level=best_params[1],\n q_moving_average=best_params[2],\n easy_mode=True)\n score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nscore\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "Testing with easy mode off", "%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings("ignore")\n ae = ArimaEstimator(p_auto_regression_order=best_params[0],\n d_integration_level=best_params[1],\n q_moving_average=best_params[2],\n easy_mode=False)\n score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nscore\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "Conclusion\nIf you train in easy mode, what you get in the end is a model that only cares about the previous value when making its predictions. This makes things much easier for everybody, but in reality we might not have that advantage.\nTrying", "args = np.argsort(filtered_arr[:, 1])\nargs\n\nfiltered_arr[args[:10], 0]\n\n%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings("ignore")\n ae = ArimaEstimator(p_auto_regression_order=4,\n d_integration_level=1,\n q_moving_average=3,\n easy_mode=False)\n print ae.fit(test_ins_vals, 
test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "All tests", "from arima.arima_testing import ArimaTesting\n\nbest_params, target_len, npz_full\n\n%%time\nkeys, scores, preds = ArimaTesting.full_testing(best_params=best_params, target_len=target_len,\n npz_full=npz_full)\n\n# render graphs here\n\nscore_arr = np.array(scores)\n\nnp.mean(score_arr[np.logical_not(score_arr != score_arr)])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
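`cartesian_coord` above comes from the project's own `mylibs.py_helper` and returns a NumPy array of all (p, d, q) combinations to cross-validate. The same grid can be sketched with the standard library alone; this is an illustrative stdlib equivalent (yielding tuples rather than an array), not the project's actual implementation:

```python
from itertools import product
from collections import OrderedDict

parameters = OrderedDict([
    ('p_auto_regression_order', range(6)),  # 0-5
    ('d_integration_level', range(3)),      # 0-2
    ('q_moving_average', range(6)),         # 0-5
])

# Every (p, d, q) combination to cross-validate: 6 * 3 * 6 = 108 cases,
# which matches the "#4h 4min 51s / 108 cases" timing note in the notebook.
grid = list(product(*parameters.values()))
print(len(grid))
```

At roughly 136 seconds per case, exhaustively walking this grid is why the cross-validation cell above takes hours.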
EvanBianco/Practical_Programming_for_Geoscientists
Part1a_Fundamentals_of_Programming.ipynb
apache-2.0
[ "Fundamentals of Programming\nEvan Bianco\nagilegeoscience, @EvanBianco\n\n\nVariables and Assignment\n\n\nNative data types\n\n\nOperators and Expressions\n\n\nData collections and data structures\n\n\nProcedures and control: Loops and Making choices\n\n\nGetting data, manipulating data\n\n\nDefining functions and calling functions \n\n\nWriting and running programs \n\n\nObjects and classes\n\n\nVariables and Assignment", "x = 7\ny = 10\n\nx, y\n\nx\n\nx.__repr__()", "Checking the type of a variable", "type(x)\n\n%whos\n\ndel y\n\n%whos ", "Native data types", "z = 1.4 + 2.3\n\nprint(z)\n\nc = 2 + 1.5j # same as writing: complex(2, 1.5) \nc\n\n5 / 3.0\n\n5 // 3", "Why are there 2 kinds of numbers?\nStrings str", "s1 = '#Nordegg:'\n\ns1.strip('#')\n\ns1.startswith('Nor')", "str indexing (how to count, part 1)\n\n\nExercise: return the e character in s\n\nTry help(s), s1?, s1??, s1.&lt;tab&gt;, s1.upper() , s1.strip(), s1.startswith(), s1.pop()\nCan we add two 'strings' together?", "s2, s3 = 'Limestone \\n', 'Shale'\n\ns2 + s3\n\nprint(s2 + s3)\n\nprint(s2 * 5)\n\nlithology = s + s2 + 'has minor ' + s3 + ' fragments'\nlithology\n\n'{0} {1} has minor {2} fragments'.format(s,s2,s3)", "String methods and string formatting\n\n\n\nExercise: Use a combination of string methods and text formatting to produce the following output:\n&gt; The Nordegg limestone has minor shale fragments \n(ensure sentence case and remove '#', ':', `'\\n')\n\n\nOperators and Expressions\n\n\nmathematical operations\n\n\ncomparison operations\n\n\nbitwise operations\n\n\naugmented assignment, copies, and pointers\n\n\nboolean expressions\n\n\nconversion functions\n\n\nmathematical operations\ncomparison operations\naugmented assignment, copies and pointers", "y += y / 125.0\ny", "boolean expressions", "type(ord('\\t'))", "conversion functions\nData collections and data structures\nlist, dict, tuples, sets\nlist\nLists in Python are one-dimensional, ordered containers whose elements may be 
any Python objects. Lists are mutable and have methods for adding and removing elements to and from themselves. The literal syntax for lists is to surround comma-separated values with square brackets ([]). The square brackets are a syntactic hint that lists are indexable.", "# [1,1] + [3,3] + [4, 4]\nlist(str(10))\n\nfib = [1, 1, 2, 3, 5, 8] + [13]\nfib.append(13)\n\ndel(fib)\n\nfib.extend([21.0, 34.0, 55.0])\nfib\n\nfib += [89.0, 144.0]\nfib\n\nfibm = np.array(fib[:-1])\nfibp = np.array(fib[1:])\nplt.plot(fibp/fibm)\nplt.title('Golden Ratio')\nfibp/fibm", "Indexing, slicing, striding\nIn addition to accessing a single element in a list or string, we can also slice or stride into data structures to access multiple elements at once.", "name = 'Cambrian (C)'\nname\n\nname = 'Cambrian (C)'", "Without using the Python interpreter, what is the expected output of the following commands?:\n\n\na) name[:7]\n\n\nb) name[:-4]\n\n\nc) name[3:7]\n\n\nd) name[::2]", "age_names = ['Cambrian (C)', 'Ordivician (O)', 'Silurian (S)', 'Devonian (D)', \n 'Mississipian (M)', 'Pennsylvanian (IP)', 'Permian (P)',\n 'Triassic (Tr)', 'Jurassic (J)', 'Cretaceous (C)', \n 'Tertiary (T)', 'Quaternary (Q)']", "Indexing practice\n\nExercise:\n\n\nreturn the string: \n&gt; Triassic (Tr) \n\n\nreturn just the word: \n&gt; Triassic\n\n\nreturn the abbreviation:\n&gt; (Tr) enclosed in parenthesis\n\n\nreturn just the abbreviation: \n&gt; Tr \n\n\n(bonus points if you can do (d) all in one line)\nNested list\nlists can contain anything*", "age_list = [['Cambrian (C)', [544,495]], ['Ordivician (O)', [495, 492] ], \n ['Silurian (S)', [442, 416]], ['Devonian (D)',[416, 354]], \n ['Mississipian (M)', [354, 324]], ['Pennsylvanian (IP)', [324, 295]], \n ['Permian (P)', [304, 248]], ['Triassic (Tr)', [248, 205]], \n ['Jurassic (J)', [205, 144]], ['Cretaceous (C)', [160, 65]], \n ['Tertiary (T)', [65, 1.8]], ['Quaternary (Q)']\n ]\n# But at some point it just gets impractical", "Exercise: what is the expected 
output of:\n\na) age_list[:2]\n\n\nb) age_list[6]\n\n\nc) what command would you type to return the age of the end of the Permian, 248?\n\n\nd) the start of the Cretaceous is wrong (it should be 144). Change it to the correct value\n\n\ne) We've lost the dates for the Quaternary Period [1.8 mya to present (0)]. Index into that entry, and append it.", "# your code here\n\nages = {'Cambrian': {"Abbreviation": "C", "Start": "544", "End": "495"},\n 'Ordivician': {"Abbreviation": "O", "Start": "495", "End": "492"}, \n 'Silurian': {"Abbreviation": "S", "Start": "442", "End": "416"}, \n 'Devonian': {"Abbreviation": "D", "Start": "416", "End": "354"},\n 'Mississipian': {"Abbreviation": "M", "Start": "354", "End": "324"},\n 'Pennsylvanian': {"Abbreviation": "IP", "Start": "324", "End": "295"}, \n 'Permian': {"Abbreviation": "P", "Start": "304", "End": "248"},\n 'Triassic': {"Abbreviation": "Tr", "Start": "248", "End": "205"},\n 'Jurassic': {"Abbreviation": "J", "Start": "205", "End": "144"}, \n 'Cretaceous': {"Abbreviation": "C", "Start": "144", "End": "65"}, \n 'Tertiary': {"Abbreviation": "T", "Start": "65", "End": "1.8"},\n 'Quaternary': {"Abbreviation": "Q", "Start": "1.8", "End": "0"}\n}\n", "dicts\nDictionaries are hands down the most important data structure in Python. Everything in Python is a dictionary. A dictionary, or dict, is a mutable, unordered collection of unique key / value pairs. \ntimescale = dict([(k1,v1),(k2,v2),(k3,v3)])\ntuples\nTuples are the immutable form of lists. They behave almost exactly the same as lists in every way except that you cannot change any of their values. There are no append() or extend() methods, and there are no in-place operators. \nThey also differ from lists in their syntax. They are so central to how Python works, that tuples are defined by commas. 
Oftentimes, tuples will be seen surrounded by parentheses. These parentheses only serve to group actions or make the code more readable, not to actually define tuples.", "a = 1,2,3,4 # a length-4 tuple\nb = (42,) # length-1 tuple defined by the comma\nc = (42) # not a tuple, just the number 42\nd = () # length-0 tuple- no commas means no elements", "You can concatenate tuples together in the same way as lists, but be careful about the order of operations. This is where parentheses come in handy,\n(1, 2) + (3, 4)", "(1,2)+(3,4)", "Note that even though tuples are immutable, they may have mutable elements. Suppose that we have a list embedded in a tuple. This list may be modified in-place even though the list may not be removed or replaced wholesale:", "x = 1.0, [2, 4], 16\nx[1].append(8)\nx", "Sets\nInstances of the set type are equivalent to mathematical sets. Like their math counterparts, literal sets in Python are defined by comma-separated values between curly braces ({}). Sets are unordered containers of unique values. Duplicated elements are ignored. 
Because they are unordered, sets are not sequences and cannot be indexed or sliced.", "# a literal set formed with elements of various types\n{1.0, 10, "one hundred", (1, 0, 0, 0)}\n\n# a literal set OF special values\n{True, False, None, "", 0.0, 0}\n\n# conversion from a list to a set\nset([2.0, 4, "eight", (16,), 4, 4, 2.0])", "Here's a good time to take a break\n\nVariables and Assignment\nNative data types\nOperators and Expressions\nData collections and data structures\n<font color='lightgrey'>Procedures and control: Loops and Making choices</font>\n<font color='lightgrey'>Getting data, manipulating data</font>\n<font color='lightgrey'>Defining functions and calling functions</font>\n<font color='lightgrey'>Writing and running programs</font>\n<font color='lightgrey'>Objects and classes</font>\n\n\nExercise manipulating and viewing lists:\n\na) create a new list that has the integer 1 for sand and 2 for shale.\nb) using plt.plot() create a plot of the porosity values\nc) make a depth vs porosity plot so depth is vertically downwards\nd) Use plt.scatter() to make a cross plot of gamma vs porosity\ne) Explore other keyword arguments for plt.plot() and plt.scatter to pretty things up\nbonus f) Use the keyword c in the call to scatter to distinguish between sand and shale", "layers = ['shale','shale','shale','sand','sand','sand','sand','shale','shale','shale']\ndepth = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\nporosity = [2, 3, 2, 14, 18, 17, 14, 2, 3, 3]\ngamma = [85, 90, 77, 23, 27, 31, 25, 110, 113, 108]", "Procedures and control: Loops and Making choices\nLoops\nDoing stuff many times\nthe <code><font color=\"green\">while</font></code> loop\nthe <code><font color=\"green\">for</font></code> loop", "layers = ['shale','shale','shale','sand','sand','sand','sand','shale','shale','shale']\n\n# for loop syntax\nfor val in gamma:\n print(val) ", "<font color=\"#0A5394\">*iteration, *iterable</font>\nList comprehension", "[val*2 for val in gamma]", "Making choices\nThe 
<code><font color=\"green\">if</font></code> statement", "layers = ['shale','shale','shale','sand','sand','sand','sand','shale','shale','shale']\n# the if statement:", "<font color=\"#0A5394\">*conditionals*</font>\nGetting data...\n... from text files\nYou can explicitly read from and write to files directly in your code. Python makes working with files pretty simple.\nThe first step to working with a text file is to obtain a 'file object' using open.", "file_for_reading = open('reading_file.txt', 'r') # 'r' means read-only\nfile_for_writing = open('writing_file.txt', 'w') # 'w' is for write - will destroy file if already exists\nfile_for_appending = open('appending_file.txt', 'a') # 'a' is for appending to the end of a file.\nfile_for_writing.close() # don't forget to close your files when you're done.\n\n## Open the file with read-only permissions\nfname = 'data/L30_tops.txt'\n\nwith open(fname, 'r') as f:\n header = f.readline() # a string containing the first line of the file\n body = f.readlines() # The variable \"body\" is a list containing all lines", "Every line you get this way ends in a newline character, \\n, so you'll often want to strip() it before doing anything with it.", "fname = 'data/L30_tops.txt'\nwith open(fname) as f:\n for line in f.readlines():\n if not line.startswith('#'):\n print(line.strip())", "Exercise: create a tops dictionary:\n\na) Modify the code snippet above to create a dict called tops, containing the formation name as the key and the depth as the value\n\nDefining and calling functions\nthe <code><font color=\"green\">def</font></code> statement", "def myfunc(args):\n \"\"\"\n Documentation string\n \"\"\"\n # statement\n # statement\n return # optional", "<font color=\"#0A5394\">*scope*</font>\n\nExercise: write a function called vshale that converts the list of gamma into a Vshale measurement. 
Use 20 API for 100% sand and use 150 API for 100% shale", "def vshale(gamma, sand_end, shale_end):\n # your code here\n return # your output", "Exercise: write a function called process_tops that takes a\nfilename as input and returns a dictionary of the tops", "topsfile = 'data/L30_tops.txt'\n\ndef process_tops(file):\n \"\"\"\n Takes a file as input and returns a dictionary of tops\n file : a filename path\n \"\"\"\n # Enter code here\n return my_tops\n\n# process_tops(topsfile)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n% matplotlib inline\n\ntopo = np.load('data/topography.npy')\nplt.imshow(topo)\nplt.show()", "Writing and running programs\nPut the previous function in a text file and give it the name, load_tops.py\nYour first module", "import load_tops\n\ntopsfile = 'data/L30_tops.txt'\ntops = load_tops.my_tops(topsfile)", "... from delimited files", "import csv\nwith open('data/periods.csv', 'rt') as f:\n reader = csv.DictReader(f, delimiter=',')\n for row in reader:\n print (row)", "You can write out delimited data using csv.writer:", "my_tops = {'GOC' : 1200.0 , 'OWC' : 1300.0, 'Top Reservoir' : 1100.0}\n\nwith open('comma_delimited_stock_prices.txt', 'w') as f:\n writer = csv.writer(f, delimiter=',')\n for name, depth in my_tops.items():\n writer.writerow([name, depth])", "... 
from the web\nUse View Source in your browser to figure out where the age range is on the page, and what it looks like.\nTry to find the same string here.", "url = \"http://en.wikipedia.org/wiki/Cretaceous\"\n\nimport requests\nr = requests.get(url)\nr.text[:2000]", "Using a regular expression:", "import re\n\ns = re.search(r'<i>(.+?million years ago)</i>', r.text)\ntext = s.group(1)", "Exercise: Make a function to get the start and end ages of any geologic period, taking the name of the period as an argument.", "def get_age(period):\n url = \"http://en.wikipedia.org/wiki/\" + period\n r = requests.get(url)\n start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+)&#160;million years ago</i>', r.text).groups()\n return float(start), float(end)\n\nperiod = \"Cretaceous\"\nget_age(period)\n\ndef duration(period):\n t0, t1 = get_age(period)\n duration = t0 - t1\n response = \"According to Wikipedia, the {0} period was {1:.2f} Ma long. \".format(period, duration)\n return response\n\nduration('Cretaceous')", "Using built-in functions\nImporting modules\nthe <code><font color=\"green\">import</font></code> statement", "import this", "The Python standard library\nBuilt-in functions\nBuilt-in Types\ndocs.python.org", "import datetime", "External Python packages\nThe Python Package Index, PyPI\n\nSciPy - a collection of often-used libraries\n\nUsing external libraries", "import numpy as np", "Exercise: \nUsing numpy's load text function np.loadtxt(...), load the data file into a variable called data", "data = np.loadtxt('data/my_first_well.las')", "Exercise: \na) write a function called make_curves that takes the data as input and returns a dictionary of the curves. 
Keys are the curve name, values are the columns \nb) write a function that converts the sonic log (us/ft) to velocity (m/s)", "# Acoustic impedance\nvp = well['DT']\nrhob = well['RHOB']\nacimp = vp * rhob ", "Exercise: compute a reflection coefficient series by completing the following for loop", "def rc(ip2, ip1):\n \"\"\"\n returns the normal incidence reflection coefficient\n between two layers with impedances Z2, Z1\n ip2 : Impedance of the bottom layer \n ip1 : Impedance of the upper layer\n \"\"\"\n return (ip2 - ip1) / (ip2 + ip1)\n\nrc_series = [] \nfor layer in range(len(acimp)-1):\n z2 = acimp[layer + 1]\n z1 = acimp[layer]\n coeff = rc(z2, z1)\n rc_series.append(coeff)", "Exercise: make two tracks of a well log; plot the impedance in one track, and the R.C series in the other", "plt.subplot(121)\nplt.plot(acimp, c='g', lw=4, alpha=0.5)\nplt.subplot(122)\nplt.plot(rc_series, c='g', lw=4, alpha=0.5)", "Getting started with bruges\nWriting and running programs\nObjects and Classes", "class Layers(object):\n \n def __init__(self, layers, label=None):\n # Just make sure we end up with an array\n self.layers = np.array(layers)\n self.label = label or \"My log\"\n self.length = self.layers.size # But storing len in an attribute is unexpected...\n \n def __len__(self): # ...better to do this.\n return len(self.layers)\n \n def rcs(self):\n uppers = self.layers[:-1]\n lowers = self.layers[1:]\n return (lowers-uppers) / (uppers+lowers)\n \n def plot(self, lw=0.5, color='#6699ff'):\n fig = plt.figure(figsize=(2,6))\n ax = fig.add_subplot(111)\n ax.barh(range(len(self.layers)), self.layers, color=color, lw=lw, align='edge', height=1.0, alpha=1.0, zorder=10)\n ax.grid(zorder=2)\n ax.set_ylabel('Layers')\n ax.set_title(self.label)\n ax.set_xlim([-0.5,1.0])\n ax.set_xlabel('Measurement (units)')\n ax.invert_yaxis() \n #ax.set_xticks(ax.get_xticks()[::2]) # take out every second tick\n ax.spines['right'].set_visible(False) # hide the spine on the right\n 
ax.yaxis.set_ticks_position('left') # Only show ticks on the left and bottom spines\n \n plt.show()\n\nvelocities = [0.23, 0.34, 0.45, 0.25, 0.23, 0.35]\n\nl = Layers(velocities, label='Well # 1')\n\nl.label\n\nl.plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
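The reflection-coefficient exercise near the end of the notebook can be worked through in plain Python. This is one possible solution sketch; the impedance values below are made up for illustration, so it is not the course's official answer:

```python
def rc(ip2, ip1):
    """Normal-incidence reflection coefficient between two layers:
    ip1 is the impedance of the upper layer, ip2 of the lower layer."""
    return (ip2 - ip1) / (ip2 + ip1)

# Made-up acoustic impedance values (velocity * density) for four layers
acimp = [6.0e6, 6.5e6, 9.0e6, 8.8e6]

# One coefficient per interface, so len(rc_series) == len(acimp) - 1
rc_series = []
for layer in range(len(acimp) - 1):
    rc_series.append(rc(acimp[layer + 1], acimp[layer]))

print(rc_series)
```

The sign of each coefficient tells you whether impedance increases (positive) or decreases (negative) across the interface, which is what the two-track plot in the notebook is meant to show.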
tjwei/HackNTU_Data_2017
Week03/03-Speed.ipynb
mit
[ "Computing speed\nAs before, read in the data", "import tqdm\nimport tarfile\nimport pandas\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport PIL\nimport gzip\nfrom urllib.request import urlopen\n%matplotlib inline\nmatplotlib.style.use('ggplot')\n#matplotlib.style.use('bmh')\n\n# progress bar\ntqdm.tqdm.pandas()\n\n# Filename formats\nfilename_format=\"M06A_{year:04d}{month:02d}{day:02d}.tar.gz\".format\nxz_filename_format=\"xz/M06A_{year:04d}{month:02d}{day:02d}.tar.xz\".format\ncsv_format = \"M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv\".format\n\n# Try opening the file we just downloaded\ndata_config ={\"year\":2016, \"month\":12, \"day\":18}\ntar = tarfile.open(filename_format(**data_config), 'r')\n\n# If you did not download it, you can try the xz file instead\n#data_dconfig ={\"year\":2016, \"month\":11, \"day\":18}\n#tar = tarfile.open(xz_filename_format(**data_config), 'r')\n\n# Set the field names\nM06A_fields = ['VehicleType',\n 'DetectionTime_O','GantryID_O',\n 'DetectionTime_D','GantryID_D ',\n 'TripLength', 'TripEnd', 'TripInformation']\n# Open the 10 o'clock data inside it\ncsv = tar.extractfile(csv_format(hour=10, **data_config))\n\n# Read in the data\ndata = pandas.read_csv(csv, names=M06A_fields)\n\n# Check for abnormal records\nprint(\"Number of abnormal records:\", data[data.TripEnd == 'N'].shape[0])\n\n# Drop abnormal records\ndata = data[data.TripEnd == 'Y']\n\n# Focus on TripInformation and VehicleType\ndata = data[['VehicleType', \"TripInformation\"]]\n\n# Look at the first five records\ndata.head(5)", "import datetime\n# Used to parse the timestamp format\ndef strptime(x):\n return datetime.datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\")\n\ndef parse_tripinfo(tripinfo):\n split1 = tripinfo.split(\"; \")\n split2 = (x.split('+') for x in split1)\n return [(strptime(t), node) for t,node in split2]\n# Try it out\ndata.head(10).TripInformation.apply(parse_tripinfo)\n\n# Add a new column\ndata['Trip'] = data.TripInformation.progress_apply(parse_tripinfo)", "Computing the average speed\nCompute the speed between two detector gantries, only for pairs on the same route", "trip = data['Trip'][0]\ntrip\n\nfor (t1,n1), (t2,n2) in zip(trip[:-1], trip[1:]):\n # Skip pairs that switch routes\n if n1[:3] != n2[:3] or n1[-1]!=n2[-1]:\n continue\n 
# Skip extra-route (ramp) gantries\n if n1[3]=='R' or n2[3]=='R': \n continue\n # Get the kilometer marker from the gantry name\n km1 = int(n1[3:-1])/10\n km2 = int(n2[3:-1])/10\n hr_delta = (t2-t1).total_seconds()/60/60\n speed = abs(km2-km1)/hr_delta\n print(speed)\n ", "Wrapping it into a function", "def compute_speed(trip):\n rtn = []\n for (t1,n1), (t2,n2) in zip(trip[:-1], trip[1:]):\n # Skip pairs that switch routes\n if n1[:3] != n2[:3] or n1[-1]!=n2[-1]:\n continue\n # Skip extra-route (ramp) gantries\n if n1[3]=='R' or n2[3]=='R': \n continue\n # Get the kilometer marker from the gantry name\n km1 = int(n1[3:-1])/10\n km2 = int(n2[3:-1])/10\n hr_delta = (t2-t1).total_seconds()/60/60\n speed = abs(km2-km1)/hr_delta\n rtn.append(speed)\n return np.array(rtn)", "Q\nRewrite compute_speed in a numpy (vectorized) style?", "data.head(10).Trip.apply(compute_speed)\n\ndata['Speed'] = data.Trip.progress_apply(compute_speed)\n\n# Keep only trips that have speed values\nvalid_idx = data.Speed.apply(len).astype('bool')\nvalid_data = data[valid_idx].copy()\ndel valid_data['TripInformation']\nvalid_data.head()", "This lets us compute the maximum speed within each trip", "valid_data['MaxSpeed']=valid_data.Speed.apply(max)\n\nvalid_data.sort_values('MaxSpeed', ascending=False)", "Q\nWhy is the speed of some trips so low?\nWhat happened? 
Use\npython\nvalid_data.loc[row_number].Trip\nto take a look\nFind the trips that exceeded the 110 km/h speed limit by more than 10 km/h", "valid_data.query(\"MaxSpeed > 120\").shape\n\n# Look at the statistics in a chart\nvalid_data.MaxSpeed.hist(bins=np.arange(0,200,10))\n\n# Speeds of the different vehicle types\nvalid_data.boxplot(by='VehicleType', showmeans=True, showfliers=False);", "Q\nSee http://matplotlib.org/api/pyplot_api.html and try plotting in different ways\nSee how you could apply these\n```python\n# Draw many small histograms\nvalid_data.hist(by='VehicleType', column='MaxSpeed', bins=np.arange(0,200,10));\n# Another way to draw many small histograms\nvalid_data.groupby('VehicleType').hist()\n# Overlay the histograms\nfor vt in [31,32,41,42,5]:\n valid_data[valid_data.VehicleType==vt].MaxSpeed.hist(bins=np.arange(0,200,10), alpha=0.5)\n# Unfilled (step) style, legend labels, transparency\nfor vt in [41,42,5]:\n valid_data[valid_data.VehicleType==vt].MaxSpeed.hist(bins=np.arange(0,200,10), \n label=str(vt), alpha=0.5, histtype='step', linewidth=2)\nplt.legend();\n# Speeds of the different vehicle types\ngp = valid_data.groupby('VehicleType')\nplt.gca().get_xaxis().set_visible(False)\ngp.mean().plot.bar(ax=plt.gca(), yerr=gp.std(), table=True);\n```", "vt_types = [31,32,41,42, 5]\n# Labels kept in Chinese to demo the CJK font below:\n# 小客車 passenger car, 小貨車 light truck, 大客車 bus, 大貨車 heavy truck, 聯結車 semi-trailer\nvt_labels = [\"小客車\", \"小貨車\", \"大客車\",\"大貨車\", \"聯結車\"]\nplt.hist([valid_data.MaxSpeed[valid_data.VehicleType==vt] for vt in vt_types], \n label=vt_labels, bins=np.arange(50,150,10), normed=1)\nfor t in plt.legend().texts:\n t.set_fontname('Droid Sans Fallback')\n# Use matplotlib.font_manager.fontManager.ttflist to see which fonts you have", "Finally, let's see how many speeding trips we caught", "num_total = data.shape[0]\nnum_valid = valid_data.shape[0]\nnum_overspeed = valid_data.query(\"MaxSpeed > 120\").shape[0]\nnum_total, num_valid, num_overspeed\n\nnum_overspeed / num_total\n\nnum_overspeed / num_valid", "Compare with the official national highway violation statistics", "stat_url = \"http://www.hpb.gov.tw/files/11-1000-138.php\"\n# The variable name 國道違規統計 means \"national highway violation statistics\"\n國道違規統計 = pandas.read_html(stat_url, attrs={\"bordercolor\":\"#cccccc\"}, header=1)[0]\n國道違規統計\n\n國道違規統計[國道違規統計[\"違規 項目\"]==\"超速\"]\n\n# Average per day\n國道違規統計[國道違規統計[\"違規 項目\"]==\"超速\"].applymap(lambda x:int(x)/365 if isinstance(x,int) else x)", "Computing statistics for a whole day\nSo far we only processed one hour; now let's process the whole day", "csvs = (tar.extractfile(csv_format(hour=hr, **data_config)) for hr in 
tqdm.trange(24))\n\ndata = pandas.concat([pandas.read_csv(csv, names=M06A_fields) for csv in csvs])\nprint(\"Data size\", data.shape)\n\n# Check for abnormal records\nprint(\"Number of abnormal records:\", data[data.TripEnd == 'N'].shape[0])\n\n# Drop abnormal records\ndata = data[data.TripEnd == 'Y']\n\n# Focus on TripInformation and VehicleType\ndata = data[['VehicleType', \"TripInformation\"]]\n\n# Too slow? See the speed-up of parse_tripinfo in 04-Speed-Limit.ipynb\ndata['Trip'] = data.TripInformation.progress_apply(parse_tripinfo)\ndata['Speed'] = data.Trip.progress_apply(compute_speed)\n\nvalid_idx = data.Speed.apply(len).astype('bool')\nvalid_data = data[valid_idx].copy()\ndel valid_data['TripInformation']\nvalid_data['MaxSpeed']=valid_data.Speed.apply(max)\n\nvalid_data.MaxSpeed.hist(bins=np.arange(0,200,10));\n\nvalid_data.sort_values(\"MaxSpeed\", ascending=False)\n\nnum_total = data.shape[0]\nnum_valid = valid_data.shape[0]\nnum_overspeed = valid_data.query(\"MaxSpeed > 120\").shape[0]\nnum_total, num_valid, num_overspeed\n\n國道違規統計[國道違規統計[\"違規 項目\"]==\"超速\"]", "Q\nCompute the statistics separately for each vehicle type, then plot them." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
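The speed arithmetic in `compute_speed` (kilometer marker parsed from the gantry name, elapsed hours from the timestamps) can be checked on a tiny synthetic trip. The gantry names below are made up but follow the notebook's parsing convention; this plain-Python variant avoids the pandas/numpy dependencies:

```python
from datetime import datetime

def compute_speed(trip):
    """Speeds (km/h) between consecutive gantries on the same route and direction."""
    speeds = []
    for (t1, n1), (t2, n2) in zip(trip[:-1], trip[1:]):
        if n1[:3] != n2[:3] or n1[-1] != n2[-1]:   # skip route changes
            continue
        if n1[3] == 'R' or n2[3] == 'R':           # skip extra-route gantries
            continue
        km1 = int(n1[3:-1]) / 10                   # kilometer marker from the name
        km2 = int(n2[3:-1]) / 10
        hr_delta = (t2 - t1).total_seconds() / 3600
        speeds.append(abs(km2 - km1) / hr_delta)
    return speeds

# Synthetic trip: 10 km between two gantries in 6 minutes, i.e. 100 km/h
trip = [(datetime(2016, 12, 18, 10, 0), '01F0050N'),
        (datetime(2016, 12, 18, 10, 6), '01F0150N')]
print(compute_speed(trip))
```

Worked checks like this make it much safer to answer the notebook's "rewrite in numpy style" question, since the vectorized version can be compared against a known-good reference.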
JrtPec/opengrid
notebooks/WIP/Influxdb.ipynb
apache-2.0
[ "Influxdb is a time series database written from scratch (in go) and independent of any other database infrastructure. \nUsing the basic influxdb python client, we create a database, a measurement and upload some time series data", "%%bash\npip install influxdb\n\nimport pandas as pd\nimport charts\n\nfrom opengrid.library import houseprint\nfrom influxdb import DataFrameClient\n\nindbclient = DataFrameClient(host='influxdb')\n\n\nindbclient.drop_database('opengrid')\nindbclient.create_database('opengrid')\n\nhp = houseprint.Houseprint()\nhp.sync_tmpos()", "Pump all tmpo data to influxdb", "for tpe in [#'electricity', \n 'water',\n 'gas']:\n df = hp.get_data(sensortype=tpe, diff=False, resample='raw')\n for col in df:\n print(\"Writing data for {}, sensor {}\".format(tpe, col))\n try:\n indbclient.write_points(dataframe=df[[col]].dropna(),\n measurement=tpe,\n database='opengrid')\n except:\n print(' Upload to influxdb failed')\n\ndf.info()", "Time the querying of data", "head_str = \"2016-01-01 00:00:00\"\nhead = pd.Timestamp(head_str)\ntpe = 'gas'", "Influxdb", "%%timeit\ndf = indbclient.query(\"SELECT * from {} where time > '{}'\".format(tpe, head_str), database='opengrid')[tpe]\n\ndf = indbclient.query(\"SELECT * from {} where time > '{}'\".format(tpe, head_str), database='opengrid')[tpe]\ndf.info()", "tmpo", "%%timeit\ndf = hp.get_data(sensortype=tpe, head=head, diff=False, resample='raw')\n\ndf = hp.get_data(sensortype=tpe, head=head, diff=False, resample='raw')\ndf.info()", "Conclusion\ntmpo seems MORE efficient for large queries!!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
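The `%%timeit` cells above compare the two backends interactively. The same comparison can be scripted with the stdlib `timeit` module, which makes the "tmpo seems MORE efficient" conclusion reproducible outside a notebook. The workloads below are stand-ins, since neither the influxdb client nor tmpo is assumed available here:

```python
import timeit

# Stand-in workloads: in the notebook these would be the influxdb
# SELECT query and the tmpo-based hp.get_data call.
def query_influx():
    return sum(range(10000))

def query_tmpo():
    return sum(range(1000))

t_influx = timeit.timeit(query_influx, number=50)
t_tmpo = timeit.timeit(query_tmpo, number=50)
print('tmpo faster' if t_tmpo < t_influx else 'influxdb faster')
```

Passing callables (rather than source strings) to `timeit.timeit` keeps the setup simple and lets the same functions be reused for correctness checks.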