| repo_name (string, 6–77 chars) | path (string, 8–215 chars) | license (string, 15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
simpleoier/2016FallSpeechProj
|
1. Analysis.ipynb
|
apache-2.0
|
[
"Feature Analysis\nThe standard features: LLR (low level discriptors)\nFile path: \\emotiondetection\\features_labels_lld\nload data\n\nclass Data. please see common.py\n\nget the training data",
"import numpy as np\nimport os\nfrom sklearn.manifold import TSNE\n\nfrom common import Data\n\nlld=Data('lld')\nlld.load_training_data()\nprint 'training feature shape: ', lld.feature.shape\nprint 'training label shape: ', lld.label.shape\n\n#lld.load_test_data()\n#print 'test feature shape: ',lld.feature_test.shape\n#print 'test label shape: ',lld.label_test.shape",
"a. histogram\nplot the histgram of one feature, to see what distribution the feature is.",
"import matplotlib.pyplot as plt\n%matplotlib inline \n\nfeature_table=[1,10,100,300]\nfor ind,fea in enumerate(feature_table):\n f= lld.feature[:,fea]\n \n plt.subplot(2,2,ind+1)\n plt.hist(f)\n #plt.title(\"Histogram of feature \"+str(ind))\nplt.axis('tight')",
"Different features have different ditributions.\nSome are subject to Gussain distribution.\nb. t-SNE\nuse TSNE to see the linear separability of the data.",
"model=TSNE(n_components=2,random_state=0) # reduct the dimention to 2 for visualization\nnp.set_printoptions(suppress=True)\nY=model.fit_transform(lld.feature,lld.label) # the reducted data\n\n\nplt.scatter(Y[:, 0], Y[:, 1],c=lld.label[:,0],cmap=plt.cm.Spectral)\nplt.title('training data')\nplt.axis('tight')\n \nprint Y.shape",
"the linear separability is so terrible : (\nc. analyse what classification methods are suit for out data theoretically\n\n\nTraining data:\n9959 examples and 384 features.\n5 classes.\n\n\nmost used classification methods\nSVM: good for 2 calsses. feature dimension increase ->computing resources large. X\nNN: $\\surd$\nDecision Trees:\nHMM:\n..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
enlighter/learnML
|
mini-projects/p0 - titanic survival exploration/unused notebook/Titanic_Survival_Exploration.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject 0: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.",
"import numpy as np\nimport pandas as pd\n\n# RMS Titanic data visualization code \nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())",
"From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code block cell to remove Survived as a feature of the dataset and store it in outcomes.",
"# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())",
"The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?",
"def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(predictions, outcomes[:5])",
"Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were told to make a prediction about any passenger aboard the RMS Titanic who we did not know anything about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers as a whole did not survive the ship sinking.\nThe function below will always predict that a passenger did not survive.",
"def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)",
"Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: Replace this text with the prediction accuracy you found above.\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.",
"survival_stats(data, outcomes, 'Sex')",
"Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.",
"def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)",
"Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: Replace this text with the prediction accuracy you found above.\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. Consider, for example, all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.",
"survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])",
"Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.",
"def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)",
"Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: Replace this text with the prediction accuracy you found above.\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to to examine various survival statistics.\nHint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: [\"Sex == 'male'\", \"Age < 18\"]",
"survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Age < 18\"])",
"After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.",
"def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n pass\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)",
"Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.",
"print accuracy_score(outcomes, predictions)",
"Answer: Replace this text with your answer to the question above.\nConclusion\nCongratulations on what you've accomplished here! You should now have an algorithm for predicting whether or not a person survived the Titanic disaster, based on their features. In fact, what you have done here is a manual implementation of a simple machine learning model, the decision tree. In a decision tree, we split the data into smaller groups, one feature at a time. Each of these splits will result in groups that are more homogeneous than the original group, so that our predictions become more accurate. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction into machine learning using a decision tree.\nA decision tree is just one of many algorithms that fall into the category of supervised learning. In this Nanodegree, you'll learn about supervised learning techniques first. In supervised learning, we concern ourselves with using features of data to predict or model things with objective outcome labels. That is, each of our datapoints has a true outcome value, whether that be a category label like survival in the Titanic dataset, or a continuous value like predicting the price of a house.\nQuestion 5\nCan you think of an example of where supervised learning can be applied?\nHint: Be sure to note the outcome variable to be predicted and at least two features that might be useful for making the predictions.\nAnswer: Replace this text with your answer to the question above.\n\nTip: If we want to share the results of our analysis with others, we aren't limited to giving them a copy of the iPython Notebook (.ipynb) file. We can also export the Notebook output in a form that can be opened even for those without Python installed. From the File menu in the upper left, go to the Download as submenu. You can then choose a different format that can be viewed more generally, such as HTML (.html) or\nPDF (.pdf). You may need additional packages or software to perform these exports."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
HaoMood/cs231n
|
assignment1/assignment1/features.ipynb
|
gpl-3.0
|
[
"Image features exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nWe have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.\nAll of your work for this exercise will be done in this notebook.",
"import random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"Load data\nSimilar to previous exercises, we will load CIFAR-10 data from disk.",
"from cs231n.features import color_histogram_hsv, hog_feature\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()",
"Extract Features\nFor each image we will compute a Histogram of Oriented\nGradients (HOG) as well as a color histogram using the hue channel in HSV\ncolor space. We form our final feature vector for each image by concatenating\nthe HOG and color histogram feature vectors.\nRoughly speaking, HOG should capture the texture of the image while ignoring\ncolor information, and the color histogram represents the color of the input\nimage while ignoring texture. As a result, we expect that using both together\nought to work better than using either alone. Verifying this assumption would\nbe a good thing to try for the bonus section.\nThe hog_feature and color_histogram_hsv functions both operate on a single\nimage and return a feature vector for that image. The extract_features\nfunction takes a set of images and a list of feature functions and evaluates\neach feature function on each image, storing the results in a matrix where\neach column is the concatenation of all feature vectors for a single image.",
"from cs231n.features import *\n\nnum_color_bins = 10 # Number of bins in the color histogram\nfeature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]\nX_train_feats = extract_features(X_train, feature_fns, verbose=True)\nX_val_feats = extract_features(X_val, feature_fns)\nX_test_feats = extract_features(X_test, feature_fns)\n\n# Preprocessing: Subtract the mean feature\nmean_feat = np.mean(X_train_feats, axis=0, keepdims=True)\nX_train_feats -= mean_feat\nX_val_feats -= mean_feat\nX_test_feats -= mean_feat\n\n# Preprocessing: Divide by standard deviation. This ensures that each feature\n# has roughly the same scale.\nstd_feat = np.std(X_train_feats, axis=0, keepdims=True)\nX_train_feats /= std_feat\nX_val_feats /= std_feat\nX_test_feats /= std_feat\n\n# Preprocessing: Add a bias dimension\nX_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])\nX_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])\nX_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])",
"Train SVM on features\nUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.",
"# Use the validation set to tune the learning rate and regularization strength\n\nfrom cs231n.classifiers.linear_classifier import LinearSVM\n\nlearning_rates = [1e-9, 1e-8, 1e-7]\nregularization_strengths = [1e5, 1e6, 1e7]\n\nresults = {}\nbest_val = -1\nbest_svm = None\n\npass\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained classifer in best_svm. You might also want to play #\n# with different numbers of bins in the color histogram. If you are careful #\n# you should be able to get accuracy of near 0.44 on the validation set. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n# Evaluate your trained SVM on the test set\ny_test_pred = best_svm.predict(X_test_feats)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint test_accuracy\n\n# An important way to gain intuition about how an algorithm works is to\n# visualize the mistakes that it makes. In this visualization, we show examples\n# of images that are misclassified by our current system. The first column\n# shows images that our system labeled as \"plane\" but whose true label is\n# something other than \"plane\".\n\nexamples_per_class = 8\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor cls, cls_name in enumerate(classes):\n idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]\n idxs = np.random.choice(idxs, examples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)\n plt.imshow(X_test[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls_name)\nplt.show()",
"Inline question 1:\nDescribe the misclassification results that you see. Do they make sense?\nNeural Network on image features\nEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. \nFor completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.",
"print X_train_feats.shape\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\ninput_dim = X_train_feats.shape[1]\nhidden_dim = 500\nnum_classes = 10\n\nnet = TwoLayerNet(input_dim, hidden_dim, num_classes)\nbest_net = None\n\n################################################################################\n# TODO: Train a two-layer neural network on image features. You may want to #\n# cross-validate various parameters as in previous sections. Store your best #\n# model in the best_net variable. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Run your neural net classifier on the test set. You should be able to\n# get more than 55% accuracy.\n\ntest_acc = (net.predict(X_test_feats) == y_test).mean()\nprint test_acc",
"Bonus: Design your own features!\nYou have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.\nFor bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.\nBonus: Do something extra!\nUse the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NYUDataBootcamp/Projects
|
UG_S16/Shu.ipynb
|
mit
|
[
"Visualizing Price Changes a la Reis (2006)\nRichard Shu\nThis project will graphically demonstrate how price adjustments occur in an economy where producers have an imperfect picture of the economy. This model is in many ways more realistic than traditional economic models of price-setting, wherein producers are perfectly cognizant of the state of the world at all stages and are able to optimize with respect to that state. But in the real world, gathering information about the economy can be difficult and costly. This model of price-setting, as described by Reis's paper \"Inattentive Producers,\" assumes that agents are entirely ignorant of the state of the world until they pay a fixed cost. At that point, the agent knows the state of the current period and all periods in the past. The important question then becomes when it is right for an agent to update, and what the prices will look like as a result.\nMost of my findings will be adapted from Laura Veldkamp's textbook \"Information Choice in Macroeconomics and Finance,\" where she presents a stripped-down version of the Reis model.",
"import numpy.random as rand\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport sys\n\n%matplotlib inline \n\nprint('Python version: ', sys.version)\nprint('Pandas version: ', pd.__version__)\n\nplt.style.use('seaborn-dark-palette')",
"The model depicts an economy with multiple periods and an infinite number of producers, all of whom are trying to set prices as close as possible to some target price. This target price is given by a linear combination of demand and average price:\np(t) = (1-r) * m(t) + r *p_avg(t)\nDetermining this target price is thus contingent on both demand and average price. Demand is modeled as a random walk, with each period receiving a normally-distributed natural-log demand 'shock' epsilon. The code below models this demand structure.",
"variance = .25 #sets variance of epsilon\npds = 100 #sets number of periods\nr = .7 #sets relative weight of average price on target price\n\nshocklist = []\ncumulist = []\ndum_variable = 0\nfor numba in range(pds):\n e = rand.normal(loc = 0.0, scale = variance, size = None) \n #each log epsilon shock is a normally distributed random variable centered at 0\n shocklist.append(e)\nfor elem in shocklist:\n dum_variable += elem\n #This accumulates every epsilon leading up to the period \n #makes up the random walk\n cumulist.append(dum_variable)\ndemand = pd.DataFrame({'Period shocks': shocklist, 'Cumulative demand': cumulist})\n\ndemand.head()",
"What we have now is a pandas dataframe containing the randomly-generated demand shocks and the demand, which is the result of the accumulation of those shocks in each period. Every time the cell is run, a new demand series will be generated.\nThe path of demand can be seen below:",
"fig, ax = plt.subplots()\nax.set_title('Demand')\nax.set_xlabel('t (Period)')\nax.set_ylabel('Demand (Natural Log, Normalized)')\ndemand['Cumulative demand'].plot(kind = 'line',\n ax = ax,\n legend = False,)",
"The next step is to model the target price, which can be quite nasty. \nThe crux of the Reis model is the idea of inattention. Producers have a very specific, and very limited information set -- unlike traditional economics which presumes that agents are perfectly cognizant of the state of the world. Instead, Reis incorporates an information constraint known as \"inattention\". Every period, each agent has the option to pay a fixed cost c to learn everything about demand from that period backwards. This cost disincentivizes firms from just updating every period -- sometimes they'll just determine that it's not worth it. \nBut when they do pay, they don't learn anything else until they pay again. This means that at any given time, only a certain fraction of the producers are totally up-to-date on the demand. \nIn addition, no one can directly observe average prices of the period. However, as Reis works out, with common knowledge producers can derive the average price from observing the updating pattern of the other producers.\nOne of the infinitely many equilibria that can arise from this game is that no one updates until a fixed point T. At T, everyone updates. Every period after, some fraction 1/T of the producers update their information. The critical value from this behavior is L(t, tau), which is the fraction of producers who have updated their information from tau to t. In this case, it can be given as the following:",
"T = 10 #can adjust this\n\ndef L(t, tau):\n if t >= T: \n if tau <= T: lam = 1\n else: lam = (1-(1-T**-1)**(t-tau)) #derived from each period's update proportion being independent\n else: lam = 0\n return lam ",
"With the pattern of updating in the macroeconomy thus defined, we can evaluate the value of average price, which at each period is a weighted accumulation of the epsilon shocks. Each period's shock is weighed both by how important coordination is in the target price (value of r) and how many other people are up to date on demand (L), the idea being that one period's shock may not matter that much if, during that period, not many other people are aware of it.",
"def eps(t): #finds the float value of the period t shock\n return float(demand['Period shocks'][demand.index == t])\n\ndef p_avg(t): #average price\n return sum([((L(t,s)*(1-r))/(1-(r*L(t,s))))*eps(s) for s in range(t)]) \n#formula for average price was outlined in Veldkamp (2014)\ndemand['Average price'] = [p_avg(t) for t in range(pds)]",
"We can thus define target price as above.",
"def p_target(t):\n m = float(demand['Cumulative demand'][demand.index == t])\n tar = (1-r)*m + r*p_avg(t)\n #p_target as defined in the original problem; linear combination of demand and p_avg\n return tar\n\ndemand['Target price'] = [p_target(t) for t in range(pds)]\n#Now the table for demand has four columns:\n#Cumulative demand, period shocks, average prices, target prices\n\ndemand.head(2 * T)",
"But tables can only tell us so much. Graphs are much more revealing.",
"fig1, ax1 = plt.subplots()\nax1.set_xlabel('Period')\nax1.set_ylabel('Demand/Price (Natural Log, Normalized)')\n\ndemand[[\"Cumulative demand\", \"Target price\"]].plot(kind = 'line', ax = ax1)",
"The shocks to cumulative demand are reflected in the variations of target price, but much less extremely, as would be expected from the formula for target price which weighs cumulative demand less than fully. \nImportant to note is that around period T, target price jumps sharply to meet cumulative demand. This is because average price does not budge from the baseline of 0 until people start updating their information: up until T, L(t, tau) == 0, so each epsilon shock is weighted 0. This can be more easily observed in the graph below.\nAverage price is a bit more interesting. Not only does it make up the target price, but it also chases the target price. This results in an average price that shows similar patterns to target price, but more smoothly and with a noticeable lag.",
"fig2, ax2 = plt.subplots()\nax2.set_xlabel('Period')\nax2.set_ylabel('Price (Natural Log)')\nax2.set_ylim(ax1.get_ylim())\n\ndemand[[\"Target price\", \"Average price\"]].plot(kind = 'line', ax = ax2)",
"It is this lag that ultimately makes up the most important result of Reis. It demonstrates that, in this economy, firms do not respond elastically to changes in demand -- a lot of the time it's better to just go with the flow. And, in the real world, a lot of the time this is the case. The average supermarket, for example, isn't looking so much at factors in the macroeconomy when setting prices, it's looking more at the prices of their competitor down the street. Those macro factors do play in, but in a muted way, and only after other people in the economy have started to get wind of them.\nThe second part of Reis's analysis is the behavior of an individual firm in the economy -- specifically, which points they decide to update their prices. Since updating takes a certain cost, he reasons, it must be that for a firm to update prices, they must be able to figure out that their expected loss from remaining ignorant must exceed the immediate cost of updating, which is fixed.",
"c = .5",
"The formula for expected loss -- that is, the possible disutility that one could get from not updating between tau_hat and t, is given by the following:",
"def Loss(t, tau_hat):\n sm = 0\n for k in range(tau_hat + 1, t):\n sm += (1-r)/(1-(r*L(t, k)))\n return sm * variance\nloss_list = [Loss(t, T) for t in range(pds)]",
"Whenever the agent updates, tau_hat resets to that period, and Loss for that period becomes 0 minus the cost c. \nHowever, every time the agent updates, that tau_hat is increased and the loss function is reset. So for a rationally updating agent, Loss will never exceed cost c. \nWith the loss function and cost defined, we can determine when the agent decides to update -- that is, whenever the Loss function creeps up to c. Keep in mind, however, that every time the agent updates, that tau_hat changes, and the Loss function resets.",
"loss_list1 = loss_list.copy()\nupdate_pds = []\nfor x in range(pds):\n if x == T:\n tau_hat = x #predetermined point at which everyone updates\n update_pds.append(x)\n elif loss_list1[x] > c:\n tau_hat = x #x becomes the new \"last period updated\"\n update_pds.append(x)\n loss_list1[x] = 0.0\n for s in range(x+1, pds): #all future periods are affected by this new tau_hat\n loss_list1[s] = Loss(s, tau_hat)\nupdate_check = [] \nfor x in range(pds):\n if x in update_pds: update_check.append(\"Yes\")\n else: update_check.append(\"No\") \n \ndemand['Update?'] = update_check\n\ndemand.tail(10)",
"With these updates cemented, we can determine what the agent's price is at that given time -- they just set it to the target price, which they can derive due to their observation of demand. Once they update, however, they don't change their target price at all. After all, without information, their expectation of epsilon, the change in demand each period, is 0.",
"agent_price = []\ncurr_price = 0\nfor t in range(pds):\n if update_check[t] == \"Yes\": \n agent_price.append(p_target(t))\n curr_price = p_target(t)\n else: agent_price.append(curr_price)\ndemand[\"Agent price\"] = agent_price\n\ndemand.tail(10)\n\nfig4, ax4 = plt.subplots()\nax4.set_xlabel('Period')\nax4.set_ylabel('Price (Natural Log)')\nax4.set_ylim(ax1.get_ylim())\n\nfig3, ax3 = plt.subplots()\nax3.set_xlabel('Period')\nax3.set_ylabel('Price (Natural Log)')\nax3.set_ylim(ax1.get_ylim())\n\ndemand[[\"Agent price\", \"Target price\"]].plot(kind = 'line', ax = ax4)\ndemand[[\"Agent price\", \"Average price\"]].plot(kind = 'line', ax = ax3)",
"You can hit \"Run All\" to see a new demand walk and the new results that emerge.\nConclusion\nReis provides a compelling model because it accurately reflects many of the market conditions that are most relevant to producers on a small level. Inattention is an attractive model because, in many ways, it reflects an organization's approach to learning. That is, a human may be passively observing the world around them all the time, but for an organization like a firm to collect information requires sophisticated, thorough and costly actions. It is an active decision for a firm to conduct research. \nThat said, I did somewhat gloss over the way that lambda -- that is, the updating pattern of the macroeconomy -- was decided. I specified one equilibrium updating pattern, but in truth there are infinitely many, and as far as my understanding goes there is not one that necessarily emerges from the updating pattern of individuals. In other words, the simplified model does not bridge the macro-micro gap fully. It treats the macroeconomy as somewhat exogenous when determining how the micro agent behaves. For example, it may well be that, in a more realistic model, there would be some producers who don't update in a rational way and whose irrational updating schedule prompts the rest of the economy to chase."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_channel_epochs_image.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Visualize channel over epochs as an image\nThis will produce what is sometimes called an event related\npotential / field (ERP/ERF) image.\n2 images are produced. One with a good channel and one with a channel\nthat does not see any evoked field.\nIt is also demonstrated how to reorder the epochs using a 1d spectral\nembedding as described in:\nGraph-based variability estimation in single-trial event-related neural\nresponses A. Gramfort, R. Keriven, M. Clerc, 2010,\nBiomedical Engineering, IEEE Trans. on, vol. 57 (5), 1051-1061\nhttps://hal.inria.fr/inria-00497023",
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))",
"Show event related fields images",
"# and order with spectral reordering\n# If you don't have scikit-learn installed set order_func to None\nfrom sklearn.cluster.spectral import spectral_embedding # noqa\nfrom sklearn.metrics.pairwise import rbf_kernel # noqa\n\n\ndef order_func(times, data):\n this_data = data[:, (times > 0.0) & (times < 0.350)]\n this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]\n return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),\n n_components=1, random_state=0).ravel())\n\ngood_pick = 97 # channel with a clear evoked response\nbad_pick = 98 # channel with no evoked response\n\n# We'll also plot a sample time onset for each trial\nplt_times = np.linspace(0, .2, len(epochs))\n\nplt.close('all')\nmne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=0.5, vmin=-100,\n vmax=250, colorbar=True, order=order_func,\n overlay_times=plt_times, show=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ryan-leung/PHYS4650_Python_Tutorial
|
notebooks/Jan2018/pre_tutorial/CH6 Advanced Plot.ipynb
|
bsd-3-clause
|
[
"Files\nYou can actually get the data directly from web using urllib package:",
"import urllib\nurllib.urlretrieve(\"http://www.einstein-online.info/spotlights/binding_energy-data_file/index.txt/at_download/file\", \"index.txt\")",
"Read in the file by open() function",
"for row in open('index.txt').readlines():\n print row",
"Filtered the '#' comment line by .startswith('#') functions which returns true if starts with that character",
"for row in open('index.txt').readlines():\n if not row.startswith('#'):\n print row",
"Remove the newline character at the end of each line by .strip()",
"for row in open('index.txt').readlines():\n if not row.startswith('#'):\n print row.strip()",
"Split the line into list by .split()!",
"for row in open('index.txt').readlines():\n if not row.startswith('#'):\n print row.strip().split()",
"Further filter the items, by selecting the items we want and format the item into correct variables!",
"data = []\nfor row in open('index.txt').readlines():\n if not row.startswith('#'):\n t = row.strip().split()\n t1 = (int(t[0]),float(t[5]),t[2])\n data.append(t1)\nprint data",
"A magic function zip() and * return the row tuples into a column tuple, and assign different variables to the list of tuples",
"plotdata = zip(*data)\nx,y,label = plotdata",
"import numpy and matplolib for plotting, use the scatter function",
"import numpy as np\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(16,8))\nplt.scatter(x,y, alpha=0.5)\nplt.show()",
"Add the title and x,y axis",
"plt.figure(figsize=(16,8))\nplt.scatter(x,y, alpha=0.5)\nplt.ylabel(\"Binding energy per nucleon (MeV)\", fontsize=16);\nplt.xlabel(\"Number of nucleon(s)\", fontsize=16)\nplt.title(\"Binding energy curve\", fontsize=16)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OSHI7/Learning1
|
MatplotLib Pynotebooks/AnatomyOfMatplotlib-Part5-Artists.ipynb
|
mit
|
[
"# Let printing work the same in Python 2 and 3\nfrom __future__ import print_function\n# Turning on inline plots -- just for use in ipython notebooks.\nimport matplotlib\nmatplotlib.use('nbagg')\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Artists\nAnything that can be displayed in a Figure is an Artist. There are two main classes of Artists: primatives and containers. Below is a sample of these primitives.",
"\"\"\"\nShow examples of matplotlib artists\nhttp://matplotlib.org/api/artist_api.html\n\nSeveral examples of standard matplotlib graphics primitives (artists)\nare drawn using matplotlib API. Full list of artists and the\ndocumentation is available at\nhttp://matplotlib.org/api/artist_api.html\n\nCopyright (c) 2010, Bartosz Telenczuk\n\nLicense: This work is licensed under the BSD. A copy should be\nincluded with this source code, and is also available at\nhttp://www.opensource.org/licenses/bsd-license.php\n\"\"\"\n\nfrom matplotlib.collections import PatchCollection\nimport matplotlib.path as mpath\nimport matplotlib.patches as mpatches\nimport matplotlib.lines as mlines\n\nfig, ax = plt.subplots(1, 1, figsize=(7,7))\n\n# create 3x3 grid to plot the artists\npos = np.mgrid[0.2:0.8:3j, 0.2:0.8:3j].reshape(2, -1)\npatches = []\n\n# add a circle\nart = mpatches.Circle(pos[:, 0], 0.1, ec=\"none\")\npatches.append(art)\nplt.text(pos[0, 0], pos[1, 0] - 0.15, \"Circle\", ha=\"center\", size=14)\n\n# add a rectangle\nart = mpatches.Rectangle(pos[:, 1] - [0.025, 0.05], 0.05, 0.1, ec=\"none\")\npatches.append(art)\nplt.text(pos[0, 1], pos[1, 1] - 0.15, \"Rectangle\", ha=\"center\", size=14)\n\n# add a wedge\nwedge = mpatches.Wedge(pos[:, 2], 0.1, 30, 270, ec=\"none\")\npatches.append(wedge)\nplt.text(pos[0, 2], pos[1, 2] - 0.15, \"Wedge\", ha=\"center\", size=14)\n\n# add a Polygon\npolygon = mpatches.RegularPolygon(pos[:, 3], 5, 0.1)\npatches.append(polygon)\nplt.text(pos[0, 3], pos[1, 3] - 0.15, \"Polygon\", ha=\"center\", size=14)\n\n#add an ellipse\nellipse = mpatches.Ellipse(pos[:, 4], 0.2, 0.1)\npatches.append(ellipse)\nplt.text(pos[0, 4], pos[1, 4] - 0.15, \"Ellipse\", ha=\"center\", size=14)\n\n#add an arrow\narrow = mpatches.Arrow(pos[0, 5] - 0.05, pos[1, 5] - 0.05, 0.1, 0.1, width=0.1)\npatches.append(arrow)\nplt.text(pos[0, 5], pos[1, 5] - 0.15, \"Arrow\", ha=\"center\", size=14)\n\n# add a path patch\nPath = mpath.Path\nverts = np.array([\n (0.158, -0.257),\n (0.035, -0.11),\n (-0.175, 0.20),\n (0.0375, 0.20),\n (0.085, 0.115),\n (0.22, 0.32),\n (0.3, 0.005),\n (0.20, -0.05),\n (0.158, -0.257),\n ])\nverts = verts - verts.mean(0)\ncodes = [Path.MOVETO,\n Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.LINETO,\n Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.CLOSEPOLY]\n\npath = mpath.Path(verts / 2.5 + pos[:, 6], codes)\npatch = mpatches.PathPatch(path)\npatches.append(patch)\nplt.text(pos[0, 6], pos[1, 6] - 0.15, \"PathPatch\", ha=\"center\", size=14)\n\n# add a fancy box\nfancybox = mpatches.FancyBboxPatch(\n pos[:, 7] - [0.025, 0.05], 0.05, 0.1,\n boxstyle=mpatches.BoxStyle(\"Round\", pad=0.02))\npatches.append(fancybox)\nplt.text(pos[0, 7], pos[1, 7] - 0.15, \"FancyBoxPatch\", ha=\"center\", size=14)\n\n# add a line\nx,y = np.array([[-0.06, 0.0, 0.1], [0.05,-0.05, 0.05]])\nline = mlines.Line2D(x+pos[0, 8], y+pos[1, 8], lw=5.)\nplt.text(pos[0, 8], pos[1, 8] - 0.15, \"Line2D\", ha=\"center\", size=14)\n\ncollection = PatchCollection(patches)\nax.add_collection(collection)\nax.add_line(line)\nax.set_axis_off()\n\nplt.show()",
"Containers are objects like Figure and Axes. Containers are given primitives to draw. The plotting functions we discussed back in Parts 1 & 2 are convenience functions that generate these primitives and places them into the appropriate containers. In fact, most of those functions will return artist objects (or a list of artist objects) as well as store them into the appropriate axes container.\nAs discussed in Part 3, there is a wide range of properties that can be defined for your plots. These properties are processed and passed down to the primitives, for your convenience. Ultimately, you can override anything you want just by directly setting a property to the object itself.",
"fig, ax = plt.subplots(1, 1)\nlines = plt.plot([1, 2, 3, 4], [1, 2, 3, 4], 'b', [1, 2, 3, 4], [4, 3, 2, 1], 'r')\nlines[0].set(linewidth=5)\nlines[1].set(linewidth=10, alpha=0.7)\nplt.show()",
"To see what properties are set for an artist, use getp()",
"fig = plt.figure()\nprint(plt.getp(fig.patch))",
"Collections\nIn addition to the Figure and Axes containers, there is another special type of container called a Collection. A Collection usually contains a list of primitives of the same kind that should all be treated similiarly. For example, a CircleCollection would have a list of Circle objects all with the same color, size, and edge width. Individual values for artists in the collection can also be set.",
"from matplotlib.collections import LineCollection\nfig, ax = plt.subplots(1, 1)\nlc = LineCollection([[(4, 10), (16, 10)],\n [(2, 2), (10, 15), (6, 7)],\n [(14, 3), (1, 1), (3, 5)]])\nlc.set_color('r')\nlc.set_linewidth(5)\nax.add_collection(lc)\nax.set_xlim(0, 18)\nax.set_ylim(0, 18)\nplt.show()\n\n# Now show how to set individual properties in a collection\nfig, ax = plt.subplots(1, 1)\nlc = LineCollection([[(4, 10), (16, 10)],\n [(2, 2), (10, 15), (6, 7)],\n [(14, 3), (1, 1), (3, 5)]])\nlc.set_color(['r', 'blue', (0.2, 0.9, 0.3)])\nlc.set_linewidth([4, 3, 6])\nax.add_collection(lc)\nax.set_xlim(0, 18)\nax.set_ylim(0, 18)\nplt.show()",
"There are other kinds of collections that are not just simply a list of primitives, but are Artists in their own right. These special kinds of collections take advantage of various optimizations that can be assumed when rendering similar or identical things. You actually do use these collections all the time whether you realize it or not. Markers are (indirectly) implemented this way (so, whenever you do plot() or scatter(), for example).",
"from matplotlib.collections import RegularPolyCollection\n\nfig, ax = plt.subplots(1, 1)\noffsets = np.random.rand(20, 2)\ncollection = RegularPolyCollection(\n numsides=5, # a pentagon\n sizes=(150,),\n offsets=offsets,\n transOffset=ax.transData,\n )\nax.add_collection(collection)\nplt.show()",
"Exercise 5.1\nGive yourselves 4 gold stars!\nHint: StarPolygonCollection",
"%load exercises/5.1-goldstar.py\n\nfrom matplotlib.collections import StarPolygonCollection\n\nfig, ax = plt.subplots(1, 1)\n\ncollection = StarPolygonCollection(5,\n offsets=[(0.5, 0.5)],\n transOffset=ax.transData)\nax.add_collection(collection)\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
idealabasu/code_pynamics
|
python/pynamics_examples/triple_pendulum.ipynb
|
mit
|
[
"title: Triple Pendulum Example\ntype: submodule",
"%matplotlib inline",
"Try running with this variable set to true and to false and see the difference in the resulting equations of motion",
"use_constraints = False",
"Import all the necessary modules",
"# -*- coding: utf-8 -*-\n\"\"\"\nWritten by Daniel M. Aukes\nEmail: danaukes<at>gmail.com\nPlease see LICENSE for full license.\n\"\"\"\n\nimport pynamics\nfrom pynamics.frame import Frame\nfrom pynamics.variable_types import Differentiable,Constant\nfrom pynamics.system import System\nfrom pynamics.body import Body\nfrom pynamics.dyadic import Dyadic\nfrom pynamics.output import Output,PointsOutput\nfrom pynamics.particle import Particle\nfrom pynamics.constraint import AccelerationConstraint\nimport pynamics.integration\nimport numpy\nimport sympy\nimport matplotlib.pyplot as plt\nplt.ion()\nfrom math import pi",
"The next two lines create a new system object and set that system as the global system within the module so that other variables can use and find it.",
"system = System()\npynamics.set_system(__name__,system)",
"Parameterization\nConstants\nDeclare constants and seed them with their default value. This can be changed at integration time but is often a nice shortcut when you don't want the value to change but you want it to be represented symbolically in calculations",
"lA = Constant(1,'lA',system)\nlB = Constant(1,'lB',system)\nlC = Constant(1,'lC',system)\n\nmA = Constant(1,'mA',system)\nmB = Constant(1,'mB',system)\nmC = Constant(1,'mC',system)\n\ng = Constant(9.81,'g',system)\nb = Constant(1e1,'b',system)\nk = Constant(1e1,'k',system)\n\npreload1 = Constant(0*pi/180,'preload1',system)\npreload2 = Constant(0*pi/180,'preload2',system)\npreload3 = Constant(0*pi/180,'preload3',system)\n\nIxx_A = Constant(1,'Ixx_A',system)\nIyy_A = Constant(1,'Iyy_A',system)\nIzz_A = Constant(1,'Izz_A',system)\nIxx_B = Constant(1,'Ixx_B',system)\nIyy_B = Constant(1,'Iyy_B',system)\nIzz_B = Constant(1,'Izz_B',system)\nIxx_C = Constant(1,'Ixx_C',system)\nIyy_C = Constant(1,'Iyy_C',system)\nIzz_C = Constant(1,'Izz_C',system)\n\ntorque = Constant(0,'torque',system)\nfreq = Constant(3e0,'freq',system)",
"Differentiable State Variables\nDefine your differentiable state variables that you will use to model the state of the system. In this case $qA$, $qB$, and $qC$ are the rotation angles of a three-link mechanism",
"qA,qA_d,qA_dd = Differentiable('qA',system)\nqB,qB_d,qB_dd = Differentiable('qB',system)\nqC,qC_d,qC_dd = Differentiable('qC',system)",
"Initial Values\nDefine a set of initial values for the position and velocity of each of your state variables. It is necessary to define a known. This code create a dictionary of initial values.",
"initialvalues = {}\ninitialvalues[qA]=0*pi/180\ninitialvalues[qA_d]=0*pi/180\ninitialvalues[qB]=0*pi/180\ninitialvalues[qB_d]=0*pi/180\ninitialvalues[qC]=0*pi/180\ninitialvalues[qC_d]=0*pi/180",
"These two lines of code order the initial values in a list in such a way that the integrator can use it in the same order that it expects the variables to be supplied",
"statevariables = system.get_state_variables()\nini = [initialvalues[item] for item in statevariables]",
"Kinematics\nFrames\nDefine the reference frames of the system",
"N = Frame('N',system)\nA = Frame('A',system)\nB = Frame('B',system)\nC = Frame('C',system)",
"Newtonian Frame\nIt is important to define the Newtonian reference frame as a reference frame that is not accelerating, otherwise the dynamic equations will not be correct",
"system.set_newtonian(N)",
"Rotate each successive frame by amount q<new> from the last. This approach can produce more complex equations but is representationally simple (Minimal Representation)",
"A.rotate_fixed_axis(N,[0,0,1],qA,system)\nB.rotate_fixed_axis(A,[0,0,1],qB,system)\nC.rotate_fixed_axis(B,[0,0,1],qC,system)",
"Vectors\nDefine the vectors that describe the kinematics of a series of connected lengths\n\npNA - This is a vector with position at the origin.\npAB - This vector is length $l_A$ away from the origin along the A.x unit vector\npBC - This vector is length $l_B$ away from the pAB along the B.x unit vector \npCtip - This vector is length $l_C$ away from the pBC along the C.x unit vector \n\nDefine my rigid body kinematics",
"pNA=0*N.x\npAB=pNA+lA*A.x\npBC = pAB + lB*B.x\npCtip = pBC + lC*C.x",
"Centers of Mass\nIt is important to define the centers of mass of each link. In this case, the center of mass of link A, B, and C is halfway along the length of each",
"pAcm=pNA+lA/2*A.x\npBcm=pAB+lB/2*B.x\npCcm=pBC+lC/2*C.x",
"Calculating Velocity\nThe angular velocity between frames, and the time derivatives of vectors are extremely useful in calculating the equations of motion and for determining many of the forces that need to be applied to your system (damping, drag, etc). Thus, it is useful, once kinematics have been defined, to take or find the derivatives of some of those vectors for calculating linear or angular velocity vectors\nAngular Velocity\nThe following three lines of code computes and returns the angular velocity between frames N and A (${}^N\\omega^A$), A and B (${}^A\\omega^B$), and B and C (${}^B\\omega^C$). In other cases, if the derivative expression is complex or long, you can supply pynamics with a given angular velocity between frames to speed up computation time.",
"wNA = N.get_w_to(A)\nwAB = A.get_w_to(B)\nwBC = B.get_w_to(C)",
"Vector derivatives\nThe time derivatives of vectors may also be \nvCtip = pCtip.time_derivative(N,system)\nDefine Inertias and Bodies\nThe next several lines compute the inertia dyadics of each body and define a rigid body on each frame. In the case of frame C, we represent the mass as a particle located at point pCcm.",
"IA = Dyadic.build(A,Ixx_A,Iyy_A,Izz_A)\nIB = Dyadic.build(B,Ixx_B,Iyy_B,Izz_B)\nIC = Dyadic.build(C,Ixx_C,Iyy_C,Izz_C)\n\nBodyA = Body('BodyA',A,pAcm,mA,IA,system)\nBodyB = Body('BodyB',B,pBcm,mB,IB,system)\nBodyC = Body('BodyC',C,pCcm,mC,IC,system)\n#BodyC = Particle(pCcm,mC,'ParticleC',system)",
"Forces and Torques\nForces and torques are added to the system with the generic addforce method. The first parameter supplied is a vector describing the force applied at a point or the torque applied along a given rotational axis. The second parameter is the vector describing the linear speed (for an applied force) or the angular velocity(for an applied torque)",
"system.addforce(torque*sympy.sin(freq*2*sympy.pi*system.t)*A.z,wNA)",
"Damper",
"system.addforce(-b*wNA,wNA)\nsystem.addforce(-b*wAB,wAB)\nsystem.addforce(-b*wBC,wBC)",
"Spring Forces\nSpring forces are a special case because the energy stored in springs is conservative and should be considered when calculating the system's potential energy. To do this, use the add_spring_force command. In this method, the first value is the linear spring constant. The second value is the \"stretch\" vector, indicating the amount of deflection from the neutral point of the spring. The final parameter is, as above, the linear or angluar velocity vector (depending on whether your spring is a linear or torsional spring)\nIn this case, the torques applied to each joint are dependent upon whether qA, qB, and qC are absolute or relative rotations, as defined above.",
"system.add_spring_force1(k,(qA-preload1)*N.z,wNA) \nsystem.add_spring_force1(k,(qB-preload2)*A.z,wAB)\nsystem.add_spring_force1(k,(qC-preload3)*B.z,wBC)",
"Gravity\nAgain, like springs, the force of gravity is conservative and should be applied to all bodies. To globally apply the force of gravity to all particles and bodies, you can use the special addforcegravity method, by supplying the acceleration due to gravity as a vector. This will get applied to all bodies defined in your system.",
"system.addforcegravity(-g*N.y)",
"Constraints\nConstraints may be defined that prevent the motion of certain elements. Try turning on the constraints flag at the top of the script to see what happens.",
"if use_constraints:\n eq = []\n eq.append(pCtip)\n eq_d=[item.time_derivative() for item in eq]\n eq_dd=[item.time_derivative() for item in eq_d]\n eq_dd_scalar = []\n eq_dd_scalar.append(eq_dd[0].dot(N.y))\n constraint = AccelerationConstraint(eq_dd_scalar)\n system.add_constraint(constraint)",
"F=ma\nThis is where the symbolic expressions for F and ma are calculated. This must be done after all parts of the system have been defined. The getdynamics function uses Kane's method to derive the equations of motion.",
"f,ma = system.getdynamics()\n\nf\n\nma",
"Solve for Acceleration\nThe next line of code solves the system of equations F=ma plus any constraint equations that have been added above. It returns one or two variables. func1 is the function that computes the velocity and acceleration given a certain state, and lambda1(optional) supplies the function that computes the constraint forces as a function of the resulting states\nThere are a few ways of solveing for a. The below function inverts the mass matrix numerically every time step. This can be slower because the matrix solution has to be solved for, but is sometimes more tractable than solving the highly nonlinear symbolic expressions that can be generated from the previous step. The other options would be to use state_space_pre_invert, which pre-inverts the equations symbolically before generating a numerical function, or state_space_post_invert2, which adds Baumgarte's method for intermittent constraints.",
"func1,lambda1 = system.state_space_post_invert(f,ma,return_lambda = True)",
"Integration Tolerance\nSpecify the precision of the integration",
"tol = 1e-5",
"Time\nDefine variables for time that can be used throughout the script. These get used to create the t array, a list of every time value that is solved for during integration",
"tinitial = 0\ntfinal = 10\nfps = 30\ntstep = 1/fps\nt = numpy.r_[tinitial:tfinal:tstep]",
"Integrate\nThe next line of code integrates the function calculated",
"states=pynamics.integration.integrate(func1,ini,t,rtol=tol,atol=tol, args=({'constants':system.constant_values},))",
"Outputs\nThe next section simply calculates and plots a variety of data from the previous simulation\nStates",
"plt.figure()\nartists = plt.plot(t,states[:,:3])\nplt.legend(artists,['qA','qB','qC'])",
"Energy",
"KE = system.get_KE()\nPE = system.getPEGravity(pNA) - system.getPESprings()\nenergy_output = Output([KE-PE],system)\nenergy_output.calc(states,t)\nenergy_output.plot_time()",
"Constraint Forces\nThis line of code computes the constraint forces once the system's states have been solved for.",
"if use_constraints:\n lambda2 = numpy.array([lambda1(item1,item2,system.constant_values) for item1,item2 in zip(t,states)])\n plt.figure()\n plt.plot(t, lambda2)",
"Motion",
"points = [pNA,pAB,pBC,pCtip]\npoints_output = PointsOutput(points,system)\ny = points_output.calc(states,t)\npoints_output.plot_time(20)",
"Motion Animation\nin normal Python the next lines of code produce an animation using matplotlib",
"points_output.animate(fps = fps,movie_name = 'triple_pendulum.mp4',lw=2,marker='o',color=(1,0,0,1),linestyle='-')",
"To plot the animation in jupyter you need a couple extra lines of code...",
"from matplotlib import animation, rc\nfrom IPython.display import HTML\nHTML(points_output.anim.to_html5_video())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SaintlyVi/DLR_DB
|
DLR Social Surveys Explore.ipynb
|
mit
|
[
"Domestic Load Research Programme Social Survey Exploration\nThis notebook requires access to a data directory with DLR survey data saved as feather objects. The data files must be saved in /data/tables/ .",
"import processing.procore as pcore\nimport features.socios as s\n\ntbls = pcore.loadTables()\nprint(\"Stored Data Tables\\n\")\nfor k in sorted(list(tbls.keys())):\n print(k)",
"List of Questionaires",
"tbls['questionaires'][tbls['questionaires'].QuestionaireID.isin([3, 4, 6, 7, 1000000, 1000001, 1000002])]",
"Search Questions",
"searchterm = ['earn per month', 'watersource', 'GeyserNumber', 'GeyserBroken', 'roof', 'wall', 'main switch', 'floor area']\nquestionaire_id = 3\ns.searchQuestions(searchterm, questionaire_id)",
"Search Answers",
"searchterm = ['earn per month', 'watersource', 'GeyserNumber', 'GeyserBroken', 'roof', 'wall', 'main switch', 'floor area']\nquestionaire_id = 3\nanswers = s.searchAnswers(searchterm, questionaire_id)\nprint(answers[1])\nanswers[0].head()",
"List of Site Locations and Corresponding RecorderIDs by Year",
"s.recorderLocations(year = 2011)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cyucheng/skimr
|
jupyter/4_LDA_analysis.ipynb
|
bsd-3-clause
|
[
"FOR LDA ANALYSIS OF MEDIUM.COM CORPUS\nA potentially useful feature could be to compare the topic distribution of each sentence in an article with the topic distribution of the article itself. The question is: could sentences that are highlighted contain words that are more or less associated with the topic of the article than sentences that are not highlighted?\nThis type of analysis requires topic analysis, such as Latent Dirichlet Allocation (LDA). Here, use LDA (from the gensim library) to generate topics for the corpus of articles scraped from Medium.com and calculate a topic vector for each article. Then, when calculating features for the sentences in the dataset, I will be able to apply the LDA model to generate a topic vector for each sentence and calculate a cosine similarity score between the topic vector of the sentence and the article it belongs to.",
"import matplotlib.pyplot as plt\nimport csv\nfrom textblob import TextBlob, Word\nimport pandas as pd\nimport sklearn\nimport pickle\nimport numpy as np\nimport scipy\nimport nltk.data\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.metrics import classification_report, f1_score, accuracy_score, confusion_matrix\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.tree import DecisionTreeClassifier \nfrom sklearn.model_selection import learning_curve, GridSearchCV, StratifiedKFold, cross_val_score, train_test_split \nsent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')\nfrom nltk.tokenize import RegexpTokenizer\nword_tokenizer = RegexpTokenizer('\\s+', gaps=True)\nfrom patsy import dmatrices\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import metrics\nfrom sklearn.preprocessing import OneHotEncoder\nfrom nltk.stem.porter import PorterStemmer\n\nfrom stop_words import get_stop_words\nstop_en = get_stop_words('en')\np_stemmer = PorterStemmer()\nen_words = set(nltk.corpus.words.words())\n\n\nfrom gensim import corpora, models\nimport gensim\n\nimport timeit\nimport re\nimport string\nfrom string import whitespace, punctuation\n\nfrom nltk.corpus import stopwords\nstopw_en = stopwords.words('english')\nprint(stopw_en)\nprint(stop_en)\nprint(len(stopw_en))\nprint(len(stop_en))\nall_stopw = set(stopw_en) | set(stop_en)\nprint(len(all_stopw))\n\n\nset_tr = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/set_tr_2','rb'))\n# set_tr includes all unique highlight/text combos, plus text with highlights removed\n\n# print(set_tr)\n# print(set_tr['textwohighlight'][0])",
"Initial text processing",
"all_texts_processed = []\n\nn = 0\nfor text in set_tr['text']:\n # combine sentences\n txt = ' '.join(text)\n # remove punctuation\n translator = str.maketrans('', '', string.punctuation)\n txt2 = re.sub(u'\\u2014','',txt)\n txt3 = txt2.translate(translator)\n # split text into words\n tokens = word_tokenizer.tokenize(txt3.lower())\n # remove stop words\n nostop_tokens = [i for i in tokens if not i in all_stopw]\n # stem words\n stemmed = [p_stemmer.stem(i) for i in nostop_tokens]\n # append to processed texts\n all_texts_processed.append( stemmed )\n if n == 0:\n# print(txt)\n# print(tokens)\n# print(nostop_tokens)\n print(stemmed)\n n += 1\n# if n == 5:\n# break\n\n# #### testing removal of punctuation\n\n# test = ' '.join(set_tr['text'][0])\n# # print(test.lower())\n# # tkns = word_tokenizer.tokenize(test.lower().translate(dict.fromkeys(string.punctuation)))\n# # print(tkns)\n# # test2 = ''.join(e for e in test if e.isalnum())\n# # print(test2)\n# # test3 = test.translate(None, string.punctuation)\n# # print(test3)\n# # tkns = nltk.word_tokenize(test.lower().translate(dict.fromkeys(string.punctuation)))\n# # print(tkns)\n# translator = str.maketrans('', '', string.punctuation)\n\n# test2 = re.sub(u'\\u2014','',test)\n\n# print(test2.translate(translator))",
"Save all_texts_processed",
"flda_processedtexts = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_processedtexts','wb')\npickle.dump(all_texts_processed, flda_processedtexts)\n\n\n# Make document-term matrix\ndictionary = corpora.Dictionary(all_texts_processed)\n# Convert to bag-of-words\ncorpus = [dictionary.doc2bow(text) for text in all_texts_processed]\nprint(corpus[0])\n\n# save all_texts_processed\nflda_dictionary = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_dictionary','wb')\nflda_corpus = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_corpus','wb')\npickle.dump(dictionary, flda_dictionary)\npickle.dump(corpus, flda_corpus)\n\n\n# Run LDA\n# choose 10 topics, 1 pass for initial try and time it\n\ntic = timeit.default_timer()\nldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=10, id2word = dictionary, passes=1)\ntoc = timeit.default_timer()\nprint(str(toc - tic) + ' seconds elapsed')\n# current: with new dictionary/corpus excluding more stopwords\n\n# # Run LDA\n# # choose 20 topics, 1 pass to test how much time increases\n\n# tic = timeit.default_timer()\n# ldamodel2 = gensim.models.ldamodel.LdaModel(corpus, num_topics=20, id2word = dictionary, passes=1)\n# toc = timeit.default_timer()\n# print(str(toc - tic) + ' seconds elapsed')\n# # current: with old dictionary/corpus without excluding nltk stopwords\n\n# # Run LDA\n# # choose 100 topics, 1 pass to test how much time increases\n\n# tic = timeit.default_timer()\n# ldamodel3 = gensim.models.ldamodel.LdaModel(corpus, num_topics=100, id2word = dictionary, passes=1)\n# toc = timeit.default_timer()\n# print(str(toc - tic) + ' seconds elapsed')\n# # current: with old dictionary/corpus without excluding nltk stopwords\n\n# flda_10topic1pass = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_10topic1pass','wb')\n# pickle.dump(ldamodel, flda_10topic1pass)\n# # current: with new dictionary/corpus excluding more stopwords\n\n# flda_20topic1pass = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_20topic1pass','wb')\n# pickle.dump(ldamodel2, flda_20topic1pass)\n# flda_100topic1pass = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_100topic1pass','wb')\n# pickle.dump(ldamodel3, flda_100topic1pass)\n# # current: with old dictionary/corpus without excluding nltk stopwords\n\nprint(ldamodel.print_topics(num_topics=3, num_words=3))\n\n# # Run LDA\n# # choose 100 topics, 20 passes\n\n# tic = timeit.default_timer()\n# ldamodel4 = gensim.models.ldamodel.LdaModel(corpus, num_topics=100, id2word = dictionary, passes=20)\n# toc = timeit.default_timer()\n# print(str(toc - tic) + ' seconds elapsed')\n# # current: with old dictionary/corpus without excluding nltk stopwords\n\nflda_100topic20pass = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_100topic20pass','wb')\npickle.dump(ldamodel4, flda_100topic20pass)\n\n\n# Compare topic outputs of LDA models 1-4\nprint(ldamodel.print_topics( num_topics=10, num_words=10))\n# print(ldamodel2.print_topics(num_topics=10, num_words=10))\n# print(ldamodel3.print_topics(num_topics=10, num_words=10))\n# print(ldamodel4.print_topics(num_topics=10, num_words=10))\n",
"Combine nltk and stop_words lists of stopwords -- moved to top",
"# from nltk.corpus import stopwords\n# stopw_en = stopwords.words('english')\n# print(stopw_en)\n# print(stop_en)\n# print(len(stopw_en))\n# print(len(stop_en))\n# all_stopw = set(stopw_en) | set(stop_en)\n# print(len(all_stopw))\n\n\n# Run LDA\n# choose 10 topics, 20 passes after removing more stopwords\n\ntic = timeit.default_timer()\nldamodel5 = gensim.models.ldamodel.LdaModel(corpus, num_topics=10, id2word = dictionary, passes=20)\ntoc = timeit.default_timer()\nprint(str(toc - tic) + ' seconds elapsed')\n\n\nprint(ldamodel.print_topics( num_topics=10, num_words=5))",
"Generate a \"common word\" list to ignore in LDA\nSome words appear in most topics above - these should be treated as stopwords and ignored. To do this, create a list of all words appearing in more than 60% of files (to ignore).",
"flatten = lambda all_texts_processed: [item for sublist in all_texts_processed for item in sublist]\n# all_texts_combined = ' '.join(all_texts_processed)\nall_texts_flattened = flatten(all_texts_processed)\nall_texts_flattened[1000000]\nprint(len(all_texts_flattened))\n\nflatten_uniq = set(all_texts_flattened)\nprint(len(flatten_uniq))\n\nprint(len(all_texts_processed))\n\ncommonwords = []\nwordlist = []\ni = 0\ntic = timeit.default_timer()\nfor word in flatten_uniq:\n n = 0\n for text in all_texts_processed:\n if word in text:\n# print('yes!')\n n += 1\n frac = float(n / len(all_texts_processed))\n if frac >= 0.6:\n commonwords.append(word)\n elif frac < 0.6:\n wordlist.append(word)\n i += 1\n# print(word)\n# print(frac)\n# print(n)\n# if i == 20:\n# break\n# print(word)\n# print(frac)\n# if i >= 50:\n# break\n\ntoc = timeit.default_timer()\nprint(str(toc - tic) + ' seconds elapsed')\n\n\n\n\nprint(len(wordlist))\nprint(len(commonwords))\nprint(commonwords)\ncommonwords_2 = [i.strip('”“’‘') for i in commonwords]\nprint(commonwords_2)\n\n# all_stopw2 = set(all_stopw_stem) | set(commonwords)\n# print(len(all_stopw2))\n\n# all_stopw_stem = [p_stemmer.stem(i) for i in all_stopw]\n# print(all_stopw)\n# print(all_stopw_stem)\n\n# all_stopw2 = set(all_stopw_stem) | set(commonwords)\n# print(len(all_stopw2))\n\n\nfwordlist = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/wordlist','wb')\nfcommonwords = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/commonwords','wb')\nfcommonwords2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/commonwords2','wb')\npickle.dump(wordlist, fwordlist)\npickle.dump(commonwords, fcommonwords)\npickle.dump(commonwords_2, fcommonwords2)\n\ntmp = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/wordlist','rb'))\nprint(commonwords_2)\n\n# print(tmp)\n\n# REDO LDA with common words removed\n\nall_texts_processed_new = []\n\n\ntic = timeit.default_timer()\n\nn = 0\nfor text in set_tr['text']:\n txt = ' '.join(text)\n # remove punctuation\n translator = str.maketrans('', '', string.punctuation)\n txt2 = re.sub(u'\\u2014','',txt) # remove em dashes\n txt3 = re.sub(r'\\d+', '', txt2) # remove digits\n txt4 = txt3.translate(translator) # remove punctuation\n # split text into words\n tokens = word_tokenizer.tokenize(txt4.lower())\n # strip single and double quotes from ends of words\n tokens_strip = [i.strip('”“’‘') for i in tokens]\n # keep only english words\n tokens_en = [i for i in tokens_strip if i in en_words]\n # remove nltk/stop_word stop words\n nostop_tokens = [i for i in tokens_en if not i in all_stopw]\n # strip single and double quotes from ends of words\n nostop_strip = [i.strip('”“’‘') for i in nostop_tokens]\n # stem words\n stemmed = [p_stemmer.stem(i) for i in nostop_strip]\n # strip single and double quotes from ends of words\n stemmed_strip = [i.strip('”“’‘') for i in stemmed]\n # stem words\n stemmed2 = [p_stemmer.stem(i) for i in stemmed_strip]\n # strip single and double quotes from ends of words\n stemmed2_strip = [i.strip('”“’‘') for i in stemmed2]\n # remove common words post-stemming\n stemmed_nocommon = [i for i in stemmed2_strip if not i in commonwords_2]\n # append to processed texts\n all_texts_processed_new.append( stemmed_nocommon )\n if n == 0:\n# print(txt)\n# print(tokens)\n# print(nostop_tokens)\n print(stemmed_nocommon)\n n += 1\n# if n == 5:\n# break\n\ntoc = timeit.default_timer()\nprint(str(toc - tic) + ' seconds elapsed')\n\n\n\n# #### TRY only using nouns? 
# -- probably not, since some of the main words in e.g. topic 0 are 'design' and 'use'",
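"The document-frequency loop in the cell above scales with the number of unique words times the number of documents; an equivalent but much faster computation could use a collections.Counter over per-document word sets. This is a sketch, not part of the original pipeline; commonwords_fast and wordlist_fast are hypothetical names.",
"from collections import Counter\n\n# Count, for each unique (stemmed) token, how many documents it appears in, in one pass\ndoc_freq = Counter()\nfor text in all_texts_processed:\n    doc_freq.update(set(text))\n\nn_docs = float(len(all_texts_processed))\ncommonwords_fast = [w for w, c in doc_freq.items() if c / n_docs >= 0.6]\nwordlist_fast = [w for w, c in doc_freq.items() if c / n_docs < 0.6]\nprint(len(commonwords_fast), len(wordlist_fast))",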
"Save all_texts_processed",
"# flda_processedtexts_new = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_processedtexts_new','wb')\n# pickle.dump(all_texts_processed_new, flda_processedtexts_new)\n# # above: without filtering for english words\n\nflda_processedtexts_new2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_processedtexts_new2','wb')\npickle.dump(all_texts_processed_new, flda_processedtexts_new2)\n# above: with filtering for english words\n\n",
"Make document-term matrix",
"# dictionary_new = corpora.Dictionary(all_texts_processed_new)\n# # Convert to bag-of-words\n# corpus_new = [dictionary_new.doc2bow(text) for text in all_texts_processed_new]\n# print(corpus_new[0])\n# # above: without filtering for english words\n\n\n# Make document-term matrix\ndictionary_new2 = corpora.Dictionary(all_texts_processed_new)\n# Convert to bag-of-words\ncorpus_new2 = [dictionary_new2.doc2bow(text) for text in all_texts_processed_new]\nprint(corpus_new2[0])\n# above: with filtering for english words\n",
"Save new dictionary and corpus",
"# flda_dictionary_new = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_dictionary_new','wb')\n# flda_corpus_new = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_corpus_new','wb')\n# pickle.dump(dictionary_new, flda_dictionary_new)\n# pickle.dump(corpus_new, flda_corpus_new)\n# # above: without filtering for english words\n\nflda_dictionary_new2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_dictionary_new2','wb')\nflda_corpus_new2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_corpus_new2','wb')\npickle.dump(dictionary_new2, flda_dictionary_new2)\npickle.dump(corpus_new2, flda_corpus_new2)\n# above: with filtering for english words\n",
"Run LDA",
"# # choose 10 topics, 20 passes\n# tic = timeit.default_timer()\n# ldamodel_new = gensim.models.ldamodel.LdaModel(corpus_new, num_topics=10, id2word = dictionary_new, passes=20)\n# toc = timeit.default_timer()\n# print(str(toc - tic) + ' seconds elapsed')\n# # current: with new dictionary/corpus excluding more stopwords and common words\n# # # above: without filtering for english words\n\n\n# choose 10 topics, 20 passes\ntic = timeit.default_timer()\nldamodel_new = gensim.models.ldamodel.LdaModel(corpus_new2, num_topics=10, id2word = dictionary_new2, passes=20)\ntoc = timeit.default_timer()\nprint(str(toc - tic) + ' seconds elapsed')\n# current: with new dictionary/corpus excluding more stopwords and common words\n# above: with filtering for english words\n\n\n# Save LDA model\n# # flda_10topic20pass_new = open('/Users/clarencecheng/Dropbox/~Insight/skimr/lda_10topic20pass_new','wb')\n# # pickle.dump(ldamodel_new, flda_10topic20pass_new)\n\n# flda_10topic20pass_new2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/lda_10topic20pass_new2','wb')\n# pickle.dump(ldamodel_new, flda_10topic20pass_new2)\n# # # above: without filtering for english words\n\n\nflda_10topic20pass_new2b = open('/Users/clarencecheng/Dropbox/~Insight/skimr/lda_10topic20pass_new2b','wb')\npickle.dump(ldamodel_new, flda_10topic20pass_new2b)\n# above: with filtering for english words\n",
"Inspect topic output of new LDA model",
"print(ldamodel_new.print_topics( num_topics=10, num_words=5))\n",
"Define a function to convert topic vector to numeric vector",
"\ndef lda_to_vec(lda_input):\n num_topics = 10\n vec = [0]*num_topics\n for i in lda_input:\n col = i[0]\n val = i[1]\n vec[col] = val\n return vec",
"Calculate document vectors",
"\n\nall_lda_vecs = []\n\nn = 0\nfor i in corpus_new2:\n doc_lda = ldamodel_new[i]\n vec_lda = lda_to_vec(doc_lda)\n all_lda_vecs.append(vec_lda)\n n += 1\n if n <= 20:\n# print(doc_lda)\n print(vec_lda)\n\n\n\nprint(len(all_lda_vecs))\nprint(sum(all_lda_vecs[1000]))\n\n# Save all_lda_vecs\nfall_lda_vecs = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/all_lda_vecs','wb')\npickle.dump(all_lda_vecs, fall_lda_vecs)\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
paolorivas/homeworkfoundations
|
homeworkdata/Homework_3_Paolo_Rivas_Legua.ipynb
|
mit
|
[
"Homework assignment #3\nThese problem sets focus on using the Beautiful Soup library to scrape web pages.\nProblem Set #1: Basic scraping\nI've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.",
"from bs4 import BeautifulSoup\nfrom urllib.request import urlopen\nhtml_str = urlopen(\"http://static.decontextualize.com/widgets2016.html\").read()\ndocument = BeautifulSoup(html_str, \"html.parser\")",
"Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.",
"h3_tags = document.find_all('h3')\nprint(type(h3_tags))\nprint([tag.string for tag in h3_tags])\nlen([tag.string for tag in h3_tags])",
"Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the \"Widget Catalog\" header.",
"telephone_checkup = document.find('a', attrs={'class': 'tel'})\n[tag.string for tag in telephone_checkup]",
"In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):\nSkinner Widget\nWidget For Furtiveness\nWidget For Strawman\nJittery Widget\nSilver Widget\nDivided Widget\nManicurist Widget\nInfinite Widget\nYellow-Tipped Widget\nUnshakable Widget\nSelf-Knowledge Widget\nWidget For Cinema",
"all_widget = document.find_all('td', attrs={'class': 'wname'})\n#USING FOR LOOP\nfor widget in all_widget:\n widget2 = widget.string\n print(widget2)\nprint(\"##########\")\n\n#Using LIST comprehention\nraw_widget = [tag.string for tag in all_widget]\nraw_widget\n",
"Problem set #2: Widget dictionaries\nFor this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:\n[{'partno': 'C1-9476',\n 'price': '$2.70',\n 'quantity': u'512',\n 'wname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': '$9.36',\n 'quantity': '967',\n 'wname': u'Widget For Furtiveness'},\n ...several items omitted...\n {'partno': '5B-941/F',\n 'price': '$13.26',\n 'quantity': '919',\n 'wname': 'Widget For Cinema'}]\nAnd this expression:\nwidgets[5]['partno']\n\n... should evaluate to:\nLH-74/O",
"\nwidgets = []\n\n# your code here\nsearch_table = document.find_all('tr', attrs={'class': 'winfo'})\nfor new_key in search_table: \n diccionaries = {}\n partno_tag = new_key.find('td', attrs={'class': 'partno'})\n price_tag = new_key.find('td', attrs={'class': 'price'})\n quantity_tag = new_key.find('td', attrs={'class': 'quantity'})\n widget_tag = new_key.find('td', attrs={'class': 'wname'}) \n diccionaries['partno'] = partno_tag.string\n diccionaries['price'] = price_tag.string\n diccionaries['quantity'] = quantity_tag.string \n diccionaries['widget'] = widget_tag.string\n widgets.append(diccionaries) \nwidgets\n# end your code\n\n\n#test \nwidgets[5]['partno']",
"In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:\n[{'partno': 'C1-9476',\n 'price': 2.7,\n 'quantity': 512,\n 'widgetname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': 9.36,\n 'quantity': 967,\n 'widgetname': 'Widget For Furtiveness'},\n ... some items omitted ...\n {'partno': '5B-941/F',\n 'price': 13.26,\n 'quantity': 919,\n 'widgetname': 'Widget For Cinema'}]\n\n(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)",
"widgets = []\n\n# your code here\nsearch_table = document.find_all('tr', attrs={'class': 'winfo'})\nfor new_key in search_table: \n diccionaries = {}\n partno_tag = new_key.find('td', attrs={'class': 'partno'})\n price_tag = new_key.find('td', attrs={'class': 'price'})\n quantity_tag = new_key.find('td', attrs={'class': 'quantity'})\n widget_tag = new_key.find('td', attrs={'class': 'wname'}) \n diccionaries['partno'] = partno_tag.string\n diccionaries['price'] = float(price_tag.string[1:])\n diccionaries['quantity'] = int(quantity_tag.string) \n diccionaries['widget'] = widget_tag.string\n widgets.append(diccionaries) \nwidgets\n#widgets\n# end your code\n",
"Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.\nExpected output: 7928",
"new_list = []\nfor items in widgets:\n new_list.append(items['quantity'])\nsum(new_list)\n\n",
"In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.\nExpected output:\nWidget For Furtiveness\nJittery Widget\nSilver Widget\nInfinite Widget\nWidget For Cinema",
"for widget in widgets:\n if widget['price'] > 9.30:\n print(widget['widget'])",
"Problem set #3: Sibling rivalries\nIn the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:\nOften, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):",
"example_html = \"\"\"\n<h2>Camembert</h2>\n<p>A soft cheese made in the Camembert region of France.</p>\n\n<h2>Cheddar</h2>\n<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>\n\"\"\"",
"If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:",
"example_doc = BeautifulSoup(example_html, \"html.parser\")\ncheese_dict = {}\nfor h2_tag in example_doc.find_all('h2'):\n cheese_name = h2_tag.string\n cheese_desc_tag = h2_tag.find_next_sibling('p')\n cheese_dict[cheese_name] = cheese_desc_tag.string\n\ncheese_dict",
"With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header \"Hallowed Widgets.\"\nExpected output:\nMZ-556/B\nQV-730\nT1-9731\n5B-941/F",
"for h3_tag in document.find_all('h3'):\n if \"Hallowed widgets\" in h3_tag:\n table = h3_tag.find_next_sibling('table', {'class': 'widgetlist'})\n partno = table.find_all('td', {'class': 'partno'}) \n for x in partno:\n print(x.string)",
"Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!\nIn the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are \"categories\" of widgets (e.g., the contents of the <h3> tags on the page: \"Forensic Widgets\", \"Mood widgets\", \"Hallowed Widgets\") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:\n{'Forensic Widgets': 3,\n 'Hallowed widgets': 4,\n 'Mood widgets': 2,\n 'Wondrous widgets': 3}",
"widget_count = {}\ndoc = document.find_all('h3')\nfor h3_tag in doc:\n widget_name = h3_tag.string\n table = h3_tag.find_next_sibling('table', {'class': 'widgetlist'})\n partno = table.find_all('td', {'class': 'partno'})\n count = len(partno)\n widget_count[widget_name] = count\nwidgets\n",
"Congratulations! You're done."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Snyder005/StatisticalMethods
|
examples/Cepheids/PeriodMagnitudeRelation.ipynb
|
gpl-2.0
|
[
"A Period - Magnitude Relation in Cepheid Stars\n\n\nCepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).\n\n\nA lot of monitoring data - repeated imaging and subsequent \"photometry\" of the star - can provide a measurement of the absolute magnitude (if we know the distance to it's host galaxy) and the period of the oscillation.\n\n\nLet's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).\n\n\nOur goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.",
"from __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15.0, 8.0) ",
"A Look at Each Host Galaxy's Cepheids\nLet's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.",
"# First, we need to know what's in the data file.\n\n!head -15 R11ceph.dat\n\nclass Cepheids(object):\n \n def __init__(self,filename):\n # Read in the data and store it in this master array:\n self.data = np.loadtxt(filename)\n self.hosts = self.data[:,1].astype('int').astype('str')\n # We'll need the plotting setup to be the same each time we make a plot:\n colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']\n self.colors = dict(zip(self.list_hosts(), colornames))\n self.xlimits = np.array([0.3,2.3])\n self.ylimits = np.array([30.0,17.0])\n return\n \n def list_hosts(self):\n # The list of (9) unique galaxy host names:\n return np.unique(self.hosts)\n \n def select(self,ID):\n # Pull out one galaxy's data from the master array:\n index = (self.hosts == str(ID))\n self.mobs = self.data[index,2]\n self.merr = self.data[index,3]\n self.logP = np.log10(self.data[index,4])\n return\n \n def plot(self,X):\n # Plot all the points in the dataset for host galaxy X.\n ID = str(X)\n self.select(ID)\n plt.rc('xtick', labelsize=16) \n plt.rc('ytick', labelsize=16)\n plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)\n plt.xlabel('$\\\\log_{10} P / {\\\\rm days}$',fontsize=20)\n plt.ylabel('${\\\\rm magnitude (AB)}$',fontsize=20)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)\n return\n\n def overlay_straight_line_with(self,a=0.0,b=24.0):\n # Overlay a straight line with gradient a and intercept b.\n x = self.xlimits\n y = a*x + b\n plt.plot(x, y, 'k-', alpha=0.5, lw=2)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n return\n \n def add_legend(self):\n plt.legend(loc='upper left')\n return\n\n\ndata = Cepheids('R11ceph.dat')\nprint(data.colors)",
"OK, now we are all set up! Let's plot one of the datasets.",
"data.plot(4258)\n\n# for ID in data.list_hosts():\n# data.plot(ID)\n \ndata.overlay_straight_line_with(a=-2.0,b=24.0)\n\ndata.add_legend()",
"Q: Is the Cepheid Period-Luminosity relation likely to be well-modeled by a power law ?\nIs it easy to find straight lines that \"fit\" all the data from each host? And do we get the same \"fit\" for each host?\nInferring the Period-Magnitude Relation\n\nLet's try inferring the parameters $a$ and $b$ of the following linear relation:\n\n$m = a\\;\\log_{10} P + b$\n\nWe have data consisting of observed magnitudes with quoted uncertainties, of the form \n\n$m^{\\rm obs} = 24.51 \\pm 0.31$ at $\\log_{10} P = \\log_{10} (13.0/{\\rm days})$\n\nLet's draw a PGM for this, imagining our way through what we would do to generate a mock dataset like the one we have.",
"# import cepheids_pgm\n# cepheids_pgm.simple()\n\nfrom IPython.display import Image\nImage(filename=\"cepheids_pgm.png\")",
"Q: What are reasonable assumptions about the sampling distribution for the $k^{\\rm th}$ datapoint, ${\\rm Pr}(m^{\\rm obs}_k|m_k,H)$?\n\nWe were given points ($m^{\\rm obs}_k$) with error bars ($\\sigma_k$), which suggests a Gaussian sampling distribution (as was suggested in Session 1):\n\n${\\rm Pr}(m^{\\rm obs}_k|m_k,\\sigma_k,H) = \\frac{1}{Z} \\exp{-\\frac{(m^{\\rm obs}_k - m_k)^2}{2\\sigma_k^2}}$\n\nThen, we might suppose that the measurements of each Cepheid start are independent of each other, so that we can define predicted and observed data vectors $m$ and $m^{\\rm obs}$ (plus a corresponding observational uncertainty vector $\\sigma$) via:\n\n${\\rm Pr}(m^{\\rm obs}|m,\\sigma,H) = \\prod_k {\\rm Pr}(m^{\\rm obs}_k|m_k,\\sigma_k,H)$\nQ: What is the conditional PDF ${\\rm Pr}(m_k|a,b,\\log{P_k},H)$?\nOur relationship between the intrinsic magnitude and the log period is linear and deterministic, indicating the following delta-function PDF:\n${\\rm Pr}(m_k|a,b,\\log{P_k},H) = \\delta(m_k - a\\log{P_k} - b)$\nQ: What is the resulting joint likelihood, ${\\rm Pr}(m^{\\rm obs}|a,b,H)$?\n\nThe factorisation of the joint PDF for everything inside the plate that is illustrated by the PGM is:\n\n${\\rm Pr}(m^{\\rm obs}|m,\\sigma,H)\\;{\\rm Pr}(m|a,b,H) = \\prod_k {\\rm Pr}(m^{\\rm obs}_k|m_k,\\sigma_k,H)\\;\\delta(m_k - a\\log{P_k} - b)$\n\nThe intrinsic magnitudes of each Cepheid ($m$) are not interesting, and so we marginalize them out:\n\n${\\rm Pr}(m^{\\rm obs}|a,b,H) = \\int {\\rm Pr}(m^{\\rm obs}|m,\\sigma,H)\\;{\\rm Pr}(m|a,b,H)\\; dm$\nso that ${\\rm Pr}(m^{\\rm obs}|a,b,H) = \\prod_k {\\rm Pr}(m^{\\rm obs}_k|[a\\log{P_k} + b],\\sigma,H)$\nQ: What is the log likelihood?\n$\\log {\\rm Pr}(m^{\\rm obs}|a,b,H) = \\sum_k \\log {\\rm Pr}(m^{\\rm obs}_k|[a\\log{P_k} + b],\\sigma,H)$\nwhich, substituting in our Gaussian form, gives us: \n$\\log {\\rm Pr}(m^{\\rm obs}|a,b,H) = {\\rm constant} - 0.5 \\sum_k \\frac{(m^{\\rm obs}_k - a\\log{P_k} - b)^2}{\\sigma_k^2}$\n\nThis sum is often called $\\chi^2$ (\"chi-squared\"), and you may have seen it before. It's an effective \"misfit\" statistic, quantifying the difference between observed and predicted data - and under the assumptions outlined here, it's twice the log likelihood (up to a constant).\n\nQ: What could be reasonable assumptions for the prior ${\\rm Pr}(a,b|H)$?\nFor now, we can (continue to) assume a uniform distribution for each of $a$ and $b$ - in the homework, you can investigate some alternatives.\n${\\rm Pr}(a|H) = \\frac{1.0}{a_{\\rm max} - a_{\\rm min}}\\;\\;{\\rm for}\\;\\; a_{\\rm min} < a < a_{\\rm max}$\n${\\rm Pr}(b|H) = \\frac{1.0}{b_{\\rm max} - b_{\\rm min}}\\;\\;{\\rm for}\\;\\; b_{\\rm min} < b < b_{\\rm max}$\nWe should now be able to code up functions for the log likelihood, log prior and log posterior, such that we can evaluate them on a 2D parameter grid. Let's fill them in:",
"def log_likelihood(logP,mobs,merr,a,b):\n return -0.5*np.sum((mobs - a*logP -b)**2/(merr**2))\n\ndef log_prior(a,b):\n amin,amax = -10.0,10.0\n bmin,bmax = 10.0,30.0\n if (a > amin)*(a < amax)*(b > bmin)*(b < bmax):\n logp = np.log(1.0/(amax-amin)) + np.log(1.0/(bmax-bmin))\n else:\n logp = -np.inf\n return logp\n\ndef log_posterior(logP,mobs,merr,a,b):\n return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)",
"Now, let's set up a suitable parameter grid and compute the posterior PDF!",
"# Select a Cepheid dataset:\ndata.select(4258)\n\n# Set up parameter grids:\nnpix = 100\namin,amax = -4.0,-2.0\nbmin,bmax = 25.0,27.0\nagrid = np.linspace(amin,amax,npix)\nbgrid = np.linspace(bmin,bmax,npix)\nlogprob = np.zeros([npix,npix])\n\n# Loop over parameters, computing unnormlized log posterior PDF:\nfor i,a in enumerate(agrid):\n for j,b in enumerate(bgrid):\n logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)\n\n# Normalize and exponentiate to get posterior density:\nZ = np.max(logprob)\nprob = np.exp(logprob - Z)\nnorm = np.sum(prob)\nprob /= norm",
"Now, plot, with confidence contours:",
"sorted = np.sort(prob.flatten())\nC = sorted.cumsum()\n\n# Find the pixel values that lie at the levels that contain\n# 68% and 95% of the probability:\nlvl68 = np.min(sorted[C > (1.0 - 0.68)])\nlvl95 = np.min(sorted[C > (1.0 - 0.95)])\n\nplt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])\nplt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])\nplt.grid()\nplt.xlabel('slope a')\nplt.ylabel('intercept b / AB magnitudes')",
"Are these inferred parameters sensible? \n\n\nLet's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.",
"data.plot(4258)\n\ndata.overlay_straight_line_with(a=-3.0,b=26.3)\n\ndata.add_legend()",
"OK, this looks good! Later in the course we will do some more extensive model checking.\nSummarizing our Inferences\nLet's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and \"68% credible interval\" (defined as the region of 1D parameter space enclosing 68% of the posterior probability).",
"prob_a_given_data = np.sum(prob,axis=0) # Approximate the integral as a sum\nprob_b_given_data = np.sum(prob,axis=1) # Approximate the integral as a sum\n\nprint(prob_a_given_data.shape, np.sum(prob_a_given_data))\n\n# Plot 1D distributions:\n\nfig,ax = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(15, 6)\nplt.subplots_adjust(wspace=0.2)\n\nleft = ax[0].plot(agrid, prob_a_given_data)\nax[0].set_title('${\\\\rm Pr}(a|d)$')\nax[0].set_xlabel('slope $a$')\nax[0].set_ylabel('Posterior probability density')\n\nright = ax[1].plot(bgrid, prob_b_given_data)\nax[1].set_title('${\\\\rm Pr}(a|d)$')\nax[0].set_xlabel('intercept $b$ / AB magnitudes')\nax[1].set_ylabel('Posterior probability density')\n\n# Compress each PDF into a median and 68% credible interval, and report:\n\ndef compress_1D_pdf(x,pr,ci=68,dp=1):\n \n # Interpret credible interval request:\n low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68\n high = 1.0 - low # 0.84 for ci=68\n\n # Find cumulative distribution and compute percentiles:\n cumulant = pr.cumsum()\n pctlow = x[cumulant>low].min()\n median = x[cumulant>0.50].min()\n pcthigh = x[cumulant>high].min()\n \n # Convert to error bars, and format a string:\n errplus = np.abs(pcthigh - median)\n errminus = np.abs(median - pctlow)\n \n report = \"$ \"+str(round(median,dp))+\"^{+\"+str(round(errplus,dp))+\"}_{-\"+str(round(errminus,dp))+\"} $\"\n \n return report\n\nprint(\"a = \",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))\n\nprint(\"b = \",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))",
"Notes\n\n\nIn this simple case, our report makes sense: the medians of both 1D marginalized PDFs lie within the region of high 2D posterior PDF. This will not always be the case.\n\n\nThe marginalized posterior for $x$ has a well-defined meaning, regardless of the higher dimensional structure of the joint posterior: it is ${\\rm Pr}(x|d,H)$, the PDF for $x$ given the data and the model, and accounting for the uncertainty in all other parameters.\n\n\nThe high degree of symmetry in this problem is due to the posterior being a bivariate Gaussian. We could have derived the posterior PDF analytically - but in general this will not be possible. The homework invites you to explore various other analytic and numerical possibilities in this simple inference scenario."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WyoARCC/arcc-106-python
|
ARCC+Bootcamp+Machine+Learning.ipynb
|
mit
|
[
"Machine Learning Using Python by ARCC\nWhat is Machine Learning?\nMachine Learning is the subfield of computer science, which is defined by Arthur Samuel as \"Giving computers the ability to learn without being explicitly programmed\". Generally speaking, it can be defined as the ability of machines to learn to perform a task efficiently based on experience.\nTwo major types of Machine Learning problems\nThe two major types of Machine Learning problems are: \n1. Supervised Learning : In this type of Machine Learning problem, the learning algorithms are provided with a data set that has a known label or result, such as classifying a bunch of emails as spam/not-spam emails.\n2. Unsupervised Learning : In this type of Machine Learning problem, the learning algorithms are provided with a data set that has no known label or result. The algorithm without any supervision is expected to find some structure in the data by itself. For example search engines.\nIn order to limit the scope of this boot camp, as well as save us some time, we will focus on Supervised Learning today.\nMachine Learning's \"Hello World\" programs\nSupervised Learning\nIn this section we focus on three Supervised Learning algorithms namely Linear Regression, Linear Classifier and Support Vector Machines.\nLinear Regression\nIn order to explain what Linear Regression is, lets write a program that performs Linear Regression. Our goal is to find the best fitting straight line through a data set comprising of 50 random points. \nEquation of a line in slope intercept form is: $y=m*x+C$",
"#NumPy is the fundamental package for scientific computing with Python\nimport numpy as np\n\n# Matplotlib is a Python 2D plotting library \nimport matplotlib.pyplot as plt\n\n#Number of data points\nn=50\nx=np.random.randn(n)\ny=np.random.randn(n)\n\n#Create a figure and a set of subplots\nfig, ax = plt.subplots()\n\n#Find best fitting straight line\n#This will return the coefficients of the best fitting straight line \n#i.e. m and c in the slope intercept form of a line-> y=m*x+c\nfit = np.polyfit(x, y, 1)\n\n#Plot the straight line\nax.plot(x, fit[0] * x + fit[1], color='black')\n\n#scatter plot the data set\nax.scatter(x, y)\n\nplt.ylabel('y axis')\nplt.xlabel('x axis')\nplt.show()\n\n#predict output for an input say x=5\nx_input=5\npredicted_value= fit[0] * x_input + fit[1]\nprint(predicted_value)",
"Using the best fittng straight line, we can predict the next expected value. Hence Regression is a Machine Learning technique which helps to predict the output in a model that takes continous values.\nLinear Classifier\nA classifier, for now, can be thought of as a program that uses an object's characteristics to identify which class it belongs to. For example, classifying a fruit as an orange or an apple. The following program is a simple Supervised Learning classifier, that makes use of a decision tree classifier (An example of a decision tree is shown below).",
"#Import the decision tree classifier class from the scikit-learn machine learning library\nfrom sklearn import tree\n\n#List of features\n#Say we have 9 inputs each with two features i.e. [feature one=1:9, feature two=0 or 1] \nfeatures=[[1,1],[8,0],[5,1],[2,1],[6,0],[9,1],[3,1],[4,1],[7,0]]\n\n#The 9 inputs are classified explicitly into three classes (0,1 and 2) by us\n# For example input 1,1 belongs to class 0\n# input 4,1 belongs to class 1\n# input 8,1 belongs to class 2\nlabels=[0,0,0,1,1,1,2,2,2]\n\n#Features are the inputs to the classifier and labels are the outputs\n\n#Create decision tree classifier\nclf=tree.DecisionTreeClassifier()\n#Training algorithm, included in the object, is executed\nclf=clf.fit(features,labels) #Fit is a synonym for \n\"find patterns in data\"\n\n#Predict to which class does an input belong\n#for example [20,1]\nprint (clf.predict([[2,1]]))",
"Support Vector Machines\n“Support Vector Machine” (SVM) is a supervised machine learning algorithm which can be used for both classification or regression challenges. However, it is mostly used in classification problems. An example of SVM would be using a linear hyperplane to seperate two clusters of data points. The following code implements the same.",
"#import basic libraries for plotting and scientific computing\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\n#emulates the aesthetics of ggplot in R\nstyle.use(\"ggplot\")\n\n#import class svm from scikit-learn\nfrom sklearn import svm\n\n#input data\nX = [1, 5, 1.5, 8, 1, 9]\nY = [2, 8, 1.8, 8, 0.6, 11]\n\n#classes assigned to input data\ny = [0,1,0,1,0,1]\n\n#plot input data\n#plt.scatter(X,Y)\n#plt.show()\n\n#Create the Linear Support Vector Classification\nclf = svm.SVC(kernel='linear', C = 1.0)\n\n#input data in 2-D\npoints=[[1,2],[5,8],[1.5,1.8],[8,8],[1,0.6],[9,11]]#,[2,2],[1,4]]\n\n#Fit the data with Linear Support Vector Classification\nclf.fit(points,y) #Fit is a synonym for \"find patterns in data\"\n\n#Predict the class for the following two points depending on which side of the SVM they lie\nprint(clf.predict([0.58,0.76]))\nprint(clf.predict([10.58,10.76]))\n\n#find coefficients of the linear svm\nw = clf.coef_[0]\n#print(w)\n\n#find slope of the line we wish to draw between the two classes\n#a=change in y/change in x\na = -w[0] / w[1]\n\n#Draw the line\n\n#x points for a line\n#linspace ->Return evenly spaced numbers over a specified interval.\nxx = np.linspace(0,12)\n#equation our SVM hyperplane\nyy = a * xx - clf.intercept_[0] / w[1]\n\n#plot the hyperplane\nh0 = plt.plot(xx, yy, 'k-', label=\"svm\") \n#plot the data points as a scatter plot\nplt.scatter(X,Y)\nplt.legend()\nplt.show()\n",
"Basic workflow when using Machine Learning algorithms\n\nNeural Networks\nNeural Networks can be thought of as a one large composite function, which is comprised of other functions. Each layer is a function that is fed an input which is the result of the previous function's output.\nFor example:\n\nThe example above was rather rudimentary. Let us look at a case where we have more than one inputs, fed to a prediction function that maps them to an output. This can be depicted by the following graph.\n\nBuilding a Neural Network\nThe function carried out by our layer is termed as the sigmoid. It takes the form: $1/(1+exp(-x))$\nSteps to follow to create our neural network:\n<br>\n1) Get set of input\n<br>\n2) dot with a set of weights i.e. weight1input1+weight2input2+weight3*input3\n<br>\n3) send the dot product to our prediction function i.e. sigmoid\n<br>\n4) check how much we missed i.e. calculate error\n<br>\n5) adjust weights accordingly\n<br>\n6) Do this for all inputs and about 1000 times",
"#import numpy library\nimport numpy as np\n\n# Sigmoid function which maps any value to between 0 and 1 \n# This is the function which will our layers will comprise of\n# It is used to convert numbers to probabilties\n\ndef sigmoid(x):\n return 1/(1+np.exp(-x))\n\n# input dataset\n# 4 set of inputs\nx = np.array([[0.45,0,0.21],[0.5,0.32,0.21],[0.6,0.5,0.19],[0.7,0.9,0.19]])\n \n# output dataset corresponding to our set of inputs \n# .T takes the transpose of the 1x4 matrix which will give us a 4x1 matrix\ny = np.array([[0.5,0.4,0.6,0.9]]).T\n\n#Makes our model deterministic for us to judge the outputs better\n#Numbers will be randomly distributed but randomly distributed in exactly the same way each time the model is trained\nnp.random.seed(1)\n \n# initialize weights randomly with mean 0, weights lie within -1 to 1\n# dimensions are 3x1 because we have three inputs and one output\nweights = 2*np.random.random((3,1))-1\n\n\n#Network training code\n#Train our neural network\nfor iter in range(1000):\n #get input\n input_var = x\n \n #This is our prediction step\n #first predict given input, then study how it performs and adjust to get better\n #This line first multiplies the input by the weights and then passes it to the sigmoid function\n output = sigmoid(np.dot(input_var,weights))\n \n #now we have guessed an output based on the provided input\n #subtract from the actual answer to see how much did we miss\n error = y - output\n\n #based on error update our weights\n weights += np.dot(input_var.T,error)\n\n#The best fit weights by our neural net is as following:\nprint(\"The weights that the neural network found was:\")\nprint(weights)\n\n#Predict with new inputs i.e. dot with weights and then send to our prediction function\npredicted_output = sigmoid(np.dot(np.array([0.3,0.9,0.1]),weights))\nprint (\"Predicted Output:\")\nprint (predicted_output)",
"Optional :\nK-Nearest Neighbour",
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Feature set containing (x,y) values of 25 known/training data\ntrainData = np.random.randint(0,100,(25,2)).astype(np.float32)\n\n# Labels each one either Red or Blue with numbers 0 and 1\nresponses = np.random.randint(0,2,(25,1)).astype(np.float32)\n\n# Take Red families and plot them\nred = trainData[responses.ravel()==0]\nplt.scatter(red[:,0],red[:,1],80,'r','^')\n\n# Take Blue families and plot them\nblue = trainData[responses.ravel()==1]\nplt.scatter(blue[:,0],blue[:,1],80,'b','s')\n\n#New unknown data point\nnewcomer = np.random.randint(0,100,(1,2)).astype(np.float32)\n#Make this unknown data point green\nplt.scatter(newcomer[:,0],newcomer[:,1],80,'g','o')\n\n#Carry out the K nearest neighbour classification\nknn = cv2.ml.KNearest_create()\n#Train the algorithm\n#passing 0 as a parameter considers the length of array as 1 for entire row.\nknn.train(trainData, 0, responses)\n#Find 3 nearest neighbours...also make sure the neighbours found belong to both classes\nret, results, neighbours ,dist = knn.findNearest(newcomer, 3)\nprint (\"result: \", results,\"\\n\")\nprint (\"neighbours: \", neighbours,\"\\n\")\nprint (\"distance: \", dist)\nplt.show()",
"Additional resources to further learn Python\nTutorials by python.org at https://docs.python.org/3/tutorial/\nPython for everybody specialization by the University of Michigan at www.coursera.org\nPython intro course on Data Camp at https://www.datacamp.com/courses/intro-to-python-for-data-science\nFree exercises at https://learnpythonthehardway.org/book/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dtamayo/MachineLearning
|
Day2/titanic_svm.ipynb
|
gpl-3.0
|
[
"Titanic kaggle competition with SVM",
"#import all the needed package\nimport numpy as np\nimport scipy as sp\nimport re\nimport pandas as pd\nimport sklearn\nfrom sklearn.cross_validation import train_test_split,cross_val_score\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import metrics\nimport matplotlib \nfrom matplotlib import pyplot as plt\n%matplotlib inline\nfrom sklearn.svm import SVC\n",
"Let's load and examine the titanic data with pandas first.",
"data = pd.read_csv('data/train.csv')\nprint data.head()\n\n# our target is the survived column\ny= data['Survived']\n\nprint data.shape",
"So we have 891 training examples with 10 information columns given. Of course it is not straight forward to use all of them at this point. \nIn this example, we will just explore two simple SVM models that only use two features. \nOur choice of models are:\n- gender, class\n- gender, feature\nGender-Class model\nRecall how we generated features from catagories last session. We use the same method to generate an additional feature called Sex_male.",
"#add in Sex_male features\ndata['Sex_male']=data.Sex.map({'female':0,'male':1})\ndata.head()\n\n#get the features we indented to use \nfeature_cols=['Pclass','Sex_male']\nX=data[feature_cols]\nX.head()\n\n#use the default SVM rbf model\nmodel=SVC()\nscores=cross_val_score(model,X,y,cv=10,scoring='accuracy')\nprint scores, np.mean(scores),np.std(scores)",
"Here is how we examine how the selection works.",
"xmin,xmax=X['Pclass'].min()-0.5,X['Pclass'].max()+0.5\nymin,ymax=X['Sex_male'].min()-0.5,X['Sex_male'].max()+0.5\nprint xmin,xmax,ymin,ymax\nxx, yy = np.meshgrid(np.linspace(xmin, xmax, 200), np.linspace(ymin, ymax, 200))\n\nmodel.fit(X,y)\nZ = model.decision_function(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\n\nfig=plt.figure(figsize=(20,10))\nax=fig.add_subplot(111)\nax.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu,alpha=0.5)\nax.scatter(X['Pclass']+np.random.randn(len(X['Pclass']))*0.1, X['Sex_male']+np.random.randn(len(X['Pclass']))*0.05, c=y,s=40, cmap=plt.cm.RdBu_r)\nax.set_xlabel(\"Pclass\")\nax.set_ylabel(\"Sex_male\")\nax.set_xlim([0.5,3.5])\nax.set_ylim([-0.5,1.5])\nplt.show()",
"Gender-Age model\nWe will introduce a few concepts while including the Age feature. \nmissing values\n\nidentify missing values",
"#use the isnull function to check if there is any missing value in the Age column. \npd.isnull(data['Age']).any()",
"How many missing values are there?",
"print len(data['Age'][pd.isnull(data['Age'])])",
"SVM does not allow features with missing values, what do we do? \nOne idea would be to fill in them with a number we think is reasonable. \nLet's try to use the average age first.",
"data['Age'][pd.isnull(data['Age'])]=data['Age'].mean()",
"can you think of better ways to do this?",
"#generate our new feature\nfeature_cols=['Age','Sex_male']\nX=data[feature_cols]\nX.head()\n\n#use the default SVM rbf model\nscores=cross_val_score(model,X,y,cv=10,scoring='accuracy')\nprint scores, np.mean(scores),np.std(scores)",
"feature rescaling",
"X['Age']=(X['Age']-X['Age'].median())/X['Age'].std()\n#X = StandardScaler().fit_transform(X)\n\nscores=cross_val_score(model,X,y,cv=10,scoring='accuracy')\nprint scores, np.mean(scores),np.std(scores)",
"Let's examine the selection function of the model.",
"xmin,xmax=X['Age'].min()-0.5,X['Age'].max()+0.5\nymin,ymax=X['Sex_male'].min()-0.5,X['Sex_male'].max()+0.5\nprint xmin,xmax,ymin,ymax\nxx, yy = np.meshgrid(np.linspace(xmin, xmax, 200), np.linspace(ymin, ymax, 200))\n\nmodel.fit(X,y)\nZ = model.decision_function(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\n\nfig=plt.figure(figsize=(20,10))\nax=fig.add_subplot(111)\nax.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu,alpha=0.5)\nax.scatter(X['Age'], X['Sex_male']+np.random.randn(len(X['Age']))*0.05, c=y,s=40, cmap=plt.cm.RdBu_r)\nax.set_xlabel(\"Normalized Age\")\nax.set_ylabel(\"Sex_male\")\nax.set_ylim([-0.5,1.5])\nax.set_xlim([-3,4.5])\nplt.show()",
"Create a submission file with the Gender-Age model\nFirst we want to read in the test data set and add in the gender features as what we did with the training data set.",
"test_data = pd.read_csv('data/test.csv')\n#print test_data.head()\n#add in Sex_male features\ntest_data['Sex_male']=test_data.Sex.map({'female':0,'male':1})",
"We notice again that some of the age value is missing in the test data, and want to fill in the same way as what we did with the training data.",
"#use the isnull function to check if there is any missing value in the Age column. \npd.isnull(test_data['Age']).any()\n\nprint len(test_data['Age'][pd.isnull(test_data['Age'])])\n\ntest_data['Age'][pd.isnull(test_data['Age'])]=data['Age'].mean()",
"Note here we give the missing values the mean age of the training data. \nWhat's the pros and cons of doing this? \nWe want to get the features from the test data, and scale our age feature the same way as what we did in the training data.",
"#generate our new feature\nX_test=test_data[feature_cols]\nX_test['Age']=(X_test['Age']-data['Age'].median())/data['Age'].std()",
"We use the model above to predict the survive of our test data. \nThe model is fitted with the entire training data.",
"y_pred=model.predict(X_test)\nX_test.head()",
"create a file that can be submit to kaggle\nWe read in the example submission file provided by kaggle, and then replace the \"Survived\" column with our own prediction.\nWe use the to_csv method of panda dataframe, now we can check with kaggle on how well we are doing.",
"samplesubmit = pd.read_csv(\"data/titanic_submit_example.csv\")\n#samplesubmit.head()\n\nsamplesubmit[\"Survived\"]=y_pred\n#samplesubmit.to_csv\nsamplesubmit.to_csv(\"data/titanic_submit_gender_age.csv\",index=False)\n\n\nsamplesubmit.head()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jfrygeo/Read-Headers-CSV-python
|
View-Headers-Pandas.ipynb
|
apache-2.0
|
[
"View-Headers-And-Edit-CSV\nIf a file is too large to be opened by Excel or Notepad, this script will read the csv file and allow a user to output a clean csv",
"import pandas as pd\nprint (pd.__version__)\n\nresponse = input(\"Please Enter Location of Input CSV (\"\"Example: C:\\Test\\TestCSV.csv)\"\": \")",
"The code below will read the csv, show you any errors and skip over those lines. The data frame object is then passed below.",
"# If you need to see what pandas read is all about here is the call for the help\n#df = pd.read_csv?\n\ndf = pd.read_csv(response,error_bad_lines=False, index_col=False, encoding='iso-8859-1',warn_bad_lines=True)\ndatakeys = df.keys()\ndatakeys\ndf",
"Put in the location for the new csv",
"newcsv = input(\"Please Enter Location of output CSV (\"\"Example: C:\\Test\\TestCSV.csv)\"\": \")",
"The section below will allow you to put in values that you want replaced in the output csvs.",
"newdf = df.replace(to_replace =[\"None\", \"none\", \"NONE\"], \n value =\"\") \nexport_csv = newdf.to_csv((newcsv), index = None, header=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
badlands-model/BayesLands
|
Examples/mountain/Hydrometrics.ipynb
|
gpl-3.0
|
[
"Hydrometrics\nIn this notebook, we show how to compute several hydrometics parameters based on stream network produced from model. The analysis relies on the flow files (i.e. stream) found in Badlands outputs. If you are interested in looking at morphometrics and stratigraphic analysis there are other notebooks specially designed for that in the Badlands companion repository.\nHydrometrics here refers only to quantitative description and analysis of water surface and we don't consider groundwater analysis. We will show how you can extract a particular catchment from a given model and compute for this particular catchment a series of paramters such as:\n\nriver profile evolution based on main stream elevation and distance to outlet,\npeclet number distribution which evaluates the dominant processes shaping the landscape,\n$\\chi$ parameter that characterizes rivers system evolution based on terrain steepness and the arrangement of tributaries,\ndischarge profiles",
"%matplotlib inline\n\nfrom matplotlib import cm\n\n# Import badlands grid generation toolbox\nimport pybadlands_companion.hydroGrid as hydr\n\n# display plots in SVG format\n%config InlineBackend.figure_format = 'svg' ",
"1. Load catchments parameters\nWe first have to define the path to the Badlands outputs we want to analyse. In addition Badlands is creating several files for each processors that have been used, you need to specify this number in the ncpus variable. \nWe then need to provide a point coordinates (X,Y) contained in the catchment of interest. This point doesn't need to be the outlet of the catchment. \nFor more information regarding the function uncomment the following line.",
"#help(hydr.hydroGrid.__init__)\n\nhydro1 = hydr.hydroGrid(folder='output/h5', ncpus=1, \\\n ptXY = [40599,7656.65])\n\nhydro2 = hydr.hydroGrid(folder='output/h5', ncpus=1, \\\n ptXY = [33627.6,30672.9])",
"2. Extract particular catchment dataset\nWe now extract the data from a particular time step (timestep) and for the catchment of interest, which contained the point specified in previous function.\nNote\nIf you are interested in making some hydrometric comparisons between different time steps you can create multiple instances of the hydrometrics python class each of them associated to a given time step.",
"#help(hydro.getCatchment)\n\nhydro1.getCatchment(timestep=200)\nhydro2.getCatchment(timestep=200)",
"We can visualise the stream network using the viewNetwork function. The following paramters can be displayed:\n- $\\chi$ paramater 'chi',\n- elevation 'Z',\n- discharge 'FA' (logarithmic values)",
"#help(hydro.viewNetwork)\n\nhydro1.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15, \n val = 'chi', width = 300, height = 500, colorMap = cm.viridis, \n colorScale = 'Viridis', reverse = False, \n title = '<br>Stream network graph 1')\n\nhydro2.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15, \n val = 'chi', width = 300, height = 500, colorMap = cm.viridis, \n colorScale = 'Viridis', reverse = False, \n title = '<br>Stream network graph 2')\n\nhydro1.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3, \n val = 'FA', width = 300, height = 500, colorMap = cm.Blues, \n colorScale = 'Blues', reverse = True, \n title = '<br>Stream network graph 1')\n\nhydro2.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3, \n val = 'FA', width = 300, height = 500, colorMap = cm.Blues, \n colorScale = 'Blues', reverse = True, \n title = '<br>Stream network graph 2')",
"3. Extract catchment main stream\nWe now extract the main stream for the considered catchment based on flow \ndischarge values.",
"#help(hydro.extractMainstream)\n\nhydro1.extractMainstream()\nhydro2.extractMainstream()",
"As for the global stream network, you can use the viewStream function to visualise the main stream dataset.",
"#help(hydro.viewStream)\n\nhydro1.viewStream(linePlot = False, lineWidth = 1, markerSize = 7, \n val = 'Z', width = 300, height = 500, colorMap = cm.jet, \n colorScale = 'Jet', reverse = False, \n title = '<br>Stream network graph 1')\n\nhydro2.viewStream(linePlot = True, lineWidth = 1, markerSize = 7, \n val = 'Z', width = 300, height = 500, colorMap = cm.jet, \n colorScale = 'Jet', reverse = False, \n title = '<br>Stream network graph 2')",
"4. Compute main stream hydrometrics\nHere, we compute the stream parameters using the distance from outlet and the Badlands simulation coefficients for the stream power law and the hillslope linear diffusion.\nThe formulation for the Peclet number is: \n$$Pe =\\frac {\\kappa_{c}l^{2(m+1)-n}}{\\kappa_{d}z^{1-n}}$$\nwhere $\\kappa_{c}$ is the erodibility coefficient, $\\kappa_{d}$ the hillslope diffusion coefficient and m, n the exponents from the stream power law equation. Their values are defined in your model input file.\nThe formulation for the $\\chi$ parameter follows:\n$$\\chi = \\int_{x_b}^x \\left( \\frac{A_o}{A(x')} \\right)^{m/n} dx' $$\nwhere $A_o$ is an arbitrary scaling area, and the integration is performed upstream from base level to location $x$.\nIn addition the function computeParams requires an additional parameter num which is the number of samples to generate along the main stream profile for linear interpolation.",
"hydro1.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)\nhydro2.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)",
"The following combination of parameters can be visualised with the viewPlot function:\n- 'dist': distance from catchment outlet\n- 'FA': flow discharge (logorithmic)\n- 'Pe': Peclet number\n- 'Chi': $\\chi$ parameter\n- 'Z': elevation from outlet.",
"#help(hydro1.viewPlot)\n\nhydro1.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',\n width = 800, height = 500, colorLine = 'black', colorMarker = 'black',\n opacity = 0.2, title = 'Chi vs distance to outlet')\nhydro2.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',\n width = 800, height = 500, colorLine = 'orange', colorMarker = 'purple',\n opacity = 0.2, title = 'Chi vs distance to outlet')",
"5. River profile through time\nUsing the same functions as before we can now create the river profile evolution through time and plot it on a single graph.",
"#help(hydro.timeProfiles)\n\nhydro0 = hydr.hydroGrid(folder='output/h5', ncpus=1, \\\n ptXY = [40599,7656.65])\n\ntimeStp = [20,40,60,80,100,120,140,160,180,200]\ntimeMA = map(lambda x: x * 0.25, timeStp)\nprint 'Profile time in Ma:',timeMA\ndist = []\nelev = []\nfor t in range(len(timeStp)):\n hydro0.getCatchment(timestep=timeStp[t])\n hydro0.extractMainstream()\n hydro0.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=1000)\n dist.append(hydro0.dist)\n elev.append(hydro0.Zdata)\n\nhydro0.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,\n title = 'River profile through time')\n\nhydro00 = hydr.hydroGrid(folder='output/h5', ncpus=1, \\\n ptXY = [33627.6,30672.9])\n\ntimeStp = [20,40,60,80,100,120,140,160,180,200]\ntimeMA = map(lambda x: x * 0.25, timeStp)\nprint 'Profile time in Ma:',timeMA\ndist = []\nelev = []\nfor t in range(len(timeStp)):\n hydro00.getCatchment(timestep=timeStp[t])\n hydro00.extractMainstream()\n hydro00.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=50)\n dist.append(hydro00.dist)\n elev.append(hydro00.Zdata)\n \nhydro00.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,\n title = 'River profile through time')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
drvinceknight/cfm
|
assets/assessment/mock/main.ipynb
|
mit
|
[
"Computing for Mathematics - Mock individual coursework\nThis jupyter notebook contains questions that will resemble the questions in your individual coursework.\nImportant Do not delete the cells containing: \n```\nBEGIN SOLUTION\nEND SOLUTION\n```\nwrite your solution attempts in those cells.\nIf you would like to submit this notebook:\n\nChange the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.\nWrite all your solution attempts in the correct locations;\nSave the notebook (File>Save As);\nFollow the instructions given in class/email to submit.\n\nQuestion 1\nOutput the evaluation of the following expressions exactly.\na. \\(\\frac{(9a^2bc^4) ^ {\\frac{1}{2}}}{6ab^{\\frac{3}{2}}c}\\)",
"### BEGIN SOLUTION\nimport sympy as sym\na, b, c = sym.Symbol(\"a\"), sym.Symbol(\"b\"), sym.Symbol(\"c\")\n\nsym.expand((9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c))\n### END SOLUTION\n\nq1_a_answer = _\nfeedback_text = \"\"\"Your output is not a symbolic expression.\n\nYou are expected to use sympy for this question.\n\"\"\"\ntry:\n assert q1_a_answer.expand(), feedback_text\nexcept AttributeError:\n assert False, feedback_text\n\nimport sympy as sym\na, b, c = sym.Symbol(\"a\"), sym.Symbol(\"b\"), sym.Symbol(\"c\")\n\nexpected_answer = (9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c)\nfeedback_text = f\"\"\"Your answer is not correct.\n\nThe expected answer is {expected_answer}.\"\"\"\nassert sym.simplify(q1_a_answer - expected_answer) == 0, feedback_text",
"b. \\((2 ^ {\\frac{1}{2}} + 2) ^ 2 - 2 ^ {\\frac{5}{2}}\\)",
"### BEGIN SOLUTION\n(sym.S(2) ** (sym.S(1) / 2) + 2) ** 2 - 2 ** (sym.S(5) / 2)\n### END SOLUTION\n\nq1_b_answer = _\nfeedback_text = \"\"\"Your output is not a symbolic expression.\n\nYou are expected to use sympy for this question.\n\"\"\"\ntry:\n assert q1_b_answer.expand(), feedback_text\nexcept AttributeError:\n assert False, feedback_text\n\nx = sym.Symbol(\"x\")\nexpected_answer = 6\nfeedback_text = f\"\"\"Your answer is not correct.\n\nThe expected answer is {expected_answer}.\"\"\"\nassert sym.expand(q1_b_answer - expected_answer) == 0, feedback_text",
"\\((\\frac{1}{8}) ^ {\\frac{4}{3}}\\)",
"### BEGIN SOLUTION\n(sym.S(1) / 8) ** (sym.S(4) / 3)\n### END SOLUTION\n\nq1_c_answer = _\nfeedback_text = \"\"\"Your output is not a symbolic expression.\n\nYou are expected to use sympy for this question.\n\"\"\"\ntry:\n assert q1_c_answer.expand(), feedback_text\nexcept AttributeError:\n assert False, feedback_text\n\nx = sym.Symbol(\"x\")\nexpected_answer = sym.S(1) / 16\nfeedback_text = f\"\"\"Your answer is not correct.\n\nThe expected answer is {expected_answer}.\"\"\"\nassert q1_c_answer == expected_answer, feedback_text",
"Question 2\nWrite a function expand that takes a given mathematical expression and returns the expanded expression.",
"def expand(expression):\n ### BEGIN SOLUTION\n \"\"\"\n Take a symbolic expression and expands it.\n \"\"\"\n return sym.expand(expression)\n ### END SOLUTION\n\nfeedback_text = \"\"\"You did not include a docstring. This is important to help document your code.\n\n\nIt is done using triple quotation marks. For example:\n\ndef get_remainder(m, n):\n \\\"\\\"\\\"\n This function returns the remainder of m when dividing by n\n \\\"\\\"\\\"\n …\n\nUsing that it's possible to access the docstring,\none way to do this is to type: `get_remainder?`\n(which only works in Jupyter) or help(get_remainder).\n\nWe can also comment code using `#` but this is completely\nignored by Python so cannot be accessed in the same way.\n\n\"\"\"\ntry:\n assert expand.__doc__ is not None, feedback_text\nexcept NameError:\n assert False, \"You did not create a function called `expand`\"\n\nexpression = x * (x + 1)\nassert expand(expression) == x ** 2 + x, f\"Your function failed for {expression}\"\nexpression = x * (x + 1) - x ** 2\nassert expand(expression) == x, f\"Your function failed for {expression}\"\nexpression = x ** 2 + 1\nassert expand(expression) == x ** 2 + 1, f\"Your function failed for {expression}\"",
"Question 3\nThe matrix \\(D\\) is given by \\(D = \\begin{pmatrix} 1& 2 & a\\ 3 & 1 & 0\\ 1 & 1 & 1\\end{pmatrix}\\) where \\(a\\ne 2\\).\na. Create a variable D which has value the matrix \\(D\\).",
"### BEGIN SOLUTION\na = sym.Symbol(\"a\")\nD = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])\n### END SOLUTION\n\nexpected_D = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])\nfeedback_text = f\"The expected value of `D` is {expected_D}.\"\ntry:\n assert sym.simplify(sym.expand(sym.simplify(D) - expected_D)) == sym.Matrix([[0, 0, 0] for _ in range(3)]), feedback_text\nexcept NameError:\n assert False, \"You did not create a variable `D`\"",
"b. Create a variable D_inv with value the inverse of \\(D\\).",
"### BEGIN SOLUTION\nD_inv = D.inv()\n### END SOLUTION\n\nexpected_D_inv = expected_D.inv()\nfeedback_text = f\"The expected value of `D_inv` is {expected_D_inv}.\"\nassert sym.simplify(sym.expand(sym.simplify(D_inv) - expected_D_inv)) == sym.Matrix([[0, 0, 0] for _ in range(3)]), feedback_text",
"c. Using D_inv output the solution of the following system of equations:\n\\[\n\\begin{array}{r}\n x + 2y + 4z = 3\\\n 3x + y = 4\\\n x + y + z = 1\\\n\\end{array}\n\\]",
"### BEGIN SOLUTION\nb = sym.Matrix([[3], [4], [1]])\nsym.simplify(D.inv() @ b).subs({a: 4})\n### END SOLUTION\n\nanswer_q3_c = _\nexpected_b = sym.Matrix([[3], [4], [1]])\nexpected_answer = sym.simplify(expected_D_inv @ expected_b).subs({a: 4})\nfeedback_text = f\"The expected solution is {expected_answer}.\"\nassert sym.expand(expected_answer - answer_q3_c) == sym.Matrix([[0], [0], [0]]), feedback_text",
"Question 4\nDuring a game of frisbee between a handler and their dog the handler chooses to randomly select if they throw using a backhand or a forehand: 25% of the time they will throw a backhand.\nBecause of the way their dog chooses to approach a flying frisbee they catch it with the following probabilities:\n\n80% of the time when it is thrown using a backhand\n90% of the time when it is thrown using a forehand\n\na. Write a function sample_experiment() that simulates a given throw and returns the throw type (as a string with value \"backhand\" or \"forehand\") and whether it was caught (as a boolean: either True or False).",
"import random\n\n\ndef sample_experiment():\n \"\"\"\n Returns the throw type and whether it was caught\n \"\"\"\n ### BEGIN SOLUTION\n if random.random() < .25:\n throw = \"backhand\"\n probability_of_catch = .8\n else:\n throw = \"forehand\"\n probability_of_catch = .9\n \n caught = random.random() < probability_of_catch\n ### END SOLUTION\n return throw, caught\n\nfeedback_text = \"\"\"You did not include a docstring. This is important to help document your code.\n\n\nIt is done using triple quotation marks. For example:\n\ndef get_remainder(m, n):\n \\\"\\\"\\\"\n This function returns the remainder of m when dividing by n\n \\\"\\\"\\\"\n …\n\nUsing that it's possible to access the docstring,\none way to do this is to type: `get_remainder?`\n(which only works in Jupyter) or help(get_remainder).\n\nWe can also comment code using `#` but this is completely\nignored by Python so cannot be accessed in the same way.\n\n\"\"\"\ntry:\n assert sample_experiment.__doc__ is not None, feedback_text\nexcept NameError:\n assert False, \"You did not create a variable called `sample_experiment`\"\n\ntry:\n random.seed(0)\n throw, caught = sample_experiment()\n assert throw in [\"forehand\", \"backhand\"], \"Your function did not give a throw with seed=0\"\n assert caught in [True, False], \"Your function did not give a valid coin with seed=0\"\n\n random.seed(1)\n throw, caught = sample_experiment()\n assert throw in [\"forehand\", \"backhand\"], \"Your function did not give a valid throw with seed=0\"\n assert caught in [True, False], \"Your function did not give a valid coin with seed=0\"\nexcept NameError:\n assert False, \"You did not create a function called `sample_experiment` or there is an error in your function.\"\n\nrepetitions = 10_000\nrandom.seed(0)\nfeedback_text = f\"\"\"Your function did not give a selection of forehand throws within acceptable error bounds.\n\nOut of {repetitions} repetitions you got less than 5500 or more than 9500 forehand throw.\n\"\"\"\nthrows = [sample_experiment()[0] for _ in range(repetitions)]\nassert 5_500 <= throws.count(\"forehand\") <= 9_500, feedback_text",
"b. Using 1,000,000 samples create a variable probability_of_catch which has value an estimate for the probability of the frisbee being caught.",
"### BEGIN SOLUTION\nnumber_of_repetitions = 1_000_000\nrandom.seed(0)\nsamples = [sample_experiment() for repetition in range(number_of_repetitions)]\nprobability_of_catch = sum(catch is True for throw, catch in samples) / number_of_repetitions\n### END SOLUTION\n\nassert type(probability_of_catch) is float, \"You did not return a float\"\n\nexpected_answer = sym.S(1) / (4) * sym.S(8) / 10 + sym.S(3) / (4) * sym.S(9) / 10\nfeedback_text = f\"\"\"The expected value is: {expected_answer}\n\nYour value was not within 20% of the expected answer.\n\"\"\"\nassert expected_answer * .8 <= probability_of_catch <= expected_answer * 1.2, feedback_text",
"c. Using the above, create a variable probability_of_forehand_given_drop which has value an estimate for the probability of the frisbee being thrown with a forehand given that it was not caught.",
"### BEGIN SOLUTION\nsamples_with_drop = [(throw, catch) for throw, catch in samples if catch is False]\nnumber_of_drops = len(samples_with_drop)\nprobability_of_forehand_given_drop = sum(throw == \"forehand\" for throw, catch in samples_with_drop) / number_of_drops\n### END SOLUTION\n\nassert type(probability_of_forehand_given_drop) is float, \"You did not return a float\"\n\nexpected_answer = (sym.S(1) / (10) * sym.S(75) / 100) / (1 - (sym.S(1) / (4) * sym.S(8) / 10 + sym.S(3) / (4) * sym.S(9) / 10))\nfeedback_text = f\"\"\"The expected value is: {expected_answer}\n\nYour value was not within 20% of the expected answer.\n\"\"\"\nassert expected_answer * .8 <= probability_of_forehand_given_drop <= expected_answer * 1.2, feedback_text"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chetan51/nupic.research
|
projects/dynamic_sparse/notebooks/ExperimentAnalysis-ImprovedMag.ipynb
|
gpl-3.0
|
[
"Experiment:\nEvaluate pruning by magnitude weighted by coactivations.\nMotivation.\nTest new proposed method",
"%load_ext autoreload\n%autoreload 2\n\nimport sys\nsys.path.append(\"../../\")\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport glob\nimport tabulate\nimport pprint\nimport click\nimport numpy as np\nimport pandas as pd\nfrom ray.tune.commands import *\nfrom dynamic_sparse.common.browser import *",
"Load and check data",
"exps = ['improved_magpruning_test1', ]\npaths = [os.path.expanduser(\"~/nta/results/{}\".format(e)) for e in exps]\ndf = load_many(paths)\n\ndf.head(5)\n\n# replace hebbian prine\ndf['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)\ndf['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)\n\ndf.columns\n\ndf.shape\n\ndf.iloc[1]\n\ndf.groupby('model')['model'].count()",
"## Analysis\nExperiment Details",
"# Did any trials failed?\ndf[df[\"epochs\"]<30][\"epochs\"].count()\n\n# Removing failed or incomplete trials\ndf_origin = df.copy()\ndf = df_origin[df_origin[\"epochs\"]>=30]\ndf.shape\n\n# which ones failed?\n# failed, or still ongoing?\ndf_origin['failed'] = df_origin[\"epochs\"]<30\ndf_origin[df_origin['failed']]['epochs']\n\n# helper functions\ndef mean_and_std(s):\n return \"{:.3f} ± {:.3f}\".format(s.mean(), s.std())\n\ndef round_mean(s):\n return \"{:.0f}\".format(round(s.mean()))\n\nstats = ['min', 'max', 'mean', 'std']\n\ndef agg(columns, filter=None, round=3):\n if filter is None:\n return (df.groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n else:\n return (df[filter].groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n",
"What are optimal levels of hebbian and weight pruning",
"agg(['weight_prune_perc'])\n\nmulti2 = (df['weight_prune_perc'] % 0.2 == 0)\nagg(['weight_prune_perc'], multi2)",
"No relevant difference",
"pd.pivot_table(df[filter], \n index='hebbian_prune_perc',\n columns='weight_prune_perc',\n values='val_acc_max',\n aggfunc=mean_and_std)\n\ndf.shape",
"Conclusions:\n\nNo pruning leads (0,0) to acc of 0.976\nPruning all connections at every epoch (1,0) leads to acc of 0.964\nBest performing model is still no hebbian pruning, and weight pruning set to 0.2 (0.981)\nPruning only by hebbian learning decreases accuracy\nCombining hebbian and weight magnitude is not an improvement compared to simple weight magnitude pruning"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
elleros/spoofed-speech-detection
|
gmm_ml_synthetic_speech_detection.ipynb
|
mit
|
[
"Spoofed Speech Detection via Maximum Likelihood Estimation of Gaussian Mixture Models\nThe goal of synthetic speech detection is to determine whether a speech segment $S$ is natural or synthetic/converted speeach.\nThis notebook implements a Gaussian mixture model maximum likelihood (GMM-ML) classifier for synthetic (spoofed) speech detection. This approach uses regular mel frequency cepstral coefficients (MFCC) features and gives the best performance on the ASVspoof 2015 dataset among the standard classifiers (GMM-SV, GMM-UBM, ...). For more background information see: Hanilçi, Cemal, Tomi Kinnunen, Md Sahidullah, and Aleksandr Sizov. \"Classifiers for synthetic speech detection: a comparison.\" In INTERSPEECH 2015. The scripts use the Python package Bob.Bio.SPEAR 2.04 for speaker recogntion.\nThis work is part of the \"DDoS Resilient Emergency Dispatch Center\" project at the University of Houston, funded by the Department of Homeland Security (DHS).\nApril 19, 2015\nLorenzo Rossi\n(lorenzo [dot] rossi [at] gmail [dot] com)",
"import os\nimport time\nimport numpy as np\nimport pandas as pd\nfrom bob.bio.spear import preprocessor, extractor\nfrom bob.bio.gmm import algorithm\nfrom bob.io.base import HDF5File\nfrom bob.learn import em\nfrom sklearn.metrics import classification_report, roc_curve, roc_auc_score\n\nWAV_FOLDER = 'Wav/' #'ASV2015dataset/wav/' # Path to folder containing speakers .wav subfolders\nLABEL_FOLDER = 'CM_protocol/' #'ASV2015dataset/CM_protocol/' # Path to ground truth csv files\n\nEXT = '.wav'\n%matplotlib inline",
"Loading the Ground Truth\nLoad the dataframes (tables) with the labels for the training, development and evaluation (hold out) sets. Each subfolder corresponds to a different speaker. For example, T1 and D4 indicate the subfolders associated to the utterances and spoofed segments of speakers T1 and D4, respectively in training and development sets. Note that number of evaluation samples >> number of development samples >> testing samples.\nYou can either select the speakers in each set one by one, e.g.:\ntrain_subfls = ['T1', 'T2'] \nwill only load segments from speakers T1 and T2 for training,\nor use all the available speakers in a certain subset by leaving the list empty, e.g.:\ndevel_subfls = []\nwill load all the available Dx speaker segments for the development stage. If you are running this notebook for the first time, you may want to start only with 2 or so speakers per set for sake of quick testing. All the scripts may take several hours to run on the full size datsets.",
"train_subfls = ['T1']#, 'T2', 'T3', 'T4', 'T5', 'T6', 'T7', 'T8', 'T9', 'T13'] #T13 used instead of T10 for gender balance\ndevel_subfls = ['D1']#, 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8', 'D9', 'D10']\nevalu_subfls = ['E1']#, 'E2', 'E3', 'E4', 'E5', 'E6','E7', 'E8', 'E9', 'E10']\ntrain = pd.read_csv(LABEL_FOLDER + 'cm_train.trn', sep=' ', header=None, names=['folder','file','method','source'])\nif len(train_subfls): train = train[train.folder.isin(train_subfls)]\ntrain.sort_values(['folder', 'file'], inplace=True)\ndevel = pd.read_csv(LABEL_FOLDER + 'cm_develop.ndx', sep=' ', header=None, names=['folder','file','method','source'])\nif len(devel_subfls): devel = devel[devel.folder.isin(devel_subfls)]\ndevel.sort_values(['folder', 'file'], inplace=True)\nevalu = pd.read_csv(LABEL_FOLDER +'cm_evaluation.ndx', sep=' ', header=None, names=['folder','file','method','source'])\nif len(evalu_subfls): evalu = evalu[evalu.folder.isin(evalu_subfls)]\n\nevalu.sort_values(['folder', 'file'], inplace=True)\n\nlabel_2_class = {'human':1, 'spoof':0}\n\nprint('training samples:',len(train))\nprint('development samples:',len(devel))\nprint('evaluation samples:',len(evalu))",
"Speech Preprocessing and MFCC Extraction\nSilence removal and MFCC feature extraction for training segments. More details about the bob.bio.spear involved libraries at:\nhttps://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.spear/master/implemented.html\nYou can also skip this stage and load a set of feaures (see Loading features cell).",
"# Parameters\nn_ceps = 60 # number of ceptral coefficients (implicit in extractor)\nsilence_removal_ratio = .1\n\nsubfolders = train_subfls\nground_truth = train\n\n# initialize feature matrix\nfeatures = []\ny = np.zeros((len(ground_truth),))\nprint(\"Extracting features for training stage.\")\n\nvad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio)\ncepstrum = extractor.Cepstral()\n\nk = 0\nstart_time = time.clock()\n\nfor folder in subfolders[0:n_subfls]:\n print(folder, end=\", \")\n folder = \"\".join(('Wav/',folder,'/'))\n f_list = os.listdir(folder)\n for f_name in f_list:\n # ground truth\n try: \n label = ground_truth[ground_truth.file==f_name[:-len(EXT)]].source.values[0]\n except IndexError:\n continue\n y[k] = label_2_class[label]\n # silence removal\n x = vad.read_original_data(folder+f_name)\n vad_data = vad(x)\n if not vad_data[2].max():\n vad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio*.8)\n vad_data = vad(x)\n vad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio)\n # MFCC extraction \n mfcc = cepstrum(vad_data)\n features.append(mfcc)\n k += 1\n\nXf = np.array(features)\nprint(k,\"files processed in\",(time.clock()-start_time)/60,\"minutes.\")",
"Saving features",
"np.save('X.npy',Xf)\nnp.save('y.npy',y)\nprint('Feature and label matrices saved to disk')",
"Loading features",
"# Load already extracter features to skip the preprocessing-extraction stage\nXf = np.load('train_features_10.npy')\ny = np.load('y_10.npy')",
"GMM - ML Classification\nGMM Training\nTrain the GMMs for natural and synthetic speach. For documentation on bob.bio k-means and GMM machines see:\nhttps://pythonhosted.org/bob.learn.em/guide.html\nYou can also skip the training stage and load an already trained GMM model (see cell Loading GMM Model).",
"# Parameters of the GMM machines\nn_gaussians = 128 # number of Gaussians\nmax_iterats = 25 # maximum number of iterations",
"GMM for natural speech",
"# Initialize and train k-means machine: the means will initialize EM algorithm for GMM machine\nstart_time = time.clock()\nkmeans_nat = em.KMeansMachine(n_gaussians,n_ceps)\nkmeansTrainer = em.KMeansTrainer()\nem.train(kmeansTrainer, kmeans_nat, np.vstack(Xf[y==1]), max_iterations = max_iterats, convergence_threshold = 1e-5)\n#kmeans_nat.means\n\n# initialize and train GMM machine\ngmm_nat = em.GMMMachine(n_gaussians,n_ceps)\ntrainer = em.ML_GMMTrainer(True, True, True)\ngmm_nat.means = kmeans_nat.means\nem.train(trainer, gmm_nat, np.vstack(Xf[y==1]), max_iterations = max_iterats, convergence_threshold = 1e-5)\n#gmm_nat.save(HDF5File('gmm_nat.hdf5', 'w'))\nprint(\"Done in:\", (time.clock() - start_time)/60, \"minutes\")\nprint(gmm_nat)",
"GMM for synthetic speech",
"# initialize and train k-means machine: the means will initialize EM algorithm for GMM machine\nstart_time = time.clock()\nkmeans_synt = em.KMeansMachine(n_gaussians,n_ceps)\nkmeansTrainer = em.KMeansTrainer()\nem.train(kmeansTrainer, kmeans_synt, np.vstack(Xf[y==0]), max_iterations = max_iterats, convergence_threshold = 1e-5)\n\n# initialize and train GMM machine\ngmm_synt = em.GMMMachine(n_gaussians,n_ceps)\ntrainer = em.ML_GMMTrainer(True, True, True)\ngmm_synt.means = kmeans_synt.means\nem.train(trainer, gmm_synt, np.vstack(Xf[y==0]), max_iterations = max_iterats, convergence_threshold = 1e-5)\nprint(\"Done in:\", (time.clock() - start_time)/60, \"minutes\")\n#gmm_synt.save(HDF5File('gmm_synt.hdf5', 'w'))\nprint(gmm_synt)",
"Loading GMM model",
"gmm_nat = em.GMMMachine()\ngmm_nat.load(HDF5File('gmm_nat.hdf5', 'r'))\ngmm_synt = em.GMMMachine()\ngmm_synt.load(HDF5File('gmm_synt.hdf5','r'))\n\nnp.save('p_gmm_ml_eval_10.npy',llr_score)\nnp.save('z_gmm_ml_eval_est_10.npy',z_gmm)",
"GMM-ML Scoring\nExtract the features for the testing data, compute the likelihood ratio test and compute ROC AUC and estimated EER scores.",
"status = 'devel' # 'devel'(= test) OR 'evalu'(= hold out)\nstart_time = time.clock()\n\nif status == 'devel':\n subfolders = devel_subfls\n ground_truth = devel\nelif status == 'evalu':\n subfolders = evalu_subfls\n ground_truth = evalu\nn_subfls = len(subfolders)\n# initialize score and class arrays\nllr_gmm_score = np.zeros(len(ground_truth),)\nz_gmm = np.zeros(len(ground_truth),)\nprint(status)\n\nvad = preprocessor.Energy_Thr(ratio_threshold=.1)\ncepstrum = extractor.Cepstral()\n\nk = 0\nthr = .5\nspeaker_list = ground_truth.folder.unique()\n\nfor speaker_id in speaker_list:\n #speaker = ground_truth[ground_truth.folder==speaker_id]\n f_list = list(ground_truth[ground_truth.folder==speaker_id].file)\n folder = \"\".join(['Wav/',speaker_id,'/'])\n print(speaker_id, end=',')\n\n for f in f_list:\n f_name = \"\".join([folder,f,'.wav'])\n x = vad.read_original_data(f_name)\n # voice activity detection\n vad_data = vad(x)\n if not vad_data[2].max():\n vad = preprocessor.Energy_Thr(ratio_threshold=.08)\n vad_data = vad(x)\n vad = preprocessor.Energy_Thr(ratio_threshold=.1)\n # MFCC extraction \n mfcc = cepstrum(vad_data)\n # Log likelihood ratio computation\n llr_gmm_score[k] = gmm_nat(mfcc)-gmm_synt(mfcc)\n z_gmm[k] = int(llr_gmm_score[k]>0)\n k += 1\n\nground_truth['z'] = ground_truth.source.map(lambda x: int(x=='human'))\nground_truth['z_gmm'] = z_gmm\nground_truth['score_gmm'] = llr_gmm_score\nprint(roc_auc_score(ground_truth.z, ground_truth.z_gmm))\nprint(k,\"files processed in\",(time.clock()-start_time)/60,\"minutes.\")\n\n# Performance evaluation\nhumans = z_gmm[z_dvl==0]\nspoofed = z_gmm[z_dvl==1]\nfnr = 100*(1-(humans<thr).sum()/len(humans))\nfpr = 100*(1-(spoofed>=thr).sum()/len(spoofed))\nprint(\"ROC AUC score:\", roc_auc_score(z_dvl,z_gmm))\nprint(\"False negative rate %:\", fnr)\nprint(\"False positive rate %:\", fpr)\nprint(\"EER %: <=\", (fnr+fpr)/2)",
"EER computation\nAdjust the threshold $thr$ to reduce $FNR-FPR$ for a more accurate estimate of the $EER$.\nThe Equal Error Rate ($EER$) is the value where the false negative rate ($FNR$) equals the false positive rate ($FPR$). It's an error metric commonly used to characterize biometric systems.",
"thr = -.115\npz = llr_gmm_score\nspoofed = pz[np.array(ground_truth.z)==1]\nhumans = pz[np.array(ground_truth.z)==0]\nfnr = 100*(humans>thr).sum()/len(humans)\nfpr = 100*(spoofed<=thr).sum()/len(spoofed)\nprint(\"False negative vs positive rates %:\", fnr, fpr)\nprint(\"FNR - FPR %:\", fnr-fpr)\nif np.abs(fnr-fpr) <.25:\n print(\"EER =\", (fnr+fpr)/2,\"%\")\nelse:\n print(\"EER ~\", (fnr+fpr)/2,\"%\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kghose/groho
|
docs/dev/adaptive-display-points.ipynb
|
mit
|
[
"Some notes on downsampling data for display\nThe smaller the time step of a simulation, the more accurate it is. Empirically, for the Euler method, it looks like 0.001 JD per step (or about a minute) is decent for our purposes. This means that we now have 365.25 / 0.001 = {{365.25 / 0.001}} points per simulation object, or {{12 * 365.25 / 0.001}} bytes. However, we don't really need such dense points for display. On the other hand, 4MB is not that much and we could probably let it go, but just for fun let's explore some different downsampling schemes.\nNote: We can do both adaptive time steps for the simulation as well as use a better intergrator/gravity model to get by with larger time steps, but I haven't explored this yet as it requires a deeper understanding of such models and my intuition is that it still won't downsample the points to the extent that we want, not to mention being more complicated to program. We'll leave that for version 2.0 of the simulator.\nWe'll set up a simple simulation with the aim of generating some interesting trajectories. The main property we are looking for are paths that have different curvatures as we expect in simulations we will do - since spacecraft will engage/disengage engines and change attitude.",
"import numpy as np\nimport numpy.linalg as ln\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nclass Body:\n def __init__(self, _mass, _pos, _vel):\n self.mass = _mass\n self.pos = _pos\n self.vel = _vel\n self.acc = np.zeros(3)\n \n def setup_sim(self, steps=1000):\n self.trace = np.empty((steps, 3), dtype=float)\n \n def update(self, dt, n):\n self.pos += self.vel * dt\n self.vel += self.acc * dt\n self.acc = np.zeros(3)\n self.trace[n, :] = self.pos\n \n def plot_xy(self, ax):\n ax.plot(self.trace[:, 0], self.trace[:, 1])\n \ndef acc_ab(a, b):\n r = a.pos - b.pos\n r2 = np.dot(r, r)\n d = r / ln.norm(r)\n Fb = d * (a.mass * b.mass) / r2\n Fa = -Fb\n a.acc += Fa / a.mass\n b.acc += Fb / b.mass\n \n \ndef sim_step(bodies, dt, n):\n for n1, b1 in enumerate(bodies):\n for b2 in bodies[n1 + 1:]:\n acc_ab(b1, b2)\n \n for b1 in bodies:\n b1.update(dt, n)\n\n \ndef run_sim(bodies, steps, dt):\n for b in bodies:\n b.setup_sim(steps)\n for n in range(steps):\n sim_step(bodies, dt, n)\n\nbodyA = Body(100, np.array([0.0, 1.0, 0.0]), np.array([0.0, -10.0, 0.0]))\nbodyB = Body(100, np.array([0.0, -1.0, 0.0]), np.array([10.0, 0.0, 0.0]))\nbodyC = Body(100, np.array([1.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0]))\n\nN = 100000\ndt = 1e-5\n\nrun_sim([bodyA, bodyB, bodyC], N, dt)\n\nplt.figure(figsize=(20,10))\nax = plt.gca()\n#bodyA.plot_xy(ax)\nbodyB.plot_xy(ax)\n#bodyC.plot_xy(ax)\n_ = plt.axis('scaled')",
"Simple decimation\nLet us try a simple decimation type downsampler, taking every Nth point of the simulation",
"def downsample_decimate(body, every=20):\n return body.trace[::every, :]\n\ndecimated_trace = downsample_decimate(bodyB, every=2000)\n\n\ndef plot_compare(body, downsampled_trace):\n ds = downsampled_trace\n plt.figure(figsize=(20,10))\n plt.plot(ds[:, 0], ds[:, 1], 'ko:')\n ax = plt.gca()\n body.plot_xy(ax)\n plt.title('{} -> {}'.format(body.trace.shape[0], ds.shape[0]))\n _ = plt.axis('scaled')\n \n\nplot_compare(bodyB, decimated_trace)",
"This is unsatisfactory because we are doing poorly on the loop the loops. It does not adapt itself to different curvatures. So we either have to have a lot of points when we don't need it - on the straight stretches, or have too few points on the tight curves. Can we do better?\nSaturating maximum deviation\nThis scheme looks at the maximum deviation between the actual trace and the linear-interpolation between the points and adaptively downsamples to keep that deviation under a given threashold.",
"def perp_dist(x, y, z):\n \"\"\"x, z are endpoints, y is a point on the curve\"\"\"\n a = y - x\n a2 = np.dot(a, a)\n b = y - z\n b2 = np.dot(b, b)\n l = z - x\n l2 = np.dot(l, l)\n l = l2**0.5\n return (a2 - ((l2 + a2 - b2)/(2*l))**2)**0.5\n\n# # Here we'll compute the value for each point, but using just the mid point is probably \n# # a pretty good heurstic\n# def max_dist(pos, n0, n1):\n# return np.array([perp_dist(pos[n0, :], pos[n2, :], pos[n1, :]) for n2 in range(n0, n1)]).max()\n\n# Here we'll just use the midpoint for speed\ndef mid_dist(pos, n0, n1):\n return perp_dist(pos[n0, :], pos[int((n1 + n0)/2), :], pos[n1, :])\n \n\ndef max_deviation_downsampler(pos, thresh=0.1):\n adaptive_pos = [pos[0, :]]\n last_n = 0\n for n in range(1, pos.shape[0]):\n #print(pos[last_n,:])\n if n == last_n: continue\n #print(pos[n, :])\n if mid_dist(pos, last_n, n) > thresh:\n adaptive_pos.append(pos[n - 1, :])\n last_n = n - 1\n return np.vstack(adaptive_pos)\n\n\nmax_dev_trace = max_deviation_downsampler(bodyB.trace, thresh=0.005)\n\nplot_compare(bodyB, max_dev_trace)",
"Hey, this is pretty good! One thing that bothers me about this scheme is that it requires memory. It's hidden in how I did the simulation here in the prototype, but we have to keep storing every point during the simulation in a temporary buffer until we can select a point for the ouput trace. Can we come up with a scheme that is memory less?\nFractal downsampling\nOk, I call this frcatal downsampling because I was inspired by the notion of fractals where the length of a line depends on the scale of measurement. It's possibly more accurately described as length difference threshold downsampling, and that's no fun to say.\nIn this scheme I keep a running to total of the length of the original trace since the last sampled point and compare it to the length of the straight line segment if we use the current point as the next sampled point. If the ratio between the original length and the downsampled length goes above a given threshold, we use that as the next sampled point.\nThis discards the requirement for a (potentially very large) scratch buffer, but is it any good?",
"def fractal_downsampler(pos, ratio_thresh=2.0):\n d = np.diff(pos, axis=0)\n adaptive_pos = [pos[0, :]]\n last_n = 0\n for n in range(1, pos.shape[0]):\n if n == last_n: continue\n line_d = ln.norm(pos[n, :] - pos[last_n, :])\n curve_d = ln.norm(d[last_n:n,:], axis=1).sum()\n if curve_d / line_d > ratio_thresh:\n adaptive_pos.append(pos[n - 1, :])\n last_n = n - 1\n adaptive_pos.append(pos[-1, :])\n return np.vstack(adaptive_pos)\n\n\nfractal_trace = fractal_downsampler(bodyB.trace, ratio_thresh=1.001)\n\nplot_compare(bodyB, fractal_trace)",
"Darn it, not as good as the max deviation downsampler. We do well in the high curvature regions, but are insensitive on the long stretches. This is because we are using a ratio, and the longer the stretch, the more we can drift. I think the soluton to this may be to have an absolute distance difference threshold in addition to the ratio threshold and make this an OR operation - if the ratio OR the absolute distance threshold are exceeded, take a sample. \nThe ratio threshold takes care of the tight curves and the absolute threshold takes care of the gentle curves.\nSo ...",
"def fractal_downsampler2(pos, ratio_thresh=1.001, abs_thresh=0.1):\n d = np.diff(pos, axis=0)\n adaptive_pos = [pos[0, :]]\n last_n = 0\n for n in range(1, pos.shape[0]):\n if n == last_n: continue\n line_d = ln.norm(pos[n, :] - pos[last_n, :])\n curve_d = ln.norm(d[last_n:n,:], axis=1).sum()\n if curve_d / line_d > ratio_thresh or abs(curve_d - line_d) > abs_thresh:\n adaptive_pos.append(pos[n - 1, :])\n last_n = n - 1\n adaptive_pos.append(pos[-1, :])\n return np.vstack(adaptive_pos)\n\n\nfractal_trace2 = fractal_downsampler2(bodyB.trace, ratio_thresh=1.005, abs_thresh=0.0001)\n\nplot_compare(bodyB, fractal_trace2)",
"This looks like a good downsampling scheme. It's nice to have two knobs to control: one for the tight curves and one for the less curvy stretches. This allows us to get close to the max deviation downsampler without needing a ton of memory"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LukasMosser/Jupetro
|
notebooks/Simple Decline Curve Analysis in Python.ipynb
|
gpl-3.0
|
[
"Simple Decline Curve Analysis in Python\nCreated by Lukas Mosser, 2016\nIntroduction\nWe will look at how we can use python and related libraries to perform a decline curve analysis on\na number of example datasets. \nThe methodology shown here can be readily extended to additional decline curve types.\nFirst off we will import a number of libraries that allow us to perform the curve fitting as well as plotting the data.\nEspecially the scipy.optimize library will allow us to perform curve fitting using a non-linear least squares method.\nDefinitions\nDecline curves provide us with an empirical method to predict expected ultimate recovery and cummulative production of a reservoir. We will not cover the theory of decline curve analysis as this is provided by a number of references in oil and gas literature (see references section).\nWe will distinguish the three classic types of decline curves: \n\nExponential decline\nHyperbolic decline\nHarmonic decline\n\nThe production rate as a function of time is given as follows:\nExponential Decline\n$$q(\\Delta t)=q_i e^{-a\\Delta t}$$\nwhere $q_i$ is the initial rate, and $a$ is the decline rate. The exponential decline curve has one fitting parameter: $a$\nHyperbolic Decline\n$$q(\\Delta t)=\\frac{q_i}{(1+ba_i\\Delta t)^{(\\frac{1}{b})}}$$\nwhere $q_i$ is the initial rate, and $a_i$ is the decline rate, $b$ fractional exponent. The hyperbolic decline curve has two fitting parameters: $a_i$ and $b$\nHarmonic Decline\n$$q(\\Delta t)=\\frac{q_i}{(1+a_i\\Delta t)}$$\nwhere $q_i$ is the initial rate, and $a_i$ is the decline rate. The harmonic decline curve has one fitting parameter: $a_i$\nTurning theory into code\nWe will now turn our above decline curves into simple python methods. (Also known as functions)\nTo start off we will import a number of helper libraries to get us started and to provide\nthe necessary tools for curve fitting.\nThe scipy library and its module scipy.optimize allow us to use curve_fit to find the fitting parameters required for curve fitting.",
"from scipy.optimize import curve_fit",
"We will use matplotlib to perform any plotting tasks.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"The curve_fit method takes in three parameters at least. \npopt, pcov = curve_fit(func, T, Q)\nThe first parameter is a function, we can pass them as we would pass any variable. This function needs to take in the independent parameter, in our case the time, as well as any fitting parameters. The second parameter is the independent variable in our case the elapsed time. The third parameter is the observed data that we want to fit.\nWe know the initial rate, and therefore we would like to pass this to our decline curve method. There is no functionality to pass the additional variable via curve_fit. But python allows us to define functions inside functions and therefore we have a simple workaround.\nWe define a function inside an if clause and set up our initial rate. The function decline_curve then returns our decline curve function which is then passed to curve_fit\nFinally, we also define a simple L2 norm to estimate how good we've fit our data.\n$$Error_{L2}=\\sum{|{q(t)-q_{obs}(t)}|^2}$$",
"def decline_curve(curve_type, q_i):\n if curve_type == \"exponential\":\n def exponential_decline(T, a):\n return q_i*np.exp(-a*T)\n return exponential_decline\n \n elif curve_type == \"hyperbolic\":\n def hyperbolic_decline(T, a_i, b):\n return q_i/np.power((1+b*a_i*T), 1./b)\n return hyperbolic_decline\n \n elif curve_type == \"harmonic\":\n def parabolic_decline(T, a_i):\n return q_i/(1+a_i*T)\n return parabolic_decline\n \n else:\n raise \"I don't know this decline curve!\"\n\ndef L2_norm(Q, Q_obs):\n return np.sum(np.power(np.subtract(Q, Q_obs), 2))",
"Loading the data\nTo load our data we will load from a csv file and skip the first line, our header using the keyword skiprows.\nWe then split our data into time and rate data independently.",
"data = np.loadtxt(\"data/prod_data_1.txt\", skiprows=1, delimiter=\",\")\nT, Q = data.T",
"Let's define our decline curves using our initial rate. We then pass these on to scipy.optimize.curve_fit and compute the L2 norm of the resulting fit.",
"exp_decline = decline_curve(\"exponential\", Q[0])\nhyp_decline = decline_curve(\"hyperbolic\", Q[0])\nhar_decline = decline_curve(\"harmonic\", Q[0])\n\npopt_exp, pcov_exp = curve_fit(exp_decline, T, Q, method=\"trf\")\npopt_hyp, pcov_hyp = curve_fit(hyp_decline, T, Q, method=\"trf\")\npopt_har, pcov_har = curve_fit(har_decline, T, Q, method=\"trf\")\n\nprint \"L2 Norm of exponential decline: \", L2_norm(exp_decline(T, popt_exp[0]), Q)\nprint \"L2 Norm of hyperbolic decline decline: \", L2_norm(hyp_decline(T, popt_hyp[0], popt_hyp[1]), Q)\nprint \"L2 Norm of harmonic decline decline: \", L2_norm(har_decline(T, popt_har[0]), Q)",
"As we can see the hyperbolic decline fits the data much better than the other decline curves. We will now visualise the results using matplotlib.",
"fig, ax = plt.subplots(1, figsize=(16, 16))\n\nax.set_title(\"Decline Curve Analysis\", fontsize=30)\n\nlabel_size = 20\nyed = [tick.label.set_fontsize(label_size) for tick in ax.yaxis.get_major_ticks()]\nxed = [tick.label.set_fontsize(label_size) for tick in ax.xaxis.get_major_ticks()]\n\nax.set_xlim(min(T), max(T))\n\nax.scatter(T, Q, color=\"black\", marker=\"x\", s=250, linewidth=3)\nax.set_xlabel(\"Time (years)\", fontsize=25)\nax.set_ylabel(\"Oil Rate (1000 STB/d)\", fontsize=25)\n\npred_exp = exp_decline(T, popt_exp[0])\npred_hyp = hyp_decline(T, popt_hyp[0], popt_hyp[1])\npred_har = har_decline(T, popt_har[0])\n\nmin_val = min([min(curve) for curve in [pred_exp, pred_hyp, pred_har]])\nmax_val = max([max(curve) for curve in [pred_exp, pred_hyp, pred_har]])\n\nax.set_ylim(min_val, max_val)\n\nax.plot(T, pred_exp, color=\"red\", linewidth=5, alpha=0.5, label=\"Exponential\")\nax.plot(T, pred_hyp, color=\"green\", linewidth=5, alpha=0.5, label=\"Hyperbolic\")\nax.plot(T, pred_har, color=\"blue\", linewidth=5, alpha=0.5, label=\"Harmonic\")\nax.ticklabel_format(fontsize=25)\nax.legend(fontsize=25)",
"Finally, we will use our decline curves to predict the production for the following 7 years.",
"fig, ax = plt.subplots(1, figsize=(16, 16))\n\nT_max = 10.0\nT_pred = np.linspace(min(T), T_max)\n\nax.set_title(\"Decline Curve Analysis Prediction\", fontsize=30)\n\nlabel_size = 20\nyed = [tick.label.set_fontsize(label_size) for tick in ax.yaxis.get_major_ticks()]\nxed = [tick.label.set_fontsize(label_size) for tick in ax.xaxis.get_major_ticks()]\n\nax.set_xlim(min(T), max(T_pred))\n\nax.scatter(T, Q, color=\"black\", marker=\"x\", s=250, linewidth=3)\nax.set_xlabel(\"Time (years)\", fontsize=25)\nax.set_ylabel(\"Oil Rate (1000 STB/d)\", fontsize=25)\n\npred_exp = exp_decline(T_pred, popt_exp[0])\npred_hyp = hyp_decline(T_pred, popt_hyp[0], popt_hyp[1])\npred_har = har_decline(T_pred, popt_har[0])\n\nmin_val = min([min(curve) for curve in [pred_exp, pred_hyp, pred_har]])\nmax_val = max([max(curve) for curve in [pred_exp, pred_hyp, pred_har]])\n\nax.set_ylim(min_val, max_val)\n\nax.plot(T_pred, pred_exp, color=\"red\", linewidth=5, alpha=0.5, label=\"Exponential\")\nax.plot(T_pred, pred_hyp, color=\"green\", linewidth=5, alpha=0.5, label=\"Hyperbolic\")\nax.plot(T_pred, pred_har, color=\"blue\", linewidth=5, alpha=0.5, label=\"Harmonic\")\nax.ticklabel_format(fontsize=25)\nax.legend(fontsize=25)",
"References\nArps, J.J.: “ Analysis of Decline Curves,” Trans. AIME, 160, 228-247, 1945.\nGolan, M. and Whitson, C.M.: Well Performance, International Human Resource\nDevelopment Corp., 122-125, 1986.\nEconomides, M.J., Hill, A.D., and Ehlig-Economides, C.: Petroleum Production\nSystems, Prentice Hall PTR, Upper Saddle River, 516-519, 1994."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dmytroKarataiev/MachineLearning
|
creating_customer_segments/customer_segments.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nUnsupervised Learning\nProject 3: Creating Customer Segments\nWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.\nThe dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.\nRun the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nimport renders as rs\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Show matplotlib plots inline (nicely formatted in the notebook)\n%matplotlib inline\n\n# Load the wholesale customers dataset\ntry:\n data = pd.read_csv(\"customers.csv\")\n data.drop(['Region', 'Channel'], axis = 1, inplace = True)\n print \"Wholesale customers dataset has {} samples with {} features each.\".format(*data.shape)\nexcept:\n print \"Dataset could not be loaded. Is the dataset missing?\"",
"Data Exploration\nIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.\nRun the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.",
"# Display a description of the dataset\ndisplay(data.describe())",
"Implementation: Selecting Samples\nTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.",
"# Select three indices of your choice you wish to sample from the dataset\nindices = [4, 81, 390]\n\n# Create a DataFrame of the chosen samples\nsamples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)\nprint \"Chosen samples of wholesale customers dataset:\"\ndisplay(samples)\n\nprint \"Diff with the mean of the dataset\"\ndisplay(samples - data.mean().round())\n\nprint \"Diff with the median of the dataset\"\ndisplay(samples - data.median().round())\n\nprint \"Quartile Visualization\"\n\n# Import Seaborn, a very powerful library for Data Visualisation\nimport seaborn as sns\nperc = data.rank(pct=True)\nperc = 100 * perc.round(decimals=3)\nperc = perc.iloc[indices]\nsns.heatmap(perc, vmin=1, vmax=99, annot=True)\n\nsamples_bar = samples.append(data.describe().loc['mean'])\nsamples_bar.index = indices + ['mean']\n_ = samples_bar.plot(kind='bar', figsize=(14,6))\n\n",
"Question 1\nConsider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.\nWhat kind of establishment (customer) could each of the three samples you've chosen represent?\nHint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying \"McDonalds\" when describing a sample customer as a restaurant.\nAnswer:\nI've chosen three different indices which represent three completely different types of establishments:\n\nIndex 4: Probably a big supermarket in a good neighbourhood. All spendings are well above the median for each category, which means that the scale is large. Sales of Fresh and Delicatessen are in 86% and 97% respectively, which probably makes it almost an outlier in the data.\nIndex 81: A shop near a poor neighbourhood (taking into account the amount of Fresh and Delicatessen sales). This point of sales is focused on sales of Milk, Groceries and Detergents, with all sales in 80+ percentile (their sales are much higher than the mean and the median). \nIndex 390: considering low sales of Milk, Grocery and Detergents - probably a fast food of some kind with a focus on saled of Frozen (84 percentile).\n\nImplementation: Feature Relevance\nOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.\nIn the code block below, you will need to implement the following:\n - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.\n - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.\n - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.\n - Import a decision tree regressor, set a random_state, and fit the learner to the training data.\n - Report the prediction score of the testing set using the regressor's score function.",
"# Make a copy of the DataFrame, using the 'drop' function to drop the given feature\nfeatures = data.columns\n\nfor feature in features:\n new_data = data.drop(feature, axis = 1)\n target = data[feature]\n\n # Split the data into training and testing sets using the given feature as the target\n from sklearn import cross_validation\n\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(\n new_data, target, test_size = 0.25, random_state = 0)\n\n # Create a decision tree regressor and fit it to the training set\n from sklearn.tree import DecisionTreeRegressor\n\n regressor = DecisionTreeRegressor(random_state = 0)\n regressor.fit(X_train, y_train)\n # Report the score of the prediction using the testing set\n score = regressor.score(X_test, y_test)\n\n print feature, score\n ",
"Question 2\nWhich feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits?\nHint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.\nAnswer:\nI've tried to predict each feature of the set to understand if we have some features with a high r^2.\nThe highest prediction score was 0.73 with Detergents_Paper. I don't think this feature is necessary for identifying customer habits, as we have a limited number of samples and that's why we shouldn't use highly correlated features in the dataset. And also we can get the same information from other features.\nVisualize Feature Distributions\nTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.",
"# Correlations between segments\ncorr = data.corr()\nmask = np.zeros_like(corr)\nmask[np.triu_indices_from(mask)] = True\nwith sns.axes_style(\"white\"):\n ax = sns.heatmap(corr, mask=mask, square=True, annot=True, cmap='RdBu')\n\n# Produce a scatter matrix for each pair of features in the data\npd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde')",
"Question 3\nAre there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?\nHint: Is the data normally distributed? Where do most of the data points lie? \nAnswer:\nThere are several pairs of features: Milk - Detergents, Milk - Grocery, Grocery - Detergents.\nThe biggest correlation is between Grocery and Detergents_Paper features. This picture confirmed my suspicions about the relevance of the Groceries feature. In each case the data is skewed to the right.\nData Preprocessing\nIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.\nImplementation: Feature Scaling\nIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.\nIn the code block below, you will need to implement the following:\n - Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this.\n - Assign a copy of the sample data to log_samples after applying a logrithm scaling. Again, use np.log.",
"# Scale the data using the natural logarithm\nlog_data = np.log(data)\n\n# Scale the sample data using the natural logarithm\nlog_samples = np.log(samples)\n\n# Produce a scatter matrix for each pair of newly-transformed features\npd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde')\n\n## Checked the difference between data and cleaned data (with 1 deleted outlier)\n## pd.scatter_matrix(good_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde')",
"Observation\nAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).\nRun the code below to see how the sample data has changed after having the natural logarithm applied to it.",
"# Display the log-transformed sample data\ndisplay(log_samples)",
"Implementation: Outlier Detection\nDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many \"rules of thumb\" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.\nIn the code block below, you will need to implement the following:\n - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.\n - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.\n - Assign the calculation of an outlier step for the given feature to step.\n - Optionally remove data points from the dataset by adding indices to the outliers list.\nNOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!\nOnce you have performed this implementation, the dataset will be stored in the variable good_data.",
"# For each feature find the data points with extreme high or low values\nfor feature in log_data.keys():\n \n # Calculate Q1 (25th percentile of the data) for the given feature\n Q1 = np.percentile(log_data[feature], 25)\n \n # Calculate Q3 (75th percentile of the data) for the given feature\n Q3 = np.percentile(log_data[feature], 75)\n \n # Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)\n step = 1.5 * (Q3 - Q1)\n \n # Display the outliers\n print \"Data points considered outliers for the feature '{}':\".format(feature)\n display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])\n \n# OPTIONAL: Select the indices for data points you wish to remove\noutliers = [75]\n\n# Remove the outliers, if any were specified\ngood_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)",
"Question 4\nAre there any data points considered outliers for more than one feature? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. \nAnswer:\nThere are several points, which are outliers for more than 1 feature: 65, 66, 75, 128, 154. \nI think, only point 75 should be removed as it really changes the trend in the data and it is very far from the rest of data points. Probably, it is a recording error. After deleting we can see a better picture of the data.\nFeature Transformation\nIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.\nImplementation: PCA\nNow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new \"feature\" of the space, however it is a composition of the original features present in the data.\nIn the code block below, you will need to implement the following:\n - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.\n - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.",
"# Apply PCA to the good data with the same number of dimensions as features\nfrom sklearn.decomposition import PCA\n\npca = PCA(n_components=len(good_data.columns))\npca.fit(good_data)\n\n# Apply a PCA transformation to the sample log-data\npca_samples = pca.transform(log_samples)\n\n# Generate PCA results plot\npca_results = rs.pca_results(good_data, pca)",
"Question 5\nHow much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.\nHint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the indivdual feature weights.\nAnswer:\n\n~72% of data explained by 1 and 2 principal components.\n93.4% of data explained by 1-4 components.\n\nThe first dimension has the biggest positive weight on Detergents and slightly lower weights on Groceries and Milk, which are the 3 features with the highest correlation (based on the plots). It also shows that these customers are buying Fresh and Frozen products in a much lesser proportion.\n- type of a customer: modern trade store (supermarket type) with a variety of products, probably with a smaller assortment (to maintain larger sales per meter from a shelf and to keep prices lower).\n- customers with a high value are buying Detergents more than any other product, sligthly less Milk and Groceries, while sales of Fresh and Frozen are low.\n- customers with a negative value are the opposite: they almost do not buy Detergents, Groceries and Milk.\nThe second dimension probably is orthogonal to the first, reducing the impact of Milk, Grocery, and Detergents, and instead puting high weights on sales of Fresh, Frozen and Delicatessen items. \n- type of a customer: HoReCa point of sales due to the prevalence of sales of foods which are needed to be cooked (Fresh, Frozen, Delicatessen).\n- customers with a positive value are buying Fresh, Frozen and Delicatessen products.\n- customers with a negative value almost aren't buying Fresh, Frozen and Delicatessen products.\nThe third dimension has a high Fresh weight and a very negative Delicatessen weight. \n- type of a customer: looks like an open market to me (farmers market for example) as it sells Fresh products more than anything else.\n- customers with a high positive value are buying a lot Fresh products.\n- customers with a negative value are buying primarily Delicatessen products with a slight increase in Frozen. \nThe 4th dimension is mostly focused on Frozen with a high weight and with a very low Delicatessen weight.\n- type of a customer: could be a place which sells frozen meat (but the Frozen category can be anything: ice cream, meat, etc - I don't know for sure). Considering the fact that this data is from Portugal, I would argue that this is a stall type of a client selling frozen meat products. \n- customers with a positive value buy Frozen and spend very little on Delicatessen.\n- customers with a negative value buy Delicatessen with a slight increase in Fresh products.\nObservation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.",
"# Display sample log-data after having a PCA transformation applied\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))",
"Implementation: Dimensionality Reduction\nWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.\nIn the code block below, you will need to implement the following:\n - Assign the results of fitting PCA in two dimensions with good_data to pca.\n - Apply a PCA transformation of good_data using pca.transform, and assign the reuslts to reduced_data.\n - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.",
"# Fit PCA to the good data using only two dimensions\npca = PCA(n_components=2)\npca.fit(good_data)\n\n# Apply a PCA transformation the good data\nreduced_data = pca.transform(good_data)\n\n# Apply a PCA transformation to the sample log-data\npca_samples = pca.transform(log_samples)\n\n# Create a DataFrame for the reduced data\nreduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])",
"Observation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.",
"# Display sample log-data after applying PCA transformation in two dimensions\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))",
"Clustering\nIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. \nQuestion 6\n*What are the advantages to using a K-Means clustering algorithm? *\n*What are the advantages to using a Gaussian Mixture Model clustering algorithm? *\nGiven your observations about the wholesale customer data so far, which of the two algorithms will you use and why?\nAnswer:\nK-Means clustering:\n- It uses a hard assignment of points to clusters.\n- In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available) and scalable.\n- Gives best result when data set are distinct or well separated from each other.\nGaussian Mixture Model clustering:\n- The GMM algorithm is a good algorithm to use for the classification of static postures and non-temporal pattern recognition.\n- The fastest algorithm for learning mixture models, but it is slower than the K-Means due to using information about the data distribution — e.g., probabilities of points belonging to clusters.\n- It uses a soft classification, which means a sample will not be classified fully to one class but it will have different probabilities in several classes.\nI will start with the K-Means, as I don't have a complete understanding of the dataset and K-means is usually used as a first algorithm to use for clustering. \nImplementation: Creating Clusters\nDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the \"goodness\" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.\nIn the code block below, you will need to implement the following:\n - Fit a clustering algorithm to the reduced_data and assign it to clusterer.\n - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.\n - Find the cluster centers using the algorithm's respective attribute and assign them to centers.\n - Predict the cluster for each sample data point in pca_samples and assign them sample_preds.\n - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.\n - Assign the silhouette score to score and print the result.",
"# Apply your clustering algorithm of choice to the reduced data \nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import silhouette_score\n\nclusters = range(2, 11)\nbest = (0, 0.0)\n\nfor each in clusters:\n clusterer = KMeans(n_clusters=each, random_state=0).fit(reduced_data)\n\n # Predict the cluster for each data point\n preds = clusterer.predict(reduced_data)\n\n # Find the cluster centers\n centers = clusterer.cluster_centers_\n\n # Predict the cluster for each transformed sample data point\n sample_preds = clusterer.predict(pca_samples)\n\n # Calculate the mean silhouette coefficient for the number of clusters chosen\n score = silhouette_score(reduced_data, preds)\n print \"Clusters:\", each, \"score:\", score\n if score > best[1]:\n best = (each, score)\n\nclusterer = KMeans(n_clusters=best[0], random_state=0).fit(reduced_data)\n\n# Predict the cluster for each data point\npreds = clusterer.predict(reduced_data)\n\n# Find the cluster centers\ncenters = clusterer.cluster_centers_\n\n# Predict the cluster for each transformed sample data point\nsample_preds = clusterer.predict(pca_samples)\n\n# Calculate the mean silhouette coefficient for the number of clusters chosen\nscore = silhouette_score(reduced_data, preds)\n \nprint \"The best n of Clusters:\", best[0], \"\\nScore:\", best[1]",
"Question 7\nReport the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? \nAnswer:\nI've tried 9 different numbers of clusters:\n\nClusters: 2 score: 0.420795773671\nClusters: 3 score: 0.396034911432\nClusters: 4 score: 0.331704488262\nClusters: 5 score: 0.349383709753\nClusters: 6 score: 0.361735087656\nClusters: 7 score: 0.363059697196\nClusters: 8 score: 0.360593881403\nClusters: 9 score: 0.354722206188\nClusters: 10 score: 0.349422838857\n\nThe best number of Clusters: 2 with a score: 0.420795773671\nCluster Visualization\nOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.",
"# Display the results of the clustering from implementation\nrs.cluster_results(reduced_data, preds, centers, pca_samples)",
"Implementation: Data Recovery\nEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.\nIn the code block below, you will need to implement the following:\n - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.\n - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.",
"# TODO: Inverse transform the centers\nlog_centers = pca.inverse_transform(centers)\n\n# TODO: Exponentiate the centers\ntrue_centers = np.exp(log_centers)\n\n# Display the true centers\nsegments = ['Segment {}'.format(i) for i in range(0,len(centers))]\ntrue_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())\ntrue_centers.index = segments\ndisplay(true_centers)\ndisplay(data.describe())\nprint samples",
"Question 8\nConsider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?\nHint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.\nAnswer:\nCluster 0: considering the sales of Milk, Groceries and Detergents which are much higher than mean we can think of this cluster as a retailers channel.\nCluster 1: high sales of Fresh products - probably so-called HoReCa channel: hotels, restaraunts, cafes.\nQuestion 9\nFor each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?\nRun the code block below to find which cluster each sample point is predicted to be.",
"# Display the predictions\nfor i, pred in enumerate(sample_preds):\n print \"Sample point\", i, \"predicted to be in Cluster\", pred\n \nimport matplotlib.pyplot as plt\n\n# check if samples' spending closer to segment 0 or 1\ndf_diffs = (np.abs(samples-true_centers.iloc[0]) < np.abs(samples-true_centers.iloc[1])).applymap(lambda x: 0 if x else 1)\n\n# see how cluster predictions align with similariy of spending in each category\ndf_preds = pd.concat([df_diffs, pd.Series(sample_preds, name='PREDICTION')], axis=1)\nsns.heatmap(df_preds, annot=True, cbar=False, yticklabels=['sample 0', 'sample 1', 'sample 2'], square=True)\nplt.title('Samples closer to\\ncluster 0 or 1?')\nplt.xticks(rotation=45, ha='center')\nplt.yticks(rotation=0);\n",
"Answer:\nPoint 0 is consistent with the predictions. Almost every product is selling in high volumes, including Detergents.\nPoint 1 is also consistent because of the high sales of both Groceries and Detergents. \nPoint 2 consistent, sales in Fresh are the most important.\nConclusion\nQuestion 10\nCompanies often run A/B tests when making small changes to their products or services. If the wholesale distributor wanted to change its delivery service from 5 days a week to 3 days a week, how would you use the structure of the data to help them decide on a group of customers to test?\nHint: Would such a change in the delivery service affect all customers equally? How could the distributor identify who it affects the most?\nAnswer:\nI would choose some percentage of customers from both of the clusters (let's say 5%) and I would test both methods of deliveries on them, for example:\nPick 22 customers from the segment 0: 11 - 5 days/week, 11 - 3 days/week. Gather feedback, make a decision. Repeat the same for the segment 1. \nQuestion 11\nAssume the wholesale distributor wanted to predict a new feature for each customer based on the purchasing information available. How could the wholesale distributor use the structure of the clustering data you've found to assist a supervised learning analysis?\nHint: What other input feature could the supervised learner use besides the six product features to help make a prediction?\nAnswer:\nSupervised learner coud have used numbers of segments predicted by the K-means algorithm.\nVisualizing Underlying Distributions\nAt the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier on to the original dataset.\nRun the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.",
"# Display the clustering results based on 'Channel' data\nrs.channel_results(reduced_data, outliers, pca_samples)",
"Question 12\nHow well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?\nAnswer:\nI would say that the clustering algorithm and the number of clusters are extremely similar to the real distribution of the clients and did a good job of separating customers. I would probably argue, that the algorithm did a better job of separating clients and could've been used in the real life to adjust delivery model, because we clearly saw a division by products sold in each of the segment: Fresh products should be delivered more often, other products - doesn't really matter.\nThere is some overlap between the segments, but it's not as critical and happens all the time in the real life. Some HoReCa customers can have a similar model of sales to retailers (for example, they can have a store inside a hotel, so the sales could be comparable to those of a small retailer). \nI think that my definition of customer segments is consistent with the classification.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CalPolyPat/phys202-2015-work
|
days/day13/ODEs.ipynb
|
mit
|
[
"Ordinary Differential Equations\nLearning Objectives: Understand the numerical solution of ODEs and use scipy.integrate.odeint to solve and explore ODEs numerically.\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns",
"Overview of ODEs\nMany of the equations of Physics, Chemistry, Statistics, Data Science, etc. are Ordinary Differential Equation or ODEs. An ODE is a differential equation with the form:\n$$ \\frac{d\\vec{y}}{dt} = \\vec{f}\\left(\\vec{y}(t), t\\right) $$\nThe goal is usually to solve for the $N$ dimensional state vector $\\vec{y}(t)$ at each time $t$ given some initial condition:\n$$ \\vec{y}(0) = \\vec{y}_0 $$\nIn this case we are using $t$ as the independent variable, which is common when studying differential equations that depend on time. But any independent variable may be used, such as $x$. Solving an ODE numerically usually involves picking a set of $M$ discrete times at which we wish to know the solution:",
"tmax = 10.0 # The max time\nM = 100 # Use 100 times between [0,tmax]\nt = np.linspace(0,tmax,M)\nt",
"It is useful to define the step size $h$ as:\n$$ h = t_{i+1} - t_i $$",
"h = t[1]-t[0]\nprint(\"h =\", h)",
"The numerical solution of an ODE will then be an $M\\times N$ array $y_{ij}$ such that:\n$$ \\left[\\vec{y}(t_i)\\right]j = y{ij} $$\nIn other words, the rows of the array $y_{ij}$ are the state vectors $\\vec{y}(t_i)$ at times $t_i$. Here is an array of zeros having the right shape for the values of $N$ and $M$ we are using here:",
"N = 2 # 2d case\ny = np.zeros((M, N))\nprint(\"N =\", N)\nprint(\"M =\", M)\nprint(\"y.shape =\", y.shape)",
"A numerical ODE solver takes the ith row of this array y[i,:] and calculates the i+1th row y[i+1,:]. This process starts with the initial condition y[0,:] and continues through all of the times with steps of size $h$. One of the core ideas of numerical ODE solvers is that the error at each step is proportional to $\\mathcal{O}(h^n)$ where $n\\geq1$. Because $h<1$ you can reduce the error by making $h$ smaller (up to a point) or finding an ODE solver with a larger value of $n$.\nHere are some common numerical algorithms for solving ODEs:\n\nThe Euler method, which has an error of $\\mathcal{O}(h)$.\nThe midpoint method, which has an error of $\\mathcal{O}(h^2)$.\nRunga-Kutta methods, \n the most common (called RK4) of which has an error of $\\mathcal{O}(h^4)$. Because\n Runga-Kutta methods are fast and have a small errors, they are one of the most popular\n general purpose algorithm for solving ODEs.\n\nThere are many other specialized methods and tricks for solving ODEs (see this page). One of the most common tricks is to use an adaptive step size, which changes the value of $h$ at each step to make sure the error stays below a certain threshold.\nUsing scipy.integrate.odeint\nSciPy provides a general purpose ODE solver, scipy.integrate.odeint, that can handle a wide variety of linear and non-linear multidimensional ODEs.",
"from scipy.integrate import odeint\n\nodeint?",
"To show how odeint works, we will solve the Lotka–Volterra equations, an example of a predator-prey model:\n$$ \\frac{dx}{dt} = \\alpha x - \\beta x y $$\n$$ \\frac{dy}{dt} = \\delta x y - \\gamma y $$\nwhere:\n\n$x(t)$ is the number of prey.\n$y(t)$ is the number of predators.\n$\\alpha$ is the natural birth rate of the prey.\n$\\gamma$ is the natural death rate of the predators.\n$\\beta$ determines the death rate of prey when eaten by predators.\n$\\delta$ determines the growth rate of predators when they eat prey. \n\nIt is important to note here that $y(t)$ is different from the overall solutions vector $\\vec{y}(t)$. In fact, perhaps confusingly, in this case $\\vec{y}(t)=[x(t),y(t)]$.\nTo integrate this system of differential equations, we must define a function derivs that computes the right-hand-side of the differential equation, $\\vec{f}(\\vec{y}(t), t)$. The signature of this function is set by odeint itself:\npython\ndef derivs(yvec, t, *args):\n ...\n return dyvec\n\nyvec will be a 1d NumPy array with $N$ elements that are the values of the solution at\n the current time, $\\vec{y}(t)$.\nt will be the current time.\n*args will be other arguments, typically parameters in the differential equation.\n\nThe derivs function must return a 1d NumPy array with elements that are the values of the function $\\vec{f}(\\vec{y}(t), t)$.",
"def derivs(yvec, t, alpha, beta, delta, gamma):\n x = yvec[0]\n y = yvec[1]\n dx = alpha*x - beta*x*y\n dy = delta*x*y - gamma*y\n return np.array([dx, dy])",
"Here are the parameters and initial condition we will use to solve the differential equation. In this case, our prey variable $x$ is the number of rabbits and the predator variable $y$ is the number of foxes (foxes eat rabbits).",
"nfoxes = 10\nnrabbits = 20\nic = np.array([nrabbits, nfoxes])\nmaxt = 20.0\nalpha = 1.0\nbeta = 0.1\ndelta = 0.1\ngamma = 1.0",
"Here we call odeint with our derivs function, initial condition ic, array of times t and the extra parameters:",
"t = np.linspace(0, maxt, int(100*maxt))\nsoln = odeint(derivs, # function to compute the derivatives\n ic, # array of initial conditions\n t, # array of times\n args=(alpha, beta, delta, gamma), # extra args\n atol=1e-9, rtol=1e-8) # absolute and relative error tolerances",
"We can plot the componenets of the solution as a function of time as follows:",
"plt.plot(t, soln[:,0], label='rabbits')\nplt.plot(t, soln[:,1], label='foxes')\nplt.xlabel('t')\nplt.ylabel('count')\nplt.legend();",
"We can also make a parametric plot of $[x(t),y(t)]$:",
"plt.plot(soln[:,0], soln[:,1])\nplt.xlim(0, 25)\nplt.ylim(0, 25)\nplt.xlabel('rabbits')\nplt.ylabel('foxes');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Vvkmnn/books
|
AutomateTheBoringStuffWithPython/lesson35.ipynb
|
gpl-3.0
|
[
"Lesson 35:\nThe Raise and Assert Statements\nInvalid code raises exceptions:",
"42/0",
"Formerly, the try and except statements were used to handle exceptions, but you can also create your own exceptions with raise.",
"raise Exception('This is the error message.')",
"Box Priting Program",
"def boxPrint(symbol, width, height):\n\"\"\"\n\nA program to build boxes like this:\n\n***************\n* *\n* *\n***************\n\n\"\"\"\n # print top part\n print(symbol * width) \n # print the sides (excluding top and bottom)\n for i in range(height-2):\n # empty spaces in the middle except the last two spots\n print(symbol + (' ' * (width - 2) + symbol))\n # print the bottom\n print(symbol * width) \n \n# Working functions\nboxPrint('*', 15,5)\nboxPrint('O', 10,8)\n\n# Not working\nboxPrint('**', 5,16)\n",
"To handle situations with the wrong input, an exception can be raised:",
"def boxPrint(symbol, width, height):\n \"\"\"\n\n A program to build boxes like this:\n\n ***************\n * *\n * *\n ***************\n\n \"\"\"\n # Exception statements\n if len(symbol) != 1:\n raise Exception('\"symbol\" needs to be a string of length 1.')\n if (width < 2) or (height <2):\n raise Exception('\"width\" and \"height\" must be greater than or equal to 2.')\n \n print(symbol * width) \n for i in range(height-2):\n print(symbol + (' ' * (width - 2) + symbol))\n print(symbol * width) \n \n# Working functions\nboxPrint('*', 10,5)\nboxPrint('O', 12,8)\n\n# Exception statement 1\nboxPrint('**', 5,16)\n\n# Exception statement 2\nboxPrint('*', 1,1)",
"Exceptions have a particular structure, defined as the traceback or call stack.\nThe traceback shows exactly what function the error occured in, here boxPrint(**,5,16) and boxPrint(*,1,1). It also shows where the exception was define in the program, lines 14 and 16. \nThe traceback module contains some tools to deal with the traceback, including returning it as a string value.\nThis is useful for storing code in a log so it doesnt break the entire function on error.",
"import traceback\nimport os\n\n# Sample code to raise exception and store code in a log\ntry: \n # Raise this exception\n raise Exception('This is the error message.\\n')\nexcept: \n # Open/create a log in append mode\n errorFile = open(os.path.abspath('files/error_log.txt'), 'a')\n errorFile.write(traceback.format_exc())\n errorFile.close()\n print('The traceback info was written to error_log.txt')\n\n# Open the error file and read the log; will increase for every error\nerrorFile = open(os.path.abspath('files/error_log.txt'), 'r')\n\nerrorFile.read()",
"An assertion is a sanity check to make sure the code isn't doing something obviously wrong. It is another kind of exception.\nIf an assert statement evaluates to false, then an error is raised.",
"assert False, 'This is the error message.'",
"Stoplight Program",
"market_2nd = {'ns':'green', 'ew':'red'} #ns: north south, ew: east west\n\ndef switchLights(intersection):\n for key in intersection.keys():\n if intersection[key] == 'green':\n intersection[key] = 'yellow'\n elif intersection[key] == 'yellow':\n intersection[key] = 'red'\n elif intersection[key] == 'red':\n intersection[key] = 'green'\n \n\nprint(market_2nd)\nswitchLights(market_2nd) # Run function in data structure\nprint(market_2nd) ",
"This code is buggy, because the 'north-south' direction is green, while the 'east-west' direction is yellow. The cars crash into each other at intersections.\nWe need to create a sanity check with an assert statement to make sure illogical things throw errors.",
"market_2nd = {'ns':'green', 'ew':'red'} #ns: north south, ew: east west\n\ndef switchLights(intersection):\n for key in intersection.keys():\n if intersection[key] == 'green':\n intersection[key] = 'yellow'\n elif intersection[key] == 'yellow':\n intersection[key] = 'red'\n elif intersection[key] == 'red':\n intersection[key] = 'green'\n # if false, raise assert statement\n assert 'red' in intersection.values(), 'Neither light is red at ' + str(intersection) + '!'\n\nprint(market_2nd)\nswitchLights(market_2nd) # Run function in data structure\nprint(market_2nd) ",
"An assert statement asserts that a condition holds True. If the assert is False, then it raises an assert statement.\nThis improves the ability to debug, by holding the program to logical output.\nAssertions should be used for programmer errors (invalid outputs, wrong returns, etc.) They are meant to be recovered from.\nExceptions should be used for user errogs (invalid inputs, bugs, etc.) They are meant to stop invalid program use. \nAny error statements will help improve the debug flow, and help you find errors sooner instead of later.\nRecap\n\nYou can raise your own exceptions with raise Exception('Error message').\nYou can also use assertions with assert condition,'Error message'.\nAll Error messages return a traceback, outlining the line the error occured, and the actual error logic. \nAssertions are for detecing programmer errors that are not meant to be recovered from.\nUser errors should raise exceptions."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
darkrun95/Applied-Data-Science
|
Introduction to Data Science in Python/Week 3/.ipynb_checkpoints/Assignment3-checkpoint.ipynb
|
gpl-3.0
|
[
"You are currently looking at version 1.5 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nAssignment 3 - More Pandas\nThis assignment requires more individual learning then the last one did - you are encouraged to check out the pandas documentation to find functions or methods you might not have used yet, or ask questions on Stack Overflow and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff.\nQuestion 1 (20%)\nLoad the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of energy.\nKeep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unneccessary, so you should get rid of them, and you should change the column labels so that the columns are:\n['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']\nConvert Energy Supply to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with \"...\") make sure this is reflected as np.NaN values.\nRename the following list of countries (for use in later questions):\n\"Republic of Korea\": \"South Korea\",\n\"United States of America\": \"United States\",\n\"United Kingdom of Great Britain and Northern Ireland\": \"United Kingdom\",\n\"China, Hong Kong Special Administrative Region\": \"Hong Kong\"\nThere are also several countries with numbers and/or parenthesis in their name. Be sure to remove these, \ne.g. \n'Bolivia (Plurinational State of)' should be 'Bolivia', \n'Switzerland17' should be 'Switzerland'.\n<br>\nNext, load the GDP data from the file world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from World Bank. Call this DataFrame GDP. \nMake sure to skip the header, and rename the following list of countries:\n\"Korea, Rep.\": \"South Korea\", \n\"Iran, Islamic Rep.\": \"Iran\",\n\"Hong Kong SAR, China\": \"Hong Kong\"\n<br>\nFinally, load the Sciamgo Journal and Country Rank data for Energy Engineering and Power Technology from the file scimagojr-3.xlsx, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame ScimEn.\nJoin the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15). \nThe index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',\n 'Citations per document', 'H index', 'Energy Supply',\n 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008',\n '2009', '2010', '2011', '2012', '2013', '2014', '2015'].\nThis function should return a DataFrame with 20 columns and 15 entries.",
"import pandas as pd\nimport numpy as np\n\ndef answer_one():\n energy = pd.read_excel(\"Energy Indicators.xls\", skiprows=17, skip_footer=38, names=['#1', '#2', 'Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'], na_values=\"...\").drop(['#1', '#2'], axis=1)\n energy['Energy Supply'] = energy['Energy Supply'] * 1000000\n energy['Country'] = energy['Country'].map(lambda x: x.rstrip('0123456789'))\n energy['Country'] = energy['Country'].map(lambda x: x[:x.find(\"(\")-1] if x.find(\"(\") != -1 else x)\n columns={\"Republic of Korea\": \"South Korea\", \"United States of America\": \"United States\", \"United Kingdom of Great Britain and Northern Ireland\": \"United Kingdom\", \"China, Hong Kong Special Administrative Region\": \"Hong Kong\"}\n energy['Country'] = energy['Country'].map(lambda x: columns[x] if x in columns else x)\n energy = energy.set_index('Country')\n GDP = pd.read_csv(\"world_bank.csv\", header=None, skiprows=5)\n columns_to_keep = [num for num in range(GDP.columns.__len__()-10,GDP.columns.__len__())]\n columns_to_keep.insert(0, 0)\n GDP = GDP[columns_to_keep]\n column_names = [\"Country\",\"2006\",\"2007\",\"2008\",\"2009\",\"2010\",\"2011\",\"2012\",\"2013\",\"2014\",\"2015\"]\n GDP.columns = column_names\n columns = {\"Korea, Rep.\": \"South Korea\", \"Iran, Islamic Rep.\": \"Iran\", \"Hong Kong SAR, China\": \"Hong Kong\"}\n GDP['Country'] = GDP['Country'].map(lambda x: columns[x] if x in columns else x)\n GDP = GDP.rename(columns={'Country':'Country'})\n GDP = GDP.set_index('Country')\n ScimEn = pd.read_excel(\"scimagojr-3.xlsx\")\n ScimEn_top = ScimEn.head(15)\n ScimEn_top = ScimEn_top.set_index('Country')\n res = pd.merge(GDP, ScimEn_top, left_index=True, right_index=True, how='inner')\n df = pd.merge(energy, res, left_index=True, right_index=True, how='inner')\n columns_to_keep = ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations','Citations per document', 'H index', 'Energy Supply','Energy Supply per Capita', '% Renewable', '2006', '2007', '2008','2009', '2010', '2011', '2012', '2013', '2014', '2015']\n df = df[columns_to_keep]\n return df\nanswer_one()",
"Question 2 (6.6%)\nThe previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?\nThis function should return a single number.",
"%%HTML\n<svg width=\"800\" height=\"300\">\n <circle cx=\"150\" cy=\"180\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"blue\" />\n <circle cx=\"200\" cy=\"100\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"red\" />\n <circle cx=\"100\" cy=\"100\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"green\" />\n <line x1=\"150\" y1=\"125\" x2=\"300\" y2=\"150\" stroke=\"black\" stroke-width=\"2\" fill=\"black\" stroke-dasharray=\"5,3\"/>\n <text x=\"300\" y=\"165\" font-family=\"Verdana\" font-size=\"35\">Everything but this!</text>\n</svg>\n\ndef answer_two():\n energy = pd.read_excel(\"Energy Indicators.xls\", skiprows=17, skip_footer=38, names=['#1', '#2', 'Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'], na_values=\"...\").drop(['#1', '#2'], axis=1)\n energy['Energy Supply'] = energy['Energy Supply'] * 1000000\n energy['Country'] = energy['Country'].map(lambda x: x.rstrip('0123456789'))\n energy['Country'] = energy['Country'].map(lambda x: x[:x.find(\"(\")-1] if x.find(\"(\") != -1 else x)\n columns={\"Republic of Korea\": \"South Korea\", \"United States of America\": \"United States\", \"United Kingdom of Great Britain and Northern Ireland\": \"United Kingdom\", \"China, Hong Kong Special Administrative Region\": \"Hong Kong\"}\n energy['Country'] = energy['Country'].map(lambda x: columns[x] if x in columns else x)\n energy = energy.set_index('Country')\n GDP = pd.read_csv(\"world_bank.csv\", header=None, skiprows=5)\n columns_to_keep = [num for num in range(GDP.columns.__len__()-10,GDP.columns.__len__())]\n columns_to_keep.insert(0, 0)\n GDP = GDP[columns_to_keep]\n column_names = [\"Country\",\"2006\",\"2007\",\"2008\",\"2009\",\"2010\",\"2011\",\"2012\",\"2013\",\"2014\",\"2015\"]\n GDP.columns = column_names\n columns = {\"Korea, Rep.\": \"South Korea\", \"Iran, Islamic Rep.\": \"Iran\", \"Hong Kong SAR, China\": \"Hong Kong\"}\n GDP['Country'] = GDP['Country'].map(lambda x: columns[x] if x in columns else x)\n GDP = GDP.rename(columns={'Country':'Country'})\n GDP = GDP.set_index('Country')\n ScimEn = pd.read_excel(\"scimagojr-3.xlsx\")\n ScimEn = ScimEn.set_index('Country')\n res = pd.merge(GDP, ScimEn, left_index=True, right_index=True, how='inner')\n df = pd.merge(energy, res, left_index=True, right_index=True, how='inner')\n columns_to_keep = ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations','Citations per document', 'H index', 'Energy Supply','Energy Supply per Capita', '% Renewable', '2006', '2007', '2008','2009', '2010', '2011', '2012', '2013', '2014', '2015']\n df = df[columns_to_keep]\n \n outer_res = pd.merge(GDP, ScimEn, left_index=True, right_index=True, how='outer')\n outer_df = pd.merge(energy, outer_res, left_index=True, right_index=True, how='outer')\n return len(outer_df) - len(df)\nanswer_two()",
"<br>\nAnswer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka the DataFrame returned by answer_one())\nQuestion 3 (6.6%)\nWhat is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)\nThis function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.",
"def answer_three():\n Top15 = answer_one()\n series = Top15[['2006', '2007', '2008','2009', '2010', '2011', '2012', '2013', '2014', '2015']].mean(axis=1).sort_values(ascending=False)\n return series\nanswer_three()",
"Question 4 (6.6%)\nBy how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?\nThis function should return a single number.",
"def answer_four():\n Top15 = answer_one()\n series = Top15[['2006', '2007', '2008','2009', '2010', '2011', '2012', '2013', '2014', '2015']].mean(axis=1).sort_values(ascending=False)\n country = series.keys()[5]\n min_val = Top15.loc[country]['2006']\n max_val = Top15.loc[country]['2015']\n return max_val - min_val\nanswer_four()",
"Question 5 (6.6%)\nWhat is the mean Energy Supply per Capita?\nThis function should return a single number.",
"def answer_five():\n Top15 = answer_one()\n return Top15['Energy Supply per Capita'].mean()\nanswer_five()",
"Question 6 (6.6%)\nWhat country has the maximum % Renewable and what is the percentage?\nThis function should return a tuple with the name of the country and the percentage.",
"def answer_six():\n Top15 = answer_one()\n answer = (Top15['% Renewable'].sort_values()[-1:].keys()[0], Top15['% Renewable'].max())\n return answer\nanswer_six()",
"Question 7 (6.6%)\nCreate a new column that is the ratio of Self-Citations to Total Citations. \nWhat is the maximum value for this new column, and what country has the highest ratio?\nThis function should return a tuple with the name of the country and the ratio.",
"def answer_seven():\n Top15 = answer_one()\n Top15['cit.'] = pd.Series(Top15['Self-citations']/Top15['Citations'])\n Top15['cit.'] = Top15['cit.'].sort_values(ascending=False)\n answer = (Top15['cit.'].keys()[0], Top15['cit.'][0])\n return answer\nanswer_seven()",
"Question 8 (6.6%)\nCreate a column that estimates the population using Energy Supply and Energy Supply per capita. \nWhat is the third most populous country according to this estimate?\nThis function should return a single string value.",
"def answer_eight():\n Top15 = answer_one()\n col = pd.Series(Top15['Energy Supply']/Top15['Energy Supply per Capita'])\n col = col.sort_values(ascending=False)\n answer = col.keys()[2]\n return answer\nanswer_eight()",
"Question 9 (6.6%)\nCreate a column that estimates the number of citable documents per person. \nWhat is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation).\nThis function should return a single number.\n(Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)",
"def answer_nine():\n Top15 = answer_one()\n Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']\n Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']\n answer = Top15.corr().loc['Energy Supply per Capita']['Citable docs per Capita']\n return answer\nanswer_nine()\n\n#plot9() # Be sure to comment out plot9() before submitting the assignment!",
"Question 10 (6.6%)\nCreate a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.\nThis function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.",
"def answer_ten():\n Top15 = answer_one()\n median = Top15['% Renewable'].median()\n Top15['HighRenew'] = 1\n Top15['HighRenew'] = Top15.where(Top15['% Renewable'] >= median, 0)\n Top15['HighRenew'] = Top15['HighRenew'].map(lambda x:0 if x == 0 else 1)\n return Top15['HighRenew']\nanswer_ten()",
"Question 11 (6.6%)\nUse the following dictionary to group the Countries by Continent, then create a dateframe that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.\npython\nContinentDict = {'China':'Asia', \n 'United States':'North America', \n 'Japan':'Asia', \n 'United Kingdom':'Europe', \n 'Russian Federation':'Europe', \n 'Canada':'North America', \n 'Germany':'Europe', \n 'India':'Asia',\n 'France':'Europe', \n 'South Korea':'Asia', \n 'Italy':'Europe', \n 'Spain':'Europe', \n 'Iran':'Asia',\n 'Australia':'Australia', \n 'Brazil':'South America'}\nThis function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']",
"def answer_eleven():\n Top15 = answer_one()\n ContinentDict = {'China':'Asia', \n 'United States':'North America', \n 'Japan':'Asia', \n 'United Kingdom':'Europe', \n 'Russian Federation':'Europe', \n 'Canada':'North America', \n 'Germany':'Europe', \n 'India':'Asia',\n 'France':'Europe', \n 'South Korea':'Asia', \n 'Italy':'Europe', \n 'Spain':'Europe', \n 'Iran':'Asia',\n 'Australia':'Australia', \n 'Brazil':'South America'}\n Top15['Continent'] = None\n Top15 = Top15.reset_index()\n Top15['Continent'] = Top15['Country'].map(lambda x: ContinentDict[x] if x in ContinentDict else None)\n new_df = pd.DataFrame(Top15.groupby(by='Continent').size(), columns=['size'])\n Top15 = Top15.set_index('Country') \n Top15['pop'] = pd.Series(Top15['Energy Supply']/Top15['Energy Supply per Capita'])\n new_df['sum'] = Top15.groupby(by='Continent')['pop'].apply(np.sum).astype(float)\n new_df['mean'] = Top15.groupby(by='Continent')['pop'].apply(np.mean)\n new_df['std'] = Top15.groupby(by='Continent')['pop'].apply(np.std) \n return new_df\nanswer_eleven()",
"Question 12 (6.6%)\nCut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?\nThis function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.",
"def answer_twelve():\n Top15 = answer_one()\n ContinentDict = {'China':'Asia', \n 'United States':'North America', \n 'Japan':'Asia', \n 'United Kingdom':'Europe', \n 'Russian Federation':'Europe', \n 'Canada':'North America', \n 'Germany':'Europe', \n 'India':'Asia',\n 'France':'Europe', \n 'South Korea':'Asia', \n 'Italy':'Europe', \n 'Spain':'Europe', \n 'Iran':'Asia',\n 'Australia':'Australia', \n 'Brazil':'South America'}\n Top15 = Top15.reset_index()\n Top15['bins'] = pd.cut(Top15['% Renewable'], bins=5)\n Top15['Continent'] = Top15['Country'].map(lambda x: ContinentDict[x] if x in ContinentDict else None)\n Top15 = Top15.groupby(by=['Continent','bins']).size()\n return Top15\nanswer_twelve()",
"Question 13 (6.6%)\nConvert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.\ne.g. 317615384.61538464 -> 317,615,384.61538464\nThis function should return a Series PopEst whose index is the country name and whose values are the population estimate string.",
"import locale\ndef answer_thirteen():\n Top15 = answer_one()\n PopEst = pd.Series(Top15['Energy Supply']/Top15['Energy Supply per Capita'])\n PopEst = PopEst.map(lambda x: str(format(int(str(x).split(\".\")[0]),\",d\")) + \".\" + str(x).split(\".\")[1])\n return PopEst\nanswer_thirteen()",
"Optional\nUse the built in function plot_optional() to see an example visualization.",
"#plot_optional() # Be sure to comment out plot_optional() before submitting the assignment!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
theandygross/TCGA_differential_expression
|
Notebooks/microarray_validation_aggregation.ipynb
|
mit
|
[
"Aggregating GEO Datasets for TCGA Validation",
"import NotebookImport\nfrom DX_screen import *\n\nstore = pd.HDFStore(MICROARRAY_STORE)\nmicroarray = store['data']\ntissue = store['tissue']\n\ntissue.value_counts()\n\ndx = microarray.xs('01',1,1) - microarray.xs('11',1,1)\ntt = tissue[:,'01'].replace('COAD','COADREAD')\ngenes = ti(dx.notnull().sum(1) > 500)\ndx = dx.ix[genes]",
"Simple average",
"dx_simple = binomial_test_screen(microarray.ix[genes])\n\nfig, ax = subplots(figsize=(4,4))\ns1, s2 = match_series(dx_rna.frac, dx_simple.frac)\nplot_regression(s1, s2, density=True, rad=.02, ax=ax, rasterized=True,\n line_args={'lw':0})\nax.set_ylabel(\"GEO microarray\")\nax.set_xlabel(\"TCGA mRNASeq\")\nann = ax.get_children()[4]\nann.set_text(ann.get_text().split()[0])\nax.set_xticks([0, .5, 1])\nax.set_yticks([0, .5, 1])\nfig.tight_layout()",
"Group by tissue type first and then average. This is to limit the heavy skew of liver cancer samples in the GEO dataset.",
"pos = (dx>0).groupby(tt, axis=1).sum() \ncount = dx.groupby(tt, axis=1).count().replace(0, np.nan)\ncount = count[count.sum(1) > 500]\nfrac_df = 1.*pos / count\nfrac_microarray = frac_df.mean(1)\n\nfig, ax = subplots(figsize=(4,4))\ns1, s2 = match_series(dx_rna.frac, frac_microarray)\nplot_regression(s1, s2, density=True, rad=.02, ax=ax, rasterized=True,\n line_args={'lw':0})\nax.set_ylabel(\"GEO microarray\")\nax.set_xlabel(\"TCGA mRNASeq\")\nann = ax.get_children()[4]\nann.set_text(ann.get_text().split()[0])\nax.set_xticks([0, .5, 1])\nax.set_yticks([0, .5, 1])\nfig.tight_layout()",
"Grouping both TCGA and GEO based on tissue first. This is not the approach we use for the TCGA data in the rest of the analyses, so I'm just doing this to show that it does not effect performance of the replication.",
"dx2 = (rna_df.xs('01',1,1) - rna_df.xs('11',1,1)).dropna(1)\ncc = codes.ix[dx2.columns]\ncc = cc[cc.isin(ti(cc.value_counts() > 10))]\n\npos = (dx2>0).groupby(cc, axis=1).sum() \ncount = dx2.replace(0, np.nan).groupby(cc, axis=1).count()\ncount = count[count.sum(1) > 500]\nfrac_df = 1.*pos / count\nfrac_tcga= frac_df.mean(1)\n\nfig, ax = subplots(figsize=(4,4))\ns1, s2 = match_series(frac_tcga, frac_microarray)\nplot_regression(s1, s2, density=True, rad=.02, ax=ax, rasterized=True,\n line_args={'lw':0})\nax.set_ylabel(\"GEO microarray\")\nax.set_xlabel(\"TCGA mRNASeq\")\nann = ax.get_children()[4]\nann.set_text(ann.get_text().split()[0])\nax.set_xticks([0, .5, 1])\nax.set_yticks([0, .5, 1])\nfig.tight_layout()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
d-k-b/udacity-deep-learning
|
batch-norm/Batch_Normalization_Lesson.ipynb
|
mit
|
[
"Batch Normalization – Lesson\n\nWhat is it?\nWhat are it's benefits?\nHow do we add it to a network?\nLet's see it work!\nWhat are you hiding?\n\nWhat is Batch Normalization?<a id='theory'></a>\nBatch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called \"batch\" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.\nWhy might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.\nFor example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. \nLikewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.\nWhen you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).\nBeyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.\nBenefits of Batch Normalization<a id=\"benefits\"></a>\nBatch normalization optimizes network training. It has been shown to have several benefits:\n1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. \n2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. \n3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.\n4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. 
Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again.\n5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.\n6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. \n7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.\nBatch Normalization in TensorFlow<a id=\"implementation_1\"></a>\nThis section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. \nThe following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.",
"# Import necessary packages\nimport tensorflow as tf\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Import MNIST data so we have something for our experiments\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)",
"Neural network classes for testing\nThe following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.\nAbout the code:\n\nThis class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.\nIt's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.",
"class NeuralNet:\n def __init__(self, initial_weights, activation_fn, use_batch_norm):\n \"\"\"\n Initializes this object, creating a TensorFlow graph using the given parameters.\n \n :param initial_weights: list of NumPy arrays or Tensors\n Initial values for the weights for every layer in the network. We pass these in\n so we can create multiple networks with the same starting weights to eliminate\n training differences caused by random initialization differences.\n The number of items in the list defines the number of layers in the network,\n and the shapes of the items in the list define the number of nodes in each layer.\n e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would \n create a network with 784 inputs going into a hidden layer with 256 nodes,\n followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activate function on the output layer.\n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param use_batch_norm: bool\n Pass True to create a network that uses batch normalization; False otherwise\n Note: this network will not use batch normalization on layers that do not have an\n activation function.\n \"\"\"\n # Keep track of whether or not this network uses batch normalization.\n self.use_batch_norm = use_batch_norm\n self.name = \"With Batch Norm\" if use_batch_norm else \"Without Batch Norm\"\n\n # Batch normalization needs to do different calculations during training and inference,\n # so we use this placeholder to tell the graph which behavior to use.\n self.is_training = tf.placeholder(tf.bool, name=\"is_training\")\n\n # This list is just for keeping track of data we want to plot later.\n # It doesn't actually have anything to do with neural nets or batch normalization.\n self.training_accuracies = []\n\n # Create the network graph, but it will not actually have any real values until after you\n # call train or test\n self.build_network(initial_weights, activation_fn)\n \n def build_network(self, initial_weights, activation_fn):\n \"\"\"\n Build the graph. The graph still needs to be trained via the `train` method.\n \n :param initial_weights: list of NumPy arrays or Tensors\n See __init__ for description. \n :param activation_fn: Callable\n See __init__ for description. \n \"\"\"\n self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])\n layer_in = self.input_layer\n for weights in initial_weights[:-1]:\n layer_in = self.fully_connected(layer_in, weights, activation_fn) \n self.output_layer = self.fully_connected(layer_in, initial_weights[-1])\n \n def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. 
Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n # Since this class supports both options, only use batch normalization when\n # requested. However, do not use it on the final layer, which we identify\n # by its lack of an activation function.\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n # (See later in the notebook for more details.)\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n # Apply batch normalization to the linear combination of the inputs and weights\n batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\n\n # Now apply the activation function, *after* the normalization.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n\n def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):\n \"\"\"\n Trains the model on the MNIST training dataset.\n \n :param session: Session\n Used to run training graph operations.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param training_batches: int\n Number of batches to train.\n :param batches_per_sample: int\n How many batches to train before sampling the validation accuracy.\n :param save_model_as: string or None (default None)\n Name to use if you want to save the trained model.\n \"\"\"\n # This placeholder will store the target labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define loss and optimizer\n cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))\n \n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n if self.use_batch_norm:\n # If we don't include the update ops as dependencies on the train step, the \n # tf.layers.batch_normalization layers won't update their population statistics,\n # which will cause the model to fail at inference time\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n else:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n \n # Train for the appropriate number of batches. (tqdm is only for a nice timing display)\n for i in tqdm.tqdm(range(training_batches)):\n # We use batches of 60 just because the original paper did. 
You can use any size batch you like.\n batch_xs, batch_ys = mnist.train.next_batch(60)\n session.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\n \n # Periodically test accuracy against the 5k validation images and store it for plotting later.\n if i % batches_per_sample == 0:\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n self.training_accuracies.append(test_accuracy)\n\n # After training, report accuracy against test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))\n\n # If you want to use this model later for inference instead of having to retrain it,\n # just construct it with the same parameters and then pass this file to the 'test' function\n if save_model_as:\n tf.train.Saver().save(session, save_model_as)\n\n def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):\n \"\"\"\n Trains a trained model on the MNIST testing dataset.\n\n :param session: Session\n Used to run the testing graph operations.\n :param test_training_accuracy: bool (default False)\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n Note: in real life, *always* perform inference using the population mean and variance.\n This parameter exists just to support demonstrating what happens if you don't.\n :param include_individual_predictions: bool (default True)\n This function always performs an accuracy test against the entire test set. But if this parameter\n is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n and accuracy.\n :param restore_from: string or None (default None)\n Name of a saved model if you want to test with previously saved weights.\n \"\"\"\n # This placeholder will store the true labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n # If provided, restore from a previously saved model\n if restore_from:\n tf.train.Saver().restore(session, restore_from)\n\n # Test against all of the MNIST test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,\n labels: mnist.test.labels,\n self.is_training: test_training_accuracy})\n print('-'*75)\n print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))\n\n # If requested, perform tests predicting individual values rather than batches\n if include_individual_predictions:\n predictions = []\n correct = 0\n\n # Do 200 predictions, 1 at a time\n for i in range(200):\n # This is a normal prediction using an individual test case. 
However, notice\n # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.\n # Remember that will tell it whether it should use the batch mean & variance or\n # the population estimates that were calucated while training the model.\n pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],\n feed_dict={self.input_layer: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n self.is_training: test_training_accuracy})\n correct += corr\n\n predictions.append(pred[0])\n\n print(\"200 Predictions:\", predictions)\n print(\"Accuracy on 200 samples:\", correct/200)\n",
"There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.\nWe add batch normalization to layers inside the fully_connected function. Here are some important points about that code:\n1. Layers with batch normalization do not include a bias term.\n2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)\n3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.\n4. We add the normalization before calling the activation function.\nIn addition to that code, the training step is wrapped in the following with statement:\npython\nwith tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\nThis line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.\nFinally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nWe'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.\nBatch Normalization Demos<a id='demos'></a>\nThis section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. \nWe'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.\nCode to support testing\nThe following two functions support the demos we run in the notebook. \nThe first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see it that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.\nThe second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really imporant thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.",
"def plot_training_accuracies(*args, **kwargs):\n \"\"\"\n Displays a plot of the accuracies calculated during training to demonstrate\n how many iterations it took for the model(s) to converge.\n \n :param args: One or more NeuralNet objects\n You can supply any number of NeuralNet objects as unnamed arguments \n and this will display their training accuracies. Be sure to call `train` \n the NeuralNets before calling this function.\n :param kwargs: \n You can supply any named parameters here, but `batches_per_sample` is the only\n one we look for. It should match the `batches_per_sample` value you passed\n to the `train` function.\n \"\"\"\n fig, ax = plt.subplots()\n\n batches_per_sample = kwargs['batches_per_sample']\n \n for nn in args:\n ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),\n nn.training_accuracies, label=nn.name)\n ax.set_xlabel('Training steps')\n ax.set_ylabel('Accuracy')\n ax.set_title('Validation Accuracy During Training')\n ax.legend(loc=4)\n ax.set_ylim([0,1])\n plt.yticks(np.arange(0, 1.1, 0.1))\n plt.grid(True)\n plt.show()\n\ndef train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):\n \"\"\"\n Creates two networks, one with and one without batch normalization, then trains them\n with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.\n \n :param use_bad_weights: bool\n If True, initialize the weights of both networks to wildly inappropriate weights;\n if False, use reasonable starting weights.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activate function on the output layer.\n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param training_batches: (default 50000)\n Number of batches to train.\n :param batches_per_sample: (default 500)\n How many batches to train before sampling the validation accuracy.\n \"\"\"\n # Use identical starting weights for each network to eliminate differences in\n # weight initialization as a cause for differences seen in training performance\n #\n # Note: The networks will use these weights to define the number of and shapes of\n # its layers. The original batch normalization paper used 3 hidden layers\n # with 100 nodes in each, followed by a 10 node output layer. 
These values\n # build such a network, but feel free to experiment with different choices.\n # However, the input size should always be 784 and the final output should be 10.\n if use_bad_weights:\n # These weights should be horrible because they have such a large standard deviation\n weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,10), scale=5.0).astype(np.float32)\n ]\n else:\n # These weights should be good because they have such a small standard deviation\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n # Just to make sure the TensorFlow's default graph is empty before we start another\n # test, because we don't bother using different graphs or scoping and naming \n # elements carefully in this sample code.\n tf.reset_default_graph()\n\n # build two versions of same network, 1 without and 1 with batch normalization\n nn = NeuralNet(weights, activation_fn, False)\n bn = NeuralNet(weights, activation_fn, True)\n \n # train and test the two models\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n nn.train(sess, learning_rate, training_batches, batches_per_sample)\n bn.train(sess, learning_rate, training_batches, batches_per_sample)\n \n nn.test(sess)\n bn.test(sess)\n \n # Display a graph of how validation accuracies changed during training\n # so we can compare how the models trained and when they converged\n plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)\n",
"Comparisons between identical networks, with and without batch normalization\nThe next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.\nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.relu)",
"As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.\nIf you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)\nThe following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.",
"train_and_test(False, 0.01, tf.nn.relu, 2000, 50)",
"As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)\nIn the above example, you should also notice that the networks trained fewer batches per second then what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.\nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.sigmoid)",
"With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.relu)",
"Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.\nThe next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.",
"train_and_test(False, 1, tf.nn.relu)",
"In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.sigmoid)",
"In this example, we switched to a sigmoid activation function. It appears to hande the higher learning rate well, with both networks achieving high accuracy.\nThe cell below shows a similar pair of networks trained for only 2000 iterations.",
"train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)",
"As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.relu)",
"With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.sigmoid)",
"Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.\nHowever, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.",
"train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)",
"In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. \nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.relu)",
"As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. \nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.sigmoid)",
"Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id=\"successful_example_lr_1\"></a>",
"train_and_test(True, 1, tf.nn.relu)",
"The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.sigmoid)",
"Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time tro train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id=\"successful_example_lr_2\"></a>",
"train_and_test(True, 2, tf.nn.relu)",
"We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.sigmoid)",
"In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.\nFull Disclosure: Batch Normalization Doesn't Fix Everything\nBatch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.\nThis section includes two examples that show runs when batch normalization did not help at all.\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.relu)",
"When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.relu)",
"When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. \nNote: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.\nBatch Normalization: A Detailed Look<a id='implementation_2'></a>\nThe layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. \nIn order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.\nWe represent the average as $\\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ \n$$\n\\mu_B \\leftarrow \\frac{1}{m}\\sum_{i=1}^m x_i\n$$\nWe then need to calculate the variance, or mean squared deviation, represented as $\\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\\mu_B$), which gives us what's called the \"deviation\" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.\n$$\n\\sigma_{B}^{2} \\leftarrow \\frac{1}{m}\\sum_{i=1}^m (x_i - \\mu_B)^2\n$$\nOnce we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)\n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAbove, we said \"(almost) standard deviation\". That's because the real standard deviation for the batch is calculated by $\\sqrt{\\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. \nWhy increase the variance? 
Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. \nAt this point, we have a normalized value, represented as $\\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\\gamma$, and then add a beta value, $\\beta$. Both $\\gamma$ and $\\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. \n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.\nIn NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nThe next section shows you how to implement the math directly. \nBatch normalization without the tf.layers package\nOur implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.\nHowever, if you would like to implement batch normalization at a lower level, the following code shows you how.\nIt uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.\n1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.",
"def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n num_out_nodes = initial_weights.shape[-1]\n\n # Batch normalization adds additional trainable variables: \n # gamma (for scaling) and beta (for shifting).\n gamma = tf.Variable(tf.ones([num_out_nodes]))\n beta = tf.Variable(tf.zeros([num_out_nodes]))\n\n # These variables will store the mean and variance for this layer over the entire training set,\n # which we assume represents the general population distribution.\n # By setting `trainable=False`, we tell TensorFlow not to modify these variables during\n # back propagation. Instead, we will assign values to these variables ourselves. \n pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)\n\n # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.\n # This is the default value TensorFlow uses.\n epsilon = 1e-3\n\n def batch_norm_training():\n # Calculate the mean and variance for the data coming out of this layer's linear-combination step.\n # The [0] defines an array of axes to calculate over.\n batch_mean, batch_variance = tf.nn.moments(linear_output, [0])\n\n # Calculate a moving average of the training data's mean and variance while training.\n # These will be used during inference.\n # Decay should be some number less than 1. 
tf.layers.batch_normalization uses the parameter\n # \"momentum\" to accomplish this and defaults it to 0.99\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n\n # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n # This is necessary because the those two operations are not actually in the graph\n # connecting the linear_output and batch_normalization layers, \n # so TensorFlow would otherwise just skip them.\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def batch_norm_inference():\n # During inference, use the our estimated population mean and variance to normalize the layer\n return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\n\n # Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute \n # the operation returned from `batch_norm_training`; otherwise it will execute the graph\n # operation returned from `batch_norm_inference`.\n batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)\n \n # Pass the batch-normalized layer output through the activation function.\n # The literature states there may be cases where you want to perform the batch normalization *after*\n # the activation function, but it is difficult to find any uses of that in practice.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n",
"This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:\n\nIt explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.\nIt initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \\leftarrow \\gamma \\hat{x_i} + \\beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.\nUnlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.\nTensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. \nThe actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.\ntf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.\nWe use the tf.nn.moments function to calculate the batch mean and variance.\n\n2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization: \npython\nif self.use_batch_norm:\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nelse:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nOur new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:\npython\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:\npython\nreturn tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAnd replace this line in batch_norm_inference:\npython\nreturn tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAs you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. 
The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\\hat{x_i}$: \n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAnd the second line is a direct translation of the following equation:\n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. \nWhy the difference between training and inference?\nIn the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nAnd that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nIf you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?\nFirst, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at at time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).",
"def batch_norm_test(test_training_accuracy):\n \"\"\"\n :param test_training_accuracy: bool\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n \"\"\"\n\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n tf.reset_default_graph()\n\n # Train the model\n bn = NeuralNet(weights, tf.nn.relu, True)\n \n # First train the network\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n bn.train(sess, 0.01, 2000, 2000)\n\n bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)",
"In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.",
"batch_norm_test(True)",
"As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The \"batches\" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because its the value the network produces when it applies its learned weights to zeros at every layer. \nNote: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.\nTo overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it \"normalize\" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. \nSo in the following example, we pass False for test_training_accuracy, which tells the network that we it want to perform inference with the population statistics it calculates during training.",
"batch_norm_test(False)",
"As you can see, now that we're using the estimated population mean and variance, we get a 97% accuracy. That means it guessed correctly on 194 of the 200 samples – not too bad for something that trained in under 4 seconds. :)\nConsiderations for other network types\nThis notebook demonstrates batch normalization in a standard neural network with fully connected layers. You can also use batch normalization in other types of networks, but there are some special considerations.\nConvNets\nConvolution layers consist of multiple feature maps. (Remember, the depth of a convolutional layer refers to its number of feature maps.) And the weights for each feature map are shared across all the inputs that feed into the layer. Because of these differences, batch normalizaing convolutional layers requires batch/population mean and variance per feature map rather than per node in the layer.\nWhen using tf.layers.batch_normalization, be sure to pay attention to the order of your convolutionlal dimensions.\nSpecifically, you may want to set a different value for the axis parameter if your layers have their channels first instead of last. \nIn our low-level implementations, we used the following line to calculate the batch mean and variance:\npython\nbatch_mean, batch_variance = tf.nn.moments(linear_output, [0])\nIf we were dealing with a convolutional layer, we would calculate the mean and variance with a line like this instead:\npython\nbatch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)\nThe second parameter, [0,1,2], tells TensorFlow to calculate the batch mean and variance over each feature map. (The three axes are the batch, height, and width.) And setting keep_dims to False tells tf.nn.moments not to return values with the same size as the inputs. Specifically, it ensures we get one mean/variance pair per feature map.\nRNNs\nBatch normalization can work with recurrent neural networks, too, as shown in the 2016 paper Recurrent Batch Normalization. It's a bit more work to implement, but basically involves calculating the means and variances per time step instead of per layer. You can find an example where someone extended tf.nn.rnn_cell.RNNCell to include batch normalization in this GitHub repo."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
brian-rose/env-415-site
|
notes/EBM_notes.ipynb
|
mit
|
[
"Using climlab to solve the 1D diffusive Energy Balance Model\nAn interactive tutorial for ENV / ATM 415: Climate Laboratory\nSpring 2016\n\nThe model\nThe equation for the model is\n$$ C \\frac{\\partial T}{\\partial t} = ASR - OLR + \\frac{D}{\\cos\\phi} \\frac{\\partial}{\\partial \\phi} \\left( \\cos\\phi \\frac{\\partial T}{\\partial \\phi} \\right)$$\nWhere, following our standard notation:\n- $T$ is the surface temperature (in this case, the zonal average temperature)\n- $C$ is the heat capacity in J m$^{-2}$ K$^{-1}$\n- $ASR$ is the absorbed solar radiation\n- $OLR$ is the outgoing longwave radiation\n- $D$ is the thermal diffusivity in units of W m$^{-2}$ K$^{-1}$\n- $\\phi$ is latitude\nLongwave parameterization\nWe will use a very simple parameterization linking surface temperature to OLR:\n$$ OLR = A + B~T $$\nShortwave parameterization\n$$ ASR = (1-\\alpha) Q $$\nand we will use the annual average insolation (ignore seasons).\nThe albedo $\\alpha$ (at least for now) will be a simple prescribed function of latitude (larger at the poles than at the equator) that mimics the observed annual mean albedo.\nThe EBM is already coded up in climlab.",
"# We start with the usual import statements\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport climlab\n\n# create a new model with all default parameters (except the grid size)\n\nmymodel = climlab.EBM_annual(num_lat = 30)\n\n# What did we just do?\n\nprint mymodel",
"What is a climlab Process?\nclimlab is a flexible Python engine for process-oriented climate modeling.\nEvery model in climlab consists of some state variables, and processes that operate on those state variables.\nThe state variables are the climate variables that we will step forward in time. For our EBM, it is just the surface temperature:",
"mymodel.Ts",
"We have an array of temperatures in degrees Celsius. Let's see how big this array is:",
"mymodel.Ts.shape",
"Every state variable exists on a spatial grid. In this case, the grid is an array of latitude points:",
"mymodel.lat",
"We can, for example, plot the current temperature versus latitude:",
"plt.plot(mymodel.lat, mymodel.Ts)\nplt.xlabel('Latitude')\nplt.ylabel('Temperature (deg C)')",
"It is based on a very general concept of a model as a collection of individual, interacting processes. \nEvery model in climlab is a collection of individual, interacting physical processes. \nIn this case, the EBM_annual object has a list of 4 sub-processes. Each one is basically one of the terms in our energy budget equation above:\n\ndiffusion\nLongwave radiation ($A+BT$)\nInsolation (annual mean)\nAlbedo (in this case, a fixed spatial variation)\n\nTo see this explicitly:",
"# The dictionary of sub-processes:\nmymodel.subprocess",
"So what does it do?\nWell for one thing, we can step the model forward in time. This is basically just like we did manually with our zero-dimensional model",
"# Make a copy of the original temperature array\ninitial = mymodel.Ts.copy()\n# Take a single timestep forward!\nmymodel.step_forward()\n# Check out the difference\nprint mymodel.Ts - initial",
"Looks like the temperature got a bit colder near the equator and warmer near the poles",
"# How long is a single timestep?\nmymodel.timestep",
"This value is in seconds. It is actually 1/90th of a year (so, to step forward one year, we need 90 individual steps). This is a default value -- we could change it if we wanted to.\nActually we can see a bunch of important parameters for this model all together in a dictionary called param:",
"mymodel.param",
"Accessing the model diagnostics\nEach model can have lots of different diagnostic quantities. This means anything that can be calculated from the current model state -- in our case, from the temperature.\nFor the EBM, this includes (among other things) the OLR and the ASR.",
"# Each climlab model has a dictionary called diagnostics.\n# Let's look at the names of the fields in this dicionary:\nmymodel.diagnostics.keys()\n\n# We can access individual fields either through standard dictionary notation:\nmymodel.diagnostics['ASR']\n\n# Or using the more interactive 'dot' notation:\nmymodel.ASR\n\n# Let's use the diagnostics to make a plot of the current state of the radiation\n\nplt.plot(mymodel.lat, mymodel.ASR, label='ASR')\nplt.plot(mymodel.lat, mymodel.OLR, label='OLR')\nplt.xlabel('Latitude')\nplt.ylabel('W/m2')\nplt.legend()\nplt.grid()",
"This plot shows that $ASR > OLR$ (system is gaining extra energy) across the tropics, and $ASR < OLR$ (system is losing energy) near the poles.\nIt's interesting to ask how close this model is to global energy balance.\nFor that, we have to take an area-weighted global average.",
"climlab.global_mean(mymodel.net_radiation)\n\nclimlab.global_mean(mymodel.Ts)",
"Running the model out to equilibrium\nWe write a loop to integrate the model through many timesteps, and then check again for energy balance:",
"# Loop 90 times for 1 year of simulation\nfor n in range(90):\n mymodel.step_forward()\n\nprint 'Global mean temperature is %0.1f degrees C.' %climlab.global_mean(mymodel.Ts)\nprint 'Global mean energy imbalance is %0.1f W/m2.' %climlab.global_mean(mymodel.net_radiation)",
"Since there is still a significant imbalance, we are not yet at equilibrium. We should step forward again.\nThis time, let's use a convenient built-in function instead of writing our own loop:",
"mymodel.integrate_years(1.)\n\nprint 'Global mean temperature is %0.1f degrees C.' %climlab.global_mean(mymodel.Ts)\nprint 'Global mean energy imbalance is %0.1f W/m2.' %climlab.global_mean(mymodel.net_radiation)",
"We are now quite close to equilibrium. Let's make some plots",
"plt.plot(mymodel.lat, mymodel.Ts)\nplt.xlabel('Latitude')\nplt.ylabel('Temperature (deg C)')\nplt.grid()\n\nplt.plot(mymodel.lat_bounds, mymodel.heat_transport())\nplt.xlabel('Latitude')\nplt.ylabel('PW')\nplt.grid()",
"Let's make some nice plots of all the terms in the energy budget.\nWe'll define a reusable function to make the plots.",
"def ebm_plot(e, return_fig=False): \n templimits = -20,32\n radlimits = -340, 340\n htlimits = -6,6\n latlimits = -90,90\n lat_ticks = np.arange(-90,90,30)\n \n fig = plt.figure(figsize=(8,12))\n\n ax1 = fig.add_subplot(3,1,1)\n ax1.plot(e.lat, e.Ts)\n ax1.set_ylim(templimits)\n ax1.set_ylabel('Temperature (deg C)')\n \n ax2 = fig.add_subplot(3,1,2)\n ax2.plot(e.lat, e.ASR, 'k--', label='SW' )\n ax2.plot(e.lat, -e.OLR, 'r--', label='LW' )\n ax2.plot(e.lat, e.net_radiation, 'c-', label='net rad' )\n ax2.plot(e.lat, e.heat_transport_convergence(), 'g--', label='dyn' )\n ax2.plot(e.lat, e.net_radiation.squeeze() + e.heat_transport_convergence(), 'b-', label='total' )\n ax2.set_ylim(radlimits)\n ax2.set_ylabel('Energy budget (W m$^{-2}$)')\n ax2.legend()\n \n ax3 = fig.add_subplot(3,1,3)\n ax3.plot(e.lat_bounds, e.heat_transport() )\n ax3.set_ylim(htlimits)\n ax3.set_ylabel('Heat transport (PW)')\n \n for ax in [ax1, ax2, ax3]:\n ax.set_xlabel('Latitude')\n ax.set_xlim(latlimits)\n ax.set_xticks(lat_ticks)\n ax.grid()\n \n if return_fig:\n return fig\n\nebm_plot(mymodel)",
"What if we start from a very different initial temperature?",
"model2 = climlab.EBM_annual(num_lat=30)\nprint model2\n\n# The default initial temperature\nprint model2.Ts\n\n# Now let's change it to be 15 degrees everywhere\nmodel2.Ts[:] = 15.\n\nprint model2.Ts\n\nmodel2.compute_diagnostics()\n\nebm_plot(model2)",
"Why is the heat transport zero everywhere?\nNow let's look at an animation showing how all the terms in the energy budget evolve from this initial condition toward equilibrium.\nThe code below generates a series of snapshots, that we can stitch together into an animation with a graphics program.",
"# Code to generate individual frames for the animation\n\n# You need to specify a valid path for your computer, or else this won't work\n\n# Uncomment the code below to re-create the frames from the animation\n\n#nsteps = 90\n#for n in range(nsteps):\n# fig = ebm_plot(model2, return_fig=True)\n# \n# filepath = '/Users/br546577/Desktop/temp/ebm_animation_frames/'\n# filename = filepath + 'ebm_animation' + str(n).zfill(4) + '.png'\n# fig.savefig(filename)\n# plt.close()\n# \n# model2.step_forward()",
"We will look at the animation together in class."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ProjectQ-Framework/ProjectQ
|
examples/simulator_tutorial.ipynb
|
apache-2.0
|
[
"ProjectQ Simulator Tutorial\nThe aim of this tutorial is to introduce some of the basic and more advanced features of the ProjectQ simulator. Please note that all the simulator features can be found in our code documentation.\nContents:\n\nIntroduction\nInstallation\nBasics\nAdvanced features\nImproving the speed of the ProjectQ simulator\n\nIntroduction <a id=\"Introduction\"></a>\nOur simulator can be used to simulate any circuit model quantum algorithm. This requires storing the state, also called wavefunction, of all qubits which requires storing 2<sup>n</sup> complex values (each of size 16 bytes) for an n-qubit algorithm. This can get very expensive in terms of required RAM:\nNumber of qubits n| Required RAM to store wavefunction\n------------------- | ----------------------------------\n10 | 16 KByte\n20 | 16 MByte\n30 | 16 GByte\n31 | 32 GByte\n32 | 64 GByte\n40 | 16 TByte\n45 | 512 TByte (world's largest quantum computer simulation)\nThe number of qubits you can simulate with the ProjectQ simulator is only limited by the amount of RAM in your notebook or workstation. Applying quantum gates is expensive as we have to potentially update each individual value in the full wavefunction. Therefore, we have implemented a high-performance simulator which is significantly faster than other simulators (see our papers for a detailed comparison [1], [2]). The performance of such simulators is hardware-dependent and therefore we have decided to provide 4 different versions. A simulator implemented in C++ which uses multi-threading (OpenMP) and instrinsics, a C++ simulator which uses intrinsics, a C++ simulator, and a slower simulator which only uses Python. During the installation we try to install the fastest of these options given your hardware and available compiler.\nOur simulator is simultaneously also a quantum emulator. This concept was first introduced by us in [2]. A quantum emulator takes classical shortcuts when performing the simulation and hence can be orders of magnitude faster. E.g., for simulating Shor's algorithm, we only require to store the wavefunction of n+1 qubits while the algorithm on a real quantum computer would require 2n+O(1) qubits. Using these emulation capabilities, we can easily emulate Shor's algorithm for factoring, e.g., 4028033 = 2003 · 2011 on a notebook [1].\nInstallation <a id=\"Installation\"></a>\nPlease follow the installation instructions in the docs. The Python interface to all our different simulators (C++ or native Python) is identical. The different versions only differ in performance. If you have a C++ compiler installed on your system, the setup will try to install the faster C++ simulator. To figure out which simulator is installed just execute the following code after installing ProjectQ:",
"import projectq\neng = projectq.MainEngine() # This loads the simulator as it is the default backend",
"If you now see the following message\n(Note: This is the (slow) Python simulator.)\nyou are using the slow Python simulator and we would recommend to reinstall ProjectQ with the C++ simulator following the installation instructions. If this message doesn't show up, then you are using one of the fast C++ simulator versions. Which one exactly depends on your compiler and hardware.\nBasics <a id=\"Basics\"></a>\nTo write a quantum program, we need to create a compiler called MainEngine and provide a backend for which the compiler should compile the quantum program. In this tutorial we are focused on the simulator as a backend. We can create a compiler with a simulator backend by importing the simulator class and creating a simulator instance which is passed to the MainEngine:",
"from projectq.backends import Simulator\neng = projectq.MainEngine(backend=Simulator())",
"Note that the MainEngine contains the simulator as the default backend, hence one can equivalently just do:",
"eng = projectq.MainEngine()",
"In order to simulate the probabilistic measurement process, the simulator internally requires a random number generator. When creating a simulator instance, one can provide a random seed in order to create reproducible results:",
"eng = projectq.MainEngine(backend=Simulator(rnd_seed=10))",
"Let's write a simple test program which allocates a qubit in state |0>, performs a Hadamard gate which puts it into a superposition 1/sqrt(2) * ( |0> + |1>) and then measure the qubit:",
"import projectq\nfrom projectq.ops import Measure, H\n\neng = projectq.MainEngine()\nqubit = eng.allocate_qubit() # Allocate a qubit from the compiler (MainEngine object)\nH | qubit\nMeasure | qubit # Measures the qubit in the Z basis and collapses it into either |0> or |1>\neng.flush() # Tells the compiler (MainEninge) compile all above gates and send it to the simulator backend\nprint(int(qubit)) # The measurement result can be accessed by converting the qubit object to a bool or int\n",
"This program randomly outputs 0 or 1. Note that the measurement does not return for example a probability of measuring 0 or 1 (see below how this could be achieved). The reason for this is that a program written in our high-level syntax should be independent of the backend. In other words, the code can be executed either by our simulator or by exchanging only the MainEngine's backend by a real device which cannot return a probability but only one value. See the other examples on GitHub of how to execute code on the IBM quantum chip.\nImportant note on eng.flush()\nNote that we have used eng.flush() which tells the compiler to send all the instructions to the backend. In a simplified version, when the Python interpreter executes a gate (e.g. the above lines with H, or Measure), this gate is sent to the compiler (MainEngine), where it is cached. Compilation and optimization of cached gates happens irregularly, e.g., an optimizer in the compiler might wait for 5 gates before it starts the optimization process. Therefore, if we require the result of a quantum program, we need to call eng.flush() which tells the compiler to compile and send all cached gates to the backend for execution. eng.flush() is therefore necessary when accessing the measurement result by converting the qubit object to an int or bool. Or when using the advanced features below, where we want to access properties of the wavefunction at a specific point in the quantum algorithm.\nThe above statement is not entirely correct as for example there is no eng.flush() strictly required before accessing the measurement result. The reason is that the measurement gate in our compiler is not cached but is sent directly to the local simulator backend because it would allow for performance optimizations by shrinking the wavefunction. However, this is not a feature which your code should/can rely on and therefore you should always use eng.flush().\nImportant debugging feature of our simulator\nWhen a qubit goes out of scope, it gets deallocated automatically. If the backend is a simulator, it checks that the qubit was in a classical state and otherwise it raises an exception. This is an important debugging feature as in many quantum algorithms, ancilla qubits are used for intermediate results and then \"uncomputed\" back into state |0>. If such ancilla qubits now go out of scope, the simulator throws an error if they are not in either state |0> or |1>, as this is most likely a bug in the user code:",
"def test_program(eng):\n # Do something\n ancilla = eng.allocate_qubit()\n H | ancilla\n # Use the ancilla for something\n # Here the ancilla is not reset to a classical state but still in a superposition and will go out of scope\ntest_program(eng)",
"If you are using a qubit as an ancilla which should have been reset, this is a great feature which automatically checks the correctness of the uncomputation if the simulator is used as a backend. Should you wish to deallocate qubits which might be in a superposition, always apply a measurement gate in order to avoid this error message:",
"from projectq.ops import All\ndef test_program_2(eng):\n # Do something\n ancillas = eng.allocate_qureg(3) # allocates a quantum register with 3 qubits\n All(H) | ancillas # applies a Hadamard gate to each of the 3 ancilla qubits\n All(Measure) | ancillas # Measures all ancilla qubits such that we don't get\n #an error message when they are deallocated\n # Here the ancillas will go out of scope but because of the measurement, they are in a classical state\ntest_program_2(eng)",
"Advanced features <a id=\"Advanced_features\"></a>\nHere we will show some features which are unique to a simulator backend which has access to the full wavefunction. Note that using these features in your code will prohibit the code to run on a real quantum device. Therefore instead of, e.g., using the feature to ininitialize the wavefunction into a specific state, you could execute a small quantum circuit which prepares the desired state and hence the code can be run on both the simulator and on actual hardware. \nFor details on the simulator please see the code documentation.\nIn order to use the features of the simulator backend, we need to have a reference to access it. This can be achieved by creating a simulator instance and keeping a reference to it before passing it to the MainEngine:",
"sim = Simulator(rnd_seed=5) # We can now always access the simulator via the \"sim\" variable\neng = projectq.MainEngine(backend=sim)",
"Alternatively, one can access the simulator by accessing the backend of the compiler (MainEngine):",
"assert id(sim) == id(eng.backend)",
"Amplitude\nOne can access the complex amplitude of a specific state as follows:",
"from projectq.ops import H, Measure\n\neng = projectq.MainEngine()\nqubit = eng.allocate_qubit()\nqubit2 = eng.allocate_qubit()\neng.flush() # sends the allocation of the two qubits to the simulator (only needed to showcase the stuff below)\n\nH | qubit # Put this qubit into a superposition\n\n# qubit is a list with one qubit, qubit2 is another list with one qubit object\n# qubit + qubit2 creates a list containing both qubits. Equivalently, one could write [qubit[0], qubit2[0]]\n# get_amplitude requires that one provides a list/qureg of all qubits such that it can determine the order\n\namp_before = eng.backend.get_amplitude('00', qubit + qubit2)\n\n# Amplitude will be 1 as Hadamard gate is not yet executed on the simulator backend\n# We forgot the eng.flush()!\nprint(\"Amplitude saved in amp_before: {}\".format(amp_before))\n\neng.flush() # Makes sure that all the gates are sent to the backend and executed\n\namp_after = eng.backend.get_amplitude('00', qubit + qubit2)\n\n# Amplitude will be 1/sqrt(2) as Hadamard gate was executed on the simulator backend\nprint(\"Amplitude saved in amp_after: {}\".format(amp_after))\n\n# To avoid triggering the warning of deallocating qubits which are in a superposition\nMeasure | qubit\nMeasure | qubit2\n",
"NOTE: One always has to call eng.flush() before accessing the amplitude as otherwise some of the gates might not have been sent and executed on the simulator. Also don't forget in such an example to measure all the qubits in the end in order to avoid the above mentioned debugging error message of deallocating qubits which are not in a classical state.\nProbability\nOne can access the probability of measuring one or more qubits in a specified state by the following method:",
"import projectq\nfrom projectq.ops import H, Measure, CNOT, All\n\neng = projectq.MainEngine()\nqureg = eng.allocate_qureg(2)\nH | qureg[0]\neng.flush()\n\nprob11 = eng.backend.get_probability('11', qureg)\nprob10 = eng.backend.get_probability('10', qureg)\nprob01 = eng.backend.get_probability('01', qureg)\nprob00 = eng.backend.get_probability('00', qureg)\nprob_second_0 = eng.backend.get_probability('0', [qureg[1]])\n\nprint(\"Probability to measure 11: {}\".format(prob11))\nprint(\"Probability to measure 00: {}\".format(prob00))\nprint(\"Probability to measure 01: {}\".format(prob01))\nprint(\"Probability to measure 10: {}\".format(prob10))\nprint(\"Probability that second qubit is in state 0: {}\".format(prob_second_0))\n\nAll(Measure) | qureg",
"Expectation value\nWe can use the QubitOperator objects to build a Hamiltonian and access the expectation value of this Hamiltonian:",
"import projectq\nfrom projectq.ops import H, Measure, CNOT, All, QubitOperator, X, Y\n\neng = projectq.MainEngine()\nqureg = eng.allocate_qureg(3)\nX | qureg[0]\nH | qureg[1]\neng.flush()\nop0 = QubitOperator('Z0') # Z applied to qureg[0] tensor identity on qureg[1], qureg[2]\nexpectation = eng.backend.get_expectation_value(op0, qureg)\nprint(\"Expectation value = <Psi|Z0|Psi> = {}\".format(expectation))\n\nop_sum = QubitOperator('Z0 X1') - 0.5 * QubitOperator('X1')\nexpectation2 = eng.backend.get_expectation_value(op_sum, qureg)\nprint(\"Expectation value = <Psi|-0.5 X1 + 1.0 Z0 X1|Psi> = {}\".format(expectation2))\n\nAll(Measure) | qureg # To avoid error message of deallocating qubits in a superposition",
"Collapse Wavefunction (Post select measurement outcome)\nFor debugging purposes, one might want to check the algorithm for cases where an intermediate measurement outcome was, e.g., 1. Instead of running many simulations and post selecting only those with the desired intermediate measurement outcome, our simulator allows to force a specific measurement outcome. Note that this is only possible if the desired state has non-zero amplitude, otherwise the simulator will throw an error.",
"import projectq\nfrom projectq.ops import H, Measure\n\neng = projectq.MainEngine()\nqureg = eng.allocate_qureg(2)\n\n# Create an entangled state:\nH | qureg[0]\nCNOT | (qureg[0], qureg[1])\n# qureg is now in state 1/sqrt(2) * (|00> + |11>)\nMeasure | qureg[0]\nMeasure | qureg[1]\neng.flush() # required such that all above gates are executed before accessing the measurement result\n\nprint(\"First qubit measured in state: {} and second qubit in state: {}\".format(int(qureg[0]), int(qureg[1])))",
"Running the above circuit will either produce both qubits in state 0 or both qubits in state 1. Suppose I want to check the outcome if the first qubit was measured in state 0. This can be achieve by telling the simulator backend to collapse the wavefunction for the first qubit to be in state 0:",
"import projectq\nfrom projectq.ops import H, CNOT\n\neng = projectq.MainEngine()\nqureg = eng.allocate_qureg(2)\n\n# Create an entangled state:\nH | qureg[0]\nCNOT | (qureg[0], qureg[1])\n# qureg is now in state 1/sqrt(2) * (|00> + |11>)\neng.flush() # required such that all above gates are executed before collapsing the wavefunction\n\n# We want to check what happens to the second qubit if the first qubit (qureg[0]) is measured to be 0\neng.backend.collapse_wavefunction([qureg[0]], [0])\n\n# Check the probability that the second qubit is measured to be 0:\nprob_0 = eng.backend.get_probability('0', [qureg[1]])\n\nprint(\"After forcing a measurement outcome of the first qubit to be 0, \\n\"\n \"the second qubit is in state 0 with probability: {}\".format(prob_0))",
"Set wavefunction to a specific state\nIt is possible to set the state of the simulator to any arbitrary wavefunction. In a first step one needs to allocate all the required qubits (don't forget to call eng.flush()), and then one can use this method to set the wavefunction. Note that this only works if the provided wavefunction is the wavefunction of all allocated qubits. In addition, the wavefunction needs to be normalized. Here is an example:",
"import math\nimport projectq\nfrom projectq.ops import H\n\neng = projectq.MainEngine()\nqureg = eng.allocate_qureg(2)\neng.flush()\n\neng.backend.set_wavefunction([1/math.sqrt(2), 1/math.sqrt(2), 0, 0], qureg)\n\nH | qureg[0]\n# At this point both qubits are back in the state 00 and hence there will be no exception thrown\n# when the qureg is deallocated",
"Cheat / Accessing the wavefunction\nCheat is the original method to access and manipulate the full wavefunction. Calling cheat with the C++ simulator returns a copy of the full wavefunction plus the mapping of which qubit is at which bit position. The Python simulator returns a reference. If possible we are planning to change the C++ simulator to also return a reference which currently is not possible due to the python export. Please keep this difference in mind when writing code. If you require a copy, it is safest to make a copy of the objects returned by the cheat method.\nWhen qubits are allocated in the code, each of the qubits gets a unique integer id. This id is important in order to understand the wavefunction returned by cheat. The wavefunction is a numpy array of length 2<sup>n</sup>, where n is the number of qubits. Which bitlocation a specific qubit in the wavefunction has is not predefined (e.g. by the order of qubit allocation) but is rather chosen depending on the compiler optimizations and the simulator. Therefore, cheat also returns a dictionary containing the mapping of qubit id to bit location in the wavefunction. Here is a small example:",
"import copy\nimport projectq\nfrom projectq.ops import Rx, Measure, All\n\neng = projectq.MainEngine()\nqubit1 = eng.allocate_qubit()\nqubit2 = eng.allocate_qubit()\nRx(0.2) | qubit1\nRx(0.4) | qubit2\neng.flush() # In order to have all the above gates sent to the simulator and executed\n\n# We save a copy of the wavefunction at this point in the algorithm. In order to make sure we get a copy\n# also if the Python simulator is used, one should make a deepcopy:\nmapping, wavefunction = copy.deepcopy(eng.backend.cheat())\n\nprint(\"The full wavefunction is: {}\".format(wavefunction))\n# Note: qubit1 is a qureg of length 1, i.e. a list containing one qubit objects, therefore the\n# unique qubit id can be accessed via qubit1[0].id\nprint(\"qubit1 has bit-location {}\".format(mapping[qubit1[0].id]))\nprint(\"qubit2 has bit-location {}\".format(mapping[qubit2[0].id]))\n\n# Suppose we want to know the amplitude of the qubit1 in state 0 and qubit2 in state 1:\nstate = 0 + (1 << mapping[qubit2[0].id])\nprint(\"Amplitude of state qubit1 in state 0 and qubit2 in state 1: {}\".format(wavefunction[state]))\n# If one only wants to access one (or a few) amplitudes, get_amplitude provides an easier interface:\namplitude = eng.backend.get_amplitude('01', qubit1 + qubit2)\nprint(\"Accessing same amplitude but using get_amplitude instead: {}\".format(amplitude))\n\nAll(Measure) | qubit1 + qubit2 # In order to not deallocate a qubit in a superposition state",
"Improving the speed of the ProjectQ simulator <a id=\"improve_speed\"></a>\n\nPlease check the installation instructions in order to install the fastest C++ simulator which uses instrinsics and multi-threading.\nFor simulations with very few qubits, the speed is not limited by the simulator but rather by the compiler. If the compiler engines are not needed, e.g., if only native gates of the simulator are executed, then one can remove the compiler engines and obtain a speed-up:",
"import projectq\nfrom projectq.backends import Simulator\neng = projectq.MainEngine(backend=Simulator(), engine_list=[]) # Removes the default compiler engines",
"As noted in the code documentation, one can play around with the number of threads in order to increase the simulation speed. Execute the following statements in the terminal before running ProjectQ:\nexport OMP_NUM_THREADS=2\nexport OMP_PROC_BIND=spread\nA good setting is to set the number of threads to the number of physical cores on your system.\n\n\nThe simulator has a feature called \"gate fusion\" in which case it combines smaller gates into larger ones in order to increase the speed of the simulation. If the simulator is faster with or without gate fusion depends on many parameters. By default it is currently turned off but one can turn it on and compare by:",
"import projectq\nfrom projectq.backends import Simulator\neng = projectq.MainEngine(backend=Simulator(gate_fusion=True)) # Enables gate fusion",
"We'd like to refer interested readers to our paper on the world's largest and fastest quantum computer simulation for more details on how to optimize the speed of a quantum simulator."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google-research/text-to-text-transfer-transformer
|
notebooks/t5-deploy.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google-research/text-to-text-transfer-transfrormer/blob/main/notebooks/t5-deploy.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 The T5 Authors\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2020 The T5 Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"T5 SavedModel Export and Inference\nThis notebook guides you through the process of exporting a T5 SavedModel for inference. It uses the fine-tuned checkpoints in the T5 Closed Book QA repository for the Natural Questions task as an example, but the same process will work for any model trained with the t5 library.\nFor more general usage of the t5 library, please see the main github repo and fine-tuning colab notebook.\nInstall T5 Library",
"!pip install -q t5\n!git clone https://github.com/google-research/google-research.git --depth=1\n# Add closed-book qa library to Python path.\n%env PYTHONPATH=\"/content/google-research/:/content/google-research/t5_closed_book_qa:${PYTHONPATH}\"",
"Export SavedModel to local storage\nNOTE: This will take a while for XL and XXL.",
"MODEL = \"small_ssm_nq\" #@param[\"small_ssm_nq\", \"t5.1.1.xl_ssm_nq\", \"t5.1.1.xxl_ssm_nq\"]\n\nimport os\n\nsaved_model_dir = f\"/content/{MODEL}\"\n\n!t5_mesh_transformer \\\n --module_import=\"t5_cbqa.tasks\" \\\n --model_dir=\"gs://t5-data/pretrained_models/cbqa/{MODEL}\" \\\n --use_model_api \\\n --mode=\"export_predict\" \\\n --export_dir=\"{saved_model_dir}\"\n\nsaved_model_path = os.path.join(saved_model_dir, max(os.listdir(saved_model_dir)))",
"Load SavedModel and create helper functions for inference\nNOTE: This will take a while for XL and XXL.",
"import tensorflow as tf\nimport tensorflow_text # Required to run exported model.\n\nmodel = tf.saved_model.load(saved_model_path, [\"serve\"])\n\ndef predict_fn(x):\n return model.signatures['serving_default'](tf.constant(x))['outputs'].numpy()\n\ndef answer(question):\n return predict_fn([question])[0].decode('utf-8')",
"Ask some questions\nWe must prefix each question with the nq question: prompt since T5 is a multitask model.",
"for question in [\"nq question: where is google's headquarters\",\n \"nq question: what is the most populous country in the world\",\n \"nq question: name a member of the beatles\",\n \"nq question: how many teeth do humans have\"]:\n print(answer(question))",
"Package in Docker image\n```bash\nMODEL_NAME=model-name\nSAVED_MODEL_PATH=/path/to/export/dir\nDownload the TensorFlow Serving Docker image and repo:\ndocker pull tensorflow/serving:nightly\nFirst, run a serving image as a daemon:\ndocker run -d --name serving_base tensorflow/serving:nightly\nNext, copy your SavedModel to the container's model folder:\ndocker cp $SAVED_MODEL_PATH serving_base:/models/$MODEL_NAME\nNow, commit the container that's serving your model:\ndocker commit --change \"ENV MODEL_NAME ${MODEL_NAME}\" serving_base $MODEL_NAME\nFinally, save the image to a tar file:\ndocker save $MODEL_NAME -o $MODEL_NAME.tar\nYou can now stop serving_base:\ndocker kill serving_base\n```\n```bash\ndocker run -t --rm -p 8501:8501 --name $MODEL_NAME-server $MODEL_NAME &\ncurl -d '{\"inputs\": [\"nq question: what is the most populous country?\"]}' \\\n -X POST http://localhost:8501/v1/models/$MODEL_NAME:predict\ndocker stop $MODEL_NAME-server\n```"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
UWSEDS/LectureNotes
|
week_1/procedural_python/data_basics.ipynb
|
bsd-2-clause
|
[
"Data Basics\nOverview\n\nData basics\nTheory of types\nBasic types\nComposite types: Tuples, lists and dictionaries\nFunctions\nrange() function\nread and write files\nFlow of control\nIf\nfor\n\nData Basics\nCore concepts\n\nAn object is an element used in and possibly maniuplated by software.\nObjects have two kinds of attributes\nProperties (\"values\")\nMethods (\"functions\")\nTo access the attributes of a python object, use the dot (\".\") operator.\nA type (or class) specifies a set of attributes supported by an object.\n\nBasic types\n\nint, float, string, bool\nrelating types",
"# Let's do something with strings\na_string = \"Happy birthday\"\ntype(a_string)\n\na_string.count(\"p\")\n\na = 3\n\n# We can't count strings in an integer\na.count(\"p\")\n\nx = True\n\ntype(x)\n\nx = a > 4\n\nx\n\nx = \"3\"\nx + 2",
"Advanced types\nTuples\nLet's begin by creating a tuple called my_tuple that contains three elements.",
"my_tuple = ('I', 'like', 'cake')\nmy_tuple\n\nmy_tuple[1]\n\nmy_tuple[1:3]",
"Tuples are simple containers for data. They are ordered, meaining the order the elements are in when the tuple is created are preserved. We can get values from our tuple by using array indexing, similar to what we were doing with pandas.",
"my_tuple[0]",
"Recall that Python indexes start at 0. So the first element in a tuple is 0 and the last is array length - 1. You can also address from the end to the front by using negative (-) indexes, e.g.",
"my_tuple[-1]",
"You can also access a range of elements, e.g. the first two, the first three, by using the : to expand a range. This is called slicing.",
"my_tuple[0:2]\n\nmy_tuple[0:3]",
"What do you notice about how the upper bound is referenced?\nWithout either end, the : expands to the entire list.",
"my_tuple[1:]\n\nmy_tuple[:-1]\n\nmy_tuple[:]",
"Tuples have a key feature that distinguishes them from other types of object containers in Python. They are immutable. This means that once the values are set, they cannot change.",
"my_tuple[2]",
"So what happens if I decide that I really prefer pie over cake?",
"my_tuple[2] = 'pie'",
"Facts about tuples:\n* You can't add elements to a tuple. Tuples have no append or extend method.\n* You can't remove elements from a tuple. Tuples have no remove or pop method.\n* You can also use the in operator to check if an element exists in the tuple.\nSo then, what are the use cases of tuples?\n* Speed\n* Write-protects data that other pieces of code should not alter\n<hr>\n\nLists\nLet's begin by creating a list called my_list that contains three elements.",
"my_list = ['I', 'like', 'cake']\na_tuple = ('I', 'like', 'cake')\n\nmy_list[2]",
"At first glance, tuples and lists look pretty similar. Notice the lists use '[' and ']' instead of '(' and ')'. But indexing and refering to the first entry as 0 and the last as -1 still works the same.",
"my_list[0]\n\nmy_list[-1]\n\nmy_list[0:3]",
"Lists, however, unlike tuples, are mutable.",
"my_list[2] = 'pie'\nmy_list\n\n# Some methods for lists\nmy_list.index(\"cake\")\n\nmy_list\n\nmy_list.append(\"very\")\n\nmy_list\n\nmy_list.sort()\n\nmy_list",
"There are other useful methods on lists, including:\n| methods | description |\n|---|---|\n| list.append(obj) | Appends object obj to list |\n| list.count(obj)| Returns count of how many times obj occurs in list |\n| list.extend(seq) | Appends the contents of seq to list |\n| list.index(obj) | Returns the lowest index in list that obj appears |\n| list.insert(index, obj) | Inserts object obj into list at offset index |\n| list.pop(obj=list[-1]) | Removes and returns last object or obj from list |\n| list.remove(obj) | Removes object obj from list |\n| list.reverse() | Reverses objects of list in place |\n| list.sort([func]) | Sort objects of list, use compare func, if given |\nTry some of them now.\n```\nmy_list.count('I')\nmy_list\nmy_list.append('I')\nmy_list\nmy_list.count('I')\nmy_list\nmy_list.index(42)\nmy_list.index('puppies')\nmy_list\nmy_list.insert(my_list.index('puppies'), 'furry')\nmy_list\n```\nDictionaries\n<hr>\n\nDictionaries are similar to tuples and lists in that they hold a collection of objects. Dictionaries, however, allow an additional indexing mode: keys. Think of a real dictionary where the elements in it are the definitions of the words and the keys to retrieve the entries are the words themselves.\n| word | definition |\n|------|------------|\n| tuple | An immutable collection of ordered objects |\n| list | A mutable collection of ordered objects |\n| dictionary | A mutable collection of named objects |\nLet's create this data structure now. Dictionaries, like tuples and elements use a unique referencing method, '{' and its evil twin '}'.",
"my_dict = {\"birthday\": \"Jan 1\", \"present\": \"vacine\"}\n\nmy_dict.keys()\n\nmy_dict.values()\n\nmy_dict = { 'tuple' : 'An immutable collection of ordered objects',\n 'list' : 'A mutable collection of ordered objects',\n 'dictionary' : 'A mutable collection of objects' }\nmy_dict",
"We access items in the dictionary by name, e.g.",
"my_dict['dictionary']",
"Since the dictionary is mutable, you can change the entries.",
"my_dict['dictionary'] = 'A mutable collection of named objects'\nmy_dict",
"Notice that ordering is not preserved!\nAnd we can add new items to the list.",
"my_dict['cabbage'] = 'Green leafy plant in the Brassica family'\nmy_dict",
"To delete an entry, we can't just set it to None",
"my_dict['cabbage'] = None\nmy_dict",
"To delete it propery, we need to pop that specific entry.",
"my_dict.pop('cabbage', None)\nmy_dict",
"You can use other objects as names, but that is a topic for another time. You can mix and match key types, e.g.",
"my_new_dict = {}\nmy_new_dict[1] = 'One'\nmy_new_dict['42'] = 42\nmy_new_dict",
"You can get a list of keys in the dictionary by using the keys method.",
"my_dict.keys()",
"Similarly the contents of the dictionary with the items method.",
"my_dict.items()",
"We can use the keys list for fun stuff, e.g. with the in operator.",
"'dictionary' in my_dict.keys()",
"This is a synonym for in my_dict",
"'dictionary' in my_dict",
"Notice, it doesn't work for elements.",
"'A mutable collection of ordered objects' in my_dict",
"Other dictionary methods:\n| methods | description |\n|---|---|\n| dict.clear() | Removes all elements from dict |\n| dict.get(key, default=None) | For key key, returns value or default if key doesn't exist in dict | \n| dict.items() | Returns a list of dicts (key, value) tuple pairs | \n| dict.keys() | Returns a list of dictionary keys |\n| dict.setdefault(key, default=None) | Similar to get, but set the value of key if it doesn't exist in dict |\n| dict.update(dict2) | Add the key / value pairs in dict2 to dict |\n| dict.values | Returns a list of dictionary values|",
"help( dict)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bhlmn/ds
|
docs/ml/prediction/linear_regression.ipynb
|
mit
|
[
"Linear Regression\nThere are multiple ways to construct a linear regression model using python. Here I use scikit-learn to investigate linear relationships in the housing dataset. Check out that link to see some exploratory data analysis (EDA) on this dataset.\nLet's begin by importing the data.",
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# load the housing dataset\ndata_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'\ncolnames = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', \n 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']\ndf = pd.read_csv(data_url, header=None, sep='\\s+', names=colnames)",
"Now we can build the linear regression model. As I mentioned in the EDA, there appears to be a relationship between the number of rooms a house has (RM) and the value of that home (MEDV). Let's explore the linear relationship between the two.",
"# use scikit-learn to build the linear model\nfrom sklearn.linear_model import LinearRegression\n\n# we can change x or y to different variables in the dataset to explore those relationships\nx = df[['RM']].values\ny = df['MEDV'].values\nlm = LinearRegression()\n\n# fit the model\nlm.fit(x, y)",
"In the next cell I define a function that builds a nice visual to overlay the regression model on top of the data. I'm in the process of building a package that will include this function.",
"def lm_plot(x, y, model, xlab=None, ylab=None, main=None, size=(7,5), res=100, show=True, col_xy='grey', \n col_lm='red', font='Arial', show_fit=True, fitloc='top_left'):\n fig, ax = plt.subplots(figsize=size, dpi=res, facecolor='white', edgecolor='k')\n ax.scatter(x, y, c=col_xy, edgecolors='#262626', alpha=0.5)\n ax.plot(x, model.predict(x), color=col_lm, alpha=0.75)\n \n # if we have axis labels\n if xlab != None:\n ax.set_xlabel(xlab, fontname=font)\n if ylab != None:\n ax.set_ylabel(ylab, fontname=font)\n if main != None:\n ax.set_title(main, fontname=font)\n \n # set font for tick marks\n for tick in ax.get_xticklabels():\n tick.set_fontname(font)\n for tick in ax.get_yticklabels():\n tick.set_fontname(font)\n \n # remove top and right axes\n ax.spines['right'].set_visible(False)\n ax.spines['top'].set_visible(False)\n # Only show ticks on the left and bottom spines\n ax.yaxis.set_ticks_position('left')\n ax.xaxis.set_ticks_position('bottom')\n \n # Show equation and R squared\n if show_fit:\n text_x, text_y = [0.15, 0.9]\n if fitloc == 'top_right':\n text_x, text_y = [0.85, 0.9]\n if fitloc == 'bottom_left':\n text_x, text_y = [0.15, 0.15]\n if fitloc == 'bottom_right':\n text_x, text_y = [0.85, 0.15]\n eq_text = 'y = %.2f + %.2fx' % (model.intercept_, model.coef_[0])\n if model.coef_[0] < 0:\n eq_text = 'y = %.2f - %.2fx' % (model.intercept_, abs(model.coef_[0]))\n plt.text(text_x, text_y, eq_text, ha='center', va='center', transform=ax.transAxes, fontname=font)\n from sklearn.metrics import r2_score\n r2_text = 'R^2 = %.3f' % r2_score(y, lm.predict(x))\n plt.text(text_x, text_y - 0.05, r2_text, ha='center', va='center', transform=ax.transAxes, fontname=font)\n\n if show:\n plt.show()\n return None",
"Now let's run this function, inputting our data and the regression model to create the visual.",
"lm_plot(x, y, lm, xlab='Average number of rooms', ylab='Median home price (in $1000s)')",
"There is a very clear positive trend here! Seems that, on average, each additional room increases the value of the home by $9k."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ThinkStats2
|
solutions/chap02soln.ipynb
|
gpl-3.0
|
[
"Chapter 2\nExamples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"import numpy as np\n\nfrom os.path import basename, exists\n\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n\n local, _ = urlretrieve(url, filename)\n print(\"Downloaded \" + local)\n\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py\")",
"Given a list of values, there are several ways to count the frequency of each value.",
"t = [1, 2, 2, 3, 5]",
"You can use a Python dictionary:",
"hist = {}\nfor x in t:\n hist[x] = hist.get(x, 0) + 1\n \nhist",
"You can use a Counter (which is a dictionary with additional methods):",
"from collections import Counter\ncounter = Counter(t)\ncounter",
"Or you can use the Hist object provided by thinkstats2:",
"import thinkstats2\nhist = thinkstats2.Hist([1, 2, 2, 3, 5])\nhist",
"Hist provides Freq, which looks up the frequency of a value.",
"hist.Freq(2)",
"You can also use the bracket operator, which does the same thing.",
"hist[2]",
"If the value does not appear, it has frequency 0.",
"hist[4]",
"The Values method returns the values:",
"hist.Values()",
"So you can iterate the values and their frequencies like this:",
"for val in sorted(hist.Values()):\n print(val, hist[val])",
"Or you can use the Items method:",
"for val, freq in hist.Items():\n print(val, freq)",
"thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.\nFor example Hist plots the values and their frequencies as a bar graph.\nConfig takes parameters that label the x and y axes, among other things.",
"import thinkplot\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='value', ylabel='frequency')",
"As an example, I'll replicate some of the figures from the book.\nFirst, I'll load the data from the pregnancy file and select the records for live births.",
"download(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py\")\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct\")\ndownload(\n \"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz\"\n)\n\nimport nsfg\n\npreg = nsfg.ReadFemPreg()\nlive = preg[preg.outcome == 1]",
"Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. The label attribute appears in the legend when you plot the Hist.",
"hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')",
"Before plotting the ages, I'll apply floor to round down:",
"ages = np.floor(live.agepreg)\n\nhist = thinkstats2.Hist(ages, label='agepreg')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='years', ylabel='Count')",
"As an exercise, plot the histogram of pregnancy lengths (column prglngth).",
"# Solution\n\nhist = thinkstats2.Hist(live.prglngth, label='prglngth')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='weeks', ylabel='Count')",
"Hist provides smallest, which select the lowest values and their frequencies.",
"for weeks, freq in hist.Smallest(10):\n print(weeks, freq)",
"Use Largest to display the longest pregnancy lengths.",
"# Solution\n\nfor weeks, freq in hist.Largest(10):\n print(weeks, freq)",
"From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.",
"firsts = live[live.birthord == 1]\nothers = live[live.birthord != 1]\n\nfirst_hist = thinkstats2.Hist(firsts.prglngth, label='first')\nother_hist = thinkstats2.Hist(others.prglngth, label='other')",
"We can use width and align to plot two histograms side-by-side.",
"width = 0.45\nthinkplot.PrePlot(2)\nthinkplot.Hist(first_hist, align='right', width=width)\nthinkplot.Hist(other_hist, align='left', width=width)\nthinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])",
"Series provides methods to compute summary statistics:",
"mean = live.prglngth.mean()\nvar = live.prglngth.var()\nstd = live.prglngth.std()",
"Here are the mean and standard deviation:",
"mean, std",
"As an exercise, confirm that std is the square root of var:",
"# Solution\n\nnp.sqrt(var) == std",
"Here's are the mean pregnancy lengths for first babies and others:",
"firsts.prglngth.mean(), others.prglngth.mean()",
"And here's the difference (in weeks):",
"firsts.prglngth.mean() - others.prglngth.mean()",
"This functon computes the Cohen effect size, which is the difference in means expressed in number of standard deviations:",
"def CohenEffectSize(group1, group2):\n \"\"\"Computes Cohen's effect size for two groups.\n \n group1: Series or DataFrame\n group2: Series or DataFrame\n \n returns: float if the arguments are Series;\n Series if the arguments are DataFrames\n \"\"\"\n diff = group1.mean() - group2.mean()\n\n var1 = group1.var()\n var2 = group2.var()\n n1, n2 = len(group1), len(group2)\n\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / np.sqrt(pooled_var)\n return d",
"Compute the Cohen effect size for the difference in pregnancy length for first babies and others.",
"# Solution\n\nCohenEffectSize(firsts.prglngth, others.prglngth)",
"Exercises\nUsing the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. \nCompute Cohen’s effect size to quantify the difference between the groups. How does it compare to the difference in pregnancy length?",
"# Solution\n\nfirsts.totalwgt_lb.mean(), others.totalwgt_lb.mean()\n\n# Solution\n\nCohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)",
"For the next few exercises, we'll load the respondent file:",
"download(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz\")\n\nresp = nsfg.ReadFemResp()",
"Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.",
"# Solution\n\nhist = thinkstats2.Hist(resp.totincr)\nthinkplot.Hist(hist, label='totincr')\nthinkplot.Config(xlabel='income (category)', ylabel='Count')",
"Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.",
"# Solution\n\nhist = thinkstats2.Hist(resp.ager)\nthinkplot.Hist(hist, label='ager')\nthinkplot.Config(xlabel='age (years)', ylabel='Count')",
"Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.",
"# Solution\n\nhist = thinkstats2.Hist(resp.numfmhh)\nthinkplot.Hist(hist, label='numfmhh')\nthinkplot.Config(xlabel='number of people', ylabel='Count')",
"Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?",
"# Solution\n\n# This distribution is positive-valued and skewed to the right.\n\nhist = thinkstats2.Hist(resp.parity)\nthinkplot.Hist(hist, label='parity')\nthinkplot.Config(xlabel='parity', ylabel='Count')",
"Use Hist.Largest to find the largest values of <tt>parity</tt>.",
"# Solution\n\nhist.Largest(10)",
"Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.\nUse <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.",
"# Solution\n\nrich = resp[resp.totincr == 14]\nhist = thinkstats2.Hist(rich.parity)\nthinkplot.Hist(hist, label='parity')\nthinkplot.Config(xlabel='parity', ylabel='Count')",
"Find the largest parities for high income respondents.",
"# Solution\n\nhist.Largest(10)",
"Compare the mean <tt>parity</tt> for high income respondents and others.",
"# Solution\n\nnot_rich = resp[resp.totincr < 14]\nrich.parity.mean(), not_rich.parity.mean()",
"Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?",
"# Solution\n\n# This effect is about 10 times stronger than the difference in pregnancy length.\n# But remembering the design of the study, we should not make too much of this\n# apparent effect.\n\nCohenEffectSize(rich.parity, not_rich.parity)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
encima/Comp_Thinking_In_Python
|
Session_10/10_OOPython.ipynb
|
mit
|
[
"Object-Oriented Python\nDr. Chris Gwilliams\ngwilliamsc@cardiff.ac.uk\nSo far, we have been writing blocks of code in functions and running them within scripts.\nWe call this procedural programming or procedure oriented programming\nPython is a multi-paradigm language and it allows you to write other styles in the language.\nWhen we call methods, we are attaching functions to objects. If we create objects in our own code, then we are following Object-oriented programming.\nClasses and Objects\nThese are the two key constructs we use in OOP in Python.\nA class is the description of a type.\nAn object is an instance of that class.\nA class has one type but it can have MULTIPLE instances\nA class describes a thing, such as a car but the objects are the instances, such as each car in the car park.\nExercise\nGive me 3 more examples of classes vs objects\nClasses in Python\n```python\nclass Person:\n def init(self, name):\n self.name = name\ndef say_name(self):\n print(\"Hi, I am called {}\".format(name))\n\n``` \nWe can create an instance of this person:",
"class Person:\n def __init__(self, name):\n self.name = name\n \n def say_name(self):\n print(\"Hi, I am called {}\".format(self.name))\n\njimbob = Person(\"Jimbob\")\njimbob.say_name()",
"Notice the use of self, we do not explicitly pass it as an argument but it represents the instance. This brings us back to scope. \nWhat would happen if we did used name instead of self.name in the say_name method?\nVariables\nClasses have variables attached to them and there are two key types we look at here:\nInstance variables\nClass variables\nClass Variables\nVariables that are attached to a class and shared across all instances.\n```python\nclass Person:\n person_count = 0\ndef __init__(self):\n Person.person_count += 1\n\n```",
"class Person:\n person_count = 0\n \n def __init__(self):\n Person.person_count += 1\n \nprint(Person.person_count)\np = Person()\nprint(Person.person_count)",
"Instance variables\nThese are properties of a class that are unique to each instance.\npython\nclass Vampire:\n def __init__(self, name, age):\n self.name = name\n self.age = age\nExercise\nThere is a class for connections to a Database with the following variables, which ones would be class and which would be instance?\n\nconnection_string\nactive_connections\nport\n\nusername\n\n\ninstance\n\nclass\nclass or instance\nclass or instance\n\nExercise\nDefine a class, which have a class variable and have a same instance variable.\nWrite a method to return (NOT PRINT) the instance variable.\nCreate instances of the class and print out both.\nMethods\nClass vs Static\nStatic methods are methods that exist without an instance being needed and they utilise what Python calls decorators.",
"class Person:\n person_count = 0\n \n def __init__(self):\n Person.person_count += 1\n \n @staticmethod\n def eat_food(): #no self passed in here\n print(\"mmmmm\")\n \nprint(Person.person_count)\np = Person()\nprint(Person.person_count)",
"Exercise\nCreate a class for a car and create a constructor (__init__) that sets the make, model and engine size.\nAdd some class variables that are general across all cars.\nCreate methods to get information about each instance.\nCreate static methods to do things with the car that would be common across all cars.\nThis is barely the surface to OO programming. You cover this in detail in the next semester.\nExtra Credit - Overriding\nWhen we create a class, we are given built in Python methods to run on it already (such as: print, add, multiply, type etc)\nWe can override these methods in order to do our own thing when we call it.\nProve it? Use on of your classes from earlier and print it. What is it printing? We can make that more detailed:",
"class Bella:\n def __init__(self): #here is the constructor being overridden\n pass\n def __str__(self): #called when we call `print` on an object\n return \"I want you, and I want you forever. One lifetime is simply not enough for me.\"\n \nb = Bella()\nprint(b)",
"Extra Credit - Overriding\nCreate a fraction class that takes a numerator and a denominator. Override the print method.\nDone? GREAT, now override the __add__ method to add two fractions together. \nThe formulae for this is:\na c ad+bc\n- + - = -----\nb d bd\nThat's all, Folks!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eds-uga/csci1360-fa16
|
assignments/A2/A2_Q6.ipynb
|
mit
|
[
"Q6\nIn this question, we'll dive more deeply into some of the review questions from the last flipped session.\nA\nIn one of the review questions, we discussed creating nested for-loops in order to list out all possible combinations of (x, y) values for a given range of numbers. Somebody pointed out that this would indeed generate some duplicates--e.g., (0, 9) and (9, 0) could be considered exactly the same.\nIn this question, you'll replicate the double-for loop for a range of numbers, but you'll only generate a range of unique pairwise combinations. This is exactly what itertools.combinations(list, 2) does, but you'll implement it yourself. For example, the list [1, 2, 3] should return [ (1, 2), (1, 3), (2, 3) ] (the ordering of the internal tuple pairs doesn't matter).\nPut your answer in the list_of_pairs variable.",
"def my_pairs(x):\n list_of_pairs = []\n \n ### BEGIN SOLUTION\n \n ### END SOLUTION\n \n return list_of_pairs\n\ntry:\n combinations\n itertools.combinations\nexcept:\n assert True\nelse:\n assert False\n\nfrom itertools import combinations as c\n\ni1 = [1, 2, 3]\na1 = set(list(c(i1, 2)))\nassert a1 == set(my_pairs(i1))\n\ni2 = [8934, 123, 23, 1, 6, 8, 553, 8, 98.345, 354, 876796.5, 34]\na2 = set(list(c(i2, 2)))\nassert a2 == set(my_pairs(i2))",
"B\nThis question is a straight-up rip-off from one of the lecture review questions. In the code below, you'll be given a matrix in list-of-lists format. You'll need to iterate through the matrix and replace each row list with a tuple, where the first element of the tuple is the row index, and the second element is the original row list.\nFor example, if the input matrix list-of-lists is\nx = [ [1, 2, 3], [4, 5, 6] ]\nthen the output should be\ny = [ (0, [1, 2, 3]), (1, [4, 5, 6]) ]\nIn other words, y[0] would give the tuple (0, [1, 2, 3]), and y[1] would give (1, [4, 5, 6]).\nPut your answer in the tuple_matrix variable.",
"def add_row_ids(matrix):\n tuple_matrix = []\n \n ### BEGIN SOLUTION\n \n ### END SOLUTION\n \n return tuple_matrix\n\ni1 = [ [1, 2, 3], [4, 5, 6] ]\na1 = set((1, (4, 5, 6)))\ni, t = add_row_ids(i1)[1]\nassert a1 == set((i, tuple(t)))\n\ni2 = [ [1, 2], [2, 3], [4, 5], [6, 7], [8, 9] ]\na2 = set((4, (8, 9)))\ni, t = add_row_ids(i2)[4]\nassert a2 == set((i, tuple(t)))",
"C\nYou could solve Part B in one of several different ways--some combination of range, zip, and enumerate, either just one, or two, or even all three. Regardless of how you solved Part B, describe in words (and pseudocode if you'd like) how you could condense the entirety of Part B into a list comprehension statement, using one or more of the aforementioned iterating tools."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ipsl/cmip6/models/ipsl-cm6a-lr/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: IPSL\nSource ID: IPSL-CM6A-LR\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: CMIP5:IPSL-CM5A-LR \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ipsl', 'ipsl-cm6a-lr', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: ice\") \nDOC.set_value(\"bare soil\") \nDOC.set_value(\"vegetated\") \n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/a\") \n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/a\") \n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/a\") \n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/a\") \n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"vegetation type\") \n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"no distinction between direct and diffuse albedo\") \n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(2) \n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(2) \n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Choisnel\") \n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/A\") \n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Explicit diffusion\") \n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(1) \n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"constant\") \n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"ground snow fraction\") \n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"snow melting\") \n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"snow age\") \nDOC.set_value(\"vegetation type\") \n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"vegetation types\") \n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"C3 grass\") \nDOC.set_value(\"C4 grass\") \nDOC.set_value(\"broadleaf tree\") \nDOC.set_value(\"needleleaf tree\") \nDOC.set_value(\"vegetated\") \n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prescribed (varying from files)\") \n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"prognostic\") \n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"CO2\") \nDOC.set_value(\"light\") \nDOC.set_value(\"temperature\") \nDOC.set_value(\"water availability\") \n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(1) \n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"combined\") \n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"transpiration\") \n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(8) \n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"Leaves, roots, sapwood above and below ground, heartwood above and below ground, fruits, and a plant carbohydrate reserve\") \n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"leaves + stems + roots\") \n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"explicitly calculated\") \n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(4) \n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"Active, slow and passive soil carbon\") \n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"N/A\") \n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(3) \n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"present day\") \n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"water\") \n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
antonpetkoff/learning
|
information-retreival/2018_10_08_inverted_index.ipynb
|
gpl-3.0
|
[
"Incidence Matrixes",
"sample_bbc_news_sentences = [\n \"China confirms Interpol chief detained\",\n \"Turkish officials believe the Washington Post writer was killed in the Saudi consulate in Istanbul.\",\n \"US wedding limousine crash kills 20\",\n \"Bulgarian journalist killed in park\",\n \"Kanye West deletes social media profiles\",\n \"Brazilians vote in polarised election\",\n \"Bull kills woman at French festival\",\n \"Indonesia to wrap up tsunami search\",\n \"Tina Turner reveals wedding night ordeal\",\n \"Victory for Trump in Supreme Court battle\",\n \"Clashes at German far-right rock concert\",\n \"The Walking Dead actor dies aged 76\",\n \"Jogger in Netherlands finds lion cub\",\n \"Monkey takes the wheel of Indian bus\"\n]\n\n#basic tokenization\nfrom nltk.tokenize import TweetTokenizer\n\ntokenizer = TweetTokenizer()\nsample_bbc_news_sentences_tokenized = [tokenizer.tokenize(sent) for sent in sample_bbc_news_sentences]\nsample_bbc_news_sentences_tokenized[0]\n\nsample_bbc_news_sentences_tokenized_lower = [[_t.lower() for _t in _s] for _s in sample_bbc_news_sentences_tokenized]\nsample_bbc_news_sentences_tokenized_lower[0]\n\n#get all unique tokens\nunique_tokens = set(sum(sample_bbc_news_sentences_tokenized_lower, []))\nunique_tokens\n\n# create incidence matrix (term-document frequency)\nimport numpy as np\nincidence_matrix = np.array([[sent.count(token) for sent in sample_bbc_news_sentences_tokenized_lower] \n for token in unique_tokens])\nprint(incidence_matrix)",
"For a bigger vocab can take too much memory (number of tokens * number of documents), while also being sparse!\nWhich words will have highest and which lowest Total freq?\nDataset\nhttps://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups",
"!ls data/mini_newsgroups/sci.electronics/\n!tail -50 data/mini_newsgroups/sci.electronics/52464 | head -10",
"You will now have to construct the Inverted Index - only the dictionary part (term and #docs)",
"import nltk\nnltk.download('punkt')\n\n\nfrom nltk.tokenize import sent_tokenize\nfrom collections import defaultdict, Counter\nimport os\n \ndef prepare_dataset(documents_dir):\n tokenized_documents = []\n for document in os.listdir(documents_dir):\n with open(os.path.join(documents_dir, document)) as outf:\n sentence_tokens = [tokenizer.tokenize(sent.lower()) for sent in sent_tokenize(outf.read())]\n tokenized_documents.append(np.array(sum(sentence_tokens, [])))\n print(\"Found documents: \", len(tokenized_documents))\n return tokenized_documents \n \ndef document_frequency(tokenized_documents):\n all_unique_tokens = set(sum(tokenized_documents, []))\n print(\"Found unique tokens: \", len(all_unique_tokens))\n \n tokens = {token: sum([1 for doc in tokenized_documents if token in doc]) \n for token in all_unique_tokens}\n return tokens\n\n# Load data\nselected_category = 'data/mini_newsgroups/sci.crypt/'\nprint(selected_category)\ntokenized_dataset = prepare_dataset(selected_category)\nprint(\"Sample tokenized document:\")\nprint(tokenized_dataset[0])\n\n# statistics\n\nall_terms = np.concatenate(tokenized_dataset).ravel()\nunique_terms = np.unique(all_terms)\nunique_terms.sort()\n\ndocument_count = len(tokenized_dataset)\nall_terms_count = len(all_terms)\nunique_terms_count = len(unique_terms)\n\nprint(\"documents count: {}\".format(document_count))\nprint(\"total term count: {}\".format(all_terms_count))\nprint(\"unique term count: {}\".format(unique_terms_count))\n\n# incidence matrix\n# rows are documents\n# columns are terms\nincidence_matrix = np.zeros([document_count, unique_terms_count])\n\n# construct postings array of tuples\n# each tuple is of the form: (term_id, doc_id, frequency of term in doc, positions of term in doc)\n# the tuple can be expanded even more\npostings = []\n\nfor term_id, term in enumerate(unique_terms):\n for doc_id, doc in enumerate(tokenized_dataset):\n positions_of_term_in_doc = np.where(doc == term)[0]\n term_count_in_doc = positions_of_term_in_doc.size\n \n if term_count_in_doc > 0:\n postings.append((term_id, doc_id, term_count_in_doc, positions_of_term_in_doc))\n \n incidence_matrix[doc_id, term_id] = term_count_in_doc\n\n# construct lexicon\n# key: term\n# value: [total term frequency, document frequency of term]\n\nlexicon = {term: [\n incidence_matrix[:, term_id].sum(), # total term frequency\n np.count_nonzero(incidence_matrix[:, term_id]) # document frequency of term\n ]\n for term_id, term in enumerate(unique_terms)}\n\npostings",
"What's next in research:\n\n<a href='http://nbjl.nankai.edu.cn/Lab_Papers/2018/SIGIR2018.pdf'>Index Compression for BitFunnel Query Processing</a>\n<a href='https://link.springer.com/chapter/10.1007/978-3-319-76941-7_47'>Inverted List Caching for Topical Index Shards</a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cliburn/sta-663-2017
|
scratch/Lecture09.ipynb
|
mit
|
[
"Machine Learning in Python",
"%matplotlib inline\nimport seaborn as sns\n\nsns.set_context('notebook', font_scale=1.5)",
"Illustration\nAn image classification example\n\nPython-powered\nopencv and scikit-image for feature extraction\nkeras for feature augmentation\nscikit-learn for classification\nflask for web application\nplus some JavaScript for interactive web features\n\n\n\nTypes of learning\n\nUnsupervised (clustering, density estimation)\nSupervised (classification, regression)\nReinforcement (reward and punishment)\n\nObjective of S/L\n\nPredict outcome from features\ny = outcome or label\nX = vector of features\ny = f(X, Θ) + error\nLoss function = g(y, f(X, Θ))\n\nModel evaluation\n\nIn-sample (training) and out-of-sample (test) errors\nCross validation\nHoldout\nK-fold\nLOOCV\n\n\nNote: Any step which uses label/outcome information must be included in cross-validation pipeline\n\nS/L training pipeline\n\nRaw data\nExtract features from raw data\nNormalize/scale features\nSelect features for use in model\nModel selection/evaluation\n\nS/L training process in scikit-learn\n\nConsistent API for scikit-learn classes\nfit\ntransform\npredict\nfit_transform for transformations\nfit_predict for clustering\nscore for classification and regression\nget_params\nset_params\n\n\nFeature extraction\nDomain knowledge useful\nConsider augmenting with external data sources\nMore specialized tools\nFrom natural language\nFrom images\nFrom images/video\nImage feature augmentation\nFrom audio\n\n\nNormalize/scale features\nNecessary for methods based on measures of distance\nMost commonly\nNote: Must apply same scaling to training and test data\n\n\nFeature selection\nNote: Include feature selection in a pipeline\n\n\nModel selection/evaluation\n\nExample\nThis example is only meant to show the mechanics of using scikit-learn.",
"from sklearn.preprocessing import PolynomialFeatures, StandardScaler\nfrom sklearn.feature_selection import VarianceThreshold\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.linear_model import RidgeClassifierCV\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import classification_report\nfrom sklearn.pipeline import Pipeline\n\nimport pandas as pd",
"Get data",
"iris = pd.read_csv('iris.csv')\niris.head()\n\nsns.pairplot(iris, hue='Species')\npass",
"Feature extraction\nSplit labels and features as plain numpy arrays",
"X = iris.iloc[:, :4].values\ny = iris.iloc[:, 4].astype('category').cat.codes.values\n\nX[:3]\n\ny[:3]",
"Generate polynomial (interaction) features",
"poly = PolynomialFeatures(2)\nX_poly = poly.fit_transform(X)\n\nX_poly[:3]",
"Scale features to have zero mean and unit standard deviation",
"scaler = StandardScaler()\nX_poly_scaled = scaler.fit_transform(X_poly)\n\nX_poly_scaled[:3]",
"Select \"useful\" features",
"selector = VarianceThreshold(threshold=0.1)\nX_new = selector.fit_transform(X_poly_scaled)\n\nX_poly_scaled.shape, X_new.shape",
"Split into training and test sets",
"X_train, X_test, y_train, y_test = train_test_split(X_new, y, random_state=1)\n\nX_train[:3]\n\ny_train[:3]\n\nX_test[:3]\n\ny_test[:3]",
"Train an estimator",
"alphas = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10]\nclf = RidgeClassifierCV(alphas=alphas, cv=5)\n\nclf.fit(X_train, y_train)",
"Evaluate estimator",
"y_pred = clf.predict(X_test)\n\nprint(classification_report(y_test, y_pred))",
"Putting it all together in a pipeline",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\npipe = Pipeline([\n ('polynomaial_features', PolynomialFeatures(2)),\n ('standard_scalar', StandardScaler()),\n ('feature_selection', VarianceThreshold(threshold=0.1)),\n ('classification', clf)\n])\n\npipe.fit(X_train, y_train)\ny_pred = pipe.predict(X_test)\nprint(classification_report(y_test, y_pred))",
"Alternative pipeline",
"params = {'n_estimators': [5, 10, 25], 'max_depth': [1, 3, None]}\nrf = RandomForestClassifier()\nclf2 = GridSearchCV(rf, params, cv=5, n_jobs=-1)\n\npipe2 = Pipeline([\n ('polynomaial_features', PolynomialFeatures(2)),\n ('feature_selection', VarianceThreshold(threshold=0.1)),\n ('classification', clf2)\n])\n\npipe2.fit(X_train, y_train)\ny_pred2 = pipe2.predict(X_test)\nprint(classification_report(y_test, y_pred2))",
"Getting detailed information from pipeline",
"classifier = pipe2.named_steps['classification']\nclassifier.best_params_"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eugen/mlstudy
|
1. Intro to Jupyter Notebooks - Python.ipynb
|
apache-2.0
|
[
"Jupyter Notebooks\nMajor features\n\nOnline environment for running snippets\nCombine hypertext, code and charts on the same page\nPerfect for sharing snippets for teach something to others.\nDon't need to have a local environment with all the languages and libs\n\nLanguage support\nMultiple languages are supported through the concept of kernels: interpreters that execute tiny scripts one by one, on demand, while maintaining a runtime environment. Basically a REPL that's called from the web UI. The list currently includes:\n* Python\n* R\n* F# (on Azure Notebooks)a\n* Julia/Scala/etc.\nData Science / Machine Learning\nPython and R are also popular for data science and machine learning, so people made sure they integrate well with Jupyter Notebooks. This means that many objects render nicely on the Notebook UI:\n\nPandas DataFrames are rendered as tables\nmatplotlib charts are rendered as inline pictures\n\nScientific Python\nFor machine learning, 3 types of libraries always pop up:\n* Data Analysis: These are libraries that can load data from various sources, do various transformations, and compute basic statistics. Best Python example: Pandas\n* Machine Learning: These libraries implement machine learning algorithms. Best Python example: scikit-learn\n* Charting: Render various graphs and plots of our data. Best Python example: matplotlib\nExample 1: Titanic dataset",
"import os\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt \n# Make charts a bit prettier\nplt.style.use('ggplot')\n\ntitanic = pd.read_csv('titanic.csv', sep = ',')\n\n# What are the dimensions\ntitanic.shape\n\n# What are the column names\ntitanic.columns\n\ntitanic['MySum'] = titanic[\"Survived\"] + titanic[\"Pclass\"]\n\n# What do the first few rows look like\ntitanic.head()\n\n# Let's x cleanup the data a bit\ncity_names = {\"C\": \"Cherbourg\", \"Q\": \"Queenstown\", \"S\": \"Southampton\"} \ntitanic[\"EmbarkedCode\"] = titanic[\"Embarked\"]\ntitanic[\"Embarked\"] = titanic[\"EmbarkedCode\"].apply(lambda value: city_names.get(value)) \n\n# Check if it worked\ntitanic.head()\n\n#Tell matplotlib to render graphs inside this notebook\n%matplotlib inline\n\n# Let's create a contingency table\npd.crosstab(titanic.Pclass, titanic.Survived, margins = True) \n\n# Let's do the same but as percentages\npd.crosstab(titanic.Pclass, titanic.Survived, margins = True).apply(lambda row: row/len(titanic))\n\ntitanic.groupby([\"Sex\", \"Survived\"]).count().unstack(\"Survived\")[\"PassengerId\"]\n\n# Let's create a stacked bar chart for sex vs. survivability \ntitanic.groupby([\"Sex\", \"Survived\"]).count().unstack(\"Survived\")[\"PassengerId\"].plot(kind=\"bar\", stacked=True)\n\n\ntitanic[titanic.Sex == 'female']\n\n# Do the same graph, but only for people older than 18 years old\ntitanic[titanic.Age >= 18].groupby([\"Sex\", \"Survived\"]).count().unstack(\"Survived\")[\"PassengerId\"].plot(kind=\"bar\", stacked=True)",
"Example 2: Video Game Sales",
"games = pd.read_csv(\"videogames.csv\", sep = \",\")\n\ngames.head()\n\nby_publisher = games.groupby(\"Publisher\").agg({\"NA_Sales\": sum, \n \"EU_Sales\": sum, \n \"JP_Sales\": sum, \n \"Global_Sales\": sum, \n \"Critic_Score\": np.mean}) \nby_publisher[\"Nintendo\"]\n\nby_publisher.columns\n\ntop_publishers = by_publisher.sort_values(\"Global_Sales\", ascending = False)[0:15][[\"NA_Sales\", \"EU_Sales\", \"JP_Sales\"]]\ntop_publishers\n\ntop_publishers.plot(kind=\"bar\", figsize=(12,5))\n\n# And again, as a barplot\ntop_publishers.plot(kind=\"bar\", stacked = True, figsize=(12,5))",
"Running Jupyter Notebooks locally with Docker\nInstall Docker, either natively or with docker machine\nIf running Linux, MacOS or Windows 10, you can get Docker native at docker.com.\nIf you're running Windows 8 then you need Docker Toolbox\nOpen a docker console to verify that docker is running\nOpen Docker Quickstart Terminal and run the following: \n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nYou're probably seeing an empty list. This is ok, docker is running, you just don't have any container running.\nChoose an image from the Jupyter official Docker\n\nGo here: https://hub.docker.com/u/jupyter/\nPick one of the images named *-notebook. For example, for python+scikit-learn+matplotlib, pick jupyter/scipy-notebook\n\nCreate a new container\n$ docker run -p 8888:8888 -v /home/jovyan/work --name jupyternb jupyter/scipy-notebook start-notebook.sh --NotebookApp.token=''\nA bit about what this does:\n\ndocker run is used to run a new container\n-p 8888:8888 tells docker to map the port 8888 from the container to the host machine (or docker-machine vm)\n-v /home/jovyan/work tells docker to create a persistent volume for the directory where the notebooks are stored. Without this, all work will be lost when stopping the docker container.\n--name jupyternb specifies the name of the container. Without it, docker will generate a random name\njupyter/scipy-notebook is the name of the image from the docker hub to run\nstart-notebook.sh --NotebookApp.token='': totally optional, but this is specific to the Jupyter Notebook Docker image and tells it to disable authentication. Otherwise, you would have to get the initial configuration token from the docker logs.\n\nAccessing the container\nIf using docker native, the app will be available at http://localhost:8888. \nIf using docker-machine, you'll need to find out its IP first using docker-machine inspect default | grep IPAddress, usually 192.168.99.100. The app will be avaialble then at e.g. http://192.168.99.100:8888.\nStarting the container\nIf the container is stopped (e.g after reboot), it can be started with:\n$ docker start jupyternb\nGood Luck!\nYou should now be able to upload or create notebooks, as well as datasets that can be loaded from the notebooks.\nNote: All work will be persisted to the docker volume, but you are encouraged to keep your files separately anwyay. They can be downloaded by choosing File > Download as > Notebook from the menu."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Cyb3rWard0g/ThreatHunter-Playbook
|
docs/notebooks/windows/08_lateral_movement/WIN-190815181010.ipynb
|
gpl-3.0
|
[
"Remote Service creation\nMetadata\n| | |\n|:------------------|:---|\n| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |\n| creation date | 2019/08/15 |\n| modification date | 2020/09/20 |\n| playbook related | ['WIN-190813181020'] |\nHypothesis\nAdversaries might be creating new services remotely to execute code and move laterally in my environment\nTechnical Context\nNone\nOffensive Tradecraft\nAdversaries may execute a binary, command, or script via a method that interacts with Windows services, such as the Service Control Manager. This can be done by by adversaries creating a new service.\nAdversaries can create services remotely to execute code and move lateraly across the environment.\nMordor Test Data\n| | |\n|:----------|:----------|\n| metadata | https://mordordatasets.com/notebooks/small/windows/08_lateral_movement/SDWIN-190518210652.html |\n| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip |\nAnalytics\nInitialize Analytics Engine",
"from openhunt.mordorutils import *\nspark = get_spark()",
"Download & Process Mordor Dataset",
"mordor_file = \"https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip\"\nregisterMordorSQLTable(spark, mordor_file, \"mordorTable\")",
"Analytic I\nLook for new services being created in your environment under a network logon session (3). That is a sign that the service creation was performed from another endpoint in the environment\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| Service | Microsoft-Windows-Security-Auditing | User created Service | 4697 |\n| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |",
"df = spark.sql(\n'''\nSELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.SubjectUserName, o.ServiceName, a.IpAddress\nFROM mordorTable o\nINNER JOIN (\n SELECT Hostname,TargetUserName,TargetLogonId,IpAddress\n FROM mordorTable\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4624\n AND LogonType = 3 \n AND NOT TargetUserName LIKE \"%$\"\n ) a\nON o.SubjectLogonId = a.TargetLogonId\nWHERE LOWER(o.Channel) = \"security\"\n AND o.EventID = 4697\n'''\n)\ndf.show(10,False)",
"Known Bypasses\n| Idea | Playbook |\n|:-----|:---------|\nFalse Positives\nNone\nHunter Notes\n\nIf there are a lot of unique services being created in your environment, try to categorize the data based on the bussiness unit.\nIdentify the source of unique services being created everyday. I have seen Microsoft applications doing this.\nStack the values of the service file name associated with the new service.\nDocument what users create new services across your environment on a daily basis\n\nHunt Output\n| Type | Link |\n| :----| :----|\nReferences\n\nhttps://www.powershellempire.com/?page_id=523"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
timofeymukha/bervet_python
|
Laboration 10.ipynb
|
gpl-3.0
|
[
"Laboration 10\nLorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse rhoncus, felis non imperdiet ornare, metus mauris ultrices nibh, et ornare erat metus sed ipsum. Ut sit amet sapien nisi. Nunc nec enim nec nibh blandit luctus quis id ante. Curabitur quis porttitor elit. Sed nec magna vel quam cursus pretium in vel nulla. Pellentesque ac eros ut velit fringilla elementum eget sit amet ligula. Etiam porttitor dolor mi, a congue justo eleifend nec. Morbi et felis at ante aliquet condimentum id ac metus. Nulla non fringilla justo, vel feugiat risus. In at eros eu neque condimentum sodales.\n$$F(D) = \\int_0^D p\\cdot(D-y)\\cdot w(y)$$\nNam sollicitudin suscipit magna. Suspendisse porttitor congue tellus ut gravida. Mauris convallis justo et mollis suscipit. Pellentesque convallis turpis in imperdiet commodo. Vivamus pulvinar viverra neque at mattis. Nullam at leo sit amet metus pulvinar rhoncus. Praesent faucibus orci sit amet sodales tristique. Nam magna arcu, laoreet vel erat nec, mattis imperdiet massa.\n<img src='trapez.png'/>\nSkapa matris\n\nKoden nedan lagrar till variabeln A matrisen\n$$A = \\begin{pmatrix}1 & 2 \\ 3 & 4 \\end{pmatrix}$$\noch skriver ut den. Kör koden genom att markera cellen och trycka Skift + Enter.",
"import numpy as np\nA = np.array([[1,2],\n [3,4]])\n\nprint(A)",
"Integrera funktion\n\nKoden nedan definierar en funktion $f(x) = e^x$ och beräknar integralen $I = \\int_0^1 f(x)\\,dx$ med funktionen scipy.integrate.quad.",
"import scipy.integrate as integrate\n\ndef f(x):\n return np.exp(x)\n\nI, err = integrate.quad(f, 0, 1)\n\nprint(\"Integralens värde är {:.2f} (med feltolerans = {:.1e})\".format(I, err))",
"Ändra i koden ovan och integrera istället funktionen $g(x) = \\sin(10x)$ över samma intervall. Definiera en funktion g och ändra i anropet till quad.\n\nPlotta\n\nKör koden nedan för att rita en graf av funktionen $f$ ovan.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0,1)\nplt.plot(x, f(x))",
"Exportera som PDF\n\nDet går att spara körningsresultaten som PDF/HTML m.m.\n\nMotsvarande finns tillgängligt för MATLAB genom MATLAB Live Editor"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rgarcia-herrera/sistemas-dinamicos
|
jacobian.ipynb
|
gpl-3.0
|
[
"Jacobiana\n$$\\frac{dx}{dt}=a_{1}x-b_{1}x^{2}+c_{1}xy$$\n$$\\frac{dy}{dt}=a_{2}y-b_{2}y^{2}+c_{1}xy$$\n\n$$\\frac{dx}{dt}=(1-x-y)x$$\n$$\\frac{dy}{dt}=(4-7x-3y)y$$",
"import numpy as np\n\n# importamos bibliotecas para plotear\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# para desplegar los plots en el notebook\n%matplotlib inline\n\n# para cómputo simbólico\nfrom sympy import *\ninit_printing()\n\nx, y = symbols('x y')\n\nf = (1-x-y)*x\nf\n\ng = (4-7*x-3*y)*y\ng",
"Equilibrios",
"solve(f, x)\n\nsolve(g, y)\n\nY = solve(g, y)[1]\nsolve(f.subs(y, Y),x)\n\nsolve(g.subs(x, -y + 1), y)",
"Jacobiana",
"J = symbols(\"J\")\n\nJ = Matrix([[diff(f, x), diff(f, y)], \n [diff(g, x), diff(g, y)]])\nJ",
"Evaluada en un punto de equilibrio",
"J = J.subs({x: 1/4, y:3/4})\nJ\n\nJ.det(), J.trace()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
brillozon-code/pebaystats
|
examples/quickstart.ipynb
|
apache-2.0
|
[
"Quick Start",
"from __future__ import print_function",
"Import the aggregation object from the module.",
"from pebaystats import dstats",
"Create a few objects with various depths (number of moments) and widths (number of columns to compute statistics for). Here the stats1 and stats3 objects each accumulate two moments for a single column of data, and the stats2 object collects 4 statistical moments for 4 columns of data.",
"stats1 = dstats(2,1)\nstats2 = dstats(4,4)\nstats3 = dstats(2,1)",
"Add individual data values to the single column accumulation of the stats1 object. Print the object to view its state, which includes the moment values so far accumulated. Also, print the list of lists returned from the statistics() method call. Here you can see that the mean is 2.0 and the variance is 0.0.",
"stats1.add(2)\nstats1.add(2)\nstats1.add(2)\nprint('stats1: %s' % stats1)\nprint('statistics: %s' % stats1.statistics())",
"Add entire rows (multiple columns) of values to the stats2 object. View the accumulated results. Note that when the second moment (n * Var) is 0, equivalent to a deviation of 0, the higher moments are left in there initial 0 state. The higher statistics are set to a NaN value in this case.",
"stats2.add([1.2,2,3,9])\nstats2.add([4.5,6,7,9])\nstats2.add([8.9,0,1,9])\nstats2.add([2.3,4,5,9])\nprint('stats2: %s' % stats2)\nprint('statistics: %s' % stats2.statistics(True))",
"Remove data (UNIMPLEMENTED) from the stats2 object.",
"# stats2.remove(1.2,2,3,9)",
"Load the stats3 object with with data and view the results.",
"stats3.add(4)\nstats3.add(4)\nstats3.add(4)\nprint('stats3: %s' % stats3)\nprint('statistics: %s' % stats3.statistics())",
"Now aggregate that object onto the first. This only works when the shapes are the same.",
"stats1.aggregate(stats3)\nprint('stast1: %s' % stats1)\nprint('statistics: %s' % stats1.statistics(True))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
danresende/deep-learning
|
autoencoder/Simple_Autoencoder.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32 # feel free to change this value\n\ninputs_ = tf.placeholder(tf.float32, [None, 784])\ntargets_ = tf.placeholder(tf.float32, [None, 784])\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(inputs_, 784)\n# Sigmoid output from logits\ndecoded = tf.nn.sigmoid(logits)\n\n# Sigmoid cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n# Mean of the loss\ncost = tf.reduce_mean(loss)\n\n# Adam optimizer\nopt = tf.train.AdamOptimizer().minimize(cost)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
abatula/MachineLearningIntro
|
Diabetes_DataSet.ipynb
|
gpl-2.0
|
[
"What is a dataset?\nA dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.\nWhat is an example?\nAn example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.\nExamples are often referred to with the letter $x$.\nWhat is a feature?\nA feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the color of the walls is not, even though they both describe the house.\nFeatures are sometimes specified as a single element of an example, $x_i$\nWhat is a label?\nA label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house.\nLabels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color, species of plant/animal). They are typically specified by the letter $y$.\nThe Diabetes Dataset\nHere, we use the Diabetes dataset, available through scikit-learn. This dataset contains information related to specific patients and disease progression of diabetes.\nExamples\nThe datasets consists of 442 examples, each representing an individual diabetes patient.\nFeatures\nThe dataset contains 10 features: Age, sex, body mass index, average blood pressure, and 6 blood serum measurements.\nTarget\nThe target is a quantitative measure of disease progression after one year.\nOur goal\nThe goal, for this dataset, is to train a computer to predict the progression of diabetes after one year.\nSetup\nTell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), and datasets (to download the iris dataset from scikit-learn). Also import colormaps to customize plot coloring and Normalize to normalize data for use with colormaps.",
"# Print figures in the notebook\n%matplotlib inline \n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import datasets # Import datasets from scikit-learn\nimport matplotlib.cm as cm\nfrom matplotlib.colors import Normalize",
"Import the dataset\nImport the dataset and store it to a variable called diabetes. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target', 'data', 'feature_names']\nThe data features are stored in diabetes.data, where each row is an example from a single patient, and each column is a single feature. The feature names are stored in diabetes.feature_names. Target values are stored in diabetes.target.",
"# Import some data to play with\ndiabetes = datasets.load_diabetes()\n\n# List the data keys\nprint('Keys: ' + str(diabetes.keys()))\nprint('Feature names: ' + str(diabetes.feature_names))\nprint('')\n\n# Store the labels (y), features (X), and feature names\ny = diabetes.target # Labels are stored in y as numbers\nX = diabetes.data\nfeatureNames = diabetes.feature_names\n\n# Show the first five examples\nX[:5,:]",
"Visualizing the data\nVisualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of serum measurement 1 (x-axis) vs serum measurement 6 (y-axis). The level of diabetes progression has been mapped to fit in the [0,1] range and is shown as a color scale.",
"norm = Normalize(vmin=y.min(), vmax=y.max()) # need to normalize target to [0,1] range for use with colormap\nplt.scatter(X[:, 4], X[:, 9], c=norm(y), cmap=cm.bone_r)\nplt.colorbar()\nplt.xlabel('Serum Measurement 1 (s1)')\nplt.ylabel('Serum Measurement 6 (s6)')\n\nplt.show()\n",
"Make your own plot\nBelow, try making your own plots. First, modify the previous code to create a similar plot, comparing different pairs of features. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.",
"# Put your code here!",
"Training and Testing Sets\nIn order to evaluate our data properly, we need to divide our dataset into training and testing sets.\n* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.\n* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not \"see\" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.\n* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), a this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.\nCreating training and testing sets\nBelow, we create a training and testing set from the iris dataset using using the train_test_split() function.",
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\nprint('Original dataset size: ' + str(X.shape))\nprint('Training dataset size: ' + str(X_train.shape))\nprint('Test dataset size: ' + str(X_test.shape))",
"Create validation set using crossvalidation\nCrossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.\n* Divide data into multiple equal sections (called folds)\n* Hold one fold out for validation and train on the other folds\n* Repeat using each fold as validation\nThe KFold() function returns an iterable with pairs of indices for training and testing data.",
"from sklearn.model_selection import KFold\n\n# Older versions of scikit learn used n_folds instead of n_splits\nkf = KFold(n_splits=5)\nfor trainInd, valInd in kf.split(X_train):\n X_tr = X_train[trainInd,:]\n y_tr = y_train[trainInd]\n X_val = X_train[valInd,:]\n y_val = y_train[valInd]\n print(\"%s %s\" % (X_tr.shape, X_val.shape))",
"More information on different methods for creating training and testing sets is available at scikit-learn's crossvalidation page."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
anguszxd/segment
|
你好,Colaboratory.ipynb
|
gpl-3.0
|
[
"<a href=\"https://colab.research.google.com/github/anguszxd/segment/blob/master/%E4%BD%A0%E5%A5%BD%EF%BC%8CColaboratory.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n<img height=\"60px\" src=\"/img/colab_favicon.ico\" align=\"left\" hspace=\"20px\" vspace=\"5px\">\n欢迎使用 Colaboratory!\nColaboratory 是免费的 Jupyter 笔记本环境,不需要进行任何设置就可以使用,并且完全在云端运行。要了解更多信息,请参阅我们的常见问题解答。\n使用入门\n\nColaboratory 概览\n加载和保存数据:本地文件、云端硬盘、表格、Google Cloud Storage\n导入库和安装依赖项\n使用 Google Cloud BigQuery\n表单、图表、Markdown 以及微件\n支持 GPU 的 TensorFlow\n机器学习速成课程:Pandas 简介以及使用 TensorFlow 的起始步骤\n\n重要功能\n执行 TensorFlow 代码\n链接文字借助 Colaboratory,您只需点击一下鼠标,即可在浏览器中执行 TensorFlow 代码。下面的示例展示了两个矩阵相加的情况。\n\n$\\begin{bmatrix}\n 1. & 1. & 1. \\\n 1. & 1. & 1. \\\n\\end{bmatrix} +\n\\begin{bmatrix}\n 1. & 2. & 3. \\\n 4. & 5. & 6. \\\n\\end{bmatrix} =\n\\begin{bmatrix}\n 2. & 3. & 4. \\\n 5. & 6. & 7. \\\n\\end{bmatrix}$",
"import tensorflow as tf\n\ninput1 = tf.ones((2, 3))\ninput2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))\noutput = input1 + input2\n\nwith tf.Session():\n result = output.eval()\nresult ",
"GitHub\n您可以通过依次转到“文件”>“在 GitHub 中保存一份副本…”,保存一个 Colab 笔记本副本\n只需在 colab.research.google.com/github/ 后面加上路径,即可在 GitHub 上加载任何 .ipynb。例如,colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb 将在 GitHub 上加载此 .ipynb。\n可视化\nColaboratory 包含很多已被广泛使用的库(例如 matplotlib),因而能够简化数据的可视化过程。",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(20)\ny = [x_i + np.random.randn(1) for x_i in x]\na, b = np.polyfit(x, y, 1)\n_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')",
"想使用新的库?请在笔记本的顶部通过 pip install 命令安装该库。然后,您就可以在笔记本的任何其他位置使用该库。要了解导入常用库的方法,请参阅导入库示例笔记本。",
"!pip install -q matplotlib-venn\n\nfrom matplotlib_venn import venn2\n_ = venn2(subsets = (3, 2, 1))",
"本地运行时支持\nColab 支持连接本地计算机上的 Jupyter 运行时。有关详情,请参阅我们的文档。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
california-civic-data-coalition/python-calaccess-notebooks
|
calaccess-exploration/late-contributions.ipynb
|
mit
|
[
"Late contributions Received and Made\nSetup",
"%load_ext sql\n\nfrom django.conf import settings\nconnection_string = 'postgresql+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(\n **settings.DATABASES['default']\n)\n%sql $connection_string",
"Unique Composite Key\nThe documentation says that the records are unique on the following fields:\n* FILING_ID\n* AMEND_ID\n* LINE_ITEM\n* REC_TYPE\n* FORM_TYPE\nREC_TYPE is always the same value: S497, so we can ignore this column. \nFORM_TYPE is either F497P1 or F497P2, indicating in whether itemized transaction is listed under Part 1 (Contributions Received) or Part 2 (Contributions Made). I'll split these up into separate tables.\nAre the S497_CD records actually unique on FILING_ID, AMEND_ID and LINE_ITEM?\nYes. And this is even true across the Parts 1 and 2 (Contributions Received and Contributions Made).",
"%%sql\nSELECT \"FILING_ID\", \"AMEND_ID\", \"LINE_ITEM\", COUNT(*)\nFROM \"S497_CD\"\nGROUP BY 1, 2, 3\nHAVING COUNT(*) > 1\nORDER BY COUNT(*) DESC;",
"TRAN_ID\nThe S497_CD table includes a TRAN_ID field, which the documentation describes as a \"Permanent value unique to this item\".\nIs TRAN_ID ever NULL or blank?\nNo.",
"%%sql\nSELECT COUNT(*)\nFROM \"S497_CD\"\nWHERE \"TRAN_ID\" IS NULL OR \"TRAN_ID\" = '' OR \"TRAN_ID\" = '0';",
"Is TRAN_ID unique across filings?\nDecidedly no.",
"%%sql\nSELECT \"TRAN_ID\", COUNT(DISTINCT \"FILING_ID\") \nFROM \"S497_CD\"\nGROUP BY 1\nHAVING COUNT(DISTINCT \"FILING_ID\") > 1\nORDER BY COUNT(DISTINCT \"FILING_ID\") DESC\nLIMIT 100;",
"But TRAN_ID does appear to be unique within each filing amendment, and appears to be reused for each filing.",
"%%sql\nSELECT \"FILING_ID\", \"TRAN_ID\", COUNT(DISTINCT \"AMEND_ID\") AS amend_count, COUNT(*) AS row_count\nFROM \"S497_CD\"\nGROUP BY 1, 2\nORDER BY COUNT(*) DESC\nLIMIT 100;",
"There's one exception:",
"%%sql\nSELECT \"FILING_ID\", \"TRAN_ID\", \"AMEND_ID\", COUNT(*)\nFROM \"S497_CD\"\nGROUP BY 1, 2, 3\nHAVING COUNT(*) > 1;",
"Looks like this TRAN_ID is duplicated across the two parts of the filing. So it was both a contribution both made and received?",
"%%sql\nSELECT *\nFROM \"S497_CD\"\nWHERE \"FILING_ID\" = 2072379\nAND \"TRAN_ID\" = 'EXP9671';",
"Looking at the PDF for the filing, it appears to be a check from the California Psychological Association PAC to the McCarty for Assembly 2016 committee, which was given and returned on 8/25/2016.\nRegardless, because the combinations of FILING_ID, AMEND_ID and TRAN_ID are unique within each part of the Schedule 497, we could substitute TRAN_ID for LINE_ITEM in the composite key when splitting up the contributions received from the contributions made.\nThe advantage is that the TRAN_ID purportedly points to the same contribution from one amendment to the next, whereas the same LINE_ITEM might not because the filers don't necessarily list transactions on the same line from one filing amendment to the next.\nHere's an example: On the original Schedule 497 filing for Steven Bradford for Senate 2016, a $8,500.00 contribution from an AFL-CIO sub-committee is listed on line 1. But on the first and second amendments to the filing, it is listed on line 4."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ecell/ecell4-notebooks
|
en/tests/Reversible.ipynb
|
gpl-2.0
|
[
"Reversible\nThis is for an integrated test of E-Cell4. Here, we test a simple reversible association/dissociation model in volume.",
"%matplotlib inline\nfrom ecell4.prelude import *",
"Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and collision rate defined as ka andkD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.",
"D = 1\nradius = 0.005\nN_A = 60\nU = 0.5\nka_factor = 0.1 # 0.1 is for reaction-limited\n\nN = 20 # a number of samples",
"Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.",
"import numpy\nkD = 4 * numpy.pi * (radius * 2) * (D * 2)\nka = kD * ka_factor\nkd = ka * N_A * U * U / (1 - U)\nkon = ka * kD / (ka + kD)\nkoff = kd * kon / ka",
"Start with no C molecules, and simulate 3 seconds.",
"y0 = {'A': N_A, 'B': N_A}\nduration = 3\nopt_kwargs = {'legend': True}",
"Make a model with effective rates. This model is for macroscopic simulation algorithms.",
"with species_attributes():\n A | B | C | {'radius': radius, 'D': D}\n\nwith reaction_rules():\n A + B == C | (kon, koff)\n\nm = get_model()",
"Save a result with ode as obs, and plot it:",
"ret1 = run_simulation(duration, y0=y0, model=m)\nret1.plot(**opt_kwargs)",
"Simulating with gillespie (Bars represent standard error of the mean):",
"ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver='gillespie', repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)",
"Simulating with meso:",
"ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('meso', Integer3(4, 4, 4)), repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)",
"Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.",
"with species_attributes():\n A | B | C | {'radius': radius, 'D': D}\n\nwith reaction_rules():\n A + B == C | (ka, kd)\n\nm = get_model()",
"Simulating with spatiocyte. voxel_radius is given as radius:",
"ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)",
"Simulating with egfrd:",
"ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N)\nret2.plot('o', ret1, '-', **opt_kwargs)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JorgeDeLosSantos/nusa
|
docs/nusa-info/es/linear-triangle-element.ipynb
|
mit
|
[
"Elemento LinearTriangle\nEl elemento LinearTriangle es un elemento finito bidimensional con coordenadas locales y globales, caracterizado por una función de forma lineal. Puede ser utilizado para problemas de esfuerzo y deformación plana. Este elemento tiene un módulo de elasticidad $E$, una relación de Poisson $\\nu$ y un espesor $t$. Cada triangulo tiene tres nodos con dos grados de libertad en el plano (ux y uy), las coordenadas globales de estos nodos se denotan por $(x_i,y_i)$, $(x_j,y_j)$ y $(x_m,y_m)$ (como se muestra en la figura). El orden de los nodos para cada elemento es importante y deben listarse en sentido antihorario comenzando desde cualquier nodo.\n<img src=\"src/linear-triangle-element/linear_triangle_element.PNG\" width=\"200px\">\nLa matriz de rigidez por elemento viene dada por:\n$$ [k] = tA[B]^T[D][B] $$\nDonde $A$ es el área del elemento dada por:\n$$ A = \\frac{1}{2} \\left( x_i(y_j-y_m) + x_j(y_m - y_i) + x_m(y_i - y_j) \\right) $$\ny $[B]$ es la matriz dada por:\n$$\n[B] = \\frac{1}{2A}\n\\begin{bmatrix}\n\\beta_i & 0 & \\beta_j & 0 & \\beta_m & 0 \\\n0 & \\gamma_i & 0 & \\gamma_j & 0 & \\gamma_m \\\n\\gamma_i & \\beta_i & \\gamma_j & \\beta_j & \\gamma_m & \\beta_m \\\n\\end{bmatrix}\n$$\nDonde $\\beta_i$, $\\beta_j$, $\\beta_m$, $\\gamma_i$, $\\gamma_j$ y $\\gamma_m$ están dados por:\n$$ \\beta_i = y_j - y_m $$\n$$ \\beta_j = y_m - y_i $$\n$$ \\beta_m = y_i - y_j $$\n$$ \\gamma_i = x_m - x_j $$\n$$ \\gamma_j = x_i - x_m $$\n$$ \\gamma_m = x_j - x_i $$\nPara el caso de esfuerzo plano la matriz $D$ viene dada por:\n$$\n[D] = \\frac{E}{1-\\nu^2}\n\\begin{bmatrix}\n1 & \\nu & 0 \\\n\\nu & 1 & 0 \\\n0 & 0 & \\frac{1-\\nu}{2} \\\n\\end{bmatrix}\n$$\nLos esfuerzos en cada elemento se calculan mediante:\n$$ {\\sigma} = [D][B]{u} $$\nDonde $\\sigma$ es el vector de esfuerzos en el plano, es decir:\n$$ \\sigma = \\begin{Bmatrix} \\sigma_x \\ \\sigma_y \\ \\tau_{xy} \\end{Bmatrix} $$\ny $u$ el vector de desplazamientos en cada nodo del elemento:\n$$ {u} = \\begin{Bmatrix} ux_i \\ uy_i \\ ux_j \\ uy_j \\ ux_m \\ uy_m \\end{Bmatrix}$$\nEjemplo 1. Caso simple\nEn este ejemplo vamos a resolver el caso más simple: un elemento triangular fijo en dos de sus nodos y con una fuerza aplicada en el tercero.\nImportamos NuSA e indicamos que usaremos el modo \"inline\" de matplotlib en este notebook",
"%matplotlib inline\nfrom nusa import *",
"Definimos algunas propiedades a utilizar:",
"E = 200e9 # Módulo de elasticidad\nnu = 0.3 # Relación de Poisson\nt = 0.1 # Espesor",
"Creamos los nodos y el elemento:",
"n1 = Node((0,0))\nn2 = Node((1,0.5))\nn3 = Node((0,1))\ne1 = LinearTriangle((n1,n2,n3),E,nu,t)",
"Creamos el modelo y agregamos los nodos y elementos a este:",
"m = LinearTriangleModel(\"Modelo 01\")\nfor n in (n1,n2,n3): m.add_node(n)\nm.add_element(e1)",
"Aplicamos las condiciones de carga y restricciones:",
"m.add_constraint(n1, ux=0, uy=0)\nm.add_constraint(n3, ux=0, uy=0)\nm.add_force(n2, (5e3, 0))",
"En este punto podemos graficar el modelo con las condiciones impuestas, para esto utilizamos el método plot_model:",
"m.plot_model()",
"Finalmente resolvemos el modelo:",
"m.solve()",
"Podemos consultar los desplazamientos nodales:",
"for n in m.get_nodes():\n print(n.ux,n.uy)",
"Además, utilizar las herramientas de postproceso para graficar el campo de desplazamientos en el elemento:",
"m.plot_nsol('ux')",
"Placa simple, utilizando NuSA Modeler",
"import nusa.mesh as nmsh\n\nmd = nmsh.Modeler()\nBB, ES = 1, 0.1\na = md.add_rectangle((0,0),(BB,BB), esize=ES)\nnc, ec = md.generate_mesh()\nx,y = nc[:,0], nc[:,1]\n\nnodos = []\nelementos = []\n\nfor k,nd in enumerate(nc):\n cn = Node((x[k],y[k]))\n nodos.append(cn)\n \nfor k,elm in enumerate(ec):\n i,j,m = int(elm[0]),int(elm[1]),int(elm[2])\n ni,nj,nm = nodos[i],nodos[j],nodos[m]\n ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.25)\n elementos.append(ce)\n\nm = LinearTriangleModel()\nfor node in nodos: m.add_node(node)\nfor elm in elementos: m.add_element(elm)\n \n# Aplicando condiciones de frontera en los extremos\nminx, maxx = min(x), max(x)\nminy, maxy = min(y), max(y)\n\nP = 100e3/((BB/ES)+1)\n\nfor node in nodos:\n if node.x == minx:\n m.add_constraint(node, ux=0, uy=0)\n if node.x == maxx:\n m.add_force(node, (P,0))\n\nm.plot_model()\nm.solve()\n\n# Esfuerzo de von Mises\nm.plot_nsol(\"seqv\",\"Pa\")\n\n# Deformación unitaria en la dirección X\nm.plot_nsol(\"exx\", \"\")",
"Placa con agujero",
"%matplotlib inline\nfrom nusa import *\nimport nusa.mesh as nmsh\n\nmd = nmsh.Modeler()\na = md.add_rectangle((0,0),(1,1), esize=0.1)\nb = md.add_circle((0.5,0.5), 0.1, esize=0.05)\nmd.substract_surfaces(a,b)\nnc, ec = md.generate_mesh()\nx,y = nc[:,0], nc[:,1]\n\nnodos = []\nelementos = []\n\nfor k,nd in enumerate(nc):\n cn = Node((x[k],y[k]))\n nodos.append(cn)\n \nfor k,elm in enumerate(ec):\n i,j,m = int(elm[0]),int(elm[1]),int(elm[2])\n ni,nj,nm = nodos[i],nodos[j],nodos[m]\n ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.1)\n elementos.append(ce)\n\nm = LinearTriangleModel()\nfor node in nodos: m.add_node(node)\nfor elm in elementos: m.add_element(elm)\n \n# Aplicando condiciones de frontera en los extremos\nminx, maxx = min(x), max(x)\nminy, maxy = min(y), max(y)\n\nfor node in nodos:\n if node.x == minx:\n m.add_constraint(node, ux=0, uy=0)\n if node.x == maxx:\n m.add_force(node, (10e3,0))\n\nm.plot_model()\nm.solve()\nm.plot_nsol(\"sxx\", units=\"Pa\")\n\n# Element solution\nm.plot_esol(\"sxx\")",
"Concentrador de esfuerzos",
"# generando la geometría\nmd = nmsh.Modeler()\ng = md.geom # Para acceder a la clase SimpleGMSH\np1 = g.add_point((0,0))\np2 = g.add_point((1,0))\np3 = g.add_point((2,0))\np4 = g.add_point((2,1))\np5 = g.add_point((3,1))\np6 = g.add_point((3,2))\np7 = g.add_point((0,2))\np8 = g.add_point((0.7,1.4))\np9 = g.add_point((0.7,1.7), esize=0.1)\nL1 = g.add_line(p1,p2)\nL2 = g.add_circle(p3,p2,p4)\nL3 = g.add_line(p4,p5)\nL4 = g.add_line(p5,p6)\nL5 = g.add_line(p6,p7)\nL6 = g.add_line(p7,p1)\nL7 = g.add_circle(p8,p9)\nloop1 = g.add_line_loop(L1,L2,L3,L4,L5,L6) # boundary\nloop2 = g.add_line_loop(L7)# hole\ng.add_plane_surface(loop1,loop2)\nnc, ec = md.generate_mesh()\n\nx,y = nc[:,0], nc[:,1]\n\nnodos = []\nelementos = []\n\nfor k,nd in enumerate(nc):\n cn = Node((x[k],y[k]))\n nodos.append(cn)\n \nfor k,elm in enumerate(ec):\n i,j,m = int(elm[0]),int(elm[1]),int(elm[2])\n ni,nj,nm = nodos[i],nodos[j],nodos[m]\n ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.1)\n elementos.append(ce)\n\nm = LinearTriangleModel()\nfor node in nodos: m.add_node(node)\nfor elm in elementos: m.add_element(elm)\n \n# Aplicando condiciones de frontera en los extremos\nminx, maxx = min(x), max(x)\nminy, maxy = min(y), max(y)\n\nfor node in nodos:\n if node.x == minx:\n m.add_constraint(node, ux=0, uy=0)\n if node.x == maxx:\n m.add_force(node, (10e3,1))\n\nm.plot_model()\nm.solve()\nm.plot_nsol(\"seqv\") # Esfuerzo de von Mises"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hasadna/knesset-data-pipelines
|
jupyter-notebooks/running the pipelines - getting fresh committee data.ipynb
|
mit
|
[
"Running pipelines\nThe pipelines are run on the server periodically and based on pipeline and data dependencies. \nYou can also run specific pipelines manually for development or to run custom pipelines.\nChange directory to project root\nThe Jupyter notebooks run in the jupyter-notebooks directory. To run pipelines you need to change directory to the parent directory\nWhen running using Docker the directory will be /pipelines",
"import os\n\nos.chdir('..')\nos.getcwd()",
"List the available pipelines",
"!{'dpp'}",
"Run a pipeline\nThe following runs the ./committees/kns_committee pipeline which downloads committees from the Knesset API",
"!{'dpp run --verbose ./committees/kns_committee'}",
"Inspect the output datapackage descriptor\nPipelines use datapackages as the primary input and output data.\nPipeline and datapackage names usually match, so the output of the ./committees/kns_committee pipeline is available at local directory ./data/committees/kns_committee/datapackage.json",
"KNS_COMMITTEE_DATAPACKAGE_PATH = './data/committees/kns_committee/datapackage.json'",
"Each package may contain multiple resources, let's see which resource names are available for the kns_committee package",
"from datapackage import Package\n\nkns_committee_package = Package(KNS_COMMITTEE_DATAPACKAGE_PATH)\nkns_committee_package.resource_names\n\nKNS_COMMITTEE_RESOURE_NAME = 'kns_committee'",
"Inspect the kns_committee resource descriptor which includes metadata and field descriptions",
"import yaml\n\nprint(yaml.dump(kns_committee_package.get_resource(KNS_COMMITTEE_RESOURE_NAME).descriptor, \n allow_unicode=True, default_flow_style=False))",
"Print the first 5 row of data",
"for i, row in enumerate(kns_committee_package.get_resource(KNS_COMMITTEE_RESOURE_NAME).iter(keyed=True), 1):\n if i > 5: continue\n print(f'-- row {i} --')\n print(yaml.dump(row, allow_unicode=True, default_flow_style=False))\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
telescopeuser/workshop_blog
|
wechat_tool_py3_local/reference/S-IPA-Workshop/workshop2/Web-Parcel-Bot-Python-WeChat/Web-Parcel-Bot-Python-WeChat.ipynb
|
mit
|
[
"import IPython.display\nIPython.display.YouTubeVideo('TBD')",
"Intelligent Process Automation / Intelligent Agent\nzhan.gu@nus.edu.sg\nA workshop to develop & use an intelligent and interactive chat-bot in WeChat\nFor Automated Parcel Status Enquiry\n<img src='reference/WeChat_IPA-Bot_Parcel_Enquiry.jpg' width=50% style=\"float: left;\">\n<img src='reference/WeChat_IPA-Bot_Parcel_Enquiry_Status.png' width=100% style=\"float: left;\">\nImports / 导入需要用到的一些功能程序库:",
"# from __future__ import unicode_literals, division\nimport time, datetime, requests, itchat\nfrom itchat.content import *\n\nimport tagui as t\nfrom time import localtime, strftime\n# import pandas as pd",
"Function: Enquire_Parcel",
"# Function\n# input : Parcel ID, type: string\n# Return: File name of screenshot png image containing parcel status results, type: string\n\ndef Enquire_Parcel(str_parcel_id):\n# print('str_parcel_id : ', str_parcel_id)\n\n result_filename = 'results/' + str(str_parcel_id).lstrip() + strftime(\"-%Y-%m-%d-%Hm%Mm%Ss\", localtime()) + '.png'\n# print('result_filename : ', result_filename)\n \n t.init(visual_automation = True)\n t.url('http://qexpress.co.nz/tracking.aspx?orderNumber=' + str(str_parcel_id).lstrip())\n t.wait(0.5)\n t.keyboard('[end]')\n t.wait(0.5)\n t.snap('page.png', result_filename)\n t.wait(0.5)\n t.close()\n print('[ Enquie Parcel ] ID : {} | File Name : {}'.format(str_parcel_id, result_filename)) \n return result_filename;",
"",
"#######################################################################\n### Optional Adhoc Test\n### Function: Enquire_Parcel\n#######################################################################\n\nmsg_parcel_id = 'DZ140053181NZ'\nmsg_filename = Enquire_Parcel(msg_parcel_id)",
"* Log in web WeChat using QR code image / 用微信App扫QR码图片来自动登录",
"# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。\nitchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: 命令行显示QR图片\n\n# If click Kernel -> Interupt, then hot re-login is possible:\n# itchat.auto_login(hotReload=True)",
"",
"#######################################################################\n### Optional Adhoc Test\n### Send Enquire_Parcel status through WeChat\n#######################################################################\n\n# Locate User / 获取分别对应相应键值的用户。\n\n# friend = itchat.search_friends(name=u'IPA-Bot')\nfriend = itchat.search_friends(name=u'白黑')\n\nfor i in range(0, len(friend)):\n print('NickName : %s' % friend[i]['NickName'])\n print('Alias A-ID: %s' % friend[i]['Alias'])\n print('RemarkName: %s' % friend[i]['RemarkName'])\n print('UserName : %s' % friend[i]['UserName'])\n\n# Picture / 图片\nreply = itchat.send_image(msg_filename, friend[0]['UserName']) \nprint(reply['BaseResponse']['ErrMsg'])",
"* Interactive Conversation : Auto Mode / 自定义复杂消息处理,例如:信息存档、回复群组中被@的消息\nParcel Enquiry module",
"# 如果收到[TEXT, MAP, CARD, NOTE, SHARING]类的信息,会自动回复:\n@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING]) # 文字、位置、名片、通知、分享\ndef text_reply(msg):\n print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s' \n % (msg['Type'], msg['Text'], msg['FromUserName']))\n itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\\nI received:\\n[ %s ]\\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])\n \n ######################################################################################################\n # Parcel Enquiry module\n ######################################################################################################\n if \"Parcel\" in msg['Text'] or \"parcel\" in msg['Text']: # Check parcel enquiry command keyword: Parcel\n msg_filename = Enquire_Parcel(msg['Text'].replace('Parcel ','').replace('parcel ','')) # Extract Parcel ID from message\n itchat.send_image(msg_filename, msg['FromUserName']) \n ######################################################################################################\n\n \n# 如果收到[PICTURE, RECORDING, ATTACHMENT, VIDEO]类的信息,会自动保存:\n@itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO]) # 图片、语音、文件、视频\ndef download_files(msg):\n msg['Text'](msg['FileName'])\n print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s' \n % ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'], msg['FromUserName']))\n itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\\nI received:', msg['FromUserName'])\n return '@%s@%s' % ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'])\n\n\n# 如果收到新朋友的请求,会自动通过验证添加加好友,并主动打个招呼:幸会幸会!Nice to meet you!\n@itchat.msg_register(FRIENDS)\ndef add_friend(msg):\n print(u'[ Terminal Info ] New Friend Request 新朋友的请求,自动通过验证添加加好友 From: %s' % msg['RecommendInfo']['UserName']) \n itchat.add_friend(**msg['Text']) # 该操作会自动将新好友的消息录入,不需要重载通讯录\n itchat.send_msg(u'幸会幸会!Nice to meet you!', msg['RecommendInfo']['UserName'])\n\n \n# 在群里,如果收到@自己的文字信息,会自动回复:\n@itchat.msg_register(TEXT, isGroupChat=True)\ndef text_reply(msg):\n if msg['isAt']:\n print(u'[ Terminal Info ] Group@Info 在群里收到@自己的文字信息: %s From: %s %s' \n % (msg['Content'], msg['ActualNickName'], msg['FromUserName']))\n itchat.send(u'@%s\\u2005I received: %s' % (msg['ActualNickName'], msg['Content']), msg['FromUserName'])\n\n\nitchat.run()",
"Try the parcel enquiry bot by yourself\n\nScan below QR code to add friend IPA-Bot in WeChat\nSend a message to IPA-Bot, e.g. Parcel DZ140053180NZ\n\n<img src='reference/WeChat_IPA-Bot_QR.png' width=60% style=\"float: left;\">",
"# Click Kernel -> Interupt, then logout\nitchat.logout() # 安全退出",
"<img src='reference/WeChat_SamGu_QR.png' width=20% style=\"float: left;\">\nWorkshop Enhancements:\n\nExtend the parcel enquiry function to group chat when IPA-Bot is being @ : if msg['isAt']\nCreate a database/dataframe/csv for book keeping and administration\nProcess parcel enquiry for eligible customer only, by referring to a database."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MartyWeissman/Python-for-number-theory
|
P3wNT Notebook 1.ipynb
|
gpl-3.0
|
[
"Part 1. Computing with Python 3.x\nWhat is the difference between Python and a calculator? We begin this first lesson by showing how Python can be used as a calculator, and we move into some of the basic programming language constructs: data types, variables, lists, and loops.\nThis programming lesson complements Chapter 0 (Foundations) in An Illustrated Theory of Numbers.\nTable of Contents\n\nPython as a calculator\nCalculating with booleans\nDeclaring variables\nRanges\nIterating over a range\n\n<a id='calculator'></a>\nPython as a calculator\nDifferent kinds of data are stored as different types in Python. For example, if you wish to work with integers, your data is typically stored as an int. A real number might be stored as a float. There are types for booleans (True/False data), strings (like \"Hello World!\"), and many more we will see. \nA more complete reference for Python's numerical types and arithmetic operations can be found in the official Python documentation. The official Python tutorial is also a great place to start.\nPython allows you to perform arithmetic operations: addition, subtraction, multiplication, and division, on numerical types. The operation symbols are +, -, *, and /. Evaluate each of the following cells to see how Python performs operations on integers. To evaluate the cell, click anywhere within the cell to select it (a selected cell will probably have a thick <span style=\"color:green\">green</span> line on its left side) and use the keyboard shortcut Shift-Enter to evaluate. As you go through this and later lessons, try to predict what will happen when you evaluate the cell before you hit Shift-Enter.",
"2 + 3\n\n2 * 3\n\n5 - 11\n\n5.0 - 11\n\n5 / 11\n\n6 / 3\n\n5 // 11\n\n6 // 3",
"The results are probably not too surprising, though the last two require a bit of explanation. Python interprets the input number 5 as an int (integer) and 5.0 as a float. \"Float\" stands for \"floating point number,\" which are decimal approximations to real numbers. The word \"float\" refers to the fact that the decimal (or binary, for computers) point can float around (as in 1.2345 or 12.345 or 123.45 or 1234.5 or 0.00012345). There are deep computational issues related to how computers handle decimal approximations, and you can read about the IEEE standards if you're interested.\nPython enables different kinds of division. The single-slash division in Python 3.x gives a floating point approximation of the quotient. That's why 5 / 11 and 6 / 3 both output floats. On the other hand, 5 // 11 and 6 // 3 yield integer outputs (rounding down) -- this is useful, but one has to be careful!\nIn fact the designers of Python changed their mind. This tutorial assumes that you are using Python 3.x. If you are using Python 2.x, the command 5 / 11 would output zero.",
"-12 // 5",
"Why use integer division // and why use floating point division? In practice, integer division is typically a faster operation. So if you only need the rounded result (and that will often be the case), use integer division. It will run much faster than carrying out floating point division then manually rounding down.\nObserve that floating point operations involve approximation. The result of 5.0/11.0 might not be what you expect in the last digit. Over time, especially with repeated operations, floating point approximation errors can add up!\nPython allows you to group expressions with parentheses, and follows the order of operations that you learn in school.",
"(3 + 4) * 5\n\n3 + (4 * 5)\n\n3 + 4 * 5 # What do you think will be the result? Remember PEMDAS?",
"Now is a good time to try a few computations of your own, in the empty cell below. You can type any Python commands you want in the empty cell. If you want to insert a new cell into this notebook, it takes two steps:\n1. Click to the left of any existing cell. This should make a <span style=\"color:blue\">blue</span> bar appear to the left of the cell.\n2. Use the keyboard shortcut a to insert a new cell above the blue-selected cell or b to insert a new cell below the blue-selected cell.\nYou can also use the keyboard shortcut x do delete a blue-selected cell... be careful!",
"# An empty cell. Have fun!\n",
"For number theory, division with remainder is an operation of central importance. Integer division provides the quotient, and the operation % provides the remainder. It's a bit strange that the percent symbol is used for the remainder, but this dates at least to the early 1970s and has become standard across computer languages.",
"23 // 5 # Integer division\n\n23 % 5 # The remainder after division",
"Note in the code above, there are little \"comments\". To place a short comment on a line of code, just put a hashtag # at the end of the line of code, followed by your comment.\nPython gives a single command for division with remainder. Its output is a tuple.",
"divmod(23,5)\n\ntype(divmod(23,5))",
"All data in Python has a type, but a common complaint about Python is that types are a bit concealed \"under the hood\". But they are not far under the hood. Anyone can find out the type of some data with a single command.",
"type(3)\n\ntype(3.0)\n\ntype('Hello')\n\ntype([1,2,3])",
"The key to careful computation in Python is always being aware of the type of your data, and knowing how Python operates differently on data of different types.",
"3 + 3\n\n3.0 + 3.0\n\n'Hello' + 'World!'\n\n[1,2,3] + [4,5,6]\n\n3 + 3.0\n\n3 + 'Hello!'\n\n# An empty cell. Have fun!\n",
"As you can see, addition (the + operator) is interpreted differently in the contexts of numbers, strings, and lists. The designers of Python allowed us to add numbers of different types: if you try to operate on an int and a float, the int will typically be coerced into a float in order to perform the operation. But the designers of Python did not give meaning to the addition of a number with a string, for example. That's why you probably received a TypeError after trying the above line. \nOn the other hand, Python does interpret multiplication of a natural number with a string or a list.",
"3 * 'Hello!'\n\n0 * 'Hello!'\n\n2 * [1,2,3]",
"Can you create a string with 100 A's (like AAA...)? Use an appropriate operation in the cell below.",
"# Practice cell\n",
"Exponents in Python are given by the ** operator. The following lines compute 2 to the 1000th power, in two different ways.",
"2**1000\n\n2.0**1000",
"As before, Python interprets an operation (**) differently in different contexts. When given integer input, Python evaluates 2**1000 exactly. The result is a large integer. A nice fact about Python, for number theorists, is that it handles exact integers of arbitrary length! Many other programming languages (like C++) will give an error message if integers get too large in the midst of a computation. \nNew in version 3.x, Python implements long integers without giving signals to the programmer or changing types. In Python 2.x, there were two types: int for somewhat small integers (e.g., up to $2^{31}$) and long type for all larger integers. Python 2.x would signal which type of integer was being used, by placing the letter \"L\" at the end of a long integer. Now, in Python 3.x, the programmer doesn't really see the difference. There is only the int type. But Python still optimizes computations, using hardware functionality for arithmetic of small integers and custom routines for large integers. The programmer doesn't have to worry about it most of the time.\nFor scientific applications, one often wants to keep track of only a certain number of significant digits. If one computes the floating point exponent 2.0**1000, the result is a decimal approximation. It is still a float. The expression \"e+301\" stands for \"multiplied by 10 to the 301st power\", i.e., Python uses scientific notation for large floats.",
"type(2**1000)\n\ntype(2.0**1000)\n\n# An empty cell. Have fun!\n",
"Now is a good time for reflection. Double-click in the cell below to answer the given questions. Cells like this one are used for text rather than Python code. Text is entered using markdown, but you can typically just enter text as you would in any text editor without problems. Press shift-Enter after editing a markdown cell to complete the editing process. \nNote that a dropdown menu in the toolbar above the notebook allows you to choose whether a cell is Markdown or Code (or a few other things), if you want to add or remove markdown/code cells.\nExercises\n\n\nWhat data types have you seen, and what kinds of data are they used for? Can you remember them without looking back?\n\n\nHow is division / interpreted differently for different types of data?\n\n\nHow is multiplication * interpreted differently for different types of data?\n\n\nWhat is the difference between 100 and 100.0, for Python?\n\n\nDouble-click this markdown cell to edit it, and answer the exercises.\n<a id='booleans'></a>\nCalculating with booleans\nA boolean (type bool) is the smallest possible piece of data. While an int can be any integer, positive or negative, a boolean can only be one of two things: True or False. In this way, booleans are useful for storing the answers to yes/no questions. \nQuestions about (in)equality of numbers are answered in Python by operations with numerical input and boolean output. Here are some examples. A more complete reference is in the official Python documentation.",
"3 > 2\n\ntype(3 > 2)\n\n10 < 3\n\n2.4 < 2.4000001\n\n32 >= 32\n\n32 >= 31\n\n2 + 2 == 4",
"Which number is bigger: $23^{32}$ or $32^{23}$? Use the cell below to answer the question!",
"# Write your code here.\n",
"The expressions <, >, <=, >= are interpreted here as operations with numerical input and boolean output. The symbol == (two equal symbols!) gives a True result if the numbers are equal, and False if the numbers are not equal. An extremely common typo is to confuse = with ==. But the single equality symbol = has an entirely different meaning, as we shall see.\nUsing the remainder operator % and equality, we obtain a divisibility test.",
"63 % 7 == 0 # Is 63 divisible by 7?\n\n101 % 2 == 0 # Is 101 even?",
"Use the cell below to determine whether 1234567890 is divisible by 3.",
"# Your code goes here.\n",
"Booleans can be operated on by the standard logical operations and, or, not. In ordinary English usage, \"and\" and \"or\" are conjunctions, while here in Boolean algebra, \"and\" and \"or\" are operations with Boolean inputs and Boolean output. The precise meanings of \"and\" and \"or\" are given by the following truth tables.\n| and || True | False |\n|-----||------|-------|\n|||||\n| True || True | False |\n| False || False | False|\n| or || True | False |\n|-----||------|-------|\n|||||\n| True || True | True |\n| False || True | False|",
"True and False\n\nTrue or False\n\nTrue or True\n\nnot True",
"Use the truth tables to predict the result (True or False) of each of the following, before evaluating the code.",
"(2 > 3) and (3 > 2)\n\n(1 + 1 == 2) or (1 + 1 == 3)\n\nnot (-1 + 1 >= 0)\n\n2 + 2 == 4\n\n2 + 2 != 4 # For \"not equal\", Python uses the operation `!=`.\n\n2 + 2 != 5 # Is 2+2 *not* equal to 5?\n\nnot (2 + 2 == 5) # The same as above, but a bit longer to write.",
"Experiment below to see how Python handles a double or triple negative, i.e., something with a not not.",
"# Experiment here.\n",
"Python does give an interpretation to arithmetic operations with booleans and numbers. Try to guess this interpretation with the following examples. Change the examples to experiment!",
"False * 100\n\nTrue + 13",
"This ability of Python to interpret operations based on context is a mixed blessing. On one hand, it leads to handy shortcuts -- quick ways of writing complicated programs. On the other hand, it can lead to code that is harder to read, especially for a Python novice. Good programmers aim for code that is easy to read, not just short!\nThe Zen of Python is a series of 20 aphorisms for Python programmers. The first seven are below.\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\n\nExercises\n\n\nDid you look at the truth tables closely? Can you remember, from memory, what True or False equals, or what True and False equals? \n\n\nHow might you easily remember the truth tables? How do they resemble the standard English usage of the words \"and\" and \"or\"?\n\n\nIf you wanted to know whether a number, like 2349872348723, is a multiple of 7 but not a multiple of 11, how might you write this in one line of Python code?\n\n\nYou can chain together and commands, e.g., with an expression like True and True and True (which would evaluate to True). You can also group booleans, e.g., with True and (True or False). Experiment to figure out the order of operations (and, or, not) for booleans.\n\n\nThe operation xor means \"exclusive or\". Its truth table is: True xor True = False and False xor False = False and True xor False = True and False xor True = True. How might you implement xor in terms of the usual and, or, and not?",
"# Use this space to work on the exercises\n",
"<a id='variables'></a>\nDeclaring variables\nA central feature of programming is the declaration of variables. When you declare a variable, you are storing data in the computer's memory and you are assigning a name to that data. Both storage and name-assignment are carried out with the single equality symbol =.",
"e = 2.71828",
"With this command, the float 2.71828 is stored somewhere inside your computer, and Python can access this stored number by the name \"e\" thereafter. So if you want to compute \"e squared\", a single command will do.",
"e * e\n\ntype(e)",
"You can use just about any name you want for a variable, but your name must start with a letter, must not contain spaces, and your name must not be an existing Python word. Characters in a variable name can include letters (uppercase and lowercase) and numbers and underscores _. \nSo e is a valid name for a variable, but type is a bad name. It is very tempting for beginners to use very short abbreviation-style names for variables (like dx or vbn). But resist that temptation and use more descriptive names for variables, like difference_x or very_big_number. This will make your code readable by you and others!\nThere are different style conventions for variable names. We use lowercase names, with underscores separating words, roughly following Google's style conventions for Python code.",
"my_number = 17\n\nmy_number < 23",
"After you declare a variable, its value remains the same until it is changed. You can change the value of a variable with a simple assignment. After the above lines, the value of my_number is 17.",
"my_number = 3.14",
"This command reassigns the value of my_number to 3.14. Note that it changes the type too! It effectively overrides the previous value and replaces it with the new value.\nOften it is useful to change the value of a variable incrementally or recursively. Python, like many programming languages, allows one to assign variables in a self-referential way. What do you think the value of S will be after the following four lines?",
"S = 0\nS = S + 1\nS = S + 2\nS = S + 3\nprint(S)",
"The first line S = 0 is the initial declaration: the value 0 is stored in memory, and the name S is assigned to this value.\nThe next line S = S + 1 looks like nonsense, as an algebraic sentence. But reading = as assignment rather than equality, you should read the line S = S + 1 as assigning the value S + 1 to the name S. When Python interprets S = S + 1, it carries out the following steps.\n\nCompute the value of the right side, S+1. (The value is 1, since S was assigned the value 0 in the previous line.)\nAssign this value to the left side, S. (Now S has the value 1.)\n\nWell, this is a slight lie. Python probably does something more efficient, when given the command S = S + 1, since such operations are hard-wired in the computer and the Python interpreter is smart enough to take the most efficient route. But at this level, it is most useful to think of a self-referential assignment of the form X = expression(X) as a two step process as above.\n\nCompute the value of expression(X).\nAssign this value to X.\n\nNow consider the following three commands.",
"my_number = 17\nnew_number = my_number + 1\nmy_number = 3.14",
"What are the values of the variables my_number and new_number, after the execution of these three lines?\nTo access these values, you can use the print function.",
"print(my_number)\nprint(new_number)",
"Python is an interpreted language, which means (roughly) that Python carries out commands line-by-line from top to bottom. So consider the three lines\npython\nmy_number = 17\nnew_number = my_number + 1\nmy_number = 3.14\nLine 1 sets the value of my_number to 17. Line 2 sets the value of new_number to 18. Line 3 sets the value of my_number to 3.14. But Line 3 does not change the value of new_number at all.\n(This will become confusing and complicated later, as we study mutable and immutable types.)\nExercises\n\n\nWhat is the difference between = and == in the Python language?\n\n\nIf the variable x has value 3, and you then evaluate the Python command x = x * x, what will be the value of x after evaluation?\n\n\nImagine you have two variables a and b, and you want to switch their values. How could you do this in Python?",
"# Use this space to work on the exercises.\n",
"<a id='ranges'></a>\nLists and ranges\nPython stands out for the central role played by lists. A list is what it sounds like -- a list of data. Data within a list can be of any type. Multiple types are possible within the same list! The basic syntax for a list is to use brackets to enclose the list items and commas to separate the list items.",
"type([1,2,3])\n\ntype(['Hello',17])",
"There is another type called a tuple that we will rarely use. Tuples use parentheses for enclosure instead of brackets.",
"type((1,2,3))",
"There's another list-like type in Python 3, called the range type. Ranges are kind of like lists, but instead of plunking every item into a slot of memory, ranges just have to remember three integers: their start, their stop, and their step. \nThe range command creates a range with a given start, stop, and step. If you only input one number, the range will start at zero and use steps of one and will stop just before the given stop-number.\nOne can create a list from a range (plunking every term in the range into a slot of memory), by using the list command. Here are a few examples.",
"type(range(10)) # Ranges are their own type, in Python 3.x. Not in Python 2.x!\n\nlist(range(10)) # Let's see what's in the range. Note it starts at zero! Where does it stop?",
"A more complicated two-input form of the range command produces a range of integers starting at a given number, and terminating before another given number.",
"list(range(3,10))\n\nlist(range(-4,5))",
"This is a common source of difficulty for Python beginners. While the first parameter (-4) is the starting point of the list, the list ends just before the second parameter (5). This takes some getting used to, but experienced Python programmers grow to like this convention.\nThe length of a list can be accessed by the len command.",
"len([2,4,6])\n\nlen(range(10)) # The len command can deal with lists and ranges. No need to convert.\n\nlen(range(10,100)) # Can you figure out the length, before evaluating?",
"The final variant of the range command (for now) is the three-parameter command of the form range(a,b,s). This produces a list like range(a,b), but with a \"step size\" of s. In other words, it produces a list of integers, beginning at a, increasing by s from one entry to the next, and going up to (but not including) b. It is best to experiment a bit to get the feel for it!",
"list(range(1,10,2))\n\nlist(range(11,30,2))\n\nlist(range(-4,5,3))\n\nlist(range(10,100,17))",
"This can be used for descending ranges too, and observe that the final number b in range(a,b,s) is not included.",
"list(range(10,0,-1))",
"How many multiples of 7 are between 10 and 100? We can find out pretty quickly with the range command and the len command (to count).",
"list(range(10,100,7)) # What list will this create? It won't answer the question...\n\nlist(range(14,100,7)) # Starting at 14 gives the multiples of 7.\n\nlen(range(14,100,7)) # Gives the length of the list, and answers the question!",
"Exercises\n\n\nIf a and b are integers, what is the length of range(a,b)?\n\n\nUse a list and range command to produce the list [1,2,3,4,5,6,7,8,9,10].\n\n\nCreate the list [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5] with a single list and range command and another operation.\n\n\nHow many multiples of 3 are there between 300 and 3000?",
"# Use this space to work on the exercises.\n",
"<a id='iterating'></a>\nIterating over a range\nComputers are excellent at repetitive reliable tasks. If we wish to perform a similar computation, many times over, a computer a great tool. Here we look at a common and simple way to carry out a repetetive computation: the \"for loop\". The \"for loop\" iterates through items in a list or range, carrying out some action for each item. Two examples will illustrate.",
"for n in [1,2,3,4,5]:\n print(n*n)\n\nfor s in ['I','Am','Python']:\n print(s + \"!\")",
"The first loop, unraveled, carries out the following sequence of commands.",
"n = 1\nprint(n*n)\nn = 2\nprint(n*n)\nn = 3\nprint(n*n)\nn = 4\nprint(n*n)\nn = 5\nprint(n*n)",
"But the \"for loop\" is more efficient and more readable to programmers. Indeed, it saves the repetition of writing the same command print n*n over and over again. It also makes transparent, from the beginning, the range of values that n is assigned to. \nWhen you read and write \"for loops\", you should consider how they look unravelled -- that is how Python will carry out the loop. And when you find yourself faced with a repetetive task, you might consider whether it may be wrapped up in a for loop.\nTry to unravel the loop below, and predict the result, before evaluating the code.",
"P = 1\nfor n in range(1,6):\n P = P * n\nprint(P)",
"This might have been difficult! So what if you want to trace through the loop, as it goes? Sometimes, especially when debugging, it's useful to inspect every step of the loop to see what Python is doing. We can inspect the loop above, by inserting a print command within the scope of the loop.",
"P = 1\nfor n in range(1,6):\n P = P * n\n print(\"n is\",n,\"and P is\",P)\nprint(P)",
"Here we have used the print command with strings and numbers together. In Python 3.x, you can print multiple things on the same line by separating them by commas. The \"things\" can be strings (enclosed by single or double-quotes) and numbers (int, float, etc.).",
"print(\"My favorite number is\",17)",
"If we unravel the loop above, the linear sequence of commands interpreted by Python is the following.",
"P = 1\nn = 1\nP = P * n\nprint(\"n is\",n,\"and P is\",P)\nn = 2\nP = P * n\nprint(\"n is\",n,\"and P is\",P)\nn = 3\nP = P * n\nprint(\"n is\",n,\"and P is\",P)\nn = 4\nP = P * n\nprint(\"n is\",n,\"and P is\",P)\nn = 5\nP = P * n\nprint(\"n is\",n,\"and P is\",P)\nprint (P)",
"Let's analyze the loop syntax in more detail.\npython\nP = 1\nfor n in range(1,6):\n P = P * n # this command is in the scope of the loop.\n print(\"n is\",n,\"and P is\",P) # this command is in the scope of the loop too!\nprint(P)\nThe \"for\" command ends with a colon :, and the next two lines are indented. The colon and indentation are indicators of scope. The scope of the for loop begins after the colon, and includes all indented lines. The scope of the for loop is what is repeated in every step of the loop (in addition to the reassignment of n).",
"P = 1\nfor n in range(1,6):\n P = P * n # this command is in the scope of the loop.\n print(\"n is\",n,\"and P is\",P) # this command is in the scope of the loop too!\nprint(P)",
"If we change the indentation, it changes the scope of the for loop. Predict what the following loop will do, by unraveling, before evaluating it.",
"P = 1\nfor n in range(1,6):\n P = P * n\nprint(\"n is\",n,\"and P is\",P)\nprint(P)",
"Scopes can be nested by nesting indentation. What do you think the following loop will do? Can you unravel it?",
"for x in [1,2,3]:\n for y in ['a', 'b']:\n print(x,y)",
"How might you create a nested loop which prints 1 a then 2 a then 3 a then 1 b then 2 b then 3 b? Try it below.",
"# Insert your loop here.",
"Among popular programming languages, Python is particular about indentation. Other languages indicate scope with open/close braces, for example, and indentation is just a matter of style. By requiring indentation to indicate scope, Python effectively removes the need for open/close braces, and enforces a readable style.\nWe have now encountered data types, operations, variables, and loops. Taken together, these are powerful tools for computation! Try the following exercises for more practice. You can also try the exercises at the end of Chapter 0 of An Illustrated Theory of Numbers -- some can be done easily by writing a few lines of Python code.\nExercises\n\nDescribe how Python interprets division with remainder when the divisor and/or dividend is negative.\nWhat is the remainder when $2^{90}$ is divided by $91$?\nHow many multiples of 13 are there between 1 and 1000?\nHow many odd multiples of 13 are there between 1 and 1000?\nWhat is the sum of the numbers from 1 to 1000?\nWhat is the sum of the squares, from $1 \\cdot 1$ to $1000 \\cdot 1000$?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nadvamir/deep-learning
|
embeddings/Skip-Gram_word2vec.ipynb
|
mit
|
[
"Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.",
"import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils",
"Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()",
"Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.",
"words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))",
"And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.",
"vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]",
"Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.",
"## Your code here\nfrom collections import Counter\nfrom random import random\nfrom math import sqrt\n\ncounts = Counter(int_words)\nt = 100.0\nP = lambda x: 1.0 - sqrt(t / counts[x])\n\n# The final subsampled word list\ntrain_words = list(filter(lambda x: random() > P(x), int_words))\nprint('Final list of words = {}'.format(len(train_words)))",
"Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.",
"from random import randint\n\ndef get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n R = randint(1, window_size)\n \n return words[idx-R:idx] + words[idx+1:idx+R+1]",
"Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.",
"def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ",
"Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.",
"train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='outputs')",
"Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.",
"n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n embedding = tf.Variable(tf.truncated_normal([n_vocab, n_embedding]), name='Embedding') # create embedding weight matrix here\n embed = tf.nn.embedding_lookup(embedding, inputs, name='Embed') # use tf.nn.embedding_lookup to get the hidden layer output",
"Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.",
"# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding]), name='outW') # create softmax weight matrix here\n softmax_b = tf.zeros(n_vocab, name='outB') # create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(\n softmax_w, softmax_b,\n labels, embed,\n n_sampled, n_vocab, name='LossT')\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)",
"Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.",
"import random\nwith train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints",
"Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.",
"epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)",
"Restore the trained network if you need to:",
"with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)",
"Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io
|
v0.12.1/examples/notebooks/generated/tsa_arma_0.ipynb
|
bsd-3-clause
|
[
"Autoregressive Moving Average (ARMA): Sunspots data",
"%matplotlib inline\n\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\nfrom statsmodels.tsa.arima.model import ARIMA\n\nfrom statsmodels.graphics.api import qqplot",
"Sunspots Data",
"print(sm.datasets.sunspots.NOTE)\n\ndta = sm.datasets.sunspots.load_pandas().data\n\ndta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))\ndel dta[\"YEAR\"]\n\ndta.plot(figsize=(12,8));\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)\n\narma_mod20 = ARIMA(dta, order=(2, 0, 0)).fit()\nprint(arma_mod20.params)\n\narma_mod30 = ARIMA(dta, order=(3, 0, 0)).fit()\n\nprint(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)\n\nprint(arma_mod30.params)\n\nprint(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)",
"Does our model obey the theory?",
"sm.stats.durbin_watson(arma_mod30.resid.values)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax = arma_mod30.resid.plot(ax=ax);\n\nresid = arma_mod30.resid\n\nstats.normaltest(resid)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nfig = qqplot(resid, line='q', ax=ax, fit=True)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)\n\nr,q,p = sm.tsa.acf(resid.values.squeeze(), fft=True, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))",
"This indicates a lack of fit.\n\n\nIn-sample dynamic prediction. How good does our model do?",
"predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True)\nprint(predict_sunspots)\n\ndef mean_forecast_err(y, yhat):\n return y.sub(yhat).mean()\n\nmean_forecast_err(dta.SUNACTIVITY, predict_sunspots)",
"Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order)\nSimulated ARMA(4,1): Model Identification is Difficult",
"from statsmodels.tsa.arima_process import ArmaProcess\n\nnp.random.seed(1234)\n# include zero-th lag\narparams = np.array([1, .75, -.65, -.55, .9])\nmaparams = np.array([1, .65])",
"Let's make sure this model is estimable.",
"arma_t = ArmaProcess(arparams, maparams)\n\narma_t.isinvertible\n\narma_t.isstationary",
"What does this mean?",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax.plot(arma_t.generate_sample(nsample=50));\n\narparams = np.array([1, .35, -.15, .55, .1])\nmaparams = np.array([1, .65])\narma_t = ArmaProcess(arparams, maparams)\narma_t.isstationary\n\narma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2)",
"For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags.\nThe partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.",
"arma11 = ARIMA(arma_rvs, order=(1, 0, 1)).fit()\nresid = arma11.resid\nr,q,p = sm.tsa.acf(resid, fft=True, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))\n\narma41 = ARIMA(arma_rvs, order=(4, 0, 1)).fit()\nresid = arma41.resid\nr,q,p = sm.tsa.acf(resid, fft=True, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))",
"Exercise: How good of in-sample prediction can you do for another series, say, CPI",
"macrodta = sm.datasets.macrodata.load_pandas().data\nmacrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))\ncpi = macrodta[\"cpi\"]",
"Hint:",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax = cpi.plot(ax=ax);\nax.legend();",
"P-value of the unit-root test, resoundingly rejects the null of a unit-root.",
"print(sm.tsa.adfuller(cpi)[1])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
msadegh97/machine-learning-course
|
08-text-mining.ipynb
|
gpl-3.0
|
[
"Text mining\nText Assemble\nIt is observed that 70% of data available to any businesses is unstructured. The first step is collating unstructured data from different sources such as open-ended feedback, phone calls, email support, online chat and social media networks like Twitter, LinkedIn and Facebook. Assembling these data and applying mining/machine learning techniques to analyze them provides valuable opportunities for organizations to build more power into customer experience. \nThere are several libraries available for extracting text content from different formats discussed above. By far the best library that provides simple and single interface for multiple formats is ‘textract’ (open source MIT license). Note that as of now this library/package is available for Linux, Mac OS and not Windows. Below is a list of supported formats.\nFor example twitter\n#### API access token\n\nGoto https://apps.twitter.com/\nClick on 'Create New App'\nFill the required information and click on 'Create your Twitter Application'\nYou'll get the access details under 'Keys and Access Tokens' tab",
"import pandas as pd\nimport numpy as np\nimport tweepy\nfrom tweepy.streaming import StreamListener\nfrom tweepy import OAuthHandler\nfrom tweepy import Stream\n\naccess_token = \"8397390582---------------------------------\"\naccess_token_secret = \"dr5L3QHHkIls6Rbffz-------------------\"\nconsumer_key = \"U1eVHGzL-----------------\"\nconsumer_secret = \"qATe7kb41zRAz------------------------------------\"\n\nauth = tweepy.auth.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\n\napi = tweepy.API(auth)\n\nfetched_tweets = api.search(q=['Bitcoin','ethereum'], result_type='recent', lang='en', count=10)\nprint (\"Number of tweets: \", len(fetched_tweets))\n\nfor tweet in fetched_tweets:\n print ('Tweet AUTOR: ', tweet.author.name)\n print ('Tweet ID: ', tweet.id)\n print ('Tweet Text: ', tweet.text, '\\n')",
"There are many other way to collect data from PDF, Voice and etc. \nPreprocessing\nNLTK (Natural Language Toolkit)\nNLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries\n Bad news nltk not support persian \n Good news hazm is a similar library for persian language processing",
"import nltk \nnltk.download()",
"stopword",
"from nltk.corpus import stopwords\n\nstopwords.words('english')[:10]\n\nlen(stopwords.words())\n\nimport hazm \nhazm.stopwords_list()\n\nlen(hazm.stopwords_list())",
"Feature Extraction",
"from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer\n\ntext = pd.read_csv('./Datasets/text_mining.csv').drop(['document_id','direction'], axis =1)\ntext2 =text[(text['category_id'] == 1) | (text['category_id'] == 8) ]\n\ntext[text['category_id'] == 1].head()\n\ntext[text['category_id'] == 8].head()\n\nclimate_con = text[text['category_id'] == 1].iloc[0:5]\npolitics = text[text['category_id'] == 8].iloc[0:5]\n\ncountvectorizer = CountVectorizer()\n\ncli = ''\nfor i in climate_con.text.as_matrix(): cli = cli + ' ' + i\npol = ''\nfor i in politics.text.as_matrix(): pol = pol + ' ' + i\n\ncontent = [cli, pol]\n\ncountvectorizer.fit(content)\n\ndoc_vec = countvectorizer.transform(content)\n\ndf = pd.DataFrame(doc_vec.toarray().transpose(), index = countvectorizer.get_feature_names())\n\ndf.sort_values(0, ascending=False)\n\ndf.sort_values(1, ascending=False)\n\ntfidf = TfidfVectorizer()\n\ntfidf_vec = tfidf.fit_transform(content)\n\ndf2 = pd.DataFrame(tfidf_vec.toarray().transpose(), index =tfidf.get_feature_names())\n\ndf2.sort_values(0, ascending=False)\n\ndf2.sort_values(1, ascending=False)\n\ntfidf.vocabulary_",
"Stemming",
"from nltk.stem import SnowballStemmer\n\n\nstemmer = SnowballStemmer('english')\n\nstemmer.stem(\"impressive\")\n\nstemmer.stem(\"impressness\")\n\nfrom hazm import Stemmer\n\nstem2 = Stemmer()\n\nstem2.stem('کتاب ها')\n\nstem2.stem('کتابهایش')\n\nstem2.stem('کتاب هایم')",
"Naive Bayes\n\nNaive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features.\n\nBayes Rule:\n$$ P(Y|X) = \\frac{P(Y)P(X|Y)}{P(X)} $$\nand Naive assumption:\n$$ P(x_i | y,x_1,x_2,...x_{i-1},x_{i+1},...,x_n) = P(x_i|y) $$\nleads to:\n$$ P(y| x_1,x_2,...,x_n) = \\frac{P(y) \\prod_{i=1}^n P(x_i|y)}{P(x_1,x_2,...,x_n)} $$\nIf the purpose is only to classify:\n$$ \\hat{y} = arg\\max_y P(y)\\prod_{i=1}^n P(x_i|y) $$",
"from sklearn.naive_bayes import GaussianNB\n\nmodel = GaussianNB()\n\nmodel.fit(tfidf.transform(text2.text).toarray(), text2.category_id)\n\nmodel.sigma_.shape\n\nfrom sklearn.metrics import classification_report\nprint(classification_report(text2.category_id, model.predict(tfidf.transform(text2.text).toarray())))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy
|
v0.6/_downloads/Advanced_Sounding.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Advanced Sounding\nPlot a sounding using MetPy with more advanced features.\nBeyond just plotting data, this uses calculations from metpy.calc to find the lifted\ncondensation level (LCL) and the profile of a surface-based parcel. The area between the\nambient profile and the parcel profile is colored as well.",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, SkewT\nfrom metpy.units import units",
"Upper air data can be obtained using the siphon package, but for this example we will use\nsome of MetPy's sample data.",
"col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\ndf['u_wind'], df['v_wind'] = mpcalc.get_wind_components(df['speed'],\n np.deg2rad(df['direction']))\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)",
"We will pull the data out of the example dataset into individual variables and\nassign units.",
"p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.get_wind_components(wind_speed, wind_dir)",
"Create a new figure. The dimensions here give a good aspect ratio.",
"fig = plt.figure(figsize=(9, 9))\nadd_metpy_logo(fig, 115, 100)\nskew = SkewT(fig, rotation=45)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Calculate LCL height and plot as black dot\nlcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\n\n# Calculate full parcel profile and add to plot as black line\nprof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')\nskew.plot(p, prof, 'k', linewidth=2)\n\n# Shade areas of CAPE and CIN\nskew.shade_cin(p, T, prof)\nskew.shade_cape(p, T, prof)\n\n# An example of a slanted line at constant T -- in this case the 0\n# isotherm\nskew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Show the plot\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
weixuanfu2016/tpot
|
tutorials/Portuguese Bank Marketing/Portuguese Bank Marketing Stratergy.ipynb
|
lgpl-3.0
|
[
"Portuguese Bank Marketing Stratergy- TPOT Tutorial\nThe data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed.\nhttps://archive.ics.uci.edu/ml/datasets/Bank+Marketing",
"# Import required libraries\nfrom tpot import TPOTClassifier\nfrom sklearn.cross_validation import train_test_split\nimport pandas as pd \nimport numpy as np\n\n#Load the data\nMarketing=pd.read_csv('Data_FinalProject.csv')\nMarketing.head(5)",
"Data Exploration",
"Marketing.groupby('loan').y.value_counts()\n\nMarketing.groupby(['loan','marital']).y.value_counts()",
"Data Munging\nThe first and most important step in using TPOT on any data set is to rename the target class/response variable to class.",
"Marketing.rename(columns={'y': 'class'}, inplace=True)",
"At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 11 categorical variables \nwhich contain non-numerical values: job, marital, education, default, housing, loan, contact, month, day_of_week, poutcome, class.",
"Marketing.dtypes",
"We then check the number of levels that each of the five categorical variables have.",
"for cat in ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome' ,'class']:\n print(\"Number of levels in category '{0}': \\b {1:2.2f} \".format(cat, Marketing[cat].unique().size))",
"As we can see, contact and poutcome have few levels. Let's find out what they are.",
"for cat in ['contact', 'poutcome','class', 'marital', 'default', 'housing', 'loan']:\n print(\"Levels for catgeory '{0}': {1}\".format(cat, Marketing[cat].unique())) ",
"We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.",
"Marketing['marital'] = Marketing['marital'].map({'married':0,'single':1,'divorced':2,'unknown':3})\nMarketing['default'] = Marketing['default'].map({'no':0,'yes':1,'unknown':2})\nMarketing['housing'] = Marketing['housing'].map({'no':0,'yes':1,'unknown':2})\nMarketing['loan'] = Marketing['loan'].map({'no':0,'yes':1,'unknown':2})\nMarketing['contact'] = Marketing['contact'].map({'telephone':0,'cellular':1})\nMarketing['poutcome'] = Marketing['poutcome'].map({'nonexistent':0,'failure':1,'success':2})\nMarketing['class'] = Marketing['class'].map({'no':0,'yes':1})\n\nMarketing = Marketing.fillna(-999)\npd.isnull(Marketing).any()",
"For other categorical variables, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.",
"from sklearn.preprocessing import MultiLabelBinarizer\nmlb = MultiLabelBinarizer()\n\njob_Trans = mlb.fit_transform([{str(val)} for val in Marketing['job'].values])\neducation_Trans = mlb.fit_transform([{str(val)} for val in Marketing['education'].values])\nmonth_Trans = mlb.fit_transform([{str(val)} for val in Marketing['month'].values])\nday_of_week_Trans = mlb.fit_transform([{str(val)} for val in Marketing['day_of_week'].values])\n\nday_of_week_Trans",
"Drop the unused features from the dataset.",
"marketing_new = Marketing.drop(['marital','default','housing','loan','contact','poutcome','class','job','education','month','day_of_week'], axis=1)\n\nassert (len(Marketing['day_of_week'].unique()) == len(mlb.classes_)), \"Not Equal\" #check correct encoding done\n\nMarketing['day_of_week'].unique(),mlb.classes_",
"We then add the encoded features to form the final dataset to be used with TPOT.",
"marketing_new = np.hstack((marketing_new.values, job_Trans, education_Trans, month_Trans, day_of_week_Trans))\n\nnp.isnan(marketing_new).any()",
"Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.",
"marketing_new[0].size",
"Finally we store the class labels, which we need to predict, in a separate variable.",
"marketing_class = Marketing['class'].values",
"Data Analysis using TPOT\nTo begin our analysis, we need to divide our training data into training and validation sets. The validation set is just to give us an idea of the test set error. The model selection and tuning is entirely taken care of by TPOT, so if we want to, we can skip creating this validation set.",
"training_indices, validation_indices = training_indices, testing_indices = train_test_split(Marketing.index, stratify = marketing_class, train_size=0.75, test_size=0.25)\ntraining_indices.size, validation_indices.size",
"After that, we proceed to calling the fit(), score() and export() functions on our training dataset.\nAn important TPOT parameter to set is the number of generations (via the generations kwarg). Since our aim is to just illustrate the use of TPOT, we assume the default setting of 100 generations, whilst bounding the total running time via the max_time_mins kwarg (which may, essentially, override the former setting). Further, we enable control for the maximum amount of time allowed for optimization of a single pipeline, via max_eval_time_mins.\nOn a standard laptop with 4GB RAM, each generation takes approximately 5 minutes to run. Thus, for the default value of 100, without the explicit duration bound, the total run time could be roughly around 8 hours.",
"tpot = TPOTClassifier(verbosity=2, max_time_mins=2, max_eval_time_mins=0.04, population_size=15)\ntpot.fit(marketing_new[training_indices], marketing_class[training_indices])",
"In the above, 4 generations were computed, each giving the training efficiency of fitting model on the training set. As evident, the best pipeline is the one that has the CV score of 91.373%. The model that produces this result is one that fits a decision tree algorithm on the data set. Next, the test error is computed for validation purposes.",
"tpot.score(marketing_new[validation_indices], Marketing.loc[validation_indices, 'class'].values)\n\ntpot.export('tpot_marketing_pipeline.py')\n\n# %load tpot_marketing_pipeline.py\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\n\n# NOTE: Make sure that the class is labeled 'target' in the data file\ntpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)\nfeatures = tpot_data.drop('target', axis=1).values\ntraining_features, testing_features, training_target, testing_target = \\\n train_test_split(features, tpot_data['target'].values, random_state=None)\n\n# Average CV score on the training set was:0.913728927925\nexported_pipeline = DecisionTreeClassifier(criterion=\"gini\", max_depth=5, min_samples_leaf=16, min_samples_split=8)\n\nexported_pipeline.fit(training_features, training_target)\nresults = exported_pipeline.predict(testing_features)\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ernestyalumni/servetheloop
|
NISTFund/NISTFund.ipynb
|
mit
|
[
"National Institute of Standard and Technology (NIST) Fundamental Constants and official units conversions\n\nBuild the most current and accurate copy, directly from webscraping the NIST website, of a table of fundamental constants and unit conversions and have the values ready to use in Python.",
"import os, sys\nimport decimal\nfrom decimal import Decimal\n\nimport pandas as pd\n\n# get the current directory and files inside \nprint(os.getcwd()); print(os.listdir( os.getcwd() ));\n\nfrom NISTFund import retrieve_file, scraping_allascii, init_FundConst, make_pd_alphabeticalconv_lst, make_pd_conv_lst",
"retrieve_file will directly download the ASCII file that NIST uses in their website for Fundamental constants. By default, it'll check, and create if necessary, for subdirectory ./rawdata/ and put the ASCII text file there.",
"urladdr = retrieve_file(); print(urladdr);",
"scraping_allascii will parse this ASCII table. Then, the following step will put it into a pandas DataFrame with the correct header.",
"os.path.isfile('./rawdata/allascii.txt') \n\nlines,title,src,header,rawtbl,tbl=scraping_allascii()\n\nFundConst = pd.DataFrame(tbl, columns=header)",
"The above 3 lines are what init_FundConst essentially does.",
"FundConst = init_FundConst()",
"NIST Official conversions\nTo start creating to table (when it doesn't exist yet), run the following:",
"DF_conv=make_pd_conv_lst()\n\nfrom NISTFund import scraped_BS, NISTCONValpha\nconvBS = scraped_BS(NISTCONValpha)\n\nconvBS.soup.find_all(\"table\",{\"class\":'texttable'})",
"Lengthy, detailed explanation of how to create or modify the webscraping functions make_conv_lst, make_pd_conv_lst in the case of changes made on the NIST website, breaking previous links; drop down below this section if it had worked above\nIf you obtain errors as such, this means that the webmasters of the NIST site made changes. Then you'd have to manually go in and write another webscraping procedure. However, the Python function I created at least gives a guideline or outline of how one should proceed. Basically, it's a combination of using the Developer Inspection tool of your web browser (of choice) and using BeautifulSoup to traverse the HTML code of the webpage. \nEY : 20170424 note: I had imagined before of a general Python function/webscraper, that could search out the key words or terms so that even if the webmaster(s) change(s) the webpage, it'll dynamically and automatically scrape for the table of unit conversions. I don't know how to do that; please let me know if you do (email, twitter, github comment, etc.)\nOtherwise, I go manually to the webpage and look at the link I want to scrape, directly. I see \nNIST Guide to the SI, Appendix B.8: Factors for Units Listed Alphabetically \n NIST Guide to the SI, Appendix B.9: Factors for units listed by kind of quantity or field of science\nShare",
"convBS = scraped_BS(\"https://www.nist.gov/physical-measurement-laboratory/nist-guide-si-appendix-b8\")\n\nconvBS.soup.find_all(\"div\",{\"class\":\"table-inner\"})\n\nconvBS.soup.find_all(\"table\"); \nprint(len( convBS.soup.find_all(\"table\")) ) # there are 26 letters in the alphabet; NIST has entries for 21 of them\n\nconvBS.convtbls = convBS.soup.find_all(\"table\")\nconvdata=[]\nconvdata2=[]\nheaders = convBS.convtbls[0].find_all('tr')[1].find_all('th')\nheaders = [ele.text.replace(' ','') for ele in headers]\n\nheaders\n\nfor tbl in convBS.convtbls:\n for row in tbl.find_all('tr'):\n if row.find_all('td') != []:\n if row.text != '':\n rowsplit = row.text.replace(\"\\n\",'',1).split('\\n')\n try:\n rowsplit = [pt.replace(u'\\xa0',u' ').strip() for pt in rowsplit]\n except UnicodeDecodeError as err:\n print rowsplit\n Break\n raise err\n convdata.append( rowsplit )\n if len(row.find_all('td')) == (len(headers)+1):\n convdata2.append( row.find_all('td'))\n\n\nprint(len(convdata));\nprint(len(convdata2))\n\nconvdata3 = []\nfor row in convdata2:\n rowout = []\n rowout.append( row[0].text.strip())\n rowout.append( row[1].text.strip())\n value = (row[2].text+row[3].text).strip().replace(u'\\xa0',' ').replace(u'\\n',' ').replace(' ','')\n \n rowout.append(Decimal( value ))\n convdata3.append(rowout)\n\nprint(len(convdata3))\n\npd.DataFrame(convdata3,columns=headers).head()\n\nprint(len(convdata))",
"dealing with Appendix B.9: Factors for units listed by kind of quantity or field of science",
"convBS = scraped_BS(\"https://www.nist.gov/pml/nist-guide-si-appendix-b9-factors-units-listed-kind-quantity-or-field-science\")\n\nconvBS.convtbls = convBS.soup.find_all(\"table\")\nprint(len(convBS.convtbls))\n\nheaders = convBS.convtbls[1].find_all('tr')[1].find_all('th')\nheaders = [ele.text.replace(' ','') for ele in headers]\n\nheaders\n\nconvBS.convtbls[1].find_all('tr')\n\nfor rows in convBS.convtbls[1].find_all('tr'):\n print rows.find_all('td')\n\nfor row in convBS.convtbls[1].find_all('tr'):\n if row.find_all('td') != []:\n if row.text != '':\n rowsplit = row.text.split('\\n')\n# print rowsplit\n if u'' in rowsplit:\n rowsplit.remove(u'')\n print rowsplit\n\ntest_convdata=[]\nfield_of_science = \"\"\nfor row in convBS.convtbls[1].find_all('tr'):\n if row.find_all('td') != []:\n if row.text != '':\n rowsplit = row.text.split('\\n')\n if u'' in rowsplit:\n rowsplit.remove(u'')\n if len(rowsplit) is 1:\n field_of_science = rowsplit[0]\n print field_of_science\n elif field_of_science is not \"\":\n rowsplit.append(field_of_science)\n print rowsplit\n# print len(row.find_all('td')) \n if len(row.find_all('td')) is (len(headers)+1):\n test_convdata.append( rowsplit )\n\ntest_convdata\n\nfrom NISTFund import make_conv_lst\n\ntest_conv=make_conv_lst()\n\ntest_conv[2]",
"Finally",
"DF_alphaconv=make_pd_alphabeticalconv_lst()\n\nDF_alphaconv.head()\n\nDF_conv= make_pd_conv_lst()\n\nprint( DF_conv.head() )\nDF_conv.describe()\n",
"Once the data has all been made, you only need to run the following 3 commands to start using the data",
"FundConst = init_FundConst()\n\nconv = pd.read_pickle('./rawdata/DF_conv')\nalphaconv = pd.read_pickle('./rawdata/DF_alphabeticalconv')\n\n\nalphaconv"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hhain/sdap17
|
notebooks/solution_ueb01/02_Classification.ipynb
|
mit
|
[
"Aufgabe 2: Classification\nA short test to examine the performance gain when using multiple cores on sklearn's esemble classifier random forest.\nDepending on the available system the maximum number of jobs to test and the sample size can be adjusted by changing the respective parameters.",
"# imports\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nimport time\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"First we create a training set of size num_samples and num_features.",
"num_samples = 500 * 1000\nnum_features = 40\nX, y = make_classification(n_samples=num_samples, n_features=num_features)",
"Next we run a performance test on the created data set. Therefor we train a random forest classifier multiple times and and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.",
"# test different number of cores: here max 8\nmax_cores = 8\nnum_cpu_list = list(range(1,max_cores + 1))\nmax_sample_list = [int(l * num_samples) for l in [0.1, 0.2, 1, 0.001]]\ntraining_times_all = []\n\n# the default setting for classifier\nclf = RandomForestClassifier()\n\nfor max_sample in max_sample_list:\n training_times = []\n for num_cpu in num_cpu_list:\n # change number of cores\n clf.set_params(n_jobs=num_cpu)\n # train classifier on training data\n t = %timeit -o clf.fit(X[:max_sample+1], y[:max_sample+1])\n # save the runtime to the list\n training_times.append(t.best)\n # print logging message\n print(\"Computing for {} samples and {} cores DONE.\".format(max_sample,num_cpu))\n \n training_times_all.append(training_times)\n\nprint(\"All computations DONE.\")",
"Finally we plot and evaluate our results.",
"plt.plot(num_cpu_list, training_times_all[0], 'ro', label=\"{}k\".format(max_sample_list[0]//1000))\nplt.plot(num_cpu_list, training_times_all[1], \"bs\" , label=\"{}k\".format(max_sample_list[1]//1000))\nplt.plot(num_cpu_list, training_times_all[2], \"g^\" , label=\"{}k\".format(max_sample_list[2]//1000))\nplt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[2])+1])\nplt.title(\"Training time vs #CPU Cores\")\nplt.xlabel(\"#CPU Cores\")\nplt.ylabel(\"training time [s]\")\nplt.legend()\nplt.show()",
"The training time is inversely proportional to the number of used cpu cores.",
"plt.plot(num_cpu_list, training_times_all[3], 'ro', label=\"{}k\".format(max_sample_list[3]/1000))\nplt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[3])+1])\nplt.title(\"Training time vs #CPU Cores on small dataset\")\nplt.xlabel(\"#CPU Cores\")\nplt.ylabel(\"training time [s]\")\nplt.legend()\nplt.show()",
"However for small datasets the overhead introduced by multiprocessing and context copying can be higher than the actual execution time needed for training. In that case using multiple cores (n_jobs > 1) will not lead to decreased execution times."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dongwooc/StatisticalMethods
|
notes/InferenceSandbox.ipynb
|
gpl-2.0
|
[
"Inference Sandbox\nIn this notebook, we'll mock up some data from the linear model, as reviewed here. Then it's your job to implement a Metropolis sampler and constrain the posterior distriubtion. The goal is to play with various strategies for accelerating the convergence and acceptance rate of the chain. Remember to check the convergence and stationarity of your chains, and compare them to the known analytic posterior for this problem!\nGenerate a data set:",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 5.0) \n\n# the model parameters\na = np.pi\nb = 1.6818\n\n# my arbitrary constants\nmu_x = np.exp(1.0) # see definitions above\ntau_x = 1.0\ns = 1.0\nN = 50 # number of data points\n\n# get some x's and y's\nx = mu_x + tau_x*np.random.randn(N)\ny = a + b*x + s*np.random.randn(N)\n\nplt.plot(x, y, 'o');",
"Package up a log-posterior function.",
"def lnPost(params, x, y):\n # This is written for clarity rather than numerical efficiency. Feel free to tweak it.\n a = params[0]\n b = params[1]\n lnp = 0.0\n # Using informative priors to achieve faster convergence is cheating in this exercise!\n # But this is where you would add them.\n lnp += -0.5*np.sum((a+b*x - y)**2)\n return lnp",
"Convenience functions encoding the exact posterior:",
"class ExactPosterior:\n def __init__(self, x, y, a0, b0):\n X = np.matrix(np.vstack([np.ones(len(x)), x]).T)\n Y = np.matrix(y).T\n self.invcov = X.T * X\n self.covariance = np.linalg.inv(self.invcov)\n self.mean = self.covariance * X.T * Y\n self.a_array = np.arange(0.0, 6.0, 0.02)\n self.b_array = np.arange(0.0, 3.25, 0.02)\n self.P_of_a = np.array([self.marg_a(a) for a in self.a_array])\n self.P_of_b = np.array([self.marg_b(b) for b in self.b_array])\n self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array])\n self.P_of_ab = np.exp(self.P_of_ab)\n self.renorm = 1.0/np.sum(self.P_of_ab)\n self.P_of_ab = self.P_of_ab * self.renorm\n self.levels = scipy.stats.chi2.cdf(np.arange(1,4)**2, 1) # confidence levels corresponding to contours below\n self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2))\n def lnpost(self, a, b): # the 2D posterior\n z = self.mean - np.matrix([[a],[b]])\n return -0.5 * (z.T * self.invcov * z)[0,0]\n def marg_a(self, a): # marginal posterior of a\n return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0]))\n def marg_b(self, b): # marginal posterior of b\n return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1]))\nexact = ExactPosterior(x, y, a, b)",
"Demo some plots of the exact posterior distribution",
"plt.plot(exact.a_array, exact.P_of_a);\n\nplt.plot(exact.b_array, exact.P_of_b);\n\nplt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);\nplt.plot(a, b, 'o', color='red');",
"Ok, you're almost ready to go! A decidely minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to\n\nvisually inspect traces of each parameter to see whether they appear converged \ncompare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution\nNormally, you should always use quantitative tests of convergence in addition to visual inspection, as you saw on Tuesday. For this class (only), let's save some time by relying only on visual impressions and comparison to the exact posterior.\n\n\n\n(see the snippets farther down)\nIf you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class.\nOnce you have a working sampler, the question is: how can we make it converge faster? Experiment! We'll compare notes in a bit.",
"Nsamples = # fill in a number\nsamples = np.zeros((Nsamples, 2))\n# put any more global definitions here\n\nfor i in range(Nsamples):\n a_try, b_try = proposal() # propose new parameter value(s)\n lnp_try = lnPost([a_try,b_try], x, y) # calculate posterior density for the proposal\n if we_accept_this_proposal(lnp_try, lnp_current):\n # do something\n else:\n # do something else\n\nplt.rcParams['figure.figsize'] = (12.0, 3.0)\nplt.plot(samples[:,0]);\nplt.plot(samples[:,1]);\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.plot(samples[:,0], samples[:,1]);\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.hist(samples[:,0], 20, normed=True, color='cyan');\nplt.plot(exact.a_array, exact.P_of_a, color='red');\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.hist(samples[:,1], 20, normed=True, color='cyan');\nplt.plot(exact.b_array, exact.P_of_b, color='red');\n\n# If you know how to easily overlay the 2D sample and theoretical confidence regions, by all means do so."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
seblabbe/MATH2010-Logiciels-mathematiques
|
NotesDeCours/07-limites-calcul-diff.ipynb
|
gpl-3.0
|
[
"$$\n\\def\\CC{\\bf C}\n\\def\\QQ{\\bf Q}\n\\def\\RR{\\bf R}\n\\def\\ZZ{\\bf Z}\n\\def\\NN{\\bf N}\n$$\nCalcul différentiel et intégral",
"from __future__ import division, print_function # Python 3\nfrom sympy import init_printing\ninit_printing(use_latex='mathjax',use_unicode=False) # Affichage des résultats",
"Cette section concerne le calcul différentiel et intégral. On trouvera d'autres exemples dans le tutoriel de Sympy sur le même sujet: http://docs.sympy.org/latest/tutorial/calculus.html\nLimites\nPour calculer la limite d'une expression lorsqu'une variable tend vers une valeur, on utilise la fonction limit de sympy avec la syntaxe limit(expression, variable, valeur). :",
"from sympy import limit, sin, S\nfrom sympy.abc import x",
"Par exemple, pour évaluer la limite lorsque x tend vers 0 de l'expression $(\\sin(x)-x)/x^3$, on écrit:",
"limit((sin(x)-x)/x**3, x, 0)",
"La limite de $f(x)=2x+1$ lorsque $x \\to 5/2$ :",
"limit(2*x+1, x, S(5)/2) # la fonction S permet de créer un nombre rationel",
"Pour calculer la limite à gauche en un point, on doit spécifier l'option dir=\"-\" :",
"limit(1/x, x, 0, dir=\"-\")",
"Pour calculer la limite à droite en un point, on doit spécifier l'option dir=\"+\" :",
"limit(1/x, x, 0, dir=\"+\")",
"Lorsque la direction n'est pas spécifiée, c'est la limite à droite (dir=\"+\") qui est calculée par défaut:",
"limit(1/x, x, 0)",
"En sympy, tout comme dans SageMath, le symbole oo représente l'infini $\\infty$. Les deux o collés resemblent au symbole de l'infini 8 à l'horizontal. Les opérations d'addition, de soustraction, de multiplication, etc. sont possibles avec l'infini oo tant qu'elle soient bien définies. On doit l'importer pour l'utiliser:",
"from sympy import oo\noo\n\n5 - oo\n\noo - oo # nan signifie \"Not A Number\"",
"On peut calculer la limite d'une expression lorsque x tend vers plus l'infini:",
"limit(1/x, x, oo)",
"et aussi lorsque x tend vers moins l'infini:",
"limit(4+x*exp(x), x, -oo)",
"Sympy procède à des simplifications lorsque possible:",
"limit((1+1/x)**x, x, oo)",
"Sommes\nEn Python, il existe une fonction (sum) que l'on a pas besoin d'importer et qui permet de calculer la somme des valeurs d'une liste:",
"sum([1,2,3,4,5])",
"Cette fonction sum permet aussi de calculer une somme impliquant des variables et expressions symboliques de SymPy:",
"from sympy import tan\nfrom sympy.abc import x,z\nsum([1,2,3,4,5,x,tan(z)])",
"Par contre, sum ne permet pas de calculer des sommes infinies ou encore des séries données par un terme général. En SymPy, il existe une autre fonction (summation) pour calculer des sommes possiblement infinies d'expressions symboliques:",
"from sympy import summation",
"Pour calculer la somme d'une série dont le terme général est donné par une expression qui dépend de n pour toutes les valeurs entières de n entre debut et fin (debut et fin inclus), on utilise la syntaxe summation(expression (n,debut,fin)) :",
"from sympy.abc import n\nsummation(n, (n,1,5))",
"Le début et la fin de l'intervalle des valeurs de n peut être donné par des variables symboliques:",
"from sympy.abc import a,b\nsummation(n, (n,1,b))\n\nsummation(n, (n,a,b))",
"Pour faire la somme d'une série pour tous les nombres entiers de 1 à l'infini, on utilise le symbole oo :",
"from sympy import oo\nsummation(1/n**2, (n, 1, oo))",
"Si la série est divergente, elle sera évaluée à oo ou encore elle restera non évaluée:",
"summation(n, (n,1,oo))\n\nsummation((-1)**n, (n,1,oo))",
"Sympy peut aussi calculer une double somme. Il suffit de spéficier l'intervalle des valeurs pour chacune des variables en terminant avec la variable dont la somme est effectuée en dernier:",
"from sympy.abc import m,n\nsummation(n*m, (n,1,m), (m,1,10))",
"Les doubles sommes fonctionnent aussi avec des intervalles infinis:",
"summation(1/(n*m)**2, (n,1,oo), (m,1,oo))",
"Produit\nComme pour la somme, le calcul d'un produit dont le terme général est donné par une expression qui dépend de n pour toutes les valeurs entières de n entre debut et fin (debut et fin inclus), on utilise la syntaxe product(expression (n,debut,fin)) :",
"from sympy import product\nfrom sympy.abc import n,b\nproduct(n, (n,1,5))\n\nproduct(n, (n,1,b))",
"Voici un autre exemple:",
"product(n*(n+1), (n, 1, b))",
"Calcul différentiel\nPour dériver une fonction par rapport à une variable x, on utilise la fonction diff de sympy avec la syntaxe diff(fonction, x) :",
"from sympy import diff",
"Faisons quelques importations de fonctions et variables pour la suite:",
"from sympy import sin,cos,tan,atan,pi\nfrom sympy.abc import x,y",
"On calcule la dérivée de $\\sin(x)$ :",
"diff(sin(x), x)",
"Voici quelques autres exemples:",
"diff(cos(x**3), x)\n\ndiff(atan(2*x), x)\n\ndiff(1/tan(x), x)",
"Pour calculer la i-ème dérivée d'une fonction, on ajoute autant de variables que nécessaire ou bien on spécifie le nombre de dérivées à faire:",
"diff(sin(x), x, x, x)\n\ndiff(sin(x), x, 3)",
"Cela fonctionne aussi avec des variables différentes:",
"diff(x**2*y**3, x, y, y)",
"Calcul intégral\nLe calcul d'une intégrale indéfinie se fait avec la fonction integrate avec la syntaxe integrate(f, x) :",
"from sympy import integrate",
"Par exemple:",
"integrate(1/x, x)",
"Le calcul d'une intégrale définie se fait aussi avec la fonction integrate avec la syntaxe integrate(f, (x, a, b)) :",
"integrate(1/x, (x, 1, 57))",
"Voici quelques autres exemples:",
"from sympy import exp\nintegrate(cos(x)*exp(x), x)\n\nintegrate(x**2, (x,0,1))",
"L'intégrale d'une fonction rationnelle:",
"integrate((x+1)/(x**2+4*x+4), x)",
"L'intégrale d'une fonction exponentielle polynomiale:",
"integrate(5*x**2 * exp(x) * sin(x), x)",
"Deux intégrales non élémentaires:",
"from sympy import erf\nintegrate(exp(-x**2)*erf(x), x)",
"Calculer l'intégrale de $x^2 \\cos(x)$ par rapport à $x$ :",
"integrate(x**2 * cos(x), x)",
"Calculer l'intégrale définie de $x^2 \\cos(x)$ par rapport à $x$ sur l'intervalle de $0$ à $\\pi/2$ :",
"integrate(x**2 * cos(x), (x, 0, pi/2))",
"Sommes, produits, dérivées et intégrales non évaluées\nLes fonctions summation, product, diff et integrate ont tous un équivalent qui retourne un résultat non évalué. Elles s'utilisent avec la même syntaxe, mais portent un autre nom et commencent avec une majuscule: Sum, Product, Derivative, Integral.",
"from sympy import Sum, Product, Derivative, Integral, sin, oo\nfrom sympy.abc import n, x\nSum(1/n**2, (n, 1, oo))\n\nProduct(n, (n,1,10))\n\nDerivative(sin(x**2), x)\n\nIntegral(1/x**2, (x,1,oo))",
"Pour les évaluer, on ajoute .doit() :",
"Sum(1/n**2, (n, 1, oo)).doit()\n\nProduct(n, (n,1,10)).doit()\n\nDerivative(sin(x**2), x).doit()\n\nIntegral(1/x**2, (x,1,oo)).doit()",
"Cela est utile pour écrire des équations:",
"A = Sum(1/n**2, (n, 1, oo))\nB = Product(n, (n,1,10))\nC = Derivative(sin(x**2), x)\nD = Integral(1/x**2, (x,1,oo))\nfrom sympy import Eq\nEq(A, A.doit())\n\nEq(B, B.doit())\n\nEq(C, C.doit())\n\nEq(D, D.doit())",
"Intégrales multiples\nPour faire une intégrale double, on peut intégrer le résultat d'une première intégration comme ceci:",
"from sympy.abc import x,y\nintegrate(integrate(x**2+y**2, x), y)",
"Mais, il est plus commode d'utiliser une seule fois la commande integrate et sympy permet de le faire:",
"integrate(x**2+y**2, x, y)",
"Pour les intégrales définies multiples, on spécifie les intervalles pour chaque variable entre parenthèses. Ici, on fait l'intégrale sur les valeurs de x dans l'intervalle [0,y], puis pour les valeurs de y dans l'intervalle [0,10] :",
"integrate(x**2+y**2, (x,0,y), (y,0,10))",
"Développement en séries\nOn calcule la série de Taylor d'une expression qui dépend de x au point x0 d'ordre n avec la syntaxe series(expression, x, x0, n). Par exemple, la série de Maclaurin (une série de Maclaurin est une série de Taylor au point $x_0=0$) de $\\cos(x)$ d'ordre 14 est:",
"from sympy import series, cos\nfrom sympy.abc import x\nseries(cos(x), x, 0, 14)",
"Par défaut, le développement est efféctuée en 0 et est d'ordre 6:",
"series(cos(x), x)",
"De façon équivalente, on peut aussi utilise la syntaxe expression.series(x, x0, n) :",
"(1/cos(x**2)).series(x, 0, 14)",
"Le développement de Taylor de $\\log$ se fait en $x_0=1$ :",
"from sympy import log\nseries(log(x), x, 0)\n\nseries(log(x), x, 1)",
"Équations différentielles\nUne équation différentielle est une relation entre une fonction inconnue et ses dérivées. Comme la fonction est inconnue, on doit la définir de façon abstraite comme ceci:",
"from sympy import Function\nf = Function(\"f\")",
"Déjà, cela permet d'écrire f et f(x) :",
"f\n\nfrom sympy.abc import x\nf(x)",
"On peut définir les dérivées de f à l'aide de la fonction Derivative de sympy:",
"from sympy import Derivative\nDerivative(f(x), x) # ordre 1\n\nDerivative(f(x), x, x) # ordre 2",
"En utilisant, Eq on peut définir une équation impliquant la fonction f et ses dérivées, c'est-à-dire une équation différentielle:",
"Eq(f(x), Derivative(f(x),x))",
"Puis, on peut la résoudre avec la fonction dsolve de sympy avec la syntaxe dsolve(equation, f(x)) et trouver quelle fonction f(x) est égale à sa propre dérivée:",
"from sympy import dsolve\ndsolve(Eq(f(x), Derivative(f(x),x)), f(x))",
"Voici un autre exemple qui trouve une fonction égale à l'opposé de sa dérivée d'ordre 2:",
"Eq(f(x), -Derivative(f(x),x,x))\n\ndsolve(Eq(f(x), -Derivative(f(x),x,x)), f(x))",
"Résoudre une équation différentielle ordinaire comme $f''(x) + 9 f(x) = 1$ :",
"dsolve(Eq(Derivative(f(x),x,x) + 9*f(x), 1), f(x))",
"Pour définir la dérivée, on peut aussi utiliser .diff(). L'exemple précédent s'écrit:",
"dsolve(Eq(f(x).diff(x, x) + 9*f(x), 1), f(x))",
"Finalement, voici un exemple impliquant deux équations:",
"from sympy.abc import x,y,t\neq1 = Eq(Derivative(x(t),t), x(t)*y(t)*sin(t))\neq2 = Eq(Derivative(y(t),t), y(t)**2*sin(t))\nsysteme = [eq1, eq2]\nsysteme\n\ndsolve(systeme)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/guide/keras/save_and_serialize.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Kerasモデルの保存と読み込み\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/save_and_serialize\"> <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Google Colab で実行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\nはじめに\nKeras モデルは以下の複数のコンポーネントで構成されています。\n\nアーキテクチャー/構成(モデルに含まれるレイヤーとそれらの接続方法を指定する)\n重み値のセット(「モデルの状態」)\nオプティマイザ(モデルのコンパイルで定義する)\n損失とメトリックのセット(モデルのコンパイルで定義するか、add_loss()またはadd_metric()を呼び出して定義する)\n\nKeras API を使用すると、これらを一度にディスクに保存したり、一部のみを選択して保存できます。\n\nすべてを TensorFlow SavedModel 形式(または古い Keras H5 形式)で1つのアーカイブに保存。これは標準的な方法です。\nアーキテクチャ/構成のみを(通常、JSON ファイルとして)保存。\n重み値のみを保存。(通常、モデルのトレーニング時に使用)。\n\nでは、次にこれらのオプションの用途と機能をそれぞれ見ていきましょう。\n保存と読み込みに関する簡単な説明\nこのガイドを読む時間が 10 秒しかない場合は、次のことを知っておく必要があります。\nKeras モデルの保存\npython\nmodel = ... # Get model (Sequential, Functional Model, or Model subclass) model.save('path/to/location')\nモデルの再読み込み\npython\nfrom tensorflow import keras model = keras.models.load_model('path/to/location')\nでは、詳細を見てみましょう。\nセットアップ",
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras",
"モデル全体の保存と読み込み\nモデル全体を1つのアーティファクトとして保存できます。その場合、以下が含まれます。\n\nモデルのアーキテクチャ/構成\nモデルの重み値(トレーニング時に学習される)\nモデルのコンパイル情報(compile()が呼び出された場合)\nオプティマイザとその状態(存在する場合)。これは、中断した所からトレーニングを再開するために使用します。\n\nAPI\n\nmodel.save()またはtf.keras.models.save_model()\ntf.keras.models.load_model()\n\nモデル全体をディスクに保存するには {nbsp}TensorFlow SavedModel 形式と古い Keras H5 形式の 2 つの形式を使用できます。推奨される形式は SavedModel です。これは、model.save()を使用する場合のデフォルトです。\n次の方法で H5 形式に切り替えることができます。\n\nsave_format='h5'をsave()に渡す。\n.h5または.kerasで終わるファイル名をsave()に渡す。\n\nSavedModel 形式\n例:",
"def get_model():\n # Create a simple model.\n inputs = keras.Input(shape=(32,))\n outputs = keras.layers.Dense(1)(inputs)\n model = keras.Model(inputs, outputs)\n model.compile(optimizer=\"adam\", loss=\"mean_squared_error\")\n return model\n\n\nmodel = get_model()\n\n# Train the model.\ntest_input = np.random.random((128, 32))\ntest_target = np.random.random((128, 1))\nmodel.fit(test_input, test_target)\n\n# Calling `save('my_model')` creates a SavedModel folder `my_model`.\nmodel.save(\"my_model\")\n\n# It can be used to reconstruct the model identically.\nreconstructed_model = keras.models.load_model(\"my_model\")\n\n# Let's check:\nnp.testing.assert_allclose(\n model.predict(test_input), reconstructed_model.predict(test_input)\n)\n\n# The reconstructed model is already compiled and has retained the optimizer\n# state, so training can resume:\nreconstructed_model.fit(test_input, test_target)",
"SavedModel に含まれるもの\nmodel.save('my_model')を呼び出すと、以下を含むmy_modelという名前のフォルダが作成されます。",
"!ls my_model",
"モデルアーキテクチャとトレーニング構成(オプティマイザ、損失、メトリックを含む)は、saved_model.pbに格納されます。重みはvariables/ディレクトリに保存されます。\nSavedModel 形式についての詳細は「SavedModel ガイド(ディスク上の SavedModel 形式」)をご覧ください。\nSavedModel によるカスタムオブジェクトの処理\nモデルとそのレイヤーを保存する場合、SavedModel 形式はクラス名、呼び出し関数、損失、および重み(および実装されている場合は構成)を保存します。呼び出し関数は、モデル/レイヤーの計算グラフを定義します。\nモデル/レイヤーの構成がない場合、トレーニング、評価、および推論に使用できる元のモデルのようなモデルを作成するために呼び出し関数が使用されます。\nしかしながら、カスタムモデルまたはレイヤークラスを作成する場合は、常にget_configおよびfrom_configメソッドを使用して定義することをお勧めします。これにより、必要に応じて後で計算を簡単に更新できます。詳細については「カスタムオブジェクト」をご覧ください。\n以下は、config メソッドを上書きせずに SavedModel 形式からカスタムレイヤーを読み込んだ場合の例です。",
"class CustomModel(keras.Model):\n def __init__(self, hidden_units):\n super(CustomModel, self).__init__()\n self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]\n\n def call(self, inputs):\n x = inputs\n for layer in self.dense_layers:\n x = layer(x)\n return x\n\n\nmodel = CustomModel([16, 16, 10])\n# Build the model by calling it\ninput_arr = tf.random.uniform((1, 5))\noutputs = model(input_arr)\nmodel.save(\"my_model\")\n\n# Delete the custom-defined model class to ensure that the loader does not have\n# access to it.\ndel CustomModel\n\nloaded = keras.models.load_model(\"my_model\")\nnp.testing.assert_allclose(loaded(input_arr), outputs)\n\nprint(\"Original model:\", model)\nprint(\"Loaded model:\", loaded)",
"上記の例のように、ローダーは、元のモデルのように機能する新しいモデルクラスを動的に作成します。\nKeras H5 形式\nKeras は、モデルのアーキテクチャ、重み値、およびcompile()情報を含む1つの HDF5 ファイルの保存もサポートしています。これは、SavedModel に代わる軽量な形式です。\n例:",
"model = get_model()\n\n# Train the model.\ntest_input = np.random.random((128, 32))\ntest_target = np.random.random((128, 1))\nmodel.fit(test_input, test_target)\n\n# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.\nmodel.save(\"my_h5_model.h5\")\n\n# It can be used to reconstruct the model identically.\nreconstructed_model = keras.models.load_model(\"my_h5_model.h5\")\n\n# Let's check:\nnp.testing.assert_allclose(\n model.predict(test_input), reconstructed_model.predict(test_input)\n)\n\n# The reconstructed model is already compiled and has retained the optimizer\n# state, so training can resume:\nreconstructed_model.fit(test_input, test_target)",
"制限事項\nSavedModel 形式と比較して、H5 ファイルに含まれないものが 2 つあります。\n\nSavedModel とは異なり、model.add_loss()およびmodel.add_metric()を介して追加された外部損失およびメトリックは保存されません。モデルにそのような損失とメトリックがあり、トレーニングを再開する場合は、モデルを読み込んだ後、これらの損失を自分で追加する必要があります。これは、self.add_loss()およびself.add_metric()を介して内部レイヤーで作成された損失/メトリックには適用されないことに注意してください。レイヤーが読み込まれる限り、これらの損失とメトリックはレイヤーのcallメソッドの一部であるため保持されます。\nカスタムレイヤーなどのカスタムオブジェクトの計算グラフは、保存されたファイルに含まれません。読み込む際に、Keras はモデルを再構築するためにこれらのオブジェクトの Python クラス/関数にアクセスする必要があります。詳細については、「カスタムオブジェクト」をご覧ください。\n\nアーキテクチャの保存\nモデルの構成(アーキテクチャ)は、モデルに含まれるレイヤー、およびこれらのレイヤーの接続方法を指定します*。モデルの構成がある場合、コンパイル情報なしで、重みが新しく初期化された状態でモデルを作成することができます。\n*これは、サブクラス化されたモデルではなく、Functional または Sequential API を使用して定義されたモデルにのみ適用されることに注意してください。\nSequential モデルまたは Functional API モデルの構成\nこれらのタイプのモデルは、レイヤーの明示的なグラフです。それらの構成は常に構造化された形式で提供されます。\nAPI\n\nget_config()およびfrom_config()\ntf.keras.models.model_to_json()およびtf.keras.models.model_from_json()\n\nget_config()およびfrom_config()\nconfig = model.get_config()を呼び出すと、モデルの構成を含むPython dictが返されます。その後、同じモデルをSequential.from_config(config)(<br>Sequentialモデルの場合)またはModel.from_config(config)(Functional API モデルの場合) で再度構築できます。\n同じワークフローは、シリアル化可能なレイヤーでも使用できます。\nレイヤーの例:",
"layer = keras.layers.Dense(3, activation=\"relu\")\nlayer_config = layer.get_config()\nnew_layer = keras.layers.Dense.from_config(layer_config)",
"Sequential モデルの例:",
"model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])\nconfig = model.get_config()\nnew_model = keras.Sequential.from_config(config)",
"Functional モデルの例:",
"inputs = keras.Input((32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = keras.Model(inputs, outputs)\nconfig = model.get_config()\nnew_model = keras.Model.from_config(config)",
"to_json()およびtf.keras.models.model_from_json()\nこれは、get_config / from_configと似ていますが、モデルを JSON 文字列に変換します。この文字列は、元のモデルクラスなしで読み込めます。また、これはモデル固有であり、レイヤー向けではありません。\n例:",
"model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])\njson_config = model.to_json()\nnew_model = keras.models.model_from_json(json_config)",
"カスタムオブジェクト\nモデルとレイヤー\nサブクラス化されたモデルとレイヤーのアーキテクチャは、メソッド__init__およびcallで定義されています。それらは Python バイトコードと見なされ、JSON と互換性のある構成にシリアル化できません。pickleなどを使用してバイトコードのシリアル化を試すことができますが、これは安全ではなく、モデルを別のシステムに読み込むことはできません。\nカスタム定義されたレイヤーのあるモデル、またはサブクラス化されたモデルを保存/読み込むには、get_configおよびfrom_config(オプション) メソッドを上書きする必要があります。さらに、Keras が認識できるように、カスタムオブジェクトの登録を使用する必要があります。\nカスタム関数\nカスタム定義関数 (アクティブ化の損失や初期化など) には、get_configメソッドは必要ありません。カスタムオブジェクトとして登録されている限り、関数名は読み込みに十分です。\nTensorFlow グラフのみの読み込み\nKeras により生成された TensorFlow グラフを以下のように読み込むことができます。 その場合、custom_objectsを提供する必要はありません。",
"model.save(\"my_model\")\ntensorflow_graph = tf.saved_model.load(\"my_model\")\nx = np.random.uniform(size=(4, 32)).astype(np.float32)\npredicted = tensorflow_graph(x).numpy()",
"この方法にはいくつかの欠点があることに注意してください。\n\n再作成できないモデルをプロダクションにロールアウトしないように、履歴を追跡するために使用されたカスタムオブジェクトに常にアクセスできる必要があります。\ntf.saved_model.loadにより返されるオブジェクトは、Keras モデルではないので、簡単には使えません。たとえば、.predict()や.fit()へのアクセスはありません。\n\nこの方法は推奨されていませんが、カスタムオブジェクトのコードを紛失した場合やtf.keras.models.load_model()でモデルを読み込む際に問題が発生した場合などに役に立ちます。\n詳細は、tf.saved_model.loadに関するページをご覧ください。\n構成メソッドの定義\n仕様:\n\nget_configは、Keras のアーキテクチャおよびモデルを保存する API と互換性があるように、JSON シリアル化可能なディクショナリを返す必要があります。\nfrom_config(config) (classmethod) は、構成から作成された新しいレイヤーまたはモデルオブジェクトを返します。デフォルトの実装は cls(**config)を返します。\n\n例:",
"class CustomLayer(keras.layers.Layer):\n def __init__(self, a):\n self.var = tf.Variable(a, name=\"var_a\")\n\n def call(self, inputs, training=False):\n if training:\n return inputs * self.var\n else:\n return inputs\n\n def get_config(self):\n return {\"a\": self.var.numpy()}\n\n # There's actually no need to define `from_config` here, since returning\n # `cls(**config)` is the default behavior.\n @classmethod\n def from_config(cls, config):\n return cls(**config)\n\n\nlayer = CustomLayer(5)\nlayer.var.assign(2)\n\nserialized_layer = keras.layers.serialize(layer)\nnew_layer = keras.layers.deserialize(\n serialized_layer, custom_objects={\"CustomLayer\": CustomLayer}\n)",
"カスタムオブジェクトの登録\nKeras は構成を生成したクラスについての情報を保持します。上記の例では、tf.keras.layers.serializeはシリアル化された形態のカスタムレイヤーを生成します。\n{'class_name': 'CustomLayer', 'config': {'a': 2}}\nKeras は、すべての組み込みのレイヤー、モデル、オプティマイザ、およびメトリッククラスのマスターリストを保持し、from_configを呼び出すための正しいクラスを見つけるために使用されます。クラスが見つからない場合は、エラー(Value Error: Unknown layer)が発生します。このリストにカスタムクラスを登録する方法は、いくつかあります。\n\n読み込み関数でcustom_objects引数を設定する。(上記の「config メソッドの定義」セクションの例をご覧ください)\ntf.keras.utils.custom_object_scopeまたはtf.keras.utils.CustomObjectScope\ntf.keras.utils.register_keras_serializable\n\nカスタムレイヤーと関数の例",
"class CustomLayer(keras.layers.Layer):\n def __init__(self, units=32, **kwargs):\n super(CustomLayer, self).__init__(**kwargs)\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n def get_config(self):\n config = super(CustomLayer, self).get_config()\n config.update({\"units\": self.units})\n return config\n\n\ndef custom_activation(x):\n return tf.nn.tanh(x) ** 2\n\n\n# Make a model with the CustomLayer and custom_activation\ninputs = keras.Input((32,))\nx = CustomLayer(32)(inputs)\noutputs = keras.layers.Activation(custom_activation)(x)\nmodel = keras.Model(inputs, outputs)\n\n# Retrieve the config\nconfig = model.get_config()\n\n# At loading time, register the custom objects with a `custom_object_scope`:\ncustom_objects = {\"CustomLayer\": CustomLayer, \"custom_activation\": custom_activation}\nwith keras.utils.custom_object_scope(custom_objects):\n new_model = keras.Model.from_config(config)",
"メモリ内でモデルのクローンを作成する\nまた、tf.keras.models.clone_model()を通じて、メモリ内でモデルのクローンを作成できます。これは、構成を取得し、その構成からモデルを再作成する方法と同じです (したがって、コンパイル情報やレイヤーの重み値は保持されません)。\n例:",
"with keras.utils.custom_object_scope(custom_objects):\n new_model = keras.models.clone_model(model)",
"モデルの重み値のみを保存および読み込む\nモデルの重みのみを保存および読み込むように選択できます。これは次の場合に役立ちます。\n\n推論のためのモデルだけが必要とされる場合。この場合、トレーニングを再開する必要がないため、コンパイル情報やオプティマイザの状態は必要ありません。\n転移学習を行う場合。以前のモデルの状態を再利用して新しいモデルをトレーニングするため、以前のモデルのコンパイル情報は必要ありません。\n\nインメモリの重みの移動のための API\n異なるオブジェクト間で重みをコピーするにはget_weightsおよびset_weightsを使用します。\n\ntf.keras.layers.Layer.get_weights(): numpy配列のリストを返す。\ntf.keras.layers.Layer.set_weights(): モデルの重みをweights引数の値に設定する。\n\n以下に例を示します。\nインメモリで、1 つのレイヤーから別のレイヤーに重みを転送する",
"def create_layer():\n layer = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")\n layer.build((None, 784))\n return layer\n\n\nlayer_1 = create_layer()\nlayer_2 = create_layer()\n\n# Copy weights from layer 2 to layer 1\nlayer_2.set_weights(layer_1.get_weights())",
"インメモリで 1 つのモデルから互換性のあるアーキテクチャを備えた別のモデルに重みを転送する",
"# Create a simple functional model\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = keras.layers.Dense(10, name=\"predictions\")(x)\nfunctional_model = keras.Model(inputs=inputs, outputs=outputs, name=\"3_layer_mlp\")\n\n# Define a subclassed model with the same architecture\nclass SubclassedModel(keras.Model):\n def __init__(self, output_dim, name=None):\n super(SubclassedModel, self).__init__(name=name)\n self.output_dim = output_dim\n self.dense_1 = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")\n self.dense_2 = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")\n self.dense_3 = keras.layers.Dense(output_dim, name=\"predictions\")\n\n def call(self, inputs):\n x = self.dense_1(inputs)\n x = self.dense_2(x)\n x = self.dense_3(x)\n return x\n\n def get_config(self):\n return {\"output_dim\": self.output_dim, \"name\": self.name}\n\n\nsubclassed_model = SubclassedModel(10)\n# Call the subclassed model once to create the weights.\nsubclassed_model(tf.ones((1, 784)))\n\n# Copy weights from functional_model to subclassed_model.\nsubclassed_model.set_weights(functional_model.get_weights())\n\nassert len(functional_model.weights) == len(subclassed_model.weights)\nfor a, b in zip(functional_model.weights, subclassed_model.weights):\n np.testing.assert_allclose(a.numpy(), b.numpy())",
"ステートレスレイヤーの場合\nステートレスレイヤーは重みの順序や数を変更しないため、ステートレスレイヤーが余分にある場合や不足している場合でも、モデルのアーキテクチャは互換性があります。",
"inputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = keras.layers.Dense(10, name=\"predictions\")(x)\nfunctional_model = keras.Model(inputs=inputs, outputs=outputs, name=\"3_layer_mlp\")\n\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\n\n# Add a dropout layer, which does not contain any weights.\nx = keras.layers.Dropout(0.5)(x)\noutputs = keras.layers.Dense(10, name=\"predictions\")(x)\nfunctional_model_with_dropout = keras.Model(\n inputs=inputs, outputs=outputs, name=\"3_layer_mlp\"\n)\n\nfunctional_model_with_dropout.set_weights(functional_model.get_weights())",
"重みをディスクに保存して再度読み込むための API\n以下の形式でmodel.save_weightsを呼び出すことにより、重みをディスクに保存できます。\n\nTensorFlow Checkpoint\nHDF5\n\nmodel.save_weightsのデフォルトの形式は TensorFlow Checkpoint です。保存形式を指定する方法は 2 つあります。\n\nsave_format引数:値をsave_format = \"tf\"またはsave_format = \"h5\"に設定する。\npath引数:パスが.h5または.hdf5で終わる場合、HDF5 形式が使用されます。save_formatが設定されていない限り、他のサフィックスでは、TensorFlow Checkpoint になります。\n\nまた、オプションとしてインメモリの numpy 配列として重みを取得することもできます。各 API には、以下の長所と短所があります。\nTF Checkpoint 形式\n例:",
"# Runnable example\nsequential_model = keras.Sequential(\n [\n keras.Input(shape=(784,), name=\"digits\"),\n keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\"),\n keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\"),\n keras.layers.Dense(10, name=\"predictions\"),\n ]\n)\nsequential_model.save_weights(\"ckpt\")\nload_status = sequential_model.load_weights(\"ckpt\")\n\n# `assert_consumed` can be used as validation that all variable values have been\n# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other\n# methods in the Status object.\nload_status.assert_consumed()",
"形式の詳細\nTensorFlow Checkpoint 形式は、オブジェクト属性名を使用して重みを保存および復元します。 たとえば、tf.keras.layers.Denseレイヤーを見てみましょう。このレイヤーには、2 つの重み、dense.kernelとdense.biasがあります。レイヤーがtf形式で保存されると、結果のチェックポイントには、キー「kernel」と「bias」およびそれらに対応する重み値が含まれます。 詳細につきましては、TF Checkpoint ガイドの「読み込みの仕組み」をご覧ください。\n属性/グラフのエッジは、変数名ではなく、親オブジェクトで使用される名前で命名されていることに注意してください。以下の例のCustomLayerでは、変数CustomLayer.varは、\"var_a\"ではなく、\"var\"をキーの一部として保存されます。",
"class CustomLayer(keras.layers.Layer):\n def __init__(self, a):\n self.var = tf.Variable(a, name=\"var_a\")\n\n\nlayer = CustomLayer(5)\nlayer_ckpt = tf.train.Checkpoint(layer=layer).save(\"custom_layer\")\n\nckpt_reader = tf.train.load_checkpoint(layer_ckpt)\n\nckpt_reader.get_variable_to_dtype_map()",
"転移学習の例\n基本的に、2 つのモデルが同じアーキテクチャを持っている限り、同じチェックポイントを共有できます。\n例:",
"inputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = keras.layers.Dense(10, name=\"predictions\")(x)\nfunctional_model = keras.Model(inputs=inputs, outputs=outputs, name=\"3_layer_mlp\")\n\n# Extract a portion of the functional model defined in the Setup section.\n# The following lines produce a new model that excludes the final output\n# layer of the functional model.\npretrained = keras.Model(\n functional_model.inputs, functional_model.layers[-1].input, name=\"pretrained_model\"\n)\n# Randomly assign \"trained\" weights.\nfor w in pretrained.weights:\n w.assign(tf.random.normal(w.shape))\npretrained.save_weights(\"pretrained_ckpt\")\npretrained.summary()\n\n# Assume this is a separate program where only 'pretrained_ckpt' exists.\n# Create a new functional model with a different output dimension.\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = keras.layers.Dense(5, name=\"predictions\")(x)\nmodel = keras.Model(inputs=inputs, outputs=outputs, name=\"new_model\")\n\n# Load the weights from pretrained_ckpt into model.\nmodel.load_weights(\"pretrained_ckpt\")\n\n# Check that all of the pretrained weights have been loaded.\nfor a, b in zip(pretrained.weights, model.weights):\n np.testing.assert_allclose(a.numpy(), b.numpy())\n\nprint(\"\\n\", \"-\" * 50)\nmodel.summary()\n\n# Example 2: Sequential model\n# Recreate the pretrained model, and load the saved weights.\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\npretrained_model = keras.Model(inputs=inputs, outputs=x, name=\"pretrained\")\n\n# Sequential example:\nmodel = keras.Sequential([pretrained_model, keras.layers.Dense(5, name=\"predictions\")])\nmodel.summary()\n\npretrained_model.load_weights(\"pretrained_ckpt\")\n\n# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,\n# but will *not* work as expected. If you inspect the weights, you'll see that\n# none of the weights will have loaded. `pretrained_model.load_weights()` is the\n# correct method to call.",
"通常、モデルの作成には同じ API を使用することをお勧めします。Sequential と Functional、またはFunctional とサブクラス化などの間で切り替える場合は、常に事前トレーニング済みモデルを再構築し、事前トレーニング済みの重みをそのモデルに読み込みます。\nモデルのアーキテクチャがまったく異なる場合は、どうすれば重みを保存して異なるモデルに読み込むことができるのでしょうか?tf.train.Checkpointを使用すると、正確なレイヤー/変数を保存および復元することができます。\n例:",
"# Create a subclassed model that essentially uses functional_model's first\n# and last layers.\n# First, save the weights of functional_model's first and last dense layers.\nfirst_dense = functional_model.layers[1]\nlast_dense = functional_model.layers[-1]\nckpt_path = tf.train.Checkpoint(\n dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias\n).save(\"ckpt\")\n\n# Define the subclassed model.\nclass ContrivedModel(keras.Model):\n def __init__(self):\n super(ContrivedModel, self).__init__()\n self.first_dense = keras.layers.Dense(64)\n self.kernel = self.add_variable(\"kernel\", shape=(64, 10))\n self.bias = self.add_variable(\"bias\", shape=(10,))\n\n def call(self, inputs):\n x = self.first_dense(inputs)\n return tf.matmul(x, self.kernel) + self.bias\n\n\nmodel = ContrivedModel()\n# Call model on inputs to create the variables of the dense layer.\n_ = model(tf.ones((1, 784)))\n\n# Create a Checkpoint with the same structure as before, and load the weights.\ntf.train.Checkpoint(\n dense=model.first_dense, kernel=model.kernel, bias=model.bias\n).restore(ckpt_path).assert_consumed()",
"HDF5 形式\nHDF5 形式には、レイヤー名でグループ化された重みが含まれています。重みは、トレーニング可能な重みのリストをトレーニング不可能な重みのリストに連結することによって並べられたリストです(layer.weightsと同じ)。 したがって、チェックポイントに保存されているものと同じレイヤーとトレーニング可能な状態がある場合、モデルは HDF 5 チェックポイントを使用できます。\n例:",
"# Runnable example\nsequential_model = keras.Sequential(\n [\n keras.Input(shape=(784,), name=\"digits\"),\n keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\"),\n keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\"),\n keras.layers.Dense(10, name=\"predictions\"),\n ]\n)\nsequential_model.save_weights(\"weights.h5\")\nsequential_model.load_weights(\"weights.h5\")",
"ネストされたレイヤーがモデルに含まれている場合、layer.trainableを変更すると、layer.weightsの順序が異なる場合があることに注意してください。",
"class NestedDenseLayer(keras.layers.Layer):\n def __init__(self, units, name=None):\n super(NestedDenseLayer, self).__init__(name=name)\n self.dense_1 = keras.layers.Dense(units, name=\"dense_1\")\n self.dense_2 = keras.layers.Dense(units, name=\"dense_2\")\n\n def call(self, inputs):\n return self.dense_2(self.dense_1(inputs))\n\n\nnested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, \"nested\")])\nvariable_names = [v.name for v in nested_model.weights]\nprint(\"variables: {}\".format(variable_names))\n\nprint(\"\\nChanging trainable status of one of the nested layers...\")\nnested_model.get_layer(\"nested\").dense_1.trainable = False\n\nvariable_names_2 = [v.name for v in nested_model.weights]\nprint(\"\\nvariables: {}\".format(variable_names_2))\nprint(\"variable ordering changed:\", variable_names != variable_names_2)",
"転移学習の例\nHDF5 から事前トレーニングされた重みを読み込む場合は、元のチェックポイントモデルに重みを読み込んでから、目的の重み/レイヤーを新しいモデルに抽出することをお勧めします。\n例:",
"def create_functional_model():\n inputs = keras.Input(shape=(784,), name=\"digits\")\n x = keras.layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\n x = keras.layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\n outputs = keras.layers.Dense(10, name=\"predictions\")(x)\n return keras.Model(inputs=inputs, outputs=outputs, name=\"3_layer_mlp\")\n\n\nfunctional_model = create_functional_model()\nfunctional_model.save_weights(\"pretrained_weights.h5\")\n\n# In a separate program:\npretrained_model = create_functional_model()\npretrained_model.load_weights(\"pretrained_weights.h5\")\n\n# Create a new model by extracting layers from the original model:\nextracted_layers = pretrained_model.layers[:-1]\nextracted_layers.append(keras.layers.Dense(5, name=\"dense_3\"))\nmodel = keras.Sequential(extracted_layers)\nmodel.summary()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Unidata/MetPy
|
talks/2015 Unidata Users Workshop.ipynb
|
bsd-3-clause
|
[
"<h1 align=\"center\">MetPy: An Open Source Python Toolkit for Meteorology<h1>\n<h3 align=\"center\">Ryan May<br>UCAR/Unidata<br>22 June 2015<br></h3>\n\n## Who am I?\n\n- Software Engineer at Unidata\n- Been here for a year and a half\n- Work on THREDDS\n- Pythonista\n- I like long walks on the beach and watching Longhorns lose football games\n- Twitter/GitHub: @dopplershift\n\n#What is MetPy?\n\n- Python toolkit for Meteorology\n - Reading in weather data formats\n - Re-usable calculation routines\n - Weather plots\n- Intended for use by anyone in the field: Education, Research, etc.\n- Goal is to become a community resource to find useful bits and pieces\n\nDesign philosophy:\n- Fit well with the rest of the scientific Python ecosystem (NumPy, Matplotlib, etc.)\n- Simple to use with your own data\n- Unit correctness built-in (using `pint`)\n- Well-tested and documented\n- Development driven by use cases\n\n#What can MetPy do?\n\n## Reading Data\nPure python implementations for NEXRAD files:\n- Level 2\n- Level 3\n\n## Example of using NIDS decoding",
"# Level 3 example with multiple products\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy import ma\n\nfrom metpy.cbook import get_test_data\nfrom metpy.io.nexrad import Level3File\nfrom metpy.plots import ctables\n\n# Helper code for making sense of these products. This is hidden from the slideshow\n# and eventually, in some form, will make its way into MetPy proper.\ndef print_tab_pages(prod):\n print(('\\n' + '-'*80 + '\\n').join(prod.tab_pages))\n\ndef print_graph_pages(prod):\n colors = {0:'white', 3:'red', 4:'cyan'}\n for page in prod.graph_pages:\n fig, ax = plt.subplots(1, 1, figsize=(10,10))\n ax.axesPatch.set_facecolor('black')\n for line in page:\n if 'color' in line:\n c = colors[line['color']]\n if 'text' in line:\n ax.text(line['x'], line['y'], line['text'], color=c,\n transform=ax.transData, verticalalignment='top',\n horizontalalignment='left', fontdict={'family':'monospace'},\n fontsize=8)\n else:\n vecs = np.array(line['vectors'])\n ax.plot(vecs[:, ::2], vecs[:, 1::2], color=c)\n ax.set_xlim(0, 639)\n ax.set_ylim(511, 0)\n ax.set_aspect('equal', 'box')\n ax.xaxis.set_major_formatter(plt.NullFormatter())\n ax.xaxis.set_major_locator(plt.NullLocator())\n ax.yaxis.set_major_formatter(plt.NullFormatter())\n ax.yaxis.set_major_locator(plt.NullLocator())\n for s in ax.spines: ax.spines[s].set_color('none')\n\ndef plot_prod(prod, cmap, norm, ax=None):\n if ax is None:\n ax = plt.gca()\n\n data_block = prod.sym_block[0][0]\n data = np.array(data_block['data'])\n data = prod.map_data(data)\n data = np.ma.array(data, mask=np.isnan(data))\n if 'start_az' in data_block:\n az = np.array(data_block['start_az'] + [data_block['end_az'][-1]])\n rng = np.linspace(0, prod.max_range, data.shape[-1] + 1)\n x = rng * np.sin(np.deg2rad(az[:, None]))\n y = rng * np.cos(np.deg2rad(az[:, None]))\n else:\n x = np.linspace(-prod.max_range, prod.max_range, data.shape[1] + 1)\n y = np.linspace(-prod.max_range, prod.max_range, data.shape[0] + 1)\n data = data[::-1]\n pc = ax.pcolormesh(x, y, data, cmap=cmap, norm=norm)\n plt.colorbar(pc, extend='both')\n ax.set_aspect('equal', 'datalim')\n ax.set_xlim(-100, 100)\n ax.set_ylim(-100, 100)\n return pc, data\n\ndef plot_points(prod, ax=None):\n if ax is None:\n ax = plt.gca()\n\n data_block = prod.sym_block[0]\n styles = {'MDA': dict(marker='o', markerfacecolor='None', markeredgewidth=2, size='radius'),\n 'MDA (Elev.)': dict(marker='s', markerfacecolor='None', markeredgewidth=2, size='radius'),\n 'TVS': dict(marker='v', markerfacecolor='red', markersize=10),\n 'Storm ID': dict(text='id'),\n 'HDA': dict(marker='o', markersize=10, markerfacecolor='blue', alpha=0.5)}\n artists = []\n for point in data_block:\n if 'type' in point:\n info = styles.get(point['type'], {}).copy()\n x,y = point['x'], point['y']\n text_key = info.pop('text', None)\n if text_key:\n artists.append(ax.text(x, y, point[text_key], transform=ax.transData, clip_box=ax.bbox, **info))\n artists[-1].set_clip_on(True)\n else:\n size_key = info.pop('size', None)\n if size_key:\n info['markersize'] = np.pi * point[size_key]**2\n artists.append(ax.plot(x, y, **info))\n\ndef plot_tracks(prod, ax=None):\n if ax is None:\n ax = plt.gca()\n \n data_block = prod.sym_block[0]\n\n for track in data_block:\n if 'marker' in track:\n pass\n if 'track' in track:\n x,y = np.array(track['track']).T\n ax.plot(x, y, color='k')\n\n# Read in a bunch of NIDS products\ntvs = Level3File(get_test_data('nids/KOUN_SDUS64_NTVTLX_201305202016'))\nnmd = 
Level3File(get_test_data('nids/KOUN_SDUS34_NMDTLX_201305202016'))\nnhi = Level3File(get_test_data('nids/KOUN_SDUS64_NHITLX_201305202016'))\nn0q = Level3File(get_test_data('nids/KOUN_SDUS54_N0QTLX_201305202016'))\nnst = Level3File(get_test_data('nids/KOUN_SDUS34_NSTTLX_201305202016'))\n\n# What happens when we print one out\ntvs\n\n# Can print tabular (ASCII) information in the product\nprint_tab_pages(tvs)",
"What about the \"data\" content of the products?",
"fig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(1, 1, 1)\nnorm, cmap = ctables.registry.get_with_boundaries('NWSReflectivity', np.arange(0, 85, 5))\npc, data = plot_prod(n0q, cmap, norm, ax)\nplot_points(tvs)\nplot_points(nmd)\nplot_points(nhi)\nplot_tracks(nst)\nax.set_ylim(-20, 40)\nax.set_xlim(-50, 20)",
"Calculations\n\nBasic calculations (e.g. heat index, wind components)\nKinematic calculations (e.g. vertical vorticity)\nThermodynamic calculations (e.g. dewpoint, LCL)\n\nTurbulence (e.g. turbulence kinetic energy)\n\n\nCalculations simple and independent to promote re-use\n\nOnly requirement is to use unit support\n\nExample\nIt's 86F outside here (pressure 860 mb), with a relative humidity of 40%. What's the dewpoint?",
"import metpy.calc as mpcalc\nfrom metpy.units import units\ntemp = 86 * units.degF\npress = 860. * units.mbar\nhumidity = 40 / 100.\n\ndewpt = mpcalc.dewpoint_rh(temp, humidity).to('degF')\ndewpt",
"What does the LCL look like for that?",
"mpcalc.lcl(press, temp, dewpt)",
"Given those conditions, what does the profile of a parcel look like?",
"import numpy as np\npressure_levels = np.array([860., 850., 700., 500., 300.]) * units.mbar\nmpcalc.parcel_profile(pressure_levels, temp, dewpt)",
"Plots\n\nColortables\nSkew-T\n\nExample\nPlot some sounding data",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom metpy.cbook import get_test_data\nfrom metpy.calc import get_wind_components\nfrom metpy.plots import SkewT\n\n# Parse the data\np, T, Td, direc, spd = np.loadtxt(get_test_data('sounding_data.txt'),\n usecols=(0, 2, 3, 6, 7), skiprows=4, unpack=True)\nu, v = get_wind_components(spd, np.deg2rad(direc))\n\n# Create a skewT using matplotlib's default figure size\nfig = plt.figure(figsize=(8, 8))\nskew = SkewT(fig)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\nskew.ax.set_ylim(1000, 100)\nfig",
"Siphon: Sucking data down the pipe\nTHREDDS?\n\nUnidata technology for serving many different data formats\nCreates \"feature collections\" on top of data files to provide access across multiple files\nProvides a variety of access services on top of these datasets\nDemonstration server: http://thredds.ucar.edu/thredds/catalog.html\n\nSiphon is Unidata's Python library for talking to THREDDS:\n- Making sense of catalogs\n- Talking to the NetCDF Subset Service (NCSS)\n- Talking to the radar query service (radar server)\nExample\nLet's use Siphon to get the forecast from the GFS. Start by finding it on THREDDS.",
"from siphon.catalog import TDSCatalog\nbest_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p5deg/Best')\nbest_ds = list(best_gfs.datasets.values())[0]\n\nbest_ds.access_urls",
"Ok, let's get data using NCSS",
"# Set up a class to access\nfrom siphon.ncss import NCSS\nncss = NCSS(best_ds.access_urls['NetcdfSubset'])\n\n# Get today's date\nfrom datetime import datetime, timedelta\nnow = datetime.utcnow()\n\n# Get a query object and set to get temperature for Boulder for the next 7 days\nquery = ncss.query()\nquery.lonlat_point(-105, 40).vertical_level(100000).time_range(now, now + timedelta(days=7))\nquery.variables('Temperature_isobaric').accept('netcdf4')\n\n# Get the Data\ndata = ncss.get_data(query)\nlist(data.variables)\n\n# Pull the variables we want from the NetCDF file\ntemp = data.variables['Temperature_isobaric']\ntime = data.variables['time']\n\n# Convert time values from numbers to datetime\nfrom netCDF4 import num2date\ntime_vals = num2date(time[:].squeeze(), time.units)\n\n# Plot\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(1, 1, figsize=(9, 8))\nax.plot(time_vals, temp[:].squeeze(), 'r', linewidth=2)\nax.set_ylabel(temp.standard_name + ' (%s)' % temp.units)\nax.set_xlabel('Forecast Time (UTC)')\nax.grid(True)",
"\"With your powers combined...\"\n\nSiphon is a complementary technology to Metpy\nProvides an easy way to get data for use in MetPy\n\nExample\nLet's plot a forecast sounding from GFS for Boulder 12 hours from now",
"# Re-use the NCSS access from earlier, but make a new query\nquery = ncss.query()\nquery.lonlat_point(-105, 40).time(now + timedelta(hours=12)).accept('csv')\nquery.variables('Temperature_isobaric', 'Relative_humidity_isobaric',\n 'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')\ndata = ncss.get_data(query)\n\n# Pull out data with some units\np = (data['vertCoord'] * units('Pa')).to('mbar')\nT = data['Temperature_isobaric'] * units('kelvin')\nTd = mpcalc.dewpoint_rh(T, data['Relative_humidity_isobaric'] / 100.)\nu = data['ucomponent_of_wind_isobaric'] * units('m/s')\nv = data['vcomponent_of_wind_isobaric'] * units('m/s')\n\n# Create a skewT using matplotlib's default figure size\nfig = plt.figure(figsize=(8, 8))\nskew = SkewT(fig)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T.to('degC'), 'r')\nskew.plot(p, Td.to('degC'), 'g')\n\nskew.plot_barbs(p[p>=100 * units.mbar], u.to('knots')[p>=100 * units.mbar],\n v.to('knots')[p>=100 * units.mbar])\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_title(data['date'][0]);",
"Ideas for the future\nA sampling from our issue tracker:\n- Station plots (with weather symbols)\n- Hodograph\n- Pulling in sounding data from Wyoming archive",
"# This code isn't actually in a MetPy release yet.....\nimport importlib\nimport metpy.plots.station_plot\nimportlib.reload(metpy.plots.station_plot)\nstation_plot = metpy.plots.station_plot.station_plot\n\n# Set up query for point data\nncss = NCSS('http://thredds.ucar.edu/thredds/ncss/nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr/dataset.xml')\nquery = ncss.query()\nquery.lonlat_box(44, 37, -98, -108).time(now - timedelta(days=2)).accept('csv')\nquery.variables('air_temperature', 'dew_point_temperature',\n 'wind_from_direction', 'wind_speed')\ndata = ncss.get_data(query)\n\n# Some unit conversions\nspeed = data['wind_speed'] * units('m/s')\ndata['u'],data['v'] = mpcalc.get_wind_components(speed.to('knots'),\n data['wind_from_direction'] * units('degree'))\n\n# Plot using basemap for now\nfrom mpl_toolkits.basemap import Basemap\nfig = plt.figure(figsize=(9, 9))\nax = fig.add_subplot(1, 1, 1)\nm = Basemap(lon_0=-105, lat_0=40, lat_ts=40, resolution='i',\n projection='stere', urcrnrlat=44, urcrnrlon=-98, llcrnrlat=37,\n llcrnrlon=-108, ax=ax)\nm.bluemarble()\n\n# Just an early prototype...\nstation_plot(data, proj=m, ax=ax, layout={'NW': 'air_temperature', 'SW': 'dew_point_temperature'},\n styles={'air_temperature': dict(color='r'), 'dew_point_temperature': dict(color='lightgreen')},\n zorder=1);\nfig",
"Need more use cases to drive development\nWriting examples reveals what parts are missing or require too much code\n\nCommunity Participation\nInfrastructure\n\nThe project is built around automation to make it easy to maintain:\nAutomated tests (unit and style checking) on Travis CI\nTest coverage using Coverall.io\nAutomatic code quality checks on Landscape.io\n\nAutomatic documentation generation from Read the Docs\n\n\nThis is the long way of saying: it's really easy to incorporate new things and keep things up to date and working\n\n\nFor Siphon:\n\nTravis CI\nCoverall.io\nLandscape.io\nRead the Docs\n\nGetting and using MetPy\nMetPy is open-source and all development done in the open:\n- Supports Python 2.7 and >= 3.2\n- Packages available on the Python Package Index\n- Source available on GitHub\n- The GitHub Issue Tracker is used extensively\n- Documetation is online\n- It's as simple as: pip install metpy\nJoin Us!\nWe want to encourage everyone to join us in making MetPy as useful as possible:\n- Wrote something cool that you think everyone could benefit from?\n - Open a Pull Request!\n - Even if it's not completely up to snuff (tests, docs) we'll be happy to help you get it there.\n\n\nFound MetPy useful but it's missing one piece that would help you achieve inner peace?\n\nCreate a new issue!\nOr even better yet, take a stab at making it and open a Pull Request.\n\n\n\nMade some really cool plots with MetPy?\n\nWe love examples -- send it to us!\nOr better yet, add it to the examples and open a Pull Requst!\n\n\n\nHands on exercise"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aaschroeder/Titanic_example
|
Final_setup_Random_Forest.ipynb
|
gpl-3.0
|
[
"This script implements a Random Forest algorithm on the Titanic dataset. This is an algorithm that bootstraps both features and samples, and so will be auto-selecting features to evaluate. Since we're pretty agnostic about everything except prediction, let's just cycle through each of the option values to find some good ones (since we're not doing any backward evaluation after choosing an option, this probably isn't the optimal one, but it should be decent at least).",
"import numpy as np\nimport pandas as pd\n\ntitanic=pd.read_csv('./titanic_clean_data.csv')\n\ncols_to_norm=['Age','Fare']\ncol_norms=['Age_z','Fare_z']\n\ntitanic[col_norms]=titanic[cols_to_norm].apply(lambda x: (x-x.mean())/x.std())\n\ntitanic['cabin_clean']=(pd.notnull(titanic.Cabin))\n\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.cross_validation import KFold\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier\n\ntitanic_target=titanic.Survived.values\nfeatures=['Sex','SibSp','Parch','Pclass_1','Pclass_2','Pclass_3','Emb_C','Emb_Q','Emb_S',\\\n 'Emb_nan','Age_ct_C','Age_ct_A','Age_ct_S', 'Sp_ct','Age_z','Fare_z',\\\n 'Ti_Dr', 'Ti_Master', 'Ti_Mil', 'Ti_Miss', 'Ti_Mr', 'Ti_Mrs', 'Ti_Other', 'Ti_Rev',\\\n 'Fl_AB', 'Fl_CD', 'Fl_EFG', 'Fl_nan']\ntitanic_features=titanic[features].values\n\n\ntitanic_features, ensemble_features, titanic_target, ensemble_target= \\\n train_test_split(titanic_features,\n titanic_target,\n test_size=.1,\n random_state=7132016)",
"Now, what we're going to do is go stepwise through many of the features of the random forest classifier, to figure out which parameters will give us the best fit. RF naturally does cross-validation in the default, so we don't need to worry about that part. We'll go in the following order:\n\nNumber of Trees\nLoss Criterion\nmax_features\nmax_depth\nmin_samples_split\nmin_weight_fraction_leaf\nmax_leaf_nodes\n\n1.) Number of Trees",
"score=0\nscores=[]\nfor feature in range(50,1001,50):\n clf = RandomForestClassifier(n_estimators=feature, oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n scores.append(score_test)\n if score_test>score:\n n_out=feature\n score_diff=score_test-score\n score=score_test\n\n\nprint n_out ",
"2.) Loss Criterion",
"crit_param = ['gini','entropy']\nscore=0\nfor feature in crit_param:\n clf = RandomForestClassifier(n_estimators=n_out, criterion=feature, oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n if score_test>score:\n crit_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint crit_out ",
"3.) Number of features considered at each split",
"feat_param = ['sqrt','log2',None]\nscore=0\nfor feature in feat_param:\n clf = RandomForestClassifier(n_estimators=n_out, criterion=crit_out, max_features=feature,\\\n oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n if score_test>score:\n max_feat_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint max_feat_out ",
"4.) Maximum depth of tree",
"score=0\nfor feature in range(1,21):\n clf = RandomForestClassifier(n_estimators=n_out, criterion=crit_out, max_features=max_feat_out, \\\n max_depth=feature, oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n if score_test>score:\n depth_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint depth_out \n",
"5.) The number of samples available in order to make another split",
"score=0\nscores=[]\nfor feature in range(1,21):\n clf = RandomForestClassifier(n_estimators=n_out, criterion=crit_out,\\\n max_features=max_feat_out, max_depth=depth_out,\\\n min_samples_split=feature, oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n scores.append(score_test)\n if score_test>=score:\n sample_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint sample_out \n",
"6.) Min required weighted fraction of samples in a leaf or node",
"score=0\n\nfor feature in np.linspace(0.0,0.5,10):\n clf = RandomForestClassifier(n_estimators=n_out, criterion=crit_out,\\\n max_features=max_feat_out, max_depth=depth_out,\\\n min_samples_split=sample_out, min_weight_fraction_leaf=feature, \\\n oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n scores.append(score_test)\n if score_test>score:\n frac_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint frac_out",
"7.) Maximum possible number of nodes",
"#max_leaf_nodes - Note here we don't reset score because in order to use this variable we'll need to change other stuff\n\nnode_out=None\n\nfor feature in range(2,11):\n clf = RandomForestClassifier(n_estimators=n_out, criterion=crit_out,\\\n max_features=max_feat_out, max_depth=depth_out,\\\n min_samples_split=sample_out, min_weight_fraction_leaf=frac_out, \\\n max_leaf_nodes=feature, oob_score=True, random_state=7112016)\n clf.fit(titanic_features,titanic_target)\n score_test = clf.oob_score_\n scores.append(score_test)\n if score_test>score:\n node_out=feature\n score_diff=score_test-score\n score=score_test\n\nprint node_out\n\nmodel=RandomForestClassifier(n_estimators=n_out, criterion=crit_out,\\\n max_features=max_feat_out, max_depth=depth_out,\\\n min_samples_split=sample_out, min_weight_fraction_leaf=frac_out, \\\n max_leaf_nodes=node_out, random_state=7112016).fit(titanic_features, titanic_target)\n\ntest_data=pd.read_csv('./test.csv')\n\ntest_data.Sex.replace(['male','female'],[True,False], inplace=True)\ntest_data.Age= test_data.groupby(['Sex','Pclass'])[['Age']].transform(lambda x: x.fillna(x.mean()))\ntest_data.Fare= titanic.groupby(['Pclass'])[['Fare']].transform(lambda x: x.fillna(x.mean()))\ntitanic_class=pd.get_dummies(test_data.Pclass,prefix='Pclass',dummy_na=False)\ntest_data=pd.merge(test_data,titanic_class,on=test_data['PassengerId'])\ntest_data=pd.merge(test_data,pd.get_dummies(test_data.Embarked, prefix='Emb', dummy_na=True), on=test_data['PassengerId'])\ntitanic['Floor']=titanic['Cabin'].str.extract('^([A-Z])', expand=False)\ntitanic['Floor'].replace(to_replace='T',value=np.NaN ,inplace=True)\ntitanic=pd.merge(titanic,pd.get_dummies(titanic.Floor, prefix=\"Fl\", dummy_na=True),on=titanic['PassengerId'])\ntest_data['Age_cut']=pd.cut(test_data['Age'],[0,17.9,64.9,99], labels=['C','A','S'])\ntest_data=pd.merge(test_data,pd.get_dummies(test_data.Age_cut, prefix=\"Age_ct\", dummy_na=False),on=test_data['PassengerId'])\n\ntest_data['Title']=test_data['Name'].str.extract(', (.*)\\.', expand=False)\ntest_data['Title'].replace(to_replace='Mrs\\. 
.*',value='Mrs', inplace=True, regex=True)\ntest_data.loc[test_data.Title.isin(['Col','Major','Capt']),['Title']]='Mil'\ntest_data.loc[test_data.Title=='Mlle',['Title']]='Miss'\ntest_data.loc[test_data.Title=='Mme',['Title']]='Mrs'\ntest_data['Title_ct']=test_data.groupby(['Title'])['Title'].transform('count')\ntest_data.loc[test_data.Title_ct<5,['Title']]='Other'\ntest_data=pd.merge(test_data,pd.get_dummies(test_data.Title, prefix='Ti',dummy_na=False), on=test_data['PassengerId'])\n\ntest_data['NameTest']=test_data.Name\ntest_data['NameTest'].replace(to_replace=\" \\(.*\\)\",value=\"\",inplace=True, regex=True)\ntest_data['NameTest'].replace(to_replace=\", M.*\\.\",value=\", \",inplace=True, regex=True)\n\n\ncols_to_norm=['Age','Fare']\ncol_norms=['Age_z','Fare_z']\n\ntest_data['Age_z']=(test_data.Age-titanic.Age.mean())/titanic.Age.std()\ntest_data['Fare_z']=(test_data.Fare-titanic.Fare.mean())/titanic.Fare.std()\n\ntest_data['cabin_clean']=(pd.notnull(test_data.Cabin))\n\n\nname_list=pd.concat([titanic[['PassengerId','NameTest']],test_data[['PassengerId','NameTest']]])\nname_list['Sp_ct']=name_list.groupby('NameTest')['NameTest'].transform('count')-1\ntest_data=pd.merge(test_data,name_list[['PassengerId','Sp_ct']],on='PassengerId',how='left')\n\ndef add_cols(var_check,df):\n\n if var_check not in df.columns.values:\n df[var_check]=0\n\nfor x in features:\n add_cols(x, test_data)\n \nfeatures=['Sex','SibSp','Parch','Pclass_1','Pclass_2','Pclass_3','Emb_C','Emb_Q','Emb_S',\\\n 'Emb_nan','Age_ct_C','Age_ct_A','Age_ct_S', 'Sp_ct','Age_z','Fare_z',\\\n 'Ti_Dr', 'Ti_Master', 'Ti_Mil', 'Ti_Miss', 'Ti_Mr', 'Ti_Mrs', 'Ti_Other', 'Ti_Rev',\\\n 'Fl_AB', 'Fl_CD', 'Fl_EFG', 'Fl_nan']\ntest_features=test_data[features].values\n\npredictions=model.predict(ensemble_features)\nensemble_rf=pd.DataFrame({'rf_pred':predictions})\nensemble_rf.to_csv('./ensemble_rf.csv', index=False)\n\npredictions=model.predict(test_features)\ntest_data['Survived']=predictions\nkaggle=test_data[['PassengerId','Survived']]\nkaggle.to_csv('./kaggle_titanic_submission_rf.csv', index=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ethen8181/machine-learning
|
trees/xgboost.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#XGBoost-API-Walkthrough\" data-toc-modified-id=\"XGBoost-API-Walkthrough-1\"><span class=\"toc-item-num\">1 </span>XGBoost API Walkthrough</a></span><ul class=\"toc-item\"><li><span><a href=\"#Preparation\" data-toc-modified-id=\"Preparation-1.1\"><span class=\"toc-item-num\">1.1 </span>Preparation</a></span></li><li><span><a href=\"#XGBoost-Basics\" data-toc-modified-id=\"XGBoost-Basics-1.2\"><span class=\"toc-item-num\">1.2 </span>XGBoost Basics</a></span></li><li><span><a href=\"#Hyperparamter-Tuning-(Random-Search)\" data-toc-modified-id=\"Hyperparamter-Tuning-(Random-Search)-1.3\"><span class=\"toc-item-num\">1.3 </span>Hyperparamter Tuning (Random Search)</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style='custom2.css', plot_style=False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom operator import itemgetter\nfrom xgboost import XGBClassifier\nfrom scipy.stats import randint, uniform\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split, RandomizedSearchCV\n\n%watermark -a 'Ethen' -d -t -v -p numpy,pandas,xgboost,sklearn,matplotlib",
"XGBoost API Walkthrough\nQuoted from Quora: What is the difference between the R gbm (gradient boosting machine) and xgboost (extreme gradient boosting)?\n\nBoth xgboost (Extreme gradient boosting) and gbm follows the principle of gradient boosting. The name xgboost, though, actually refers to the engineering goal to push the limit of computations resources for boosted tree algorithms. Which is the reason why many people use xgboost. For model, it might be more suitable to be called as regularized gradient boosting, as it uses a more regularized model formalization to control overfitting.\n\nPreparation\nIn this toy example, we will be dealing with a binary classification task. We start off by generating a 20 dimensional artificial dataset with 1000 samples, where 8 features holding information, 3 are redundant and 2 repeated. And perform a train/test split. The testing data will be useful for validating the performance of our algorithms.",
"seed = 104\nX, y = make_classification(n_samples=1000, n_features=20, \n n_informative=8, n_redundant=3, \n n_repeated=2, random_state=seed)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)\n\n# confirm that the dataset is balanced,\n# that is the target variable is equally \n# distributed across both dataset\nprint('Train label distribution:')\nprint(np.bincount(y_train))\n\nprint('\\nTest label distribution:')\nprint(np.bincount(y_test))",
"We can use a decision tree classifier to establish our baseline and see if a more complex model is capable of beating it.",
"tree = DecisionTreeClassifier(random_state=seed, max_depth=6)\n\n# train classifier\ntree.fit(X_train, y_train)\n\n# predict output\ntree_y_pred = tree.predict(X_test)\ntree_y_pred_prob = tree.predict_proba(X_test)[:, 1]\n\n# evaluation\ntree_auc = roc_auc_score(y_test, tree_y_pred_prob)\nprint('auc:', tree_auc)",
"XGBoost Basics\nWe start by training a xgboost model using a fix set of parameters. For further details of the parameter (using scikit-learn like API) refer to the XGBoost Documentation: Python API documentation.",
"xgb_params_fixed = {\n 'learning_rate': 0.1,\n \n # use 'multi:softprob' for multi-class problems\n 'objective': 'binary:logistic',\n \n # length of the longest path from a root to a leaf\n 'max_depth': 6,\n \n # subsample ratio of columns when constructing each tree\n 'colsample_bytree': 0.8,\n \n # setting it to a positive value \n # might help when class is extremely imbalanced\n # as it makes the update more conservative\n 'max_delta_step': 1, \n 'n_estimators': 150,\n \n # use all possible cores for training\n 'n_jobs': -1\n}\nmodel_xgb = XGBClassifier(**xgb_params_fixed)\n\n# we also specify the evaluation dataset and metric\n# to record the model's performance history, note that\n# we can supply multiple evaluation metric by passig a \n# list to `eval_metric`\neval_set = [(X_train, y_train), (X_test, y_test)]\nmodel_xgb.fit(X_train, y_train, eval_metric='auc', eval_set=eval_set, verbose=False)",
"We can retrieve the performance of the model on the evaluation dataset and plot it to get insight into the training process. The evals_results_ dictionary stores the validation_0 and validation_1 as its first key. This corresponds to the order that datasets were provided to the eval_set argument. The second key is the eval_metric that were provided.",
"# change default figure and font size\nplt.rcParams['figure.figsize'] = 10, 8\nplt.rcParams['font.size'] = 12\n\n\nhistory = model_xgb.evals_result_\nx_axis = range(len(history['validation_0']['auc']))\nplt.plot(x_axis, history['validation_0']['auc'], label='Train')\nplt.plot(x_axis, history['validation_1']['auc'], label='Test')\nplt.legend(loc = 'best')\nplt.ylabel('AUC')\nplt.title('Xgboost AUC')\nplt.show()",
"From reviewing the plot, it looks like there is an opportunity to stop the learning early, since the auc score for the testing dataset stopped increasing around 80 estimators. Luckily, xgboost supports this functionality.\nEarly stopping works by monitoring the performance of the model that is being trained on a separate validation or test dataset and stopping the training procedure once the performance on the validation or test dataset has not improved after a fixed number of training iterations (we can specify the number). This will potentially save us a lot of time from training a model that does not improve its performance over time.\nThe evaluation measure may be the loss function that is being optimized to train the model (such as logarithmic loss), or an external metric of interest to the problem in general (such as the auc score that we've used above). The full list of performace measure that we can directly specify can be found at the eval_metric section of the XGBoost Doc: Learning Task Parameters.\nIn addition to specifying a evaluation metric and dataset, to use early stopping we also need to specify the early_stopping_rounds. This is essentially telling the model to stop the training process if the evaluation dataset's evaluation metric does not improve over this many rounds. Note that if multiple evaluation datasets or multiple evaluation metrics are provided in a list, then early stopping will use the last one in the list.\nFor example, we can check for no improvement in auc over the 10 rounds as follows:",
"# we set verbose to 10 so that it will print out the evaluation metric for the\n# evaluation dataset for every 10 round\nmodel_xgb.fit(X_train, y_train, \n eval_metric = 'auc', eval_set = eval_set,\n early_stopping_rounds = 5, verbose = 10)\n\n# we can then access the best number of tree and use it later for prediction\nprint('best iteration', model_xgb.best_ntree_limit)",
"Keep in mind that XGBoost will return the model from the last iteration, not the best one. Hence when making the prediction, we need to pass the ntree_limit parameter to ensure that we get the optimal model's prediction. And we can see from the result below that this is already better than our original decision tree model.",
"# print the model's performance\nntree_limit = model_xgb.best_ntree_limit\ny_pred_prob = model_xgb.predict_proba(X_test, ntree_limit=ntree_limit)[:, 1]\nprint('auc:', roc_auc_score(y_test, y_pred_prob))\n\ndef plot_xgboost_importance(xgboost_model, feature_names, threshold=5):\n \"\"\"\n Improvements on xgboost's plot_importance function, where \n 1. the importance are scaled relative to the max importance, and \n number that are below 5% of the max importance will be chopped off\n 2. we need to supply the actual feature name so the label won't \n just show up as feature 1, feature 2, which are not very interpretable\n \n returns the important features's index sorted in descending order\n \"\"\"\n # convert from dictionary to tuples and sort by the\n # importance score in ascending order for plotting purpose\n importance = xgboost_model.get_booster().get_score(importance_type='gain')\n tuples = [(int(k[1:]), importance[k]) for k in importance]\n tuples = sorted(tuples, key = itemgetter(1))\n labels, values = zip(*tuples)\n\n # make importances relative to max importance,\n # and filter out those that have smaller than 5%\n # relative importance (threshold chosen arbitrarily)\n labels, values = np.array(labels), np.array(values)\n values = np.round(100 * values / np.max(values), 2)\n mask = values > threshold\n labels, values = labels[mask], values[mask]\n feature_labels = feature_names[labels]\n\n ylocs = np.arange(values.shape[0])\n plt.barh(ylocs, values, align='center')\n for x, y in zip(values, ylocs):\n plt.text(x + 1, y, x, va='center')\n\n plt.ylabel('Features')\n plt.xlabel('Relative Importance Score')\n plt.title('Feature Importance Score')\n plt.xlim([0, 110])\n plt.yticks(ylocs, feature_labels)\n\n # revert the ordering of the importance\n return labels[::-1]\n\n# we don't actually have the feature's actual name as those\n# were simply randomly generated numbers, thus we simply supply\n# a number ranging from 0 ~ the number of features\nfeature_names = np.arange(X_train.shape[1])\nplot_xgboost_importance(xgboost_model=model_xgb, feature_names=feature_names)",
"Side note: Apart from using the built-in evaluation metric, we can also define one ourselves. The evaluation metric should be a function that takes two argument y_pred, y_true (it doesn't have to named like this). It is assumed that y_true will be a DMatrix object so that we can call the get_label method to access the true labels. As for the return value, the function ust return a str, value pair where the str is a name for the evaluation metric and value is the value of the evaluation. This objective is always minimized.",
"def misclassified(y_pred, y_true):\n \"\"\"\n custom evaluation metric for xgboost, the metric\n counts the number of misclassified examples assuming \n that classes with p>0.5 are positive\n \"\"\"\n labels = y_true.get_label() # obtain true labels\n preds = y_pred > 0.5 # obtain predicted values\n return 'misclassified', np.sum(labels != preds)\n\n\nmodel_xgb.fit(X_train, y_train, \n eval_metric=misclassified, eval_set=eval_set,\n early_stopping_rounds=5, verbose=10)\n\nntree_limit = model_xgb.best_ntree_limit\ny_pred_prob = model_xgb.predict_proba(X_test, ntree_limit=ntree_limit)[:, 1]\nprint('auc:', roc_auc_score(y_test, y_pred_prob))",
"Another example of writing the customized rsquared evaluation metric.\n```python\ndef rsquared(y_pred, y_true):\n \"\"\"rsquared evaluation metric for xgboost's regression\"\"\"\n labels = y_true.get_label()\n sse = np.sum((labels - y_pred) 2)\n sst = np.sum((labels - np.mean(labels)) 2)\n rsquared = 1 - sse / sst\n# note that the documentation says the \n# objective function is minimized, thus\n# we take the negative sign of rsquared\nreturn 'r2', -rsquared\n\n```\nHyperparamter Tuning (Random Search)\nNext, since overfitting is a common problem with sophisticated algorithms like gradient boosting, we'll introduce ways to tune the model's hyperparameter and deal with them. If a xgboost model is too complex we can try:\n\nReduce max_depth, the depth of each tree.\nIncrease min_child_weight, minimum sum of observation's weight needed in a child (think of it as the number of observation's needed in a tree's node).\nIncrease gamma, the minimum loss reduction required to make a further partition.\nIncrease regularization parameters, reg_lambda (l2 regularization) and reg_alpha (l1 regularization).\nAdd more randomness by using subsample (the fraction of observations to be randomly samples for fitting each tree), colsample_bytree (the fraction of columns to be randomly samples for fitting each tree) parameters.\n\nWe'll use a Random Search to tune the model's hyperparameter.",
"def build_xgboost(X_train, y_train, X_test, y_test, n_iter):\n \"\"\"\n random search hyperparameter tuning for xgboost\n classification task, n_iter controls the number\n of hyperparameter combinations that it will search for\n \"\"\"\n # xgboost base parameter:\n xgb_param_fixed = { \n # setting it to a positive value \n # might help when class is extremely imbalanced\n # as it makes the update more conservative\n 'max_delta_step': 1,\n \n # use all possible cores for training\n 'n_jobs': -1,\n \n # set number of estimator to a large number\n # and the learning rate to be a small number,\n # we'll let early stopping decide when to stop\n 'n_estimators': 300,\n 'learning_rate': 0.1}\n xgb_base = XGBClassifier(**xgb_param_fixed)\n\n # random search's parameter:\n # scikit-learn's random search works with distributions; \n # but it must provide a rvs method for sampling values from it,\n # such as those from scipy.stats.distributions\n # randint: discrete random variables ranging from low to high\n # uniform: uniform continuous random variable between loc and loc + scale\n xgb_param_options = {\n 'max_depth': randint(low=3, high=15),\n 'colsample_bytree': uniform(loc=0.7, scale=0.3),\n 'subsample': uniform(loc=0.7, scale=0.3)}\n \n eval_set = [(X_train, y_train), (X_test, y_test)]\n xgb_fit_params = { \n 'eval_metric': 'auc', \n 'eval_set': eval_set,\n 'early_stopping_rounds': 5,\n 'verbose': False\n }\n\n model_xgb = RandomizedSearchCV(\n estimator=xgb_base,\n param_distributions=xgb_param_options,\n cv=10, \n \n # number of parameter settings that are sampled\n n_iter=n_iter,\n \n # n_jobs can be a parameter (since it's a fast task\n # for this toy dataset, we'll simply we using 1 jobs)\n n_jobs=1,\n verbose=1\n ).fit(X_train, y_train, **xgb_fit_params)\n \n print('Best score obtained: {0}'.format(model_xgb.best_score_))\n print('Best Parameters:')\n for param, value in model_xgb.best_params_.items():\n print('\\t{}: {}'.format(param, value))\n\n return model_xgb.best_estimator_\n\nxgb_model = build_xgboost(X_train, y_train, X_test, y_test, n_iter=15)\nntree_limit = xgb_model.best_ntree_limit\ny_pred_prob = xgb_model.predict_proba(X_test, ntree_limit=ntree_limit)[:, 1]\nprint('auc:', roc_auc_score(y_test, y_pred_prob))",
"Reference\n\nOnline Course: practical xgboost in python\nXGBoost Documentation: Python API documentation\nBlog: Complete Guide to Parameter Tuning in XGBoost\nBlog: Avoid Overfitting By Early Stopping With XGBoost In Python"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pfschus/fission_bicorrelation
|
methods/build_bicorr_hist_master.ipynb
|
mit
|
[
"<div id=\"toc\"></div>\n\n<div id=\"toc\"></div>",
"%%javascript\n$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')",
"Build Bicorrelation Histogram\nAuthor: Patricia Schuster\nAffiliation: University of Michigan\nDate: January 2017\nUpdates:\n 6/30/2017: Implement usage of det_df for alloc_bhm and fill_bhm (Steps 4, 5)\n 7/5/2017: Update built_bhm to use det_df, take dt_bin_edges as input (Functionalize)\nGoal\nMy goal is to build a script or notebook that will produce bicorrelation plots for the FNPC fission CVT project.\nThe plots will be 2d histograms of $\\Delta t_1$ vs. $\\Delta t_2$ for two detectors, where $\\Delta t$ for each detector is the time of an interaction relative to the time of the corresponding fission chamber interaction. \nSteps in the analysis:\n1) Load the data from bicorr1 (produced by bicorr.generate_bicorr)\n2) Make some benchmark plots to make sure the bicorr1 data looks like what we expect\n3) Fill a giant matrix with histogram information about detector pair interactions",
"import matplotlib.pyplot as plt\nimport matplotlib.colors\nimport numpy as np\nimport os\nimport scipy.io as sio\nimport sys\nimport time\nimport inspect\nimport pandas as pd\nfrom tqdm import *\n\n# Plot entire array\nnp.set_printoptions(threshold=np.nan)\n\nimport seaborn as sns\nsns.set_palette('spectral')\n\nsys.path.append('../scripts/')\nimport bicorr as bicorr\n\n%load_ext autoreload\n%autoreload 2",
"Step 1) Load the data from bicorr1\nI need to load the bicorr# file from each subfolder. Start by using the same technique I used to import the cced file. I built this into a function called bicorr.load_bicorr.",
"help(bicorr.load_bicorr)\n\nbicorr_data = bicorr.load_bicorr(1, root_path = '../datar')\n\nbicorr_data.shape",
"I want to work with larger data, so I'm going to pull 10000 lines from the bicorr1 file from flux (rather than the tiny bicorr1 file produced from cced1_part. \nI used the same technique as before on flux:\nhead -n 10000 bicorr1 > bicorr1_part\nAnd then I copied that to datar > 1",
"bicorr_data = bicorr.load_bicorr(bicorr_path = '../datar/1/bicorr1_part')\n\nbicorr_data.shape",
"I can parse this array in the same way I did with the cced file.",
"# Print all info from first line\nprint(bicorr_data[:][0])\n# Print event number from line 0\nprint(bicorr_data['event'][0])",
"Step 2) Make some benchmark plots of the 1/bicorr1 data\n\nWhich pairs had bicorrelation events?\nCount rates for each detector pair\nEvent number vs. line\n\nI wrote myself a convenient function for saving plots to file:",
"help(bicorr.save_fig_to_folder)\n\nfig_folder = 'fig'",
"Detector pairs that had bicorrelation events",
"# Which detector pairs fired?\nplt.plot(bicorr_data['det1ch'],bicorr_data['det2ch'],'.k')\nplt.xlabel('Detector 1 channel')\nplt.ylabel('Detector 2 channel')\nplt.title('Detector pairs with bicorrelation events')\nbicorr.save_fig_to_folder('bicorr_pairs_scatter',fig_folder)\nplt.show()",
"Count rate for each detector pair",
"plt.figure(figsize=(7,5))\nplt.hist2d(bicorr_data['det1ch'],bicorr_data['det2ch'],bins=np.arange(-0.5,46.5,1),cmin=1,cmap='viridis')\nplt.ylim([-.5,46.5])\nplt.colorbar()\nplt.grid(True, which='both')\nplt.xticks([i for i in np.arange(0,46,4)])\nplt.yticks([i for i in np.arange(0,46,4)])\nplt.xlabel('Detector 1 channel')\nplt.ylabel('Detector 2 channel')\nplt.title('Frequency of detector pair interactions')\nbicorr.save_fig_to_folder('bicorr_pairs_2dhist',fig_folder)\nplt.show()",
"I should take some time to explain the features on this plot. There are a few interesting observables:\n\nThe first detector channel on each board has a noticeably smaller count rate (blue bands). This is because of an electronics issue that the first channel on each board will only write out if there is a trigger in a detector on the same board. Thus, if the bicorrelation event happens on that channel and a detector on another board, it won't be counted.\nThe highest values along the diagonal correspond to detectors that are directly adjacent to one another, and will therefore experience more cross talk.",
"# Plot event number vs. line in\nplt.plot(bicorr_data['event'])\nplt.xlabel('Line number')\nplt.ylabel('Event number')\nplt.title('Event number vs. line number')\nbicorr.save_fig_to_folder('/bicorr_all_evnum',fig_folder)\nplt.show()",
"Try it as a function",
"bicorr.bicorr_checkpoint_plots(bicorr_data, show_flag=True)",
"Step 3) Preallocate massive matrix\nWe are going to store the bicorr events in a giant array of histograms called bicorr_hist_master, or bhm for short. The dimensions in the matrix will be:\n0: 990 in length: detector pair index (indicated using dictionary det_pair_dict)\n1: 4 in length: interaction type (0=nn, 1=np, 2=pn, 3=pp)\n2: $\\Delta t_1$ for detector 1\n3: $\\Delta t_2$ for detector 2 \nThus, calling bicorr_hist_master[0][0][:][:] will produce the 2-d histogram for the 0th detector pair (ch 1 and ch 2) for nn interactions.\nSet up this matrix.\nTime bins\nWrite a function that will generate dt_bin_edges. The length of this array will govern the number of bins in dimensions 2 ($\\Delta t_1$) and 3 ($\\Delta t_2$).",
"help(bicorr.build_dt_bin_edges)\n\ndt_bin_edges, num_dt_bins = bicorr.build_dt_bin_edges(print_flag=True)",
"Interaction type bins",
"# Number of bins in interaction type\nnum_intn_types = 4 #(0=nn, 1=np, 2=pn, 3=pp)",
"Detector pair bins",
"# What are the unique detector numbers? Use same technique as in bicorr.py\nchList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag=True)\n\n# Pre-allocate matrix\n# Note: This is very memory intensive... can't run it too many times or python crashes.\nbicorr_hist_master = np.zeros((num_det_pairs,num_intn_types,num_dt_bins,num_dt_bins),dtype=np.uint32)\n\nbicorr_hist_master.shape",
"How large will this be when we store it to disk?",
"bicorr_hist_master.nbytes/1e9",
"I had originally made the time range go from -200 to 200 ns, making for a 37 GB array. Now I am going from -50 to 200 which cuts it down to 15.84 GB. Now I can have two loaded in memory at a time. The further option still exists to make the time binning coarser to .5 ns instead of .25 ns.\nFunctionalize this\nCreate a general method for allocating bicorr_hist_master based on default settings.",
"bicorr_hist_master = bicorr.alloc_bhm(num_det_pairs,num_intn_types,num_dt_bins)",
"Step 4) Fill the histogram\nI'm updating this with latest methods. I am not going to rerun the first two attempts, so they may not work in the future. I'm only going to update the third attempt, filling the histogram line-by-line.\nFirst attempt: use np.where to find indices, fill with np.histogram2d\nFind the indices of the lines that will fill each detector type (based on detector pair and interaction type).\nIf this doesn't work, go line by line and fill the bins one at a time. But that's so boring.",
"# Pre-allocate matrix\n# Note: This is very memory intensive... can't run it too many times or python crashes.\nbicorr_hist_master = bicorr.alloc_bicorr_hist_master(num_det_pairs,num_intn_types,num_dt_bins)\n\n# Loaded above in Step 1\nbicorr_data.shape\n\n# Store a bunch of indices. Hopefully I won't run out of memory...\n# Find event indices for each event type\ntype0_indices = np.where(np.logical_and(bicorr_data['det1par']==1,bicorr_data['det2par']==1))\ntype1_indices = np.where(np.logical_and(bicorr_data['det1par']==1,bicorr_data['det2par']==2))\ntype2_indices = np.where(np.logical_and(bicorr_data['det1par']==2,bicorr_data['det2par']==1))\ntype3_indices = np.where(np.logical_and(bicorr_data['det1par']==2,bicorr_data['det2par']==2))\n\nnp.sum(type0_indices[0].shape+type1_indices[0].shape+type2_indices[0].shape+type3_indices[0].shape)",
"How can I quickly return the index?",
"# Set up dictionary for returning detector pair index\ndet_df = bicorr.load_det_df()\ndict_index_to_pair = det_df['d1d2'].to_dict()\ndict_pair_to_index = {v: k for k, v in dict_index_to_pair.items()}\n\n# Loop through all pairs and fill the histogram\nstart_time = time.time()\ncount = 0 # Running total of selected indices \ncount2 = 0 # Running sum of histogram counts\nfor i1 in np.arange(0, num_dets): # Just run det1ch = 1 for the time being due to memory constraints.\n det1ch = detList[i1]\n print(str(det1ch)+'---')\n for i2 in np.arange(i1+1,num_dets):\n det2ch = detList[i2]\n det_pair_indices = np.where(np.logical_and(bicorr_data['det1ch']==det1ch,bicorr_data['det2ch']==det2ch))\n \n # type interactions\n selected_type0 = np.intersect1d(det_pair_indices, type0_indices)\n selected_type1 = np.intersect1d(det_pair_indices, type1_indices)\n selected_type2 = np.intersect1d(det_pair_indices, type2_indices)\n selected_type3 = np.intersect1d(det_pair_indices, type3_indices)\n count = count + np.sum(selected_type0.shape+selected_type1.shape+selected_type2.shape+selected_type3.shape)\n \n # Adding these count2 lines for debugging to search for the error\n count2 += np.sum(np.histogram2d(bicorr_data['det1t'][selected_type0],bicorr_data['det2t'][selected_type0],bins=dt_bin_edges)[0])\n count2 += np.sum(np.histogram2d(bicorr_data['det1t'][selected_type1],bicorr_data['det2t'][selected_type1],bins=dt_bin_edges)[0])\n count2 += np.sum(np.histogram2d(bicorr_data['det1t'][selected_type2],bicorr_data['det2t'][selected_type2],bins=dt_bin_edges)[0])\n count2 += np.sum(np.histogram2d(bicorr_data['det1t'][selected_type3],bicorr_data['det2t'][selected_type3],bins=dt_bin_edges)[0])\n \n bicorr_hist_master[dict_pair_to_index[100*det1ch+det2ch],0,:,:] = np.histogram2d(bicorr_data['det1t'][selected_type0],bicorr_data['det2t'][selected_type0],bins=dt_bin_edges)[0]\n bicorr_hist_master[dict_pair_to_index[100*det1ch+det2ch],1,:,:] = np.histogram2d(bicorr_data['det1t'][selected_type1],bicorr_data['det2t'][selected_type1],bins=dt_bin_edges)[0]\n bicorr_hist_master[dict_pair_to_index[100*det1ch+det2ch],2,:,:] = np.histogram2d(bicorr_data['det1t'][selected_type2],bicorr_data['det2t'][selected_type2],bins=dt_bin_edges)[0]\n bicorr_hist_master[dict_pair_to_index[100*det1ch+det2ch],3,:,:] = np.histogram2d(bicorr_data['det1t'][selected_type3],bicorr_data['det2t'][selected_type3],bins=dt_bin_edges)[0]\n\nprint('Run time: ', time.time()-start_time) \nprint('Sum of bicorr_hist_master: ', np.sum(bicorr_hist_master))\nprint('Running total of type_indices_#: ', count)\nprint('Running total of histogram sum: ', count2)",
"This shows that events are getting lost at some point. The total number of events in the histogram do not match the number of events in bicorr_data. Where are they going? It is possible that those are events outside of the boundaries of dt_bin_edges. How many events are getting lost?",
"# How many lines are left out of the histogram?\ncount-count2",
"Are the missing events those in which the time is out of range of the time axes?",
"t_min = dt_bin_edges[0]; t_max = dt_bin_edges[-1]\nprint(t_min,t_max)\n\nover1 = np.ndarray.flatten(np.argwhere(bicorr_data['det1t']>=t_max))\nunder1 = np.ndarray.flatten(np.argwhere(bicorr_data['det1t']<=t_min))\nover2 = np.ndarray.flatten(np.argwhere(bicorr_data['det2t']>=t_max))\nunder2 = np.ndarray.flatten(np.argwhere(bicorr_data['det2t']<=t_min))\n\noverunder = np.concatenate([over1, under1, over2, under2])\nnp.unique(overunder).shape",
"There are more events outside of the range of dt_bin_edges than are being left out of the histogram. So I am not sure what this means. Last time the numbers were exactly the same.\nI believe the problem is in using np.histogram2d, which returns a overflow warning. I believe the problem is in passing in the time stamps as floats, there is an overflow error, which is causing some counts to be lost. \nI am not going to spend any more time on this because the next section shows that the line-by-line method is much faster and I will use that method instead.\nSecond attempt: Fill the array all at once using np.histogramdd\nnp.histogramdd, documentation here: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogramdd.html#numpy.histogramdd\nCompute the multidimensional histogram of some data.\nThe four dimensions are:\n\nDetector pair.\nInteraction type\ndet1t\ndet2t\n\nThis will take some work, since I will have to figure out how to provide the bins of the first two dimensions. Keep this in mind, but move on to the next method.\nThird attempt: Fill line-by-line by calculating bin index\nInstead of filling time histograms for given event type and detector pair, simply go through line by line and increase the count in the corresponding bin from bicorr_hist_master.",
"bhm = bicorr.alloc_bhm(num_det_pairs,num_intn_types,num_dt_bins)\n\n# Set up dictionary for returning detector pair index\ndet_df = bicorr.load_det_df()\ndict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair(det_df)\n\n# Type index\ndict_type_to_index = {11:0, 12:1, 21:2, 22:3}\n\n# Time indices\ndt_min = np.min(dt_bin_edges); dt_max = np.max(dt_bin_edges)\ndt_step = dt_bin_edges[1]-dt_bin_edges[0]",
"I will use the following equation to determine the time bin. \n$$ \\text{bin}(t) = \\lfloor \\frac{t-t_{min}}{\\Delta t}\\rfloor$$",
"count_loss = 0\nstart_time = time.time()\nfor i in tqdm(np.arange(bicorr_data.shape[0]),ascii=True):\n # What is the corresponding bin number for the four dimensions?\n ## Detector pair index\n pair_i = dict_pair_to_index[bicorr_data[i]['det1ch']*100+bicorr_data[i]['det2ch']]\n ## Event type index\n type_i = dict_type_to_index[bicorr_data[i]['det1par']*10+bicorr_data[i]['det2par']]\n ## Time 1 index\n t1_i = int(np.floor((bicorr_data[i]['det1t']-dt_min)/dt_step))\n t1_i_check = np.logical_and(t1_i>=0, t1_i<num_dt_bins) # Within range?\n ## Time 2 index\n t2_i = int(np.floor((bicorr_data[i]['det2t']-dt_min)/dt_step))\n t2_i_check = np.logical_and(t2_i>=0, t2_i<num_dt_bins) # Within range?\n \n if np.logical_and(t1_i_check, t2_i_check): \n # Increment the corresponding bin\n bhm[pair_i,type_i,t1_i,t2_i] += 1\n else:\n count_loss += 1\n \n \nprint(np.sum(bhm))\nprint(time.time()-start_time)\n\ncount_loss",
"This method is much faster so we will use it.\nFunctionalize this with default settings",
"print(inspect.getsource(bicorr.fill_bhm))\n\nbhm = bicorr.alloc_bhm(num_det_pairs,num_intn_types,num_dt_bins)\n\nbhm = bicorr.fill_bhm(bhm, bicorr_data, det_df, dt_bin_edges)",
"Step 5) Convert to sparse matrix\nAs developed in my notebook implement_sparse_matrix.",
"sparse_bhm = bicorr.generate_sparse_bhm(bhm)",
"Step 6) Save the histogram and related vectors to disk\nThe full bicorr_hist_master array\nI'm going to use np.savez to save several arrays in the same .npz file.\nCAUTION: This will store at 15 GB file to your hard drive. I can verify that it works. No need to repeat it. Instead, use the sparse matrix technique.",
"# np.savez('../datar/1/bicorr_hist_master', bicorr_hist_master = bicorr_hist_master, dict_pair_to_index=dict_pair_to_index, dt_bin_edges=dt_bin_edges)",
"The sparse bicorr_hist_master array\nI also developed a method for storing the bicorr_hist_master as a sparse array, which requires much less space. Save that to disk instead.",
"bicorr.save_sparse_bhm(sparse_bhm, det_df, dt_bin_edges, save_folder = '../datar/1')",
"Functionalize it\nPut it all together in a function with default settings\nIt will be convenient to generate the bicorrelation histogram for data in one step based on default settings. \nThe steps are:\n\nLoad relevant information\ndet_df\nroot_path\ndt_bin_edges\nnum_det_pairs, num_intn_types\n\n\nAllocate empty bicorr hist master: bhm\nLoop through all folders\nLoad bicorr_data\nCreate checkpoint plots\nFill bhm\n\n\nIf save_flag\nBuild sparse matrix\nSave sparse matrix to file\n\n\nReturn bhm, dt_bin_edges",
"help(bicorr.build_bhm)\n\nbhm, dt_bin_edges = bicorr.build_bhm(det_df,1,2,dt_bin_edges =dt_bin_edges, checkpoint_flag = True,save_flag = False, disable_tqdm = True)\nbicorr_hist_master.shape\n\nbhm, dt_bin_edges = bicorr.build_bhm(root_path = '../datar/')\n\nbhm, dt_bin_edges = bicorr.build_bhm(folder_end = 3, root_path = '../datar/')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joommf/tutorial
|
workshops/2017-04-24-Intermag2017/tutorial3_dynamics.ipynb
|
bsd-3-clause
|
[
"Tutorial 3 - Dynamics\nThe dynamics of magnetisation field $\\mathbf{m}$ is governed by the Landau-Lifshitz-Gilbert (LLG) equation\n$$\\frac{d\\mathbf{m}}{dt} = \\underbrace{-\\gamma_{0}(\\mathbf{m} \\times \\mathbf{H}\\text{eff})}\\text{precession} + \\underbrace{\\alpha\\left(\\mathbf{m} \\times \\frac{d\\mathbf{m}}{dt}\\right)}_\\text{damping},$$\nwhere $\\gamma_{0}$ is the gyromagnetic ratio, $\\alpha$ is the Gilbert damping, and $\\mathbf{H}_\\text{eff}$ is the effective field. It consists of two terms: precession and damping. In this exercise, we will explore some basic properties of this equation to understand how to define it in simulations.\nWe will study the simplest \"zero-dimensional\" case - macrospin. In the first step, after we import necessary modules (oommfc and discretisedfield), we create the mesh which consists of a single finite difference cell.",
"import oommfc as oc\nimport discretisedfield as df\n%matplotlib inline\n\n# Define macro spin mesh (i.e. one discretisation cell).\np1 = (0, 0, 0) # first point of the mesh domain (m)\np2 = (1e-9, 1e-9, 1e-9) # second point of the mesh domain (m)\ncell = (1e-9, 1e-9, 1e-9) # discretisation cell size (m)\nmesh = oc.Mesh(p1=p1, p2=p2, cell=cell)",
"Now, we can create a micromagnetic system object.",
"system = oc.System(name=\"macrospin\")",
"Let us assume we have a simple Hamiltonian which consists of only Zeeman energy term\n$$\\mathcal{H} = -\\mu_{0}M_\\text{s}\\mathbf{m}\\cdot\\mathbf{H},$$\nwhere $M_\\text{s}$ is the saturation magnetisation, $\\mu_{0}$ is the magnetic constant, and $\\mathbf{H}$ is the external magnetic field. For more information on defining micromagnetic Hamiltonians, please refer to the Hamiltonian tutorial. We apply the external magnetic field with magnitude $H = 2 \\times 10^{6} \\,\\text{A}\\,\\text{m}^{-1}$ in the positive $z$ direction.",
"H = (0, 0, 2e6) # external magnetic field (A/m)\nsystem.hamiltonian = oc.Zeeman(H=H)",
"In the next step we can define the system's dynamics. Let us assume we have $\\gamma_{0} = 2.211 \\times 10^{5} \\,\\text{m}\\,\\text{A}^{-1}\\,\\text{s}^{-1}$ and $\\alpha=0.1$.",
"gamma = 2.211e5 # gyromagnetic ratio (m/As)\nalpha = 0.1 # Gilbert damping\n\nsystem.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)",
"To check what is our dynamics equation:",
"system.dynamics",
"Before we start running time evolution simulations, we need to initialise the magnetisation. In this case, our magnetisation is pointing in the positive $x$ direction with $M_\\text{s} = 8 \\times 10^{6} \\,\\text{A}\\,\\text{m}^{-1}$. The magnetisation is defined using Field class from the discretisedfield package we imported earlier.",
"initial_m = (1, 0, 0) # vector in x direction\nMs = 8e6 # magnetisation saturation (A/m)\n\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)",
"Now, we can run the time evolution using TimeDriver for $t=0.1 \\,\\text{ns}$ and save the magnetisation configuration in $n=200$ steps.",
"td = oc.TimeDriver()\n\ntd.drive(system, t=0.1e-9, n=200)",
"How different system parameters vary with time, we can inspect by showing the system's datatable.",
"system.dt",
"However, in our case it is much more informative if we plot the time evolution of magnetisation $z$ component $m_{z}(t)$.",
"system.dt.plot(\"t\", \"mz\");",
"Similarly, we can plot all three magnetisation components",
"system.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"We can see that after some time the macrospin aligns parallel to the external magnetic field in the $z$ direction. We can explore the effect of Gilbert damping $\\alpha = 0.2$ on the magnetisation dynamics.",
"system.dynamics.damping.alpha = 0.2\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 1\nBy looking at the previous example, explore the magnetisation dynamics for $\\alpha=0.005$ in the following code cell.",
"# insert missing code here.\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 2\nRepeat the simulation with $\\alpha=0.1$ and H = (0, 0, -2e6).",
"# insert missing code here.\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 3\nKeep using $\\alpha=0.1$. \nChange the field from H = (0, 0, -2e6) to H = (0, -1.41e6, -1.41e6), and plot\n$m_x(t)$, $m_y(t)$ and $m_z(t)$ as above. Can you explain the (initially non-intuitive) output?",
"system.hamiltonian.zeeman.H = (0, -1.41e6, -1.41e6)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/statespace_concentrated_scale.ipynb
|
bsd-3-clause
|
[
"State space models - concentrating the scale out of the likelihood function",
"import numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\n\ndta = sm.datasets.macrodata.load_pandas().data\ndta.index = pd.date_range(start='1959Q1', end='2009Q4', freq='Q')",
"Introduction\n(much of this is based on Harvey (1989); see especially section 3.4)\nState space models can generically be written as follows (here we focus on time-invariant state space models, but similar results apply also to time-varying models):\n$$\n\\begin{align}\ny_t & = Z \\alpha_t + \\varepsilon_t, \\quad \\varepsilon_t \\sim N(0, H) \\\n\\alpha_{t+1} & = T \\alpha_t + R \\eta_t \\quad \\eta_t \\sim N(0, Q)\n\\end{align}\n$$\nOften, some or all of the values in the matrices $Z, H, T, R, Q$ are unknown and must be estimated; in statsmodels, estimation is often done by finding the parameters that maximize the likelihood function. In particular, if we collect the parameters in a vector $\\psi$, then each of these matrices can be thought of as functions of those parameters, for example $Z = Z(\\psi)$, etc.\nUsually, the likelihood function is maximized numerically, for example by applying quasi-Newton \"hill-climbing\" algorithms, and this becomes more and more difficult the more parameters there are. It turns out that in many cases we can reparameterize the model as $[\\psi_', \\sigma_^2]'$, where $\\sigma_^2$ is the \"scale\" of the model (usually, it replaces one of the error variance terms) and it is possible to find the maximum likelihood estimate of $\\sigma_^2$ analytically, by differentiating the likelihood function. This implies that numerical methods are only required to estimate the parameters $\\psi_*$, which has dimension one less than that of $\\psi$.\nExample: local level model\n(see, for example, section 4.2 of Harvey (1989))\nAs a specific example, consider the local level model, which can be written as:\n$$\n\\begin{align}\ny_t & = \\alpha_t + \\varepsilon_t, \\quad \\varepsilon_t \\sim N(0, \\sigma_\\varepsilon^2) \\\n\\alpha_{t+1} & = \\alpha_t + \\eta_t \\quad \\eta_t \\sim N(0, \\sigma_\\eta^2)\n\\end{align}\n$$\nIn this model, $Z, T,$ and $R$ are all fixed to be equal to $1$, and there are two unknown parameters, so that $\\psi = [\\sigma_\\varepsilon^2, \\sigma_\\eta^2]$.\nTypical approach\nFirst, we show how to define this model without concentrating out the scale, using statsmodels' state space library:",
"class LocalLevel(sm.tsa.statespace.MLEModel):\n _start_params = [1., 1.]\n _param_names = ['var.level', 'var.irregular']\n\n def __init__(self, endog):\n super(LocalLevel, self).__init__(endog, k_states=1, initialization='diffuse')\n\n self['design', 0, 0] = 1\n self['transition', 0, 0] = 1\n self['selection', 0, 0] = 1\n\n def transform_params(self, unconstrained):\n return unconstrained**2\n\n def untransform_params(self, unconstrained):\n return unconstrained**0.5\n\n def update(self, params, **kwargs):\n params = super(LocalLevel, self).update(params, **kwargs)\n\n self['state_cov', 0, 0] = params[0]\n self['obs_cov', 0, 0] = params[1]\n",
"There are two parameters in this model that must be chosen: var.level $(\\sigma_\\eta^2)$ and var.irregular $(\\sigma_\\varepsilon^2)$. We can use the built-in fit method to choose them by numerically maximizing the likelihood function.\nIn our example, we are applying the local level model to consumer price index inflation.",
"mod = LocalLevel(dta.infl)\nres = mod.fit(disp=False)\nprint(res.summary())",
"We can look at the results from the numerical optimizer in the results attribute mle_retvals:",
"print(res.mle_retvals)",
"Concentrating out the scale\nNow, there are two ways to reparameterize this model as above:\n\nThe first way is to set $\\sigma_^2 \\equiv \\sigma_\\varepsilon^2$ so that $\\psi_ = \\psi / \\sigma_\\varepsilon^2 = [1, q_\\eta]$ where $q_\\eta = \\sigma_\\eta^2 / \\sigma_\\varepsilon^2$.\nThe second way is to set $\\sigma_^2 \\equiv \\sigma_\\eta^2$ so that $\\psi_ = \\psi / \\sigma_\\eta^2 = [h, 1]$ where $h = \\sigma_\\varepsilon^2 / \\sigma_\\eta^2$.\n\nIn the first case, we only need to numerically maximize the likelihood with respect to $q_\\eta$, and in the second case we only need to numerically maximize the likelihood with respect to $h$.\nEither approach would work well in most cases, and in the example below we will use the second method.\nTo reformulate the model to take advantage of the concentrated likelihood function, we need to write the model in terms of the parameter vector $\\psi_* = [g, 1]$. Because this parameter vector defines $\\sigma_\\eta^2 \\equiv 1$, we now include a new line self['state_cov', 0, 0] = 1 and the only unknown parameter is $h$. Because our parameter $h$ is no longer a variance, we renamed it here to be ratio.irregular.\nThe key piece that is required to formulate the model so that the scale can be computed from the Kalman filter recursions (rather than selected numerically) is setting the flag self.ssm.filter_concentrated = True.",
"class LocalLevelConcentrated(sm.tsa.statespace.MLEModel):\n _start_params = [1.]\n _param_names = ['ratio.irregular']\n\n def __init__(self, endog):\n super(LocalLevelConcentrated, self).__init__(endog, k_states=1, initialization='diffuse')\n\n self['design', 0, 0] = 1\n self['transition', 0, 0] = 1\n self['selection', 0, 0] = 1\n self['state_cov', 0, 0] = 1\n \n self.ssm.filter_concentrated = True\n\n def transform_params(self, unconstrained):\n return unconstrained**2\n\n def untransform_params(self, unconstrained):\n return unconstrained**0.5\n\n def update(self, params, **kwargs):\n params = super(LocalLevelConcentrated, self).update(params, **kwargs)\n self['obs_cov', 0, 0] = params[0]\n",
"Again, we can use the built-in fit method to find the maximum likelihood estimate of $h$.",
"mod_conc = LocalLevelConcentrated(dta.infl)\nres_conc = mod_conc.fit(disp=False)\nprint(res_conc.summary())",
"The estimate of $h$ is provided in the middle table of parameters (ratio.irregular), while the estimate of the scale is provided in the upper table. Below, we will show that these estimates are consistent with those from the previous approach.\nAnd we can again look at the results from the numerical optimizer in the results attribute mle_retvals. It turns out that two fewer iterations were required in this case, since there was one fewer parameter to select. Moreover, since the numerical maximization problem was easier, the optimizer was able to find a value that made the gradient for this parameter slightly closer to zero than it was above.",
"print(res_conc.mle_retvals)",
"Comparing estimates\nRecall that $h = \\sigma_\\varepsilon^2 / \\sigma_\\eta^2$ and the scale is $\\sigma_*^2 = \\sigma_\\eta^2$. Using these definitions, we can see that both models produce nearly identical results:",
"print('Original model')\nprint('var.level = %.5f' % res.params[0])\nprint('var.irregular = %.5f' % res.params[1])\n\nprint('\\nConcentrated model')\nprint('scale = %.5f' % res_conc.scale)\nprint('h * scale = %.5f' % (res_conc.params[0] * res_conc.scale))",
"Example: SARIMAX\nBy default in SARIMAX models, the variance term is chosen by numerically maximizing the likelihood function, but an option has been added to allow concentrating the scale out.",
"# Typical approach\nmod_ar = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct')\nres_ar = mod_ar.fit(disp=False)\n\n# Estimating the model with the scale concentrated out\nmod_ar_conc = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct', concentrate_scale=True)\nres_ar_conc = mod_ar_conc.fit(disp=False)",
"These two approaches produce about the same loglikelihood and parameters, although the model with the concentrated scale was able to improve the fit very slightly:",
"print('Loglikelihood')\nprint('- Original model: %.4f' % res_ar.llf)\nprint('- Concentrated model: %.4f' % res_ar_conc.llf)\n\nprint('\\nParameters')\nprint('- Original model: %.4f, %.4f, %.4f, %.4f' % tuple(res_ar.params))\nprint('- Concentrated model: %.4f, %.4f, %.4f, %.4f' % (tuple(res_ar_conc.params) + (res_ar_conc.scale,)))",
"This time, about 1/3 fewer iterations of the optimizer are required under the concentrated approach:",
"print('Optimizer iterations')\nprint('- Original model: %d' % res_ar.mle_retvals['iterations'])\nprint('- Concentrated model: %d' % res_ar_conc.mle_retvals['iterations'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hannorein/rebound
|
ipython_examples/UniquelyIdentifyingParticlesWithHashes.ipynb
|
gpl-3.0
|
[
"Uniquely Identifying Particles With Hashes\nIn many cases, one can just identify particles by their position in the particle array, e.g. using sim.particles[5]. However, in cases where particles might get reordered in the particle array finding a particle might be difficult. This is why we added a hash attribute to particles.\nIn REBOUND particles might get rearranged when a tree code is used for the gravity or collision routine, when particles merge, when a particle leaves the simulation box, or when you manually remove or add particles. In general, therefore, the user should not assume that particles stay at the same index or in the same location in memory. The reliable way to access particles is to assign them hashes and to access particles through them. Assigning hashes make sim.particles to behave like Python's dict while keeping list-like integer-based indexing at the same time.\nNote: When you don't assign particles a hash, they automatically get set to 0. The user is responsible for making sure hashes are unique, so if you set up particles without a hash and later set a particle's hash to 0, you don't know which one you'll get back when you access hash 0. See Possible Pitfalls below.\nIn this example, we show the basic usage of the hash attribute.",
"import rebound\nsim = rebound.Simulation()\nsim.add(m=1., hash=999)\nsim.add(a=0.4, hash=\"mercury\")\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5., hash=\"jupiter\")\nsim.add(a=7.)",
"We can now not only access the Earth particle with:",
"sim.particles[2]",
"but also with",
"sim.particles[\"earth\"]",
"We can access particles with negative indices like a list. We can get the last particle with",
"sim.particles[-1]",
"We can also set hash after particle is added.",
"sim.particles[-1].hash = 'pluto'\nsim.particles['pluto']",
"Details\nWe usually use strings as hashes, however, under the hood hash is an unsigned integer (c_uint). There is a function rebound.hash that calculates actual hash of a string.",
"from rebound import hash as h\nh(\"earth\")",
"The same function can be applied to integers. In this case it just casts the value to the underlying C datatype (c_uint).",
"h(999)\n\nh(-2)",
"When we above set the hash to some value, REBOUND converted this value to an unsigned integer using the same rebound.hash function.",
"sim.particles[0].hash # particle was created with sim.add(m=1., hash=999)\n\nsim.particles[2].hash \n# particle was created with sim.add(a=1., hash=\"earth\")\n# so the hash is the same as h(\"earth\") above",
"When we use string as an index to access particle, function rebound.hash is applied to the index and a particle with this hash is returned. On the other hand, if we use integer index, it is not treated as a hash, REBOUND just returns a particle with given position in array, i.e. sim.particles[0] is the first particle, etc.\nWe can access particles through their hash directly. However, to differentiate from passing an integer index, we have to first cast the hash to the underlying C datatype by using rebound.hash manually.",
"sim.particles[h(999)]",
"which corresponds to particles[0] as it should. sim.particles[999] would try to access index 999, which doesn't exist in the simulation, and REBOUND would raise an AttributeError.\nThe hash attribute always returns the appropriate unsigned integer ctypes type. (Depending on your computer architecture, ctypes.c_uint32 can be an alias for another ctypes type).\nSo we could also access the earth with:",
"sim.particles[h(1424801690)]",
"The numeric hashes could be useful in cases where you have a lot of particles you don't want to assign individual names, but you still need to keep track of them individually as they get rearranged:",
"for i in range(1,100):\n sim.add(m=0., a=i, hash=i)\n\nsim.particles[99].a\n\nsim.particles[h(99)].a",
"Possible Pitfalls\nThe user is responsible for making sure the hashes are unique. If two particles share the same hash, you could get either one when you access them using their hash (in most cases the first hit in the particles array). Two random strings used for hashes have a $\\sim 10^{-9}$ chance of clashing. The most common case is setting a hash to 0:",
"sim = rebound.Simulation()\nsim.add(m=1., hash=0)\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5.)\nsim.particles[h(0)]",
"Here we expected to get back the first particle, but instead got the last one. This is because we didn't assign a hash to the last particle and it got automatically set to 0. If we give hashes to all the particles in the simulation, then there's no clash:",
"sim = rebound.Simulation()\nsim.add(m=1., hash=0)\nsim.add(a=1., hash=\"earth\")\nsim.add(a=5., hash=\"jupiter\")\nsim.particles[h(0)]",
"Due to details of the ctypes library, comparing two ctypes.c_uint32 instances for equality fails:",
"h(32) == h(32)",
"You have to compare the value",
"h(32).value == h(32).value",
"See the docs for further information: https://docs.python.org/3/library/ctypes.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ocelot-collab/ocelot
|
demos/ipython_tutorials/small_useful_features.ipynb
|
gpl-3.0
|
[
"This notebook was created by Sergey Tomin (sergey.tomin@desy.de). Source and license info is on GitHub. May 2020.\nAppendix: some useful OCELOT functions\nThis notebook was created to show some small functions that may be useful for accelerator physicists.\nContents\n\nAperture\nRK tracking\nDump the beam distribution at a specific location of the lattice",
"# import modules\nfrom ocelot import *\nfrom ocelot.gui.accelerator import *\nimport copy\nimport pandas as pd\nimport seaborn as sbn\nimport time\nimport matplotlib.pyplot as plt",
"<a id='aperture'></a>\nAperture\nSuppose you have a simple (and imposible) missaligned dump beam line. \nAnd you want to know the best corrector setting to get 100% transmission. \nWe are goring to explor the transmission in horizontal only. \nLattice",
"d = Drift(l=1)\n\n# horizontal correctors\nc1 = Hcor()\nc2 = Hcor()\n\n# Sextupoles\nsf = Sextupole(l=0.2, k2=3000)\nsf.dx, sf.dy = 1e-3, -1e-3\n\nsd = Sextupole(l=0.2, k2=-3000)\nsd.dx, sd.dy = 1e-3, -1e-3\n\n\n# Quadrupoles with transversal offsets\nqf = Quadrupole(l=0.2, k1=1, k2=20)\nqf.dx, qf.dy = 1e-3, -1e-3\n\nqd = Quadrupole(l=0.2, k1=-1, k2=-20)\nqd.dx, qd.dy = -1e-3, 1e-3\n\n# Collimators\nap1 = Aperture(xmax=5e-3, dx=-1e-3)\nap2 = Aperture(xmax=5e-3, dx=1e-3)\n# BPMs\nm1 = Monitor()\nm2 = Monitor()\n\ncell = (d, c1, d, sf, d, qf, d, ap1, d, m1, d, c2, d, sd, d, qd, d, ap2, d, m2, d,)\n\nlat = MagneticLattice(cell, method=MethodTM({\"global\": SecondTM}))",
"Create ParticleArray and Navigator objects",
"p_array_init = generate_parray(sigma_x=1e-3, sigma_px=5e-5, sigma_y=1e-3, sigma_py=5e-5, \n nparticles=20000, charge=1e-09, energy=1.)\n\ncorrectors = [c1, c2]\n\n",
"function to calculate transmission through the lattice",
"def transmission(lat, navi, correctors, kicks):\n for i, kick in enumerate(kicks):\n correctors[i].angle = kick\n \n lat.update_transfer_maps()\n \n # reset position of the Navigator \n navi.reset_position()\n \n p_array = copy.deepcopy(p_array_init)\n tws_tack, p_array = track(lat, p_array, navi, calc_tws=False, print_progress=False)\n trans = p_array.n / p_array_init.n\n return trans\n\n\ndef scan(cor1_range, cor2_range):\n trans_response = np.zeros((len(cor2_range), len(cor1_range)))\n\n for i, a2 in enumerate(A2):\n print(f\"{i} of {len(A2)}\", end=\"\\r\")\n for j, a1 in enumerate(A1):\n kicks = [a1, a2]\n trans_response[i, j] = transmission(lat, navi, correctors, kicks)\n \n return trans_response",
"Scan with two correctors. Apertures are NOT activated.\nHere we expect 100% transmission for any corrector settings.",
"navi = Navigator(lat)\n\nA1 = np.arange(-2, 2.5, 0.25)*1e-3\nA2 = np.arange(-8, 9., 0.5)*1e-3\n\nstart = time.time()\n\ntrans_response = scan(A1, A2)\n\nprint(f\" exec n_tracks={len(A2) * len(A1)}: {time.time() - start} s\")\ndf = pd.DataFrame(trans_response, index=A2, columns=A1)\nsbn.heatmap(df)\nplt.show()",
"Scan with two correctors. Apertures are activated.",
"navi = Navigator(lat)\n\n# activate apertures\nnavi.activate_apertures()\n\n# activate apertures starting from element \"start\" up to element \"stop\"\n# navi.activate_apertures(start=None, stop=m1)\n\nstart = time.time()\n\ntrans_response = scan(A1, A2)\n\nprint(f\" exec n_tracks={len(A2) * len(A1)}: {time.time() - start} s\")\ndf = pd.DataFrame(trans_response, index=A2, columns=A1)\nsbn.heatmap(df)\nplt.show()",
"Losses along accelerator lattice\nNew feature which is available currently in dev branch.\nParticleArray has lost_particle_recorder atribute (LostParticleRecorder) has list of s_positions along accelerator and number of particles which were lost at that point\np_array.lost_particle_recorder.lp_to_pos_hist = [(s1, n_lost_particles), (s2, n_lost_particlse), ..., (sn, n_lost_particlse)]",
"navi = Navigator(lat)\n\n# activate apertures\nnavi.activate_apertures()\n\nc1.angle = 0\nc2.angle = 0\n\nlat.update_transfer_maps()\n \n# reset position of the Navigator \nnavi.reset_position()\n \np_array = copy.deepcopy(p_array_init)\ntws_tack, p_array = track(lat, p_array, navi, calc_tws=False, print_progress=False)\ntrans = p_array.n / p_array_init.n\nprint(\"transmission: \", trans)\n\ns = [p[0] for p in p_array.lost_particle_recorder.lp_to_pos_hist]\nnlost = [p[1] for p in p_array.lost_particle_recorder.lp_to_pos_hist]\n\nfig, ax_xy = plot_API(lat, legend=True, fig_name=10)\nax_xy.bar(s, nlost)\nplt.show()\n",
"<a id='rk'></a>\nTracking the electron beam with Runge-Kutta integrator in magnetic fields\nIn OCELOT, there is a possibility to track the beam in the arbitrary defined 3D magnetic fields. \nYou need two components to do this: \n1. define the 3D magnetic fields \n2. MethodTM, the class which creates Transfer Maps, should know that you want to apply RK integrator to an element\ndefine 3D Magnetic fields.",
"lperiod = 0.01 # [m] undulator period \nnperiods = 50 # number of periods\nKx = 2 # undulator deflection parameter\n\n\ndef und_field_3D(x, y, z, lperiod, Kx):\n kx = 0.\n kz = 2 * pi / lperiod\n ky = np.sqrt(kz * kz + kx * kx)\n c = speed_of_light\n m0 = m_e_eV\n B0 = Kx * m0 * kz / c\n k1 = -B0 * kx / ky\n k2 = -B0 * kz / ky\n\n kx_x = kx * x\n ky_y = ky * y\n kz_z = kz * z\n\n cosz = np.cos(kz_z)\n\n cosx = np.cos(kx_x)\n sinhy = np.sinh(ky_y)\n Bx = k1 * np.sin(kx_x) * sinhy * cosz \n By = B0 * cosx * np.cosh(ky_y) * cosz\n Bz = k2 * cosx * sinhy * np.sin(kz_z)\n return (Bx, By, Bz)\n\n\nund = Undulator(lperiod=lperiod, nperiods=nperiods, Kx=Kx, eid=\"und\")\nund.mag_field = lambda x, y, z: und_field_3D(x, y, z, lperiod=lperiod, Kx=Kx)\n\n# define number of points along z-axis, by default npoints = 200\nund.npoints = 500 ",
"Create MagneticLattice and MethodTM",
"from ocelot.cpbd.optics import RungeKuttaTM\nd = Drift(l=0.5)\nqf = Quadrupole(l=0.2, k1=1.2)\nqd = Quadrupole(l=0.2, k1=-1.2)\n\nmethod = MethodTM()\n# let the MethodTM to know\nmethod.params[Undulator] = RungeKuttaTM\n\n\nlat = MagneticLattice((d, qf, d, qd, d, und, d, qf, d, qd, d), method=method)",
"Tracking through the lattice WITH RK integrator",
"p_array = copy.deepcopy(p_array_init)\n\nnavi = Navigator(lat)\n\ntws_track, _ = track(lat, p_array, navi)\n\n\n\nplot_opt_func(lat, tws_track)\nplt.show()",
"Tracking through the lattice WITHOUT RK integrator",
"d = Drift(l=0.5)\nqf = Quadrupole(l=0.2, k1=1.2)\nqd = Quadrupole(l=0.2, k1=-1.2)\n\nmethod = MethodTM()\n\nlat = MagneticLattice((d, qf, d, qd, d, und, d, qf, d, qd, d), method=method)\np_array = copy.deepcopy(p_array_init)\n\nnavi = Navigator(lat)\n\ntws_track, _ = track(lat, p_array, navi)\n\nplot_opt_func(lat, tws_track)\nplt.show()",
"<a id='savebeam'></a>\nDump the beam distribution at a specific location of the lattice",
"\nfrom ocelot import *\nfrom ocelot.gui import *\n\n# define elements of the lattice\nd = Drift(1.)\nqf = Quadrupole(l=0.3, k1=1)\nqd = Quadrupole(l=0.3, k1=-1)\nm = Marker()\nfodo = (qf, d, qd, d)\ncell = (fodo*3, m, qd, fodo*3)\n\n# init MagneticLattice\nlat = MagneticLattice(cell)\n\n# calc twiss\ntws0 = Twiss()\ntws0.beta_x = 10\ntws0.beta_y = 10\ntws = twiss(lat, tws0)\nplot_opt_func(lat, tws)",
"Tracking",
"# generate ParticleArray\nparray = generate_parray(tws=tws[0])\n\nshow_e_beam(parray)\n\nfrom ocelot.cpbd.physics_proc import SaveBeam\nnavi = Navigator(lat)\n\n# define SaveBeam \nsv = SaveBeam(filename=\"test.npz\")\nnavi.add_physics_proc(sv, m, m)\n\ntws_track, _ = track(lat, parray, navi)\n\nplot_opt_func(lat, tws_track)\n\n\nparray_dump = load_particle_array(\"test.npz\")\nshow_e_beam(parray_dump)",
"To be continued ..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/csiro-bom/cmip6/models/sandbox-3/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CSIRO-BOM\nSource ID: SANDBOX-3\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:56\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mfinkle/user-data-analytics
|
mobile-clients.ipynb
|
mit
|
[
"import datetime as dt\nimport pandas as pd\nimport ujson as json\nfrom pyspark.sql.types import *\n\nfrom moztelemetry import get_pings, get_pings_properties\n\n%pylab inline",
"Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.",
"def dedupe_pings(rdd):\n return rdd.filter(lambda p: p[\"meta/clientId\"] is not None)\\\n .map(lambda p: (p[\"meta/documentId\"], p))\\\n .reduceByKey(lambda x, y: x)\\\n .map(lambda x: x[1])\n",
"Transform and sanitize the pings into arrays.",
"def transform(ping):\n # Should not be None since we filter those out.\n clientId = ping[\"meta/clientId\"]\n\n # Added via the ingestion process so should not be None.\n submissionDate = dt.datetime.strptime(ping[\"meta/submissionDate\"], \"%Y%m%d\")\n geoCountry = ping[\"meta/geoCountry\"]\n\n profileDate = None\n profileDaynum = ping[\"profileDate\"]\n if profileDaynum is not None:\n try:\n # Bad data could push profileDaynum > 32767 (size of a C int) and throw exception\n profileDate = dt.datetime(1970, 1, 1) + dt.timedelta(int(profileDaynum))\n except:\n profileDate = None\n\n # Create date should already be in ISO format\n creationDate = ping[\"creationDate\"]\n if creationDate is not None:\n # This is only accurate because we know the creation date is always in 'Z' (zulu) time.\n creationDate = dt.datetime.strptime(ping[\"creationDate\"], \"%Y-%m-%dT%H:%M:%S.%fZ\")\n\n appVersion = ping[\"meta/appVersion\"]\n buildId = ping[\"meta/appBuildId\"]\n locale = ping[\"locale\"]\n os = ping[\"os\"]\n osVersion = ping[\"osversion\"]\n device = ping[\"device\"]\n arch = ping[\"arch\"]\n defaultSearch = ping[\"defaultSearch\"]\n distributionId = ping[\"distributionId\"]\n\n experiments = ping[\"experiments\"]\n if experiments is None:\n experiments = []\n\n return [clientId, submissionDate, creationDate, profileDate, geoCountry, locale, os, osVersion, buildId, appVersion, device, arch, defaultSearch, distributionId, json.dumps(experiments)]\n",
"Create a set of pings from \"core\" to build a set of core client data. Output the data to CSV or Parquet.\nThis script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs.",
"channels = [\"nightly\", \"aurora\", \"beta\", \"release\"]\n\nstart = dt.datetime.now() - dt.timedelta(1)\nend = dt.datetime.now() - dt.timedelta(1)\n\nday = start\nwhile day <= end:\n for channel in channels:\n print \"\\nchannel: \" + channel + \", date: \" + day.strftime(\"%Y%m%d\")\n\n kwargs = dict(\n doc_type=\"core\",\n submission_date=(day.strftime(\"%Y%m%d\"), day.strftime(\"%Y%m%d\")),\n channel=channel,\n app=\"Fennec\",\n fraction=1\n )\n\n # Grab all available source_version pings\n pings = get_pings(sc, source_version=\"*\", **kwargs)\n\n subset = get_pings_properties(pings, [\"meta/clientId\",\n \"meta/documentId\",\n \"meta/submissionDate\",\n \"meta/appVersion\",\n \"meta/appBuildId\",\n \"meta/geoCountry\",\n \"locale\",\n \"os\",\n \"osversion\",\n \"device\",\n \"arch\",\n \"profileDate\",\n \"creationDate\",\n \"defaultSearch\",\n \"distributionId\",\n \"experiments\"])\n\n subset = dedupe_pings(subset)\n print \"\\nDe-duped pings:\" + str(subset.count())\n print subset.first()\n\n transformed = subset.map(transform)\n print \"\\nTransformed pings:\" + str(transformed.count())\n print transformed.first()\n\n s3_output = \"s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/mobile_clients\"\n s3_output += \"/v1/channel=\" + channel + \"/submission=\" + day.strftime(\"%Y%m%d\") \n schema = StructType([\n StructField(\"clientid\", StringType(), False),\n StructField(\"submissiondate\", TimestampType(), False),\n StructField(\"creationdate\", TimestampType(), True),\n StructField(\"profiledate\", TimestampType(), True),\n StructField(\"geocountry\", StringType(), True),\n StructField(\"locale\", StringType(), True),\n StructField(\"os\", StringType(), True),\n StructField(\"osversion\", StringType(), True),\n StructField(\"buildid\", StringType(), True),\n StructField(\"appversion\", StringType(), True),\n StructField(\"device\", StringType(), True),\n StructField(\"arch\", StringType(), True),\n StructField(\"defaultsearch\", StringType(), True),\n StructField(\"distributionid\", StringType(), True),\n StructField(\"experiments\", StringType(), True)\n ])\n # Make parquet parition file size large, but not too large for s3 to handle\n coalesce = 1\n if channel == \"release\":\n coalesce = 4\n grouped = sqlContext.createDataFrame(transformed, schema)\n grouped.coalesce(coalesce).write.parquet(s3_output)\n\n day += dt.timedelta(1)\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kit-cel/lecture-examples
|
nt2_ce2/vorlesung/ch_2_properties_lin_modulation/psd_linear_modulation.ipynb
|
gpl-2.0
|
[
"Content and Objectives\n\nShow PSD of ASK for random data\nSpectra are determined using FFT and averaging along several realizations\n\n<b> Note: </b> You may extend these lines to include other modulation schemes\nImport",
"# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 20}\nplt.rc('font', **font)\nplt.rc('text', usetex=True)\n\nmatplotlib.rc('figure', figsize=(18, 10) )",
"Function for determining the impulse response of an RC filter",
"\n########################\n# find impulse response of an RRC filter\n########################\ndef get_rrc_ir(K, n_sps, t_symbol, beta):\n \n ''' \n Determines coefficients of an RRC filter \n \n Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15\n \n NOTE: roll-off factor must not equal zero\n \n NOTE: Length of the IR has to be an odd number\n \n IN: length of IR, sps factor, symbol time, roll-off factor\n OUT: filter coefficients\n '''\n \n if beta == 0:\n beta = 1e-32\n \n K = int(K) \n\n if ( K%2 == 0):\n raise ValueError('Length of the impulse response should be an odd number')\n \n \n # initialize np.array \n rrc = np.zeros( K )\n \n # find sample time and initialize index vector\n t_sample = t_symbol / n_sps\n time_ind = range( -(K-1)//2, (K-1)//2+1)\n\n # assign values of rrc\n for t_i in time_ind:\n t = (t_i)* t_sample \n\n if t_i == 0:\n rrc[ int( t_i+(K-1)//2 ) ] = (1-beta+4*beta/np.pi)\n\n elif np.abs(t) == t_symbol / ( 4 * beta ): \n rrc[ int( t_i+(K-1)//2 ) ] = beta*np.sin( np.pi/(4*beta)*(1+beta) ) \\\n - 2*beta/np.pi*np.cos(np.pi/(4*beta)*(1+beta)) \n\n else:\n rrc[ int( t_i+(K-1)//2 ) ] = ( 4 * beta * t / t_symbol * np.cos( np.pi*(1+beta)*t/t_symbol ) \\\n + np.sin( np.pi * (1-beta) * t / t_symbol ) ) / ( np.pi * t / t_symbol * (1-(4*beta*t/t_symbol)**2) )\n\n rrc = rrc / np.sqrt(t_symbol)\n \n return rrc ",
"Parameters",
"########################\n# parameters\n########################\n\n# number of realizations along which to average the psd estimate\nn_real = 100\n\n# modulation scheme and constellation points\nM = 16\nconstellation = 2. * np.arange( M ) - M + 1\n#constellation = np.exp( 1j * 2 * np.pi * np.arange(M) / M )\n\nconstellation /= np.sqrt( np.linalg.norm( constellation )**2 / M )\n\n# number of symbols \nn_symb = int( 1e4 )\n\nt_symb = 1.0 \n\n# parameters of the filter\nbeta = 0.33\n\nn_sps = 4 # samples per symbol\nsyms_per_filt = 4 # symbols per filter (plus minus in both directions)\n\nK_filt = 2*syms_per_filt * n_sps + 1 # length of the fir filter",
"Signals and their spectra",
"# define rrc filter response \n\nrrc = get_rrc_ir( K_filt, n_sps, t_symb, beta)\nrrc = rrc/ np.linalg.norm(rrc)\n\n# get frequency regime and initialize PSD\nomega = np.linspace( -np.pi, np.pi, 512)\npsd = np.zeros( (n_real, len(omega) ) )\npsd_str = np.zeros( (n_real, len(omega) ) ) \n\n\n# loop for realizations\nfor k in np.arange(n_real):\n\n # generate random binary vector and modulate the specified modulation scheme\n d = np.random.randint( M, size = n_symb)\n s = constellation[ d ]\n\n # prepare sequence to be filtered\n s_up = np.zeros(n_symb * n_sps, dtype=complex) \n s_up[ : : n_sps ] = s\n s_up = np.append( s_up, np.zeros( K_filt - 1 ) ) \n\n # apply rrc \n #s_filt_rrc = signal.lfilter(rrc, [1], s_up)\n s_filt_rrc = np.convolve( rrc, s_up )\n x = s_filt_rrc\n\n # get spectrum using Bartlett method\n psd[k, :] = np.abs( 1 / n_sps * np.fft.fftshift( np.fft.fft( x, 512 ) ) )**2\n\n \n\n# average along realizations\npsd_average = np.average(psd, axis=0)\npsd_str_average = np.average(psd_str, axis=0) ",
"Plotting",
"plt.figure()\nplt.plot(omega, 10*np.log10(psd_average) )\nplt.plot(omega, 10*np.log10(psd_str_average) ) \n\nplt.grid(True); \nplt.xlabel('$\\Omega$'); \nplt.ylabel('$\\Phi(\\Omega)$ (dB)') \n\n\nplt.figure()\n\nplt.plot(omega, psd_average ) \n\nplt.grid(True); \nplt.xlabel('$\\Omega$'); \nplt.ylabel('$\\Phi(\\Omega)$') \n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hvanwyk/quadmesh
|
tests/test_fem/.ipynb_checkpoints/debug-checkpoint.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"></ul></div>",
"# Set Path\nimport sys\nsys.path.append('../../src/')\n\n%autoreload 2\n\n# Import Libraries\nfrom fem import Function\nfrom fem import QuadFE\nfrom fem import DofHandler\nfrom fem import Kernel\nfrom fem import Basis\nfrom fem import Form\nfrom fem import Assembler\nfrom fem import LinearSystem\nfrom plot import Plot\nfrom mesh import convert_to_array\nfrom mesh import QuadMesh\nfrom mesh import Mesh1D\nimport matplotlib.pyplot as plt\nimport scipy.sparse as sp\nimport numpy as np\n\n% matplotlib inline\nplt.rcParams['figure.figsize'] = [7, 7]",
"We test the system \n\\begin{equation}\\label{eq:elliptic}\n- u_{xx} - u_{yy} = 0 \n\\end{equation}\nsubject to Dirichlet conditions\n\\begin{align}\nu(0,y) &= 0 \\label{eq:dirichlet_at_xis0}\\\nu(1,y) &= 1 \\label{eq:dirichlet_at_xis1}\n\\end{align}\nWhose exact solution is \n\\begin{equation}\nu_e(x,y) = x. \n\\end{equation}\nWe use a Galerkin approximation with $Q_1$ elements.",
"#\n# Define the element\n#\nQ1 = QuadFE(2, 'Q1')",
"Since we have already tested the assembly, we focus here on the linear system. In particular:\n\nMarking and extracting Dirichlet boundary conditions\nExtracting hanging nodes\n\nboth by (i) eliminating the variables from the system (compressed=True) and (ii) by replacing affected equations with explicit Dirichlet data or interpolation formulae. \nWe first test solving with Dirichlet conditions. To that end we define our first mesh.",
"mesh1 = QuadMesh(resolution=(2,2))",
"To test extract_hanging_nodes and resolve_hanging_nodes we construct a simple mesh with hanging_nodes.",
"mesh2 = QuadMesh(resolution=(2,2))\nmesh2.cells.get_child(2).mark(1)\nmesh2.cells.refine(refinement_flag=1)",
"For the assembly, we must define the bilinear form\n\\begin{equation}\na(u,v) = \\int_\\Omega \\nabla u \\cdot \\nabla v dx =\\int_\\Omega u_x v_x + u_y v_y dx, \\ \\ \\ \\forall v \\in H^1_0(\\Omega)\n\\end{equation}\nand the linear form\n\\begin{equation}\nL(v) = \\int_\\Omega f v dx, \\qquad \\ \\ \\ \\forall v \\in H_0^1(\\Omega)\n\\end{equation}\nwhere $\\Omega = [0,1]^2$ and $H^1_0(\\Omega) = {v\\in H^1(\\Omega): v(0,\\cdot) = v(1,\\cdot)=0}$",
"#\n# Weak form\n#\n\n# Kernel functions \none = Function(1, 'constant')\nzero = Function(0, 'constant')\n\n# Basis functions \nu = Basis(Q1, 'u')\nux = Basis(Q1, 'ux')\nuy = Basis(Q1, 'uy')\n\n# Forms\nax = Form(kernel=Kernel(one), trial=ux, test=ux)\nay = Form(kernel=Kernel(one), trial=uy, test=uy)\nL = Form(kernel=Kernel(zero), test=u)\n\n# Assembler for mesh1\nassembler1 = Assembler([ax, ay, L], mesh1)\nassembler1.assemble()\n\n# Assembler for mesh2\nassembler2 = Assembler([ax,ay,L], mesh2)\nassembler2.assemble()",
"Let's visualize the meshes.",
"# Get dofhandlers\ndh1 = assembler1.dofhandlers['Q1']\ndh2 = assembler2.dofhandlers['Q1']\n\n# Plotting mesh 1\nplot = Plot()\nplot.mesh(mesh1, dofhandler=dh1, dofs=True)\n\n\n# Plotting mesh 2\nplot = Plot()\nplot.mesh(mesh2, dofhandler=dh2, dofs=True)",
"It looks like the following dofs from mesh1 and mesh2 are equivalent \nmesh1 -> mesh2\n- 0 -> 0\n- 1 -> 1\n- 4 -> 4\n- 5 -> 5\n- 8 -> 8\nIf we restrict to these, we should get the same matrix.",
"# Assembled matrices\n\n# Mesh1 \n\n# bilinear\nrows = assembler1.af[0]['bilinear']['rows']\ncols = assembler1.af[0]['bilinear']['cols']\nvals = assembler1.af[0]['bilinear']['vals']\ndofs = assembler1.af[0]['bilinear']['row_dofs']\nA1 = sp.coo_matrix((vals, (rows, cols)))\nA1 = A1.todense()\n\n# linear\nb1 = assembler1.af[0]['linear']['vals']\n\n# number of dofs \nn = len(dofs)\n\n# Print\nprint('Mesh 1')\nprint('A1 = \\n', 6*A1)\nprint('b1 = \\n', 6*b1)\nprint('n_dofs=', n)\n\nprint('='*60)\n\n#\n# Mesh2 \n# \n\n# bilinear\nrows = assembler2.af[0]['bilinear']['rows']\ncols = assembler2.af[0]['bilinear']['cols']\nvals = assembler2.af[0]['bilinear']['vals']\ndofs = assembler2.af[0]['bilinear']['row_dofs']\nA2 = sp.coo_matrix((vals, (rows, cols)))\nA2 = A2.todense()\n# linear\nb2 = assembler1.af[0]['linear']['vals']\n\n# number of dofs \nn = len(dofs)\n\n# Print\nprint('Mesh 2')\nprint('A2 = \\n', 6*A2)\nprint('b2 = \\n', 6*b2)\nprint('n_dofs=', n)",
"Check that A1 and A2 coincide when restricting to the nodes",
"print(A1[np.ix_([0,1,4,5,8],[0,1,4,5,8])] - A2[np.ix_([0,1,4,5,8],[0,1,4,5,8])])\n\n# System for mesh1\nsystem1 = LinearSystem(assembler1)\n\n# Check that it's the same as before\nassert np.allclose(A1, system1.A().todense())",
"Mark Dirichlet Regions on Meshes",
"# Mark Dirichlet Regions\nf_left = lambda x,dummy: np.abs(x)<1e-9\nf_right = lambda x,dummy: np.abs(x-1)<1e-9\n\n# Mesh 1\nmesh1.mark_region('left', f_left, on_boundary=True)\nmesh1.mark_region('right', f_right, on_boundary=True)\n\n# Mesh 2\nmesh2.mark_region('left', f_left, on_boundary=True)\nmesh2.mark_region('right', f_right, on_boundary=True)\n\n#\n# Check that we get the correct vertices back\n#\nfor side in ['left', 'right']:\n # mesh1\n print('mesh1: ', side)\n for v in mesh1.get_region(side, entity_type='vertex', \\\n on_boundary=True, return_cells=False):\n print(v.coordinates())\n \n print('')\n \n # mesh2\n print('mesh2: ', side)\n for v in mesh2.get_region(side, entity_type='vertex', \\\n on_boundary=True, return_cells=False):\n print(v.coordinates())\n \n \n print('\\n\\n')",
"Now extract Dirichlet nodes",
"#\n# Extract Dirichlet conditions (uncompressed format)\n#\n\nsystem1a = LinearSystem(assembler1, compressed=False)\n\nprint('System matrix and vector before left Dirichlet nodes')\n\nprint('6A = \\n', 6*system1a.A().todense())\nprint('6b = \\n', 6*system1a.b() )\n\nprint('Extracting Dirichlet nodes on left')\nsystem1a.extract_dirichlet_nodes('left', 0)\n\nprint('')\n\nprint('6A = \\n', 6*system1a.A().todense())\nprint('6b = \\n', 6*system1a.b() )\n\nprint('\\n\\n')\n\nprint('Extracting Dirichlet nodes on right')\nsystem1a.extract_dirichlet_nodes('right',1)\n\nprint('')\n\nprint('6A = \\n', 6*system1a.A().todense())\nprint('6b = \\n', 6*system1a.b() )\n\n#\n# Extract Dirichlet conditions (compressed format)\n#\n\nsystem1b = LinearSystem(assembler1, compressed=True)\n\nprint('System matrix and vector before left Dirichlet nodes')\n\nprint('6A = \\n', 6*system1b.A().todense())\nprint('6b = \\n', 6*system1b.b() )\n\nprint('Extracting Dirichlet nodes on left')\nsystem1b.extract_dirichlet_nodes('left', 0)\n\nprint('')\n\nprint('6A = \\n', 6*system1b.A().todense())\nprint('6b = \\n', 6*system1b.b() )\n\nprint('\\n\\n')\n\nprint('Extracting Dirichlet nodes on right')\nsystem1b.extract_dirichlet_nodes('right',1)\n\nprint('')\n\nprint('6A = \\n', 6*system1b.A().todense())\nprint('6b = \\n', 6*system1b.b() )\n\n#\n# Check solutions \n# \nsystem1a.solve()\nu1a = system1a.sol(as_function=True)\nplot = Plot()\nplot.wire(u1a)\n\n# "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aboSamoor/polyglot
|
notebooks/NamedEntityRecognition.ipynb
|
gpl-3.0
|
[
"Named Entity Extraction\nNamed entity extraction task aims to extract phrases from plain text that correpond to entities.\nPolyglot recognizes 3 categories of entities:\n\nLocations (Tag: I-LOC): cities, countries, regions, continents, neighborhoods, administrative divisions ...\nOrganizations (Tag: I-ORG): sports teams, newspapers, banks, universities, schools, non-profits, companies, ...\nPersons (Tag: I-PER): politicians, scientists, artists, atheletes ...\n\nLanguages Coverage\nThe models were trained on datasets extracted automatically from Wikipedia.\nPolyglot currently supports 40 major languages.",
"from polyglot.downloader import downloader\nprint(downloader.supported_languages_table(\"ner2\", 3))",
"Download Necessary Models",
"%%bash\npolyglot download embeddings2.en ner2.en",
"Example\nEntities inside a text object or a sentence are represented as chunks.\nEach chunk identifies the start and the end indices of the word subsequence within the text.",
"from polyglot.text import Text\n\nblob = \"\"\"The Israeli Prime Minister Benjamin Netanyahu has warned that Iran poses a \"threat to the entire world\".\"\"\"\ntext = Text(blob)\n\n# We can also specify language of that text by using\n# text = Text(blob, hint_language_code='en')",
"We can query all entities mentioned in a text.",
"text.entities",
"Or, we can query entites per sentence",
"for sent in text.sentences:\n print(sent, \"\\n\")\n for entity in sent.entities:\n print(entity.tag, entity)",
"By doing more careful inspection of the second entity Benjamin Netanyahu, we can locate the position of the entity within the sentence.",
"benjamin = sent.entities[1]\nsent.words[benjamin.start: benjamin.end]",
"Command Line Interface",
"!polyglot --lang en tokenize --input testdata/cricket.txt | polyglot --lang en ner | tail -n 20",
"Demo\nCitation\nThis work is a direct implementation of the research being described in the Polyglot-NER: Multilingual Named Entity Recognition paper.\nThe author of this library strongly encourage you to cite the following paper if you are using this software.\n@article{polyglotner,\n author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},\n title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},\n journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30 - May 2, 2015}},\n month = {April},\n year = {2015},\n publisher = {SIAM}\n}\nReferences\n\nPolyglot-NER project page.\nWikipedia on NER."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rflamary/POT
|
notebooks/plot_stochastic.ipynb
|
mit
|
[
"%matplotlib inline",
"Stochastic examples\nThis example is designed to show how to use the stochatic optimization\nalgorithms for descrete and semicontinous measures from the POT library.",
"# Author: Kilian Fatras <kilian.fatras@gmail.com>\n#\n# License: MIT License\n\nimport matplotlib.pylab as pl\nimport numpy as np\nimport ot\nimport ot.plot",
"COMPUTE TRANSPORTATION MATRIX FOR SEMI-DUAL PROBLEM\n\n\nDISCRETE CASE:\nSample two discrete measures for the discrete case\n\nDefine 2 discrete measures a and b, the points where are defined the source\n and the target measures and finally the cost matrix c.",
"n_source = 7\nn_target = 4\nreg = 1\nnumItermax = 1000\n\na = ot.utils.unif(n_source)\nb = ot.utils.unif(n_target)\n\nrng = np.random.RandomState(0)\nX_source = rng.randn(n_source, 2)\nY_target = rng.randn(n_target, 2)\nM = ot.dist(X_source, Y_target)",
"Call the \"SAG\" method to find the transportation matrix in the discrete case\nDefine the method \"SAG\", call ot.solve_semi_dual_entropic and plot the\nresults.",
"method = \"SAG\"\nsag_pi = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method,\n numItermax)\nprint(sag_pi)",
"SEMICONTINOUS CASE:\nSample one general measure a, one discrete measures b for the semicontinous\ncase\n\nDefine one general measure a, one discrete measures b, the points where\nare defined the source and the target measures and finally the cost matrix c.",
"n_source = 7\nn_target = 4\nreg = 1\nnumItermax = 1000\nlog = True\n\na = ot.utils.unif(n_source)\nb = ot.utils.unif(n_target)\n\nrng = np.random.RandomState(0)\nX_source = rng.randn(n_source, 2)\nY_target = rng.randn(n_target, 2)\nM = ot.dist(X_source, Y_target)",
"Call the \"ASGD\" method to find the transportation matrix in the semicontinous\ncase\n\nDefine the method \"ASGD\", call ot.solve_semi_dual_entropic and plot the\nresults.",
"method = \"ASGD\"\nasgd_pi, log_asgd = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method,\n numItermax, log=log)\nprint(log_asgd['alpha'], log_asgd['beta'])\nprint(asgd_pi)",
"Compare the results with the Sinkhorn algorithm\nCall the Sinkhorn algorithm from POT",
"sinkhorn_pi = ot.sinkhorn(a, b, M, reg)\nprint(sinkhorn_pi)",
"PLOT TRANSPORTATION MATRIX\n\nPlot SAG results",
"pl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, sag_pi, 'semi-dual : OT matrix SAG')\npl.show()",
"Plot ASGD results",
"pl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, asgd_pi, 'semi-dual : OT matrix ASGD')\npl.show()",
"Plot Sinkhorn results",
"pl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn')\npl.show()",
"COMPUTE TRANSPORTATION MATRIX FOR DUAL PROBLEM\n\n\nSEMICONTINOUS CASE:\nSample one general measure a, one discrete measures b for the semicontinous\n case\n\nDefine one general measure a, one discrete measures b, the points where\n are defined the source and the target measures and finally the cost matrix c.",
"n_source = 7\nn_target = 4\nreg = 1\nnumItermax = 100000\nlr = 0.1\nbatch_size = 3\nlog = True\n\na = ot.utils.unif(n_source)\nb = ot.utils.unif(n_target)\n\nrng = np.random.RandomState(0)\nX_source = rng.randn(n_source, 2)\nY_target = rng.randn(n_target, 2)\nM = ot.dist(X_source, Y_target)",
"Call the \"SGD\" dual method to find the transportation matrix in the\nsemicontinous case\n\nCall ot.solve_dual_entropic and plot the results.",
"sgd_dual_pi, log_sgd = ot.stochastic.solve_dual_entropic(a, b, M, reg,\n batch_size, numItermax,\n lr, log=log)\nprint(log_sgd['alpha'], log_sgd['beta'])\nprint(sgd_dual_pi)",
"Compare the results with the Sinkhorn algorithm\nCall the Sinkhorn algorithm from POT",
"sinkhorn_pi = ot.sinkhorn(a, b, M, reg)\nprint(sinkhorn_pi)",
"Plot SGD results",
"pl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, sgd_dual_pi, 'dual : OT matrix SGD')\npl.show()",
"Plot Sinkhorn results",
"pl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn')\npl.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
emsi/ml-toolbox
|
random/catfish/TL_02_Fixed feature extraction (CNN Codes vel bottleneck).ipynb
|
agpl-3.0
|
[
"The purpose of this notebook it to provide scientific proof that cats are stranger than dogs (possibly of alien origin). Cats' features are of enormous variety compared to dogs and simply annoying to our brains. \nLet's start by following the procedure to rearrange folders:\nhttps://github.com/daavoo/kaggle_solutions/blob/master/dogs_vs_cats/01_rearrange_folders.ipynb",
"from __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport zipfile\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\n\nfrom skimage import color, io\nfrom scipy.misc import imresize\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D, Activation, GlobalAveragePooling2D\nfrom keras.layers import merge, Input, Lambda\nfrom keras.callbacks import EarlyStopping\nfrom keras.models import Model\n\nnp.random.seed(31337)\n\n# Config the matplotlib backend as plotting inline in IPython\n%matplotlib inline",
"Load original Keras ResNet50 model without the top layer.",
"\nfrom keras.applications.resnet50 import ResNet50\nfrom keras.preprocessing import image\nfrom keras.applications.resnet50 import preprocess_input, decode_predictions\nimport numpy as np\n\nresnet_codes_model = ResNet50(input_shape=(300,300,3), include_top=False, weights='imagenet') \n#resnet_codes_model.summary()",
"Add a Pooling layer at the top to extract the CNN coded (aka bottleneck)",
"# Final model\nmodel=Model(input=resnet_codes_model.input, output=GlobalAveragePooling2D()(resnet_codes_model.output))\n\nmodel.summary()",
"The following preprocessing is not proper for the ResNet as it uses mean image rather than mean pixel (I chose VGG paper values) yet it yields little numerical differencies hence works properly and is more than enough for this experiment.\nNote that it's required to install github version of keras for preprocessing_function to work with: \npip install git+https://github.com/fchollet/keras.git --upgrade",
"from keras.preprocessing.image import ImageDataGenerator\ndef img_to_bgr(im):\n # the following BGR values should be subtracted: [103.939, 116.779, 123.68]. (VGG)\n return (im[:,:,::-1] - np.array([103.939, 116.779, 123.68]))\n\ndatagen = ImageDataGenerator(rescale=1., preprocessing_function=img_to_bgr) #(rescale=1./255)",
"Get the trainign and validation DirectoryIterators",
"train_batches = datagen.flow_from_directory(\"train\", model.input_shape[1:3], shuffle=False, batch_size=32)\nvalid_batches = datagen.flow_from_directory(\"valid\", model.input_shape[1:3], shuffle=False, batch_size=32)\n#test_batches = datagen.flow_from_directory(\"test\", model.input_shape[1:3], shuffle=False, batch_size=32, class_mode=None)\n",
"Obtain the CNN codes for all images (it takes ~10 minutes on GTX 1080 GPU)",
"train_codes = model.predict_generator(train_batches, train_batches.nb_sample)\n\nvalid_codes = model.predict_generator(valid_batches, valid_batches.nb_sample)\n\n#test_codes = model.predict_generator(test_batches, test_batches.nb_sample)",
"Save the CNN codes for futher analysys",
"import h5py\n\nfrom keras.utils.np_utils import to_categorical\n\nwith h5py.File(\"ResNet50-300x300_codes-train.h5\") as hf:\n hf.create_dataset(\"X_train\", data=train_codes)\n hf.create_dataset(\"X_valid\", data=valid_codes)\n hf.create_dataset(\"Y_train\", data=to_categorical(train_batches.classes))\n hf.create_dataset(\"Y_valid\", data=to_categorical(valid_batches.classes))\n\nwith h5py.File(\"ResNet50-300x300_codes-test.h5\") as hf:\n hf.create_dataset(\"X_test\", data=test_codes)\n",
"Compute mean values of codes across all training codes",
"def get_codes_by_class(X,Y):\n l=len(Y)\n if (len(X)!=l):\n raise Exception(\"X and Y are of different lengths\")\n classes=set(Y)\n \n return [[X[i] for i in xrange(l) if Y[i]==c] for c in classes], classes\n \nclass_codes, classes=get_codes_by_class(train_codes, train_batches.classes)\n\ncats=np.mean(class_codes[0],0)\ndogs=np.mean(class_codes[1],0)\ncats=cats/cats.max()\ndogs=dogs/dogs.max()",
"Visualize codes as images. As it can be clearly seen, Cats have many different features (plenty of high value - dark spots) while dogs highly activate only two neurons (two distinct dark spots). \nIt can be concluded that cats activate more brain regions or are more annoying than dogs.\nIt's even more apparent when looking at the histograms of frequency domain.",
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 6))\nax[0,0].imshow(cats.reshape(32,64),cmap=\"Greys\")\nax[0,0].set_title('Cats')\n\nax[0,1].imshow(dogs.reshape(32,64),cmap=\"Greys\")\nax[0,1].set_title('Dogs')\n\nfreq = np.fft.fft2(cats.reshape(32,64))\nfreq = np.abs(freq)\nax[1,0].hist(np.log(freq).ravel(), bins=100)\nax[1,0].set_title('hist(log(freq))')\n\nfreq = np.fft.fft2(dogs.reshape(32,64))\nfreq = np.abs(freq)\nax[1,1].hist(np.log(freq).ravel(), bins=100)\nax[1,1].set_title('hist(log(freq))')\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pastas/pasta
|
examples/notebooks/12_emcee_uncertainty.ipynb
|
mit
|
[
"Testing MCMC on Pastas Models\nR.A. Collenteur, University of Graz, November 2019\nIn this notebook it is shown how the MCMC-algorithm can be used to estimate the model parameters for a Pastas model. Besides Pastas the following Python Packages have to be installed to run this notebook:\n\nemcee\nlmfit\ncorner",
"import numpy as np\nimport pandas as pd\n\nimport pastas as ps\nimport corner\nimport emcee as mc\n\nimport matplotlib.pyplot as plt\n\nps.set_log_level(\"ERROR\")",
"1. Create a Pastas Model\nThe first step is to create a Pastas Model object, including the RechargeModel to simulate the effect of precipitation and evaporation on the groundwater heads.",
"# read observations and create the time series model\nobs = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\nrain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\nevap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\n\n# Create the time series model\nml = ps.Model(obs, name=\"head\")\n\nsm = ps.RechargeModel(prec=rain, evap=evap, rfunc=ps.Exponential, name='recharge')\nml.add_stressmodel(sm)\nml.solve(noise=True, report=\"basic\")\n\nml.plot(figsize=(10,3))",
"2. Use the EMCEE Hammer\nApart from the default solver (ps.LeastSquares), Pastas also contains the option to use the LmFit package to estimate the parameters. This package wraps multiple optimization techniques, one of which is Emcee. The code bock below shows how to use this method to estimate the parameters of Pastas models.\nEmcee takes a number of keyword arguments that determine how the optimization is done. The most important is the steps argument, that determines how many steps each of the walkers takes. The argument nwalkers can be used to set the number of walkers (default is 100). The burn argument determines how many samples from the start of the walkers are removed. The argument thin finally determines how many samples are accepted (1 in thin samples).",
"ml.set_parameter(\"noise_alpha\", vary=True)\n#ml.set_parameter(\"constant_d\", vary=False)\nml.solve(tmin=\"2002\", noise=True, initial=False, fit_constant=False, solver=ps.LmfitSolve, \n method=\"emcee\", steps=200, burn=10, thin=5, is_weighted=True, nan_policy=\"omit\");",
"3. Visualize the results\nThe results are stored in the result object, accessible through ml.fit.result. The object ml.fit.result.flatchain contains a Pandas DataFrame with $n$ the parameter samples, whgere $n$ is calculated as follows:\n$n = \\frac{(\\text{steps}-\\text{burn})\\cdot\\text{nwalkers}}{\\text{thin}} $\nCorner.py\nCorner is a simple but great python package that makes creating corner graphs easy. One line of code suffices to create a plot of the parameter distributions and the covariances between the parameters.",
"corner.corner(ml.fit.result.flatchain, truths=list(ml.parameters[ml.parameters.vary == True].optimal));",
"4. What happens to the walkers at each step?\nThe walkers take steps in different directions for each step. It is expected that after a number of steps, the direction of the step becomes random, as a sign that an optimum has been found. This can be checked by looking at the autocorrelation, which should be insignificant after a number of steps (NOT SURE HOW TO INTERPRET THIS YET). However it does not seem the case that the parameters converge to come optimum yet, even for the simple Linear model.",
"labels = ml.fit.result.flatchain.columns\n\nfig, axes = plt.subplots(labels.size, figsize=(10, 7), sharex=True)\nsamples = ml.fit.result.chain\nfor i in range(labels.size):\n ax = axes[i]\n ax.plot(samples[:, :, i], \"k\", alpha=0.3)\n ax.set_xlim(0, len(samples))\n ax.set_ylabel(labels[i])\n ax.yaxis.set_label_coords(-0.1, 0.5)\n\naxes[-1].set_xlabel(\"step number\");",
"5. Plot some simulated time series to display uncertainty?",
"ax = ml.plot(figsize=(10,3))\n\ninds = np.random.randint(len(ml.fit.result.flatchain), size=100)\nfor ind in inds:\n params = ml.fit.result.flatchain.iloc[ind].values\n ml.simulate(params).plot(c=\"k\", alpha=0.1)\nplt.ylim(27,30)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ALEXKIRNAS/DataScience
|
CS231n/assignment3/GANs-PyTorch.ipynb
|
mit
|
[
"Generative Adversarial Networks (GANs)\nSo far in CS231N, all the applications of neural networks that we have explored have been discriminative models that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we’d learned a recurrence to capture multi-word labels). In this notebook, we will expand our repetoire, and build generative models using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images.\nWhat is a GAN?\nIn 2014, Goodfellow et al. presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the discriminator. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the generator, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.\nWe can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game:\n$$\\underset{G}{\\text{minimize}}\\; \\underset{D}{\\text{maximize}}\\; \\mathbb{E}{x \\sim p\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\nwhere $z \\sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In Goodfellow et al., they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.\nTo optimize this minimax game, we will aternate between taking gradient descent steps on the objective for $G$, and gradient ascent steps on the objective for $D$:\n1. update the generator ($G$) to minimize the probability of the discriminator making the correct choice. \n2. update the discriminator ($D$) to maximize the probability of the discriminator making the correct choice.\nWhile these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the discriminator making the incorrect choice. This small change helps to allevaiate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from Goodfellow et al.. \nIn this assignment, we will alternate the following updates:\n1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data:\n$$\\underset{G}{\\text{maximize}}\\; \\mathbb{E}{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\n2. 
Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data:\n$$\\underset{D}{\\text{maximize}}\\; \\mathbb{E}{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\nWhat else is there?\nSince 2014, GANs have exploded into a huge research area, with massive workshops, and hundreds of new papers. Compared to other approaches for generative models, they often produce the highest quality samples but are some of the most difficult and finicky models to train (see this github repo that contains a set of 17 hacks that are useful for getting models working). Improving the stabiilty and robustness of GAN training is an open research question, with new papers coming out every day! For a more recent tutorial on GANs, see here. There is also some even more recent exciting work that changes the objective function to Wasserstein distance and yields much more stable results across model architectures: WGAN, WGAN-GP.\nGANs are not the only way to train a generative model! For other approaches to generative modeling check out the deep generative model chapter of the Deep Learning book. Another popular way of training neural networks as generative models is Variational Autoencoders (co-discovered here and here). Variatonal autoencoders combine neural networks with variationl inference to train deep generative models. These models tend to be far more stable and easier to train but currently don't produce samples that are as pretty as GANs.\nHere's an example of what your outputs from the 3 different models you're going to train should look like... note that GANs are sometimes finicky, so your outputs might not look exactly like this... this is just meant to be a rough guideline of the kind of quality you can expect:\n\nSetup",
"import torch\nimport torch.nn as nn\nfrom torch.nn import init\nfrom torch.autograd import Variable\nimport torchvision\nimport torchvision.transforms as T\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import sampler\nimport torchvision.datasets as dset\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\ndef show_images(images):\n images = np.reshape(images, [images.shape[0], -1]) # images reshape to (batch_size, D)\n sqrtn = int(np.ceil(np.sqrt(images.shape[0])))\n sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))\n\n fig = plt.figure(figsize=(sqrtn, sqrtn))\n gs = gridspec.GridSpec(sqrtn, sqrtn)\n gs.update(wspace=0.05, hspace=0.05)\n\n for i, img in enumerate(images):\n ax = plt.subplot(gs[i])\n plt.axis('off')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_aspect('equal')\n plt.imshow(img.reshape([sqrtimg,sqrtimg]))\n return \n\ndef preprocess_img(x):\n return 2 * x - 1.0\n\ndef deprocess_img(x):\n return (x + 1.0) / 2.0\n\ndef rel_error(x,y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef count_params(model):\n \"\"\"Count the number of parameters in the current TensorFlow graph \"\"\"\n param_count = np.sum([np.prod(p.size()) for p in model.parameters()])\n return param_count\n\nanswers = np.load('gan-checks-tf.npz')",
"Dataset\nGANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy. \nTo simplify our code here, we will use the PyTorch MNIST wrapper, which downloads and loads the MNIST dataset. See the documentation for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called MNIST_data.",
"class ChunkSampler(sampler.Sampler):\n \"\"\"Samples elements sequentially from some offset. \n Arguments:\n num_samples: # of desired datapoints\n start: offset where we should start selecting from\n \"\"\"\n def __init__(self, num_samples, start=0):\n self.num_samples = num_samples\n self.start = start\n\n def __iter__(self):\n return iter(range(self.start, self.start + self.num_samples))\n\n def __len__(self):\n return self.num_samples\n\nNUM_TRAIN = 50000\nNUM_VAL = 5000\n\nNOISE_DIM = 96\nbatch_size = 128\n\nmnist_train = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,\n transform=T.ToTensor())\nloader_train = DataLoader(mnist_train, batch_size=batch_size,\n sampler=ChunkSampler(NUM_TRAIN, 0))\n\nmnist_val = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,\n transform=T.ToTensor())\nloader_val = DataLoader(mnist_val, batch_size=batch_size,\n sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))\n\n\nimgs = loader_train.__iter__().next()[0].view(batch_size, 784).numpy().squeeze()\nshow_images(imgs)",
"Random Noise\nGenerate uniform noise from -1 to 1 with shape [batch_size, dim].\nHint: use torch.rand.",
"def sample_noise(batch_size, dim):\n \"\"\"\n Generate a PyTorch Tensor of uniform random noise.\n\n Input:\n - batch_size: Integer giving the batch size of noise to generate.\n - dim: Integer giving the dimension of noise to generate.\n \n Output:\n - A PyTorch Tensor of shape (batch_size, dim) containing uniform\n random noise in the range (-1, 1).\n \"\"\"\n return torch.Tensor(batch_size, dim).uniform_(-1, 1)\n",
"Make sure noise is the correct shape and type:",
"def test_sample_noise():\n batch_size = 3\n dim = 4\n torch.manual_seed(231)\n z = sample_noise(batch_size, dim)\n np_z = z.cpu().numpy()\n assert np_z.shape == (batch_size, dim)\n assert torch.is_tensor(z)\n assert np.all(np_z >= -1.0) and np.all(np_z <= 1.0)\n assert np.any(np_z < 0.0) and np.any(np_z > 0.0)\n print('All tests passed!')\n \ntest_sample_noise()",
"Flatten\nRecall our Flatten operation from previous notebooks... this time we also provide an Unflatten, which you might want to use when implementing the convolutional generator. We also provide a weight initializer (and call it for you) that uses Xavier initialization instead of PyTorch's uniform default.",
"class Flatten(nn.Module):\n def forward(self, x):\n N, C, H, W = x.size() # read in N, C, H, W\n return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image\n \nclass Unflatten(nn.Module):\n \"\"\"\n An Unflatten module receives an input of shape (N, C*H*W) and reshapes it\n to produce an output of shape (N, C, H, W).\n \"\"\"\n def __init__(self, N=-1, C=128, H=7, W=7):\n super(Unflatten, self).__init__()\n self.N = N\n self.C = C\n self.H = H\n self.W = W\n def forward(self, x):\n return x.view(self.N, self.C, self.H, self.W)\n\ndef initialize_weights(m):\n if isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose2d):\n init.xavier_uniform(m.weight.data)",
"CPU / GPU\nBy default all code will run on CPU. GPUs are not needed for this assignment, but will help you to train your models faster. If you do want to run the code on a GPU, then change the dtype variable in the following cell.",
"dtype = torch.FloatTensor\n# dtype = torch.cuda.FloatTensor ## UNCOMMENT THIS LINE IF YOU'RE ON A GPU!\n",
"Discriminator\nOur first step is to build a discriminator. Fill in the architecture as part of the nn.Sequential constructor in the function below. All fully connected layers should include bias terms. The architecture is:\n * Fully connected layer from size 784 to 256\n * LeakyReLU with alpha 0.01\n * Fully connected layer from 256 to 256\n * LeakyReLU with alpha 0.01\n * Fully connected layer from 256 to 1\nRecall that the Leaky ReLU nonlinearity computes $f(x) = \\max(\\alpha x, x)$ for some fixed constant $\\alpha$; for the LeakyReLU nonlinearities in the architecture above we set $\\alpha=0.01$.\nThe output of the discriminator should have shape [batch_size, 1], and contain real numbers corresponding to the scores that each of the batch_size inputs is a real image.",
"def discriminator():\n \"\"\"\n Build and return a PyTorch model implementing the architecture above.\n \"\"\"\n model = nn.Sequential(\n Flatten(),\n nn.Linear(784, 256),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(256, 256),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(256, 1)\n )\n return model",
"Test to make sure the number of parameters in the discriminator is correct:",
"def test_discriminator(true_count=267009):\n model = discriminator()\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in discriminator. Check your achitecture.')\n else:\n print('Correct number of parameters in discriminator.') \n\ntest_discriminator()",
"Generator\nNow to build the generator network:\n * Fully connected layer from noise_dim to 1024\n * ReLU\n * Fully connected layer with size 1024 \n * ReLU\n * Fully connected layer with size 784\n * TanH\n * To clip the image to be [-1,1]",
"def generator(noise_dim=NOISE_DIM):\n \"\"\"\n Build and return a PyTorch model implementing the architecture above.\n \"\"\"\n model = nn.Sequential(\n nn.Linear(noise_dim, 1024),\n nn.ReLU(inplace=True),\n nn.Linear(1024, 1024),\n nn.ReLU(inplace=True),\n nn.Linear(1024, 784),\n nn.Tanh()\n )\n return model",
"Test to make sure the number of parameters in the generator is correct:",
"def test_generator(true_count=1858320):\n model = generator(4)\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_generator()",
"GAN Loss\nCompute the generator and discriminator loss. The generator loss is:\n$$\\ell_G = -\\mathbb{E}{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\nand the discriminator loss is:\n$$ \\ell_D = -\\mathbb{E}{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] - \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\nNote that these are negated from the equations presented earlier as we will be minimizing these losses.\nHINTS: You should use the bce_loss function defined below to compute the binary cross entropy loss which is needed to compute the log probability of the true label given the logits output from the discriminator. Given a score $s\\in\\mathbb{R}$ and a label $y\\in{0, 1}$, the binary cross entropy loss is\n$$ bce(s, y) = y * \\log(s) + (1 - y) * \\log(1 - s) $$\nA naive implementation of this formula can be numerically unstable, so we have provided a numerically stable implementation for you below.\nYou will also need to compute labels corresponding to real or fake and use the logit arguments to determine their size. Make sure you cast these labels to the correct data type using the global dtype variable, for example:\ntrue_labels = Variable(torch.ones(size)).type(dtype)\nInstead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing.",
"def bce_loss(input, target):\n \"\"\"\n Numerically stable version of the binary cross-entropy loss function.\n\n As per https://github.com/pytorch/pytorch/issues/751\n See the TensorFlow docs for a derivation of this formula:\n https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits\n\n Inputs:\n - input: PyTorch Variable of shape (N, ) giving scores.\n - target: PyTorch Variable of shape (N,) containing 0 and 1 giving targets.\n\n Returns:\n - A PyTorch Variable containing the mean BCE loss over the minibatch of input data.\n \"\"\"\n neg_abs = - input.abs()\n loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()\n return loss.mean()\n\ndef discriminator_loss(logits_real, logits_fake):\n \"\"\"\n Computes the discriminator loss described above.\n \n Inputs:\n - logits_real: PyTorch Variable of shape (N,) giving scores for the real data.\n - logits_fake: PyTorch Variable of shape (N,) giving scores for the fake data.\n \n Returns:\n - loss: PyTorch Variable containing (scalar) the loss for the discriminator.\n \"\"\"\n N, _ = logits_real.size()\n \n labels_real = Variable(torch.ones(N)).type(dtype)\n loss_real = bce_loss(logits_real, labels_real)\n \n labels_fake = Variable(torch.zeros(N)).type(dtype)\n loss_fake = bce_loss(logits_fake, labels_fake)\n \n loss = loss_real + loss_fake\n \n return loss\n\ndef generator_loss(logits_fake):\n \"\"\"\n Computes the generator loss described above.\n\n Inputs:\n - logits_fake: PyTorch Variable of shape (N,) giving scores for the fake data.\n \n Returns:\n - loss: PyTorch Variable containing the (scalar) loss for the generator.\n \"\"\"\n \n N, _ = logits_fake.size()\n \n generated_labels = Variable(torch.ones(N)).type(dtype)\n loss = bce_loss(logits_fake, generated_labels)\n \n return loss",
"Test your generator and discriminator loss. You should see errors < 1e-7.",
"def test_discriminator_loss(logits_real, logits_fake, d_loss_true):\n d_loss = discriminator_loss(Variable(torch.Tensor(logits_real)).type(dtype),\n Variable(torch.Tensor(logits_fake)).type(dtype)).data.cpu().numpy()\n print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n\ntest_discriminator_loss(answers['logits_real'], answers['logits_fake'],\n answers['d_loss_true'])\n\ndef test_generator_loss(logits_fake, g_loss_true):\n g_loss = generator_loss(Variable(torch.Tensor(logits_fake)).type(dtype)).data.cpu().numpy()\n print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n\ntest_generator_loss(answers['logits_fake'], answers['g_loss_true'])",
"Optimizing our loss\nMake a function that returns an optim.Adam optimizer for the given model with a 1e-3 learning rate, beta1=0.5, beta2=0.999. You'll use this to construct optimizers for the generators and discriminators for the rest of the notebook.",
"def get_optimizer(model):\n \"\"\"\n Construct and return an Adam optimizer for the model with learning rate 1e-3,\n beta1=0.5, and beta2=0.999.\n \n Input:\n - model: A PyTorch model that we want to optimize.\n \n Returns:\n - An Adam optimizer for the model with the desired hyperparameters.\n \"\"\"\n \n optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0., 0.999))\n return optimizer",
"Training a GAN!\nWe provide you the main training loop... you won't need to change this function, but we encourage you to read through and understand it.",
"def run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss, show_every=250, \n batch_size=128, noise_size=96, num_epochs=10):\n \"\"\"\n Train a GAN!\n \n Inputs:\n - D, G: PyTorch models for the discriminator and generator\n - D_solver, G_solver: torch.optim Optimizers to use for training the\n discriminator and generator.\n - discriminator_loss, generator_loss: Functions to use for computing the generator and\n discriminator loss, respectively.\n - show_every: Show samples after every show_every iterations.\n - batch_size: Batch size to use for training.\n - noise_size: Dimension of the noise to use as input to the generator.\n - num_epochs: Number of epochs over the training dataset to use for training.\n \"\"\"\n iter_count = 0\n for epoch in range(num_epochs):\n for x, _ in loader_train:\n if len(x) != batch_size:\n continue\n D_solver.zero_grad()\n real_data = Variable(x).type(dtype)\n logits_real = D(2* (real_data - 0.5)).type(dtype)\n\n g_fake_seed = Variable(sample_noise(batch_size, noise_size)).type(dtype)\n fake_images = G(g_fake_seed).detach()\n logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n\n d_total_error = discriminator_loss(logits_real, logits_fake)\n d_total_error.backward() \n D_solver.step()\n\n G_solver.zero_grad()\n g_fake_seed = Variable(sample_noise(batch_size, noise_size)).type(dtype)\n fake_images = G(g_fake_seed)\n\n gen_logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n g_error = generator_loss(gen_logits_fake)\n g_error.backward()\n G_solver.step()\n\n if (iter_count % show_every == 0):\n print('Iter: {}, D: {:.4}, G:{:.4}'.format(iter_count,d_total_error.data[0],g_error.data[0]))\n imgs_numpy = fake_images.data.cpu().numpy()\n show_images(imgs_numpy[0:16])\n plt.show()\n print()\n iter_count += 1",
"Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000.",
"# Make the discriminator\nD = discriminator().type(dtype)\n\n# Make the generator\nG = generator().type(dtype)\n\n# Use the function you wrote earlier to get optimizers for the Discriminator and the Generator\nD_solver = get_optimizer(D)\nG_solver = get_optimizer(G)\n# Run it!\nrun_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss)",
"Least Squares GAN\nWe'll now look at Least Squares GAN, a newer, more stable alernative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss:\n$$\\ell_G = \\frac{1}{2}\\mathbb{E}{z \\sim p(z)}\\left[\\left(D(G(z))-1\\right)^2\\right]$$\nand the discriminator loss:\n$$ \\ell_D = \\frac{1}{2}\\mathbb{E}{x \\sim p_\\text{data}}\\left[\\left(D(x)-1\\right)^2\\right] + \\frac{1}{2}\\mathbb{E}_{z \\sim p(z)}\\left[ \\left(D(G(z))\\right)^2\\right]$$\nHINTS: Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing. When plugging in for $D(x)$ and $D(G(z))$ use the direct output from the discriminator (scores_real and scores_fake).",
"def ls_discriminator_loss(scores_real, scores_fake):\n \"\"\"\n Compute the Least-Squares GAN loss for the discriminator.\n \n Inputs:\n - scores_real: PyTorch Variable of shape (N,) giving scores for the real data.\n - scores_fake: PyTorch Variable of shape (N,) giving scores for the fake data.\n \n Outputs:\n - loss: A PyTorch Variable containing the loss.\n \"\"\"\n \n loss_real = torch.mean((scores_real - 1).type(dtype) ** 2)\n loss_fake = torch.mean(scores_fake ** 2)\n loss = 0.5 * (loss_real + loss_fake)\n \n return loss\n\ndef ls_generator_loss(scores_fake):\n \"\"\"\n Computes the Least-Squares GAN loss for the generator.\n \n Inputs:\n - scores_fake: PyTorch Variable of shape (N,) giving scores for the fake data.\n \n Outputs:\n - loss: A PyTorch Variable containing the loss.\n \"\"\"\n \n loss = torch.mean((scores_fake - 1).type(dtype) ** 2)\n loss *= 0.5\n \n return loss",
"Before running a GAN with our new loss function, let's check it:",
"def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):\n d_loss = ls_discriminator_loss(score_real, score_fake).data.numpy()\n g_loss = ls_generator_loss(score_fake).data.numpy()\n print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n\ntest_lsgan_loss(answers['logits_real'], answers['logits_fake'],\n answers['d_loss_lsgan_true'], answers['g_loss_lsgan_true'])\n\nD_LS = discriminator().type(dtype)\nG_LS = generator().type(dtype)\n\nD_LS_solver = get_optimizer(D_LS)\nG_LS_solver = get_optimizer(G_LS)\n\nrun_a_gan(D_LS, G_LS, D_LS_solver, G_LS_solver, ls_discriminator_loss, ls_generator_loss)",
"INLINE QUESTION 1\nDescribe how the visual quality of the samples changes over the course of training. Do you notice anything about the distribution of the samples? How do the results change across different training runs?\nTODO: SEE TF NOTEBOOk.\nDeeply Convolutional GANs\nIn the first part of the notebook, we implemented an almost direct copy of the original GAN network from Ian Goodfellow. However, this network architecture allows no real spatial reasoning. It is unable to reason about things like \"sharp edges\" in general because it lacks any convolutional layers. Thus, in this section, we will implement some of the ideas from DCGAN, where we use convolutional networks \nDiscriminator\nWe will use a discriminator inspired by the TensorFlow MNIST classification tutorial, which is able to get above 99% accuracy on the MNIST dataset fairly quickly. \n* Reshape into image tensor (Use Unflatten!)\n* 32 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)\n* Max Pool 2x2, Stride 2\n* 64 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)\n* Max Pool 2x2, Stride 2\n* Flatten\n* Fully Connected size 4 x 4 x 64, Leaky ReLU(alpha=0.01)\n* Fully Connected size 1",
"def build_dc_classifier():\n \"\"\"\n Build and return a PyTorch model for the DCGAN discriminator implementing\n the architecture above.\n \"\"\"\n return nn.Sequential(\n ###########################\n ######### TO DO ###########\n ###########################\n Unflatten(batch_size, 1, 28, 28),\n nn.Conv2d(1, 32, kernel_size=5, stride=1),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.MaxPool2d(2, stride=2),\n nn.Conv2d(32, 64, kernel_size=5, stride=1),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.MaxPool2d(2, stride=2),\n Flatten(),\n nn.Linear(1024, 1024),\n nn.LeakyReLU(negative_slope=0.01, inplace=True),\n nn.Linear(1024, 1)\n )\n\ndata = Variable(loader_train.__iter__().next()[0]).type(dtype)\nb = build_dc_classifier().type(dtype)\nout = b(data)\nprint(out.size())",
"Check the number of parameters in your classifier as a sanity check:",
"def test_dc_classifer(true_count=1102721):\n model = build_dc_classifier()\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_dc_classifer()",
"Generator\nFor the generator, we will copy the architecture exactly from the InfoGAN paper. See Appendix C.1 MNIST. See the documentation for tf.nn.conv2d_transpose. We are always \"training\" in GAN mode. \n* Fully connected of size 1024, ReLU\n* BatchNorm\n* Fully connected of size 7 x 7 x 128, ReLU\n* BatchNorm\n* Reshape into Image Tensor\n* 64 conv2d^T filters of 4x4, stride 2, 'same' padding, ReLU\n* BatchNorm\n* 1 conv2d^T filter of 4x4, stride 2, 'same' padding, TanH\n* Should have a 28x28x1 image, reshape back into 784 vector",
"def build_dc_generator(noise_dim=NOISE_DIM):\n \"\"\"\n Build and return a PyTorch model implementing the DCGAN generator using\n the architecture described above.\n \"\"\"\n return nn.Sequential(\n ###########################\n ######### TO DO ###########\n ###########################\n nn.Linear(noise_dim, 1024),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(1024),\n nn.Linear(1024, 7*7*128),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(7*7*128),\n Unflatten(batch_size, 128, 7, 7),\n nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),\n nn.ReLU(inplace=True),\n nn.BatchNorm2d(64),\n nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),\n nn.Tanh(),\n Flatten()\n )\n\ntest_g_gan = build_dc_generator().type(dtype)\ntest_g_gan.apply(initialize_weights)\n\nfake_seed = Variable(torch.randn(batch_size, NOISE_DIM)).type(dtype)\nfake_images = test_g_gan.forward(fake_seed)\nfake_images.size()",
"Check the number of parameters in your generator as a sanity check:",
"def test_dc_generator(true_count=6580801):\n model = build_dc_generator(4)\n cur_count = count_params(model)\n if cur_count != true_count:\n print('Incorrect number of parameters in generator. Check your achitecture.')\n else:\n print('Correct number of parameters in generator.')\n\ntest_dc_generator()\n\nD_DC = build_dc_classifier().type(dtype) \nD_DC.apply(initialize_weights)\nG_DC = build_dc_generator().type(dtype)\nG_DC.apply(initialize_weights)\n\nD_DC_solver = get_optimizer(D_DC)\nG_DC_solver = get_optimizer(G_DC)\n\nrun_a_gan(D_DC, G_DC, D_DC_solver, G_DC_solver, discriminator_loss, generator_loss, num_epochs=5)",
"INLINE QUESTION 2\nWhat differences do you see between the DCGAN results and the original GAN results?\nTODO: SEE TF NOTEBOOK.\nExtra Credit\n Be sure you don't destroy your results above, but feel free to copy+paste code to get results below \n* For a small amount of extra credit, you can implement additional new GAN loss functions below, provided they converge. See AFI, BiGAN, Softmax GAN, Conditional GAN, InfoGAN, etc. \n* Likewise for an improved architecture or using a convolutional GAN (or even implement a VAE)\n* For a bigger chunk of extra credit, load the CIFAR10 data (see last assignment) and train a compelling generative model on CIFAR-10\n* Something new/cool.\nDescribe what you did here\n TBD"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rashikaranpuria/Machine-Learning-Specialization
|
Machine Learning Foundations: A Case Study Approach/Assignment_two/Analyzing product sentiment.ipynb
|
mit
|
[
"Predicting sentiment from product reviews\nFire up GraphLab Create",
"import graphlab;\n",
"Read some product review data\nLoading reviews for a set of baby products.",
"products = graphlab.SFrame('amazon_baby.gl/')",
"Let's explore this data together\nData includes the product name, the review text and the rating of the review.",
"products.head()",
"Build the word count vector for each review",
"products['word_count'] = graphlab.text_analytics.count_words(products['review'])\n\nproducts.head()\n\ngraphlab.canvas.set_target('ipynb')\n\nproducts['name'].show()",
"Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'",
"giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']\n\nlen(giraffe_reviews)\n\ngiraffe_reviews['rating'].show(view='Categorical')",
"Build a sentiment classifier",
"products['rating'].show(view='Categorical')",
"Define what's a positive and a negative sentiment\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.",
"#ignore all 3* reviews\nproducts = products[products['rating'] != 3]\n\n#positive sentiment = 4* or 5* reviews\nproducts['sentiment'] = products['rating'] >=4\n\nproducts.head()",
"Let's train the sentiment classifier",
"train_data,test_data = products.random_split(.8, seed=0)\n\nsentiment_model = graphlab.logistic_classifier.create(train_data,\n target='sentiment',\n features=['word_count'],\n validation_set=test_data)",
"Evaluate the sentiment model",
"sentiment_model.evaluate(test_data, metric='roc_curve')\n\nsentiment_model.show(view='Evaluation')",
"Applying the learned model to understand sentiment for Giraffe",
"giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')\n\ngiraffe_reviews.head()",
"Sort the reviews based on the predicted sentiment and explore",
"giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)\n\ngiraffe_reviews.head()",
"Most positive reviews for the giraffe",
"giraffe_reviews[0]['review']\n\ngiraffe_reviews[1]['review']",
"Show most negative reviews for giraffe",
"giraffe_reviews[-1]['review']\n\ngiraffe_reviews[-2]['review']\n\nselected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']\n\n\nproducts['word_count'] = graphlab.text_analytics.count_words(products['review'])\nselected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']\n\ndef awesome_count(cell):\n if 'hate' in cell:\n return cell['hate']\n else:\n return 0\nproducts['hate'] = products['word_count'].apply(awesome_count)\nproducts.head()\n\ntrain_data,test_data = products.random_split(.8, seed=0)\nselected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']\nselected_words_model = graphlab.logistic_classifier.create(train_data,target='sentiment',features=selected_words,validation_set=test_data, )\nselected_words_model['coefficients'].sort('value', ascending = True)\n\n\n\nselected_words_model.evaluate(test_data)\n\nsentiment_model.evaluate(test_data)\n\ngiraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')\ngiraffe_reviews.head()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AtmaMani/pyChakras
|
stats_101/04_normal_distribution.ipynb
|
mit
|
[
"Normal distribution\nStandard normal distribution takes a bell curve. It is also called as gaussian distribution. Values in nature are believed to take a normal distribution. The equation for normal distribution is \n$$y = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{\\frac{(x-\\mu)^2}{2\\sigma^2}}$$\nwhere\n$\\mu$ is mean\n$\\sigma$ is standard deviation\n$\\pi$ = 3.14159..\n$e$ = 2.71828.. (natural log)",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nvals = np.random.standard_normal(100000)\nlen(vals)\n\nfig, ax = plt.subplots(1,1)\nhist_vals = ax.hist(vals, bins=200, color='red', density=True)",
"The above is the standard normal distribution. Its mean is 0 and SD is 1. About 95% values fall within $\\mu \\pm 2 SD$ and 98% within $\\mu \\pm 3 SD$\nThe area under this curve is 1 which gives the probability of values falling within the range of standard normal.\nA common use is to find the probability of a value falling at a particular range. For instance, find $p(-2 \\le z \\le 2)$ which is the probability of a value falling within $\\mu \\pm 2SD$. This calculated by summing the area under the curve between these bounds.\n$$p(-2 \\le z \\le 2) = 0.9544$$ which is 95.44% probability. Its z score is 0.9544.\nSimilarly\n$$p(z \\ge 5.1) = 0.00000029$$\nFinding z score and p values using SciPy\nThe standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.\nLevels of significance\nBy rule of thumb, a z score greater than 0.005 is considered significant as such a value has a very low probability of occuring. Thus, there is less chance of it occurring randomly and hence, there is probably a force acting on it (significant force, not random chance).\nTransformation to standard normal\nIf the distribution of a phenomena follows normal dist, then you can transform it to standard normal, so you can measure the z scores. To do so,\n$$std normal value = \\frac{observed - \\mu}{\\sigma}$$ You subtract the mean and divide by SD of the distribution.\nExample\nLet X be age of US presidents at inaugration. $X \\in N(\\mu = 54.8, \\sigma=6.2)$. What is the probability of choosing a president at random that is less than 44 years of age.\nWe need to find $p(x<44)$. First we need to transform to standard normal.\n$$p(z< \\frac{44-54.8}{6.2})$$\n$$p(z<-1.741) = 0.0409 \\approx 4\\%$$\nFinding z score and p values using SciPy\nThe standard normal is useful as a z table to look up the probability of a z score (x axis). You can use Scipy to accomplish this.",
"import scipy.stats as st\n\n# compute the p value for a z score\nst.norm.cdf(-1.741)",
"Let us try for some common z scores:",
"[st.norm.cdf(-3), st.norm.cdf(-1), st.norm.cdf(0), st.norm.cdf(1), st.norm.cdf(2)]",
"As you noticed, the norm.cdf() function gives the cumulative probability (left tail) from -3 to 3 approx. If you need right tailed distribution, you simply subtract this value from 1.\nFinding z score from a p value\nSometimes, you have the probability (p value), but want to find the z score or how many SD does this value fall from mean. You can do this inverse using ppf().",
"# Find Z score for a probability of 0.97 (2sd)\nst.norm.ppf(0.97)\n\n[st.norm.ppf(0.95), st.norm.ppf(0.97), st.norm.ppf(0.98), st.norm.ppf(0.99)]",
"As is the ppf() function gives only positive z scores, you need to apply $\\pm$ to it.\nTransformation to standard normal and machine learning\nTransforming features to standard normal has applications in machine learning. As each feature has a different unit, their range, standard deviation vary. Hence we scale them all to standard normal distribution with mean=0 and SD=1. This way a learner finds those variables that are truly influencial and not simply because it has a larger range.\nTo accomplish this easily, we use scikit-learn's StandardScaler object as shown below:",
"demo_dist = 55 + np.random.randn(200) * 3.4\nstd_normal = np.random.randn(200)\n\n[demo_dist.mean(), demo_dist.std(), demo_dist.min(), demo_dist.max()]\n\n[std_normal.mean(), std_normal.std(), std_normal.min(), std_normal.max()]",
"Now let us use scikit-learn to easily transform this dataset",
"from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\n\ndemo_dist = demo_dist.reshape(200,1)\ndemo_dist_scaled = scaler.fit_transform(demo_dist)\n\n[round(demo_dist_scaled.mean(),3), demo_dist_scaled.std(), demo_dist_scaled.min(), demo_dist_scaled.max()]\n\nfig, axs = plt.subplots(2,2, figsize=(15,8))\np1 = axs[0][0].scatter(sorted(demo_dist), sorted(std_normal))\naxs[0][0].set_title(\"Scatter of original dataset against standard normal\")\n\np1 = axs[0][1].scatter(sorted(demo_dist_scaled), sorted(std_normal))\naxs[0][1].set_title(\"Scatter of scaled dataset against standard normal\")\n\np2 = axs[1][0].hist(demo_dist, bins=50)\naxs[1][0].set_title(\"Histogram of original dataset against standard normal\")\n\np3 = axs[1][1].hist(demo_dist_scaled, bins=50)\naxs[1][1].set_title(\"Histogram of scaled dataset against standard normal\")",
"As you see above, the shape of distribution is the same, just the values are scaled.\nAssessing normality of a distribution\nTo assess how normal a distribution of values is, we sort the values, then plot them against sorted values of standard normal distribution. If the values fall on a straight line, then they are normally distributed, else they exhibit skewness and or kurtosis.",
"demo_dist = 55 + np.random.randn(200) * 3.4\nstd_normal = np.random.randn(200)\n\ndemo_dist = sorted(demo_dist)\nstd_normal = sorted(std_normal)\n\nplt.scatter(demo_dist, std_normal)",
"For the most part, the values fall on a straight line, except in the fringes. Thus, the demo distribution is fairly normal.\nStandard error\nAs can be seen, we use statistics to estimate population mean $\\mu$ from sample mean $\\bar x$. Standard error of $\\bar x$ represents on average, how far will it be from $\\mu$.\nAs you suspect, the quality of $\\bar x$, or its standard error will depend on the sample size. In addition, it also depends on population standard deviation. Thus for a tighter population, it is much easy to estimate mean from a small sample as there are fewer outliers.\nNonetheless,\n$$SE(\\bar x) = \\frac{\\sigma}{\\sqrt{n}}$$ where $\\sigma$ is population SD and $n$ is sample size.\nEmpirically, $SE(\\bar x)$ is same as the SD of a distribution of sample means. If you were to collect a number of samples, find their means to form a distribution, the SD of this distribution represents the standard error of that estimate (mean in this case).\nConfidence intervals\nFrom a population, many samples of size > 30 is drawn and their means are computed and plotted, then with $\\bar x$ or $\\bar y$ -> mean of a sample and $n$ -> size of 1 sample, $\\sigma_{\\bar x}$ or $\\sigma_{\\bar y}$ is SD of distribution of samples, you can observe that\n - $\\mu_{\\bar x} = \\mu$ (mean of distribution of sample means equals population mean)\n - $\\frac{\\sigma}{\\sqrt n} = \\sigma_{\\bar x}$ (SD of population over sqrt of sample size equals SD of sampling distribution)\n - relationship between population mean and mean of a single sample is \n - $$ \\mu = \\bar y \\pm z(\\frac{\\sigma}{\\sqrt n})$$\nThe z table and normal distribution are used to derive confidence intervals. Popular intervals and their corresponding z scores are\n|interval|z-value|\n|--------|-------|\n|99%|$\\pm 2.576$|\n|98%|$\\pm 2.326$|\n|95%|$\\pm 1.96$|\n|90%|$\\pm 1.645$|\nAs you imagine, these are the values of z on X axis of the standard normal distribution and the area they cover.\nFor a normal distribution, confidence intervals for an estimate (such as mean) can be given as\n$$CI = \\bar x \\pm z\\frac{s}{\\sqrt{n}}$$\nwhere $s$ is sample SD that is substituted in place of population SD, if sample size is larger than 30.\nExample\nThe average TV viewing times of 40 adults sampled in Iowa is 7.75 hours per week. The SD of this sample is 12.5. Find the 95% CI population's average TV viewing times.\n$\\bar x = 40$, $s=12.5$, $n=40$, $Z=1.96$ for 95% CI. Thus\n$$95\\%CI = 7.75 \\pm 1.96\\frac{12.5}{\\sqrt{40}}$$\n$$95\\%CI = (3.877 | 11.623)$$\nThus the 95% CI is pretty wide. Intuitively, if SD of sample is smaller, then so is the CI."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ModSimPy
|
notebooks/chap20.ipynb
|
mit
|
[
"Modeling and Simulation in Python\nChapter 20\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International",
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *",
"Dropping pennies\nI'll start by getting the units we need from Pint.",
"m = UNITS.meter\ns = UNITS.second",
"And defining the initial state.",
"init = State(y=381 * m, \n v=0 * m/s)",
"Acceleration due to gravity is about 9.8 m / s$^2$.",
"g = 9.8 * m/s**2",
"I'll start with a duration of 10 seconds and step size 0.1 second.",
"t_end = 10 * s\n\ndt = 0.1 * s",
"Now we make a System object.",
"system = System(init=init, g=g, t_end=t_end, dt=dt)",
"And define the slope function.",
"def slope_func(state, t, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing `g`\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n g = system.g \n\n dydt = v\n dvdt = -g\n \n return dydt, dvdt",
"It's always a good idea to test the slope function with the initial conditions.",
"dydt, dvdt = slope_func(system.init, 0, system)\nprint(dydt)\nprint(dvdt)",
"Now we're ready to call run_ode_solver",
"results, details = run_ode_solver(system, slope_func)\ndetails",
"Here are the results:",
"results.head()\n\nresults.tail()",
"And here's position as a function of time:",
"def plot_position(results):\n plot(results.y, label='y')\n decorate(xlabel='Time (s)',\n ylabel='Position (m)')\n\nplot_position(results)\nsavefig('figs/chap20-fig01.pdf')",
"Onto the sidewalk\nTo figure out when the penny hit the sidewalk, we can use crossings, which finds the times where a Series passes through a given value.",
"t_crossings = crossings(results.y, 0)",
"For this example there should be just one crossing, the time when the penny hits the sidewalk.",
"t_sidewalk = t_crossings[0] * s",
"We can compare that to the exact result. Without air resistance, we have\n$v = -g t$\nand\n$y = 381 - g t^2 / 2$\nSetting $y=0$ and solving for $t$ yields\n$t = \\sqrt{\\frac{2 y_{init}}{g}}$",
"sqrt(2 * init.y / g)",
"The estimate is accurate to about 9 decimal places.\nEvents\nInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. run_ode_solver provides exactly the tool we need, event functions.\nHere's an event function that returns the height of the penny above the sidewalk:",
"def event_func(state, t, system):\n \"\"\"Return the height of the penny above the sidewalk.\n \"\"\"\n y, v = state\n return y",
"And here's how we pass it to run_ode_solver. The solver should run until the event function returns 0, and then terminate.",
"results, details = run_ode_solver(system, slope_func, events=event_func)\ndetails",
"The message from the solver indicates the solver stopped because the event we wanted to detect happened.\nHere are the results:",
"results.tail()",
"With the events option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. \nThe last time step is when the event occurred:",
"t_sidewalk = get_last_label(results) * s",
"The result is accurate to about 4 decimal places.\nWe can also check the velocity of the penny when it hits the sidewalk:",
"v_sidewalk = get_last_value(results.v)",
"And convert to kilometers per hour.",
"km = UNITS.kilometer\nh = UNITS.hour\nv_sidewalk.to(km / h)",
"If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.\nSo it's a good thing there is air resistance.\nUnder the hood\nHere is the source code for crossings so you can see what's happening under the hood:",
"source_code(crossings)",
"The documentation of InterpolatedUnivariateSpline is here.\nExercises\nExercise: Here's a question from the web site Ask an Astronomer:\n\"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed.\"\nUse run_ode_solver to answer this question.\nHere are some suggestions about how to proceed:\n\n\nLook up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.\n\n\nWhen the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.\n\n\nExpress your answer in days, and plot the results as millions of kilometers versus days.\n\n\nIf you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.\nYou might also be interested to know that it's actually not that easy to get to the Sun.",
"# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jgomezc1/medios
|
NOTEBOOKS/Ej1_EcPlano.ipynb
|
mit
|
[
"Ejemplo 1. Determinar la ecuación del plano que pasa por 3 puntos\nEsta es de la forma:\n$ax+by+cz=d$",
"from numpy import array, cross, dot",
"Primero determinemos el vector posición $\\vec{r_{1}}$, $\\vec{r_{2}}$ y $\\vec{r_{3}}$ de cada punto:",
"r1 = array([2,-1,1])\n\nr2 = array([3,2,-1])\n\nr3 = array([-1,3,2])",
"Posteriormente se determina una base vectorial en el plano por medio de los vectores:\n$\\vec{A}=\\vec{r_{2}}-\\vec{r_{1}}$ y $\\vec{B}=\\vec{r_{3}}-\\vec{r_{1}}$",
"A = r2 - r1\n\nB = r3 - r1\n\nprint A, B",
"Via producto cruz se determina el vector normal al plano \n$$ \\vec{N} = \\vec{A} \\times \\vec{B} $$",
"N = cross(A,B)\n\nprint N",
"Con este vector la ecuación general del plano será: \n$$Ax + By + Cz + D = 0$$ \nDonde $A$, $B$, $C$ son las componentes de $\\vec{N}$ y D es una constante por evaluar. \nPara ello en la ecuación del plano reemplazamos las coordenadas del punto $P_1$, por ejemplo. Así el valor de D, podría construirse como el negativo producto punto entre $\\vec{N}$ y el vector $\\vec{r_1}$",
"D=array([0,0,0])\nD = -dot(N,r1)\n\nprint D",
"De esta forma, la ecuación de plano es:",
"N = str(N[0]) + 'x + ' + str(N[1]) + 'y + ' + str(N[2]) + 'z = ' + str(-D)\n\nprint N\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open('./custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ExaScience/smurff
|
docs/notebooks/a_first_example.ipynb
|
mit
|
[
"A first example running SMURFF\nIn this notebook we will run the BPMF algorithm using SMURFF, \non compound-activity data.\nDownloading the data files\nIn these examples we use ChEMBL dataset for compound-proteins activities (IC50). The IC50 values and ECFP fingerprints can be downloaded using this smurff function:",
"import logging\nlogging.basicConfig(level = logging.INFO)\n\nimport smurff\n\nic50_train, ic50_test, ecfp = smurff.load_chembl()",
"The resulting variables are all scipy.sparse matrices: ic50 is\na sparse matrix containing interactions between chemical compounds (in the rows)\nand protein targets (called essays - in the columns). The matrix is already split in \nas train and test set.\nThe ecfp contains compound features. These features will not be used in this example.\nHaving a look at the data\nThe spy function in matplotlib is a handy function to plot sparsity pattern of a matrix.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt \n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.spy(ic50_train.tocsr()[0:1000,:].T, markersize = 1)\n",
"Running SMURFF\nFinally we run make a BPMF training trainSession and call run. The run function builds the model and\nreturns the predictions of the test data.",
"trainSession = smurff.BPMFSession(\n Ytrain = ic50_train,\n Ytest = ic50_test,\n num_latent = 16,\n burnin = 40,\n nsamples = 200,\n verbose = 0,\n checkpoint_freq = 1,\n save_freq = 1,)\n\npredictions = trainSession.run()",
"We can use the calc_rmse function to calculate the RMSE.",
"rmse = smurff.calc_rmse(predictions)\nrmse",
"Plotting predictions versus actual values\nNext to RMSE, we can also plot the predicted versus the actual values, to see how well the model performs.",
"import numpy\nfrom matplotlib.pyplot import subplots, show\n\ny = numpy.array([ p.val for p in predictions ])\npredicted = numpy.array([ p.pred_avg for p in predictions ])\n\nfig, ax = subplots()\nax.scatter(y, predicted, edgecolors=(0, 0, 0))\nax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)\nax.set_xlabel('Measured')\nax.set_ylabel('Predicted')\nshow()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dawenl/cofactor
|
src/WMF_ML20M.ipynb
|
apache-2.0
|
[
"Fit WMF (weighted matrix factorization) to the binarized ML20M",
"import itertools\nimport os\nimport sys\nos.environ['OPENBLAS_NUM_THREADS'] = '1'\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import sparse\n\nimport content_wmf\nimport batched_inv_joblib\nimport rec_eval",
"Load pre-processed data\nChange this to wherever you saved the pre-processed data following this notebook.",
"DATA_DIR = '/hdd2/dawen/data/ml-20m/pro/'\n\nunique_uid = list()\nwith open(os.path.join(DATA_DIR, 'unique_uid.txt'), 'r') as f:\n for line in f:\n unique_uid.append(line.strip())\n \nunique_sid = list()\nwith open(os.path.join(DATA_DIR, 'unique_sid.txt'), 'r') as f:\n for line in f:\n unique_sid.append(line.strip())\n\nn_items = len(unique_sid)\nn_users = len(unique_uid)\n\nprint n_users, n_items\n\ndef load_data(csv_file, shape=(n_users, n_items)):\n tp = pd.read_csv(csv_file)\n timestamps, rows, cols = np.array(tp['timestamp']), np.array(tp['uid']), np.array(tp['sid'])\n seq = np.concatenate((rows[:, None], cols[:, None], np.ones((rows.size, 1), dtype='int'), timestamps[:, None]), axis=1)\n data = sparse.csr_matrix((np.ones_like(rows), (rows, cols)), dtype=np.int16, shape=shape)\n return data, seq\n\ntrain_data, train_raw = load_data(os.path.join(DATA_DIR, 'train.csv'))\n\nvad_data, vad_raw = load_data(os.path.join(DATA_DIR, 'validation.csv'))",
"Train the model",
"num_factors = 100\nnum_iters = 50\nbatch_size = 1000\n\nn_jobs = 4\nlam_theta = lam_beta = 1e-5\n\nbest_ndcg = -np.inf\nU_best = None\nV_best = None\nbest_alpha = 0\n\nfor alpha in [2, 5, 10, 30, 50]: \n S = content_wmf.linear_surplus_confidence_matrix(train_data, alpha=alpha)\n\n U, V, vad_ndcg = content_wmf.factorize(S, num_factors, vad_data=vad_data, num_iters=num_iters, \n init_std=0.01, lambda_U_reg=lam_theta, lambda_V_reg=lam_beta, \n dtype='float32', random_state=98765, verbose=False, \n recompute_factors=batched_inv_joblib.recompute_factors_batched, \n batch_size=batch_size, n_jobs=n_jobs)\n if vad_ndcg > best_ndcg:\n best_ndcg = vad_ndcg\n U_best = U.copy()\n V_best = V.copy()\n best_alpha = alpha\n\nprint best_alpha, best_ndcg\n\ntest_data, _ = load_data(os.path.join(DATA_DIR, 'test.csv'))\ntest_data.data = np.ones_like(test_data.data)\n\n# alpha = 10 gives the best validation performance\nprint 'Test Recall@20: %.4f' % rec_eval.recall_at_k(train_data, test_data, U_best, V_best, k=20, vad_data=vad_data)\nprint 'Test Recall@50: %.4f' % rec_eval.recall_at_k(train_data, test_data, U_best, V_best, k=50, vad_data=vad_data)\nprint 'Test NDCG@100: %.4f' % rec_eval.normalized_dcg_at_k(train_data, test_data, U_best, V_best, k=100, vad_data=vad_data)\nprint 'Test MAP@100: %.4f' % rec_eval.map_at_k(train_data, test_data, U_best, V_best, k=100, vad_data=vad_data)\n\nnp.savez('WMF_K100_ML20M.npz', U=U_best, V=V_best)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs
|
site/en/r1/tutorials/distribute/keras.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Distributed training in TensorFlow\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\n\nNote: This is an archived TF1 notebook. These are configured\nto run in TF2's \ncompatibility mode\nbut will run in TF1 as well. To use TF1 in Colab, use the\n%tensorflow_version 1.x\nmagic.\n\nOverview\nThe tf.distribute.Strategy API provides an abstraction for distributing your training\nacross multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.\nThis tutorial uses the tf.distribute.MirroredStrategy, which\ndoes in-graph replication with synchronous training on many GPUs on one machine.\nEssentially, it copies all of the model's variables to each processor.\nThen, it uses all-reduce to combine the gradients from all processors and applies the combined value to all copies of the model.\nMirroredStategy is one of several distribution strategy available in TensorFlow core. You can read about more strategies at distribution strategy guide.\nKeras API\nThis example uses the tf.keras API to build the model and training loop. For custom training loops, see this tutorial.\nImport Dependencies",
"# Import TensorFlow\nimport tensorflow.compat.v1 as tf\n\nimport tensorflow_datasets as tfds\n\nimport os",
"Download the dataset\nDownload the MNIST dataset and load it from TensorFlow Datasets. This returns a dataset in tf.data format.\nSetting with_info to True includes the metadata for the entire dataset, which is being saved here to ds_info.\nAmong other things, this metadata object includes the number of train and test examples.",
"datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)\nmnist_train, mnist_test = datasets['train'], datasets['test']",
"Define Distribution Strategy\nCreate a MirroredStrategy object. This will handle distribution, and provides a context manager (tf.distribute.MirroredStrategy.scope) to build your model inside.",
"strategy = tf.distribute.MirroredStrategy()\n\nprint ('Number of devices: {}'.format(strategy.num_replicas_in_sync))",
"Setup Input pipeline\nIf a model is trained on multiple GPUs, the batch size should be increased accordingly so as to make effective use of the extra computing power. Moreover, the learning rate should be tuned accordingly.",
"# You can also do ds_info.splits.total_num_examples to get the total\n# number of examples in the dataset.\n\nnum_train_examples = ds_info.splits['train'].num_examples\nnum_test_examples = ds_info.splits['test'].num_examples\n\nBUFFER_SIZE = 10000\n\nBATCH_SIZE_PER_REPLICA = 64\nBATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync",
"Pixel values, which are 0-255, have to be normalized to the 0-1 range. Define this scale in a function.",
"def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n\n return image, label",
"Apply this function to the training and test data, shuffle the training data, and batch it for training.",
"train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\neval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)",
"Create the model\nCreate and compile the Keras model in the context of strategy.scope.",
"with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])",
"Define the callbacks.\nThe callbacks used here are:\n\nTensorBoard: This callback writes a log for TensorBoard which allows you to visualize the graphs.\nModel Checkpoint: This callback saves the model after every epoch.\nLearning Rate Scheduler: Using this callback, you can schedule the learning rate to change after every epoch/batch.\n\nFor illustrative purposes, add a print callback to display the learning rate in the notebook.",
"# Define the checkpoint directory to store the checkpoints\n\ncheckpoint_dir = './training_checkpoints'\n# Name of the checkpoint files\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt_{epoch}\")\n\n# Function for decaying the learning rate.\n# You can define any decay function you need.\ndef decay(epoch):\n if epoch < 3:\n return 1e-3\n elif epoch >= 3 and epoch < 7:\n return 1e-4\n else:\n return 1e-5\n\n# Callback for printing the LR at the end of each epoch.\nclass PrintLR(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs=None):\n print ('\\nLearning rate for epoch {} is {}'.format(\n epoch + 1, tf.keras.backend.get_value(model.optimizer.lr)))\n\ncallbacks = [\n tf.keras.callbacks.TensorBoard(log_dir='./logs'),\n tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,\n save_weights_only=True),\n tf.keras.callbacks.LearningRateScheduler(decay),\n PrintLR()\n]",
"Train and evaluate\nNow, train the model in the usual way, calling fit on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.",
"model.fit(train_dataset, epochs=10, callbacks=callbacks)",
"As you can see below, the checkpoints are getting saved.",
"# check the checkpoint directory\n!ls {checkpoint_dir}",
"To see how the model perform, load the latest checkpoint and call evaluate on the test data.\nCall evaluate as before using appropriate datasets.",
"model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))\n\neval_loss, eval_acc = model.evaluate(eval_dataset)\nprint ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))",
"To see the output, you can download and view the TensorBoard logs at the terminal.\n$ tensorboard --logdir=path/to/log-directory",
"!ls -sh ./logs",
"Export to SavedModel\nIf you want to export the graph and the variables, SavedModel is the best way of doing this. The model can be loaded back with or without the scope. Moreover, SavedModel is platform agnostic.",
"path = 'saved_model/'\n\ntf.keras.experimental.export_saved_model(model, path)",
"Load the model without strategy.scope.",
"unreplicated_model = tf.keras.experimental.load_from_saved_model(path)\n\nunreplicated_model.compile(\n loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\neval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)\nprint ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))",
"What's next?\nRead the distribution strategy guide.\nNote: tf.distribute.Strategy is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via issues on GitHub."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ealogar/curso-python
|
sysadmin/03_parsing_benchmarks.ipynb
|
apache-2.0
|
[
"Parsing\nGoals:\n- Plan a parsing strategy\n- Use basic regular expressions: match, search, sub\n- Benchmarking a parser\n- Running nosetests\n- Write a simple parser\n\nModules:",
"import re\nimport nose\n# %timeit",
"Parsing is hard...\n<h2>\n<i>\"System Administrators spent $24.3\\%$ of\n their work-life parsing files.\"</i>\n <br><br>\n\n Independent analysis by The GASP* Society ;) <br>\n </h2>\n<h3>\n *(Grep Awk Sed Perl)\n </h3>\n\nstrategy!\n<table>\n<tr><td>\n<ol><li>Collect parsing samples\n<li>Play in ipython and collect %history\n<li>Write tests, then the parser\n<li>Eventually benchmark\n</ol>\n</td><td>\n<img src=\"parsing-lifecycle.png\" />\n</td></tr>\n</table>\n\nParsing postfix logs",
"from __future__ import print_function\n# Before writing the parser, collect samples of\n# the interesting lines. For now just\nmail_sent = 'May 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'\nmail_delivered = 'May 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'\n\nprint(\"I'm goint to parse the following line\", mail_sent, sep=\"\\n\\n\")\n\ndef test_sent():\n hour, host, to = parse_line(mail_sent)\n assert hour == '08:00:00'\n assert to == 'jon@doe.it'\n\n# Play with mail_sent\nmail_sent.split()\n\n# You can number fields with enumerate. \n# Remember that ipython puts the last returned value in `_`\n# in our case: _ = mail_sent.split()\n# which is useful in interactive mode!\nfields, counting = _, enumerate(_)\nprint(*counting, sep=\"\\n\")\n#counting = enumerate(mail_sent.split())\n#for it in counting:\n# print(it)\n\n# Now we can pick fields singularly...\nhour, host, dest = fields[2], fields[3], fields[6]\nprint(\"Hour: {}, host: {}, dest: {}\".format(hour, host, dest))",
"Exercise I\n- complete the parse_line(line) function\n- run the tests until all pass",
"test_str_1 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'\ntest_str_2 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'\n\n\ndef test_sent():\n hour, host, destination = parse_line(test_str_1)\n assert hour == '08:00:00'\n assert host == 'test-fe1'\n assert destination == 'to=<jon@doe.it>,'\n\n\ndef test_delivered():\n hour, host, destination = parse_line(test_str_2)\n print(destination)\n assert hour == '08:00:00'\n assert host == 'test-fe1'\n assert destination is None\n\n\ndef parse_line(line):\n \"\"\" Complete the parse line function.\n \"\"\"\n # Hint: \"you can\".split()\n # Hint: \"<you can slice>\"[1:-1] or use re.split\n pass\ntest_sent()\ntest_delivered()",
"Python Regexp",
"# Python supports regular expressions via\nimport re\n\n# We start showing a grep-reloaded function\ndef grep(expr, fpath):\n one = re.compile(expr) # ...has two lookup methods...\n assert ( one.match # which searches from ^ the beginning\n and one.search ) # that searches $\\pyver{anywhere}$\n\n with open(fpath) as fp:\n return [x for x in fp if one.search(x)]\n\n# The function seems to work as expected ;)\nassert not grep(r'^localhost', '/etc/hosts')\n\n# And some more tests\nret = grep('127.0.0.1', '/etc/hosts')\nassert ret, \"ret should not be empty\"\nprint(*ret)",
"Achieve more complex splitting using regular expressions.",
"# Splitting with re.findall\n\nfrom re import findall # can be misused too;\n\n# eg for adding the \":\" to a\nmac = \"00\"\"24\"\"e8\"\"b4\"\"33\"\"20\"\n\n# ...using this\nre_hex = \"[0-9a-fA-F]{2}\"\nmac_address = ':'.join(findall(re_hex, mac))\nprint(\"The mac address is \", mac_address)\n\n# Actually this does a bit of validation, requiring all chars to be in the 0-F range",
"Benchmarking in iPython - I\n\nParsing big files needs benchmarks. timeit module is a good starting point\n\nWe are going to measure the execution time of various tasks, using different strategies like regexp, join and split.",
"# Run the following cell many times. \n# Do you always get the same results?\nimport timeit\ntest_all_regexps = (\"..\", \"[a-fA-F0-9]{2}\")\nfor re_s in test_all_regexps:\n print(timeit.timeit(stmt=\"':'.join(findall(re_s, mac))\",\n setup=\"from re import findall;re_s='{}';mac='{}'\".format(re_s, mac))) \n\n# We can even compare compiled vs inline regexp\nimport re\nfrom time import sleep\nfor re_s in test_all_regexps:\n print(timeit.timeit(stmt=\"':'.join(re_c.findall(mac))\",\n setup=\"from re import findall, compile;re_c=compile('{}');mac='{}'\".format(re_s, mac)))\n\n# ...or simple\nprint(timeit.timeit(stmt=\"':'.join([mac[i:i+2] for i in range(0,12,2)])\",\n setup=\"from re import findall;mac='{}'\".format(mac))) ",
"Parsing: Exercise II\nNow another test for the delivered messages\n - Improve the parse_line function to make the tests pass",
"#\n# Use this cell for Exercise II\n#\n\ntest_str_1 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'\ntest_str_2 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'\n\n\ndef test_sent():\n hour, host, destination = parse_line(test_str_1)\n assert hour == '08:00:00'\n assert host == 'test-fe1'\n assert destination == 'jon@doe.it'\n\n\ndef test_delivered():\n hour, host, destination = parse_line(test_str_2)\n assert hour == '08:00:00'\n assert host == 'test-fe1'\n assert destination is None\n\n\ndef parse_line(line):\n \"\"\" Complete the parse line function.\n \"\"\"\n # Hint: \"you can\".split()\n # Hint: \"<you can slice>\"[1:-1] or use re.split\n pass\n\ntest_sent()\ntest_delivered()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DTUWindEnergy/FUSED-Wake
|
examples/Script.ipynb
|
mit
|
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\nfrom scipy import interpolate\nimport matplotlib.pyplot as plt\n\nimport os,sys,inspect\ncurrentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\nparentdir = os.path.dirname(os.path.dirname(currentdir))\n#sys.path.insert(0,parentdir) \n#sys.path.append(parentdir+'/gclarsen/src/gclarsen')\n\nimport fusedwake.WindTurbine as wt\nimport fusedwake.WindFarm as wf\n#import fusedwake.fused as fused_gcl\nfrom fusedwake.gcl import *\n\n#from gclarsen.fusedwasp import PlantFromWWH\n#wwh = PlantFromWWH(filename = parentdir+'/wind-farm-wake-model/gclarsen/src/gclarsen/test/wind_farms/horns_rev/hornsrev1_turbine_nodescription.wwh')\n#wwh = PlantFromWWH(filename = 'hornsrev1_turbine_nodescription.wwh')",
"Verification of the FUSED-Wind wrapper\ncommon inputs",
"wf.WindFarm?\n\nv80 = wt.WindTurbine('Vestas v80 2MW offshore','V80_2MW_offshore.dat',70,40)\nHR1 = wf.WindFarm(name='Horns Rev 1',yml='hornsrev.yml')#,v80)\nWD = range(0,360,1)",
"FUSED-Wind implementation",
"\"\"\"\n##Fused inputs\ninputs = dict(\n wind_speed=8.0,\n roughness=0.0001,\n TI=0.05,\n NG=4,\n sup='lin',\n wt_layout = fused_gcl.generate_GenericWindFarmTurbineLayout(HR1))\n\nfgcl = fused_gcl.FGCLarsen()\n# Setting the inputs\nfor k,v in inputs.iteritems():\n setattr(fgcl, k, v)\n\nfP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n fgcl.wind_direction = wd\n fgcl.run()\n fP_WF[iwd] = fgcl.power\n\"\"\"",
"pure python implementation",
"P_WF = np.zeros([len(WD)])\nP_WF_v0 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF[iwd] = P_WT.sum()\n P_WT,U_WT,CT_WT = GCLarsen_v0(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF_v0[iwd] = P_WT.sum()\n\n\n\n\nfig, ax = plt.subplots()\nax.plot(WD,P_WF/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-o', label='python')\nax.plot(WD,P_WF_v0/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-d', label='python v0')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')",
"Asserting new implementation",
"WD = 261.05\nP_WT,U_WT,CT_WT = GCLarsen_v0(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')\nP_WT_2,U_WT_2,CT_WT_2 = GCLarsen(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')\n\nnp.testing.assert_array_almost_equal(U_WT,U_WT_2)\nnp.testing.assert_array_almost_equal(P_WT,P_WT_2)",
"There was a bug corrected in the new implementation of the GCL model\nTime comparison\nNew implementation is wrapped inside fusedwind",
"WD = range(0,360,1)\n\n%%timeit\nfP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n fgcl.wind_direction = wd\n fgcl.run()\n fP_WF[iwd] = fgcl.power\n\n%%timeit\n#%%prun -s cumulative #profiling\nP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF[iwd] = P_WT.sum()",
"Pandas",
"df=pd.DataFrame(data=P_WF, index=WD, columns=['P_WF'])\n\ndf.plot()",
"WD uncertainty\nNormally distributed wind direction uncertainty (reference wind direction, not for individual turbines).",
"P_WF_GAv8 = np.zeros([len(WD)])\nP_WF_GAv16 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GAv8[iwd] = P_WT_GAv.sum()\n P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=16, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GAv16[iwd] = P_WT_GAv.sum()\n\n\nfig, ax = plt.subplots()\nfig.set_size_inches([12,6])\nax.plot(WD,P_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-o', label='Pure python')\nax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')\nax.plot(WD,P_WF_GAv16/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 16')\nax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 8')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')",
"Uniformly distributed wind direction uncertainty (bin/sectors definition)",
"P_WF_GA_u8 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Uni_U_WD(meanWD=wd,U_WD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GA_u8[iwd] = P_WT_GAv.sum()\n\n\nfig, ax = plt.subplots()\nfig.set_size_inches([12,6])\nax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')\nax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Normal, NG_P = 8')\nax.plot(WD,P_WF_GA_u8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Uniform, NG_P = 8')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
alexandrnikitin/algorithm-sandbox
|
courses/DAT256x/Module04/04-01-Data and Visualization.ipynb
|
mit
|
[
"Data and Data Visualization\nMachine learning, and therefore a large part of AI, is based on statistical analysis of data. In this notebook, you'll examine some fundamental concepts related to data and data visualization.\nIntroduction to Data\nStatistics are based on data, which consist of a collection of pieces of information about things you want to study. This information can take the form of descriptions, quantities, measurements, and other observations. Typically, we work with related data items in a dataset, which often consists of a collection of observations or cases. Most commonly, we thing about this dataset as a table that consists of a row for each observation, and a column for each individual data point related to that observation - we variously call these data points attributes or features, and they each describe a specific characteristic of the thing we're observing.\nLet's take a look at a real example. In 1886, Francis Galton conducted a study into the relationship between heights of parents and their (adult) children. Run the Python code below to view the data he collected (you can safely ignore a deprecation warning if it is displayed):",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\ndf",
"Types of Data\nNow, let's take a closer look at this data (you can click the left margin next to the dataset to toggle between full height and a scrollable pane). There are 933 observations, each one recording information pertaining to an individual child. The information recorded consists of the following features:\n- family: An identifier for the family to which the child belongs.\n- father: The height of the father.\n- mother: The height of the mother.\n- midparentHeight: The mid-point between the father and mother's heights (calculated as (father + 1.08 x mother) ÷ 2)\n- children: The total number of children in the family.\n- childNum: The number of the child to whom this observation pertains (Galton numbered the children in desending order of height, with male children listed before female children)\n- gender: The gender of the child to whom this observation pertains.\n- childHeight: The height of the child to whom this observation pertains.\nIt's worth noting that there are several distinct types of data recorded here. To begin with, there are some features that represent qualities, or characteristics of the child - for example, gender. Other feaures represent a quantity or measurement, such as the child's height. So broadly speaking, we can divide data into qualitative and quantitative data.\nQualitative Data\nLet's take a look at qualitative data first. This type of data is categorical - it is used to categorize or identify the entity being observed. Sometimes you'll see features of this type described as factors. \nNominal Data\nIn his observations of children's height, Galton assigned an identifier to each family and he recorded the gender of each child. Note that even though the family identifier is a number, it is not a measurement or quantity. Family 002 it not \"greater\" than family 001, just as a gender value of \"male\" does not indicate a larger or smaller value than \"female\". These are simply named values for some characteristic of the child, and as such they're known as nominal data.\nOrdinal Data\nSo what about the childNum feature? It's not a measurement or quantity - it's just a way to identify individual children within a family. However, the number assigned to each child has some additional meaning - the numbers are ordered. You can find similar data that is text-based; for example, data about training courses might include a \"level\" attribute that indicates the level of the course as \"basic:, \"intermediate\", or \"advanced\". This type of data, where the value is not itself a quantity or measurement, but it indicates some sort of inherent order or heirarchy, is known as ordinal data.\nQuantitative Data\nNow let's turn our attention to the features that indicate some kind of quantity or measurement.\nDiscrete Data\nGalton's observations include the number of children in each family. This is a discrete quantative data value - it's something we count rather than measure. You can't, for example, have 2.33 children!\nContinuous Data\nThe data set also includes height values for father, mother, midparentHeight, and childHeight. These are measurements along a scale, and as such they're described as continuous quantative data values that we measure rather than count.\nSample vs Population\nGalton's dataset includes 933 observations. It's safe to assume that this does not account for every person in the world, or even just the UK, in 1886 when the data was collected. In other words, Galton's data represents a sample of a larger population. 
It's worth pausing to think about this for a few seconds, because there are some implications for any conclusions we might draw from Galton's observations.\nThink about how many times you see a claim such as \"one in four Americans enjoys watching football\". How do the people who make this claim know that this is a fact? Have they asked everyone in the US about their football-watching habits? Well, that would be a bit impractical, so what usually happens is that a study is conducted on a subset of the population, and (assuming that this is a well-conducted study), that subset will be a representative sample of the population as a whole. However, if the survey was conducted at the stadium where the Super Bowl is being played, then the results are likely to be skewed because of a bias in the study participants.\nSimilarly, we might look at Galton's data and assume that the heights of the people included in the study bear some relation to the heights of the general population in 1886; but if Galton specifically selected abnormally tall people for his study, then this assumption would be unfounded.\nWhen we deal with statistics, we usually work with a sample of the data rather than a full population. As you'll see later, this affects the way we use notation to indicate statistical measures; and in some cases we calculate statistics from a sample differently than from a full population to account for bias in the sample.\nVisualizing Data\nData visualization is one of the key ways in which we can examine data and get insights from it. If a picture is worth a thousand words, then a good graph or chart is worth any number of tables of data.\nLet's examine some common kinds of data visualization:\nBar Charts\nA bar chart is a good way to compare numeric quantities or counts across categories. For example, in the Galton dataset, you might want to compare the number of female and male children.\nHere's some Python code to create a bar chart showing the number of children of each gender.",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Create a data frame of gender counts\ngenderCounts = df['gender'].value_counts()\n\n# Plot a bar chart\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\ngenderCounts.plot(kind='bar', title='Gender Counts')\nplt.xlabel('Gender')\nplt.ylabel('Number of Children')\nplt.show()",
"From this chart, you can see that there are slightly more male children than female children; but the data is reasonably evenly split between the two genders.\nBar charts are typically used to compare categorical (qualitative) data values; but in some cases you might treat a discrete quantitative data value as a category. For example, in the Galton dataset the number of children in each family could be used as a way to categorize families. We might want to see how many familes have one child, compared to how many have two children, etc.\nHere's some Python code to create a bar chart showing family counts based on the number of children in the family.",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Create a data frame of child counts\n# there's a row for each child, so we need to filter to one row per family to avoid over-counting\nfamilies = df[['family', 'children']].drop_duplicates()\n# Now count number of rows for each 'children' value, and sort by the index (children)\nchildCounts = families['children'].value_counts().sort_index()\n\n\n# Plot a bar chart\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nchildCounts.plot(kind='bar', title='Family Size')\nplt.xlabel('Number of Children')\nplt.ylabel('Families')\nplt.show()\n",
"Note that the code sorts the data so that the categories on the x axis are in order - attention to this sort of detail can make your charts easier to read. In this case, we can see that the most common number of children per family is 1, followed by 5 and 6. Comparatively fewer families have more than 8 children.\nHistograms\nBar charts work well for comparing categorical or discrete numeric values. When you need to compare continuous quantitative values, you can use a similar style of chart called a histogram. Histograms differ from bar charts in that they group the continuous values into ranges or bins - so the chart doesn't show a bar for each individual value, but rather a bar for each range of binned values. Because these bins represent continuous data rather than discrete data, the bars aren't separated by a gap. Typically, a histogram is used to show the relative frequency of values in the dataset.\nHere's some Python code to create a histogram of the father values in the Galton dataset, which record the father's height:",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Plot a histogram of midparentHeight\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\ndf['father'].plot.hist(title='Father Heights')\nplt.xlabel('Height')\nplt.ylabel('Frequency')\nplt.show()",
"The histogram shows that the most frequently occuring heights tend to be in the mid-range. There are fewer extremely short or exteremely tall fathers.\nIn the histogram above, the number of bins (and their corresponding ranges, or bin widths) was determined automatically by Python. In some cases you may want to explicitly control the number of bins, as this can help you see detail in the distribution of data values that otherwise you might miss. The following code creates a histogram for the same father's height values, but explicitly distributes them over 20 bins (19 are specified, and Python adds one):",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Plot a histogram of midparentHeight\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\ndf['father'].plot.hist(title='Father Heights', bins=19)\nplt.xlabel('Height')\nplt.ylabel('Frequency')\nplt.show()",
"We can still see that the most common heights are in the middle, but there's a notable drop in the number of fathers with a height between 67.5 and 70.\nPie Charts\nPie charts are another way to compare relative quantities of categories. They're not commonly used by data scientists, but they can be useful in many business contexts with manageable numbers of categories because they not only make it easy to compare relative quantities by categories; they also show those quantities as a proportion of the whole set of data.\nHere's some Python to show the gender counts as a pie chart:",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Create a data frame of gender counts\ngenderCounts = df['gender'].value_counts()\n\n# Plot a pie chart\n%matplotlib inline\nfrom matplotlib import pyplot as plt\ngenderCounts.plot(kind='pie', title='Gender Counts', figsize=(6,6))\nplt.legend()\nplt.show()",
"Note that the chart includes a legend to make it clear what category each colored area in the pie chart represents. From this chart, you can see that males make up slightly more than half of the overall number of children; with females accounting for the rest.\nScatter Plots\nOften you'll want to compare quantative values. This can be especially useful in data science scenarios where you are exploring data prior to building a machine learning model, as it can help identify apparent relationships between numeric features. Scatter plots can also help identify potential outliers - values that are significantly outside of the normal range of values.\nThe following Python code creates a scatter plot that plots the intersection points for midparentHeight on the x axis, and childHeight on the y axis:",
"import statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n# Create a data frame of heights (father vs child)\nparentHeights = df[['midparentHeight', 'childHeight']]\n\n# Plot a scatter plot chart\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nparentHeights.plot(kind='scatter', title='Parent vs Child Heights', x='midparentHeight', y='childHeight')\nplt.xlabel('Avg Parent Height')\nplt.ylabel('Child Height')\nplt.show()",
"In a scatter plot, each dot marks the intersection point of the two values being plotted. In this chart, most of the heights are clustered around the center; which indicates that most parents and children tend to have a height that is somewhere in the middle of the range of heights observed. At the bottom left, there's a small cluster of dots that show some parents from the shorter end of the range who have children that are also shorter than their peers. At the top right, there are a few extremely tall parents who have extremely tall children. It's also interesting to note that the top left and bottom right of the chart are empty - there aren't any cases of extremely short parents with extremely tall children or vice-versa.\nLine Charts\nLine charts are a great way to see changes in values along a series - usually (but not always) based on a time period. The Galton dataset doesn't include any data of this type, so we'll use a different dataset that includes observations of sea surface temperature between 1950 and 2010 for this example:",
"import statsmodels.api as sm\n\ndf = sm.datasets.elnino.load_pandas().data\n\ndf['AVGSEATEMP'] = df.mean(1)\n\n# Plot a line chart\n%matplotlib inline\nfrom matplotlib import pyplot as plt\ndf.plot(title='Average Sea Temperature', x='YEAR', y='AVGSEATEMP')\nplt.xlabel('Year')\nplt.ylabel('Average Sea Temp')\nplt.show()",
"The line chart shows the temperature trend from left to right for the period of observations. From this chart, you can see that the average temperature fluctuates from year to year, but the general trend shows an increase."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fluffy-hamster/A-Beginners-Guide-to-Python
|
A Beginners Guide to Python/26. Design Decisions, How to Build Chess Game.ipynb
|
mit
|
[
"Features of Good Design\nHi guys, this lecture is a bit different, today we are mostly glossing over Python itself and instead we are going to be talk about software development. \nAt this moment in time you are writing tiny programs. This is all good experience but you will quickly learn that as programs that larger the harder it is to get working. For example, writing two-hundred one page letters to your grandma over the course of a year is much easier than trying to write a two-hundred page novel. The main reason being is that the bunch of letters are mostly independent bits of writing, whereas a novel requires hundreds of pages to 'flow' together.\nI’d argue software development is a bit like that too; writing small programs is very different from writing code for large complex systems. For small problems if things go wrong you can just start 'from scratch' whereas for large systems starting from scratch is often not possible (and/or would take years), and so any mistakes made in the design of the system are likely 'warts' you are going to have to live with. As codebases grow concepts like readability become increasingly important. \nToday’s lecture is intended as an introduction to some of skills you will likely need once you try to develop a substantial programs. The TLDR version; figuring out a good design ‘off the bat’ can potentially save you hours upon hours of work later on down the line.\nToday I will be showing you an example of how we might code up a game of chess. But crucially I’m going to skip over a lot of the ‘low-level’ stuff and instead try to provide a ‘high-level’ sketch for what such a program may look like. If you have the time it may be worthwhile to quickly skim back over the intuition for OOP lecture. \nThere is a saying in England:\n\n“[you] can’t see the forest for the trees”*. \n\nIt means that if you examine something closely (i.e. each tree) you might miss the bigger picture (i.e. that all the trees come together to make a forest). Most of this lecture series has been talking about trees, but today we are talking about the forest. \nWhat is good design?\nSo before we start looking at a chess game lets say a few words about design; in particular, what counts as good program design?\nSimplicity\nSimple is better than complex.\nComplex is better than complicated.\n[...]\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\n\nAs always, Tim Peter's ‘Zen of Python’ has a thing or two to say about design, the lines highlighted here place an emphasis on simplicity and clarity of expression, these concepts are core to the entire language and we would do well to remember that; if things start to get complicated maybe it would be prudent to take a step back and reconsider our approach.\nPerformance\nAt first glance we might think performance is a 'low-level' consideration. You write something and then find ways to save a byte of memory here or there. But considering performance merely as ‘fine-tuning’ would be a crucial mistake. \nThose of you that read my 'joy of fast cars' lecture would have seen a few examples of such low-level 'fine tuning', in one example I showed how we could optimize a call to range such that we could search for prime numbers faster. 
And for what it's worth, this tinkering did pay off - significantly so, in fact.\nHowever, that lecture also contained a ‘high-level’ idea as well; our tinkering with the range function was, although faster, still blindly searching for a needle in a haystack. We then stepped back and wondered if there was a better way to do it and indeed there was; generating primes is better than blindly searching for them.\nThe lesson here is that good design choices, even if executed poorly, can easily out-perform bad ideas implemented well. If you want to know more about this, please do some reading on ‘algorithmic complexity’ and Big-O notation (we won't cover this stuff in this course).\nIn short, good design/algorithm choices tend to be very performant once we scale up the problem to huge data sets, and this is why it's worth taking the time to come up with a good idea. \nReadable\nThroughout this lecture series I have highlighted readability numerous times so I'm going to keep this section super short:\n\nReadability Counts!\n\nModular\nModularity is one way to deal with complexity. In short, this is where we try to chop everything up into neat little boxes, where each little box has just one job to do. \nAn alternative (and terrible) design would be to have one big box that does everything. The problem with such an approach is that you end up with a complex spider web, and you fear changing one small part because you don't know how that small change may affect the entire system. \nGeneralisable / Reusable\nWriting good code once and then reusing it is often better than starting from scratch each time. The way to make code reusable is to generalise it to solve a variety of problems. This concept is probably best understood by example. \nSuppose we were making a function that counted 1-to-100. What can we use this for other than its intended purpose? \nNow suppose we write a function that counts from n-to-m. This code works for the current problem but because its design is generalised we may be able to reuse this code at a later date, in this project or the next. \nIf code is reusable, then that is often a good sign that it is modular as well.",
"# One use, \"throw away\" code:\ndef one_to_one_hundred():\n for i in range(1, 101):\n print (i)\n \n# Multi use, 'generalised' code:\ndef n_to_x(n, m):\n for i in range(n, m+1):\n print(i)",
"Beauty\n\nBeautiful is better than ugly.\nTim Peters, ‘Zen of Python’\n\nBeauty!? At first glance making beauty a consideration my sound like a strange or 'out-of-place' concept. But if you take a broad view of human achievement you’ll find that we mere mortals make things, and then make those things beautiful. Just think of something like a sword, it is an object made with the most brutal of applications in mind and yet we still decided that even this was an object worthy of being made beautiful. \nAnother discipline where discussions of aesthetics may initially seem out-of-place is mathematics, and yet, there are no shortage of mathematicians throughout the ages discussing the aesthetic qualities of field and moreover there is some experimental evidence to suggest mathematicians genuinely see beauty in formula's in the same way the rest of us see beauty in music or art. For some, beauty truly is the joy of mathematics.\n\n\"Why are numbers beautiful? It's like asking why is Beethoven's Ninth Symphony beautiful. If you don't see why, someone can't tell you.\" -Paul Erdos\n\nI think it would be wrong to dismiss beauty as a trivial aspect of mathematics or programming for that matter. There truly is a joy in experiencing good code, you just need to learn to appreciate it, I guess. \nBuilding a Chess Program...\nOkay so the above discussion highlighted a few aspirations and considerations for our chess project. Let’s start by making a list of all the things we need to do:\n\nRepresent the board (8x8 grid, alternating black/white squares)\nDefine piece movement, capture rules.\nDefine all other rules (e.g. promotion, castling, checkmate, 3-fold repetition, etc)\nPeripherals (e.g. clocks, GUI's, multiplayer features like match-making, etc)\n\nThats a lot of stuff to do right there, today's lecture will mostly deal with points one and two.\nBuilding the board\nHow should we represent a board in Python? This question mostly just boils down to what data-type we should use. Right now, I have two candidates in mind; strings and lists. \nWe could of course jump ‘straight-in’, pick one the data types at random and see what happens but, as alluded to in the above discussions such a method is both silly and wasteful. A better use of time would be to carefully consider our options BEFORE we write even a single line of code.\nThe Board as a string\nOkay, so let’s consider using a string for the board. What might that look like?\nWell, the letters \"QKPN\" could represent the pieces (lower-case for white), and we could use the new-line character (\"\\n\") to separate the rows. Something like this:",
"print(\"RNBQKBNR\\nPPPPPPPP\\n-x-x-x-x\\nx-x-x-x-\\n-x-x-x-x\\npppppppp\\nrnbkqbnr\") # 'x' and '-' represent black and white squares.",
"Actually, we can do even better than this, Python strings support unicode and there are unicode characters for chessmen. So now our string implementation even comes with some basic graphics:",
"print(\"♖♘♗♔♕♗♘♖\\n♙♙♙♙♙♙♙♙\\n□ ■ □ ■ □ ■ □ ■\\n■ □ ■ □ ■ □ ■ □\\n□ ■ □ ■ □ ■ □ ■\\n♟♟♟♟♟♟♟♟\\n♜♞♝♛♚♝♞♜\")",
"Okay so the board as a string seems possible, but are there any drawbacks of an implementation like this? Well, I can think of two. Firstly, notice that because these unicode characters are a bigger than normal letters we are going to need a new way to denote black and white squares. You can see from above I tried to use a combination of spaces and ‘□■’ characters but even then the formating is a bit off. In short, it looks like trying to get the board to look nice is going to be both tedious and fiddly. \nWhat is the second problem?\nYou remember me mentioning that strings are an immutable data-type, which means that every time we want to change the board we have to make a new one. Not only would this be computationally inefficient it may also be a bit tricky to actually change the board.\nFor example, lets see what sort of work we would have to to make the move 1.Nf3:",
"def make_move(move):\n \"\"\"Takes a string a returns a new string with the specified move\"\"\"\n # Code here\n pass\n\n# Our new function would have to take the original string and return the new string (both below)...\noriginal_string = \"RNBQKBNR\\nPPPPPPPP\\n-x-x-x-x\\nx-x-x-x-\\n-x-x-x-x\\npppppppp\\nrnbkqbnr\"\nnew_string = \"RNBQKBNR\\nPPPPPPPP\\n-x-x-x-x\\nx-x-x-x-\\n-x-x-n-x\\npppppppp\\nrnbkqb-r\"\nprint(new_string)",
"Now, to be clear, it is certainly possible to make the “make_move” function work, but it does seem to have several small moving parts and therefore probably lots of interesting ways to go wrong. And then lets think about the more complex functions; if movement seems a bit tricky, how easy do we think defining checkmate is going to be?\nBasically, using strings seems doable but complicated. And as Tim Peters says, simple and complex are both better than complicated. Alright, on that note, let’s see if lists seem more straight-forward.\nThe Board as a List\n [[00, 01, 02],\n [10, 11, 12],\n [20, 21, 22]]\n\nThe above is a nested list but we have put each of the sublists on a new line to make it easier to visualise how such a structure can work like a game board. The numbers represent the (x, y) coordinate of that 'square'. And remember that lists can contain strings as well, so this option doesn't stop us from using those pretty graphics we saw earlier. \nCompared to the string implementation, the 1.Nf3 move should be somewhat straightforward:\ncurrent_position_knight = (a, b) # where (a, b) are coordinates. \nnext_position_knight = (a2, b2)\n\nBoard[a][b] = {empty} \nBoard[a2][b2] = {Knight}\n\nAt first glance, this seems considerably easier than a messing arround mutating strings. \nThere is also another possible advantage to lists as well, and that is they can store a variety of data-types. I haven't spoken about classes in this lecture series and I'm not going to into detail (classes are not suitable for a beginner class, in my opinion) But, I’ll will very briefly introduce you to the concept and how we could use it here.*\nIn short, the brainwave is that if we use lists we could litterally build Knight objects, King Objects, etc and they can be placed inside a list. We can't do that with strings.\nDefining Chessmen\nBasically Python makes it possible to create your own objects with their own methods. Using classes it is literally possible make a knight and put him onto a game board. Below I’ve provided a very rough sketch of what such a class could look like.\nI would like to stress that this code is intended as a 'high-level' sketch and by that I mean lots of the small details are missing. Notice that the code for a Queen, King, Pawn, etc could all be written in the same way.",
"class Knight (object):\n \n def __init__ (self, current_square, colour):\n self.colour = colour\n self.current_square = current_square\n \n @staticmethod\n def is_legal_move(square1, square2):\n \"\"\"\n Checks if moving from square1 to square2 is a legal move for the Knight.\n Returns True/False\n \"\"\"\n # Code goes here... i.e we calculate all 'game legal' squares a Knight could reach from position X. \n pass\n \n def make_move(self, new_square):\n # since we don't want to make illegal moves, we check the intended move is legal first.\n if Knight.is_legal_move(self.current_square, new_square):\n self.current_square = new_square # <= moves the knight!\n else:\n return \"Invalid Move!\"\n \n # Other methods would go here. \n\n# Lets make a White Knight. Let's call him 'Dave'. \nDave = Knight((0,0), \"White\")\n\n# Once the knight is made, we can move it using the move_to method:\nDave.make_move((3,3))",
"Defining Piece movement\nAlright, onto the next problem. \"How are we going to make the peices move?\" And once again a smart choice here will make things so much easier than it otherwise could be.\nOne simple approach is to write a function for each piece, like so:",
"# Note the following code doesn't work, it is for demonstration purposes only!\n\ndef all_bishop_moves(position, board):\n \"\"\" When given a starting square, returns all squares the bishop can move to\"\"\"\n pass\n \ndef all_rook_moves(position, board):\n \"\"\" When given a starting square, returns all squares the rook can move to\"\"\"\n pass\n \ndef all_queen_moves(position, board):\n \"\"\"When given a starting square, returns all squares the queen can move to\"\"\"\n pass",
"At first glance this code seems pretty good, but there are a few drawbacks. Firstly it looks like we are going to be repeating ourselves a lot; queen movement for example is just a copy & paste of the rook + bishop. The king function is likely a copy & paste of queen but where we change the distance to 1.\nAnd by the way guys, repeating oneself is NOT quite the same as reusing code! \nWhat we would really like to do here is generalise the problem as much as we can. And good technique for doing that is to think of the next project we might want to implement. For example, let’s suppose after building my chess game I want to support Capablanca Chess ?\n<img src=\"http://hgm.nubati.net/rules/Capablanca.png\" style=\"width:200px;height:150px;\" ALIGN=\"centre\">\nCapablanca chess is played on a 10x8 board and it has two new pieces; the ‘archbishop’ moves like a bishop combined with a knight and a ‘chancellor’ moves like a rook and a knight.\nSo, what should we do here? Well, I think the first thing we should do is define movement of pieces WITHOUT referencing a board. If we don’t reference a board that means we should be able to handle boards of many different sizes. Secondly, if we define pieces in terms of combining general patterns (e.g Queen = diagonal + orthogonal movement) then defining new pieces will probably be less the five lines of code in many cases. \nLet’s examine what that might look like:",
"# Note the following code doesn't work, it is for demonstration purposes only!\n\ndef diagonal_movement(position, directions, distance =1):\n \"\"\"\n Returns all diagonal squares in all valid directions N distance from the origin \n \"\"\"\n # code here\n pass\n\ndef othagonal_movement(position, directions, distance=1):\n \"\"\"\n # doctests showing example useage:\n \n >>> othagonal_movement( (2, 2), directions=[left], distance= 2)\n [(2,2), (2, 1), (2, 0)]\n \n >>> othagonal_movement( (2, 2), directions=[right], distance= 4)\n [(2, 2), (2, 3), (2, 4), (2, 5), (2, 6)]\n \n >>> othagonal_movement( (2, 2), directions=[right, left], distance= 1)\n [(2, 1), (2,2), (2, 3)]\n \"\"\"\n # code here\n pass",
"Notice here that we have defined movement without reference to a board, our code here simply takes an (x,y) coordinate in space and will keep returning valid squares until it reaches the limit set by distance. The documentation in row movement explains the idea.\nWith this generalisation, we should be able to handle different boards AND we can define pieces with just a few lines of code like so:",
"def queen_movement(position, limit):\n \n o = othagonal_movement(position, direction=\"all\", distance=limit)\n d = diagonal_movement(position, direction=\"all\", distance=limit)\n \n return o + d\n\ndef bishop_movement(position, limit):\n return diagonal_movement(position, direction=\"all\", distance=limit)",
"In the case of an 8x8 board 'limit' would be set to 8. If a piece is only allowed to move 1 square forward (regardless of board shape/size) we can easily model that by setting the limit to 1. \nnotice that with this design some peices can be defined very simply. For example, we can define a king in the following way:",
"def king_movement(position, limit):\n return queen_movement(position, limit=1)",
"But lets take a step back for a moment, what are actually doing here?\nWhen naming variables, a good practice is to state what the code does rather than state what it is used for. The reason this can be a code idea is that renaming a function/variable to something general can make code more reusable and modular. The process of thinking about a name may help you spot ideas and patterns that you may have otherwise missed. So, let me ask the question again, what are we actually doing here?\nSuppose I give you a table of data, and I want you so sum up all the columns. For example:\n[0, 1, 2]\n[5, 6, 7]\n\nreturns [5, 7, 9]\n\nHow can we solve this problem? Well, notice that our othagonal_movement function can be used to solve this problem:",
"def sum_column(table, column):\n num_rows = len(table[0])\n \n position = (0, column)\n points = othagonal_movement(position, direction=[\"down\"], limit=num_rows)\n \n total = 0\n for p in points:\n x, y = p[0][1]\n number = table[x][y]\n total += number\n return total",
"In short, the point is that although our main goal is to write code for a chess game it turns out some of the code we write could be useful in other contexts too. With good design, these sorts of 'coincidences' happen all the time.\nNotice that it was only really possible to reuse othagonal_movement for a different purpose because that function doesn't actually care about chess. It doesn't need an 8x8 board to work, and so on.\nConclusion\nSo today we moved away from the nitty-gritty and focused on the ‘big picture’. We looked at a few ways to represent a chess program in Python and although my code today was very ‘loose’ hopefully you guys followed along and understood the main lesson I was trying to teach; good, thought-out design matters; it makes coding faster, less frustrating, and also more expressive.\nAs we move toward the final project you would do well to remember some of these principles. Alright that’s it for this lecture, no homework this week. See ya next time!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/ml_ops/stage3/get_started_with_airflow_and_vertex_pipelines.ipynb
|
apache-2.0
|
[
"# Copyright 2022 Google LLC\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"E2E ML on GCP: MLOps stage 3 : formalization: get started with Apache Airflow and Vertex AI Pipelines\n<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_airflow_and_vertex_pipelines.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_airflow_and_vertex_pipelines.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\\\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage3/get_started_with_airflow_and_vertex_pipelines.ipynb\">\n <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 3 : formalization: get started with Apache Airflow and Vertex AI Pipelines.\nDataset\nThe dataset used for this tutorial is Condensed Game Data, which comes from the Apache Beam examples. The version used in this tutorial is stored in a Cloud Storage bucket.\nObjective\nIn this tutorial, you learn how to use Apache Airflow with Vertex AI Pipelines.\nThis tutorial uses the following Google Cloud ML services:\n\nVertex AI Pipelines\nVertex AI Dataset, Model and Endpoint resources\nBigQuery\nCloud Composer\n\nThe steps performed include:\n\nCreate Cloud Composer environment.\nUpload Airflow DAG to Composer environment that performs data processing -- i.e., creates a BigQuery table from a CSV file.\nCreate a Vertex AI Pipeline that triggers the Airflow DAG.\nExecute the Vertex AI Pipeline.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI pricing and Cloud Storage pricing and use the Pricing Calculator to generate a cost estimate based on your projected usage.\nInstallations\nInstall the following packages for executing this MLOps notebook.",
"import os\n\n# The Vertex AI Workbench Notebook product has specific requirements\nIS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\")\nIS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n \"/opt/deeplearning/metadata/env_version\"\n)\n\n# Vertex AI Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_WORKBENCH_NOTEBOOK:\n USER_FLAG = \"--user\"\n \n! pip3 install {USER_FLAG} google-cloud-aiplatform==1.0.0 --upgrade -q\n! pip3 install {USER_FLAG} kfp google-cloud-pipeline-components==0.1.1 --upgrade -q\n! pip3 install {USER_FLAG} apache-airflow[celery]==2.3.2 --constraint \"https://raw.githubusercontent.com/apache/airflow/constraints-2.3.2/constraints-3.7.txt\" \\\n apache-airflow-providers-google --upgrade -q",
"Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so that it can find the packages.",
"import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Check that you have correctly installed the packages. The KFP SDK version should be >=1.6:",
"!python3 -c \"import kfp; print('KFP SDK version: {}'.format(kfp.__version__))\"\n!python3 -c \"import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))\"",
"Set up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI and Composer API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions.",
"REGION = \"[your-region]\" # @param {type: \"string\"}\n\nif REGION == \"[your-region]\":\n REGION = \"us-central1\"",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Vertex AI Notebook Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your local environment.\n\n\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Vertex AI Workbench, then don't execute this code\nIS_COLAB = \"google.colab\" in sys.modules\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n \"DL_ANACONDA_HOME\"\n):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\nBUCKET_URI = f\"gs://{BUCKET_NAME}\"\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP\n BUCKET_URI = f\"gs://{BUCKET_NAME}\"",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_URI",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_URI",
"Service Account\nYou use a service account to create Vertex AI Pipeline jobs. If you do not want to use your project's Compute Engine service account, set SERVICE_ACCOUNT to another service account ID.",
"SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}\n\nif (\n SERVICE_ACCOUNT == \"\"\n or SERVICE_ACCOUNT is None\n or SERVICE_ACCOUNT == \"[your-service-account]\"\n):\n # Get your service account from gcloud\n if not IS_COLAB:\n shell_output = !gcloud auth list 2>/dev/null\n SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n\n if IS_COLAB:\n shell_output = ! gcloud projects describe $PROJECT_ID\n project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n\n print(\"Service Account:\", SERVICE_ACCOUNT)",
"Set service account access for Vertex AI Pipelines\nRun the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step. You only need to run this step once per service account.",
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n\n! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI",
"Import libraries",
"import kfp\nfrom google.cloud import aiplatform\nfrom kfp import dsl\nfrom kfp.v2 import compiler\nfrom kfp.v2.dsl import Artifact, Output, component",
"Initialize Vertex AI SDK for Python\nInitialize the Vertex AI SDK for Python for your project and corresponding bucket.",
"aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)",
"Introduction: Trigger Airflow DAG in Cloud Composer from a Vertex AI Pipeline\nApache Airflow is a popular choice for data pipelining in general. However, arguably not a good choice to run Machine learning pipelines due to lack of ML metadata tracking, artifact lineage, tracking ML metrics across metrics etc. Vertex AI Pipelines solves this problem and automates, monitors, and governs your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata.\nCloud Composer is fully managed workflow orchestration service built on Apache Airflow.\nLearn more about Cloud Composer.\nIn this tutorial, we will show you how you can trigger a data pipeline i.e. Airflow DAG on Cloud Composer from a ML pipeline running on Vertex AI Pipelines.\n\nCreate Cloud Composer Environment\nIn this tutorial, you create a bare minimum Cloud Composer environment. \nTo trigger an Airflow DAG from Vertex Pipeline, we will using Airflow web server REST API. By default, the API authentication feature is disabled in Airflow 1.10.11 and above which would deny all requests made to Airflow web server. To trigger DAG, you enable this feature. To enable the API authentication feature you override auth_backend configuration in Cloud Composer environment to airflow.api.auth.backend.default.\nNOTE: Cloud Composer environment creation may take up to 30 min. Grab your favorite beverage until then.\nLearn more about Creating Cloud Composer environment.",
"COMPOSER_ENV_NAME = \"test-composer-env\"\nZONE = f\"{REGION}-f\"\nMACHINE_TYPE = \"n1-standard-2\"\n\n! gcloud beta composer environments create $COMPOSER_ENV_NAME \\\n --location $REGION \\\n --zone $ZONE \\\n --machine-type $MACHINE_TYPE \\\n --image-version composer-latest-airflow-1.10.15 \\\n --airflow-configs=api-auth_backend=airflow.api.auth.backend.default",
"Get Composer Environment configuration\nYou get the Composer environment configuration such as webserver URL and client ID to use in the Vertex AI Pipeline using the script get_composer_client_id.py",
"%%writefile get_composer_config.py\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Get the client ID associated with a Cloud Composer environment.\"\"\"\n\nimport argparse\n\n\ndef get_client_id(project_id, location, composer_environment):\n # [START composer_get_environment_client_id]\n import google.auth\n import google.auth.transport.requests\n import requests\n import six.moves.urllib.parse\n\n # Authenticate with Google Cloud.\n # See: https://cloud.google.com/docs/authentication/getting-started\n credentials, _ = google.auth.default(\n scopes=['https://www.googleapis.com/auth/cloud-platform'])\n authed_session = google.auth.transport.requests.AuthorizedSession(\n credentials)\n\n # project_id = 'YOUR_PROJECT_ID'\n # location = 'us-central1'\n # composer_environment = 'YOUR_COMPOSER_ENVIRONMENT_NAME'\n\n environment_url = (\n 'https://composer.googleapis.com/v1beta1/projects/{}/locations/{}'\n '/environments/{}').format(project_id, location, composer_environment)\n composer_response = authed_session.request('GET', environment_url)\n environment_data = composer_response.json()\n airflow_uri = environment_data['config']['airflowUri']\n print(airflow_uri)\n dag_gcs_prefix = environment_data['config']['dagGcsPrefix']\n print(dag_gcs_prefix)\n\n # The Composer environment response does not include the IAP client ID.\n # Make a second, unauthenticated HTTP request to the web server to get the\n # redirect URI.\n redirect_response = requests.get(airflow_uri, allow_redirects=False)\n redirect_location = redirect_response.headers['location']\n\n # Extract the client_id query parameter from the redirect.\n parsed = six.moves.urllib.parse.urlparse(redirect_location)\n query_string = six.moves.urllib.parse.parse_qs(parsed.query)\n print(query_string['client_id'][0])\n # [END composer_get_environment_client_id]\n\n\n# Usage: python get_client_id.py your_project_id your_region your_environment_name\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(\n description=__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter)\n parser.add_argument('project_id', help='Your Project ID.')\n parser.add_argument(\n 'location', help='Region of the Cloud Composer environment.')\n parser.add_argument(\n 'composer_environment', help='Name of the Cloud Composer environment.')\n\n args = parser.parse_args()\n get_client_id(\n args.project_id, args.location, args.composer_environment)\n\n\n# This code is modified version of https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/composer/rest/get_client_id.py\n\nshell_output=! python3 get_composer_config.py $PROJECT_ID $REGION $COMPOSER_ENV_NAME\nCOMPOSER_WEB_URI = shell_output[0]\nCOMPOSER_DAG_GCS = shell_output[1]\nCOMPOSER_CLIENT_ID = shell_output[2]\n\nprint(f\"COMPOSER_WEB_URI = {COMPOSER_WEB_URI}\")\nprint(f\"COMPOSER_DAG_GCS = {COMPOSER_DAG_GCS}\")\nprint(f\"COMPOSER_CLIENT_ID = {COMPOSER_CLIENT_ID}\")",
"Display the Airflow webserver UI\nYou can navigate to Airflow webserver by going to this URL.",
"COMPOSER_WEB_URI",
"Upload DAG to Cloud Composer environment\nYou have a sample data processing DAG data_orchestration_bq_example_dag.py that reads a CSV file from GCS bucket and writes to BigQuery. We will add this file to the GCS bucket configure for the Composer environment that Airflow watches.",
"COMPOSER_DAG_NAME = \"dag_gcs_to_bq_orch\"\nCOMPOSER_DAG_FILENAME = \"data_orchestration_bq_example_dag.py\"\n\n%%writefile $COMPOSER_DAG_FILENAME\n\n\"\"\"An example Composer workflow integrating GCS and BigQuery.\n\nA CSV is read from a GCS bucket to a BigQuery table; a query is made, and the\nresult is written back to a different BigQuery table within a new dataset.\n\"\"\"\n\nfrom datetime import datetime, timedelta\nfrom airflow import DAG\nfrom airflow.contrib.operators.bigquery_operator import BigQueryOperator\nfrom airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator\nfrom airflow.operators.bash_operator import BashOperator\n\nYESTERDAY = datetime.combine(\n datetime.today() - timedelta(days=1), datetime.min.time())\nBQ_DATASET_NAME = 'bq_demos'\n\ndefault_args = {\n 'owner': 'airflow',\n 'depends_on_past': False,\n 'start_date': YESTERDAY,\n 'email_on_failure': False,\n 'email_on_retry': False,\n 'retries': 1,\n 'retry_delay': timedelta(minutes=5),\n}\n\n# Solution: pass a schedule_interval argument to DAG instantiation.\nwith DAG('dag_gcs_to_bq_orch', default_args=default_args,\n schedule_interval=None) as dag:\n create_bq_dataset_if_not_exist = \"\"\"\n bq ls {0}\n if [ $? -ne 0 ]; then\n bq mk {0}\n fi\n \"\"\".format(BQ_DATASET_NAME)\n\n # Create destination dataset.\n t1 = BashOperator(\n task_id='create_destination_dataset',\n bash_command=create_bq_dataset_if_not_exist,\n dag=dag)\n\n # Create a bigquery table from a CSV file located in a GCS bucket\n # (gs://example-datasets/game_data_condensed.csv).\n # Store it in our dataset.\n t2 = GoogleCloudStorageToBigQueryOperator(\n task_id='gcs_to_bq',\n bucket='example-datasets',\n source_objects=['game_data_condensed.csv'],\n destination_project_dataset_table='{0}.composer_game_data_table'\n .format(BQ_DATASET_NAME),\n schema_fields=[\n {'name': 'name', 'type': 'string', 'mode': 'nullable'},\n {'name': 'team', 'type': 'string', 'mode': 'nullable'},\n {'name': 'total_score', 'type': 'integer', 'mode': 'nullable'},\n {'name': 'timestamp', 'type': 'integer', 'mode': 'nullable'},\n {'name': 'window_start', 'type': 'string', 'mode': 'nullable'},\n ],\n write_disposition='WRITE_TRUNCATE')\n\n # Run example query (http://shortn/_BdF1UTEYOb) and save result to the\n # destination table.\n t3 = BigQueryOperator(\n task_id='bq_example_query',\n bql=f\"\"\"\n SELECT\n name, team, total_score\n FROM\n {BQ_DATASET_NAME}.composer_game_data_table\n WHERE total_score > 15\n LIMIT 100;\n \"\"\",\n destination_dataset_table='{0}.gcp_example_query_result'\n .format(BQ_DATASET_NAME),\n write_disposition='WRITE_TRUNCATE')\n\n t1 >> t2 >> t3\n\n!gsutil cp $COMPOSER_DAG_FILENAME $COMPOSER_DAG_GCS/\n\n!gsutil ls -l $COMPOSER_DAG_GCS/$COMPOSER_DAG_FILENAME",
"View the DAG in the Composer UI\nYou should see the DAG in your Airflow webserver -- specified by COMPOSER_WEB_URI.\nNote: There may be a momentary delay before your DAG is loaded and appears in the UI.",
"# view Composer UI at this URL\nCOMPOSER_WEB_URI",
"Create a Python function based component to trigger Airflow DAG\nUsing the KFP SDK, you can create components based on Python functions. The component takes an Airflow DAG name dag_name a string as input and returns response from Airflow web server as an Artifact that contains Airflow DAG run information. The component makes a request to Airflow REST API of your Cloud Composer environment. Airflow processes this request and runs a DAG. The DAG outputs information about the change that is logged as artifact.\nUnderstanding the component structure:\n\nThe @component decorator compiles this function to a component when the pipeline is run. You'll use this anytime you write a custom component.\nThe base_image parameter specifies the container image this component will use.\nThe output_component_file parameter is optional, and specifies the yaml file to write the compiled component to.\nThe packages_to_install parameter installs required python packages in the container to run the component",
"@component(\n base_image=\"gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.3\",\n output_component_file=\"composer-trigger-dag-component.yaml\",\n packages_to_install=[\"requests\"],\n)\ndef trigger_airflow_dag(\n dag_name: str,\n composer_client_id: str,\n composer_webserver_id: str,\n response: Output[Artifact]\n):\n # [START composer_trigger]\n\n import json\n import os\n\n import requests\n from google.auth.transport.requests import Request\n from google.oauth2 import id_token\n\n\n IAM_SCOPE = 'https://www.googleapis.com/auth/iam'\n OAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'\n \n data = '{\"replace_microseconds\":\"false\"}'\n context = None\n\n \"\"\"Makes a POST request to the Composer DAG Trigger API\n\n When called via Google Cloud Functions (GCF),\n data and context are Background function parameters.\n\n For more info, refer to\n https://cloud.google.com/functions/docs/writing/background#functions_background_parameters-python\n\n To call this function from a Python script, omit the ``context`` argument\n and pass in a non-null value for the ``data`` argument.\n \"\"\"\n\n # Form webserver URL to make REST API calls\n webserver_url = f'{composer_webserver_id}/api/experimental/dags/{dag_name}/dag_runs'\n # print(webserver_url)\n\n # This code is copied from\n # https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/iap/make_iap_request.py\n # START COPIED IAP CODE\n def make_iap_request(url, client_id, method='GET', **kwargs):\n \"\"\"Makes a request to an application protected by Identity-Aware Proxy.\n Args:\n url: The Identity-Aware Proxy-protected URL to fetch.\n client_id: The client ID used by Identity-Aware Proxy.\n method: The request method to use\n ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')\n **kwargs: Any of the parameters defined for the request function:\n https://github.com/requests/requests/blob/master/requests/api.py\n If no timeout is provided, it is set to 90 by default.\n Returns:\n The page body, or raises an exception if the page couldn't be retrieved.\n \"\"\"\n # Set the default timeout, if missing\n if 'timeout' not in kwargs:\n kwargs['timeout'] = 90\n\n # Obtain an OpenID Connect (OIDC) token from metadata server or using service\n # account.\n google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)\n\n # Fetch the Identity-Aware Proxy-protected URL, including an\n # Authorization header containing \"Bearer \" followed by a\n # Google-issued OpenID Connect token for the service account.\n resp = requests.request(\n method, url,\n headers={'Authorization': 'Bearer {}'.format(\n google_open_id_connect_token)}, **kwargs)\n if resp.status_code == 403:\n raise Exception('Service account does not have permission to '\n 'access the IAP-protected application.')\n elif resp.status_code != 200:\n raise Exception(\n 'Bad response from application: {!r} / {!r} / {!r}'.format(\n resp.status_code, resp.headers, resp.text))\n else:\n print(f\"response = {resp.text}\")\n # not executed when testing locally\n if response:\n file_path = os.path.join(response.path)\n os.makedirs(file_path)\n with open(os.path.join(file_path, \"airflow_response.json\"), 'w') as f:\n json.dump(resp.text, f)\n\n # END COPIED IAP CODE\n\n \n # Make a POST request to IAP which then Triggers the DAG\n make_iap_request(\n webserver_url, composer_client_id, method='POST', json={\"conf\": data, \"replace_microseconds\": 'false'})\n \n # [END composer_trigger]",
"Test Triggering Airflow DAG from Notebook\nNext, you can optionally test your component locally. To do so, before running comment out @component decorator from the above definition of trigger_airflow_dag.\nAfterwards, you will need to add the @component decorator back to be used as a pipeline component.",
"try:\n trigger_airflow_dag(\n dag_name=COMPOSER_DAG_NAME,\n composer_client_id=COMPOSER_CLIENT_ID,\n composer_webserver_id=COMPOSER_WEB_URI,\n response=None\n )\nexcept Exception as e:\n print(e)",
"Create a pipeline with the component\nNext, you create a pipeline definition for executing your custom Airflow DAG component.\nPIPELINE_ROOT is the Cloud Storage path where the artifacts created by the pipeline will be written.",
"PATH=%env PATH\n%env PATH={PATH}:/home/jupyter/.local/bin\n\nPIPELINE_ROOT = f\"{BUCKET_URI}/pipeline_root/\"\nprint(PIPELINE_ROOT)\n\n@dsl.pipeline(\n name=\"pipeline-trigger-airflow-dag\",\n description=\"Trigger Airflow DAG from Vertex AI Pipelines\",\n pipeline_root=PIPELINE_ROOT,\n)\n\n# BLAH, don't see params\n# You can change the `text` and `emoji_str` parameters here to update the pipeline output\ndef pipeline():\n data_processing_task_dag_name = COMPOSER_DAG_NAME\n data_processing_task = trigger_airflow_dag(\n dag_name=data_processing_task_dag_name,\n composer_client_id=COMPOSER_CLIENT_ID,\n composer_webserver_id=COMPOSER_WEB_URI\n )",
"Compile and execute the pipeline\nNext, you compile the pipeline and then exeute it.",
"compiler.Compiler().compile(\n pipeline_func=pipeline, package_path=\"pipeline-trigger-airflow-dag.json\"\n)\n\npipeline = aiplatform.PipelineJob(\n display_name=\"airflow_pipeline\",\n template_path=\"pipeline-trigger-airflow-dag.json\",\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\n },\n enable_caching=False\n)\n\npipeline.run()\n\n! rm -f pipeline-trigger-airflow-dag.json",
"Monitor Vertex Pipeline status\nFrom Cloud Console, you can monitor the pipeline run status and view the output artifact\n\nMonitor Airflow DAG run\nGo to Airflow webserver and monitor the status of data processing DAG. Airflow webserver URL is",
"COMPOSER_WEB_URI + '/admin/airflow/tree?dag_id=dag_gcs_to_bq_orch'",
"View the pipeline execution results",
"import tensorflow as tf\n\nPROJECT_NUMBER = pipeline.gca_resource.name.split(\"/\")[1]\nprint(PROJECT_NUMBER)\n\n\ndef print_pipeline_output(job, output_task_name):\n PROJECT_NUMBER = job.gca_resource.name.split(\"/\")[1]\n print(PROJECT_NUMBER)\n\n JOB_ID = job.name\n print(JOB_ID)\n for _ in range(len(job.gca_resource.job_detail.task_details)):\n TASK_ID = job.gca_resource.job_detail.task_details[_].task_id\n EXECUTE_OUTPUT = (\n PIPELINE_ROOT\n + \"/\"\n + PROJECT_NUMBER\n + \"/\"\n + JOB_ID\n + \"/\"\n + output_task_name\n + \"_\"\n + str(TASK_ID)\n + \"/executor_output.json\"\n )\n GCP_RESOURCES = (\n PIPELINE_ROOT\n + \"/\"\n + PROJECT_NUMBER\n + \"/\"\n + JOB_ID\n + \"/\"\n + output_task_name\n + \"_\"\n + str(TASK_ID)\n + \"/gcp_resources\"\n )\n EVAL_METRICS = (\n PIPELINE_ROOT\n + \"/\"\n + PROJECT_NUMBER\n + \"/\"\n + JOB_ID\n + \"/\"\n + output_task_name\n + \"_\"\n + str(TASK_ID)\n + \"/evaluation_metrics\"\n )\n if tf.io.gfile.exists(EXECUTE_OUTPUT):\n ! gsutil cat $EXECUTE_OUTPUT\n return EXECUTE_OUTPUT\n elif tf.io.gfile.exists(GCP_RESOURCES):\n ! gsutil cat $GCP_RESOURCES\n return GCP_RESOURCES\n elif tf.io.gfile.exists(EVAL_METRICS):\n ! gsutil cat $EVAL_METRICS\n return EVAL_METRICS\n\n return None\n\n\nprint_pipeline_output(pipeline, \"tigger-airflow-dag\")",
"Verify the BigQuery dataset\nFinally, verify that the BigQuery dataset was created by the execution of your Airflow DAG.",
"! bq ls | grep bq_demos",
"Cleaning Up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.\n\nCloud Storage bucket\nCloud Composer environment\nBigQuery table",
"delete_bucket = False\n\n# Delete the Cloud Composer environment\n! gcloud beta composer environments delete $COMPOSER_ENV_NAME \\\n --location $REGION \\\n --quiet\n\n# Delete the temporary BigQuery dataset\n! bq rm -r -f $PROJECT_ID:$DATASET_NAME\n\nif delete_bucket or os.getenv(\"IS_TESTING\"):\n ! gsutil rm -rf {BUCKET_URI}\n \n! rm get_composer_config.py data_orchestration_bq_example_dag.py"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ellisonbg/leafletwidget
|
examples/DrawControl.ipynb
|
mit
|
[
"from ipyleaflet import (\n Map,\n Marker,\n TileLayer, ImageOverlay,\n Polyline, Polygon, Rectangle, Circle, CircleMarker,\n GeoJSON,\n DrawControl\n)\n\nfrom traitlets import link\n\ncenter = [34.6252978589571, -77.34580993652344]\nzoom = 10\n\nm = Map(center=center, zoom=zoom)\nm\n\nm.zoom",
"Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (there are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.",
"dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},\n rectangle={'shapeOptions': {'color': '#0000FF'}},\n circle={'shapeOptions': {'color': '#0000FF'}},\n circlemarker={},\n )\n\ndef handle_draw(target, action, geo_json):\n print(action)\n print(geo_json)\n\ndc.on_draw(handle_draw)\nm.add_control(dc)",
"In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.",
"dc.last_action\n\ndc.last_draw",
"It's possible to remove all drawings from the map",
"dc.clear_circles()\n\ndc.clear_polylines()\n\ndc.clear_rectangles()\n\ndc.clear_markers()\n\ndc.clear_polygons()\n\ndc.clear()",
"Let's draw a second map and try to import this GeoJSON data into it.",
"m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px'))\nm2",
"We can use link to synchronize traitlets of the two maps:",
"map_center_link = link((m, 'center'), (m2, 'center'))\nmap_zoom_link = link((m, 'zoom'), (m2, 'zoom'))\n\nnew_poly = GeoJSON(data=dc.last_draw)\n\nm2.add_layer(new_poly)",
"Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.\nNow let's add a DrawControl to this second map. For fun we will disable lines and enable circles as well and change the style a bit.",
"dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={},\n circle={'shapeOptions': {'color': '#0000FF'}})\nm2.add_control(dc2)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
c22n/ion-channel-ABC
|
docs/examples/human-atrial/nygren_isus_unified.ipynb
|
gpl-3.0
|
[
"ABC calibration of $I_\\text{Kur}$ in Nygren model to unified dataset.\nNote the term $I_\\text{sus}$ for sustained outward Potassium current is used throughout the notebook.",
"import os, tempfile\nimport logging\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nfrom ionchannelABC import theoretical_population_size\nfrom ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor\nfrom ionchannelABC.experiment import setup\nfrom ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom\nimport myokit\n\nfrom pyabc import Distribution, RV, History, ABCSMC\nfrom pyabc.epsilon import MedianEpsilon\nfrom pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler\nfrom pyabc.populationstrategy import ConstantPopulationSize",
"Initial set-up\nLoad experiments used for unified dataset calibration:\n - Steady-state activation [Wang1993]\n - Activation time constant [Wang1993]\n - Deactivation time constant [Courtemanche1998]\n - Steady-state inactivation [Firek1995]\n - Inactivation time constant [Nygren1998]\n - Recovery time constant [Nygren1998]",
"from experiments.isus_wang import (wang_act_and_kin)\nfrom experiments.isus_courtemanche import (courtemanche_deact)\n\nmodelfile = 'models/nygren_isus.mmt'",
"Plot steady-state and time constant functions of original model",
"from ionchannelABC.visualization import plot_variables\n\nsns.set_context('talk')\n\nV = np.arange(-100, 40, 0.01)\n\nnyg_par_map = {'ri': 'isus.r_inf',\n 'si': 'isus.s_inf',\n 'rt': 'isus.tau_r',\n 'st': 'isus.tau_s'}\n\nf, ax = plot_variables(V, nyg_par_map, modelfile, figshape=(2,2))",
"Activation gate ($r$) calibration\nCombine model and experiments to produce:\n - observations dataframe\n - model function to run experiments and return traces\n - summary statistics function to accept traces",
"observations, model, summary_statistics = setup(modelfile,\n wang_act_and_kin,\n courtemanche_deact)\n\nassert len(observations)==len(summary_statistics(model({})))\n\ng = plot_sim_results(modelfile,\n wang_act_and_kin,\n courtemanche_deact)",
"Set up prior ranges for each parameter in the model.\nSee the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space.",
"limits = {'isus.p1': (-100, 100),\n 'isus.p2': (1e-7, 50),\n 'log_isus.p3': (-5, 0),\n 'isus.p4': (-100, 100),\n 'isus.p5': (1e-7, 50),\n 'log_isus.p6': (-6, -1)}\nprior = Distribution(**{key: RV(\"uniform\", a, b - a)\n for key, (a,b) in limits.items()})",
"Run ABC calibration",
"db_path = (\"sqlite:///\" + os.path.join(tempfile.gettempdir(), \"nygren_isus_rgate_unified.db\"))\n\nlogging.basicConfig()\nabc_logger = logging.getLogger('ABC')\nabc_logger.setLevel(logging.DEBUG)\neps_logger = logging.getLogger('Epsilon')\neps_logger.setLevel(logging.DEBUG)\n\npop_size = theoretical_population_size(2, len(limits))\nprint(\"Theoretical minimum population size is {} particles\".format(pop_size))\n\nabc = ABCSMC(models=model,\n parameter_priors=prior,\n distance_function=IonChannelDistance(\n exp_id=list(observations.exp_id),\n variance=list(observations.variance),\n delta=0.05),\n population_size=ConstantPopulationSize(1000),\n summary_statistics=summary_statistics,\n transitions=EfficientMultivariateNormalTransition(),\n eps=MedianEpsilon(initial_epsilon=100),\n sampler=MulticoreEvalParallelSampler(n_procs=16),\n acceptor=IonChannelAcceptor())\n\nobs = observations.to_dict()['y']\nobs = {str(k): v for k, v in obs.items()}\n\nabc_id = abc.new(db_path, obs)\n\nhistory = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)",
"Database results analysis",
"history = History('sqlite:///results/nygren/isus/unified/nygren_isus_rgate_unified.db')\n\ndf, w = history.get_distribution()\n\ndf.describe()\n\nsns.set_context('poster')\n\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['legend.fontsize'] = 14\n\ng = plot_sim_results(modelfile,\n wang_act_and_kin,\n courtemanche_deact,\n df=df, w=w)\nplt.tight_layout()\n\nimport pandas as pd\nN = 100\nnyg_par_samples = df.sample(n=N, weights=w, replace=True)\nnyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))])\nnyg_par_samples = nyg_par_samples.to_dict(orient='records')\n\nsns.set_context('talk')\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['legend.fontsize'] = 14\n\nf, ax = plot_variables(V, nyg_par_map, \n 'models/nygren_isus.mmt', \n [nyg_par_samples],\n figshape=(2,2))\n\nfrom ionchannelABC.visualization import plot_kde_matrix_custom\nimport myokit\nimport numpy as np\n\nm,_,_ = myokit.load(modelfile)\n\noriginals = {}\nfor name in limits.keys():\n if name.startswith(\"log\"):\n name_ = name[4:]\n else:\n name_ = name\n val = m.value(name_)\n if name.startswith(\"log\"):\n val_ = np.log10(val)\n else:\n val_ = val\n originals[name] = val_\n\nsns.set_context('paper')\ng = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)\nplt.tight_layout()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
scotthuang1989/Python-3-Module-of-the-Week
|
data_structure/queue.ipynb
|
apache-2.0
|
[
"Purpose\n\nProvide Thread-Safe FIFO Implementation\nmulti-producer, multi-consumer queue\n\nBasic FIFO Queue\nThe Queue class implements a basic first-in, first-out container. Element are added to one \"end\" of the sequence using put(), and removed from the other using get()",
"import queue\n\nq = queue.Queue()\n\nfor i in range(5):\n q.put(i)\n \nwhile not q.empty():\n print(q.get(), end=' ')\n",
"This example uses a single thread to illustrate that elements are removed from the queue in the same order in which they are inserted\nLIFO Queue\nIn contrast to the standard FIFO implementation of Queue, the LifoQueue uses last-in, first-out ordering (normally associated with a stack data structure)",
"import queue\n\nq = queue.LifoQueue()\n\nfor i in range(5):\n q.put(i)\n \nwhile not q.empty():\n print(q.get(), end=' ')\n \n\n ",
"Priority Queue\nSometimes the processing order of the items in a queue needs to be based on characteristics of those items, rather than just the order they are created or added to the queue. For example, print jobs from the payroll department may take precedence over a code listing that a developer wants to print. PriorityQueue uses the sort order of the contents of the queue to decide which item to retrieve.",
"import functools\nimport queue\nimport threading\nimport time\n\n\n@functools.total_ordering\nclass Job:\n\n def __init__(self, priority, description):\n self.priority = priority\n self.description = description\n print('New job:', description)\n return\n\n def __eq__(self, other):\n try:\n return self.priority == other.priority\n except AttributeError:\n return NotImplemented\n\n def __lt__(self, other):\n try:\n return self.priority < other.priority\n except AttributeError:\n return NotImplemented\n \nq = queue.PriorityQueue()\n\nq.put(Job(3, 'Mid-level job'))\nq.put(Job(10, 'Low-level job'))\nq.put(Job(1, 'Important job'))\n\n\ntime.sleep(3)\n\ndef process_job(q):\n while True:\n next_job = q.get()\n print('Processing job:', next_job.description)\n q.task_done()\n\n\nworkers = [\n threading.Thread(target=process_job, args=(q,)),\n# threading.Thread(target=process_job, args=(q,)),\n]\nfor w in workers:\n w.setDaemon(True)\n w.start()\n\nq.join()\n",
"This example has multiple threads consuming the jobs, which are processed based on the priority of items in the queue at the time get() was called. The order of processing for items added to the queue while the consumer threads are running depends on thread context switching.\nBuilding a Threaded Podcast Client",
"# %load fetch_podcasts.py\n# First, some operating parameters are established. \n# Usually, these would come from user inputs \n# (e.g., preferences or a database). The example uses hard-coded \n# values for the number of threads and list of URLs to fetch.\n\nfrom queue import Queue\nimport threading\nimport time\nimport urllib\nfrom urllib.parse import urlparse\n\nimport feedparser\n\n# Set up some global variables\nnum_fetch_threads = 2\nenclosure_queue = Queue()\n\n# A real app wouldn't use hard-coded data...\nfeed_urls = [\n 'http://talkpython.fm/episodes/rss',\n]\n\n\ndef message(s):\n print('{}: {}'.format(threading.current_thread().name, s))\n\n# The function download_enclosures() runs in the worker thread \n# and processes the downloads using urllib. \n \ndef download_enclosures(q):\n \"\"\"This is the worker thread function.\n It processes items in the queue one after\n another. These daemon threads go into an\n infinite loop, and exit only when\n the main thread ends.\n \"\"\"\n while True:\n message('looking for the next enclosure')\n url = q.get()\n filename = url.rpartition('/')[-1]\n message('downloading {}'.format(filename))\n response = urllib.request.urlopen(url)\n data = response.read()\n # Save the downloaded file to the current directory\n message('writing to {}'.format(filename))\n with open(filename, 'wb') as outfile:\n outfile.write(data)\n q.task_done()\n\n# Set up some threads to fetch the enclosures\nfor i in range(num_fetch_threads):\n worker = threading.Thread(\n target=download_enclosures,\n args=(enclosure_queue,),\n name='worker-{}'.format(i),\n )\n worker.setDaemon(True)\n worker.start()\n \n\n# Download the feed(s) and put the enclosure URLs into\n# the queue.\nfor url in feed_urls:\n response = feedparser.parse(url, agent='fetch_podcasts.py')\n for entry in response['entries'][:5]:\n for enclosure in entry.get('enclosures', []):\n parsed_url = urlparse(enclosure['url'])\n message('queuing {}'.format(\n parsed_url.path.rpartition('/')[-1]))\n enclosure_queue.put(enclosure['url'])\n\n# Now wait for the queue to be empty, indicating that we have\n# processed all of the downloads.\nmessage('*** main thread waiting')\nenclosure_queue.join()\nmessage('*** done')\n\n!python fetch_podcasts.py\n\n# remove all download files\n!rm *.mp3",
"Join and task_done\nQueue.task_done and Queue.join is used closely:\n\n\nWhen you call join, the thread call it will block untile all item in the queue have been gotten and processed.\n\n\nThe count of unfinised tasks goes up whenever an item is added to the queue.\n\n\nThe count goes down whenever a consumer call task_done to indicate that the item was retrived and all work on it is complete. When the count of unfinised tasks drops to zero, join unblocks."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DJCordhose/ai
|
notebooks/tensorflow/tf_low_level_advanced.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tensorflow/tf_low_level_advanced.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nLow Level TensorFlow, Part II: Advanced\n\nhttps://www.tensorflow.org/guide/low_level_intro",
"# import and check version\nimport tensorflow as tf\n# tf can be really verbose\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)\n\n# a small sanity check, does tf seem to work ok? \nhello = tf.constant('Hello TF!')\nsess = tf.Session()\nprint(sess.run(hello))\nsess.close()",
"Reading in data sets",
"x = tf.placeholder(tf.float32)\ny = tf.placeholder(tf.float32)\nz = x + y\n\nr = tf.random_normal([10, 2])\ndataset = tf.data.Dataset.from_tensor_slices(r)\niterator = dataset.make_initializable_iterator()\nnext_row = iterator.get_next()\n\nwith tf.Session() as sess:\n sess.run(iterator.initializer)\n while True:\n try:\n data = sess.run(next_row)\n print(data)\n print(sess.run(z, feed_dict={x: data[0], y: data[1]}))\n except tf.errors.OutOfRangeError:\n break",
"Layers",
"x = tf.placeholder(tf.float32, shape=[None, 3])\ny = tf.layers.dense(inputs=x, units=1)\n\nwith tf.Session() as sess:\n try:\n print(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))\n except tf.errors.FailedPreconditionError as fpe:\n print(fpe.message)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))\n\ny = tf.layers.dense(inputs=x, units=2, activation=tf.nn.tanh)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))",
"Feature columns\ntransform a diverse range of raw data into formats input layers can accept\n\nhttps://www.tensorflow.org/guide/feature_columns\nhttps://www.tensorflow.org/api_docs/python/tf/feature_column/input_layer",
"features = {\n 'sales' : [[5], [10], [8], [9]],\n 'department': ['sports', 'sports', 'gardening', 'gardening']\n}\n\n# numeric values are simple\nsales_column = tf.feature_column.numeric_column('sales')\ncolumns = {\n sales_column\n}\n\ninputs = tf.feature_column.input_layer(features, columns)\n\n# categories are harders, as NNs only accept dense numeric values\n\ncategorical_department_column = tf.feature_column.categorical_column_with_vocabulary_list(\n 'department', ['sports', 'gardening'])\n\ncolumns = {\n sales_column,\n categorical_department_column\n}\n\n# we can decide if we want the category to be encoded as embedding or multi-hot \ntry:\n inputs = tf.feature_column.input_layer(features, columns)\nexcept ValueError as ve:\n print(ve)\n\nmulti_hot_department_column = tf.feature_column.indicator_column(categorical_department_column)\n\ncolumns = {\n sales_column,\n multi_hot_department_column\n}\n\ninputs = tf.feature_column.input_layer(features, columns)\n\n# feature columns also need initialization\nvar_init = tf.global_variables_initializer()\ntable_init = tf.tables_initializer()\nwith tf.Session() as sess:\n sess.run((var_init, table_init))\n # first two are departments last entry is just sales as is\n print(sess.run(inputs))\n\n# multi (one in our case) hot encoding of departments\ncolumns = {\n multi_hot_department_column\n}\n\ninputs = tf.feature_column.input_layer(features, columns)\nvar_init = tf.global_variables_initializer()\ntable_init = tf.tables_initializer()\nwith tf.Session() as sess:\n sess.run((var_init, table_init))\n print(sess.run(inputs))\n\n\n# alternative, embedding in three dimensions\nembedding_department_column = tf.feature_column.embedding_column(categorical_department_column, dimension=3)\ncolumns = {\n embedding_department_column\n}\n\ninputs = tf.feature_column.input_layer(features, columns)\nvar_init = tf.global_variables_initializer()\ntable_init = tf.tables_initializer()\nwith tf.Session() as sess:\n sess.run((var_init, table_init))\n print(sess.run(inputs))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sastels/Onboarding
|
1 - Introduction.ipynb
|
mit
|
[
"Language Introduction\nPython is a dynamic, interpreted (bytecode-compiled) language. There are no type declarations of variables, parameters, functions, or methods in source code. This makes the code short and flexible, and you lose the compile-time type checking of the source code. Python tracks the types of all values at runtime and flags code that does not make sense as it runs.\nAn excellent way to see how Python code works is to type it into a notebook.",
"a = 6 ## set a variable in this interpreter session\na ## entering an expression prints its value\n\na + 2\n\na = 'hi' ## 'a' can hold a string just as well\na\n\nlen(a) ## call the len() function on a string\n\na + len(a) ## try something that doesn't work\n\na + str(len(a)) ## probably what you really wanted\n\nfoo ## try something else that doesn't work",
"As you can see above, it's easy to experiment with variables and operators. Also, the interpreter throws, or \"raises\" in Python parlance, a runtime error if the code tries to read a variable that has not been assigned a value. Like C++ and Java, Python is case sensitive so \"a\" and \"A\" are different variables. The end of a line marks the end of a statement, so unlike C++ and Java, Python does not require a semicolon at the end of each statement. Comments begin with a '#' and extend to the end of the line.\nPython source code\nPython source files use the \".py\" extension and are called \"modules.\" With a Python module hello.py, the easiest way to run it is with the shell command \"python hello.py Alice\" which calls the Python interpreter to execute the code in hello.py, passing it the command line argument \"Alice\". See the official docs page on all the different options you have when running Python from the command-line.\nHere's a very simple hello.py program (notice that blocks of code are delimited strictly using indentation rather than curly braces — more on this later!):\n```python\n!/usr/bin/env python\nimport modules used here -- sys is a very standard one\nimport sys\nGather our code in a main() function\ndef main():\n print 'Hello there', sys.argv[1]\n # Command line args are in sys.argv[1], sys.argv[2] ...\n # sys.argv[0] is the script name itself and can be ignored\nStandard boilerplate to call the main() function to begin\nthe program.\nif name == 'main':\n main()\n```\nOpen a terminal window and paste this code into the file hello.py, then run the program a few times.\nImports, Command-line arguments, and len()\nThe outermost statements in a Python file, or \"module\", do its one-time setup — those statements run from top to bottom the first time the module is imported somewhere, setting up its variables and functions. A Python module can be run directly — as above \"python hello.py Bob\" — or it can be imported and used by some other module. When a Python file is run directly, the special variable __name__ is set to __main__. Therefore, it's common to have the boilerplate\nif __name__ ==...\nshown above to call a main() function when the module is run directly, but not when the module is imported by some other module.\nIn a standard Python program, the list sys.argv contains the command-line arguments in the standard way with sys.argv[0] being the program itself, sys.argv[1] the first argument, and so on. If you know about argc, or the number of arguments, you can simply request this value from Python with len(sys.argv), just like we did above when requesting the length of a string. In general, len() can tell you how long a string is, the number of elements in lists and tuples (another array-like data structure), and the number of key-value pairs in a dictionary.\nUser-defined Functions\nFunctions in Python are defined like this:",
"# Defines a \"repeat\" function that takes 2 arguments.\ndef repeat(s, exclaim):\n \"\"\"\n Returns the string 's' repeated 3 times.\n If exclaim is true, add exclamation marks.\n \"\"\"\n\n result = s + s + s # can also use \"s * 3\" which is faster (Why?)\n if exclaim:\n result = result + '!!!'\n return result",
"Notice also how the lines that make up the function or if-statement are grouped by all having the same level of indentation. We also presented 2 different ways to repeat strings, using the + operator which is more user-friendly, but * also works because it's Python's \"repeat\" operator, meaning that '-' * 10 gives '----------', a neat way to create an onscreen \"line.\" In the code comment, we hinted that * works faster than +, the reason being that * calculates the size of the resulting object once whereas with +, that calculation is made each time + is called. Both + and * are called \"overloaded\" operators because they mean different things for numbers vs. for strings (and other data types).\nThe def keyword defines the function with its parameters within parentheses and its code indented. The first line of a function can be a documentation string (\"docstring\") that describes what the function does. The docstring can be a single line, or a multi-line description as in the example above. (Yes, those are \"triple quotes,\" a feature unique to Python!) Variables defined in the function are local to that function, so the \"result\" in the above function is separate from a \"result\" variable in another function. The return statement can take an argument, in which case that is the value returned to the caller.\nHere is code that calls the above repeat() function, printing what it returns:",
"def happy(name):\n print repeat(' Yay ' + name, False)\n print repeat('Woo Hoo', True)",
"At run time, functions must be defined by the execution of a \"def\" before they are called.",
"happy('Tobi')",
"Indentation\nOne unusual Python feature is that the whitespace indentation of a piece of code affects its meaning. A logical block of statements such as the ones that make up a function should all have the same indentation, set in from the indentation of their parent function or \"if\" or whatever. If one of the lines in a group has a different indentation, it is flagged as a syntax error.\nPython's use of whitespace feels a little strange at first, but it's logical and I found I got used to it very quickly. Avoid using TABs as they greatly complicate the indentation scheme (not to mention TABs may mean different things on different platforms). Set your editor to insert spaces instead of TABs for Python code.\nA common question beginners ask is, \"How many spaces should I indent?\" According to the official Python style guide (PEP 8), you should indent with 4 spaces. A good IDE should take care of this for you.\nCode Checked at Runtime\nPython does very little checking at compile time, deferring almost all type, name, etc. checks on each line until that line runs. Suppose the above happy() calls repeat() like this:",
"def happy(name):\n if name == 'Tobi':\n print repeet(' Yay ' + name, False)\n else:\n print repeat('Woo Hoo', True)",
"The if-statement contains an obvious error, where the repeat() function is accidentally typed in as repeet(). The funny thing in Python ... this code compiles and runs fine so long as the name at runtime is not 'Tobi'. Only when a run actually tries to execute the repeet() will it notice that there is no such function and raise an error. This just means that when you first run a Python program, some of the first errors you see will be simple typos like this. This is one area where languages with a more verbose type system, like Java, have an advantage ... they can catch such errors at compile time (but of course you have to maintain all that type information ... it's a tradeoff).",
"happy('Cody')\n\nhappy('Tobi')",
"Variable Names\nSince Python variables don't have any type spelled out in the source code, it's extra helpful to give meaningful names to your variables to remind yourself of what's going on. So use \"name\" if it's a single name, and \"names\" if it's a list of names, and \"tuples\" if it's a list of tuples. Many basic Python errors result from forgetting what type of value is in each variable, so use your variable names (all you have really) to help keep things straight.\nAs far as actual naming goes, some languages prefer underscored_parts for variable names made up of \"more than one word,\" but other languages prefer camelCasing. In general, Python prefers the underscore method but guides developers to defer to camelCasing if integrating into existing Python code that already uses that style. Readability counts. Read more in the section on naming conventions in PEP 8.\nAs you can guess, keywords like 'print' and 'while' cannot be used as variable names — you'll get a syntax error if you do. However, be careful not to use built-ins as variable names. For example, while 'str' and 'list' may seem like good names, you'd be overriding those system variables. Built-ins are not keywords and thus, are susceptible to inadvertent use by new Python developers.\nMore on Modules and their Namespaces\nSuppose you've got a module \"binky.py\" which contains a \"def foo()\". The fully qualified name of that foo function is \"binky.foo\". In this way, various Python modules can name their functions and variables whatever they want, and the variable names won't conflict — module1.foo is different from module2.foo. In the Python vocabulary, we'd say that binky, module1, and module2 each have their own \"namespaces,\" which as you can guess are variable name-to-object bindings.\nFor example, we have the standard \"sys\" module that contains some standard system facilities, like the argv list, and exit() function. With the statement \"import sys\" you can can then access the definitions in the sys module and makes them available by their fully-qualified name, e.g. sys.exit(). (Yes, 'sys' has a namespace too!)",
"import sys\n\n# Now can refer to sys.xxx facilities\nsys.maxint",
"There is another import form that looks like this: \"from sys import argv, exit\". That makes argv and exit() available by their short names; however, we recommend the original form with the fully-qualified names because it's a lot easier to determine where a function or attribute came from.\nThere are many modules and packages which are bundled with a standard installation of the Python interpreter, so you don't have do anything extra to use them. These are collectively known as the \"Python Standard Library.\" Commonly used modules/packages include:\n\nsys — access to exit(), argv, stdin, stdout, ...\nre — regular expressions\nos — operating system interface, file system\n\nYou can find the documentation of all the Standard Library modules and packages at http://docs.python.org/library.\nhelp(), and dir()\nInside the Python interpreter, the help() function pulls up documentation strings for various modules, functions, and methods. These doc strings are similar to Java's javadoc. The dir() function tells you what the attributes of an object are. Below are some ways to call help() and dir() from the interpreter:",
"help(len)",
"In Jupyter notebook you can also find help on a function by pressing shift-TAB after a function's first parenthesis.",
"len( <PRESS SHIFT-TAB BEFORE THIS> ",
"Pressing shift-TAB twice in succession makes the documentation box bigger (showing more of the docs).\nRecall our string \"a\".",
"a\n\ndir(a)",
"The above output is annoyingly long - click in the margin under the Out[] to make it smaller.\nNote: This notebook is an adaption of Google's python tutorial https://developers.google.com/edu/python"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dsquareindia/gensim
|
docs/notebooks/doc2vec-IMDB.ipynb
|
lgpl-2.1
|
[
"gensim doc2vec & IMDB sentiment dataset\nTODO: section on introduction & motivation\nTODO: prerequisites + dependencies (statsmodels, patsy, ?)\nRequirements\nFollowing are the dependencies for this tutorial:\n - testfixtures\n - statsmodels\nLoad corpus\nFetch and prep exactly as in Mikolov's go.sh shell script. (Note this cell tests for existence of required files, so steps won't repeat once the final summary file (aclImdb/alldata-id.txt) is available alongside this notebook.)",
"import locale\nimport glob\nimport os.path\nimport requests\nimport tarfile\n\ndirname = 'aclImdb'\nfilename = 'aclImdb_v1.tar.gz'\nlocale.setlocale(locale.LC_ALL, 'C')\n\n\n# Convert text to lower-case and strip punctuation/symbols from words\ndef normalize_text(text):\n norm_text = text.lower()\n\n # Replace breaks with spaces\n norm_text = norm_text.replace('<br />', ' ')\n\n # Pad punctuation with spaces on both sides\n for char in ['.', '\"', ',', '(', ')', '!', '?', ';', ':']:\n norm_text = norm_text.replace(char, ' ' + char + ' ')\n\n return norm_text\n\n\nif not os.path.isfile('aclImdb/alldata-id.txt'):\n if not os.path.isdir(dirname):\n if not os.path.isfile(filename):\n # Download IMDB archive\n url = 'http://ai.stanford.edu/~amaas/data/sentiment/' + filename\n r = requests.get(url)\n with open(filename, 'wb') as f:\n f.write(r.content)\n\n tar = tarfile.open(filename, mode='r')\n tar.extractall()\n tar.close()\n\n # Concat and normalize test/train data\n folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']\n alldata = u''\n\n for fol in folders:\n temp = u''\n output = fol.replace('/', '-') + '.txt'\n\n # Is there a better pattern to use?\n txt_files = glob.glob('/'.join([dirname, fol, '*.txt']))\n\n for txt in txt_files:\n with open(txt, 'r', encoding='utf-8') as t:\n control_chars = [chr(0x85)]\n t_clean = t.read()\n\n for c in control_chars:\n t_clean = t_clean.replace(c, ' ')\n\n temp += t_clean\n\n temp += \"\\n\"\n\n temp_norm = normalize_text(temp)\n with open('/'.join([dirname, output]), 'w', encoding='utf-8') as n:\n n.write(temp_norm)\n\n alldata += temp_norm\n\n with open('/'.join([dirname, 'alldata-id.txt']), 'w', encoding='utf-8') as f:\n for idx, line in enumerate(alldata.splitlines()):\n num_line = \"_*{0} {1}\\n\".format(idx, line)\n f.write(num_line)\n\nimport os.path\nassert os.path.isfile(\"aclImdb/alldata-id.txt\"), \"alldata-id.txt unavailable\"",
"The data is small enough to be read into memory.",
"import gensim\nfrom gensim.models.doc2vec import TaggedDocument\nfrom collections import namedtuple\n\nSentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')\n\nalldocs = [] # will hold all docs in original order\nwith open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:\n for line_no, line in enumerate(alldata):\n tokens = gensim.utils.to_unicode(line).split()\n words = tokens[1:]\n tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost\n split = ['train','test','extra','extra'][line_no//25000] # 25k train, 25k test, 25k extra\n sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown\n alldocs.append(SentimentDocument(words, tags, split, sentiment))\n\ntrain_docs = [doc for doc in alldocs if doc.split == 'train']\ntest_docs = [doc for doc in alldocs if doc.split == 'test']\ndoc_list = alldocs[:] # for reshuffling per pass\n\nprint('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))",
"Set-up Doc2Vec Training & Evaluation Models\nApproximating experiment of Le & Mikolov \"Distributed Representations of Sentences and Documents\", also with guidance from Mikolov's example go.sh:\n./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1\nParameter choices below vary:\n\n100-dimensional vectors, as the 400d vectors of the paper don't seem to offer much benefit on this task\nsimilarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out\ncbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0\nadded to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)\na min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)",
"from gensim.models import Doc2Vec\nimport gensim.models.doc2vec\nfrom collections import OrderedDict\nimport multiprocessing\n\ncores = multiprocessing.cpu_count()\nassert gensim.models.doc2vec.FAST_VERSION > -1, \"this will be painfully slow otherwise\"\n\nsimple_models = [\n # PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size\n Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),\n # PV-DBOW \n Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),\n # PV-DM w/average\n Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),\n]\n\n# speed setup by sharing results of 1st model's vocabulary scan\nsimple_models[0].build_vocab(alldocs) # PV-DM/concat requires one special NULL word so it serves as template\nprint(simple_models[0])\nfor model in simple_models[1:]:\n model.reset_from(simple_models[0])\n print(model)\n\nmodels_by_name = OrderedDict((str(model), model) for model in simple_models)",
"Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)",
"from gensim.test.test_doc2vec import ConcatenatedDoc2Vec\nmodels_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])\nmodels_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])",
"Predictive Evaluation Methods\nHelper methods for evaluating error rate.",
"import numpy as np\nimport statsmodels.api as sm\nfrom random import sample\n\n# for timing\nfrom contextlib import contextmanager\nfrom timeit import default_timer\nimport time \n\n@contextmanager\ndef elapsed_timer():\n start = default_timer()\n elapser = lambda: default_timer() - start\n yield lambda: elapser()\n end = default_timer()\n elapser = lambda: end-start\n \ndef logistic_predictor_from_data(train_targets, train_regressors):\n logit = sm.Logit(train_targets, train_regressors)\n predictor = logit.fit(disp=0)\n #print(predictor.summary())\n return predictor\n\ndef error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):\n \"\"\"Report error rate on test_doc sentiments, using supplied model and train_docs\"\"\"\n\n train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])\n train_regressors = sm.add_constant(train_regressors)\n predictor = logistic_predictor_from_data(train_targets, train_regressors)\n\n test_data = test_set\n if infer:\n if infer_subsample < 1.0:\n test_data = sample(test_data, int(infer_subsample * len(test_data)))\n test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]\n else:\n test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]\n test_regressors = sm.add_constant(test_regressors)\n \n # predict & evaluate\n test_predictions = predictor.predict(test_regressors)\n corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])\n errors = len(test_predictions) - corrects\n error_rate = float(errors) / len(test_predictions)\n return (error_rate, errors, len(test_predictions), predictor)",
"Bulk Training\nUsing explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post – with added shuffling of corpus on each pass.\nNote that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.\nEvaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors. \n(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)",
"from collections import defaultdict\nbest_error = defaultdict(lambda :1.0) # to selectively-print only best errors achieved\n\nfrom random import shuffle\nimport datetime\n\nalpha, min_alpha, passes = (0.025, 0.001, 20)\nalpha_delta = (alpha - min_alpha) / passes\n\nprint(\"START %s\" % datetime.datetime.now())\n\nfor epoch in range(passes):\n shuffle(doc_list) # shuffling gets best results\n \n for name, train_model in models_by_name.items():\n # train\n duration = 'na'\n train_model.alpha, train_model.min_alpha = alpha, alpha\n with elapsed_timer() as elapsed:\n train_model.train(doc_list)\n duration = '%.1f' % elapsed()\n \n # evaluate\n eval_duration = ''\n with elapsed_timer() as eval_elapsed:\n err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)\n eval_duration = '%.1f' % eval_elapsed()\n best_indicator = ' '\n if err <= best_error[name]:\n best_error[name] = err\n best_indicator = '*' \n print(\"%s%f : %i passes : %s %ss %ss\" % (best_indicator, err, epoch + 1, name, duration, eval_duration))\n\n if ((epoch + 1) % 5) == 0 or epoch == 0:\n eval_duration = ''\n with elapsed_timer() as eval_elapsed:\n infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)\n eval_duration = '%.1f' % eval_elapsed()\n best_indicator = ' '\n if infer_err < best_error[name + '_inferred']:\n best_error[name + '_inferred'] = infer_err\n best_indicator = '*'\n print(\"%s%f : %i passes : %s %ss %ss\" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))\n\n print('completed pass %i at alpha %f' % (epoch + 1, alpha))\n alpha -= alpha_delta\n \nprint(\"END %s\" % str(datetime.datetime.now()))",
"Achieved Sentiment-Prediction Accuracy",
"# print best error rates achieved\nfor rate, name in sorted((rate, name) for name, rate in best_error.items()):\n print(\"%f %s\" % (rate, name))",
"In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.\nExamining Results\nAre inferred vectors close to the precalculated ones?",
"doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc; re-run cell for more examples\nprint('for doc %d...' % doc_id)\nfor model in simple_models:\n inferred_docvec = model.infer_vector(alldocs[doc_id].words)\n print('%s:\\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))",
"(Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)\nDo close documents seem more related than distant ones?",
"import random\n\ndoc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples\nmodel = random.choice(simple_models) # and a random model\nsims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents\nprint(u'TARGET (%d): «%s»\\n' % (doc_id, ' '.join(alldocs[doc_id].words)))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: «%s»\\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))",
"(Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)\nDo the word vectors show useful similarities?",
"word_models = simple_models[:]\n\nimport random\nfrom IPython.display import HTML\n# pick a random word with a suitable number of occurences\nwhile True:\n word = random.choice(word_models[0].wv.index2word)\n if word_models[0].wv.vocab[word].count > 10:\n break\n# or uncomment below line, to just pick a word from the relevant domain:\n#word = 'comedy/drama'\nsimilars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\\n') for model in word_models]\nsimilar_table = (\"<table><tr><th>\" +\n \"</th><th>\".join([str(model) for model in word_models]) + \n \"</th></tr><tr><td>\" +\n \"</td><td>\".join(similars_per_model) +\n \"</td></tr></table>\")\nprint(\"most similar words for '%s' (%d occurences)\" % (word, simple_models[0].wv.vocab[word].count))\nHTML(similar_table)",
"Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task. \nWords from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)\nAre the word vectors from this dataset any good at analogies?",
"# assuming something like\n# https://word2vec.googlecode.com/svn/trunk/questions-words.txt \n# is in local directory\n# note: this takes many minutes\nfor model in word_models:\n sections = model.accuracy('questions-words.txt')\n correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])\n print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))",
"Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)\nSlop",
"This cell left intentionally erroneous.",
"To mix the Google dataset (if locally available) into the word tests...",
"from gensim.models import KeyedVectors\nw2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)\nw2v_g100b.compact_name = 'w2v_g100b'\nword_models.append(w2v_g100b)",
"To get copious logging output from above steps...",
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nrootLogger = logging.getLogger()\nrootLogger.setLevel(logging.INFO)",
"To auto-reload python code while developing...",
"%load_ext autoreload\n%autoreload 2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chseifert/tutorials
|
visualizations/Parallel-Coordinates.ipynb
|
apache-2.0
|
[
"PARALLEL COORDINATES\nAuthors\nNdèye Gagnessiry Ndiaye and Christin Seifert \nLicense\nThis work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/ \nThis notebook:\n\nTakes the Iris flower data set and creates a parallel coordinates plot \nReproduces the ambiguity effect for parallel coordinates",
"import pandas as pd\nimport numpy as np\nimport pylab as plt\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nfrom pandas.tools.plotting import parallel_coordinates\nfrom pandas.tools.plotting import scatter_matrix",
"The Iris flower data set\nThe Iris flower data set or Fisher's Iris data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres.",
"iris_data = pd.read_csv('Parallel-Coordinates_Iris-dataset.csv')\niris_data.head()\n\niris_data.tail()",
"We create two parallel coordinates plot of Iris flower dataset. Using parallel coordinates points are represented as connected line segments. Each vertical line represents one attribute. One set of connected line segments represents one data point.\nThe first figure shows the normal parallel coordinates plot. In the second plot, we shift randomly the order of the axis. The visualisation changes quite when the axes are rendered.",
"# Remove the 'ID' column from Iris flower dataset\niris = iris_data.drop('Id', 1)\niris.head()\n\n# Get columns list of Iris flower dataset\ncols = iris.columns.tolist()\nprint(cols)\n\n# Permute randomly the columns list of Iris flower dataset\ncols_new= np.random.permutation(cols)\ncols_new= cols_new.tolist()\nprint (cols_new)\n\n# Create a new version of Iris flower dataset with permutated columns\niris_new = iris[cols_new]\niris_new.head()\n\n# Plot two parallel coordinates of Iris flower dataset with different ordering of the axis.\nfig = plt.figure(figsize=(15,6))\n\nax1 = plt.subplot(121)\nparallel_coordinates(iris, 'Species',color=['r','g','b'])\n\n\nax2= plt.subplot(122)\nparallel_coordinates(iris_new, 'Species',color=['r','g','b'])\n\n\nax1.spines[\"left\"].set_visible(False)\nax1.spines[\"bottom\"].set_visible(False)\nax1.spines[\"top\"].set_visible(False)\n#ax1.get_yaxis().tick_left() # remove unneeded ticks \n\nax2.spines[\"left\"].set_visible(False)\nax2.spines[\"bottom\"].set_visible(False)\nax2.spines[\"top\"].set_visible(False)\n#ax2.get_yaxis().tick_left() # remove unneeded ticks \n\nplt.tight_layout()\nplt.show()",
"The figure below shows a scatter plot matrix of the Iris flower dataset.",
"scatter_matrix(iris, figsize=(10, 8),c=['r','g','b'],diagonal='kde')\nplt.show()",
"Ambiguity effect for parallel coordinates\nFigures (A) and (B) plot respectively the vectors (y1,y2) and (y3,y4) without color coding. Figures (C) and (D) are respectively the same plots than (A) and (B) but with color coding. Single color lines may yield to ambiguities.",
"#vectors to plot\ny1=[1,2,1]\ny2=[4,2,3]\ny3=[4,2,1]\ny4=[1,2,3]\n\n#spines\nx=[1,2,3]\n\nplt.figure(1)\nfig,(ax1,ax2) = plt.subplots(1,2, sharey=False)\nax3 = fig.add_axes([1, 0.125, 0.4, 0.775], label='axes1')\nax4 = fig.add_axes([1.4, 0.125, 0.4, 0.775], label='axes1')\n\n\n#plot vectors y1 and y2 without color coding\nax1.plot(x,y1,'black', x,y2,'black')\nax2.plot(x,y1,'black', x,y2,'black')\nax1.set_xlim([ x[0],x[1]])\nax2.set_xlim([ x[1],x[2]])\nax1.spines[\"bottom\"].set_visible(False)\nax1.spines[\"top\"].set_visible(False)\nax2.spines[\"bottom\"].set_visible(False)\nax2.spines[\"top\"].set_visible(False)\nax1.set_xticks([]) #Remove unneeded ticks\nax1.get_yaxis().tick_left() \nax2.set_xticks([]) \nax2.set_yticks([]) \nplt.subplots_adjust(wspace=0)\n\n\n#plot vectors y3 and y4 withour color coding\nax3.plot(x,y3,'black', x,y4,'black')\nax4.plot(x,y3,'black', x,y4,'black')\nax3.set_xlim([ x[0],x[1]])\nax4.set_xlim([ x[1],x[2]])\nax3.spines[\"bottom\"].set_visible(False)\nax3.spines[\"top\"].set_visible(False)\nax4.spines[\"bottom\"].set_visible(False)\nax4.spines[\"top\"].set_visible(False)\nax3.set_xticks([]) #Remove unneeded ticks\nax3.get_yaxis().tick_left() \nax4.set_xticks([]) \nax4.set_yticks([]) \n\nfig.suptitle('Single color lines', x=1,y=1.05,fontsize=14, fontweight='bold')\nax1.set_title(\"A (y1=[1,2,1],y2=[4,2,3])\")\nax3.set_title(\"B (y3=[4,2,1],y4=[1,2,3])\")\n\n#################################################\n\nplt.figure(2)\nfig,(ax1,ax2) = plt.subplots(1,2, sharey=False)\nax3 = fig.add_axes([1, 0.125, 0.4, 0.775], label='axes1')\nax4 = fig.add_axes([1.4, 0.125, 0.4, 0.775], label='axes1')\n\n#plot vectors y1 and y2 with color coding\nax1.plot(x,y1,'red', x,y2,'blue')\nax2.plot(x,y1,'red', x,y2,'blue')\nax1.set_xlim([ x[0],x[1]])\nax2.set_xlim([ x[1],x[2]])\nax1.spines[\"bottom\"].set_visible(False)\nax1.spines[\"top\"].set_visible(False)\nax2.spines[\"bottom\"].set_visible(False)\nax2.spines[\"top\"].set_visible(False)\nax1.set_xticks([]) #Remove unneeded ticks\nax1.get_yaxis().tick_left() \nax2.set_xticks([]) \nax2.set_yticks([]) \nplt.subplots_adjust(wspace=0)\n\n#plot vectors y3 and y4 with color coding\nax3.plot(x,y3,'red', x,y4,'blue')\nax4.plot(x,y3,'red', x,y4,'blue')\nax3.set_xlim([ x[0],x[1]])\nax4.set_xlim([ x[1],x[2]])\nax3.spines[\"bottom\"].set_visible(False)\nax3.spines[\"top\"].set_visible(False)\nax4.spines[\"bottom\"].set_visible(False)\nax4.spines[\"top\"].set_visible(False)\nax3.set_xticks([]) #Remove unneeded ticks\nax3.get_yaxis().tick_left() \nax4.set_xticks([]) \nax4.set_yticks([]) \n\nfig.suptitle('Color-coded lines', x=1,y=1.05,fontsize=14, fontweight='bold')\nax1.set_title(\"C (y1=[1,2,1], y2=[4,2,3])\")\nax3.set_title(\"D (y3=[4,2,1], y4=[1,2,3])\")\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nwfpug/python-primer
|
notebooks/01-basics.ipynb
|
gpl-3.0
|
[
"The Basics: Synatx, Indentation, Comments, etc ...\nVariables\n\nPython has similar syntax to C/C++/Java so it uses the same convention for naming variables: variable names can contain upper and lower case character & numbers, including the underscore ( _ ) character. \nVariable names are case sensitive\nVariable names can't start with numbers or special characters (except for the undescore character)\nVariables cannot have the same names as Python keywords \n\n| Keywords |\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|and|assert | as |break |class|continue|def|del |\n|elif |else |except |exec|finally |for|from |global |\n|if |import|in |is|lambda |not|or |pass|\n|print |raise|return |try|while |with|yield | |",
"# this is a correct variable name\nvariable_name = 10\n\n# this is not a correct variable name - variable names can't start with a number\n4tops = 5\n\n# here's the error generated from using a Python keyword - notice the color of the variable name: it's not the same \n# as the one in the cells below\nfinally = 7\n\n# these variables are not the same\ndistance = 10\nDistance = 20\ndisTance = 30\n\nprint \"distance = \", distance\nprint \"Distance = \", Distance\nprint \"disTance = \", disTance",
"Indentation and Code Blocks",
"x = 11\n\nif x == 10:\n print 'x = 10'\nelse:\n print 'x != 10'\n print 'Done'\n\nprint \"Bye now\" \n",
"Statements/Multiline statements\nA new line signals the end of a statement",
"# these are two separate statements\na = 4\nb = 6",
"For multi-line statements, use the \\ character at the end of the line",
"# use line continuation symbol (use with care)\na = 10 \\\n + \\\n 10 + 10\nprint a",
"or put the multilines in ()",
"# use prenthesis (preferable way) to have a multiline statements\na = (10\n + 20 + \n 10 + 10)\nprint a",
"Comments\nComments begin after the # character which can appear anywhere on the line. Multiline comments begin and end with 3 matching ' or \" characters.",
"# this is a line comment\na = 10 # this is also a line comment\n'''\nthis line is indsie a multi-line comment\nso is this\nand this\n'''\na = 100 # this is outside the multi-line comment\n\n\"\"\"\nThis is another way to declare a multi-line comment\nInside it\ninside it too\n\"\"\"\n\na = 100 # this is outside the multi-line comment",
"You can put a comment inside a muliline statement",
"# multiline statement with comments\na = (10 # this is line 1\n + 20 + # this is line 2\n 10 + 10)\nprint a"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ozorich/phys202-2015-work
|
assignments/assignment02/ProjectEuler59.ipynb
|
mit
|
[
"Project Euler: Problem 59\nhttps://projecteuler.net/problem=59\nEach character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.\nA modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.\nFor unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both \"halves\", it is impossible to decrypt the message.\nUnfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.\nYour task has been made easy, as the encryption key consists of three lower case characters. Using cipher.txt (in this directory), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.\nThe following cell shows examples of how to perform XOR in Python and how to go back and forth between characters and integers:",
"assert 65 ^ 42 == 107\nassert 107 ^ 42 == 65\nassert ord('a') == 97\nassert chr(97) == 'a'",
"Certain functions in the itertools module may be useful for computing permutations:",
"from itertools import *\n\n\nencrypted=open(\"cipher.txt\",\"r\")\nmessage=encrypted.read().split(\",\")\nencrypted.close()\n\n\ndef key_cycler(cycles):\n for n in range(cycles): #will repeat key for every index of message this won't translate very last character since length in 400.33\n u1=key[0]^int(message[3*n])\n unencrypted.insert(3*n,u1) #inserts into corresponding spot in unencrypted list\n u2=key[1]^int(message[(3*n)+1])\n unencrypted.insert((3*n)+1,u2)\n u3=key[2]^int(message[(3*n)+2]) #XOR each message interger against its corresponding key value\n unencrypted.insert((3*n)+2,u3)\n \n \n",
"The below is what I think should work however it takes a while to run so I end up interrupting kernel so I don't bog down system. Finding all the values of the key doesn't take long so it must be an error in my method of cycling through the message with each key value that takes too much time. I have been trying to figure out how to possibly use cycle() or repeat() function to run key against encrypted message. I am going to submit now but still attempt to fix problem then resubmit",
"length=len(message) \nprint(length) \nrepeat_times=1201/3 #gives me estimate of number of times to cycle through\nprint(repeat_times)\nfor a in range(97,123): #the values of lower case letters\n for b in range(97,123):\n for c in range(97,123):\n key=[a,b,c] #iterates through all key values for 3 lowercase letters\n unencrypted=[]\n key_cycler(400) #cycles key through message and puts into unencrypted\n english=[]\n for i in unencrypted:\n e=chr(i)\n english.append(e) #converts from ACSII to character string\n english=\"\".join(english) #converts to whole string\n if \" the \" in english: #checks to see if \" the \" is in message . Like suggested in the Gitter Chat I am assuming this won't appear if not correct key\n print(english) # if it does appear for incorrect keys then I can remove the break and print all instance where\n print(key) #\" the \" appears and then select which key produces a completely legible message\n break #prints the key that made instance of message and then breaks the for loop so only first message with\n # instances of \" the \" occuring is printed\n \n \n \n \n \n \n ",
"Test of lower half of code by using a set key",
"key=[97,97,97] #iterates through all key values for 3 lowercase letters\nunencrypted=[]\nkey_cycler(400) #cycles key through message and puts into unencrypted\nenglish=[]\nfor i in unencrypted:\n e=chr(i)\n english.append(e) #converts from ACSII to character string\nenglish=\"\".join(english) #converts to whole string\n\nprint(english)\n ",
"However this still takes too long to finish so must be an erro in the key_cycler function",
"# This cell will be used for grading, leave it at the end of the notebook."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |