repo_name
stringlengths
6
77
path
stringlengths
8
215
license
stringclasses
15 values
cells
list
types
list
ituethoslab/navcom-2017
exercises/Week 3-What are Digital Methods/Exercises week 3.ipynb
gpl-3.0
[ "Exercises week 3: What are Digital Methods?\n1. Install Tableau Desktop\n<img src=\"https://cdns.tblsft.com/sites/default/files/pages/answerdeeperquestions.png\" style=\"width: 50%; float: right;\"></img>\nStudents are given a license for this software.\n2. Open the DAMD data in Tableau\nWhat shape is the data? What variables are there? Is the data complete, or are there missing values somewhere? How is the phenomenon represented? What is the data even about?\n3. How is the data distributed over time?\nCreate a histogram over the years. What new insight does this give into the data? What are we looking at? What is this thing called \"time\"?\n<img src=\"tweet-activity-over-years-with-tableau.png\" style=\"width: 50%; float: right;\"></img>\n3.1 A reproduction with Python\nTo demonstrate programming, a similar visualization can be produced computationally. Can you follow the steps and get a general idea of what each step does?", "import pandas as pd\n%matplotlib inline", "Open the data file", "damd = pd.read_csv(\"20170718 hashtag_damd uncleaned.csv\")\ndamd.columns", "Let's look at the created variable.", "damd['created'].head(3)", "Looks like dates, great. Let's set the data type.", "damd['created'] = pd.to_datetime(damd['created'])\ndamd['created'].head(3)", "Let's group the data by year, and plot the count of items per year as a vertical bar chart.", "damd['created'].groupby(by=damd['created'].dt.year).count().plot.bar(figsize=(5, 6), title=\"Tweet activity over years\").grid(True, axis=\"y\")", "Did the above program create the same output as you got in Tableau?\n4. Produce an interactive visualization for tweet exploration with Tableau\n<img src=\"interactive-timeline-visualization-for-tweet-exploration-with-tableau.png\" style=\"float: right; width: 50%;\"></img>\nIn Tableau, create a timeline, where tweets are coloured by username, and the details include the tweet content. Use this to explore the topics in the DAMD data.\nCompare Twitter and Facebook data.\n5. 
Submit a data visualisation\nHand in a visualization on LearnIT." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
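The grouping step in the Python reproduction above can be tried on a toy table first. Below is a minimal sketch assuming only a `created` column of timestamp strings; the sample values are invented, not taken from the DAMD file:

```python
import pandas as pd

# Invented stand-in for the DAMD export: one 'created' timestamp per tweet.
damd = pd.DataFrame({"created": [
    "2015-03-01 10:00", "2015-07-14 12:30",
    "2016-01-02 09:15", "2017-06-30 23:59",
]})

# Parse the strings into datetimes so the .dt accessor works.
damd["created"] = pd.to_datetime(damd["created"])

# Count rows per calendar year -- the aggregation behind the bar chart.
per_year = damd["created"].groupby(damd["created"].dt.year).count()
print(per_year.to_dict())  # {2015: 2, 2016: 1, 2017: 1}
```

Calling `.plot.bar()` on `per_year` then reproduces the chart from the exercise.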
cloudmesh/book
notebooks/machinelearning/crossvalidation.ipynb
apache-2.0
[ "Cross-validation\nIn the machine learning examples, we have already shown the importance of splitting data into training and testing sets. However, a single random split is not enough: if we randomly assign training and testing data just once, the result could be biased. We can improve on this with cross-validation.\nFirst, let's review how we split data into training and testing sets:", "from sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import metrics\n\n# read in the iris data\niris = load_iris()\n\n# create X (features) and y (response)\nX = iris.data\ny = iris.target\n\n# use train/test split with different random_state values\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)\n\n# check classification accuracy of KNN with K=5\nknn = KNeighborsClassifier(n_neighbors=5)\nknn.fit(X_train, y_train)\ny_pred = knn.predict(X_test)\nprint(metrics.accuracy_score(y_test, y_pred))", "Question 1: If you haven't learned KNN yet, please first look it up and answer this simple question: Is it supervised learning or unsupervised learning? Also please answer: in the previous example, what does n_neighbors=5 mean?\nAnswer: Double click this cell and input your answer here.\nSteps for K-fold cross-validation\n\nSplit the dataset into K equal partitions (or \"folds\").\nUse fold 1 as the testing set and the union of the other folds as the training set.\nCalculate testing accuracy.\nRepeat with each of the other folds as the testing set, and use the average accuracy as the estimate of out-of-sample accuracy.\n\nCross-validation example:", "from sklearn.model_selection import cross_val_score\n\n# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)\n# First we initialize a knn model\nknn = KNeighborsClassifier(n_neighbors=5)\n\n# Secondly we use cross_val_score to get all possible accuracies. \n# It works like this: first we split the data into 10 chunks. 
\n# Then we run KNN 10 times, using a different chunk as the testing data in each iteration.\nscores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')\n\nprint(scores)\n\n# use average accuracy as an estimate of out-of-sample accuracy\nprint(scores.mean())", "From this example, we can see that if we split training and testing data only once, sometimes we get a very \"good\" model and sometimes a very \"bad\" one. The spread of scores shows that this is not about the model itself; it is just because we use different sets of training and testing data.\nYour exercise for cross-validation\nThe previous example raises a question: how do we choose the parameter for KNN (n_neighbors=?)? A good way to do this is called tuning parameters. In this exercise, we will learn how to tune the parameter by taking advantage of cross-validation. \nGoal: Select the best tuning parameters (aka \"hyperparameters\") for KNN on the iris dataset\nYour programming task:\nFrom the above example, we know that if we set the number of neighbors to K=5, we get an average accuracy of about 0.97. However, if we want to find a better number, what should we do? 
It is very straightforward: we can iteratively try different values of K and find which K gives us the best accuracy.", "# search for an optimal value of K for KNN\n# Suppose we set the range of K from 1 to 30.\nk_range = list(range(1, 31))\n\n# A list that stores the accuracy score for each K.\nk_scores = []\n\nfor k in k_range:\n # Your code: \n # First, initialize a knn model with n_neighbors=k \n # Second, use 10-fold cross-validation to get 10 scores with that model.\n k_scores.append(scores.mean())\n\n# Make a visualization of it, and check what the best k for knn is\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)\nplt.plot(k_range, k_scores)\nplt.xlabel('Value of K for KNN')\nplt.ylabel('Cross-Validated Accuracy')", "A new Cross-validation task: model selection\nWe have already applied cross-validation to the KNN model. How about other models? Please continue to read the notes and do another exercise.", "# 10-fold cross-validation with the best KNN model\nknn = KNeighborsClassifier(n_neighbors=20)\nprint(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())\n\n# How about logistic regression? Please finish the code below and make a comparison.\n# Hint: check how we did it with knn.\nfrom sklearn.linear_model import LogisticRegression\n# initialize a logistic regression model here.\n# Then print the average score of the logistic model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
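The K-fold procedure described in the notes above can also be written out by hand, which makes the fold bookkeeping explicit. This sketch uses only NumPy and a deliberately trivial majority-class "classifier" instead of KNN, so it illustrates the splitting and averaging, not the notebook's model:

```python
import numpy as np

def kfold_indices(n, k):
    # Split the indices 0..n-1 into k (near-)equal contiguous folds.
    return np.array_split(np.arange(n), k)

def cross_val_accuracy(y, k=5):
    # Average held-out accuracy of a majority-class baseline over k folds.
    folds = kfold_indices(len(y), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # "Training" here is just finding the most common label.
        labels, counts = np.unique(y[train_idx], return_counts=True)
        majority = labels[np.argmax(counts)]
        # "Testing" scores that constant prediction on the held-out fold.
        scores.append(np.mean(y[test_idx] == majority))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
y = rng.permutation(np.array([0] * 14 + [1] * 6))
print(cross_val_accuracy(y, k=5))
```

Replacing the majority-class baseline with a real model and scoring it on `(X, y)` recovers exactly what `cross_val_score(model, X, y, cv=k)` computes.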
michael-hoffman/titanic
Titanic_ML_v1.ipynb
gpl-3.0
[ "Titanic: Machine Learning from Disaster\nAn Exploration into the Data using Python\nData Science on the Hill (Michael Hoffman and Charlies Bonfield)\nTable of Contents:\n\nIntroduction <br/>\nLoading/Examining the Data <br/>\nAll the Features! <br/>\n 3a. Extracting Titles from Names <br/>\n 3b. Treating Missing Ports of Departure <br/>\n 3c. Handling Missing Fares <br/>\n 3d. Cabin Number: Relevant or Not? <br/>\n 3e. Quick Fixes <br/>\n 3f. Imputing Missing Ages <br/>\nPrediction <br/>\n\n1. Introduction <a class=\"anchor\" id=\"first-bullet\"></a>\nTo get familiar with Kaggle competitions we worked on the initial tutorial project. The goal is to predict who onboard the Titanic survived the accident. In our initial analysis, we wanted to see how much the predictions would change when the input data was scaled properly as opposed to unscaled (violating the assumptions of the underlying SVM model). We saw an approximately five percent improvement in accuracy by preprocessing the data properly.", "# data analysis and wrangling\nimport pandas as pd\nimport numpy as np\nimport scipy\n\n# visualization\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# machine learning\nfrom sklearn.svm import SVC\nfrom sklearn import preprocessing\nimport fancyimpute\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import RandomizedSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import cross_val_score\n\n%matplotlib inline", "2. Loading/Examining the Data <a class=\"anchor\" id=\"second-bullet\"></a>", "# Load the data. \ntraining_data = pd.read_csv('train.csv')\ntest_data = pd.read_csv('test.csv')\n\n# Examine the first few rows of data in the training set. \ntraining_data.head()", "3. All the Features! <a class=\"anchor\" id=\"third-bullet\"></a>\nWe will be extracting the features with custom functions. 
This isn't necessary for all the features for this project, but we want to leave the possibility of further development open for the future. \n3a. Extracting Titles from Names <a class=\"anchor\" id=\"third-first\"></a>\nWhile the Name feature itself may not appear to be useful at first glance, we can tease out additional features that may be useful for predicting survival on the Titanic. We will extract a Title from each name, as that carries information about social and marital status (which in turn may relate to survival).", "# Extract title from names, then assign to one of five classes.\n# Function based on code from: https://www.kaggle.com/startupsci/titanic/titanic-data-science-solutions \ndef add_title(data):\n data['Title'] = data.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)\n data.Title = data.Title.replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', \n 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')\n data.Title = data.Title.replace('Mlle', 'Miss')\n data.Title = data.Title.replace('Ms', 'Miss')\n data.Title = data.Title.replace('Mme', 'Mrs')\n \n # Map from strings to numerical variables.\n title_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\n \n data.Title = data.Title.map(title_mapping)\n data.Title = data.Title.fillna(0)\n\n return data", "3b. Treating Missing Ports of Embarkation <a class=\"anchor\" id=\"third-second\"></a>\nNext, let's see if there are any rows that are missing ports of embarkation.", "missing_emb_training = training_data[pd.isnull(training_data.Embarked) == True]\nmissing_emb_test = test_data[pd.isnull(test_data.Embarked) == True]\n\nmissing_emb_training.head()\n\nmissing_emb_test.head()", "We have two passengers in the training set that are missing ports of embarkation, while we are not missing any in the test set. <br>\nThe features which may allow us to assign a port of embarkation based on the data that we do have are Pclass, Fare, and Cabin. 
However, since we are missing so much of the Cabin column (more on that later), let's focus in on the other two.", "grid = sns.FacetGrid(training_data[training_data.Pclass == 1], col='Embarked', size=2.2, aspect=1.6)\ngrid.map(plt.hist, 'Fare', alpha=.5, bins=20)\ngrid.map(plt.axvline, x=80.0, color='red', ls='dashed')\ngrid.add_legend();", "Although Southampton was the most popular port of embarkation, there was a greater fraction of passengers in the first ticket class from Cherbourg who paid 80.00 for their tickets. Therefore, we will assign 'C' to the missing values for Embarked. We will also recast Embarked as a numerical feature.", "# Recast port of departure as numerical feature. \ndef simplify_embark(data):\n # Two missing values, assign Cherbourg as port of departure.\n data.Embarked = data.Embarked.fillna('C')\n \n le = preprocessing.LabelEncoder().fit(data.Embarked)\n data.Embarked = le.transform(data.Embarked)\n \n return data", "3c. Handling Missing Fares <a class=\"anchor\" id=\"third-third\"></a>\nWe will perform a similar analysis to see if there are any missing fares.", "missing_fare_training = training_data[np.isnan(training_data['Fare'])]\nmissing_fare_test = test_data[np.isnan(test_data['Fare'])]\n\nmissing_fare_training.head()\n\nmissing_fare_test.head()", "This time, the Fare column in the training set is complete, but we are missing that information for one passenger in the test set. Since we do have PClass and Embarked, however, we will assign a fare based on the distribution of fares for those particular values of PClass and Embarked.", "restricted_training = training_data[(training_data.Pclass == 3) & (training_data.Embarked == 'S')]\nrestricted_test = test_data[(test_data.Pclass == 3) & (test_data.Embarked == 'S')]\nrestricted_test = restricted_test[~np.isnan(restricted_test.Fare)] # Leave out poor Mr. 
Storey\ncombine = [restricted_training, restricted_test]\ncombine = pd.concat(combine)\n\n# Find median fare, plot over resulting distribution. \nfare_med = np.median(combine.Fare)\n\nsns.kdeplot(combine.Fare, shade=True)\nplt.axvline(fare_med, color='r', ls='dashed', lw='1', label='Median')\nplt.legend();", "After examining the distribution of Fare restricted to the specified values of Pclass and Embarked, we will use the median for the missing fare (as it falls very close to the fare corresponding to the peak of the distribution).", "test_data['Fare'] = test_data['Fare'].fillna(fare_med)", "3d. Cabin Number: Relevant or Not? <a class=\"anchor\" id=\"third-fourth\"></a>\nWhen we first encountered the data, we figured that Cabin would be one of the most important features in predicting survival, as it would not be unreasonable to think of it as a proxy for a passenger's position on the Titanic relative to the lifeboats (distance to deck, distance to nearest stairwell, social class, etc.). \nUnfortunately, much of this data is missing:", "missing_cabin_training = np.size(training_data.Cabin[pd.isnull(training_data.Cabin) == True]) / np.size(training_data.Cabin) * 100.0\nmissing_cabin_test = np.size(test_data.Cabin[pd.isnull(test_data.Cabin) == True]) / np.size(test_data.Cabin) * 100.0\n\nprint('Percentage of Missing Cabin Numbers (Training): %0.1f' % missing_cabin_training)\nprint('Percentage of Missing Cabin Numbers (Test): %0.1f' % missing_cabin_test)", "What can we do with this data (rather, the lack thereof)? \nFor now, let's just pull out the first letter of each cabin number (including NaNs), cast them as numbers, and hope they improve the performance of our classifier.", "## Set of functions to transform features into more convenient format.\n#\n# Code performs two separate tasks:\n# (1). Pull out the first letter of the cabin feature. \n# Code taken from: https://www.kaggle.com/jeffd23/titanic/scikit-learn-ml-from-start-to-finish\n# (2). 
Recasts cabin feature as number.\ndef simplify_cabins(data):\n data.Cabin = data.Cabin.fillna('N')\n data.Cabin = data.Cabin.apply(lambda x: x[0])\n \n #cabin_mapping = {'N': 0, 'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1, \n # 'F': 1, 'G': 1, 'T': 1}\n #data['Cabin_Known'] = data.Cabin.map(cabin_mapping)\n \n le = preprocessing.LabelEncoder().fit(data.Cabin)\n data.Cabin = le.transform(data.Cabin)\n \n return data", "3e. Quick Fixes <a class=\"anchor\" id=\"third-fifth\"></a>\nPrior to the last step (which is arguably the largest one), we need to tie up a few remaining loose ends:\n- Recast Sex as numerical feature.\n- Drop unwanted features. \n - Name: We've taken out the information that we need (Title).\n - Ticket: There appears to be no rhyme or reason to the data in this column, so we remove it from our analysis.\n- Combine training/test data prior to age imputation.", "# Recast sex as numerical feature. \ndef simplify_sex(data):\n sex_mapping = {'male': 0, 'female': 1}\n data.Sex = data.Sex.map(sex_mapping).astype(int)\n \n return data\n\n# Drop all unwanted features (name, ticket). \ndef drop_features(data):\n return data.drop(['Name','Ticket'], axis=1)\n\n# Perform all feature transformations. \ndef transform_all(data):\n data = add_title(data)\n data = simplify_embark(data)\n data = simplify_cabins(data)\n data = simplify_sex(data)\n data = drop_features(data)\n \n return data\n\ntraining_data = transform_all(training_data)\ntest_data = transform_all(test_data)\n\nall_data = [training_data, test_data]\ncombined_data = pd.concat(all_data)\n\n# Inspect data.\ncombined_data.head()", "3f. Imputing Missing Ages <a class=\"anchor\" id=\"third-sixth\"></a>\nIt is expected that age will be an important feature; however, a number of ages are missing. Attempting to predict the ages with a simple model was not very successful. 
We decided to follow the recommendations for imputation from this article.", "null_ages = pd.isnull(combined_data.Age)\nknown_ages = pd.notnull(combined_data.Age)\ninitial_dist = combined_data.Age[known_ages]\n\n# Examine distribution of ages prior to imputation (for comparison). \nsns.distplot(initial_dist)", "This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. We have chosen to use the MICE implementation from fancyimpute.", "def impute_ages(data):\n drop_survived = data.drop(['Survived'], axis=1)\n column_titles = list(drop_survived)\n mice_results = fancyimpute.MICE().complete(np.array(drop_survived))\n results = pd.DataFrame(mice_results, columns=column_titles)\n results['Survived'] = list(data['Survived'])\n return results\n\ncomplete_data = impute_ages(combined_data)\ncomplete_data.Age = complete_data.Age[~(complete_data.Age).index.duplicated(keep='first')]\n\n# Examine distribution of ages after imputation (for comparison). \nsns.distplot(initial_dist, label='Initial Distribution')\nsns.distplot(complete_data.Age, label='After Imputation')\nplt.title('Distribution of Ages')\nplt.legend()", "From the output above it is easy to see that the distribution is not dramatically altered by the imputation. Additionally, there are no nonsensical ages (negative or extremely old) produced in the process. For experimenting with the tutorial this step is sufficient. In a more serious project this may be one of the most critical steps. \n4. Prediction <a class=\"anchor\" id=\"fourth-bullet\"></a>\nThe model was chosen to be a support vector machine (SVM) model. The reasons for this choice:\n 1. the method is designed for supervised classification. \n 2. it is effective for high-dimensional systems (we intend to expand the features in the next post).\nIn this model we choose the rbf kernel, which is essentially an expansion into an \"infinite\" hyperspace of Gaussians. 
Therefore, if there is a way to separate the data points by a single boundary, this method has a chance to find it.\nBelow is the code that implements the model. The commented code below was used to perform a parameter search to find an optimal fit to the data. The highest ranked outcome was used. There is no particular reason for using a random search; we were just experimenting with the tools available natively in scikit-learn. In fact, it is likely faster to use a straightforward grid search for a problem with so few features; however, this random search will scale more favorably as the number of features grows.", "# Transform age and fare data to have mean zero and variance 1.0.\nscaler = preprocessing.StandardScaler()\nselect = 'Age Fare'.split()\ncomplete_data[select] = scaler.fit_transform(complete_data[select])\n\ntraining_data = complete_data[:891]\ntest_data = complete_data[891:].drop('Survived', axis=1)\n\n# ----------------------------------\n# Support Vector Machines\ndroplist = 'Survived PassengerId'.split()\ndata = training_data.drop(droplist, axis=1)\n# Define features and target values\nX, y = data, training_data['Survived']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)\n\n#\n# # Set the parameters by cross-validation\n# param_dist = {'C': scipy.stats.uniform(0.1, 1000), 'gamma': scipy.stats.uniform(.001, 1.0),\n# 'kernel': ['rbf'], 'class_weight':['balanced', None]}\n#\n# clf = SVC()\n#\n# # run randomized search\n# n_iter_search = 10000\n# random_search = RandomizedSearchCV(clf, param_distributions=param_dist,\n# n_iter=n_iter_search, n_jobs=-1, cv=4)\n#\n# start = time()\n# random_search.fit(X, y)\n# print(\"RandomizedSearchCV took %.2f seconds for %d candidates\"\n# \" parameter settings.\" % ((time() - start), n_iter_search))\n# report(random_search.cv_results_)\n# exit()\n\n\"\"\"\nRandomizedSearchCV took 4851.48 seconds for 10000 candidates parameter settings.\nModel with rank: 1\nMean validation 
score: 0.833 (std: 0.013)\nParameters: {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None}\n\nModel with rank: 2\nMean validation score: 0.832 (std: 0.012)\nParameters: {'kernel': 'rbf', 'C': 154.85033872208422, 'gamma': 0.010852578446979289, 'class_weight': None}\n\nModel with rank: 2\nMean validation score: 0.832 (std: 0.012)\nParameters: {'kernel': 'rbf', 'C': 142.60506747360913, 'gamma': 0.011625955252680842, 'class_weight': None}\n\"\"\"\n\nparams = {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None}\nclf = SVC(**params)\nscores = cross_val_score(clf, X, y, cv=4, n_jobs=-1)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))\n\ndroplist = 'PassengerId'.split()\nclf.fit(X,y)\npredictions = clf.predict(test_data.drop(droplist, axis=1))\n#print(predictions)\nprint('Predicted Number of Survivors: %d' % int(np.sum(predictions)))\n\n# output .csv for upload\n# submission = pd.DataFrame({\n# \"PassengerId\": test_data['PassengerId'].astype(int),\n# \"Survived\": predictions.astype(int)\n# })\n#\n# submission.to_csv('../submission.csv', index=False)", "Summary of Results\nThe base model is dubbed the gender model. It predicts that all women survive and all men don't. The second model is the SVM model above without scaling on the fare and age features. Finally, the last model is the fully scaled SVM. The results are summarized below; the accuracy is that on the test set withheld by Kaggle:\n 1. gender only: 76.5% \n 2. SVM non-scaled: 75.6%\n 3. SVM scaled: 79.9%\nThe modest increase over the base model is enough to go from the lowest rank (gender-only model) to the top 20%. It will clearly be necessary to generate additional informative features to make further progress." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
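The Titanic post's headline result above is that standardizing Age and Fare improved the SVM by roughly five percent. The transformation itself is small; here is a minimal NumPy sketch of what `preprocessing.StandardScaler` does to one column (the fare values below are invented, not taken from the dataset):

```python
import numpy as np

# Invented fare-like values standing in for a DataFrame column.
fares = np.array([7.25, 71.28, 7.92, 53.10, 8.05])

# Standardize: subtract the column mean and divide by the column
# standard deviation, giving mean 0 and variance 1.
scaled = (fares - fares.mean()) / fares.std()

print(scaled.mean(), scaled.std())  # approximately 0.0 and 1.0
```

An RBF-kernel SVM measures Euclidean distances between points, so an unscaled feature spanning hundreds of units (Fare) would otherwise dominate one spanning tens (Age); that is the mechanism behind the reported improvement.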
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
Neural Networks and Deep Learning/Logistic_Regression_with_a_Neural_Network_mindset_v3.ipynb
mit
[ "Logistic Regression with a Neural Network mindset\nWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.\nInstructions:\n- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.\nYou will learn to:\n- Build the general architecture of a learning algorithm, including:\n - Initializing parameters\n - Calculating the cost function and its gradient\n - Using an optimization algorithm (gradient descent) \n- Gather all three functions above into a main model function, in the right order.\n1 - Packages\nFirst, let's run the cell below to import all the packages that you will need during this assignment. \n- numpy is the fundamental package for scientific computing with Python.\n- h5py is a common package to interact with a dataset that is stored on an H5 file.\n- matplotlib is a famous library to plot graphs in Python.\n- PIL and scipy are used here to test your model with your own picture at the end.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport h5py\nimport scipy\nfrom PIL import Image\nfrom scipy import ndimage\nfrom lr_utils import load_dataset\n\n%matplotlib inline", "2 - Overview of the Problem set\nProblem Statement: You are given a dataset (\"data.h5\") containing:\n - a training set of m_train images labeled as cat (y=1) or non-cat (y=0)\n - a test set of m_test images labeled as cat or non-cat\n - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).\nYou will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.\nLet's get more familiar with the dataset. 
Load the data by running the following code.", "# Loading the data (cat/non-cat)\ntrain_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()", "We added \"_orig\" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).\nEach line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.", "# Example of a picture\nindex = 25\nplt.imshow(train_set_x_orig[index])\nprint (\"y = \" + str(train_set_y[:, index]) + \", it's a '\" + classes[np.squeeze(train_set_y[:, index])].decode(\"utf-8\") + \"' picture.\")", "Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. \nExercise: Find the values for:\n - m_train (number of training examples)\n - m_test (number of test examples)\n - num_px (= height = width of a training image)\nRemember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). 
For instance, you can access m_train by writing train_set_x_orig.shape[0].", "### START CODE HERE ### (≈ 3 lines of code)\nm_train = None\nm_test = None\nnum_px = None\n### END CODE HERE ###\n\nprint (\"Number of training examples: m_train = \" + str(m_train))\nprint (\"Number of testing examples: m_test = \" + str(m_test))\nprint (\"Height/Width of each image: num_px = \" + str(num_px))\nprint (\"Each image is of size: (\" + str(num_px) + \", \" + str(num_px) + \", 3)\")\nprint (\"train_set_x shape: \" + str(train_set_x_orig.shape))\nprint (\"train_set_y shape: \" + str(train_set_y.shape))\nprint (\"test_set_x shape: \" + str(test_set_x_orig.shape))\nprint (\"test_set_y shape: \" + str(test_set_y.shape))", "Expected Output for m_train, m_test and num_px: \n<table style=\"width:15%\">\n <tr>\n <td>**m_train**</td>\n <td> 209 </td> \n </tr>\n\n <tr>\n <td>**m_test**</td>\n <td> 50 </td> \n </tr>\n\n <tr>\n <td>**num_px**</td>\n <td> 64 </td> \n </tr>\n\n</table>\n\nFor convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px $\\times$ num_px $\\times$ 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. 
There should be m_train (respectively m_test) columns.\nExercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\\times$ num_px $\\times$ 3, 1).\nA trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$\\times$c$\\times$d, a) is to use: \npython\nX_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X", "# Reshape the training and test examples\n\n### START CODE HERE ### (≈ 2 lines of code)\ntrain_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T \ntest_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T \n### END CODE HERE ###\n\nprint (\"train_set_x_flatten shape: \" + str(train_set_x_flatten.shape))\nprint (\"train_set_y shape: \" + str(train_set_y.shape))\nprint (\"test_set_x_flatten shape: \" + str(test_set_x_flatten.shape))\nprint (\"test_set_y shape: \" + str(test_set_y.shape))\nprint (\"sanity check after reshaping: \" + str(train_set_x_flatten[0:5,0]))", "Expected Output: \n<table style=\"width:35%\">\n <tr>\n <td>**train_set_x_flatten shape**</td>\n <td> (12288, 209)</td> \n </tr>\n <tr>\n <td>**train_set_y shape**</td>\n <td>(1, 209)</td> \n </tr>\n <tr>\n <td>**test_set_x_flatten shape**</td>\n <td>(12288, 50)</td> \n </tr>\n <tr>\n <td>**test_set_y shape**</td>\n <td>(1, 50)</td> \n </tr>\n <tr>\n <td>**sanity check after reshaping**</td>\n <td>[17 31 56 22 33]</td> \n </tr>\n</table>\n\nTo represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.\nOne common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. 
But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).\n<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropagate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see this in more detail later in the lectures. -->\n\nLet's standardize our dataset.", "train_set_x = train_set_x_flatten/255.\ntest_set_x = test_set_x_flatten/255.", "<font color='blue'>\nWhat you need to remember:\nCommon steps for pre-processing a new dataset are:\n- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)\n- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)\n- \"Standardize\" the data\n3 - General Architecture of the learning algorithm\nIt's time to design a simple algorithm to distinguish cat images from non-cat images.\nYou will build a Logistic Regression, using a Neural Network mindset. 
The following Figure explains why Logistic Regression is actually a very simple Neural Network!\n<img src=\"images/LogReg_kiank.png\" style=\"width:650px;height:400px;\">\nMathematical expression of the algorithm:\nFor one example $x^{(i)}$:\n$$z^{(i)} = w^T x^{(i)} + b \\tag{1}$$\n$$\\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\\tag{2}$$ \n$$ \\mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \\log(a^{(i)}) - (1-y^{(i)} ) \\log(1-a^{(i)})\\tag{3}$$\nThe cost is then computed by summing over all training examples:\n$$ J = \\frac{1}{m} \\sum_{i=1}^m \\mathcal{L}(a^{(i)}, y^{(i)})\\tag{6}$$\nKey steps:\nIn this exercise, you will carry out the following steps: \n - Initialize the parameters of the model\n - Learn the parameters for the model by minimizing the cost\n - Use the learned parameters to make predictions (on the test set)\n - Analyse the results and conclude\n4 - Building the parts of our algorithm ##\nThe main steps for building a Neural Network are:\n1. Define the model structure (such as number of input features) \n2. Initialize the model's parameters\n3. Loop:\n - Calculate current loss (forward propagation)\n - Calculate current gradient (backward propagation)\n - Update parameters (gradient descent)\nYou often build 1-3 separately and integrate them into one function we call model().\n4.1 - Helper functions\nExercise: Using your code from \"Python Basics\", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \\frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. 
Use np.exp().", "# GRADED FUNCTION: sigmoid\n\ndef sigmoid(z):\n \"\"\"\n Compute the sigmoid of z\n\n Arguments:\n z -- A scalar or numpy array of any size.\n\n Return:\n s -- sigmoid(z)\n \"\"\"\n\n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+np.exp(-z))\n ### END CODE HERE ###\n \n return s\n\nprint (\"sigmoid([0, 2]) = \" + str(sigmoid(np.array([0,2]))))", "Expected Output: \n<table>\n <tr>\n <td>**sigmoid([0, 2])**</td>\n <td> [ 0.5 0.88079708]</td> \n </tr>\n</table>\n\n4.2 - Initializing parameters\nExercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.", "# GRADED FUNCTION: initialize_with_zeros\n\ndef initialize_with_zeros(dim):\n \"\"\"\n This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.\n \n Argument:\n dim -- size of the w vector we want (or number of parameters in this case)\n \n Returns:\n w -- initialized vector of shape (dim, 1)\n b -- initialized scalar (corresponds to the bias)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n w = np.zeros((dim,1))\n b = 0\n ### END CODE HERE ###\n\n assert(w.shape == (dim, 1))\n assert(isinstance(b, float) or isinstance(b, int))\n \n return w, b\n\ndim = 2\nw, b = initialize_with_zeros(dim)\nprint (\"w = \" + str(w))\nprint (\"b = \" + str(b))", "Expected Output: \n<table style=\"width:15%\">\n <tr>\n <td> ** w ** </td>\n <td> [[ 0.]\n [ 0.]] </td>\n </tr>\n <tr>\n <td> ** b ** </td>\n <td> 0 </td>\n </tr>\n</table>\n\nFor image inputs, w will be of shape (num_px $\\times$ num_px $\\times$ 3, 1).\n4.3 - Forward and Backward propagation\nNow that your parameters are initialized, you can do the \"forward\" and \"backward\" propagation steps for learning the parameters.\nExercise: Implement a function propagate() that computes the cost function and its gradient.\nHints:\nForward Propagation:\n- You get 
X\n- You compute $A = \\sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$\n- You calculate the cost function: $J = -\\frac{1}{m}\\sum_{i=1}^{m}\\left(y^{(i)}\\log(a^{(i)})+(1-y^{(i)})\\log(1-a^{(i)})\\right)$\nHere are the two formulas you will be using: \n$$ \\frac{\\partial J}{\\partial w} = \\frac{1}{m}X(A-Y)^T\\tag{7}$$\n$$ \\frac{\\partial J}{\\partial b} = \\frac{1}{m} \\sum_{i=1}^m (a^{(i)}-y^{(i)})\\tag{8}$$", "# GRADED FUNCTION: propagate\n\ndef propagate(w, b, X, Y):\n \"\"\"\n Implement the cost function and its gradient for the propagation explained above\n\n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of size (num_px * num_px * 3, number of examples)\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)\n\n Return:\n cost -- negative log-likelihood cost for logistic regression\n dw -- gradient of the loss with respect to w, thus same shape as w\n db -- gradient of the loss with respect to b, thus same shape as b\n \n Tips:\n - Write your code step by step for the propagation. 
np.log(), np.dot()\n \"\"\"\n \n m = X.shape[1]\n \n # FORWARD PROPAGATION (FROM X TO COST) \n ### START CODE HERE ### (≈ 2 lines of code)\n A = sigmoid(np.dot(w.T,X)+b) # compute activation\n cost = (- 1 / m) * np.sum(Y * np.log(A) + (1 - Y) * (np.log(1 - A))) # compute cost\n ### END CODE HERE ###\n \n # BACKWARD PROPAGATION (TO FIND GRAD)\n ### START CODE HERE ### (≈ 2 lines of code)\n dw = (1 / m) * np.dot(X, (A - Y).T) # dw has the same shape as w\n db = (1 / m) * np.sum(A - Y)\n ### END CODE HERE ###\n \n assert(dw.shape == w.shape)\n assert(db.dtype == float)\n cost = np.squeeze(cost)\n assert(cost.shape == ())\n \n grads = {\"dw\": dw,\n \"db\": db}\n \n return grads, cost\n\nw, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])\ngrads, cost = propagate(w, b, X, Y)\nprint (\"dw = \" + str(grads[\"dw\"]))\nprint (\"db = \" + str(grads[\"db\"]))\nprint (\"cost = \" + str(cost))", "Expected Output:\n<table style=\"width:50%\">\n <tr>\n <td> ** dw ** </td>\n <td> [[ 0.99993216]\n [ 1.99980262]]</td>\n </tr>\n <tr>\n <td> ** db ** </td>\n <td> 0.499935230625 </td>\n </tr>\n <tr>\n <td> ** cost ** </td>\n <td> 6.000064773192205</td>\n </tr>\n\n</table>\n\nd) Optimization\n\nYou have initialized your parameters.\nYou are also able to compute a cost function and its gradient.\nNow, you want to update the parameters using gradient descent.\n\nExercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. 
For a parameter $\\theta$, the update rule is $ \\theta = \\theta - \\alpha \\text{ } d\\theta$, where $\\alpha$ is the learning rate.", "# GRADED FUNCTION: optimize\n\ndef optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):\n \"\"\"\n This function optimizes w and b by running a gradient descent algorithm\n \n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of shape (num_px * num_px * 3, number of examples)\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)\n num_iterations -- number of iterations of the optimization loop\n learning_rate -- learning rate of the gradient descent update rule\n print_cost -- True to print the loss every 100 steps\n \n Returns:\n params -- dictionary containing the weights w and bias b\n grads -- dictionary containing the gradients of the weights and bias with respect to the cost function\n costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.\n \n Tips:\n You basically need to write down two steps and iterate through them:\n 1) Calculate the cost and the gradient for the current parameters. 
Use propagate().\n 2) Update the parameters using gradient descent rule for w and b.\n \"\"\"\n \n costs = []\n \n for i in range(num_iterations):\n \n \n # Cost and gradient calculation (≈ 1-4 lines of code)\n ### START CODE HERE ### \n grads, cost = propagate(w, b, X, Y)\n ### END CODE HERE ###\n \n # Retrieve derivatives from grads\n dw = grads[\"dw\"]\n db = grads[\"db\"]\n \n # update rule (≈ 2 lines of code)\n ### START CODE HERE ###\n w = w - learning_rate * dw\n b = b - learning_rate * db\n ### END CODE HERE ###\n \n # Record the costs\n if i % 100 == 0:\n costs.append(cost)\n \n # Print the cost every 100 iterations\n if print_cost and i % 100 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n \n params = {\"w\": w,\n \"b\": b}\n \n grads = {\"dw\": dw,\n \"db\": db}\n \n return params, grads, costs\n\nparams, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)\n\nprint (\"w = \" + str(params[\"w\"]))\nprint (\"b = \" + str(params[\"b\"]))\nprint (\"dw = \" + str(grads[\"dw\"]))\nprint (\"db = \" + str(grads[\"db\"]))", "Expected Output: \n<table style=\"width:40%\">\n <tr>\n <td> **w** </td>\n <td>[[ 0.1124579 ]\n [ 0.23106775]] </td>\n </tr>\n\n <tr>\n <td> **b** </td>\n <td> 1.55930492484 </td>\n </tr>\n <tr>\n <td> **dw** </td>\n <td> [[ 0.90158428]\n [ 1.76250842]] </td>\n </tr>\n <tr>\n <td> **db** </td>\n <td> 0.430462071679 </td>\n </tr>\n\n</table>\n\nExercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:\n\n\nCalculate $\\hat{Y} = A = \\sigma(w^T X + b)$\n\n\nConvert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. 
If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).", "# GRADED FUNCTION: predict\n\ndef predict(w, b, X):\n '''\n Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)\n \n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of size (num_px * num_px * 3, number of examples)\n \n Returns:\n Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X\n '''\n \n m = X.shape[1]\n Y_prediction = np.zeros((1,m))\n w = w.reshape(X.shape[0], 1)\n \n # Compute vector \"A\" predicting the probabilities of a cat being present in the picture\n ### START CODE HERE ### (≈ 1 line of code)\n A = sigmoid(np.dot(w.T, X)+b)\n ### END CODE HERE ###\n \n for i in range(A.shape[1]):\n \n # Convert probabilities A[0,i] to actual predictions p[0,i]\n ### START CODE HERE ### (≈ 4 lines of code)\n if A[0][i]>0.5:\n Y_prediction[0][i]=1\n else:\n Y_prediction[0][i]=0\n \n ### END CODE HERE ###\n \n assert(Y_prediction.shape == (1, m))\n \n return Y_prediction\n\nprint (\"predictions = \" + str(predict(w, b, X)))", "Expected Output: \n<table style=\"width:30%\">\n <tr>\n <td>\n **predictions**\n </td>\n <td>\n [[ 1. 1.]]\n </td> \n </tr>\n\n</table>\n\n<font color='blue'>\nWhat to remember:\nYou've implemented several functions that:\n- Initialize (w,b)\n- Optimize the loss iteratively to learn parameters (w,b):\n - computing the cost and its gradient \n - updating the parameters using gradient descent\n- Use the learned (w,b) to predict the labels for a given set of examples\n5 - Merge all functions into a model\nYou will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.\nExercise: Implement the model function. 
Use the following notation:\n - Y_prediction for your predictions on the test set\n - Y_prediction_train for your predictions on the train set\n - w, costs, grads for the outputs of optimize()", "# GRADED FUNCTION: model\n\ndef model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):\n \"\"\"\n Builds the logistic regression model by calling the function you've implemented previously\n \n Arguments:\n X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)\n Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)\n X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)\n Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)\n num_iterations -- hyperparameter representing the number of iterations to optimize the parameters\n learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()\n print_cost -- Set to true to print the cost every 100 iterations\n \n Returns:\n d -- dictionary containing information about the model.\n \"\"\"\n \n ### START CODE HERE ###\n # initialize parameters with zeros (≈ 1 line of code)\n w, b = initialize_with_zeros(X_train.shape[0])\n\n # Gradient descent (≈ 1 line of code)\n parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)\n \n # Retrieve parameters w and b from dictionary \"parameters\"\n w = parameters[\"w\"]\n b = parameters[\"b\"]\n \n # Predict test/train set examples (≈ 2 lines of code)\n Y_prediction_test = predict(w, b, X_test)\n Y_prediction_train = predict(w, b, X_train)\n\n ### END CODE HERE ###\n\n # Print train/test Errors\n print(\"train accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))\n print(\"test accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))\n\n \n d = {\"costs\": costs,\n 
\"Y_prediction_test\": Y_prediction_test, \n \"Y_prediction_train\" : Y_prediction_train, \n \"w\" : w, \n \"b\" : b,\n \"learning_rate\" : learning_rate,\n \"num_iterations\": num_iterations}\n \n return d", "Run the following cell to train your model.", "d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)", "Expected Output: \n<table style=\"width:40%\"> \n\n <tr>\n <td> **Train Accuracy** </td> \n <td> 99.04306220095694 % </td>\n </tr>\n\n <tr>\n <td>**Test Accuracy** </td> \n <td> 70.0 % </td>\n </tr>\n</table>\n\nComment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!\nAlso, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.", "num_px=64\n#test_set_x[:,1].reshape((num_px, num_px, 3))\n\n# Example of a picture that was wrongly classified.\nindex = 1\nplt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))\nprint (\"y = \" + str(test_set_y[0, index]) + \", you predicted that it is a \\\"\" + classes[d[\"Y_prediction_test\"][0, index]].decode(\"utf-8\") + \"\\\" picture.\")", "Let's also plot the cost function and the gradients.", "# Plot learning curve (with costs)\ncosts = np.squeeze(d['costs'])\nplt.plot(costs)\nplt.ylabel('cost')\nplt.xlabel('iterations (per hundreds)')\nplt.title(\"Learning rate =\" + str(d[\"learning_rate\"]))\nplt.show()", "Interpretation:\nYou can see the cost decreasing. It shows that the parameters are being learned. 
However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. \n6 - Further analysis (optional/ungraded exercise)\nCongratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\\alpha$. \nChoice of learning rate\nReminder:\nIn order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may \"overshoot\" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.\nLet's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. 
Feel free also to try different values than the four we have initialized the learning_rates variable to contain, and see what happens.", "learning_rates = [0.1, 0.01, 0.001, 0.0001]\nmodels = {}\nfor i in learning_rates:\n print (\"learning rate is: \" + str(i))\n models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = i, print_cost = False)\n print ('\\n' + \"-------------------------------------------------------\" + '\\n')\n\nfor i in learning_rates:\n plt.plot(np.squeeze(models[str(i)][\"costs\"]), label= str(models[str(i)][\"learning_rate\"]))\n\nplt.ylabel('cost')\nplt.xlabel('iterations')\n\nlegend = plt.legend(loc='upper center', shadow=True)\nframe = legend.get_frame()\nframe.set_facecolor('0.90')\nplt.show()", "Interpretation: \n- Different learning rates give different costs and thus different prediction results.\n- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). \n- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.\n- In deep learning, we usually recommend that you: \n - Choose the learning rate that better minimizes the cost function.\n - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) \n7 - Test with your own image (optional/ungraded exercise)\nCongratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Change your image's name in the following code\n 4. 
Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!", "## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"h1.jpg\" # change this to the name of your image file \n## END CODE HERE ##\n\n# We preprocess the image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\n\nmy_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T\nmy_predicted_image = predict(d[\"w\"], d[\"b\"], my_image)\n\nplt.imshow(image)\nprint(\"y = \" + str(np.squeeze(my_predicted_image)) + \", your algorithm predicts a \\\"\" + classes[int(np.squeeze(my_predicted_image)),].decode(\"utf-8\") + \"\\\" picture.\")\n\n ## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"h2.jpeg\" # change this to the name of your image file \n## END CODE HERE ##\n\n# We preprocess the image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\n\nmy_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T\nmy_predicted_image = predict(d[\"w\"], d[\"b\"], my_image)\n\nplt.imshow(image)\nprint(\"y = \" + str(np.squeeze(my_predicted_image)) + \", your algorithm predicts a \\\"\" + classes[int(np.squeeze(my_predicted_image)),].decode(\"utf-8\") + \"\\\" picture.\")", "<font color='blue'>\nWhat to remember from this assignment:\n1. Preprocessing the dataset is important.\n2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().\n3. Tuning the learning rate (which is an example of a \"hyperparameter\") can make a big difference to the algorithm. You will see more examples of this later in this course!\nFinally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. 
Once you submit, things you can play with include:\n - Play with the learning rate and the number of iterations\n - Try different initialization methods and compare the results\n - Test other preprocessings (center the data, or divide each row by its standard deviation)\nBibliography:\n- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/\n- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
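The sigmoid / propagate / optimize building blocks of the logistic-regression notebook above can be exercised end-to-end on a tiny synthetic problem. The sketch below is a condensed, self-contained re-implementation with made-up toy data — it is not the graded solution and the variable values are illustrative only.

```python
import numpy as np

def sigmoid(z):
    # Logistic function, applied element-wise.
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    # Forward pass: activations and cross-entropy cost.
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = (-1.0 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    # Backward pass: gradients of the cost w.r.t. w and b.
    dw = (1.0 / m) * np.dot(X, (A - Y).T)
    db = (1.0 / m) * np.sum(A - Y)
    return dw, db, cost

def optimize(w, b, X, Y, num_iterations, learning_rate):
    costs = []
    for i in range(num_iterations):
        dw, db, cost = propagate(w, b, X, Y)
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
    return w, b, costs

# Toy, linearly separable data: the label is 1 exactly when the first feature is positive.
X = np.array([[1.0, 2.0, -1.0, -2.0],
              [0.5, -0.5, 0.5, -0.5]])
Y = np.array([[1, 1, 0, 0]])

w, b = np.zeros((2, 1)), 0.0
w, b, costs = optimize(w, b, X, Y, num_iterations=1000, learning_rate=0.5)

preds = (sigmoid(np.dot(w.T, X) + b) > 0.5).astype(int)
print(preds)                  # should recover the training labels
print(costs[0] > costs[-1])   # cost decreased over the iterations
```

On separable data like this, gradient descent drives the cost down from log(2) toward zero, which is why the assignment uses a decreasing learning curve as its sanity check.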
evanmiltenburg/python-for-text-analysis
Extra_Material/Examples/Separate_Files/Using separate files.ipynb
apache-2.0
[ "Using separate files\nWhen you have a big project, you'll want to organize your code so that it's easy to manage. Instead of one large notebook, or one big standalone .py file, it's often a good idea to create several smaller .py files that you can import in your main file or notebook.\nI put a script in the main directory (i.e. the same one as this notebook). You can import it like this:", "import main_dir_script\n\nmain_dir_script.hello_world()", "You can also import specific functions like this:", "from main_dir_script import hello_world\n\nhello_world()", "But the result of this may be that you get a very cluttered main directory. In order to prevent this, we can make a separate folder with all the scripts. I've put an example below. (Also look in the actual folder!)\nIMPORTANT: there needs to be a file called __init__.py in the scripts folder. Without it, Python doesn't know it can import scripts from there.", "from scripts import adele\n\nadele.hello()\n\nfrom scripts.adele import hello\n\nhello()", "And that's all there is to it! Good luck organizing your code :)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
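The `__init__.py` requirement described in the notebook above can be demonstrated without touching a real project layout. The sketch below builds a throwaway package in a temporary directory and imports from it; the names `scripts_demo` and `adele_demo` are invented for illustration.

```python
import os
import sys
import tempfile
import importlib

# Build a temporary folder containing a package directory.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "scripts_demo")
os.makedirs(pkg_dir)

# The (empty) __init__.py marks the folder as a regular, importable package.
open(os.path.join(pkg_dir, "__init__.py"), "w").close()

# A module inside the package, analogous to scripts/adele.py.
with open(os.path.join(pkg_dir, "adele_demo.py"), "w") as f:
    f.write("def hello():\n    return 'Hello from the other side'\n")

# Make the temporary folder visible to the import system, then import.
sys.path.insert(0, root)
adele_demo = importlib.import_module("scripts_demo.adele_demo")
print(adele_demo.hello())
```

One caveat: since Python 3.3, namespace packages can be imported even without `__init__.py`, but keeping the file remains the explicit, conventional choice and is what the notebook's advice assumes.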
tritemio/multispot_paper
realtime kinetics/8-spot dsDNA steady-state - Summary.ipynb
mit
[ "Summary\n<p class=\"lead\">This notebook summarizes the realtime-kinetic measurements.\n</p>\n\nRequirement\nBefore running this notebook, you need to pre-process the data with:\n\n8-spot dsDNA-steady-state - Run-All\n\nThis pre-processing analyzes the measurement data files, \ncompute the moving-window slices, the number of bursts \nand fits the population fractions. All results are saved as TXT in \nthe results folder.\nThe present notebook loads these results and presents a summary.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nfrom pathlib import Path\nfrom scipy.stats import linregress\n\ndir_ = r'C:\\Data\\Antonio\\data\\8-spot 5samples data\\2013-05-15/'\n\nfilenames = [str(f) for f in Path(dir_).glob('*.hdf5')]\nfilenames\n\nkeys = [f.stem.split('_')[0] for f in Path(dir_).glob('*.hdf5')]\nkeys\n\nfilenames_dict = {k: v.stem for k, v in zip(keys, Path(dir_).glob('*.hdf5'))}\nfilenames_dict\n\ndef _filename_fit(idx, method, window, step):\n return 'results/%s_%sfit_ampl_only__window%ds_step%ds.txt' % (filenames_dict[idx], method, window, step)\n\ndef _filename_nb(idx, window, step):\n return 'results/%s_burst_data_vs_time__window%ds_step%ds.txt' % (filenames_dict[idx], window, step)\n\ndef process(meas_id):\n methods = ['em', 'll', 'hist']\n\n fig_width = 14\n fs = 18\n def savefig(title, **kwargs):\n plt.savefig(\"figures/Meas%s %s\" % (meas_id, title))\n\n bursts = pd.DataFrame.from_csv(_filename_nb(meas_id, window=30, step=1))\n \n nbm = bursts.num_bursts.mean()\n nbc = bursts.num_bursts_detrend\n print(\"Number of bursts (detrended): %7.1f MEAN, %7.1f VAR, %6.3f VAR/MEAN\" % \n (nbm, nbc.var(), nbc.var()/nbm))\n \n fig, ax = plt.subplots(figsize=(fig_width, 3))\n ax.plot(bursts.tstart, bursts.num_bursts)\n ax.plot(bursts.tstart, bursts.num_bursts_linregress, 'r')\n title = 'Number of bursts - Full measurement'\n ax.set_title(title, fontsize=fs)\n savefig(title)\n fig, ax = 
plt.subplots(figsize=(fig_width, 3))\n ax.plot(bursts.tstart, bursts.num_bursts_detrend)\n ax.axhline(nbm, color='r')\n title = 'Number of bursts (detrended) - Full measurement'\n ax.set_title(title, fontsize=fs)\n savefig(title)\n \n params = {}\n for window in (5, 30):\n for method in methods:\n p = pd.DataFrame.from_csv(_filename_fit(meas_id, method=method, \n window=window, step=1))\n params[method, window, 1] = p\n\n meth = 'em'\n fig, ax = plt.subplots(figsize=(fig_width, 3))\n ax.plot('kinetics', data=params[meth, 5, 1], marker='h', lw=0, color='gray', alpha=0.2)\n ax.plot('kinetics', data=params[meth, 30, 1], marker='h', lw=0, alpha=0.5)\n ax.plot('kinetics_linregress', data=params[meth, 30, 1], color='r')\n title = 'Population fraction - Full measurement'\n ax.set_title(title, fontsize=fs)\n savefig(title)\n \n px = params\n print('Kinetics 30s: %.3f STD, %.3f STD detrended.' % \n ((100*px[meth, 30, 1].kinetics).std(), \n (100*px[meth, 30, 1].kinetics_linregress).std()))", "Measurement 0", "process(meas_id = '7d')", "Measurement 1", "process(meas_id = '12d')", "Measurement 2", "process(meas_id = '17d')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
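The detrending used in the kinetics notebook above (fitting a linear drift to the burst counts, subtracting it, then comparing VAR/MEAN) can be sketched with a closed-form least-squares fit. The sketch below is a pure-Python stand-in for `scipy.stats.linregress` on synthetic numbers, not the measurement data.

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope and intercept for y ~ slope*x + intercept.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def detrend(xs, ys):
    # Remove the fitted linear drift but keep the overall mean,
    # mirroring a "num_bursts_detrend"-style column.
    slope, intercept = linear_fit(xs, ys)
    mean_y = sum(ys) / len(ys)
    return [y - (slope * x + intercept) + mean_y for x, y in zip(xs, ys)]

# Synthetic burst counts with a slow downward drift plus small fluctuations.
times = list(range(10))
counts = [100 - 2 * t + (1 if t % 2 else -1) for t in times]

flat = detrend(times, counts)
mean = sum(flat) / len(flat)
var = sum((c - mean) ** 2 for c in flat) / len(flat)
# After detrending, the variance reflects only the fluctuations, not the drift.
print(round(mean, 6), round(var, 6))
```

This is why the notebook reports VAR/MEAN on the detrended column: with the drift removed, the ratio probes whether the residual burst-count fluctuations are consistent with shot noise.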
tensorflow/docs
site/en/guide/gpu.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Use a GPU\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/gpu\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/gpu.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/gpu.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/gpu.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nTensorFlow code, and tf.keras models will transparently run on a single GPU with no code changes required.\nNote: Use tf.config.list_physical_devices('GPU') to confirm that TensorFlow is using the GPU.\nThe simplest way to run on multiple GPUs, on one or many machines, is using Distribution Strategies.\nThis guide is for users who have tried these approaches and found that they need fine-grained control of how TensorFlow uses the GPU. 
To learn how to debug performance issues for single and multi-GPU scenarios, see the Optimize TensorFlow GPU Performance guide.\nSetup\nEnsure you have the latest TensorFlow gpu release installed.", "import tensorflow as tf\nprint(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))", "Overview\nTensorFlow supports running computations on a variety of types of devices, including CPU and GPU. They are represented with string identifiers for example:\n\n\"/device:CPU:0\": The CPU of your machine.\n\"/GPU:0\": Short-hand notation for the first GPU of your machine that is visible to TensorFlow.\n\"/job:localhost/replica:0/task:0/device:GPU:1\": Fully qualified name of the second GPU of your machine that is visible to TensorFlow.\n\nIf a TensorFlow operation has both CPU and GPU implementations, by default, the GPU device is prioritized when the operation is assigned. For example, tf.matmul has both CPU and GPU kernels and on a system with devices CPU:0 and GPU:0, the GPU:0 device is selected to run tf.matmul unless you explicitly request to run it on another device.\nIf a TensorFlow operation has no corresponding GPU implementation, then the operation falls back to the CPU device. For example, since tf.cast only has a CPU kernel, on a system with devices CPU:0 and GPU:0, the CPU:0 device is selected to run tf.cast, even if requested to run on the GPU:0 device.\nLogging device placement\nTo find out which devices your operations and tensors are assigned to, put\ntf.debugging.set_log_device_placement(True) as the first statement of your\nprogram. 
Enabling device placement logging causes any Tensor allocations or operations to be printed.", "tf.debugging.set_log_device_placement(True)\n\n# Create some tensors\na = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\nb = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\nc = tf.matmul(a, b)\n\nprint(c)", "The above code will print an indication the MatMul op was executed on GPU:0.\nManual device placement\nIf you would like a particular operation to run on a device of your choice\ninstead of what's automatically selected for you, you can use with tf.device\nto create a device context, and all the operations within that context will\nrun on the same designated device.", "tf.debugging.set_log_device_placement(True)\n\n# Place tensors on the CPU\nwith tf.device('/CPU:0'):\n a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n\n# Run on the GPU\nc = tf.matmul(a, b)\nprint(c)", "You will see that now a and b are assigned to CPU:0. Since a device was\nnot explicitly specified for the MatMul operation, the TensorFlow runtime will\nchoose one based on the operation and available devices (GPU:0 in this\nexample) and automatically copy tensors between devices if required.\nLimiting GPU memory growth\nBy default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to\nCUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. 
To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method.", "gpus = tf.config.list_physical_devices('GPU')\nif gpus:\n # Restrict TensorFlow to only use the first GPU\n try:\n tf.config.set_visible_devices(gpus[0], 'GPU')\n logical_gpus = tf.config.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPU\")\n except RuntimeError as e:\n # Visible devices must be set before GPUs have been initialized\n print(e)", "In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two methods to control this.\nThe first option is to turn on memory growth by calling tf.config.experimental.set_memory_growth, which attempts to allocate only as much GPU memory as needed for the runtime allocations: it starts out allocating very little memory, and as the program gets run and more GPU memory is needed, the GPU memory region is extended for the TensorFlow process. Memory is not released since it can lead to memory fragmentation. To turn on memory growth for a specific GPU, use the following code prior to allocating any tensors or executing any ops.", "gpus = tf.config.list_physical_devices('GPU')\nif gpus:\n try:\n # Currently, memory growth needs to be the same across GPUs\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n logical_gpus = tf.config.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n # Memory growth must be set before GPUs have been initialized\n print(e)", "Another way to enable this option is to set the environmental variable TF_FORCE_GPU_ALLOW_GROWTH to true. 
This configuration is platform specific.\nThe second method is to configure a virtual GPU device with tf.config.set_logical_device_configuration and set a hard limit on the total memory to allocate on the GPU.", "gpus = tf.config.list_physical_devices('GPU')\nif gpus:\n # Restrict TensorFlow to only allocate 1GB of memory on the first GPU\n try:\n tf.config.set_logical_device_configuration(\n gpus[0],\n [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])\n logical_gpus = tf.config.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n # Virtual devices must be set before GPUs have been initialized\n print(e)", "This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process. This is common practice for local development when the GPU is shared with other applications such as a workstation GUI.\nUsing a single GPU on a multi-GPU system\nIf you have more than one GPU in your system, the GPU with the lowest ID will be\nselected by default. 
If you would like to run on a different GPU, you will need\nto specify the preference explicitly:", "tf.debugging.set_log_device_placement(True)\n\ntry:\n # Specify an invalid GPU device\n with tf.device('/device:GPU:2'):\n a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n c = tf.matmul(a, b)\nexcept RuntimeError as e:\n print(e)", "If the device you have specified does not exist, you will get a RuntimeError: .../device:GPU:2 unknown device.\nIf you would like TensorFlow to automatically choose an existing and supported device to run the operations in case the specified one doesn't exist, you can call tf.config.set_soft_device_placement(True).", "tf.config.set_soft_device_placement(True)\ntf.debugging.set_log_device_placement(True)\n\n# Creates some tensors\na = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\nb = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\nc = tf.matmul(a, b)\n\nprint(c)", "Using multiple GPUs\nDeveloping for multiple GPUs will allow a model to scale with the additional resources. If developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. 
This enables easy testing of multi-GPU setups without requiring additional resources.", "gpus = tf.config.list_physical_devices('GPU')\nif gpus:\n # Create 2 virtual GPUs with 1GB memory each\n try:\n tf.config.set_logical_device_configuration(\n gpus[0],\n [tf.config.LogicalDeviceConfiguration(memory_limit=1024),\n tf.config.LogicalDeviceConfiguration(memory_limit=1024)])\n logical_gpus = tf.config.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPU,\", len(logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n # Virtual devices must be set before GPUs have been initialized\n print(e)", "Once there are multiple logical GPUs available to the runtime, you can utilize the multiple GPUs with tf.distribute.Strategy or with manual placement.\nWith tf.distribute.Strategy\nThe best practice for using multiple GPUs is to use tf.distribute.Strategy.\nHere is a simple example:", "tf.debugging.set_log_device_placement(True)\ngpus = tf.config.list_logical_devices('GPU')\nstrategy = tf.distribute.MirroredStrategy(gpus)\nwith strategy.scope():\n inputs = tf.keras.layers.Input(shape=(1,))\n predictions = tf.keras.layers.Dense(1)(inputs)\n model = tf.keras.models.Model(inputs=inputs, outputs=predictions)\n model.compile(loss='mse',\n optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))", "This program will run a copy of your model on each GPU, splitting the input data\nbetween them, also known as \"data parallelism\".\nFor more information about distribution strategies, check out the guide here.\nManual placement\ntf.distribute.Strategy works under the hood by replicating computation across devices. You can manually implement replication by constructing your model on each GPU. 
For example:", "tf.debugging.set_log_device_placement(True)\n\ngpus = tf.config.list_logical_devices('GPU')\nif gpus:\n # Replicate your computation on multiple GPUs\n c = []\n for gpu in gpus:\n with tf.device(gpu.name):\n a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n c.append(tf.matmul(a, b))\n\n with tf.device('/CPU:0'):\n matmul_sum = tf.add_n(c)\n\n print(matmul_sum)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sangheestyle/ml2015project
howto/model07_using_pipeline.ipynb
mit
[ "model07\nExplain this model\nModel\n\nLinear models: SVM, Ridge, Lasso\n\nFeatures\n\nuid\nqid\nq_length\ncategory\nanswer\navg_per_uid: average response time per user\navg_per_qid: average response time per question\n\nLet's start our experimemt\nStep1: Read train and test data\nRead files for train and test set\nWe alread made given csv files as a pickled data for our convenience.", "import gzip\nimport pickle\nfrom numpy import sign\n\nwith gzip.open(\"../data/train.pklz\", \"rb\") as train_file:\n train_set = pickle.load(train_file)\n\nwith gzip.open(\"../data/test.pklz\", \"rb\") as test_file:\n test_set = pickle.load(test_file)\n\nwith gzip.open(\"../data/questions.pklz\", \"rb\") as questions_file:\n questions = pickle.load(questions_file)\n\n\nfor key in train_set:\n train_set[key]['sign_val'] = sign(train_set[key]['position'])", "What they have?\nJust look at what each set have.", "print (\"* train_set:\", train_set[1])\nprint (\"* test_set:\", test_set[7])\nprint (\"* question keys:\", questions[1].keys())", "Step2: Feature Engineering\nWe might want to use some set of feature based on given data.", "from collections import defaultdict\nfrom numpy import sign\n\n\n\"\"\"\nCalculate average position(response time) per user(uid) and question(qid).\nParam:\n data: dataset\n sign_val: -1 for negetive only, +1 positive only, None both\n\"\"\"\ndef get_avg_pos(data, sign_val=None):\n pos_uid = defaultdict(list)\n pos_qid = defaultdict(list)\n\n for key in data:\n if sign_val and sign(data[key]['position']) != sign_val:\n continue\n uid = data[key]['uid']\n qid = data[key]['qid']\n pos = data[key]['position']\n pos_uid[uid].append(pos)\n pos_qid[qid].append(pos)\n\n avg_pos_uid = {}\n avg_pos_qid = {}\n\n for key in pos_uid:\n avg_pos_uid[key] = sum(pos_uid[key]) / len(pos_uid[key])\n\n for key in pos_qid:\n avg_pos_qid[key] = sum(pos_qid[key]) / len(pos_qid[key])\n \n return [avg_pos_uid, avg_pos_qid]\n\n\n\"\"\"\nMake feature vectors for given data 
set\n\"\"\"\ndef featurize(data, avg_pos, sign_val=None):\n X = []\n avg_pos_uid = avg_pos[0]\n avg_pos_qid = avg_pos[1]\n for key in data:\n if sign_val and data[key]['sign_val'] != sign_val:\n continue\n uid = data[key]['uid']\n qid = data[key]['qid']\n q_length = max(questions[qid]['pos_token'].keys())\n category = questions[qid]['category'].lower()\n answer = questions[qid]['answer'].lower()\n if uid in avg_pos_uid:\n pos_uid = avg_pos_uid[uid]\n else:\n pos_uid = sum(avg_pos_uid.values()) / float(len(avg_pos_uid.values()))\n \n if qid in avg_pos_qid:\n pos_qid = avg_pos_qid[qid]\n else:\n pos_qid = sum(avg_pos_qid.values()) / float(len(avg_pos_qid.values()))\n \n feat = {\"uid\": str(uid),\n \"qid\": str(qid),\n \"q_length\": q_length,\n \"category\": category,\n \"answer\": answer,\n \"sign_val\": data[key]['sign_val'],\n \"avg_pos_uid\": pos_uid,\n \"avg_pos_qid\": pos_qid\n }\n X.append(feat)\n \n return X\n\n\"\"\"\nTemporary: (test only)Make feature vectors for given data set\n\"\"\"\ndef featurize_test(data):\n X = []\n for key in data:\n uid = data[key]['uid']\n qid = data[key]['qid']\n q_length = max(questions[qid]['pos_token'].keys())\n category = questions[qid]['category'].lower()\n answer = questions[qid]['answer'].lower()\n feat = {\"uid\": str(uid),\n \"qid\": str(qid),\n \"q_length\": q_length,\n \"category\": category,\n \"answer\": answer,\n }\n X.append(feat)\n \n return X\n\n\n\"\"\"\nGet positions\n\"\"\"\ndef get_positions(data, sign_val=None):\n Y = []\n for key in data:\n if sign_val and sign(data[key]['position']) != sign_val:\n continue\n position = data[key]['position']\n Y.append(position)\n \n return Y\n\n\"\"\"\nSelect values by keys only\n\"\"\"\ndef select_keys(data, keys):\n unwanted = data[0].keys() - keys\n for item in data:\n for unwanted_key in unwanted:\n del item[unwanted_key]\n return data\n\nX_train_pos = featurize(train_set, get_avg_pos(train_set, sign_val=1), sign_val=1)\nX_train_pos[0]", "Look at the feature vector.", 
"regression_keys = ['avg_pos_uid', 'avg_pos_qid', 'category', 'q_length', \"sign_val\"]\nX_train_pos = featurize(train_set, get_avg_pos(train_set, sign_val=1), sign_val=1)\nX_train_pos = select_keys(X_train_pos, regression_keys)\nY_train_pos = get_positions(train_set, sign_val=1)\nprint(len(X_train_pos))\nprint(len(Y_train_pos))\nX_train_pos[0], Y_train_pos[0]\n\nX_train_neg = featurize(train_set, get_avg_pos(train_set, sign_val=-1), sign_val=-1)\nX_train_neg = select_keys(X_train_neg, regression_keys)\nY_train_neg = get_positions(train_set, sign_val=-1)\nprint (len(X_train_neg))\nprint (len(Y_train_neg))\nprint (X_train_neg[5], Y_train_neg[5])\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ncat = [[item['category'], Y_train_pos[ii]] for ii, item in enumerate(X_train_pos)]\nprint (set([item[0] for item in cat]))\n\nplt.figure(figsize=(15, 10), dpi=400)\nplt.xlabel(\"Position\")\nplt.ylabel(\"Frequency\")\n#for category in set([item[0] for item in cat]):\nfor category in ['astronomy','biology', 'chemistry', 'earth science', 'physics']:\n plt.hist([item[1] for item in cat if item[0]==category],\n bins=70,\n histtype=\"step\",\n alpha=.8,\n label=category\n )\n plt.legend(title='Category')\n\ncat = [[item['category'], Y_train_neg[ii]] for ii, item in enumerate(X_train_neg)]\nprint (set([item[0] for item in cat]))\n\nplt.figure(figsize=(15, 10), dpi=400)\nplt.xlabel(\"Position\")\nplt.ylabel(\"Frequency\")\n#for category in set([item[0] for item in cat]):\nfor category in ['astronomy','biology', 'chemistry', 'earth science', 'physics']:\n plt.hist([item[1] for item in cat if item[0]==category],\n bins=70,\n histtype=\"step\",\n alpha=.8,\n label=category\n )\n plt.legend(title='Category')", "Step3: Cross varidation", "import multiprocessing\nfrom sklearn import linear_model\nfrom sklearn.cross_validation import train_test_split, cross_val_score\nfrom sklearn.feature_extraction import DictVectorizer\nimport math\nfrom numpy import abs, sqrt\n\nvec = 
DictVectorizer()\nX_train_pos = vec.fit_transform(X_train_pos)\n\nregressor_names = \"\"\"\nLinearRegression\nRidge\nLasso\nElasticNet\n\"\"\"\nprint (\"=== Linear Cross validation RMSE scores:\")\nfor regressor in regressor_names.split():\n scores = cross_val_score(getattr(linear_model, regressor)(),\n X_train_pos, Y_train_pos,\n cv=10,\n scoring='mean_squared_error',\n n_jobs=multiprocessing.cpu_count()-1\n )\n print (regressor, sqrt(abs(scores)).mean())\n\nX_train_neg = vec.fit_transform(X_train_neg)\nfor regressor in regressor_names.split():\n scores = cross_val_score(getattr(linear_model, regressor)(),\n X_train_neg, Y_train_neg,\n cv=10,\n scoring='mean_squared_error',\n n_jobs=multiprocessing.cpu_count()-1\n )\n print (regressor, sqrt(abs(scores)).mean())\n\nsvm_keys = ['qid', 'uid', 'q_length', 'category', 'answer']\nX_train = featurize(train_set, get_avg_pos(train_set))\nX_train = select_keys(X_train, svm_keys)\nY_train = get_positions(train_set)\nprint (len(X_train))\nprint (len(Y_train))\nprint (X_train[0], Y_train[0])\nY_train = get_positions(train_set)\nX_train = vec.fit_transform(X_train)\n\nfrom numpy import sign\nimport numpy as np\nfrom sklearn import svm, grid_search\nfrom sklearn.cross_validation import StratifiedShuffleSplit\nfrom sklearn.grid_search import GridSearchCV\n\n\nfor kernel in ['rbf']:\n svr = svm.SVC(kernel=kernel, gamma=0.1)\n scores = cross_val_score(svr,\n X_train, sign(Y_train),\n cv=10,\n scoring=None,\n n_jobs=multiprocessing.cpu_count()-1\n )\n print (kernel)\n print (scores)\n print (scores.mean())", "Step4: Prediction", "svm_keys = ['qid', 'uid', 'q_length', 'category', 'answer']\nX_train = featurize(train_set, get_avg_pos(train_set))\nX_train = select_keys(X_train, svm_keys)\nY_train = get_positions(train_set)\n\nX_test = featurize_test(test_set)\nX_test = select_keys(X_test, svm_keys)\n\nX_train_length = len(X_train)\nX = vec.fit_transform(X_train + X_test)\nX_train = X[:X_train_length]\nX_test = X[X_train_length:]\n\nsvr 
= svm.SVC(kernel='rbf', gamma=0.1)\nsvr.fit(X_train, sign(Y_train))\n\npredictions = svr.predict(X_test)\n\nprint (len(sign(Y_train)))\nprint (sum(sign(Y_train)))\nprint (len(predictions))\nprint (predictions.sum())\n\nregression_keys = ['avg_pos_uid', 'avg_pos_qid', 'q_length', 'sign_val']\navg_pos_pos = get_avg_pos(train_set, sign_val=1)\nX_train_pos = featurize(train_set, avg_pos_pos, sign_val=1)\nX_train_pos = select_keys(X_train_pos, regression_keys)\nY_train_pos = get_positions(train_set, sign_val=1)\n\navg_pos_neg = get_avg_pos(train_set, sign_val=-1)\nX_train_neg = featurize(train_set, avg_pos_neg, sign_val=-1)\nX_train_neg = select_keys(X_train_neg, regression_keys)\nY_train_neg = get_positions(train_set, sign_val=-1)\n\nX_test = test_set\n\nfor index, key in enumerate(X_test):\n X_test[key]['sign_val'] = predictions[index]\n\nX_test_final = []\nfor index, key in enumerate(X_test):\n if predictions[index] == 1.0:\n X_test_final.append(featurize({key: X_test[key]}, avg_pos_pos, sign_val=1)[0])\n else:\n X_test_final.append(featurize({key: X_test[key]}, avg_pos_neg, sign_val=-1)[0])\n\nregression_keys = ['avg_pos_uid', 'avg_pos_qid', 'q_length', 'sign_val']\nX_train_pos = featurize(train_set, get_avg_pos(train_set, sign_val=1), sign_val=1)\nX_train_pos = select_keys(X_train_pos, regression_keys)\nY_train_pos = get_positions(train_set, sign_val=1)\n\nX_train_neg = featurize(train_set, get_avg_pos(train_set, sign_val=-1), sign_val=-1)\nX_train_neg = select_keys(X_train_neg, regression_keys)\nY_train_neg = get_positions(train_set, sign_val=-1)\n\nX_train_pos = vec.fit_transform(X_train_pos)\nX_train_neg = vec.fit_transform(X_train_neg)\n\nregressor_pos = linear_model.Lasso()\nregressor_pos.fit(X_train_pos, Y_train_pos)\n\nregressor_neg = linear_model.Lasso()\nregressor_neg.fit(X_train_neg, Y_train_neg)\n\nX_test_final = select_keys(X_test_final, regression_keys)\n\nX_test_final_vec = vec.fit_transform(X_test_final)\nfinal_predictions = []\nfor index, item in 
enumerate(X_test_final):\n if item['sign_val'] == 1.0:\n final_predictions.append(regressor_pos.predict(X_test_final_vec[index])[0])\n else:\n final_predictions.append(regressor_neg.predict(X_test_final_vec[index])[0])\n\nfinal_predictions = sorted([[id, final_predictions[index]] for index, id in enumerate(test_set.keys())])\nprint (len(final_predictions))\nfinal_predictions[:5]", "Step5: Writing submission.", "import csv\n\n\nfinal_predictions.insert(0,[\"id\", \"position\"])\nwith open('guess.csv', 'w') as fp:\n writer = csv.writer(fp, delimiter=',')\n writer.writerows(final_predictions)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
theandygross/HIV_Methylation
PreProcessing/BMIQ_Horvath.ipynb
mit
[ "Horvath Normalization Pipeline\nHere we run the data normalization pipeline following Steve Horvath's method. Briefly he uses a variant of BMIQ which normalizes all of the data to a gold standard. His method only uses a subset of the probes, so we can throw out the majority of the data here. \nUnlike the normalization with the full data I do not yet do an adjustment for cellular composition as this was not in Horvath's pipeline. Rather, I do the adjustment after the normalization.", "cd ..\n\nimport NotebookImport\nfrom Setup.Imports import *\n\nimport os as os\nimport pandas as pd\nfrom pandas.rpy.common import convert_to_r_dataframe, convert_robj\nimport rpy2.robjects as robjects\nfrom IPython.display import clear_output", "Load Horvath normalization source into R namespace.", "robjects.r.library('WGCNA');\nrobjects.r.source(\"/cellar/users/agross/Data/MethylationAge/Horvath/NORMALIZATION.R\")\nclear_output()", "Read in Betas", "path = '/cellar/users/agross/TCGA_Code/Methlation/data/'\nf = path + 'all_betas_raw.csv'\ndf = pd.read_csv(f, low_memory=True, header=0, index_col=0)\n\nlabels = pd.read_csv(path + 'all_betas_raw_pdata.csv',\n index_col=0)", "Normalization Step", "gold_standard = pd.read_csv('/cellar/users/agross/Data/MethylationAge/Horvath/probeAnnotation21kdatMethUsed.csv', index_col=0)\nhorvath = pd.read_table('/cellar/users/agross/TCGA_Code/Methlation/data/Horvath_Model.csv', index_col=0, skiprows=[0,1])\nintercept = horvath.CoefficientTraining['(Intercept)']\nhorvath = horvath.iloc[1:]\n\ndf = df.ix[gold_standard.index]\ndf = df.T.fillna(gold_standard.goldstandard2).T\n\ndf_r = robjects.r.t(convert_to_r_dataframe(df))\ngs = list(gold_standard.goldstandard2.ix[df.index])\ngs_r = robjects.FloatVector(gs)\n\ndel df\n\ndata_n = robjects.r.BMIQcalibration(df_r, gs_r)\ndata_n = convert_robj(data_n).T\nclear_output()", "Now we need to fix the labels a little bit.", "c1 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD 
methylomestudy.xlsx', \n 'HIV- samples from OldStudy', index_col=0)\nc2 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx', \n 'HIV+ samples', index_col=0)\nclinical = c1.append(c2)\n\ns = labels[labels.studyIndex == 's2'].sampleNames\nss = clinical[['Sample_Plate','Sample_Well']].sort(['Sample_Plate','Sample_Well'])\nassert(alltrue(ss.Sample_Well == s))\n\nnew_label = clinical.sort(['Sample_Plate','Sample_Well']).index\nnew_label = pd.Series(new_label, s.index)\nnew_names = labels['sampleNames'].replace(new_label.to_dict())\nnew_labels = labels['sampleNames'].ix[new_label.index] = new_label\nnew_labels = new_labels.combine_first(labels.sampleNames)\n\nlabels['sampleNames'] = new_labels\nnew_labels2 = labels[labels.studyIndex == 's3'].sampleNames\nnew_labels2 = new_labels2.map(lambda s: '_'.join(s.split('_')[1:]))\nnew_labels2 = new_labels2.combine_first(new_labels)\nlabels['sampleNames'] = new_labels2\n\ndata_n.columns = list(new_labels2)\ndata_n = data_n.astype(float)\ndata_n.to_hdf(HDFS_DIR + 'methylation_norm.h5','BMIQ_Horvath')\n\nlabels.to_hdf(HDFS_DIR + 'methylation_norm.h5','labels')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Wei1234c/Elastic_Network_of_Things_with_MQTT_and_MicroPython
notebooks/tools/上傳檔案 - MQTT.ipynb
gpl-3.0
[ "上傳檔案 - MQTT\n需先安裝 ampy ( Adafruit MicroPython Tool )\npip install adafruit-ampy\nhttps://github.com/adafruit/ampy", "import os, sys\nsys.path.append(os.path.join('..', '..', '..', '..', '..', ))\n\nimport ampy_utils\n# with open(os.path.join('..', '..', '..', '..', '..', 'ampy_utils.py'), 'r') as f:\n# print(f.read())", "<font color='blue'>\n設定COM port (set current COM port), baud rate", "ampy_utils.baud_rate = 115200\nampy_utils.com_port = 'COM3'\n# ampy_utils.com_port = '/dev/ttyUSB0'", "Load utility functions", "from ampy_utils import *", "<font color='blue'>\nCopy files and folders to device", "root_folders = [os.path.sep.join(['..', '..', 'codes', 'micropython']),\n os.path.sep.join(['..', '..', 'codes', 'micropython_mqtt']),\n os.path.sep.join(['..', '..', 'codes', 'node']),\n os.path.sep.join(['..', '..', 'codes', 'shared']),]\nfolders = []\n\nformat_put_files_folders(root_folders = root_folders,\n folders = folders, \n format_first = True)", "單一檔案上傳 (single file upload, in case needed)", "# copy_one_file_to_device(os.path.sep.join(['..', '..', 'codes', 'micropython']), 'main.py')\n\n# copy_one_file_to_device(os.path.sep.join(['..', '..', 'codes', 'shared']), 'config_mqtt.py')\n\n# copy_one_file_to_device(os.path.sep.join(['..', '..', '..', 'dmz']), 'config_mqtt.py')\n\n# copy_one_file_to_device(os.path.sep.join(['..', '..', 'codes', 'node']), 'node.py')", "列出檔案 (list files)", "# list_files_in_device()", "檢查檔案內容 (check file content)", "# cat_file_from_device('main.py')", "連網測試 (network config and test)", "# 連上網路\n# import network; nic=network.WLAN(network.STA_IF); nic.active(False); # disable network\n# import network; nic=network.WLAN(network.STA_IF); nic.active(True); nic.connect('SSID','password');nic.ifconfig()\n# import network; nic=network.WLAN(network.STA_IF); nic.ifconfig()\n# import network; nic=network.WLAN(network.STA_IF);nic.ifconfig();nic.config('mac');nic.ifconfig((['mac',])", "Run Broker container on Raspberry Pi\ncopy folder 'codes' to 
Raspberry Pi under folder '/data/elastic_network_of_things_with_micropython',\nso Raspberry Pi has folder '/data/elastic_network_of_things_with_micropython/codes'\nthen run the command below on Raspberry Pi.\ndocker run -it -p 9662:9662 --name=Broker --hostname=Broker --volume=/data/elastic_network_of_things_with_micropython:/project wei1234c/python_armv7 /bin/sh -c \"cd /project/codes/broker &amp;&amp; python3 broker.py\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nadvamir/deep-learning
transfer-learning/Transfer_Learning_Solution.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. \nThis is a really nice implementation of VGGNet, quite easy to work with. 
The network has already been trained and the parameters are available from this link.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $244 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code:\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n images = np.concatenate(batch)\n\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. 
The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn.preprocessing import LabelBinarizer\n\nlb = LabelBinarizer()\nlb.fit(labels)\n\nlabels_vecs = lb.transform(labels)", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. 
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. 
Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n \nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. 
Of course, you'll be able to see my solution if you need help.", "epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {inputs_: x,\n labels_: y}\n loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration: {}\".format(iteration),\n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x,\n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Validation Acc: {:.4f}\".format(val_acc))\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nwith tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = 
sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dsg-bielefeld/pentoref
code/ipython_notebooks/PentoRef_Exploration_sqlite_databases_1.ipynb
gpl-3.0
[ "Explore with Sqlite databases", "import sys\nsys.path.append(\"../python/\")\nimport pentoref.IO as IO\nimport sqlite3 as sqlite\n\n# Create databases if required\nif False: # make True if you need to create the databases from the derived data\n for corpus_name in [\"TAKE\", \"TAKECV\", \"PENTOCV\"]:\n data_dir = \"../../../pentoref/{0}_PENTOREF\".format(corpus_name)\n dfwords, dfutts, dfrefs, dfscenes, dfactions = IO.convert_subcorpus_raw_data_to_dataframes(data_dir)\n IO.write_corpus_to_database(\"{0}.db\".format(corpus_name),\n corpus_name, dfwords, dfutts, dfrefs, dfscenes, dfactions)\n\n# Connect to database\nCORPUS = \"PENTOCV\"\ndb = sqlite.connect(\"{0}.db\".format(CORPUS))\ncursor = db.cursor()\n# get the table column header names\nprint(\"utts\", [x[1] for x in cursor.execute(\"PRAGMA table_info(utts)\")])\nprint(\"words\", [x[1] for x in cursor.execute(\"PRAGMA table_info(words)\")])\nprint(\"refs\", [x[1] for x in cursor.execute(\"PRAGMA table_info(refs)\")])\nprint(\"scenes\", [x[1] for x in cursor.execute(\"PRAGMA table_info(scenes)\")])\nprint(\"actions\", [x[1] for x in cursor.execute(\"PRAGMA table_info(actions)\")])", "Get utterances from certain time periods in each experiment or for certain episodes", "for row in db.execute(\"SELECT gameID, starttime, speaker, utt_clean FROM utts\" + \\\n \" WHERE starttime >= 200 AND starttime <= 300\" + \\\n ' AND gameID = \"r8_1_1_b\"' + \\\n \" ORDER BY gameID, starttime\"):\n print(row)", "Get mutual information between words used in referring expressions and properties of the referent", "from collections import Counter\nfrom pentoref.IOutils import clean_utt\n\npiece_counter = Counter()\nword_counter = Counter()\nword_piece_counter = Counter()\n\nfor row in db.execute(\"SELECT id, gameID, text, uttID FROM refs\"):\n#for row in db.execute(\"SELECT shape, colour, orientation, gridPosition, gameID, pieceID FROM scenes\"):\n #isTarget = db.execute('SELECT refID FROM refs WHERE gameID =\"' + row[4] + '\" AND 
pieceID =\"' + row[5] + '\"')\n #target = False \n #for r1 in isTarget:\n # target = True\n #if not target:\n # continue\n #print(r)\n #shape, colour, orientation, gridPosition, gameID, pieceID = row\n #piece = gridPosition #shape + \"_\" + colour\n piece, gameID, text, uttID = row\n \n \n if CORPUS in [\"TAKECV\", \"TAKE\"]:\n for f in db.execute('SELECT word from words WHERE gameID =\"' + str(gameID) + '\"'):\n #print(f)\n for word in f[0].lower().split():\n word_counter[word] += 1\n word_piece_counter[piece+\"__\"+word]+=1\n piece_counter[piece] += 1\n elif CORPUS == \"PENTOCV\":\n for word in clean_utt(text.lower()).split():\n word_counter[word] += 1\n word_piece_counter[piece+\"__\"+word]+=1\n piece_counter[piece] += 1\n\n\ngood_pieces = [\"X\", \"Y\", \"P\", \"N\", \"U\", \"F\", \"Z\", \"L\", \"T\", \"I\", \"W\", \"V\", \"UNK\"]\nprint(\"non standard pieces\", {k:v for k,v in piece_counter.items() if k not in good_pieces})\npiece_counter\n\nword_counter.most_common(20)\n\nword_total = sum(word_piece_counter.values())\npiece_total= sum(piece_counter.values())\n\nfor piece, p_count in piece_counter.items():\n print(\"piece:\", piece, p_count)\n p_piece = p_count/piece_total\n highest = -1\n best_word = \"\"\n rank = {}\n for word, w_count in word_counter.items():\n if w_count < 3: \n continue\n p_word = w_count / word_total\n p_word_piece = word_piece_counter[piece+\"__\"+word] / word_total\n mi = (p_word_piece/(p_piece * p_word))\n rank[word] = mi\n if mi > highest:\n highest = mi\n best_word = word\n if True:\n top = 5\n for k, v in sorted(rank.items(), key=lambda x:x[1], reverse=True):\n print(k, v)\n top -=1\n if top <= 0: \n break\n print(\"*\" * 30)\n\ndb.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Wei1234c/Elastic_Network_of_Things_with_MicroPython
notebooks/上傳檔案到 NodeMCU (Upload files to NodeMCU).ipynb
gpl-3.0
[ "Upload files to NodeMCU\nampy (Adafruit MicroPython Tool) must be installed first:\npip install adafruit-ampy\nhttps://github.com/adafruit/ampy", "import os", "Set the current COM port", "com_port = 'COM12'\n# com_port = 'COM13'\ncom_port = 'COM15'\n# com_port = 'COM16'", "List files", "# existing files\nfiles = !ampy --port {com_port} ls\nfiles", "Delete all files", "# clear everything\nfor file in files:\n print('Deleting {0}'.format(file))\n !ampy --port {com_port} rm {file}", "Functions for copying files", "def copy_one_file(folder, file): \n print('Copying {0}'.format(file))\n !ampy --port {com_port} put {os.path.join(folder, file)}\n \n \ndef copy_all_files(folders, main_filename = 'main.py'):\n files = !ampy --port {com_port} ls\n\n if main_filename in files:\n print('Deleting {0}'.format(main_filename))\n !ampy --port {com_port} rm {main_filename}\n\n for folder in folders: \n for file in os.listdir(folder):\n if not file.startswith('_') and not file.startswith(main_filename):\n print('Copying {0}'.format(file))\n !ampy --port {com_port} put {os.path.join(folder, file)} ", "Copy files onto the board (ESP8266 / NodeMCU; all needed files will be put in the same folder)", "folders = ['..\\\\codes\\\\micropython', '..\\\\codes\\\\node', '..\\\\codes\\\\shared']\nmain_filename = 'main.py'\n\ncopy_all_files(folders, main_filename)\n \ncopy_one_file('..\\\\codes\\\\micropython', main_filename) ", "Single file upload (in case needed)", "copy_one_file('..\\\\..\\\\dmz', 'config.py')\n\ncopy_one_file('..\\\\codes\\\\shared', 'config.py')\n\ncopy_one_file('..\\\\codes\\\\node', 'node.py')\n\ncopy_one_file('..\\\\codes\\\\micropython', 'u_python.py')", "List files", "# !ampy --port {com_port} ls", "Check file content", "# !ampy --port {com_port} get boot.py\n\n# !ampy --port {com_port} get main.py", "Network config and test", "# connect to the network\n# import network; nic=network.WLAN(network.STA_IF); nic.active(True); 
nic.connect('SSID','password');nic.ifconfig()\n# import network; nic=network.WLAN(network.STA_IF); nic.active(True); nic.connect('Kingnet-70M-$370', '');nic.ifconfig()\n# import network; nic=network.WLAN(network.STA_IF);nic.ifconfig();nic.config('mac');nic.ifconfig((['mac',])\n\n# send an HTTP request\n# import socket;addr=socket.getaddrinfo('micropython.org',80)[0][-1]\n# s = socket.socket();s.connect(addr);s.send(b'GET / HTTP/1.1\\r\\nHost: micropython.org\\r\\n\\r\\n');data = s.recv(1000);s.close()\n\n# Delete all files\n# import u_python;u_python.del_all_files();import os;os.listdir()\n", "Run Broker container on Raspberry Pi\nCopy the folder 'codes' to the Raspberry Pi under '/data/elastic_network_of_things_with_micropython',\nso that the Raspberry Pi has the folder '/data/elastic_network_of_things_with_micropython/codes',\nthen run the command below on the Raspberry Pi.\ndocker run -it -p 9662:9662 --name=Broker --hostname=Broker --volume=/data/elastic_network_of_things_with_micropython:/project wei1234c/python_armv7 /bin/sh -c \"cd /project/codes/broker && python3 broker.py\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
3DGenomes/tadbit
doc/notebooks/tutorial_2-Preparation_of_the_reference_genome.ipynb
gpl-3.0
[ "Preparation of the reference genome\nUsually NGS reads are mapped against a reference genome containing only the assembled chromosomes, and not the remaining contigs. And this methodology is perfectly valid. However in order to decrease the probability of having mapping errors, adding all unassembled contigs may help: \n\nFor variant discovery, RNA-seq and ChIP-seq, it is recommended to use the entire primary assembly, including assembled chromosomes AND unlocalized/unplaced contigs, for the purpose of read mapping. Not including unlocalized and unplaced contigs potentially leads to more mapping errors.\nfrom: http://lh3lh3.users.sourceforge.net/humanref.shtml\n\nWe are thus going to download full chromosomes and unassembled contigs. From these sequences we are then going to create two reference genomes:\n - one \"classic\" reference genome with only assembled chromosomes, used to compute statistics on the genome (GC content, number of restriction sites or mappability)\n - one that would contain all chromosomes and unassembled contigs, used exclusively for mapping.\nMus musculus's reference genome sequence\nWe search for the most recent reference genome corresponding to Mouse (https://www.ncbi.nlm.nih.gov/genome?term=mus%20musculus).\nFrom there we obtain these identifiers:", "species = 'Mus_musculus'\ntaxid = '10090'\nassembly = 'GRCm38.p6'\ngenbank = 'GCF_000001635.26'", "The variables defined above can be modified for any other species, resulting in new results for the following commands.\nDownload from the NCBI\nList of chromosomes/contigs", "sumurl = ('ftp://ftp.ncbi.nlm.nih.gov/genomes/all/{0}/{1}/{2}/{3}/{4}_{5}/'\n '{4}_{5}_assembly_report.txt').format(genbank[:3], genbank[4:7], genbank[7:10], \n genbank[10:13], genbank, assembly)\n\ncrmurl = ('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'\n '?db=nuccore&id=%s&rettype=fasta&retmode=text')\n\nprint sumurl\n\n! wget -q $sumurl -O chromosome_list.txt\n\n! 
head chromosome_list.txt", "Sequences of each chromosome/contig", "import os\n\ndirname = 'genome'\n! mkdir -p {dirname}", "For each contig/chromosome download the corresponding FASTA file from NCBI", "contig = []\nfor line in open('chromosome_list.txt'):\n if line.startswith('#'):\n continue\n seq_name, seq_role, assigned_molecule, _, genbank, _, refseq, _ = line.split(None, 7)\n if seq_role == 'assembled-molecule':\n name = 'chr%s.fasta' % assigned_molecule\n else:\n name = 'chr%s_%s.fasta' % (assigned_molecule, seq_name.replace('/', '-'))\n contig.append(name)\n\n outfile = os.path.join(dirname, name)\n if os.path.exists(outfile) and os.path.getsize(outfile) > 10:\n continue\n error_code = os.system('wget \"%s\" --no-check-certificate -O %s' % (crmurl % (genbank), outfile))\n if error_code:\n error_code = os.system('wget \"%s\" --no-check-certificate -O %s' % (crmurl % (refseq), outfile))\n if error_code:\n print genbank", "Concatenate all contigs/chromosomes into single files", "def write_to_fasta(line):\n contig_file.write(line)\n\ndef write_to_fastas(line):\n contig_file.write(line)\n simple_file.write(line)\n\nos.system('mkdir -p {}/{}-{}'.format(dirname, species, assembly))\n\ncontig_file = open('{0}/{1}-{2}/{1}-{2}_contigs.fa'.format(dirname, species, assembly),'w')\nsimple_file = open('{0}/{1}-{2}/{1}-{2}.fa'.format(dirname, species, assembly),'w')\n\nfor molecule in contig:\n fh = open('{0}/{1}'.format(dirname, molecule))\n oline = '>%s\\n' % (molecule.replace('.fasta', ''))\n _ = fh.next()\n # if molecule is an assembled chromosome we write to both files, otherwise only to the *_contigs one\n write = write_to_fasta if '_' in molecule else write_to_fastas\n for line in fh:\n write(oline)\n oline = line\n # last line usually empty...\n if line.strip():\n write(line)\ncontig_file.close()\nsimple_file.close()", "Remove all the other files (with single chromosome/contig)", "! rm -f {dirname}/*.fasta", "Creation of an index file for GEM mapper", "! 
gem-indexer -T 8 -i {dirname}/{species}-{assembly}/{species}-{assembly}_contigs.fa -o {dirname}/{species}-{assembly}/{species}-{assembly}_contigs", "The path to the index file will be: {dirname}/{species}-{assembly}/{species}-{assembly}_contigs.gem\nCompute mappability values needed for bias-specific normalizations\nIn this case we can use the FASTA of the genome without contigs and follow these steps:", "! gem-indexer -i {dirname}/{species}-{assembly}/{species}-{assembly}.fa \\\n -o {dirname}/{species}-{assembly}/{species}-{assembly} -T 8\n\n! gem-mappability -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem -l 50 \\\n -o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer -T 8\n\n! gem-2-wig -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem \\\n -i {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.mappability \\\n -o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer\n\n! wigToBigWig {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.wig \\\n {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.sizes \\\n {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw\n\n! bigWigToBedGraph {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw \\\n {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bedGraph
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mtat76/atm-py
atmPy/for_removal/documentation/SMPS Analysis.ipynb
mit
[ "Demonstration of SMPS Calculations\nThe SMPS calculations require two main packages from atmPy - smps and dma. dma contains the DMA class and its children. The children of DMA simply contain the definition of the dimensions of the DMA used in the current SMPS instance. In this case, we use the definition of the NOAA wide DMA, which has the dimensions $r_i = 0.0312$, $r_o = 0.03613$ and $l = 0.34054$ where all units are in meters. The SMPS object provides a set of utilities for taking the scan data, applying a transfer function and correcting the distribution for multiple charges to produce a size distribution.\nThe import of the sizedistribution package allows us to manipulate the output of the DMA ($dN/d\\log{D_p}$) such that we can pull out other representations of the size distribution. The remaining packages are simply used for data manipulation.", "from atmPy.instruments.DMA import smps\nfrom atmPy.instruments.DMA import dma\nfrom matplotlib import colors\nimport matplotlib.pyplot as plt\nfrom numpy import meshgrid\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.dates import date2num\nfrom matplotlib import dates\nfrom atmPy import sizedistribution as sd\n%matplotlib inline", "The first thing we do in the analysis is we create a new SMPS object with the DMA instance we wish to use. Here, we also set the initial directory to search for SMPS data. When a new SMPS object is created, an open file dialog window will be produced and the user may select one or many files to analyze. The file names will be stored in the SMPS attribute files.", "hagis = smps.SMPS(dma.NoaaWide(),scan_folder=\"C:/Users/mrichardson/Documents/HAGIS/SMPS/Scans\")", "Determining the Lag\nIn order to properly analyze a scan, we must first align the data such that the particle concentrations are consistent with the conditions in the DMA. 
Although conditions such as voltages and flows adjust to changes almost immediately, there will be a lag in the particle response due to instrument residence time. We can determine the lag either through a correlation or heuristically. The SMPS provides a function getLag which takes an integer indicating the file index to select in the attribute files containing the array of files selected by the user. The optional input, delta, allows the user to offset the result of the correlation by some amount to provide a more reasonable estimate of the lag.\nThe SMPS::getLag method will produce two plots. The first is the results from the attempted correlation and the second shows how the two scans align with the lag estimate, both the smoothed and raw data. This method will set the lag attribute in the SMPS instance which will be used in future calculations. This attribute is directly accessible if the user wishes to adjust it.", "hagis.getLag(10, delta=10)", "Processing the Files\nOnce the user has pointed to the files they wish to use, they can begin processing of the files using the function SMPS::procFiles(). Each file is processed as follows:\n\nThe raw data concerning the conditions is truncated for both the up and down scans to the beginning and end of the respective scans. The important parameters here are the values $t_{scan}$ and $t_{dwell}$ from the header of the files. The range of the data from the upscan spans the indices 0 to $t_{scan}$ and the range for the down data is $t_{scan}+t_{dwell}$ to $2\\times t_{scan}+t_{dwell}$.\nThe CN data is adjusted based on the lag. This data is truncated for the up scan as $t_{lag}$ to $t_{lag} + t_{scan}$ where $t_{lag}$ is the lag time determined by the user (possibly with the function SMPS::getLag()). In the downward scan, the data array is reversed and the data is truncated to the range $t_{dwell}-t_{lag}$ to $t_{scan}+t_{dwell}-t_{lag}$. 
In all cases, the CN concentration is calculated from the 1 second buffer and the CPC flow rate as $N_{1 s}/Q_{cpc}$.\nThe truncated [CN] is then smoothed using a Lowess smoothing function for both the up and down data.\nDiameters for each of the corresponding [CN] are then calculated from the set point voltage (rather than the measured voltage).\nThe resulting diameters and smoothed [CN] are then run through a transfer function. In this case, the transfer function is a simple full width half max based on the mobility range of the current voltage. This function allows us to produce a $d\\log D_p$ for the distribution.\nThe distribution is then corrected based on the algorithm described below.\nThe charge-corrected distribution is then converted to a logarithmic distribution using the values from the FWHM function.\nThe resulting distribution is then interpolated onto a logarithmically distributed array that consists of bins ranging from 1 to 1000 nm.", "hagis.lag = 10\nhagis.proc_files()", "Charge Correction\nThe SMPS scans mobilities which are a function of voltage. In truth, at each voltage setpoint, the DMA allows a range of particle mobilities through the instrument. This range is expressed by the DMA transfer function as \n\\begin{equation}\n\\Omega = 1/q_a\\max{\\left[q_a,q_s,\\left[\\frac{1}{2}\\left(q_a+q_s\\right)-\\right]\\right]}\n\\end{equation}\n\\begin{equation}\nZ=\\frac{neC_c(D_p)}{3\\pi\\mu(T)D_p}\n\\end{equation}\nIn any charge correction, we must assume that there are no particles beyond the topmost bin. This allows us to make the assumption that all particles in that bin are singly charged. In order to determine the total number of particles in the current bin, we can simply use Wiedensohler's equation for the charging efficiency for singly charged particles. 
Starting at the topmost bin, we can calculate the total number of particles as \n\\begin{equation}\n\\frac{N_1(D_p,i)}{f_1}=N(D_p)\n\\end{equation}\nwhere $N_1$ is the number of particles of size $D_p$ having 1 charge, $f_1$ is the charging efficiency for singly charged particles and $N(D_p)$ is the total number of particles of diameter $D_p$. Using the initial number of particles, we can then calculate the number of multiply charged particles in a similar fashion. \nOnce these numbers have been calculated, we can determine the location of the multiply charged particles (i.e. the diameter bin with which they have been identified). To do \nFinally, to get the total number of particles in the bin, we can apply the sum\n\\begin{equation}\nN(D_p) = \\frac{N_1(D_p)}{f_1}\\sum_{i=0}^{\\infty}{f_i}\n\\end{equation}\nHowever, in each of these cases, only a finite number of particles may be available in each bin, so in the code, we will have to take the minimum of the following:\n\\begin{equation}\n\\delta{N(k)}=\\min{\\left(\\frac{f_iN_1}{f_1},N_k\\right)}\n\\end{equation}\nwhere $\\delta{N(k)}$ is the number of particles to remove from bin $k$ and $N_k$ is the number of particles in bin $k$.\nOutput\nIn the following, we take the results from the SMPS::procFiles() method and produce a color map of size distributions in $dN/d\\log D_p$ space as a function of time. 
The attribute date from the instance of SMPS is a set of DateTime for each scan based on the start time of the file and the scan time collected from the header.", "hagis.date\n\n\nindex = []\nfor i,e in enumerate(hagis.date):\n if e is None:\n index.append(i)\n \nprint(index)\n\nif index:\n hagis.date = np.delete(hagis.date, index)\n hagis.dn_interp = np.delete(hagis.dn_interp,index, axis=0) \n\nxfmt = dates.DateFormatter('%m/%d %H:%M')\nxi = date2num(hagis.date)\nXI, YI = meshgrid(xi, hagis.diam_interp)\n#XI = dates.datetime.datetime.fromtimestamp(XI)\nZ = hagis.dn_interp.transpose()\nZ[np.where(Z <= 0)] = np.nan\n\npmax = 1e6 # 10**np.ceil(np.log10(np.amax(Z[np.where(Z > 0)])))\npmin = 1 #10**np.floor(np.log10(np.amin(Z[np.where(Z > 0)])))\nfig, ax = plt.subplots()\npc = ax.pcolor(XI, YI, Z, cmap=plt.cm.jet, norm=colors.LogNorm(pmin, pmax, clip=False), alpha=0.8)\n\nplt.colorbar(pc)\nplt.yscale('log')\nplt.ylim(5, 1000)\nax.xaxis.set_major_formatter(xfmt)\nfig.autofmt_xdate()\nfig.tight_layout()", "Use of the SizeDistr Object\nThe sizeditribution package contains some classes and routines for ready manipulation of the data. But first, we will need to convert the data of interest to a PANDAS data frame with the time as index.", "dataframe = pd.DataFrame(hagis.dn_interp)\ndataframe.index = hagis.date", "In addition, we will need to convert the bin centers produced by the SMPS object to bin edges. To do this, we will make the simple assumption that the bin edges are just the halfway points between the centers. For the edge cases, we will simply take the difference between the smallest bin center and the halfway point between the first and second bin centers and subtract this value from the smallest diameter. 
Similarly, for the largest diameter, we will take the difference between the halfway point between the largest and second largest bin centers and the largest bin center and add it to the largest bin center.", "binedges = (hagis.diam_interp[1:]+hagis.diam_interp[:-1])/2\nfirst = hagis.diam_interp[0] -(binedges[0]-hagis.diam_interp[0])\nlast = hagis.diam_interp[-1]+ (hagis.diam_interp[-1]-binedges[-1])\nbinedges = np.append([first],binedges)\nbinedges=np.append(binedges,[last])\nsizeDistr = sd.SizeDist_TS(dataframe,binedges, 'dNdlogDp')\n\nf,a,b,c = sizeDistr.plot(vmax = pmax, vmin = pmin, norm='log', showMinorTickLabels=False, cmap=plt.cm.jet)\na.set_ylim((5,1000))", "Once we have the corresponding SizeDistr object, we can now change the current distribution which is in $dN/d\\log D_p$ space and change this to a surface area distribution in log space. This will produce a new object that we will call sfSD.", "sfSD = sizeDistr.convert2dSdlogDp()\n\nfrom imp import reload\nreload(sd)\n\n\nf,a,b,c = sfSD.plot(vmax = 1e10, vmin = 1e4, norm='log', showMinorTickLabels=False,removeTickLabels=['200','300','400',] ,cmap =plt.cm.jet)\na.set_ylim((5,1000))", "To get an overall view, we can further manipulate the data to produce average distributions from the entire time series.", "avgAt = sizeDistr.average_overAllTime()\n\nf,a = avgAt.plot(norm='log')\n# a.set_yscale('log')\n\navgAtS = sfSD.average_overAllTime()\n\nf,a= avgAtS.plot(norm='log')\na.set_yscale('log')", "In the previous analysis, it appears that we have a size distribution which centers around 60 nm in number and somewhere around 100 to 200 nm in surface." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phanrahan/magmathon
notebooks/signal-generator/solutions/DDS.ipynb
mit
[ "Direct Digital Synthesis (DDS)\nThis example shows how to perform direct digital synthesis.", "import math\nimport numpy as np\n\ndef sine(x):\n return np.sin(2 * math.pi * x)\n\nx = np.linspace(0., 1., num=256, endpoint=False)\n\nimport magma as m\nm.set_mantle_target('ice40')\n\nimport mantle\n\ndef DefineDDS(n, has_ce=False):\n class _DDS(m.Circuit):\n name = f'DDS{n}'\n IO = ['I', m.In(m.UInt(n)), \"O\", m.Out(m.UInt(n))] + m.ClockInterface(has_ce=has_ce)\n @classmethod\n def definition(io):\n reg = mantle.Register(n, has_ce=has_ce)\n m.wire(reg(m.uint(reg.O) + io.I, CE=io.CE), io.O)\n return _DDS\n\ndef DDS(n, has_ce=False):\n return DefineDDS(n, has_ce)()\n\nfrom loam.boards.icestick import IceStick\n\nicestick = IceStick()\nicestick.Clock.on()\nfor i in range(8):\n icestick.J1[i].input().on()\n icestick.J3[i].output().on()\n\nmain = icestick.main()\n\ndds = DDS(16, True)\n\nwavetable = 128 + 127 * sine(x)\nwavetable = [int(x) for x in wavetable]\n\nrom = mantle.Memory(height=256, width=16, rom=list(wavetable), readonly=True)\n\nphase = m.concat(main.J1, m.bits(0,8))\n# You can also hardcode a constant as the phase\n# phase = m.concat(m.bits(32, 8), m.bits(0,8))\n\n# Use counter COUT hooked up to CE of registers to slow everything down so we can see it on the LEDs\nc = mantle.Counter(10)\n\naddr = dds( phase, CE=c.COUT)\n\nO = rom( addr[8:] )\nm.wire( c.COUT, rom.RE )\n\nm.wire( O[0:8], main.J3 )\n\nm.EndCircuit()", "Compile and build.", "m.compile('build/dds', main)\n\n%%bash\ncd build\ncat sin.pcf\nyosys -q -p 'synth_ice40 -top main -blif dds.blif' dds.v\narachne-pnr -q -d 1k -o dds.txt -p dds.pcf dds.blif \nicepack dds.txt dds.bin\niceprog dds.bin", "We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct sine waveform.\n\nWe can also use Saleae's export data feature to output a csv file. 
We'll load this data into Python and plot the results.", "import csv\nimport magma as m\nwith open(\"data/dds-capture.csv\") as sine_capture_csv:\n csv_reader = csv.reader(sine_capture_csv)\n next(csv_reader, None) # skip the headers\n rows = [row for row in csv_reader]\ntimestamps = [float(row[0]) for row in rows]\nvalues = [m.bitutils.seq2int(tuple(int(x) for x in row[1:])) for row in rows]", "TODO: Why do we have this little bit of jitter? Logic analyzer is running at 25 MS/s, 3.3+ Volts for 1s", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.plot(timestamps[:100], values[:100], \"b.\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
henchc/Data-on-the-Mind-2017-scraping-apis
01-APIs/solutions/01-API_solutions.ipynb
mit
[ "Accessing Databases via Web APIs\n\nIn this lesson we'll learn what an API (Application Programming Interface) is, how it's normally used, and how we can collect data from it. We'll then look at how Python can help us quickly gather data from APIs, parse the data, and write to a CSV. There are four sections:\n\nConstructing an API GET request\nParsing the JSON response\nLooping through result pages\nExporting to CSV\n\nFirst we'll import the required Python libraries", "import requests # to make the GET request \nimport json # to parse the JSON response to a Python dictionary\nimport time # to pause after each API call\nimport csv # to write our data to a CSV\nimport pandas # to see our CSV", "All of these are standard Python libraries, so no matter your distribution, these should be installed.\n1. Constructing an API GET request\n\nWe're going to use the New York Times API. You'll need to first sign up for an API key.\nWe know that every call to any API will require us to provide:\n\na base URL for the API, \n(usually) some authorization code or key, and \na format for the response.\n\nLet's write this information to some variables:", "# set API key var\nkey = \"\"\n\n# set base url var\nbase_url = \"http://api.nytimes.com/svc/search/v2/articlesearch\"\n\n# set response format var\nresponse_format = \".json\"", "Notice we assign each variable as a string. While the requests library will convert integers, it's better to be consistent and use strings for all parameters of a GET request. We choose JSON as the response format, as it is easy to parse quickly with Python, though XML is a viable, frequently offered alternative. JSON stands for \"JavaScript Object Notation.\" It has a very similar structure to a Python dictionary -- both are built on key/value pairs.\nYou often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we're going to look for articles about Duke Ellington. 
Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term. We know these key names from the NYT API documentation.", "# set search parameters\nsearch_params = {\"q\": \"Duke Ellington\",\n \"api-key\": key}", "Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET request.", "# make request\nresponse = requests.get(base_url + response_format, params=search_params)", "Now, we have a response object called response. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.", "print(response.url)", "Click on that link to see what it returns! Notice that all Python is doing here for us is helping us construct a complicated URL built with & and = signs. As you just noticed, we could just as well copy and paste this URL to a browser and then save the response, but Python's requests library is much easier and more scalable when making multiple queries in succession.\nChallenge 1: Adding a date range\nWhat if we only want to search within a particular date range? The NYT Article API allows us to specify start and end dates.\nAlter the search_params code above so that the request only searches for articles in the year 2015.\nYou're going to need to look at the documentation to see how to do this.", "# set date parameters here\n\nsearch_params = {\"q\": \"Duke Ellington\",\n \"api-key\": key,\n \"begin_date\": \"20150101\", # date must be in YYYYMMDD format\n \"end_date\": \"20151231\"}\n\n# uncomment to test\nr = requests.get(base_url + response_format, params=search_params)\nprint(r.url)", "Challenge 2: Specifying a results page\nThe above will return the first 10 results. To get the next ten, you need to add a \"page\" parameter. 
Change the search parameters above to get the second 10 results.", "# set page parameters here\n# pages are numbered from 0, so page 1 gives the second 10 results\n\nsearch_params[\"page\"] = 1\n\n# uncomment to test\nr = requests.get(base_url + response_format, params=search_params)\nprint(r.url)", "2. Parsing the JSON response\n\nWe can read the content of the server’s response using .text", "# inspect the content of the response, parsing the result as text\nresponse_text = r.text\nprint(response_text[:1000])", "What you see here is JSON text, encoded as unicode text. As mentioned, JSON is basically a Python dictionary, and we can convert this string text to a Python dictionary by using the loads function to load from a string.", "# convert JSON response to a dictionary\ndata = json.loads(response_text)\nprint(data)", "That looks intimidating! But it's really just a big dictionary. The most time-consuming part of using APIs is traversing the various key-value trees to see where the information you want resides. Let's see what keys we got in there.", "print(data.keys())\n\n# this is boring\nprint(data['status'])\n\n# so is this\nprint(data['copyright'])\n\n# this is what we want!\nprint(data['response'])\n\nprint(data['response'].keys())\n\nprint(data['response']['meta'].keys())\n\nprint(data['response']['meta']['hits'])", "Looks like there were 93 hits total for our query. Let's take a look:", "print(data['response']['docs'])", "It starts with a square bracket, so it looks like a list, and from a glance it looks like the list of articles we're interested in.", "print(type(data['response']['docs']))", "Let's just save this list to a new variable. Often when using web APIs, you'll spend the majority of your time restructuring the response data to how you want it.", "docs = data['response']['docs']\n\nprint(docs[0])", "Wow! That's a lot of information about just one article! But wait...", "print(len(docs))", "3. Looping through result pages\n\nWe're making progress, but we only have 10 items. The original response said we had 93 hits! 
Which means we have to make 93 / 10 rounded up, i.e. 10 requests to get them all. Sounds like a job for a loop!", "# get number of hits total (in any page we request)\nhits = data['response']['meta']['hits']\nprint(\"number of hits: \", str(hits))\n\n# get number of pages (ceiling division: 93 hits -> 10 pages)\npages = (hits - 1) // 10 + 1\n\n# make an empty list where we'll hold all of our docs for every page\nall_docs = []\n\n# now we're ready to loop through the pages\nfor i in range(pages):\n\n print(\"collecting page\", str(i))\n\n # set the page parameter\n search_params['page'] = i\n\n # make request\n r = requests.get(base_url + response_format, params=search_params)\n\n # get text and convert to a dictionary\n data = json.loads(r.text)\n\n # get just the docs\n docs = data['response']['docs']\n\n # add those docs to the big list\n all_docs = all_docs + docs\n\n # IMPORTANT pause between calls\n time.sleep(5) \n\nprint(len(all_docs))", "4. Exporting to CSV\n\nGreat, now we have all the articles. Let's just take out some bits of information and write to a CSV.", "final_docs = []\n\nfor d in all_docs:\n \n # create empty dict for each doc to collect info\n targeted_info = {}\n targeted_info['id'] = d['_id']\n targeted_info['headline'] = d['headline']['main']\n targeted_info['date'] = d['pub_date'][0:10] # cutting time of day.\n targeted_info['word_count'] = d['word_count']\n targeted_info['keywords'] = [keyword['value'] for keyword in d['keywords']]\n try: # some docs don't have this info\n targeted_info['lead_paragraph'] = d['lead_paragraph']\n except KeyError:\n pass\n\n # append final doc info to list\n final_docs.append(targeted_info)", "We can write our sifted information to a CSV now:", "# use the union of all keys as the header, since some docs lack 'lead_paragraph'\nheader = sorted({key for doc in final_docs for key in doc.keys()})\n\nwith open('all-docs.csv', 'w') as output_file:\n dict_writer = csv.DictWriter(output_file, header)\n dict_writer.writeheader()\n dict_writer.writerows(final_docs)\n\npandas.read_csv('all-docs.csv')" ]
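Since each request returns at most 10 docs, the number of requests is a ceiling division of the hit count. A standalone sketch (the helper name `n_pages` is ours, not part of the tutorial):

```python
import math

def n_pages(hits, page_size=10):
    """Number of result pages needed to cover `hits` results."""
    return math.ceil(hits / page_size)

print(n_pages(93))  # 10
```

Using `math.ceil` avoids the off-by-one that an unconditional `hits // 10 + 1` produces when the hit count is an exact multiple of the page size.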
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
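Traversing nested keys like data['response']['meta']['hits'] can also be made defensive with dict.get, so a malformed payload yields a default instead of a KeyError (a sketch; `hits_from` and the sample payload are ours):

```python
def hits_from(data):
    """Pull response -> meta -> hits out of a parsed payload, defaulting to 0."""
    return data.get('response', {}).get('meta', {}).get('hits', 0)

payload = {'response': {'meta': {'hits': 93}, 'docs': []}}
print(hits_from(payload))  # 93
print(hits_from({}))       # 0
```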
mne-tools/mne-tools.github.io
stable/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
bsd-3-clause
[ "%matplotlib inline", "Brainstorm CTF phantom dataset tutorial\nHere we compute the evoked from raw for the Brainstorm CTF phantom\ntutorial dataset. For comparison, see :footcite:TadelEtAl2011 and:\nhttps://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf\n\nReferences\n.. footbibliography::", "# Authors: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\nimport warnings\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_ctf\nfrom mne.io import read_raw_ctf\n\nprint(__doc__)", "The data were collected with a CTF system at 2400 Hz.", "data_path = bst_phantom_ctf.data_path(verbose=True)\n\n# Switch to these to use the higher-SNR data:\n# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')\n# dip_freq = 7.\nraw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')\ndip_freq = 23.\nerm_path = op.join(data_path, 'emptyroom_20150709_01.ds')\nraw = read_raw_ctf(raw_path, preload=True)", "The sinusoidal signal is generated on channel HDAC006, so we can use\nthat to obtain precise timing.", "sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]\nplt.figure()\nplt.plot(times[times < 1.], sinusoid.T[times < 1.])", "Let's create some events using this signal by thresholding the sinusoid.", "events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp\nevents = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T", "The CTF software compensation works reasonably well:", "raw.plot()", "But here we can get slightly better noise suppression, lower localization\nbias, and a better dipole goodness of fit with spatio-temporal (tSSS)\nMaxwell filtering:", "raw.apply_gradient_compensation(0) # must un-do software compensation first\nmf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)\nraw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)\nraw.plot()", "Our choice of tmin and tmax should capture exactly one 
cycle, so\nwe can make the unusual choice of baselining using the entire epoch\nwhen creating our evoked data. We also then crop to a single time point\n(@t=0) because this is a peak in our signal.", "tmin = -0.5 / dip_freq\ntmax = -tmin\nepochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,\n baseline=(None, None))\nevoked = epochs.average()\nevoked.plot(time_unit='s')\nevoked.crop(0., 0.)", "Let's use a sphere head geometry model &lt;eeg_sphere_model&gt;\nand let's see the coordinate alignment and the sphere location.", "sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)\n\nmne.viz.plot_alignment(raw.info, subject='sample',\n meg='helmet', bem=sphere, dig=True,\n surfaces=['brain'])\ndel raw, epochs", "To do a dipole fit, let's use the covariance provided by the empty room\nrecording.", "raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)\nraw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',\n **mf_kwargs)\ncov = mne.compute_raw_covariance(raw_erm)\ndel raw_erm\n\nwith warnings.catch_warnings(record=True):\n # ignore warning about data rank exceeding that of info (75 > 71)\n warnings.simplefilter('ignore')\n dip, residual = fit_dipole(evoked, cov, sphere, verbose=True)", "Compare the actual position with the estimated one.", "expected_pos = np.array([18., 0., 49.])\ndiff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))\nprint('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))\nprint('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))\nprint('Difference: %0.1f mm' % diff)\nprint('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))\nprint('GOF: %0.1f %%' % dip.gof[0])" ]
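The position comparison at the end of the phantom tutorial is just a Euclidean distance; restated as a standalone helper (hedged: `localization_error_mm` is our name, and the positions below mirror the expected phantom dipole):

```python
import math

def localization_error_mm(est_pos_m, true_pos_mm):
    """Euclidean distance [mm] between an estimated dipole position
    (given in meters) and a reference position (given in millimeters)."""
    return math.sqrt(sum((e * 1000.0 - t) ** 2
                         for e, t in zip(est_pos_m, true_pos_mm)))

# a perfect estimate has zero error
print(localization_error_mm([0.018, 0.0, 0.049], [18.0, 0.0, 49.0]))  # 0.0
```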
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mjbrodzik/ipython_notebooks
GEOG827/Brodzik_assignment1.ipynb
apache-2.0
[ "GEOG827\nAssignment #1\nM. J. Brodzik\ndue 3/12/17\nQuestion 1\n<i>Using the tables and/or equations in the \"Calculating Evaporation\" documents posted in\nblackboard, notes and/or other sources (state the source), express results as mean W/m2\nover the day.</i>\nQ1. Part 1\n<i>Part i) estimate the daily average solar radiation to the top of the Earth's atmosphere at\n56oN on July 2nd.</i>\nTotal daily solar radiation to top of the atmosphere (TOA) is a function of latitude, season and time of day. From Table D, \"Calculating Evaporation Notes\", the Total daily solar radiation, $K_A\\ [W m^{-2}]$, is given in 10-degree latitude increments, for Jun 22 and Jul 15. I begin by plotting these to see how much variability there is in $K_A$:", "%pylab inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Define data to plot\nlats = np.arange(10) * 10\njun22 = np.array([382.68, 422.88, 452.91, 472.29, 480.04,\n 479.07, 474.23, 490.21, 513.46, 521.70])\njul15 = np.array([387.52, 424.82, 450.49, 465.02, 468.41,\n 462.12, 450.01, 452.43, 474.71, 481.49])\n\n# Make the plot:\nplt.rc('text', usetex=True)\nfig, ax = plt.subplots(1)\nax.plot(lats, jun22, 'co', label='jun22')\nax.plot(lats, jul15, 'bo', label='jul15')\nax.set_title(\"Total daily solar radiation to horizontal surface at TOA\")\nax.set_xlabel(\"Latitude\")\nax.set_ylabel(r'$K_A (W m^{-2})$')\nax.legend(loc='best', numpoints=1)\nplt.show()\nfig.savefig('HW1.1i_fig1.png')", "So although there is an inflection point in the data above 60N, for an estimate I think it's sufficient to just linearly interpolate $K_A$ for the given dates to a value for Jul 2, and then interpolate to 56$^{\\circ}$ N.", "# define a quick linear interpolation function\n# calculate slope and intercept and the value of the line at \n# the new value\ndef linear_model_value_at(x, x1, y1, x2, y2):\n slope = (y2 - y1) / (x2 - x1)\n # y = mx + b ==> b = y - mx\n intercept = y1 - (slope * x1)\n return (slope * x) + 
intercept\n\nKA_56N_jun22 = linear_model_value_at(56., 50., 479.07, 60., 474.23)\nKA_56N_jul15 = linear_model_value_at(56., 50., 462.12, 60., 450.01)", "Linearly interpolate $K_A$ at 56 N to July 2 between Jun22 and Jul15:", "import datetime\njun22_doy = datetime.datetime(2017, 6, 22).timetuple().tm_yday\njul2_doy = datetime.datetime(2017, 7, 2).timetuple().tm_yday\njul15_doy = datetime.datetime(2017, 7, 15).timetuple().tm_yday\nprint(jun22_doy, jul2_doy, jul15_doy)\n\nKA_56N_jul2 = linear_model_value_at(jul2_doy, \n jun22_doy, KA_56N_jun22,\n jul15_doy, KA_56N_jul15)\nKA_56N_jul2", "Adding my interpolated values to the plot, I think it's sufficient to use this approximation:", "fig, ax = plt.subplots(1)\n\nax.plot(lats, jun22, 'co', label='jun22')\nax.plot(lats, jul15, 'bo', label='jul15')\nax.plot(56, KA_56N_jun22, 'cx', label='jun22 @ 56N')\nax.plot(56, KA_56N_jul15, 'bx', label='jul15 @ 56N')\nax.plot(56, KA_56N_jul2, 'kx', label='jul2 @ 56N')\nax.annotate('$K_A$ at 56 N on Jul 2 = %.2f $W m^{-2}$' % KA_56N_jul2, \n xy=(56, KA_56N_jul2), \n xytext=(30, 430),\n arrowprops=dict(facecolor='k', shrink=0.05))\n\nax.set_title(\"Total daily solar radiation to horizontal surface at TOA\")\nax.set_xlabel(\"Latitude\")\nax.set_ylabel(r'$K_A (W m^{-2})$')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., numpoints=1)\nplt.show()\nfig.savefig('HW1.1i_fig2.png')", "So the estimated daily average solar radiation to the top of the Earth's atmosphere at 56$^{\\circ}$ N on July 2nd is 466.90 $W m^{-2}$.\nQ1. Part ii\n<i>ii) if there are 7 hours of bright sunshine, what is the average incoming solar radiation to the surface of a flat unvegetated field at this location on this day?</i>\nEq 7. 
from \"Calculating Evaporation Notes\" gives the equation for mean incoming shortwave radiation $K\\downarrow [W m^{-2}]$ as:\n$K\\downarrow = K_A[a + b(\\frac{n}{N})]$\nLooking at latitudes of locations in Table B, I think the closest location would be that of Central SK, Canada from Mudiare(1985). I will use the rainfree values for a and b.\nLet:\n<pre>\na = 0.27 (From Table B, Mudiare, rain-free, May-Aug, central SK)\nb = 0.47 (From Table B, Mudiare, rain-free, May-Aug, central SK)\nn = 7. (actual sunshine hours)\nN = maximum possible number of sunshine hours \n</pre>\n\nEstimate N from Table C, values given for latitude=56N, interpolate values from Jul1 - Jul5 to a value for Jul2, and remember to convert (hours,minutes) to decimal hours)\n<pre>\nDate: N at 56N\nJul1: 17h 31m \nJul5: 17h 25m \n</pre>", "N = linear_model_value_at(2., 1, 17. + (31./60.), 5, 17. + (25./60.))\nprint(\"Estimate of N at 56N on Jul 2 is: %.2f\" % N)\n\na = 0.27\nb = 0.47\nn = 7.\nK_down = KA_56N_jul2 * (a + b * (n/N))\nK_down", "So with 7 hours of direct sunshine at this location, the mean incoming solar radiation to the surface of a flat unvegetated field at about 56$^{\\circ}$ N on July 2nd is 213.88 $W m^{-2}$. So atmospheric transmittance reduces the shortwave by more than half of that reaching TOA.\nQ1. Part iii\n<i>iii) if the mean daily air temperature is 15°C and the mean daily relative\nhumidity is 70%, what is the daily average clear sky incoming longwave \nradiation and what is the actual daily average incoming longwave radiation to this field?\n</i>\nFrom our class notes, incoming longwave clear sky radiation from the atmosphere, $L_{0,clear} [W m^{-2}]$, is expressed by the Stephan-Boltzmann Law:\n$L_{0,clear} = \\varepsilon_{clear}(T,e) \\sigma T^4$\nWhere apparent clear-sky emissivity, $\\varepsilon_{clear}$, is a function of air temperature near the ground, $T [K]$, and water vapour pressure, $e [mb]$, near the ground. 
Brutsaert(1975) derived an empirical relationship for $\\varepsilon_{clear}$ as a function of $T$ and $e$:\n$$\\varepsilon_{clear} = C(\\frac{e}{T})^{1/m}$$\nwhere:\n$T$ = air temperature, $[K]$\n$e$ = vapour pressure near the ground, $[mb]$\n$C$ = 1.24\n$m$ = 7 \nFrom lectures, I can use the Magnus Formula to approximate saturated water vapour pressure $e_s [hPa=mb]$, at 15-deg C:\n$e_s [mb] = 6.1094 \\exp{\\frac{17.625 T}{T + 243.04}}$\nwhere:\n$T$ = air temperature $[°C] = 15.$", "T_C = 15. # air temperature, degrees C\nes_15C = 6.1094 * np.exp((17.625 * T_C) / (T_C + 243.04))\nprint(\"Saturated water vapour pressure at 15°C is %.2f mb\" % es_15C)", "So water vapour pressure near the ground, $e [mb]$, is 70% of $e_s(T=15°C)$:", "RH = 70\ne = RH / 100. * es_15C\nprint(\"Water vapour pressure near the ground is %.2f mb\" % e)\n\n# For the Brutsaert equation, temperature needs to be in Kelvins, so convert 15C to Kelvins\nT_K = T_C + 273.15\nprint(\"Air temperature = %.2f K\" % T_K)\n\nC = 1.24\nm = 7.\ne_clear = C * ((e / T_K)**(1./m))\nprint(\"Clear sky emissivity (using Brutsaert's formula) is %.2f\" % e_clear)", "So using the Stephan-Boltzmann Law, let:\n$\\varepsilon_{clear}(T=15°C,e=11.91mb)$ = 0.79\n$\\sigma$ = 5.67 x $10^{-8} W m^{-2} K^{-4}$ \n$T$ = 288.15 $K$", "sigma = 5.67e-08\nlongwave_clearsky = e_clear * sigma * T_K**4\nprint(\"Incoming longwave clearsky %.2f [W/m^2]\" % longwave_clearsky)", "So I estimate daily average clear sky incoming longwave radiation, $L\\downarrow$, to be 307.49 $W m^{-2}$, which is greater than the incoming shortwave from part i). \nFor the actual daily average incoming longwave radiation to this field, since it there are only 7 hours of bright sunshine, I think the actual atmospheric emissivity will need to be adjusted upward, to account for increased emission from the clouds. I will use the adjustment used in Sicart et al 2006, equation (9). 
The adjustment for emissivity from clouds is:\n$$\varepsilon_{cloudy} = \varepsilon_{clear}(1 + 0.44 RH - 0.18 \tau_{atm})$$\nwhere: \n$RH$ = relative humidity (as a fraction; the code converts the percent value with RH/100)\n$\tau_{atm}$ = shortwave transmissivity of the atmosphere \nShortwave atmospheric transmissivity is the ratio of my answers from part i) and ii):", "tau_atm = K_down / KA_56N_jul2\nprint(\"KA = %.2f, K_down = %.2f, tau_atm = %.2f\" %(\n KA_56N_jul2, K_down, tau_atm))", "So using emissivity of the cloudy atmosphere in Stephan-Boltzmann:", "e_cloudy = e_clear * (1. + 0.44 * (RH/100.) - 0.18* (tau_atm))\nlongwave_cloudysky = e_cloudy * sigma * T_K**4\nprint(\"Clear-sky emissivity %.2f\" % e_clear)\nprint(\"Cloudy-sky emissivity %.2f\" % e_cloudy)\nprint(\"Incoming longwave radiation %.2f\" % longwave_cloudysky)", "So this matches my expectation from class notes that $\varepsilon_{clear} < \varepsilon_{cloudy}$, and I estimate actual daily average incoming longwave radiation to this field as 376.84 $W m^{-2}$.\nQ1. Part iv\n<i>iv) if the surface temperature is 20°C and albedo is 0.15 what is the average net radiation over the day?</i>\nThe average net radiation over the day is the sum of net shortwave and net longwave:\n$$Q^\ast = K^\ast + L^\ast = K\downarrow(1 - \alpha) + L\downarrow - L\uparrow$$\nwhere:\n$K^\ast$ = net shortwave\n$L^\ast$ = net longwave\n$K\downarrow$ = incoming shortwave\n$\alpha$ = shortwave albedo\n$L\downarrow$ = incoming longwave\n$L\uparrow$ = outgoing longwave from the terrain surface\nAir temperature is not a factor for the incoming shortwave, so I can use incoming shortwave from part ii:\n$K\downarrow = 213.88 W m^{-2}$\n$\alpha = 0.15$", "albedo = 0.15\nK_net = K_down * (1. - albedo)\nK_net", "And adjust the incoming longwave from part iii for the higher temperature:", "T_C = 20. # air temperature, degrees C\nT_K = T_C + 273.15\ne_clear_20 = C * ((e / T_K)**(1./m)) # clear sky emission at T=20\ne_cloudy_20 = e_clear_20 * (1. + 0.44 * (RH/100.) 
- 0.18* (tau_atm))\nlongwave_cloudysky_20 = e_cloudy_20 * sigma * T_K**4 # W/m^2\nprint(\"Incoming longwave radiation for T=20°C is now %.2f W/m^2\" % longwave_cloudysky_20)", "Outgoing longwave is longwave emission from the terrain surface. I use the Stephan-Boltzmann equation, assuming emissivity of the terrain is close to 1, say 0.98, and that the air temperature is an adequate estimate for the surface temperature:", "surface_emissivity = 0.98\nlongwave_up = surface_emissivity * sigma * T_K**4 # W/m^2\nprint(\"Outgoing longwave radiation from surface for T=20°C is %.2f W/m^2\" %\n longwave_up)\n\nnet = K_net + longwave_cloudysky_20 - longwave_up\nprint(K_net, longwave_cloudysky_20, longwave_up)\nprint(\"Net_radiation for T=20 is %.2f\" % net)", "So I estimate average net radiation in this location over the day to be 159.67 $W m^{-2}$.\nQ1. Part v\n<i>v) what would the daily average incoming solar radiation and incoming longwave radiation fluxes be to the sub-canopy floor under an adjacent pine canopy at this location on this day, if the pine canopy had a leaf area index of 2.1 and a sky view factor of 0.2? If the sub-canopy surface temperature and albedo are the same as in the field, what is the sub-canopy net radiation?</i>\nPer Pomeroy and Dion, 1996, the transmittance through a forest canopy can be modeled with an extinction coefficient:\n$$\tau = \exp\left(\frac{-Q_{ext}\ {LAI}'}{\sin\theta}\right)$$ \nwhere:\n$\tau$ = transmittance through forest canopy\n$Q_{ext}$ = extinction efficiency (dimensionless)\n$LAI'$ = effective winter leaf area index\n$\theta$ = solar angle above the horizon \nFrom Pomeroy lecture on Day 1, slide 42, apparently $Q_{ext} \approx \sin\theta$, so their ratio $\rightarrow$ 1 and $\tau$ is:", "eff_lai = 2.1\nforest_tau = exp(-1 * eff_lai)\nforest_tau", "So I estimate $\tau$ = 0.12. 
This is consistent with my notes from class, which indicate that a good rule of thumb for $\\tau$ in forests is 0.1.\nFor daily average incoming solar radiation and incoming longwave radiation fluxes to the sub-canopy floor under an adjacent pine canopy at this location on this day, incoming shortwave will be $K^\\ast$ times the transmittance, and net shortwave will be incoming shortwave times $(1 - \\alpha)$:", "subcanopy_shortwave_down = K_net * forest_tau\nnet_subcanopy_shortwave = subcanopy_shortwave_down * (1. - albedo)\nprint(\"shortwave down to subcanopy %.2f W/m^2\" % subcanopy_shortwave_down)\nprint(\"net subcanopy shortwave %.2f W/m^2\" % net_subcanopy_shortwave)", "The incoming longwave to the subcanopy will come from a combination of the sky and the trees. Sicart et al. 2006 (equation 6) found that terrain and vegetation emissions of longwave radiation could be represented as separate components weighted by sky view factor:\n$$L = V_f L_0 + (1 - V_f) \\varepsilon_s \\sigma T_{s}^4$$\nlet:\n$V_f$ = 0.2 (given)\n$L_0$ = longwave from the cloudysky at 20°C from part iv, $W m^{-2}$\n$\\varepsilon_s$ = 0.98 emissivity of the forest, assume it is close to 1, from Sicart \"terrain emissivity is close to 1 for snow and most natural surfaces\"\n$T_{s}$ = 20 + 273.15 $K$ (same temperature as surrounding field) \nThe outgoing longwave from the subcanopy floor will be the same as that from the surrounding field.", "sky_view_factor = 0.2\nsubcanopy_longwave_down = sky_view_factor * longwave_cloudysky_20 + \\\n (1. 
- sky_view_factor) * surface_emissivity * sigma * T_K**4 \nnet_subcanopy_longwave = subcanopy_longwave_down - longwave_up\nprint(\"longwave down to subcanopy %.2f W/m^2\" % subcanopy_longwave_down)\nprint(\"longwave up from subcanopy %.2f W/m^2\" % longwave_up)\nprint(\"net subcanopy longwave %.2f W/m^2\" % net_subcanopy_longwave)\n", "So the subcanopy is losing longwave radiation.", "net_subcanopy = net_subcanopy_shortwave + net_subcanopy_longwave", "Finally, the subcanopy net radiation $Q^\ast = K^\ast + L^\ast$ = 14.50 $W m^{-2}$.\nQuestion 2\n<i>Use equations presented in the paper by Harder and Pomeroy (2013) to estimate precipitation phase:</i>\n<i>During a precipitation measurement of 5 mm over an hour into a remote unattended Alter-shielded weighing precipitation gauge in a forested clearing you measure an air temperature of +1.0 C, RH of 60%, with very low wind speed. What is the water equivalent depth of snowfall? What is the depth of rainfall?</i>\nHarder and Pomeroy (2013) derive a psychrometric equation that estimates precipitation phase, expressed as rainfall fraction, $f_r$, as a sigmoidal function:\n$$f_r(T_i) = \frac{1}{1 + b * c^{T_{i}}}$$\nwhere:\n$b$, $c$ = best fit coefficients, both are functions of time scale of measurement (15-min, hourly, daily)\n$T_i$ = hydrometeor temperature\nAn iterative solution for $T_i$, Appendix eq. A.5, is:\n$$T_i = T_a + \frac{D}{\lambda_t} L (\rho_{T_a} - \rho_{sat(T_i)})$$\nwhere:\n$T_a$ = air temperature $[K]$\n$D$ = diffusivity of water vapour in air $[m^2 s^{-1}]$\n$\lambda_t$ = thermal conductivity of air $[J\ m^{-1} s^{-1} K^{-1}]$\n$L$ = latent heat of vaporisation or sublimation $[J\ kg^{-1}]$\n$\rho_{T_a}$ = water vapour density in the free atmosphere $[kg\ m^{-3}]$ \n$\rho_{sat(T_i)}$ = saturated water vapour density at the hydrometeor surface $[kg\ m^{-3}]$ \nSince the wind speed is low, I will assume thermodynamic equilibrium.\nI am given $T_a$ = 1.0 °$C$ = 274.15 $K$. 
For each of the next terms: \n$D$: Following the appendix equation A.6 (Thorpe and Mason, 1966), I can estimate $D [m^2 s^{-1}]$ as a function of $T_a [K]$:\n$$D = 2.06 * 10^{-5} * \left(\frac{T_a}{273.15}\right)^{1.75}$$", "C_to_K_offset = 273.15\nTair_C = 1.0 \nTair_K = Tair_C + C_to_K_offset\nD_m2ps = 2.06e-5 * (Tair_K/273.15)**1.75\nC_to_K_offset, Tair_K, D_m2ps", "$\lambda_t$: Following appendix equation A.9 (List, 1949), I can estimate $\lambda_t\ [J m^{-1} s^{-1} K^{-1}]$ as a function of $T_a [K]$:\n$$\lambda_t = 0.000063 * T_a + 0.00673$$", "lambda_t = 0.000063 * Tair_K + 0.00673\nprint(D_m2ps, lambda_t, D_m2ps/lambda_t)", "(I wasn't sure about temperature units in $C$ or $K$, but my value for the psychrometric exchange ratio, $D/\lambda_t$, at 0 °C is consistent with the plot in Figure A1(b)). \n$L$: Since the temperature is > 0 °C, I will use heat of vaporization, $L_v [J\ kg^{-1}]$, equation A.11, as a function of $T_a [C]$:\n$$L_v = 1000(2501 - (2.361 T))$$", "Lv_Jpkg = 1000. * (2501 - (2.361 * Tair_C))\nprint(\"Latent heat of vaporization = %.2E J/kg\" % Lv_Jpkg)", "$\rho_{T_a}$, $\rho_{sat(T_i)}$: To estimate water vapour density $[kg\ m^{-3}]$ in the free atmosphere, I calculate the water vapour pressure, e $[Pa]$, using Dingman (2015), eq 3.9a, p.113:\n$$e = \frac{RH}{100} * 611\ \exp\left(\frac{17.27\ T}{T + 237.3}\right)$$\nwhere:\nRH = relative humidity = 60%\nT = air temperature, $°C$", "RH = 60.\nsaturated_vapour_pressure_Pa = 611 * np.exp((17.27 * Tair_C)/(Tair_C + 237.3))\nfree_air_vapour_pressure_Pa = (RH / 100.) 
* saturated_vapour_pressure_Pa\nprint(\"saturated vapour pressure [Pa] = %.2f Pa\" % (saturated_vapour_pressure_Pa))\nprint(\"free air vapour pressure [Pa] for %d%% RH = %.2f Pa\" % (\n RH, free_air_vapour_pressure_Pa))", "And then convert pressure to density using the ideal gas law (also from Appendix A.8):\n$$\rho = \frac{m_w e}{RT}$$\nwhere: \n$m_w$ = molecular weight of water = 0.01801528 $kg\ mol^{-1}$\n$e$ = water vapour pressure, $kPa$\n$R$ = Universal Gas Constant 8.31441 $J\ mol^{-1}\ K^{-1}$\n$T$ = temperature $K$", "mw = 0.01801528 # kg mol^-1\nR = 8.31441 # J mol^-1 K^-1\nPa_per_kPa = 1000. # conversion\nsaturated_vapour_density_kgpm3 = (\n mw * saturated_vapour_pressure_Pa / Pa_per_kPa) / (R * Tair_K)\nfree_air_vapour_density_kgpm3 = (\n mw * free_air_vapour_pressure_Pa / Pa_per_kPa) / (R * Tair_K)\nprint(\"saturated water vapour density = %.2E kg m^-3\" % (saturated_vapour_density_kgpm3))\nprint(\"free air water vapour density for %d %% RH = %.2E kg m^-3\" % (\n RH, free_air_vapour_density_kgpm3))", "Solving for hydrometeor temperature:", "T_i = Tair_C + (D_m2ps/lambda_t) * Lv_Jpkg * (\n free_air_vapour_density_kgpm3 - saturated_vapour_density_kgpm3)\nprint(\"hydrometeor temperature = %.2E deg-C\" % T_i)", "And finally, using b and c from the middle plot (hourly) of Fig. 6:", "b = 2.50286\nc = 0.125006\nrainfall_fraction = 1. / (1. + (b * c**T_i))\nprecip_mm = 5\nprint(rainfall_fraction, rainfall_fraction * precip_mm, \\\n ((1 - rainfall_fraction) * precip_mm))\n ", "So given a gauge precip measurement of 5 $mm$, I estimate it fell as 3.8 $mm$ of rainfall, and 1.2 $mm$ water equivalent from snowfall.\nQuestion 3\n<i>You are measuring air temperature and water vapour content over a natural grassland (roughness length = 0.03 m) to get an idea of how available energy is being used at this site. You have installed the minimum of two levels of measurements of wind speed, temperature and humidity. 
The daily average measurements are (heights above the ground):</i>\n<i>\n- 1-m height, wind speed is 0.6 $m\ s^{-1}$, air temperature 20 $^oC$, and vapour pressure 2.0 $kPa$;\n- 2-m height, wind speed is 0.62 $m\ s^{-1}$, air temperature 19 $^oC$, and vapour pressure 1.5 $kPa$.</i>\n<i>Assume Pa = 101.3 kPa, $\rho_a$ = 1.2 $kg\ m^{-3}$, and $c_p$ = 1005 $J\ kg^{-1} K^{-1}$ and $L_v$ = 2.454 $MJ\ kg^{-1}$.\n(a) Estimate a $z_0$ for momentum, heat and water vapour from the vegetation height. Then using bulk transfer flux-gradient calculations (example in Helgason and Pomeroy, 2005) and ignoring stability corrections find $Q_E$ and $Q_H$.\n(b) Calculate the footprint representative of these measurements which accounts for 80% of the flux. Recall that you will first need to calculate u*.</i>\nPer Helgason and Pomeroy, 2005, a typical method to estimate momentum roughness length $z_{0m}$ is to plot $ln(z)$ vs. $\bar{u}$ for neutral conditions and then determine the value of $z_{0m}$ as the y-intercept where $\bar{u} = 0$.", "import numpy as np\n\nu1 = 0.6 # m/s\nu2 = 0.62 # m/s\nz1 = 1.0 # m\nz2 = 2.0 # m\n\n# quick linear interpolation function\n# returns slope and intercept\ndef linear_model_parms(x1, y1, x2, y2):\n slope = (y2 - y1) / (x2 - x1)\n # y = mx + b ==> b = y - mx\n intercept = y1 - (slope * x1)\n return {'m':slope, 'b':intercept}\n\nparms = linear_model_parms(u1, np.log(z1), u2, np.log(z2))\nprint(\"y-intercept: ln(z) = %f\" % parms['b'])\n\n# Make the plot:\nplt.rc('text', usetex=True)\nfig, ax = plt.subplots(1)\nax.plot([u1, u2], [np.log(z1), np.log(z2)], \"co-\")\n#ax.set_title(\"$ln(z) vs. 
\\bar{u}\")\nax.set_xlabel('Average wind speed $(m\ s^{-1})$')\nax.set_ylabel(\"$ln(z)$\")\nax.set_xlim([0,0.63])\nax.set_ylim([-20, 1.0])\n\nplt.show()\nfig.savefig('HW1.3_fig1.png')", "Solving for the corresponding height:", "print(\"Momentum roughness length z_0m = %e meters\" % np.exp(parms['b']))", "So my calculated momentum roughness length is about 10 Angstroms, which seems impossibly small. But I am unsure as to what I've done wrong, here.\nTaking a different approach, the problem states that the roughness length is 0.03 m. Using Dingman eq (3.28 and 3.29), p. 122, if this roughness length is $z_0$, then the average vegetation height $z_{veg}$ = 0.3 m = 30 cm, which seems reasonable for a \"natural grassland\". And, the zero-plane displacement height, $z_d$ is therefore 0.7 * 0.3 = 0.21 m.\nUsing equation 3.30b to estimate friction velocity, $u^*$:\n$$u^* = \frac{\kappa [u(z_2) - u(z_1)]}{\ln(\frac{z_2 - z_d}{z_1 - z_d})}$$\nwhere: \n$\kappa$ = 0.4", "z_veg = 10. * 0.03\nz_d = 0.7 * z_veg\nkappa = 0.4\nfriction_u = (kappa * (u2 - u1)) / np.log((z2 - z_d) / (z1 - z_d))\nprint(\"friction velocity u* = %f m/s\" % friction_u)", "So I estimate the friction velocity $u^*$ = 0.0098 $m\ s^{-1}$.\nUsing Dingman eq. 3.35, the momentum flux is:\n$$F_M = - \rho_a\ {u^*}^2$$\nLetting \n$\rho_a$ = 1.2 $kg\ m^{-3}$\n$u^*$ = friction velocity $m\ s^{-1}$", "# units: kg/m^3 * (m/s)^2 = kg m^-1 s^-2,\n# i.e. momentum (kg m/s) per unit area per unit time\nrho_a = 1.2 # kg/m-3\nmomentum_flux = -1 * rho_a * friction_u**2\nprint(\"Momentum flux is %f kg/m/s^2\" % momentum_flux)", "To calculate latent heat flux $Q_E$, I think I would need to use Dingman eq 3.47, but I've gotten thoroughly confused and don't understand what $z_m$ is supposed to represent.\nTo calculate sensible heat flux $Q_H$, I think I would need to use Dingman eq 3.57, but again, I can't figure out what $z_m$ represents.\n(Problems 4 and 5 not attempted.)" ]
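The log-profile friction-velocity calculation above (Dingman eq. 3.30b) is easy to restate as a self-contained function; a sketch using the measurement heights and wind speeds from the problem (`friction_velocity` is our name):

```python
import math

def friction_velocity(u1, u2, z1, z2, z_d, kappa=0.4):
    """u* from two wind-speed measurements in a log profile,
    above displacement height z_d (Dingman eq. 3.30b)."""
    return kappa * (u2 - u1) / math.log((z2 - z_d) / (z1 - z_d))

u_star = friction_velocity(0.6, 0.62, 1.0, 2.0, 0.21)
print(round(u_star, 4))  # 0.0098
```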
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anilcs13m/MachineLearning_Mastering
polynomial-regression-assignment.ipynb
gpl-2.0
[ "Regression Week 3: Assessing Fit (polynomial regression)\nIn this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:\n* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed\n* Use matplotlib to visualize polynomial regressions\n* Use matplotlib to visualize the same polynomial degree on different subsets of the data\n* Use a validation set to select a polynomial degree\n* Assess the final fit using test data\nWe will continue to use the House data from previous notebooks.\nFire up graphlab create", "import graphlab", "Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.\nThe easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions. \nFor example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)", "tmp = graphlab.SArray([1., 2., 3.])\ntmp_cubed = tmp.apply(lambda x: x**3)\nprint tmp\nprint tmp_cubed", "We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. 
tmp itself).", "ex_sframe = graphlab.SFrame()\nex_sframe['power_1'] = tmp\nprint ex_sframe", "Polynomial_sframe function\nUsing the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:", "def polynomial_sframe(feature, degree):\n # assume that degree >= 1\n # initialize the SFrame:\n poly_sframe = graphlab.SFrame()\n # and set poly_sframe['power_1'] equal to the passed feature\n poly_sframe['power_1'] = feature\n\n # first check if degree > 1\n if degree > 1:\n # then loop over the remaining degrees:\n # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree\n for power in range(2, degree+1): \n # first we'll give the column a name, e.g. 'power_2':\n name = 'power_' + str(power)\n # then assign poly_sframe[name] the appropriate power of feature\n poly_sframe[name] = feature.apply(lambda x: x**power)\n\n return poly_sframe", "To test your function, consider the smaller tmp variable and what you would expect as the outcome of the following call:", "print polynomial_sframe(tmp, 3)", "Visualizing polynomial regression\nLet's use matplotlib to visualize what a polynomial regression looks like on some real data.", "sales = graphlab.SFrame('kc_house_data.gl/')", "As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.", "sales = sales.sort(['sqft_living', 'price'])", "Let's start with a degree 1 polynomial using 'sqft_living' (i.e. 
a line) to predict 'price' and plot what it looks like.", "poly1_data = polynomial_sframe(sales['sqft_living'], 1)\npoly1_data['price'] = sales['price'] # add price to the data since it's the target", "NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.", "model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)\n\n#let's take a look at the weights before we plot\nmodel1.get(\"coefficients\")\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(poly1_data['power_1'],poly1_data['price'],'.',\n poly1_data['power_1'], model1.predict(poly1_data),'-')", "Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'. \nWe can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?", "poly2_data = polynomial_sframe(sales['sqft_living'], 2)\nmy_features = poly2_data.column_names() # get the name of the features\npoly2_data['price'] = sales['price'] # add price to the data since it's the target\nmodel2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)\n\nmodel2.get(\"coefficients\")\n\nplt.plot(poly2_data['power_1'],poly2_data['price'],'.',\n poly2_data['power_1'], model2.predict(poly2_data),'-')", "The resulting model looks like half a parabola. 
Try on your own to see what the cubic looks like:", "poly3_data = polynomial_sframe(sales['sqft_living'], 3)\nmy_features3 = poly3_data.column_names() # get the name of the features\npoly3_data['price'] = sales['price'] # add price to the data since it's the target\nmodel3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features3, validation_set = None)\n\nmodel3.get(\"coefficients\")\n\nplt.plot(poly3_data['power_1'],poly3_data['price'],'.',\n poly3_data['power_1'], model3.predict(poly3_data),'-')", "Now try a 15th degree polynomial:", "poly15_data = polynomial_sframe(sales['sqft_living'], 15)\nmy_features15 = poly15_data.column_names() # get the name of the features\npoly15_data['price'] = sales['price'] # add price to the data since it's the target\nmodel15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features15, validation_set = None)\n\nplt.plot(poly15_data['power_1'],poly15_data['price'],'.',\n poly15_data['power_1'], model15.predict(poly15_data),'-')", "What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.\nChanging the data and re-learning\nWe're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.\nTo split the sales data into four subsets, we perform the following steps:\n* First split sales into 2 subsets with .random_split(0.5, seed=0). \n* Next split the resulting subsets into 2 more subsets each. 
Use .random_split(0.5, seed=0).\nWe set seed=0 in these steps so that different users get consistent results.\nYou should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.", "train_data,test_data = sales.random_split(.5,seed=0)\n\nset_1,set_2 = train_data.random_split(0.5,seed=0)\nset_3,set_4 = test_data.random_split(0.5,seed=0)", "Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.", "poly15_set1 = polynomial_sframe(set_1['sqft_living'], 15)\nmy_features15 = poly15_set1.column_names() # get the name of the features\npoly15_set1['price'] = set_1['price'] # add price to the data since it's the target\nmodelset1 = graphlab.linear_regression.create(poly15_set1, target = 'price', features = my_features15, validation_set = None)\n\nmodelset1.get('coefficients')", "Selecting a Polynomial Degree\nWhenever we have a \"magic\" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).\nWe split the sales dataset 3-way into training set, test set, and validation set as follows:\n\nSplit our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).\nFurther split our training data into two sets: training and validation. Use random_split(0.5, seed=1).\n\nAgain, we set seed=1 to obtain consistent results for different users.", "training_and_validation,testing = sales.random_split(.9, seed=1)\ntraining,validation = training_and_validation.random_split(.5, seed=1)", "Next you should write a loop that does the following:\n* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))\n * Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree\n * hint: my_features = poly_data.column_names() gives you a list e.g. 
['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)\n    * Add train_data['price'] to the polynomial SFrame\n    * Learn a polynomial regression model to sqft vs price with that degree on TRAIN data\n    * Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynomial SFrame using validation data.\n* Report which degree had the lowest RSS on validation data (remember python indexes from 0)\n(Note you can turn off the print out of linear_regression.create() with verbose = False)", "for degree in range(1, 15+1):\n    poly_data = polynomial_sframe(training['sqft_living'], degree)\n    vali_data = polynomial_sframe(validation['sqft_living'], degree)\n    test_poly_data = polynomial_sframe(testing['sqft_living'], degree)\n    poly_features = poly_data.column_names()\n    poly_data['price'] = training['price']\n    poly_model = graphlab.linear_regression.create(poly_data, target = 'price', features = poly_features,\n                                                   validation_set = None, verbose = False)\n    \n    predictions = poly_model.predict(vali_data)\n    predictions_test = poly_model.predict(test_poly_data)\n    validation_errors = predictions - validation['price']\n    test_errors = predictions_test - testing['price']\n    RSS = sum(validation_errors * validation_errors)\n    RSS_test = sum(test_errors * test_errors)\n    print \"degree : \" + str(degree) + \", RSS : \" + str(RSS) + \", RSS_test : \" \\\n        + str(RSS_test) + \", Training loss : \" + str(poly_model.get('training_loss'))", "Which degree (1, 2, …, 15) had the lowest RSS on Validation data?\nNow that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.\nWhat is the RSS on TEST data for the model with the degree selected from Validation data? (Make sure you got the correct degree from the previous question)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
grokkaine/biopycourse
day2/ML_DR.ipynb
cc0-1.0
[ "Dimension Reduction\n\nFeature selection\nFeature extraction\nPCA\nICA\nFA\n\n\nApplication: tSNE", "%matplotlib inline", "Feature selection\nEspecially popular in SNP and gene expression studies, feature selection contains a miriad of choices for selecting variables of interest. The module sklearn.feature_selection contains several feature selection possibilities, such as univariate filter selection methods and the recursive feature elimination. \nAnother class of feature selection, SelectFromModel is based on any estimator that can score features and order them by importance. Many sparse estimators such as Lasso, Logistic and Linear SVC can be used.", "from sklearn.svm import LinearSVC\nfrom sklearn.datasets import load_iris\nfrom sklearn.feature_selection import SelectFromModel\niris = load_iris()\nX, y = iris.data, iris.target\nprint(X.shape)\n\n# C controls the sparsity, try smaller for fewer features!\nlsvc = LinearSVC(C=0.01, penalty=\"l1\", dual=False).fit(X, y)\n\nmodel = SelectFromModel(lsvc, prefit=True)\nX_new = model.transform(X)\nprint(X_new.shape)", "Rather than zeroing out on features, tree based estimators would compute feature importance.", "from sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.datasets import load_iris\nfrom sklearn.feature_selection import SelectFromModel\n\niris = load_iris()\nX, y = iris.data, iris.target\nprint(X.shape)\n\nclf = ExtraTreesClassifier()\nclf = clf.fit(X, y)\nprint(clf.feature_importances_)\n\nmodel = SelectFromModel(clf, prefit=True)\nX_new = model.transform(X)\nprint(X_new.shape)\n", "Task:\n\nUse the example below to fit a tree or a forest and compare results! 
(Tip: check the scikit learn documentation on feature selection)\nEvaluate your goodness of fit using the two feature selectors and compare!", "%matplotlib inline\n\nprint(__doc__)\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.datasets import load_boston\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.linear_model import LassoCV\n\n# Load the boston dataset.\nboston = load_boston()\nX, y = boston['data'], boston['target']\n\n# We use the base estimator LassoCV since the L1 norm promotes sparsity of features.\nclf = LassoCV()\n\n# Set a minimum threshold of 0.25\nsfm = SelectFromModel(clf, threshold=0.25)\nsfm.fit(X, y)\nn_features = sfm.transform(X).shape[1]\n\n# Reset the threshold till the number of features equals two.\n# Note that the attribute can be set directly instead of repeatedly\n# fitting the metatransformer.\nwhile n_features > 2:\n sfm.threshold += 0.1\n X_transform = sfm.transform(X)\n n_features = X_transform.shape[1]\n\n# Plot the selected two features from X.\nplt.title(\n \"Features selected from Boston using SelectFromModel with \"\n \"threshold %0.3f.\" % sfm.threshold)\nfeature1 = X_transform[:, 0]\nfeature2 = X_transform[:, 1]\nplt.plot(feature1, feature2, 'r.')\nplt.xlabel(\"Feature number 1\")\nplt.ylabel(\"Feature number 2\")\nplt.ylim([np.min(feature2), np.max(feature2)])\nplt.show()\n\nprint(boston.keys())\nprint(boston['feature_names'])\nprint(boston['DESCR'])\n\n", "Having irrelevant features in your data can decrease the accuracy of many models, especially linear algorithms. 
This is partly because linear modeling doesn't work well with data where features present multiple linear dependencies (see this for more).\nThree benefits of performing feature selection before modeling your data are:\n- Reduces Overfitting: Less redundant data means less opportunity to make decisions based on noise.\n- Improves Accuracy: Less misleading data means modeling accuracy improves.\n- Reduces Training Time: Less data means that algorithms train faster.\nFeature extraction\nRather than selecting features, feature extraction creates an entirely new (transformed) feature space, having lower dimensionality.\nhttps://en.wikipedia.org/wiki/Feature_extraction\nPCA\n(Principal Component Analysis)\nMany times our data is not full rank, which means that some variables repeat as a linear combination of others. PCA will transform the dataset into a number of uncorrelated components. The first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to (i.e., uncorrelated with) the preceding components.\nFor a visual explanation look here.\nMathematically, PCA is a linear transformation of a matrix $X(n,p)$ of $p$ features, defined by a set of $m$ $p$-dimensional vectors called loadings ${w}_{(k)} = (w_1, \\dots, w_p)_{(k)}$ mapping each row vector of $X$ into a set of $m$ principal component scores ${t}_{(i)} = (t_1, \\dots, t_m)_{(i)}$, given by\n$${t_{k}}_{(i)} = {x}_{(i)} \\cdot {w}_{(k)} \\qquad \\mathrm{for} \\qquad i = 1,\\dots,n \\qquad k = 1,\\dots,m $$\nin such a way that the individual variables $t_1, \\dots, t_m$ of $t$ considered over the data set successively inherit the maximum possible variance from $x$, with each loading vector $w$ constrained to be a unit vector.
Numerically, there is a direct connection between the loading vectors and the spectral decomposition of the empirical covariance matrix $X^TX$ in that the first loading vector is the eigenvector with the largest eigenvalue in the transformation of the covariance matrix into a diagonal form.\nWith scikit-learn, PCA is part of the matrix decomposition module, and it is numerically solved via SVD decomposition.", "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\nfrom sklearn import decomposition\nfrom sklearn import datasets\n\nnp.random.seed(5)\n\ncenters = [[1, 1], [-1, -1], [1, -1]]\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\nprint(\"Shape of Iris dataset\", iris.data.shape)\n\nfig = plt.figure(1, figsize=(8, 6))\nplt.clf()\nax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)\n#ax = Axes3D(fig, elev=-150, azim=110)\n\n# feature extraction\nplt.cla()\npca = decomposition.PCA(n_components=3)\npca.fit(X)\nX_reduced = pca.transform(X)\n\n# summarize components\nprint(\"Explained Variance: \", pca.explained_variance_ratio_)\nprint(\"Principal components\", pca.components_)\nprint(\"Shape of reduced dataset\", X_reduced.shape)\n\nfor name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:\n ax.text3D(X_reduced[y == label, 0].mean(),\n X_reduced[y == label, 1].mean() + 1.5,\n X_reduced[y == label, 2].mean(), name,\n horizontalalignment='center',\n bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))\n\n# Reorder the labels to have colors matching the cluster results\ny = np.choose(y, [1, 2, 0]).astype(np.float)\nax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y,\n edgecolor='k')\n\nax.w_xaxis.set_ticklabels([])\nax.w_yaxis.set_ticklabels([])\nax.w_zaxis.set_ticklabels([])\n\nax.set_title(\"First three PCA directions\")\nax.set_xlabel(\"1st eigenvector\")\nax.w_xaxis.set_ticklabels([])\nax.set_ylabel(\"2nd 
eigenvector\")\nax.w_yaxis.set_ticklabels([])\nax.set_zlabel(\"3rd eigenvector\")\nax.w_zaxis.set_ticklabels([])\n\nplt.show()\n\nimport matplotlib.pyplot as plt\nplt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=Y)", "Task:\n- How to make the score vs loadings biplots, similar to those in R? \n- How to compute the FA loadings/scores based on the principal components?\nFirst of all, PCA is not necessarily a factor analysis subject, and the habit of using scores vs loadings doesn't exist or is different elsewhere. Finish the code bellow to add proper labels for the loadings, and compute the actual loadings.\nFurther read:\n- https://arxiv.org/pdf/1404.1100.pdf\n- https://stats.stackexchange.com/questions/119746/what-is-the-proper-association-measure-of-a-variable-with-a-pca-component-on-a/119758#119758\n- https://stackoverflow.com/questions/21217710/factor-loadings-using-sklearn/28062715#28062715", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import datasets\nfrom sklearn.decomposition import PCA\nimport pandas as pd\nfrom sklearn.preprocessing import StandardScaler\n\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\n#In general a good idea is to scale the data\nscaler = StandardScaler()\nscaler.fit(X)\nX=scaler.transform(X) \n\npca = PCA()\nx_new = pca.fit_transform(X)\n\n\ndef myplot(score,load,labels=None):\n xs = score[:,0]\n ys = score[:,1]\n n = load.shape[0]\n scalex = 1.0/(xs.max() - xs.min())\n scaley = 1.0/(ys.max() - ys.min())\n plt.scatter(xs * scalex,ys * scaley, c = y)\n for i in range(n):\n plt.arrow(0, 0, load[i,0], load[i,1],color = 'r',alpha = 0.5)\n if labels is None:\n plt.text(load[i,0]* 1.15, load[i,1] * 1.15, \"Var\"+str(i+1), color = 'g', ha = 'center', va = 'center')\n else:\n plt.text(load[i,0]* 1.15, load[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center')\nplt.xlim(-1,1)\nplt.ylim(-1,1)\nplt.xlabel(\"PC{}\".format(1))\nplt.ylabel(\"PC{}\".format(2))\nplt.grid()\n\n#Call the function. 
Use only the 2 PCs.\nmyplot(x_new[:,0:2],np.transpose(pca.components_[0:2, :]))\nplt.show()", "ICA\n(Independent Component Analysis)\nICA is an algorithm that finds directions in the feature space corresponding to projections with high non-Gaussianity. \nWhy:\n- Many types of biological datasets are non-Gaussian rather than normally distributed.\n- Continuous biological or biomedical machine signals are typically Gaussian.\n- The question is not always related to dividing data based on the highest variance.\nFurther read:\n- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3298499/\n- http://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_vs_pca.html#sphx-glr-auto-examples-decomposition-plot-ica-vs-pca-py", "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.decomposition import PCA, FastICA\n\n# two heavy-tailed Student's t sources (low degrees of freedom)\nrng = np.random.RandomState(42)  # initialize the Mersenne Twister random number generator\nS = rng.standard_t(1.5, size=(20000, 2))\nS[:, 0] *= 2.\n\n# Mix data\nA = np.array([[1, 1], [0, 2]])  # Mixing matrix\nX = np.dot(S, A.T)  # Generate observations\n\n# Train PCA and Fast ICA\npca = PCA()\nS_pca_ = pca.fit(X).transform(X)\nica = FastICA(random_state=rng)\nS_ica_ = ica.fit(X).transform(X)  # Estimate the sources\nS_ica_ /= S_ica_.std(axis=0)\n\n\n# Plot results\ndef plot_samples(S, axis_list=None):\n    plt.scatter(S[:, 0], S[:, 1], s=2, marker='o', zorder=10,\n                color='steelblue', alpha=0.5)\n    if axis_list is not None:\n        colors = ['orange', 'red']\n        for color, axis in zip(colors, axis_list):\n            axis /= axis.std()\n            x_axis, y_axis = axis\n            # Trick to get legend to work\n            plt.plot(0.1 * x_axis, 0.1 * y_axis, linewidth=2, color=color)\n            plt.quiver(0, 0, x_axis, y_axis, zorder=11, width=0.01, scale=6,\n                       color=color)\n    plt.hlines(0, -3, 3)\n    plt.vlines(0, -3, 3)\n    plt.xlim(-3, 3)\n    plt.ylim(-3, 3)\n    plt.xlabel('x')\n    plt.ylabel('y')\n\nplt.figure()\nplt.subplot(2, 2, 1)\nplot_samples(S /
S.std())\nplt.title('True Independent Sources')\n\naxis_list = [pca.components_.T, ica.mixing_]\nplt.subplot(2, 2, 2)\nplot_samples(X / np.std(X), axis_list=axis_list)\nlegend = plt.legend(['PCA', 'ICA'], loc='upper right')\nlegend.set_zorder(100)\n\nplt.title('Observations')\n\nplt.subplot(2, 2, 3)\nplot_samples(S_pca_ / np.std(S_pca_, axis=0))\nplt.title('PCA recovered signals')\n\nplt.subplot(2, 2, 4)\nplot_samples(S_ica_ / np.std(S_ica_))\nplt.title('ICA recovered signals')\n\nplt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.36)\nplt.show()", "FA\nFactor Analysis\n\nUnique variance: Some variance is unique to the variable under examination. It cannot be associated to what happens to any other variable.\nShared variance: Some variance is shared with one or more other variables, creating redundancy in the data. Redundancy implies that you can find the same information, with slightly different values, in various features and across many observations.\n\nBelow, the components_ attribute returns an array containing measures of the relationship between the newly created factors, placed in rows, and the original features, placed in columns. Higher positive scores correspond with factors being positively associated with the features. In this case, although we fit the factor model based on four components, only two components seem to fit the dataset.\nFactor analysis uses an expectation maximization (EM) optimizer to find the best Gaussian distribution that can accurately model your data within a given tolerance (the tol parameter).
In simple terms n_components is the dimensionality of the Gaussian distribution.", "from sklearn.datasets import load_iris\nfrom sklearn.decomposition import FactorAnalysis\niris = load_iris()\nX, y = iris.data, iris.target\nfactor = FactorAnalysis(n_components=4, random_state=101).fit(X)\n\nimport pandas as pd\npd.DataFrame(factor.components_,columns=iris.feature_names)", "Further read:\n- https://en.wikipedia.org/wiki/Multiple_factor_analysis\n- https://en.wikipedia.org/wiki/Factor_analysis_of_mixed_data\n- https://blog.dominodatalab.com/how-to-do-factor-analysis/\ntSNE\n\nIt converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.\n\nWhile t-SNE can be useful for DR, it is considered more useful as a visualization technique in classification problems. 
One of the prize-winning creators maintains a website with native calls to his implementation in multiple languages including python, but scikit-learn also has an implementation available.\n\nhttp://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html\nhttp://scikit-learn.org/stable/modules/manifold.html#t-sne\n\nOne of the most popular demos is the classification of the MNIST digit data.", "import numpy as np\nfrom sklearn.datasets import fetch_mldata\n\nmnist = fetch_mldata(\"MNIST original\")\nX = mnist.data / 255.0\ny = mnist.target\n\nprint(X.shape, y.shape)\n\nimport pandas as pd\n\nfeat_cols = [ 'pixel'+str(i) for i in range(X.shape[1]) ]\ndf = pd.DataFrame(X,columns=feat_cols)\ndf['label'] = y\ndf['label'] = df['label'].apply(lambda i: str(i))\n#X, y = None, None\n\nprint('Size of the dataframe: {}'.format(df.shape))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nrndperm = np.random.permutation(df.shape[0])\n# Plot the graph\nplt.gray()\nfig = plt.figure( figsize=(16,7) )\nfor i in range(0,30):\n    ax = fig.add_subplot(3,10,i+1, title='Digit: ' + str(df.loc[rndperm[i],'label']) )\n    ax.matshow(df.loc[rndperm[i],feat_cols].values.reshape((28,28)).astype(float))\nplt.show()\n\nfrom sklearn.decomposition import PCA\n\npca = PCA(n_components=3)\npca_result = pca.fit_transform(df[feat_cols].values)\n\ndf['pca-one'] = pca_result[:,0]\ndf['pca-two'] = pca_result[:,1] \ndf['pca-three'] = pca_result[:,2]\n\nprint('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))\n\nfrom ggplot import *\n\nchart = ggplot( df.loc[rndperm[:3000],:], aes(x='pca-one', y='pca-two', color='label') ) \\\n        + geom_point(size=75,alpha=0.8) \\\n        + ggtitle(\"First and Second Principal Components colored by digit\")\nchart\n\nimport time\n\nfrom sklearn.manifold import TSNE\n\nn_sne = 3000\n\ntime_start = time.time()\ntsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=250)\ntsne_results =
tsne.fit_transform(df.loc[rndperm[:n_sne],feat_cols].values)\n\nprint('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))\n\ndf_tsne = df.loc[rndperm[:n_sne],:].copy()\ndf_tsne['x-tsne'] = tsne_results[:,0]\ndf_tsne['y-tsne'] = tsne_results[:,1]\n\nchart = ggplot( df_tsne, aes(x='x-tsne', y='y-tsne', color='label') ) \\\n + geom_point(size=70,alpha=0.1) \\\n + ggtitle(\"tSNE dimensions colored by digit\")\nchart", "Task:\n- Plot the tSNE results on Seaborn or matplotlib rather than ggplot!\nFurther reading:\n- The original blogpost covering the application of t-SNE: https://medium.com/@luckylwk/visualising-high-dimensional-datasets-using-pca-and-t-sne-in-python-8ef87e7915b\n- Author website, https://lvdmaaten.github.io/tsne/\n- tSNE is notoriously difficult to interpret, for a list of caveats see https://distill.pub/2016/misread-tsne/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dvirsamuel/MachineLearningCourses
Visual Recognision - Stanford/assignment3/ImageGradients.ipynb
gpl-3.0
[ "Image Gradients\nIn this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.", "# As usual, a bit of setup\n\nimport time, os, json\nimport numpy as np\nimport skimage.io\nimport matplotlib.pyplot as plt\n\nfrom cs231n.classifiers.pretrained_cnn import PretrainedCNN\nfrom cs231n.data_utils import load_tiny_imagenet\nfrom cs231n.image_utils import blur_image, deprocess_image\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "Introducing TinyImageNet\nThe TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.\nWe have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.\nTo download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. 
Then run the following code to load the TinyImageNet-100-A dataset into memory.\nNOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.", "data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A/tiny-imagenet-100-A', subtract_mean=True)", "TinyImageNet-100-A classes\nSince ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example \"pop bottle\" and \"soda bottle\" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:", "for i, names in enumerate(data['class_names']):\n    print i, ' '.join('\"%s\"' % name for name in names)", "Visualize Examples\nRun the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.", "# Visualize some examples of the training data\nclasses_to_show = 7\nexamples_per_class = 5\n\nclass_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)\nfor i, class_idx in enumerate(class_idxs):\n    train_idxs, = np.nonzero(data['y_train'] == class_idx)\n    train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)\n    for j, train_idx in enumerate(train_idxs):\n        img = deprocess_image(data['X_train'][train_idx], data['mean_image'])\n        plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)\n        if j == 0:\n            plt.title(data['class_names'][class_idx][0])\n        plt.imshow(img)\n        plt.gca().axis('off')\n\nplt.show()", "Pretrained model\nWe have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization.
The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).\nTo get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.", "model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')", "Pretrained model performance\nRun the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.", "batch_size = 100\n\n# Test the model on training data\nmask = np.random.randint(data['X_train'].shape[0], size=batch_size)\nX, y = data['X_train'][mask], data['y_train'][mask]\ny_pred = model.loss(X).argmax(axis=1)\nprint 'Training accuracy: ', (y_pred == y).mean()\n\n# Test the model on validation data\nmask = np.random.randint(data['X_val'].shape[0], size=batch_size)\nX, y = data['X_val'][mask], data['y_val'][mask]\ny_pred = model.loss(X).argmax(axis=1)\nprint 'Validation accuracy: ', (y_pred == y).mean()", "Saliency Maps\nUsing this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].\nAs mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.\nYou will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.\n[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 
\"Deep Inside Convolutional Networks: Visualising\nImage Classification Models and Saliency Maps\", ICLR Workshop 2014.", "def indices_to_one_hot(data, nb_classes):\n \"\"\"Convert an iterable of indices to one-hot encoded labels.\"\"\"\n targets = np.array(data).reshape(-1)\n return np.eye(nb_classes)[targets]\n\ndef compute_saliency_maps(X, y, model):\n \"\"\"\n Compute a class saliency map using the model for images X and labels y.\n \n Input:\n - X: Input images, of shape (N, 3, H, W)\n - y: Labels for X, of shape (N,)\n - model: A PretrainedCNN that will be used to compute the saliency map.\n \n Returns:\n - saliency: An array of shape (N, H, W) giving the saliency maps for the input\n images.\n \"\"\"\n saliency = None\n ##############################################################################\n # TODO: Implement this function. You should use the forward and backward #\n # methods of the PretrainedCNN class, and compute gradients with respect to #\n # the unnormalized class score of the ground-truth classes in y. 
#\n ##############################################################################\n out, cache = model.forward(X)\n dout = indices_to_one_hot(y,100)\n dX, grads = model.backward(dout, cache)\n saliency = np.max(np.abs(dX),axis=1)\n ##############################################################################\n # END OF YOUR CODE #\n ##############################################################################\n return saliency", "Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.", "def show_saliency_maps(mask):\n mask = np.asarray(mask)\n X = data['X_val'][mask]\n y = data['y_val'][mask]\n\n saliency = compute_saliency_maps(X, y, model)\n\n for i in xrange(mask.size):\n plt.subplot(2, mask.size, i + 1)\n plt.imshow(deprocess_image(X[i], data['mean_image']))\n plt.axis('off')\n plt.title(data['class_names'][y[i]][0])\n plt.subplot(2, mask.size, mask.size + i + 1)\n plt.title(mask[i])\n plt.imshow(saliency[i])\n plt.axis('off')\n plt.gcf().set_size_inches(10, 4)\n plt.show()\n\n# Show some random images\nmask = np.random.randint(data['X_val'].shape[0], size=5)\nshow_saliency_maps(mask)\n \n# These are some cherry-picked images that should give good results\nshow_saliency_maps([128, 3225, 2417, 1640, 4619])", "Fooling Images\nWe can also use image gradients to generate \"fooling images\" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. 
Implement the following function to generate fooling images.\n[2] Szegedy et al, \"Intriguing properties of neural networks\", ICLR 2014", "def make_fooling_image(X, target_y, model):\n \"\"\"\n Generate a fooling image that is close to X, but that the model classifies\n as target_y.\n \n Inputs:\n - X: Input image, of shape (1, 3, 64, 64)\n - target_y: An integer in the range [0, 100)\n - model: A PretrainedCNN\n \n Returns:\n - X_fooling: An image that is close to X, but that is classified as target_y\n by the model.\n \"\"\"\n X_fooling = X.copy()\n ##############################################################################\n # TODO: Generate a fooling image X_fooling that the model will classify as #\n # the class target_y. Use gradient ascent on the target class score, using #\n # the model.forward method to compute scores and the model.backward method #\n # to compute image gradients. #\n # #\n # HINT: For most examples, you should be able to generate a fooling image #\n # in fewer than 100 iterations of gradient ascent.
#\n ##############################################################################\n #current_loss, grads = model.loss(X_fooling,target_y)\n scores, cache = model.forward(X_fooling)\n i = 0\n while scores.argmax() != target_y:\n print(i,scores.argmax(),target_y)\n dout = indices_to_one_hot(target_y,100)\n dX, grads = model.backward(dout, cache)\n X_fooling += 200 * dX\n scores, cache = model.forward(X_fooling)\n i += 1\n \n ##############################################################################\n # END OF YOUR CODE #\n ##############################################################################\n return X_fooling", "Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.", "# Find a correctly classified validation image\nwhile True:\n i = np.random.randint(data['X_val'].shape[0])\n X = data['X_val'][i:i+1]\n y = data['y_val'][i:i+1]\n y_pred = model.loss(X)[0].argmax()\n if y_pred == y: break\n\ntarget_y = 67\nX_fooling = make_fooling_image(X, target_y, model)\n\n# Make sure that X_fooling is classified as y_target\nscores = model.loss(X_fooling)\nassert scores[0].argmax() == target_y, 'The network is not fooled!'\n\n# Show original image, fooling image, and difference\nplt.subplot(1, 3, 1)\nplt.imshow(deprocess_image(X, data['mean_image']))\nplt.axis('off')\nplt.title(data['class_names'][y][0])\nplt.subplot(1, 3, 2)\nplt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))\nplt.title(data['class_names'][target_y][0])\nplt.axis('off')\nplt.subplot(1, 3, 3)\nplt.title('Difference')\nplt.imshow(deprocess_image(X - X_fooling, data['mean_image']))\nplt.axis('off')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
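The fooling-image loop in the notebook above depends on the assignment's PretrainedCNN class, but the gradient-ascent idea itself can be checked on a toy stand-in. In the sketch below, the weight matrix `W`, the 3-feature/3-class layout, and the step size `lr` are all invented for illustration — they are not part of the original assignment; for a linear model the gradient of the target class score with respect to the input is just the target column of `W`.

```python
import numpy as np

# Hypothetical stand-in "model": 3 input features, 3 classes.
W = np.array([[ 2.0, -1.0,  0.5],
              [-1.0,  2.0,  0.5],
              [ 0.0,  0.0,  1.0]])

def scores(x):
    """Unnormalized class scores, playing the role of model.forward."""
    return x @ W

def make_fooling_example(x, target_y, lr=0.5, max_iters=100):
    """Gradient ascent on the target class score until the argmax flips."""
    x = x.copy()
    for _ in range(max_iters):
        if scores(x).argmax() == target_y:
            break
        x += lr * W[:, target_y]  # d(score[target_y])/dx for a linear model
    return x

x0 = np.array([1.0, 0.0, 0.0])        # scores [2, -1, 0.5] -> class 0
x_fool = make_fooling_example(x0, 1)  # push it toward class 1
print(scores(x_fool).argmax())        # 1
```

The real notebook does the same thing with `model.forward`/`model.backward` supplying the scores and gradients; only the gradient computation changes, not the loop.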
mastertrojan/Udacity
embeddings/.ipynb_checkpoints/Skip-Gram word2vec-checkpoint.ipynb
mit
[ "Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.", "import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils", "Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. 
The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()", "Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.", "words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))", "And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on.
The words are converted to integers and stored in the list int_words.", "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]", "Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.", "## Your code here\nfrom collections import Counter\nimport random\nimport re\n\nthreshold = 1e-5\nword_counts = Counter(int_words)\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in word_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\ntrain_words = [word for word in int_words if p_drop[word] < random.random()]
\nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.", "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n # Your code here\n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n \n return list(target_words)", "Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. 
This is a generator function by the way, helps save memory.", "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ", "Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.", "train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None])\n labels = tf.placeholder(tf.int32, [None, 1])
So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:\n\nYou don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.\n<img src=\"assets/word2vec_weight_matrix_lookup_table.png\" width=500>\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.", "n_vocab = len(int_to_vocab)\nn_embedding = 300 # Number of embedding features \nwith train_graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))# create embedding weight matrix here\n embed = tf.nn.embedding_lookup(embedding, inputs)# use tf.nn.embedding_lookup to get the hidden layer output
We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.", "# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here\n softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)", "Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints", "Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.", "epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. 
Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)", "Restore the trained network if you need to:", "with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)", "Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
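The subsampling formula used in the skip-gram notebook above can be exercised in isolation. The toy two-word corpus and the `threshold=0.05` below are made up purely for demonstration (the notebook itself uses `1e-5` on the full text8 corpus); the point is that the more frequent word gets the higher drop probability.

```python
import numpy as np
from collections import Counter

def drop_probabilities(int_words, threshold=1e-5):
    """P(drop w_i) = 1 - sqrt(t / f(w_i)), Mikolov's subsampling formula."""
    counts = Counter(int_words)
    total = len(int_words)
    return {w: 1 - np.sqrt(threshold / (c / total)) for w, c in counts.items()}

# Toy corpus: word 0 appears 90% of the time, word 1 only 10%.
corpus = [0] * 90 + [1] * 10
p = drop_probabilities(corpus, threshold=0.05)
print(round(p[0], 3), round(p[1], 3))  # 0.764 0.293
```

Keeping each word with probability `1 - p[word]`, as the notebook's `train_words` comprehension does, then thins "the"-like words aggressively while leaving rare words almost untouched.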
deculler/MachineLearningTables
Chapter5.ipynb
bsd-2-clause
[ "Chapter 5 - Resampling methods\nConcepts and data from \"An Introduction to Statistical Learning, with applications in R\" (Springer, 2013) with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani \" available at www.StatLearning.com.\nFor Tables reference see http://data8.org/datascience/tables.html", "# HIDDEN\n# For Tables reference see http://data8.org/datascience/tables.html\n# This useful nonsense should just go at the top of your notebook.\nfrom datascience import *\n%matplotlib inline\nimport matplotlib.pyplot as plots\nimport numpy as np\nfrom sklearn import linear_model\nplots.style.use('fivethirtyeight')\nplots.rc('lines', linewidth=1, color='r')\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n# datascience version number of last run of this notebook\nversion.__version__\n\n\nimport sys\nsys.path.append(\"..\")\nfrom ml_table import ML_Table\n\nimport locale\nlocale.setlocale( locale.LC_ALL, 'en_US.UTF-8' ) ", "5.1 Cross validation\n5.1.1 The Validation Set Approach\nTrain on a random half the data, test on the other half.", "raw_auto = ML_Table.read_table(\"data/Auto.csv\")\nauto = raw_auto.where(raw_auto['horsepower'] != '?')\nauto['horsepower'] = auto.apply(int, 'horsepower')\n\nauto\n\nauto_poly = auto.poly('mpg', 'horsepower', 2)\n\n# training error\n\nauto.MSE_model('mpg', auto_poly, 'horsepower')\n\nTable().with_columns([\n ('degree', range(1,10)),\n ('Training error', [auto.MSE_model('mpg', auto.poly('mpg', 'horsepower', degree), 'horsepower') for degree in range(1,10)])\n]).plot('degree', height=3, width=4)\n\ndef poly_cross(tbl, output_label, input_label, max_degree):\n train, test = tbl.split(tbl.num_rows//2)\n return [test.MSE_model(output_label, train.poly(output_label, input_label, deg), input_label) for deg in range(1,max_degree)\n ]\n\npoly_cross(auto, 'mpg', 'horsepower', 10)\n\nauto_poly = Table().with_column('degree', range(1,10))\nfor k in range(10):\n auto_poly = 
auto_poly.with_column(\"run \"+str(k), poly_cross(auto, 'mpg', 'horsepower', 10))\nauto_poly\n\nauto_poly.plot('degree')\nplots.ylim(10,30)", "Leave-one-out cross validation - LOOCV\n$CV_n = \\frac{1}{n} \\sum_{i=1}^{n}MSE_i$\nwhere $MSE_i$ is the test error on the $i$-th row when trained on the remaining $n-1$ rows. It is slow.", "auto.take(13)\n\ndef LOOCV_poly(tbl, output_label, input_labels, degree):\n n = tbl.num_rows\n def split(i):\n return tbl.exclude(i), tbl.take(i)\n MSEs = [test.MSE_model(output_label, \n train.poly(output_label, input_labels, degree),\n input_labels) for train, test in [split(i) for i in range(n)]\n ]\n return np.sum(MSEs)/n\n\nLOOCV_poly(auto, 'mpg', 'horsepower', 1)\n\n# This takes a while with 392 * 9 poly fits\nauto_poly_loocv = Table().with_column('degree', range(1,10))\nauto_poly_loocv['LOOCV'] = [LOOCV_poly(auto, 'mpg', 'horsepower', deg) for deg in auto_poly_loocv['degree'] ]\nauto_poly_loocv\n\nauto_poly_loocv.plot('degree')\n\nauto.num_rows", "k-fold cross validation\n$CV_k = \\frac{1}{k} \\sum_{i=1}^{k}MSE_i$\nwhere the data is broken into $k$ groups, trained on $k-1$ of them and tested on the remaining.", "auto.take(range(9,13))\n\ndef k_split(tbl, i, k):\n n = tbl.num_rows\n nk = n//k\n fold = range(i*nk, (i+1)*nk)\n return tbl.exclude(fold), tbl.take(fold)\n\ndef k_fold_poly(tbl, output_label, input_labels, degree, k):\n MSEs = [test.MSE_model(output_label, \n train.poly(output_label, input_labels, degree),\n input_labels) for train, test in [k_split(tbl, i, k) for i in range(k)]\n ]\n return np.sum(MSEs)/k\n\nk_split(auto, 0, 10)\n\nk_fold_poly(auto, 'mpg', 'horsepower', 1, 10)\n\nauto_poly_k_fold = Table().with_column('degree', range(1,10))\nauto_poly_k_fold['k fold'] = [k_fold_poly(auto, 'mpg', 'horsepower', deg, 10) for deg in auto_poly_k_fold['degree'] ]\nauto_poly_k_fold\n\nauto_poly_k_fold.plot('degree')", "Cross validation with classification", "n = 400\neps = 0.1\ntest2 = ML_Table.runiform('ix', n)\ntest2['iy'] = 
np.random.rand(n)\ntest2['Cat'] = test2.apply(lambda x, y: 'A' if x+y <0 else 'B', ['ix', 'iy'])\ntest2['Class A'] = test2.apply(lambda x: 1 if x=='A' else 0, 'Cat')\ntest2['x'] = test2['ix'] + eps*np.random.normal(size=n)\ntest2['y'] = test2['iy'] + eps*np.random.normal(size=n)\n\nlogit2d = test2.logit_regression('Class A', ['x', 'y'])\nmodel_2d = logit2d.model\nax = test2.plot_cut_2d('Class A', 'x', 'y', model_2d, n_grid=50)\ntest2.classification_error_model('Class A', model_2d, ['x', 'y'])\n\ntrain, test = test2.split(n//2)\nclassifier = train.logit_regression('Class A', ['x', 'y'])\nax = test.plot_cut_2d('Class A', 'x', 'y', classifier.model, n_grid=50)\ntest.classification_error_model('Class A', classifier.model, ['x', 'y'])\n\ntrain, test = test2.split(n//2)\nclassifier = train.knn_regression('Class A', ['x', 'y'], n_neighbors=5)\nax = test.plot_cut_2d('Class A', 'x', 'y', classifier.model, n_grid=50)\ntest.classification_error_model('Class A', classifier.model, ['x', 'y'])\n\ntrain, test = test2.split(n//2)\nclassifier = train.LDA('Class A', ['x', 'y'])\nax = test.plot_cut_2d('Class A', 'x', 'y', classifier.model, n_grid=50)\ntest.classification_error_model('Class A', classifier.model, ['x', 'y'])", "K-Fold Cross Validation", "def k_split(tbl, i, k):\n n = tbl.num_rows\n nk = n//k\n fold = range(i*nk, (i+1)*nk)\n return tbl.exclude(fold), tbl.take(fold)\n\ndef k_error(i, k, tbl, classifier, output_label, input_labels, **kwargs):\n train, test = k_split(tbl, i, k)\n return test.classification_error_model(output_label, \n classifier(train, output_label, input_labels, **kwargs).model,\n input_labels)\n\ndef k_fold(k, tbl, classifier, output_label, input_labels, **kwargs):\n return [k_error(i, k, tbl, classifier, output_label, input_labels, **kwargs) for i in range(k)]\n\nk_error(0, 10, test2, ML_Table.LDA, 'Class A', ['x', 'y'])\n\nk_fold(10, test2, ML_Table.logit_regression, 'Class A', ['x', 'y'])\n\nk_fold(10, test2, ML_Table.knn_regression, 'Class A', ['x', 
'y'])\n\nk_fold(10, test2, ML_Table.LDA, 'Class A', ['x', 'y'])\n\n[np.mean(k_fold(10, test2, ML_Table.logit_regression, 'Class A', ['x', 'y'])),\nnp.mean(k_fold(10, test2, ML_Table.knn_regression, 'Class A', ['x', 'y'])),\nnp.mean(k_fold(10, test2, ML_Table.LDA, 'Class A', ['x', 'y']))]", "Bootstrap\nUsing the bootstrap, we can compare the accuracy of our various classifiers", "def boot(tbl, classifier, output_label, input_labels, **kwargs):\n test = tbl.sample(tbl.num_rows, with_replacement=True)\n return test.classification_error_model(output_label, \n classifier(test, output_label, input_labels, **kwargs).model,\n input_labels)\n\ndef bootstrap_classifier(k, tbl, classifier, output_label, input_labels, **kwargs):\n return [boot(tbl, classifier, output_label, input_labels, **kwargs) for i in range(k)]\n\nboot(test2, ML_Table.logit_regression, 'Class A', ['x', 'y'])\n\ntest2_classifiers = Table()\ntest2_classifiers['Logit Error'] = bootstrap_classifier(100, test2, ML_Table.logit_regression, 'Class A', ['x', 'y'])\ntest2_classifiers['LDA Error'] = bootstrap_classifier(100, test2, ML_Table.LDA, 'Class A', ['x', 'y'])\ntest2_classifiers['KNN Error'] = bootstrap_classifier(100, test2, ML_Table.knn_regression, 'Class A', ['x', 'y'])\ntest2_classifiers.hist()\n\ntest2_classifiers.stats(ops=[min, np.mean, np.median, max])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
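The $CV_k$ formula from the resampling notebook above can be exercised without ML_Table at all. The sketch below uses the simplest possible model — a constant (mean) predictor — and a hand-made four-point dataset; both are illustrative assumptions, but the fold arithmetic mirrors the notebook's `k_split`, where `n//k` truncation drops rows past the last full fold.

```python
def k_fold_mse(y, k):
    """CV_k for a constant (mean) predictor: average test MSE over k
    contiguous folds, mirroring k_split's n//k fold size."""
    n = len(y)
    nk = n // k
    mses = []
    for i in range(k):
        test = y[i * nk:(i + 1) * nk]
        train = y[:i * nk] + y[(i + 1) * nk:]
        mu = sum(train) / len(train)
        mses.append(sum((v - mu) ** 2 for v in test) / len(test))
    return sum(mses) / k

# Two folds [0, 0] and [2, 2]: each fold is predicted by the other
# fold's mean, so every squared error is (0-2)^2 = 4.
print(k_fold_mse([0, 0, 2, 2], k=2))  # 4.0
```

Swapping the mean predictor for `tbl.poly(...)` and the lists for table rows recovers the notebook's `k_fold_poly`.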
LimeeZ/phys292-2015-work
assignments/phys202-project/project/NeuralNetworks.ipynb
mit
[ "Neural Networks\nThis project was created by Brian Granger. All content is licensed under the MIT License.\n\nIntroduction\nNeural networks are a class of algorithms that can learn how to compute the value of a function given previous examples of the functions output. Because neural networks are capable of learning how to compute the output of a function based on existing data, they generally fall under the field of Machine Learning.\nLet's say that we don't know how to compute some function $f$:\n$$ f(x) \\rightarrow y $$\nBut we do have some data about the output that $f$ produces for particular input $x$:\n$$ f(x_1) \\rightarrow y_1 $$\n$$ f(x_2) \\rightarrow y_2 $$\n$$ \\ldots $$\n$$ f(x_n) \\rightarrow y_n $$\nA neural network learns how to use that existing data to compute the value of the function $f$ on yet unseen data. Neural networks get their name from the similarity of their design to how neurons in the brain work.\nWork on neural networks began in the 1940s, but significant advancements were made in the 1970s (backpropagation) and more recently, since the late 2000s, with the advent of deep neural networks. These days neural networks are starting to be used extensively in products that you use. A great example of the application of neural networks is the recently released Flickr automated image tagging. With these algorithms, Flickr is able to determine what tags (\"kitten\", \"puppy\") should be applied to each photo, without human involvement.\nIn this case the function takes an image as input and outputs a set of tags for that image:\n$$ f(image) \\rightarrow {tag_1, \\ldots} $$\nFor the purpose of this project, good introductions to neural networks can be found at:\n\nThe Nature of Code, Daniel Shiffman.\nNeural Networks and Deep Learning, Michael Nielsen.\nData Science from Scratch, Joel Grus\n\nThe Project\nYour general goal is to write Python code to predict the number associated with handwritten digits. 
The dataset for these digits can be found in sklearn:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.html.widgets import interact\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()\nprint(digits.data.shape)\n\ndef show_digit(i):\n plt.matshow(digits.images[i]);\n\ninteract(show_digit, i=(0,100));", "The actual, known values (0,1,2,3,4,5,6,7,8,9) associated with each image can be found in the target array:", "digits.target", "Here are some of the things you will need to do as part of this project:\n\nSplit the original data set into two parts: 1) a training set that you will use to train your neural network and 2) a test set you will use to see if your trained neural network can accurately predict previously unseen data.\nWrite Python code to implement the basic building blocks of neural networks. This code should be modular and fully tested. While you can look at the code examples in the above resources, your code should be your own creation and be substantially different. 
One way of ensuring your code is different is to make it more general.\nCreate appropriate data structures for the neural network.\nFigure out how to initialize the weights of the neural network.\nWrite code to implement forward and back propagation.\nWrite code to train the network with the training set.\n\nYour base question should be to get a basic version of your code working that can predict handwritten digits with an accuracy that is significantly better than that of random guessing.\nHere are some ideas of questions you could explore as your two additional questions:\n\nHow to specify, train and use networks with more hidden layers.\nThe best way to determine the initial weights.\nMaking it all fast to handle more layers and neurons per layer (%timeit and %%timeit).\nExplore different ways of optimizing the weights/output of the neural network.\nTackle the full MNIST benchmark of $10,000$ digits.\nHow different sigmoid functions affect the results.\n\nImplementation hints\nThere are optimization routines in scipy.optimize that may be helpful.\nYou should use NumPy arrays and fast NumPy operations (dot) everywhere that is possible." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
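Since the core deliverable of the neural-network project above is forward and back propagation, a useful sanity check is comparing analytic gradients against central-difference numerical ones. The tiny one-hidden-layer network below is only a sketch: the layer sizes, squared-error loss, and sigmoid activations are assumptions for illustration, not the project's required design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    """x -> sigmoid(W1 x) -> sigmoid(W2 h)."""
    h = sigmoid(W1 @ x)
    return h, sigmoid(W2 @ h)

def backprop(x, t, W1, W2):
    """Gradients of 0.5 * ||y - t||^2 with respect to W1 and W2."""
    h, y = forward(x, W1, W2)
    delta2 = (y - t) * y * (1 - y)           # error at the output layer
    delta1 = (W2.T @ delta2) * h * (1 - h)   # error pushed back to the hidden layer
    return np.outer(delta1, x), np.outer(delta2, h)

rng = np.random.default_rng(0)
x, t = rng.normal(size=3), np.array([1.0, 0.0])
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
dW1, dW2 = backprop(x, t, W1, W2)

# Central-difference check on one weight of W1.
loss = lambda W: 0.5 * np.sum((forward(x, W, W2)[1] - t) ** 2)
eps = 1e-6
Wp, Wm = W1.copy(), W1.copy()
Wp[1, 2] += eps
Wm[1, 2] -= eps
numeric = (loss(Wp) - loss(Wm)) / (2 * eps)
print(abs(numeric - dW1[1, 2]) < 1e-6)  # True
```

The same check extends weight-by-weight to `W2`, and is cheap insurance before training on the full digits dataset.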
phoebe-project/phoebe2-docs
development/tutorials/plotting_advanced.ipynb
gpl-3.0
[ "Advanced: Plotting Options\nFor basic plotting usage, see the plotting tutorial\nPHOEBE 2.4 uses autofig 1.1 as an intermediate layer for high-end functionality to matplotlib.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"", "This first line is only necessary for ipython notebooks - it allows the plots to be shown on this page instead of in interactive mode. Depending on your version of Jupyter, Python, and matplotlib - you may or may not need this line in order to see plots in the notebook.", "%matplotlib inline\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\n\nlogger = phoebe.logger()", "First we're going to create some fake observations so that we can show how to plot observational data. In real life, we would use something like np.loadtxt to get arrays from a data file instead.", "b = phoebe.default_binary()\nb.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))\nb.run_compute(irrad_method='none')\n\ntimes = b.get_value('times', context='model')\nfluxes = b.get_value('fluxes', context='model') + np.random.normal(size=times.shape) * 0.01\nsigmas = np.ones_like(times) * 0.05", "Now we'll create a new Bundle and attach an orbit dataset (without observations) and a light curve dataset (with our \"fake\" observations - see Datasets for more details):", "b = phoebe.default_binary()\nb.set_value('q', 0.8)\nb.set_value('ecc', 0.1)\nb.set_value('irrad_method', 'none')\n\nb.add_dataset('orb', compute_times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])\nb.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')", "And run several forward models. 
See Computing Observables for more details.", "b.set_value(qualifier='incl', kind='orbit', value=90)\nb.run_compute(model='run_with_incl_90') \n\nb.set_value(qualifier='incl', kind='orbit', value=85)\nb.run_compute(model='run_with_incl_85')\n\nb.set_value(qualifier='incl', kind='orbit', value=80)\nb.run_compute(model='run_with_incl_80')", "Time (highlight and uncover)\nThe built-in plot method also provides convenience options to either highlight the interpolated point for a given time, or only show the dataset up to a given time.\nHighlight\nThe highlight option is enabled by default so long as a time (or times) is passed to plot. It simply adds an extra marker at the sent time - interpolating in the synthetic model if necessary.", "afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, show=True)", "To change the style of the \"highlighted\" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.", "afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, \n highlight_marker='s', \n highlight_color='g', \n highlight_ms=20, \n show=True)", "To disable highlighting, simply send highlight=False", "afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, \n highlight=False, \n show=True)", "Uncover\nUncover shows the observations or synthetic model up to the provided time and is disabled by default, even when a time is provided, but is enabled simply by providing uncover=True. There are no additional options available for uncover.", "afig, mplfig = b['orb@run_with_incl_80'].plot(time=0.5, \n uncover=True, \n show=True)", "Units\nLikewise, each array that is plotted is automatically plotted in its default units. 
To override these defaults, simply provide the unit (as a string or as an astropy units object) for a given axis.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU', show=True)", "WARNING: when plotting two arrays with the same dimensions, PHOEBE attempts to set the aspect ratio to equal, but overriding to use two different units will produce undesired results. This may be fixed in the future, but for now can be avoided by using consistent units for the x and y axes when they have the same dimensions.\nAxes Labels\nAxes labels are automatically generated from the qualifier of the array and the plotted units. To override these defaults, simply pass a string for the label of a given axis.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS', show=True)", "Axes Limits\nAxes limits are determined by the data automatically. To set custom axes limits, either use matplotlib methods on the returned axes objects, or pass limits as a list or tuple.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2), show=True)", "Errorbars\nIn the case of observational data, errorbars can be added by passing the name of the column.", "afig, mplfig = b['lc01@dataset'].plot(yerror='sigmas', show=True)", "To disable the errorbars, simply set yerror=None.", "afig, mplfig = b['lc01@dataset'].plot(yerror=None, show=True)", "Colors\nColors of points and lines, by default, cycle according to matplotlib's color policy. To manually set the color, simply pass a matplotlib recognized color to the 'c' keyword.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(c='r', show=True)", "In addition, you can point to an array in the dataset to use as color.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', show=True)", "Choosing colors works slightly differently for meshes (ie you can set fc for facecolor and ec for edgecolor). 
For more details, see the tutorial on the MESH dataset.\nColormaps\nThe colormap is determined automatically based on the parameter used for coloring (i.e. RVs will be a red-blue colormap). To override this, pass a matplotlib recognized colormap to the cmap keyword.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', cmap='spring', show=True)", "Adding a Colorbar\nTo add a colorbar (or sizebar, etc), send draw_sidebars=True to the plot call.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', draw_sidebars=True, show=True)", "Labels and Legends\nTo add a legend, include legend=True.\nFor details on placement and formatting of the legend see matplotlib's documentation.", "afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True)", "The legend labels are generated automatically, but can be overridden by passing a string to the label keyword.", "afig, mplfig = b['primary@orb@run_with_incl_80'].plot(label='primary')\nafig, mplfig = b['secondary@orb@run_with_incl_80'].plot(label='secondary', legend=True, show=True)", "To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend", "afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True, legend_kwargs={'loc': 'center', 'facecolor': 'r'})", "Other Plotting Options\nValid plotting options that are directly passed to matplotlib include:\n- linestyle\n- marker\nNote that sizes (markersize, linewidth) should be handled by passing the size to 's' and attempting to set markersize or linewidth directly will raise an error. See also the autofig documentation on size scales.", "afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', s=0.1, show=True)", "3D Axes\nTo plot in 3d, simply pass projection='3d' to the plot call. 
To override the defaults for the z-direction, pass a twig or array just as you would for x or y.", "afig, mplfig = b['orb@run_with_incl_80'].plot(time=0, projection='3d', show=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kimkipyo/dss_git_kkp
Python 복습/16일차.금_nosql/16일차_2T_데이터 분석을 위한 MongoDB (2)_실습.ipynb
mit
[ "1T_Connecting", "from pymongo import MongoClient\n\nclient = MongoClient(\n \"mongodb://mongodb.dobest.io:27017/\"\n)\n\nclient.database_names()\n\nclient2 = MongoClient(\n \"mongodb://kipoy:rlarlvy@ds013024.mlab.com:13024/kkpoy\"\n # username: kipoy\n # password: rlarlvy\n # database name: kkpoy\n)\n\nclient2\n\nclient2.database_names() # fails because we don't have admin privileges\n\nclient2.kkpoy.collection_names()\n\nclient2.kkpoy.users.find_one() # originally JSON, but it comes back as a dict; pymongo converts it all for us", "2T_Inserting and searching data", "db = client2.kkpoy\n# db = client[\"kkpoy\"]\n# the two lines above mean the same thing\n\nusers = db.users\n# users = db[\"users\"]\n# the two lines above mean the same thing\n\nusers # users is the collection\n\nusers.find_one()\n\nMONGO_QUERY = {} # send the query as JSON ( dict => json; pymongo )\n\nusers.find_one(MONGO_QUERY)\n\nMONGO_QUERY = {}\nusers.find(MONGO_QUERY)\n\nMONGO_QUERY = {}\ncursor = users.find(MONGO_QUERY)\n\nfor user in cursor:\n print(user)\n\n[\n user\n for user\n in cursor\n]\n\nMONGO_QUERY = {\"name\": \"Dani Kim\"}\n\nusers.find_one(MONGO_QUERY)", "Use find_ when searching\nUse insert_ when inserting", "user = {\n \"name\": \"이동규\",\n \"email\": \"milk@gmail.com\",\n}\nusers.insert_one(user)\n\nusers_list = [\n {\n \"name\": \"이영은\",\n \"phone\": \"01022706512\",\n },\n {\n \"name\": \"유영수\",\n \"email\": \"yskk@naver.com\"\n }\n]\nusers.insert_many(users_list)\n\nwatcha = client2.kipoy[\"watcha\"]\n\nwatcha = db.watcha\n\nimport requests\n\nresponse = requests.get(\"https://watcha.net/home/news.json?page=1&per=300\")\n\nwatcha_news = response.json().get(\"news\") # List\n\nwatcha.insert_many(watcha_news)", "Zigbang Collection => try inserting about 100 records", "import pandas as pd\nimport requests\nimport json\n\nzigbang = client2.kipoy[\"zigbang\"]\n\nzigbang = db.zigbang\n\nzigbang\n\nresponse = 
requests.get(\"https://api.zigbang.com/v1/items?detail=true&item_ids=4620292&item_ids=4366382&item_ids=4566963&item_ids=4585208&item_ids=4560308&item_ids=4552724&item_ids=4344484&item_ids=4612042&item_ids=4574810&item_ids=4588687&item_ids=4387287&item_ids=4538842&item_ids=4557985&item_ids=4579464&item_ids=4607349&item_ids=4603203&item_ids=4341393&item_ids=4575315&item_ids=4350877&item_ids=4538375&item_ids=4616443&item_ids=4281504&item_ids=4556024&item_ids=4550034&item_ids=4512172&item_ids=4507118&item_ids=4606156&item_ids=4457169&item_ids=4526327&item_ids=4407071&item_ids=4582264&item_ids=4607937&item_ids=4395275&item_ids=4568603&item_ids=4569329&item_ids=4564865&item_ids=4551098&item_ids=4617261&item_ids=4536918&item_ids=4614718&item_ids=4614198&item_ids=4610604&item_ids=4578711&item_ids=4593621&item_ids=4612621&item_ids=4518874&item_ids=4533169&item_ids=4409063&item_ids=4617602&item_ids=4477945&item_ids=4249606&item_ids=4560223&item_ids=4570020&item_ids=4517907&item_ids=4530774&item_ids=4525210&item_ids=4596138&item_ids=4588994&item_ids=4612357&item_ids=4411862\")\n\nzigbang_dict = response.json()\n\nlen(zigbang_dict)\n\nzigbang_dict.keys()\n\nzigbang_items_list = zigbang_dict.get(\"items\")\n\nlen(zigbang_items_list)\n\nzigbang.insert_many(zigbang_items_list)\n\nMONGO_QUERY = {}\nzigbang.find(MONGO_QUERY).count()\n#SELECT COUNT(*) FROM ____;", "보증금이 딱 1000만원인 매물 리스트를 출력해보자", "MONGO_QUERY = {}\ncursor = zigbang.find(MONGO_QUERY)\nitems_list = [\n document\n for document\n in cursor\n if document.get(\"item\").get(\"deposit\") == 1000\n]\n\nlen(items_list)\n\nMONGO_QUERY = {\"title\": \"서울시 관악구 신림동\"}\nzigbang.find_one(MONGO_QUERY)\n\nMONGO_QUERY = {\"item.deposit\": 1000} #관계형 DB의 관점에서는 말도 안되는 기능. 
쉽게 찾을 수 있다.\nzigbang.find(MONGO_QUERY).count()", "Fetching listings with a deposit of at least 10 million won", "MONGO_QUERY = {}\ncursor = zigbang.find(MONGO_QUERY)\n\nitems_list = [\n document\n for document\n in cursor\n if document.get(\"item\").get(\"deposit\") >= 1000\n]\nlen(items_list)\n\nMONGO_QUERY = {\"item.deposit\": {\"$gte\": 1000}} # greater than or equal to 1000\nzigbang.find(MONGO_QUERY).count()\n\nMONGO_QUERY = {\"item.deposit\": {\"$gte\": 1000, \"$lte\": 2000}} \nzigbang.find(MONGO_QUERY).count()", "Printing listings with a deposit of at most 20 million won and monthly rent of at most 500,000 won", "MONGO_QUERY = {}\ncursor = zigbang.find(MONGO_QUERY)\n\nitems_list = [\n document\n for document\n in cursor\n if document.get(\"item\").get(\"deposit\") <= 2000 and document.get(\"item\").get(\"rent\") <= 50\n]\nlen(items_list)\n\nMONGO_QUERY = {\n \"item.deposit\": {\"$lte\": 2000},\n \"item.rent\": {\"$lte\": 50},\n }\nzigbang.find(MONGO_QUERY).count()\n\nzigbang_list = zigbang_dict.get(\"items\")\n\nlen(zigbang_list)\n\nnew_zigbang_list = [\n item.get(\"item\")\n for item\n in zigbang_list\n]\n\nnew_zigbang_list[0]\n\nzigbang.insert_many(new_zigbang_list)\n\nMONGO_QUERY = {\"deposit\": {\"$lte\": 1000}}\nzigbang.find(MONGO_QUERY).count()", "With unstructured data, the important thing is to think hard about what data to insert and in what shape,\nbecause inserting it exactly as received can lead to inefficiencies.\n\n3T_Aggregate ( GROUP BY )\n\nGROUP BY ... counts per region / average deposit and rent per region ( MongoDB )\n\nWorld => MongoDB ( data preprocessing ... )\n\n\naggregate ( GROUP BY )\n\nzigbang ... 
let's split it by address1 (group it by address1)", "import pymongo\n\n#GROUP BY\ncursor = zigbang.aggregate([\n {\n \"$group\": {\n \"_id\": \"$address1\",\n }\n }\n ])\nfor document in cursor:\n print(document)\n\ncursor = zigbang.aggregate([\n {\n \"$group\": {\n \"_id\": \"$address1\",\n \"count\": {\"$sum\": 1}\n }\n }\n ])\nfor document in cursor:\n print(document)\n\ncursor = zigbang.aggregate([\n {\n \"$group\": {\n \"_id\": \"$address1\",\n \"count\": {\"$sum\": 1},\n \"total deposit\": {\"$sum\": \"$deposit\"}\n }\n }\n ])\nfor document in cursor:\n print(document)\n\ncursor = zigbang.aggregate([\n {\n \"$group\": {\n \"_id\": \"$address1\",\n \"count\": {\"$sum\": 1},\n \"average rent\": {\"$avg\": \"$rent\"},\n }\n }\n ])\nfor document in cursor:\n print(document)\n\ncursor = zigbang.aggregate([\n {\n \"$group\": {\n \"_id\": \"$address1\",\n \"count\": {\"$sum\": 1},\n \"average rent\": {\"$avg\": \"$rent\"},\n \"average deposit\": {\"$avg\": \"$deposit\"}\n }\n }\n ])\nfor document in cursor:\n print(document)\n\ndb = client.dobestan\n\ndb.collection_names()\n\nrestaurants = db.restaurants # data provided by MongoDB\n\nrestaurants.find().count()\n\ncursor = restaurants.aggregate([\n {\n \"$group\": {\n \"_id\": \"$borough\",\n } \n }\n])\nfor document in cursor:\n print(document)\n\ncursor = restaurants.aggregate([\n {\n \"$group\": {\n \"_id\": \"$borough\",\n \"count\": {\"$sum\": 1},\n } \n }\n])\nfor document in cursor:\n print(document)", "When moving a relational DB into MongoDB, what is the best way to move it?\nLet's move the country and city DataFrames with pandas", "import pandas as pd\nimport pymysql\n\ndb = pymysql.connect(\n \"db.fastcamp.us\",\n \"root\",\n \"dkstncks\",\n \"world\",\n charset='utf8',\n)\n\n# db = MySQLdb.connect(\n# \"192.168.0.199\",\n# \"root\",\n# \"rlarlvy\",\n# \"sakila\",\n# charset='utf8',\n# )\n# => not sure why this doesn't work.\n\ncountry_df = pd.read_sql(\"SELECT * FROM Country;\", db)\ncity_df = pd.read_sql(\"SELECT * FROM City;\", db)\n\ncountry_df.columns\n\ncity_df.columns\n\n# List of countries.\n[\n {\n 
\"Name\": \"Republic of Korea\",\n \"cities\": [ #List of cities\n {\n \"Name\": \"Seoul\",\n \"Population\": \"100000000\",\n },\n {\n \"Name\": \"Busan\",\n }\n \n ]\n }\n]\n\ncountry_df.to_json()\n\ncountry_df.to_dict()\n\ncity_groups = city_df.groupby(\"CountryCode\")\n\ncountry_list = []\n\n\nfor index, row in country_df.iterrows():\n country_dict = {}\n \n country_code = row[\"Code\"]\n# city_group_df = city_groups.get_group(country_code) same as above\n country_dict[\"name\"] = row[\"Name\"]\n country_dict[\"population\"] = row[\"Population\"]\n country_dict[\"cities\"] = [] # initialize an empty list under the key \"cities\"\n \n if country_code in city_df[\"CountryCode\"].unique():\n city_group_df = city_groups.get_group(country_code)\n \n for city_index, city_row in city_group_df.iterrows():\n city_dict = {}\n city_dict[\"name\"] = city_row[\"Name\"]\n city_dict[\"population\"] = city_row[\"Population\"]\n country_dict[\"cities\"].append(city_dict)\n \n country_list.append(country_dict)\n\ncountry_list\n\nlen(country_list)\n\nworld = client.kipoy[\"world\"]\n\nworld\n\nworld.insert_many(country_list)", "What do we treat as the unit of a Document? Earlier, in MongoDB, we used Country.\n\n\nCan we pull out a list of the cities with a population of 100,000 or more? Not with a MongoDB query alone,\n\nbecause each individual document is keyed on country. If we had wanted city as the unit, in MongoDB\nthis would have been the better approach.", "city = {\n \"name\": \"Seoul\",\n \"population\": 10000000,\n \"country\": {\n \"name\": \"Korea\",\n \"population\": 50000000\n }\n}", "What is the potential problem? Since the city is the unit, Korea gets embedded in Seoul and again in Busan.\nBut in MongoDB this is the right way to implement it.\n\nSo what matters is inserting the data well and deciding what the unit of a Document should be. What we just saw used City as the unit.\n\n\nConverting relational data into non-relational form just to insert it is something we almost never do.\n\nBut when we use something like an API, we can't access the tables anyway, and the information we receive is non-relational data.\nZigbang, of course, accumulates its information in a relational DB,\nbut since we only consume the API, we don't know the relations. That is why we use MongoDB.\nThe hard problem: if we received 10 million Zigbang listings, converting them from non-relational to relational is difficult.\nThe easy, comfortable way to use MongoDB is to just take what you receive, process the data a little, and store it" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gkc1000/pyscf
pyscf/nao/notebook/AWS/example-ase-siesta-pyscf-ch4-dens-change-gpu.ipynb
apache-2.0
[ "Easy Ab initio calculation with ASE-Siesta-Pyscf\nNo installation necessary, just download a ready to go container for any system, or run it into the cloud\nWe first import the necessary libraries and define the system using ASE", "# import libraries and set up the molecule geometry\n\nfrom ase.units import Ry, eV, Ha\nfrom ase.calculators.siesta import Siesta\nfrom ase import Atoms\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom timeit import default_timer as timer\n\nfrom ase.build import molecule\n\nCH4 = molecule(\"CH4\")\n\n# visualization of the particle\nfrom ase.visualize import view\nview(CH4, viewer='x3d')", "We can then run the DFT calculation using Siesta", "# enter siesta input and run siesta\nsiesta = Siesta(\n mesh_cutoff=150 * Ry,\n basis_set='DZP',\n pseudo_qualifier='lda',\n energy_shift=(10 * 10**-3) * eV,\n fdf_arguments={\n 'SCFMustConverge': False,\n 'COOP.Write': True,\n 'WriteDenchar': True,\n 'PAO.BasisType': 'split',\n 'DM.Tolerance': 1e-4,\n 'DM.MixingWeight': 0.1,\n 'MaxSCFIterations': 300,\n 'DM.NumberPulay': 4,\n 'XML.Write': True})\n\nCH4.set_calculator(siesta)\ne = CH4.get_potential_energy()", "The TDDFT calculations with PySCF-NAO", "# compute polarizability using pyscf-nao\n\nfreq = np.arange(0.0, 15.0, 0.05)\n\nt1 = timer()\nsiesta.pyscf_tddft(label=\"siesta\", jcutoff=7, iter_broadening=0.15/Ha,\n xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = freq)\nt2 = timer()\nprint(\"CPU timing: \", t2-t1)\n\ncpu_pol = siesta.results[\"polarizability inter\"]\n\nt1 = timer()\nsiesta.pyscf_tddft(label=\"siesta\", jcutoff=7, iter_broadening=0.15/Ha,\n xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = freq, GPU=True)\nt2 = timer()\nprint(\"GPU timing: \", t2-t1)\n\ngpu_pol = siesta.results[\"polarizability inter\"]\n\n# plot polarizability with matplotlib\n%matplotlib inline\n\nfig = plt.figure(1, figsize=(16, 9))\nax1 = fig.add_subplot(121)\nax2 = fig.add_subplot(122)\nax1.plot(siesta.results[\"freq range\"], 
siesta.results[\"polarizability nonin\"][:, 0, 0].imag)\nax2.plot(siesta.results[\"freq range\"], cpu_pol[:, 0, 0].imag)\nax2.plot(siesta.results[\"freq range\"], gpu_pol[:, 0, 0].imag, \"--\")\n\nax1.set_xlabel(r\"$\\omega$ (eV)\")\nax2.set_xlabel(r\"$\\omega$ (eV)\")\n\nax1.set_ylabel(r\"Im($P_{xx}$) (au)\")\nax2.set_ylabel(r\"Im($P_{xx}$) (au)\")\n\nax1.set_title(r\"Non interacting\")\nax2.set_title(r\"Interacting\")\n\nfig.tight_layout()", "Compute the spatial distributoin of the density change at resonance frequency", "res = 10.5/Ha \nlim = 20.0 # Bohr\nbox = np.array([[-lim, lim],\n [-lim, lim],\n [-lim, lim]])\nfrom pyscf.nao.m_comp_spatial_distributions import spatial_distribution\n\nspd = spatial_distribution(siesta.results[\"density change inter\"], freq/Ha, box, label=\"siesta\")\nspd.get_spatial_density(10.5/Ha)\n\ncenter = np.array([spd.dn_spatial.shape[0]/2, spd.dn_spatial.shape[1]/2, spd.dn_spatial.shape[2]/2], dtype=int)\n\nfig2 = plt.figure(2, figsize=(15, 12))\n\ncmap=\"seismic\"\nax1 = fig2.add_subplot(1, 3, 1)\nvmax = np.max(abs(spd.dn_spatial[center[0], :, :].imag))\nvmin = -vmax\nax1.imshow(spd.dn_spatial[center[0], :, :].imag, interpolation=\"bicubic\", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[1][0], spd.mesh[1][spd.mesh[1].shape[0]-1], spd.mesh[2][0], spd.mesh[2][spd.mesh[2].shape[0]-1]])\n\nax2 = fig2.add_subplot(1, 3, 2)\nvmax = np.max(abs(spd.dn_spatial[:, center[1], :].imag))\nvmin = -vmax\nax2.imshow(spd.dn_spatial[:, center[1], :].imag, interpolation=\"bicubic\", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[0][0], spd.mesh[0][spd.mesh[0].shape[0]-1], spd.mesh[2][0], spd.mesh[2][spd.mesh[2].shape[0]-1]])\n\nax3 = fig2.add_subplot(1, 3, 3)\nvmax = np.max(abs(spd.dn_spatial[:, :, center[2]].imag))\nvmin = -vmax\nax3.imshow(spd.dn_spatial[:, :, center[2]].imag, interpolation=\"bicubic\", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[0][0], spd.mesh[0][spd.mesh[0].shape[0]-1], spd.mesh[1][0], 
spd.mesh[1][spd.mesh[1].shape[0]-1]])\n\nax1.set_xlabel(r\"y (Bohr)\")\nax2.set_xlabel(r\"x (Bohr)\")\nax3.set_xlabel(r\"x (Bohr)\")\n\nax1.set_ylabel(r\"z (Bohr)\")\nax2.set_ylabel(r\"z (Bohr)\")\nax3.set_ylabel(r\"y (Bohr)\")\n\nax1.set_title(r\"Im($\\delta n$) in the $x$ plane\")\nax2.set_title(r\"Im($\\delta n$) in the $y$ plane\")\nax3.set_title(r\"Im($\\delta n$) in the $z$ plane\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cornhundred/ipywidgets
docs/source/examples/Image Processing.ipynb
bsd-3-clause
[ "Image Manipulation with skimage\nThis example builds a simple UI for performing basic image manipulation with scikit-image.", "# Stdlib imports\nfrom io import BytesIO\n\n# Third-party libraries\nfrom IPython.display import Image\nfrom ipywidgets import interact, interactive, fixed\nimport matplotlib as mpl\nfrom skimage import data, filters, io, img_as_float\nimport numpy as np", "Let's load an image from scikit-image's collection, stored in the data module. These come back as regular numpy arrays:", "i = img_as_float(data.coffee())\ni.shape", "Let's make a little utility function for displaying Numpy arrays with the IPython display protocol:", "def arr2img(arr):\n \"\"\"Display a 2- or 3-d numpy array as an image.\"\"\"\n if arr.ndim == 2:\n format, cmap = 'png', mpl.cm.gray\n elif arr.ndim == 3:\n format, cmap = 'jpg', None\n else:\n raise ValueError(\"Only 2- or 3-d arrays can be displayed as images.\")\n # Don't let matplotlib autoscale the color range so we can control overall luminosity\n vmax = 255 if arr.dtype == 'uint8' else 1.0\n with BytesIO() as buffer:\n mpl.image.imsave(buffer, arr, format=format, cmap=cmap, vmin=0, vmax=vmax)\n out = buffer.getvalue()\n return Image(out)\n\narr2img(i)", "Now, let's create a simple \"image editor\" function, that allows us to blur the image or change its color balance:", "def edit_image(image, sigma=0.1, R=1.0, G=1.0, B=1.0):\n new_image = filters.gaussian_filter(image, sigma=sigma, multichannel=True)\n new_image[:,:,0] = R*new_image[:,:,0]\n new_image[:,:,1] = G*new_image[:,:,1]\n new_image[:,:,2] = B*new_image[:,:,2]\n return arr2img(new_image)", "We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image:", "edit_image(i, sigma=5, R=0.1)", "But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. 
IPython's ipywidgets package lets us do that with a minimal amount of code:", "lims = (0.0,1.0,0.01)\ninteract(edit_image, image=fixed(i), sigma=(0.0,10.0,0.1), R=lims, G=lims, B=lims);", "Browsing the scikit-image gallery, and editing grayscale and jpg images\nThe coffee cup isn't the only image that ships with scikit-image, the data module has others. Let's make a quick interactive explorer for this:", "def choose_img(name):\n # Let's store the result in the global `img` that we can then use in our image editor below\n global img\n img = getattr(data, name)()\n return arr2img(img)\n\n# Skip 'load' and 'lena', two functions that don't actually return images\ninteract(choose_img, name=sorted(set(data.__all__)-{'lena', 'load'}));", "And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale':", "lims = (0.0, 1.0, 0.01)\n\ndef edit_image(image, sigma, R, G, B):\n new_image = filters.gaussian_filter(image, sigma=sigma, multichannel=True)\n if new_image.ndim == 3:\n new_image[:,:,0] = R*new_image[:,:,0]\n new_image[:,:,1] = G*new_image[:,:,1]\n new_image[:,:,2] = B*new_image[:,:,2]\n else:\n new_image = G*new_image\n return arr2img(new_image)\n\ninteract(edit_image, image=fixed(img), sigma=(0.0, 10.0, 0.1), \n R=lims, G=lims, B=lims);", "Python 3 only: Function annotations and unicode identifiers\nIn Python 3, we can use the new function annotation syntax to describe widgets for interact, as well as unicode names for variables such as sigma. 
Note how this syntax also lets us define default values for each control in a convenient (if slightly awkward looking) form: var:descriptor=default.", "lims = (0.0, 1.0, 0.01)\n\n@interact\ndef edit_image(image: fixed(img), σ:(0.0, 10.0, 0.1)=0, \n R:lims=1.0, G:lims=1.0, B:lims=1.0):\n new_image = filters.gaussian_filter(image, sigma=σ, multichannel=True)\n if new_image.ndim == 3:\n new_image[:,:,0] = R*new_image[:,:,0]\n new_image[:,:,1] = G*new_image[:,:,1]\n new_image[:,:,2] = B*new_image[:,:,2]\n else:\n new_image = G*new_image\n return arr2img(new_image)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rainyear/pytips
Tips/2016-03-09-List-Comprehension.ipynb
mit
[ "0x03 - Python List Comprehensions\nThe map/filter methods mentioned in 0x02 can quickly build the lists (or other iterables) we need through a simplified syntax. Python also provides the similar list comprehension syntax. When I first learned Python, I treated this syntax as just syntactic sugar for quickly building particular lists; only later, while learning Haskell, did I learn that this form is called List Comprehension (I haven't found a fixed Chinese translation; it gets rendered as 列表速构, 列表解析 and the like, but the idea is always to derive the list's structure from rules when defining it, rather than enumerating all of its elements).\nList comprehensions resemble the set-builder notation of mathematics; for example, the set of even numbers in $[0, 10)$ can be written as:\n$$\\left\\{x\\ |\\ x \\in N, x \\lt 10, x\\ mod\\ 2\\ ==\\ 0\\right\\}$$\nTranslated into a Python expression:", "evens = [x for x in range(10) if x % 2 == 0]\nprint(evens)", "This has the same effect as filter:", "fevens = filter(lambda x: x % 2 == 0, range(10))\nprint(list(fevens) == evens)", "Likewise, a list comprehension can do what map does:", "squares = [x ** 2 for x in range(1, 6)]\nprint(squares)\n\nmsquares = map(lambda x: x ** 2, range(1, 6))\nprint(list(msquares) == squares)", "By comparison, the list comprehension syntax is more intuitive, so the more Pythonic style is to avoid map/filter whenever a list comprehension will do.\nBesides the simple iteration and filtering above, list comprehensions also support nested structures:", "cords = [(x, y) for x in range(3) for y in range(3) if x > 0]\nprint(cords)\n\n# equivalent to\nlcords = []\nfor x in range(3):\n for y in range(3):\n if x > 0:\n lcords.append((x, y))\n \nprint(lcords == cords)", "Dict and Set Comprehensions\nThe comparison above highlights the advantage of list comprehensions, but once the nesting exceeds two loops their readability also drops sharply, so with more deeply nested loops the plain loop syntax is still recommended.\nBesides lists (List), Python also supports comprehensions for dictionaries (Dict) and sets (Set):", "dns = {domain : ip\n for domain in [\"github.com\", \"git.io\"]\n for ip in [\"23.22.145.36\", \"23.22.145.48\"]}\nprint(dns)\n\nnames = {name for name in [\"ana\", \"bob\", \"catty\", \"octocat\"] if len(name) > 3}\nprint(names)", "Generators\nBesides using the yield keyword inside a function, the generators (Generator) mentioned in 0x01 can be created in one other, hidden way: applying the comprehension syntax to a tuple (Tuple):", "squares = (x for x in range(10) if x % 2 == 0)\nprint(squares)\n\nprint(next(squares))\nnext(squares)\n\nfor i in squares:\n print(i)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CNS-OIST/STEPS_Example
user_manual/source/well_mixed.ipynb
gpl-2.0
[ "Well-Mixed Reaction Systems\nThe simulation script described in this chapter is available at STEPS_Example repository.\nIn this chapter, we'll use some simple classical reaction systems as examples\nto introduce the basics of using STEPS. More specifically, we'll focus on reaction\nsystems that occur in a single, well-mixed reaction volume. The topics presented\nin later chapters (such as surface-volume interactions, diffusion, 3D environments,\netc) will build on the material presented in this chapter.\nIn our first STEPS simulation, we'll be working with the following simple system,\nwhich consists of a single reversible reaction:\n\\begin{equation}\nA+B\\underset{{k_{b}}}{\\overset{{k_{f}}}{{\\rightleftarrows}}}C\n\\end{equation}\nwith 'forward' and 'backward' reaction constants $k_{f}$ and $k_{b}$,\nrespectively.\nModel Specification\nThe first thing we need to do, is to write some Python code that “passes”\nthis equation on to STEPS. This is called model specification, which in\nSTEPS consists of building a hierarchy of Python objects that list the species\noccurring in your model, their relevant chemical and physical properties and\ntheir interactions. As explained in the chapter introduction, here we deal only\nwith sets of reaction rules that occur together within one single chemical volume.\nModel container\nThe first step in model specification is to import package steps.model.\nThis package contains all the definitions of the objects and functions you need\nto describe the physics and chemistry of your model within STEPS. This entire\npackage has been written in c++ and exposed to Python through SWIG (Simplified\nWrapper and Interface Generator) and Cython, like most packages in STEPS. 
We import the package\nusing an alias, smodel, to reduce the required amount of typing (a common\nconvention in Python):", "import steps.model as smodel", "smodel now refers to the steps.model Python module containing the class\ndefinitions.\nNext, we're going to create a top-level container object for our\nmodel (steps.model.Model). This top level container is required for\nall simulations in STEPS but itself does not contain much information and\nmerely acts as a hub that allows the other objects in the model specification\nto reference each other. In the code listing below, we store our Model object\nin variable mdl. When you create an object in Python information inside the\nparenthesis is passed onto the class constructor. Each constructor requires\nspecific information, though some information can be omitted and will be given\ndefault values, as we will see. However, for a steps.model.Model object, the\nconstructor does not require any information at all:", "mdl = smodel.Model()", "Species\nOur next task is to enumerate all the chemical species that can occur in the model.\nThis means creating a number of objects of type steps.model.Spec and passing them\non to the steps.model.Model container. For our simple reaction the above equation,\nwe create\nthree steps.model.Spec objects (molA, molB, and molC) corresponding to our\nthree chemical species:", "molA = smodel.Spec('molA', mdl)\nmolB = smodel.Spec('molB', mdl)\nmolC = smodel.Spec('molC', mdl)", "The initializer of class steps.model.Spec requires two arguments: first an\nidentifier string that can be used later on to refer to this object. This\nidentifier string has to be unique among all species objects. It's important\nto distinguish between the Python variable we use to store the reference to\nthe newly created object on the one hand (e.g. molA), and the identifier\nstring on the other (e.g. 'molA'). In this example they bear the same name,\nbut this is not necessary. 
These identifier strings are a common requirement\nfor STEPS objects at this level and we will see when and how they are necessary\nlater in this chapter, when describing geometry and performing simulations with\nour model.\nWe should note at this point that our object reference variables should be\nnamed differently also, though Python will allow you to reuse the same name\n(one could even use the same name to reference objects of different type because\na variable in Python does not have to reference a specific type, as is the case\nin c++ for example). So this, for example, does not result in an error in Python:", "spec = smodel.Spec('mol1', mdl)\nspec = smodel.Spec('mol2', mdl)", "and since the identifier strings are different this is not a STEPS error either.\nHowever, in the above code in the first line spec at first references the\n'molA' object, but in the second line the object spec references changes\nto the 'molB' object, and the reference to the 'molA' object is lost.\nThese object references are required when defining the species' interactions,\nas we will see, so as a rule in STEPS all variables should be given a unique\nname so that no object references are lost. Actually, we could use container\nmethods to return references to objects, but let's keep things simple for now.\nThe second argument in the steps.model.Spec initializer is an object reference\nto the model we just created (stored in variable mdl). This will allow the steps.model.Spec\ninitializer to add itself to the steps.model.Model container.\nVolume System\nNext, we will create a volume system:", "vsys = smodel.Volsys('vsys', mdl)", "Volume systems (objects of class steps.model.Volsys) are container\nobjects that group a number of stoichiometric reaction rules\n(in later chapters we'll see how diffusion rules can also be added\nto these volume systems). 
The user has the option of grouping all\nreactions in the entire system into one single big volume system,\nor using multiple volume systems to organize reaction rules that\nbelong together. The second option may be preferred for larger models,\nbut for our simple example we only require one volume system.\nThe arguments for the steps.model.Volsys initializer are the same\nas for steps.model.Spec:\nThe first argument must be an identifier string, which can be used\nfor future referencing. This identifier must be unique among all volume\nsystems in the model. The second argument is the reference to the steps.model.Model\nparent object of which this steps.model.Volsys will be a child.\nReactions\nFinally, we need to create the reaction rules themselves.\nIn STEPS a single reversible reaction has to be regarded as two separate\nreaction rules; the first rule corresponding to the forward reaction and\nthe second rule to the backward reaction. So for our simple model in\nthe above equation , we have to create two objects of class steps.model.Reac\nand add them to the steps.model.Volsys object we just created:", "kreac_f = smodel.Reac('kreac_f', vsys, lhs=[molA, molB], rhs=[molC], kcst=0.3e6)\nkreac_b = smodel.Reac('kreac_b', vsys, lhs=[molC], rhs=[molA, molB])\nkreac_b.kcst = 0.7", "The initializer for steps.model.Reac can be provided with a bit more information than the\ninitializers for the other objects until now. Aside from the required identifier string (which is checked\nto be unique among all reactions in all volume systems) and a required reference\nto the steps.model.Volsys object to which this reaction will be added, we can also specify\nreaction stoichiometry at this stage (alternatively we can create the object with\nthe minimum information and set the stoichiometry with object methods). This\nstoichiometry is specified by two Python lists:\n\n\nA list called lhs, which gives the left-hand side of the stoichiometry\n (i.e. the reactants). 
If a reactant occurs more than once, as can be the\n case in e.g. a dimerization reaction, the steps.model.Spec object has to be listed the\n required number of times.\n\n\nA list called rhs, which gives the right-hand side of the stoichiometry\n (i.e. the reaction products). The same remarks that applied for parameter\n lhs apply here.\n\n\nThe lists must contain references to the required steps.model.Spec objects\n(and not identifier strings), so we can see why it was important\nnot to lose these object references when we created our steps.model.Spec objects.\nBoth lists can also be empty e.g. lhs=[] or rhs=[] (this is the default\nbehavior if lists are not supplied to the constructor, but can be changed\nwith object methods setLHS and setRHS). Care should be used in the case of\nempty lists because either situation could break physical laws such as the\nconservation of mass, although they are available because they can be useful\nfor some simulation approximations. If the left-hand side is empty, we have a\nzero-order reaction that acts as a source, i.e. it creates molecules “out of\nthin air”. If the right-hand side is empty, we have a sink reaction that\nmerely destroys molecules. Obviously, within one single reaction rule,\nit doesn't make sense to set both lhs and rhs to an empty list.\nWe can also already set the default rate constants for both the forward\nand backward reaction, by manipulating the kcst property of the Reac objects.\nAs shown above, these rate constants can be initialized as a parameter during\nobject construction, or by using object methods after the object has been\ncreated, which is common to many properties of objects in STEPS. Note: This is an example of an object property: kcst is a property of our Reac object. In the code above, kreac_b.kcst = 0.7 is an indirect call to object method kreac_b.setKcst(0.7). 
For more information on available property functions see API References.\nThese rate constants can also be changed later on during the simulation, but values\ngiven here will be used as default values when a simulation state is initialized.\nGenerally speaking, physical constants in STEPS must be specified in SI units.\nThe SI derived unit for volume is the cubic metre, which means that the SI derived unit for concentration is moles per cubic metre, and reaction constants would then be based on cubic metres, i.e. a second-order reaction constant would have units of metres cubed per mole-second ($m^{3}\left(mol.s\right)^{-1}$). However, the convention in chemical kinetics is to base reaction parameters on Molar units (M = mol/litre), i.e. on the litre rather than the cubic metre, and this convention is followed in STEPS. The actual interpretation of the unit of a reaction rule depends on the order of that reaction,\nin other words on the number of species on the left-hand side. The constant for a zero-order reaction in STEPS has units $M.s^{-1}$; a first-order reaction rule has units $s^{-1}$; for a second-order reaction the units are $\left(M.s\right)^{-1}$; for a third-order reaction $\left(M^{2}.s\right)^{-1}$; and so on (while there is no upper limit on the order of the reaction when working with Reac objects within\nthe context of package steps.model, STEPS simulators will not deal with any\nreaction rule that has an order larger than 4). These units are not strictly SI units; however, all parameters other than reaction constants in STEPS must be given in base or derived SI units, which includes the unit of $m^{3}$ for volume. Note: The units for a zero-order reaction have changed from previous versions of STEPS ($s^{-1}$)\n so as to follow convention. 
Zero-order reactions are NOT permitted on membranes (surface\n reactions, see later chapters) due to the ambiguity of the interpretation of the units.\nFinally, the full Python code of our model description looks like this:", "import steps.model as smodel\nmdl = smodel.Model()\nmolA = smodel.Spec('molA', mdl)\nmolB = smodel.Spec('molB', mdl)\nmolC = smodel.Spec('molC', mdl)\nvolsys = smodel.Volsys('vsys', mdl)\nkreac_f = smodel.Reac('kreac_f', volsys, lhs=[molA, molB], rhs=[molC], kcst = 0.3e6)\nkreac_b = smodel.Reac('kreac_b', volsys, lhs=[molC], rhs=[molA, molB])\nkreac_b.kcst = 0.7", "Notice that we have said nothing about the actual geometry of our model at\nthis point, nor have we said anything related to the simulation itself\n(initial conditions, special events during the simulation, etc.).\nWe have just created a hierarchy of Python objects that describes\nthe interactions between chemical species and we have done this on a\nrather abstract level.\nPreparing geometry for well-mixed simulation\nBefore we can start doing simulations, we need to say something about\nthe environment in which our reactions will occur. Specifically, we need\nto describe the volume compartments in which reactions take place, and sometimes\nalso the surface patches around or in between these compartments (patches are described in more detail in the next chapter). We then link\neach of these compartments with one or more of the volume systems defined\nin the kinetic model, in a process called annotation. There are currently\ntwo types of geometry that can be specified in STEPS:\n\n\nWell-mixed geometry. In this type of geometry description, compartments are described\n only by their volume in cubic metres and patches by their area in\n square metres and connectivity to compartments. Nothing is said\n about the actual shape.\n\n\nTetrahedral mesh geometry. 
In this type of geometry, a compartment is a collection of 3D tetrahedral\n voxels and a patch is a 2D section between compartments, composed of the\n triangular surfaces connecting tetrahedrons.\n\n\nWe will talk about tetrahedral meshes (and their relationship with\nwell-mixed geometry) in the chapter on Simulating Diffusion in Volumes.\nIn this chapter, however, we will restrict ourselves to well-mixed geometry,\nbecause we will only use the well-mixed stochastic solver. Specifying a\nwell-mixed compartment that can be used together with the kinetic model\nfrom the previous section is very easy. First, we need to import the STEPS\nmodule that contains the objects used to define the geometry, namely steps.geom:", "import steps.geom as swm", "Like before, we give the steps.geom module an alias swm, simply to reduce later\ntyping. Next we generate a parent container object that will collect and store\nthe actual compartments. The purpose of this object is in many ways similar to\nthe purpose of the steps.model.Model object we discussed in the previous section,\nand the constructor does not require any information:", "wmgeom = swm.Geom()", "Finally, the actual compartment we need for simulating our model must be created:", "comp = swm.Comp('comp', wmgeom)\ncomp.addVolsys('vsys')\ncomp.setVol(1.6667e-21)", "Since our model is very simple, we only create one compartment, an object of\ntype steps.geom.Comp, and we store it in the variable called comp.\nThe initializer takes two arguments here: first a unique identifier string\n(that will once again be used later on, during actual simulation) and a\nreference to the container object. Since we only have one compartment,\nwe use the rather unimaginative identifier comp.\nThe second line corresponds to the annotation, which in this case is very simple.\nIt links the compartment we just created with a volume system that carries the\nidentifier 'vsys'. 
At this stage, only the string is stored in the Comp object.\nIn other words, STEPS makes no attempt to resolve the link by searching for a\nsteps.model.Volsys object that has the identifier 'vsys'. In fact, STEPS\ncouldn't resolve the link at this point, because the kinetic model and the\ngeometric model remain completely separated in memory. They will remain\nseparated until the time we create an actual simulation; that is the point\nwhere these cross references between kinetic model and geometry will be resolved.\nThis “workflow” enables us to build several kinetic model descriptions and geometry\ndescriptions separately, and put them together as needed for simulation. The only\nrequirement for any combination of kinetic model and geometry to work is that\nthe volume systems referenced from the geometry have been defined in the\nkinetic model. An error will result when creating the simulation object\n(which we will do next) if any compartment contains a reference to a volume\nsystem that is unknown in the model description.\nThe third line sets the volume of the compartment. Once again, SI units must be\nused, meaning that the volume is specified in $m^{3}$. Compartment\n'comp' therefore has a volume of $1.6667\cdot10^{-3}\mu m^{3}$.\nThis parameter can be set in the steps.geom.Comp object initializer, explicitly with the\nsetVol method (as above), or with the property function vol (i.e.\ncomp.vol = 1.6667e-21).\nSimulation with Wmdirect\nWith all this in place, we can finally start performing simulations.\nSince STEPS is a set of Python packages and extensions, simulations\ncan either be fully scripted and run automatically, or they can be\ncontrolled interactively from the Python prompt. 
In this text, we'll\njust run a simulation “automatically” from beginning to end, without any\ninteractive input.\nThe simulator (or solver) we'll be using here is the Wmdirect solver.\nWmdirect is\nan implementation of Gillespie's Direct Method (see Gillespie, Exact stochastic simulation of coupled chemical reactions, J Phys Chem 1977, 81:2340-2361) for stochastic simulation and\nhas the following properties:\n\n\nIt's a well-mixed solver, meaning that you will need to present\n it with well-mixed geometry. Note: if you present a well-mixed solver in STEPS with a tetrahedral\n mesh, the solver will automatically extract the well-mixed properties\n (i.e. the volumes of compartments, the areas of patches and their connectivity)\n from the mesh. Well-mixed solvers have no\n concept of concentration gradients within a given compartment, but rather\n assume that all molecules in any given compartment are kept uniformly\n distributed by elastic (non-reactive) collisions between reaction events.\n Therefore there is also no concept of diffusion within a compartment.\n However, we will later see that even in simulations with well-mixed solvers,\n it is possible to implement diffusive fluxes in between compartments,\n by linking them with patches.\n\n\nIt's a stochastic solver, meaning that it uses random numbers to create\n possible “realizations” (also called “iterations”) of the stochastic\n interpretation of the reaction system. In other words, for the same set\n of initial conditions, running the simulation multiple times (with different\n initial seed values for the random number generator) will generate different\n results each time.\n\n\nIt's a discrete stochastic solver, meaning that the amount of mass in the\n system is (at least internally) not being tracked over time as continuous\n concentrations, but as integer molecular counts. 
This may be a negligible\n distinction with large numbers of molecules present in the system, but it\n becomes very important when any species involved in the system has a small\n population of only a few molecules (especially when these particular molecules\n are involved in some feedback mechanism). Consequently, each realization is a\n sequence of discrete, singular reaction events.\n\n\nIt's an exact stochastic solver, which means that each iteration is exact\n with respect to the master equation governing the reaction system.\n\n\nTo perform a simulation of the above kinetic model and geometry with Wmdirect,\nwe first need to create a random number generator. This must be done explicitly by\nthe user, because this allows you to choose which random number generator to use\n(even though that choice is rather limited right now) and, more importantly, how\nto use it. Random number generation objects can be found in package steps.rng:", "import steps.rng as srng\nr = srng.create('mt19937', 256)\nr.initialize(23412)", "In the first line, we import the steps.rng package with alias srng.\nIn the next line, we actually generate a random number generator using the\nfunction steps.rng.create. The first argument selects which type of random\nnumber generator we want. STEPS currently only implements one pseudo RNG\nalgorithm, 'mt19937', also known as the “Mersenne Twister”. The Mersenne\nTwister is supported because it is considered to be quite simply the current\nbest choice for numerical simulations, because of its large period and fast\nruntime. The second argument selects how many random numbers are pre-generated\nand stored in a buffer.\nIn the third line, we initialize the random number generator with a seed value.\nHere, we initialize the random number generator only once. You can, however,\nalso re-initialize it prior to each iteration, for instance to ensure a\nsimulation starts with some specific seed value. 
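Before handing this random number generator to a solver, it may help to see what the Direct Method actually does with such random numbers. Below is a toy, pure-Python sketch for our reversible reaction A + B ⇌ C. This is an illustration only, not STEPS code: the count-based rate constants kf and kb are made up for the demonstration and are unrelated to the Molar-based constants discussed earlier.

```python
import random

def direct_method(a, b, c, kf, kb, t_end, seed=23412):
    """Toy Gillespie Direct Method for the reversible reaction A + B <-> C.

    a, b, c are integer molecule counts; kf and kb are made-up per-event
    rates expressed directly in molecule counts (illustration only).
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        props = (kf * a * b, kb * c)       # propensity of each reaction channel
        total = props[0] + props[1]
        if total == 0.0:
            break                          # no reaction can fire any more
        t += rng.expovariate(total)        # exponential waiting time to next event
        if t >= t_end:
            break
        if rng.random() * total < props[0]:
            a, b, c = a - 1, b - 1, c + 1  # forward event: A + B -> C
        else:
            a, b, c = a + 1, b + 1, c - 1  # backward event: C -> A + B
    return a, b, c
```

Each event consumes exactly two random numbers: one exponential draw for the waiting time from the total propensity, and one uniform draw to select which reaction fires in proportion to its share. This is why the quality and seeding of the random number generator matter so much for stochastic solvers.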
Note: Solver Wmdirect guarantees that a stochastic simulation started with the\n same seed value will recreate the exact same chain of events. The same is true\n for solver Tetexact. This might not be the case in future solvers, particularly\n in solvers that have been parallelized using some form of “look-ahead” execution.\nNext we will create the actual solver object. Since we will be doing\nsimulations using solver Wmdirect, we first import the package in which all\nsolvers have been implemented, then create the steps.solver.Wmdirect object:", "import steps.solver as ssolver\nsim = ssolver.Wmdirect(mdl, wmgeom, r)", "For all steps.solver objects (currently Wmdirect, Wmrk4 and Tetexact)\nthe initializer requires three arguments. The first argument is the model\ndescription (a variable that references the steps.model.Model object we\ncreated in the first section of this chapter), followed by the\nwell-mixed geometry description (a variable that references a steps.geom.Geom\nobject) and finally also a variable that references the random number generator\nwe just constructed. And that's it.\nThe variable sim now references the solver object we just created, which contains\nall the methods we require to run and control our simulation, so now we can\nstart performing simulations. First we call the reset function on the solver object:", "sim.reset()", "This method sets all values within the solver “state” to their default values.\nThis state includes the concentration of species in all compartments (set to 0\neverywhere), rate constants (set to their defaults from the steps.model.Reac objects),\netc. If you want to re-initialize the random number generator prior to each\nindividual iteration, setting the seed value right before calling the reset\nfunction would be a good choice. Note: Since reset currently doesn't use any random numbers, in principle you\n might also initialize the random number generator's seed value right after\n calling it. 
This might change with future solvers, so as a rule you're better\noff if you make it a habit to initialize the random number generator before\ncalling reset.\nAfter the reset function call, we can start manipulating the “state” of the\nsimulation, i.e. setting up the initial conditions of the simulation.\nEach solver implemented in STEPS includes a number of functions for doing that.\nEach solver, including the steps.solver.Wmdirect solver that we're using here, implements a\nbasic set of functions that allows you e.g. to get/set the concentration of species\nin compartments and patches as a whole. In addition, solvers will typically\nimplement additional functions that only make sense for their specific\nimplementation. Due to the internal structure of the code, all solver methods\nare available for all solvers, but methods which don't make sense for a particular\nsolver (e.g. getting/setting the concentration in individual tetrahedrons doesn't\nmake sense for a well-mixed solver) will display an error message if called. A detailed list of which methods\nare available for which solvers is given in API_1/API_solver.\nNow let's set up our initial conditions with simulation object methods:", "sim.setCompConc('comp', 'molA', 31.4e-6)\nsim.setCompConc('comp', 'molB', 22.3e-6)", "This means we're setting the concentration of molA to $31.4 \mu M$ and the\nconcentration of molB to $22.3 \mu M$ in our compartment comp.\nWe're setting these concentration values at simulation time $t = 0$,\nbut these functions can be called at any point in time, to control the\nconcentration of species during simulation. 
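Incidentally, these micromolar concentrations in our tiny compartment correspond to only a few tens of molecules, which is why discrete stochastic effects will be so visible in the results below. A quick back-of-envelope check in plain Python (illustrative only; STEPS performs this conversion internally):

```python
# molecules = concentration (mol/L) * volume (L) * Avogadro's number
N_A = 6.02214076e23              # Avogadro's number, 1/mol
vol_litres = 1.6667e-21 * 1e3    # compartment volume; 1 m^3 = 1000 L
n_molA = 31.4e-6 * vol_litres * N_A
n_molB = 22.3e-6 * vol_litres * N_A
print(round(n_molA), round(n_molB))  # roughly 32 and 22 molecules
```

With populations this small, a single reaction event changes the species counts by several percent, so individual runs fluctuate strongly around the deterministic solution.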
Here we see an example of why\nthe identifier strings were necessary during our model specification.\nThe simulation methods require the identifier strings of the steps.model and\nsteps.geom objects, not variables that reference the objects.\nThis is necessary because the model and geometry specification are separated\nfrom the simulation and could be organised inside functions or even separate\nmodules, meaning a reference to the object will often not be available.\nNext we'll use NumPy to generate some auxiliary numerical arrays that will be\nused during simulation. Note: Presently, all structures for storing simulation results are explicitly\n created by the user and it is also up to the user to include in their script,\n typically, a for loop that will run the simulation, collect data and store this\n data in an appropriate structure, such as a list or NumPy array. In the future\n we may implement the option to pass to the simulation object information about\n what data to store, which will then be collected internally and returned to the\n user or saved automatically in files. This will make it much simpler to run a\n simulation and improve runtime, at the cost of a slightly lengthier\n initialization process.", "import numpy\ntpnt = numpy.arange(0.0, 2.001, 0.001)\nres = numpy.zeros([2001, 3])", "The first array, tpnt, contains the time points at which we will pause the\nsimulation. This range of numbers starts at 0.0 and runs to 2.0 seconds with\n$1ms$ intervals. 
That gives us a total of 2001 “time points”.\nThe second array, res, will be used to store the concentrations of 'molA',\n'molB' and 'molC' over time: that's why the array has 2001 rows and 3 columns.\nWe use NumPy's zeros function, which not only allocates the array but also\ninitializes all elements to zero.\nNow it's time to actually run an iteration:", "for t in range(0,2001):\n    sim.run(tpnt[t])\n    res[t,0] = sim.getCompCount('comp', 'molA')\n    res[t,1] = sim.getCompCount('comp', 'molB')\n    res[t,2] = sim.getCompCount('comp', 'molC')", "We loop over all time points using a range to generate indices.\nThen we use the basic solver function run to forward the simulation\nuntil the time specified by the function's argument. Note that the first time the loop is executed, the current time is 0.0 because we called the reset() function earlier, so in this case, sim.run(0.0) doesn't move the simulation forward.\nAfter having forwarded the simulation one millisecond, we use function\nsteps.solver.Wmdirect.getCompCount to sample the number of molecules present in compartment\ncomp for each of our three species. All of these functions are described\nin more detail in API_1/API_solver.\nFinally, we can plot these values using Matplotlib. Due to the low numbers of molecules, we can clearly see the reactions occurring as discrete events.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(12,7))\n# Plot number of molecules of 'molA' over the time range:\nplt.plot(tpnt, res[:,0], label = 'A')\n# Plot number of molecules of 'molB' over the time range:\nplt.plot(tpnt, res[:,1], label = 'B')\n# Plot number of molecules of 'molC' over the time range:\nplt.plot(tpnt, res[:,2], label = 'C')\nplt.xlabel('Time (sec)')\nplt.ylabel('#molecules')\nplt.legend()\nplt.show()", "If we're using a stochastic simulation algorithm such as that implemented in\nsolver Wmdirect, we're usually interested in analysing the range of behaviours\nproduced by different iterations. 
One way of doing that is by taking the mean\nover multiple iterations (100 in this example), as is shown in the following\nsimulation code. We plot the average of multiple (n = 100) iterations of our second order reaction.", "NITER = 100\nres = numpy.zeros([NITER, 2001, 3])\ntpnt = numpy.arange(0.0, 2.001, 0.001)\n\nfor i in range(0, NITER):\n sim.reset()\n sim.setCompConc('comp', 'molA', 31.4e-6)\n sim.setCompConc('comp', 'molB', 22.3e-6)\n\n for t in range(0, 2001):\n sim.run(tpnt[t])\n res[i, t, 0] = sim.getCompCount('comp', 'molA')\n res[i, t, 1] = sim.getCompCount('comp', 'molB')\n res[i, t, 2] = sim.getCompCount('comp', 'molC')\nres_mean = numpy.mean(res, 0)\n\nplt.figure(figsize=(12,7))\n# Plot mean number of molecules of 'molA' over the time range:\nplt.plot(tpnt, res_mean[:,0], label = 'A')\n# Plot mean number of molecules of 'molB' over the time range:\nplt.plot(tpnt, res_mean[:,1], label = 'B')\n# Plot mean number of molecules of 'molC' over the time range:\nplt.plot(tpnt, res_mean[:,2], label = 'C')\n\nplt.xlabel('Time (sec)')\nplt.ylabel('#molecules')\nplt.legend()", "As you can see, the array that will be used to store the simulation results\n(array res) is now a three dimensional array, with the first dimension set to\nrecord 100 iterations. The loop that runs over all time points is now embedded\nin a loop that runs over the iterations. The solver object is reset and the initial\nconditions are set at the beginning of each iteration. Since we don't need\nany detailed control over which iteration starts with which RNG seed value,\nwe initialize the RNG just once, prior to everything else. Once the 100 iterations\nare completed, we call NumPy's mean function to compute the mean over the first\ndimension, and then plot these mean values.\nControlling the simulation\nIn the previous section, we paused the simulation at regular time intervals\nonly to record the concentrations of various molecules. 
The only time we actively\nchanged the simulation state was at t=0, to set the initial conditions. However,\nthe function calls we used to set initial conditions can be called at any time\nduring the simulation.\nAs an example, let's interrupt our simulation at t=1sec to add 10 molecules\nof species molA. We plot the mean behaviour of multiple (n = 100) iterations of our second order reaction, with an injection of 10 molecules of species A at t = 1.0.", "for i in range(NITER):\n sim.reset()\n sim.setCompConc('comp', 'molA', 31.4e-6)\n sim.setCompConc('comp', 'molB', 22.3e-6)\n\n for t in range(0,1001):\n sim.run(tpnt[t])\n res[i, t, 0] = sim.getCompCount('comp', 'molA')\n res[i, t, 1] = sim.getCompCount('comp', 'molB')\n res[i, t, 2] = sim.getCompCount('comp', 'molC')\n\n # Add 10 molecules of species A\n sim.setCompCount('comp', 'molA', sim.getCompCount('comp', 'molA') + 10)\n for t in range(1001, 2001):\n sim.run(tpnt[t])\n res[i, t, 0] = sim.getCompCount('comp', 'molA')\n res[i, t, 1] = sim.getCompCount('comp', 'molB')\n res[i, t, 2] = sim.getCompCount('comp', 'molC')\n\nres_mean = numpy.mean(res, 0)\n\nplt.figure(figsize=(12,7))\n# Plot mean number of molecules of 'molA' over the time range:\nplt.plot(tpnt, res_mean[:,0], label = 'A')\n# Plot mean number of molecules of 'molB' over the time range:\nplt.plot(tpnt, res_mean[:,1], label = 'B')\n# Plot mean number of molecules of 'molC' over the time range:\nplt.plot(tpnt, res_mean[:,2], label = 'C')\n\nplt.xlabel('Time (sec)')\nplt.ylabel('#molecules')\nplt.legend()", "When you have to do these things regularly, you might want to encapsulate\nvarious parts of this code in separate functions to save yourself some coding time.\nQuite often, one does not want to simulate the sudden injection of molecules,\nbut rather keep the concentration of some species constant at a controlled value.\nThis means that any reaction involving the buffered molecule will still occur\nif the reactants are present in sufficiently large 
numbers, but the occurrence\nof this reaction will not actually change the amount of the buffered species\nthat is present. The following code snippet shows how, during the time\ninterval $0.1\leq t<0.6$, the concentration of species molA is clamped to\nwhatever its value was at $t=0.1$. We plot the result of a single iteration of the second-order reaction, where the concentration of A is clamped during the interval $0.1\leq t<0.6$.", "for i in range(1):\n    sim.reset()\n    sim.setCompConc('comp', 'molA', 31.4e-6)\n    sim.setCompConc('comp', 'molB', 22.3e-6)\n\n    for t in range(0, 101):\n        sim.run(tpnt[t])\n        res[i, t, 0] = sim.getCompCount('comp', 'molA')\n        res[i, t, 1] = sim.getCompCount('comp', 'molB')\n        res[i, t, 2] = sim.getCompCount('comp', 'molC')\n\n    sim.setCompClamped('comp', 'molA', True)\n\n    for t in range(101, 601):\n        sim.run(tpnt[t])\n        res[i, t, 0] = sim.getCompCount('comp', 'molA')\n        res[i, t, 1] = sim.getCompCount('comp', 'molB')\n        res[i, t, 2] = sim.getCompCount('comp', 'molC')\n\n    sim.setCompClamped('comp', 'molA', False)\n\n    for t in range(601,2001):\n        sim.run(tpnt[t])\n        res[i, t, 0] = sim.getCompCount('comp', 'molA')\n        res[i, t, 1] = sim.getCompCount('comp', 'molB')\n        res[i, t, 2] = sim.getCompCount('comp', 'molC')\n\nres = res[0,:,:]\n\nplt.figure(figsize=(12,7))\n# Plot number of molecules of 'molA' over the time range:\nplt.plot(tpnt, res[:,0], label='A')\n# Plot number of molecules of 'molB' over the time range:\nplt.plot(tpnt, res[:,1], label='B')\n# Plot number of molecules of 'molC' over the time range:\nplt.plot(tpnt, res[:,2], label='C')\n\nplt.xlabel('Time (sec)')\nplt.ylabel('#molecules')\nplt.legend()", "The function steps.solver.Wmdirect.setCompClamped takes a boolean which is used to turn on or off\nthe clamping of the species in the specified compartment.\nA final way in which we will control our simulation in this chapter is\nby activating/inactivating a reaction channel. 
Inactivating a reaction channel\nmeans that it will never occur, regardless of whether the required reactants\nare present in sufficient numbers. In the following simulation:\n\n\nwe will turn off the forward reaction of the above equation during\n interval $2.0\\leq t<4.0$;\n\n\nturn it back on and let everything recover during $4.0\\leq t<6.0$;\n\n\nturn off the backward reaction during $6.0\\leq t<8.0$;\n\n\nturn it back on and let everything recover again during $8.0\\leq t<10.0$;\n\n\nand finally turn off both the forward and backward channel during a final\n interval $10.0\\leq t<12.0$.\n\n\nThis time, we'll wrap the “run-until-time-t“ part of the code in a separate\nfunction to save ourselves some writing, and we also have to alter our tpnt\nand res arrays to store data for 12 seconds:", "def run(i, tp1, tp2):\n for t in range(tp1, tp2):\n sim.run(tpnt[t])\n res[i,t,0] = sim.getCompCount('comp', 'molA')\n res[i,t,1] = sim.getCompCount('comp', 'molB')\n res[i,t,2] = sim.getCompCount('comp', 'molC')\nres = numpy.zeros([NITER, 12001, 3])\ntpnt = numpy.arange(0.0, 12.001, 0.001)", "The actual simulation code now becomes:", "for i in range(NITER):\n sim.reset()\n sim.setCompConc('comp', 'molA', 31.4e-6)\n sim.setCompConc('comp', 'molB', 22.3e-6)\n run(i,0,2001)\n sim.setCompReacActive('comp', 'kreac_f', False)\n run(i,2001,4001)\n sim.setCompReacActive('comp', 'kreac_f', True)\n run(i,4001,6001)\n sim.setCompReacActive('comp', 'kreac_b', False)\n run(i,6001,8001)\n sim.setCompReacActive('comp', 'kreac_b', True)\n run(i,8001,10001)\n sim.setCompReacActive('comp', 'kreac_f', False)\n sim.setCompReacActive('comp', 'kreac_b', False)\n run(i,10001,12001)\n \n\nres_mean = numpy.mean(res, 0)\n\nplt.figure(figsize=(12,7))\n# Plot mean number of molecules of 'molA' over the time range:\nplt.plot(tpnt, res_mean[:,0], label = 'A')\n# Plot mean number of molecules of 'molB' over the time range:\nplt.plot(tpnt, res_mean[:,1], label = 'B')\n# Plot mean number of molecules of 
'molC' over the time range:\nplt.plot(tpnt, res_mean[:,2], label = 'C')\n\nplt.xlabel('Time (sec)')\nplt.ylabel('#molecules')\nplt.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nfaggian/notebooks
Introductions/Tutorial - 1.3 - Scipy.ipynb
apache-2.0
[ "SciPy : Scientific Python\nhttp://scipy.org\nA framework for numerical computing using Python.\n\nSpecial functions (scipy.special)\nIntegration (scipy.integrate)\nOptimization (scipy.optimize)\nInterpolation (scipy.interpolate)\nFourier Transforms (scipy.fftpack)\nSignal Processing (scipy.signal)\nLinear Algebra (scipy.linalg)\nSparse Eigenvalue Problems (scipy.sparse)\nStatistics (scipy.stats)\nMulti-dimensional image processing (scipy.ndimage)\nFile IO (scipy.io)\n\nSome great examples are here: https://scipy-lectures.github.io/intro/scipy.html\n\nQ. Import and explore the library.", "import scipy.optimize\n\nscipy.optimize??", "Q. Find the minimum of the following function: $ f(x)= x^2 + 10\\cdot\\sin(x)$", "import numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(x):\n return x**2 + 10*np.sin(x)", "Plot the function using matplotlib.", "x = np.arange(-10, 10, 0.1)\nfig, ax = plt.subplots(figsize=(10,5))\nax.plot(x, f(x));\nax.hlines(0,-10, 10, alpha=0.1);\n\ninitial_x = 5.0\n\nminimum_x = scipy.optimize.fmin_bfgs(f, initial_x)", "Plot the found minimum.", "%matplotlib inline\n\nx = np.arange(-10, 10, 0.1)\nfig, ax = plt.subplots(figsize=(10,5))\nax.plot(x, f(x));\nax.hlines(0,-10, 10, alpha=0.1);\nax.plot(initial_x, f(initial_x), 'or')\nax.plot(minimum_x, f(minimum_x), 'og')", "Q. Approximate the density of the following superposition of normals: $\\mathcal{N}(-1.0, 0.5) + \\mathcal{N}(1.0, 0.5)$", "import scipy.stats as stats\n\nN0 = stats.norm.rvs(loc=-1.0,scale=.5,size=1000)\nN1 = stats.norm.rvs(loc=1.0,scale=.5,size=1000)\nsamp = np.hstack([N0, N1])", "Plot the distribution of the data, using a histogram. 
Use the hist method.", "fig, ax = plt.subplots(figsize=(10,5))\nax.hist(samp, 100, density=True)\nax.set_xlim(-5, 5);\nax.grid()", "Estimate the distribution of the data using kernel density estimation.\nhttp://en.wikipedia.org/wiki/Kernel_density_estimation", "pdf = stats.gaussian_kde(samp)\nfig, ax = plt.subplots(figsize=(10,5))\nx = np.linspace(-5, 5, 100)\nax.plot(x, pdf(x),'r')\nax.hist(samp, 100, density=True, alpha=.5);\n\npdf(0.)\n\npdf(-1.), pdf(1.0)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
riddhishb/ipython-notebooks
Kalman Filter/Kalman-first.ipynb
gpl-3.0
[ "This is a Jupyter notebook.\nLectures about Python, useful both for beginners and experts, can be found at http://scipy-lectures.github.io.\nOpen the notebook by (1) copying this file into a directory, (2) in that directory typing \njupyter-notebook\nand (3) selecting the notebook.\n\nWritten By: Riddhish Bhalodia\n\nIn this exercise, we will learn about Kalman Filtering, code it up, and look at a few of its applications.\nKalman Filtering\nMotivation\nLet me start this way: before Rudolf Kalman (co-inventor of Kalman filtering), problems were divided into two distinct classes, Control Problems (what value of acceleration should be provided to the car so that it climbs a certain incline with constant speed) and Filtering Problems (damn this noisy accelerometer, I can't get a clear value even at a fixed point). \nYou might have guessed (if you care to read the brackets :P) that the two problems are not uncorrelated. To the un-initiated, take a scenario where the car has a noisy accelerometer and you want to control its speed on the incline: two problems in one. One way is to solve them independently, but that is too situation dependent, and so there was a need for a dynamic solution for filtering while controlling and vice versa, essentially bringing the two separate problems under one roof.\nThis is precisely what the Kalman Filter does! The Kalman Filter and its non-linear extensions are essential elements of modern control theory. Many different applications, ranging from filtering noisy sensor output to autonomous robot navigation, use the Kalman Filter.\nIn this tutorial we will first start off with an application, which we will code (yay!), and then move on to build up Kalman filtering theory.\nFaulty Voltmeter\nA classic example to start with, and very intuitive. 
We have to measure a DC voltage from a faulty (noisy) voltmeter; it can't get any simpler than this.\nFirst off let's import certain packages", "%matplotlib inline \n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport random", "Now for solving any computational problem we need to model the system (in layman terms, a set of equations used to describe the overall situation and how it varies with physical inputs and parameters).\nThe simplest way to model the noisy voltmeter at every measurement instant is through the following equation\n\\begin{equation}\nV_m = V_{m-1} + \\omega_m \\qquad (1)\n\\end{equation}\nHere, $V_m$ : Voltage at current time,\n $V_{m-1}$ : Voltage at previous time instant,\n $\\omega_m$ : Random Noise (process noise)\nWe also have measurements taken at each instant m, given by $Z_m$ and corrupted by some sensor (measurement) noise $\\nu_m$\n$$Z_m = V_m + \\nu_m \\qquad (2)$$\nUsually $\\nu_m$ is introduced by faults in the voltmeter (sensor) and hence we usually know its characteristics (least count, precision ... ring any bells?). So here we will model $\\nu_m$ as a Gaussian Random Variable (as is usually done) with zero mean and standard deviation $\\sigma_{\\nu} = 0.3$ (for simulation's sake). $\\omega_m$ is more difficult to predict and is introduced by error due to non-ideality (we know it's constant DC voltage, but is it really!) of our process equations, but we still assume that we know it. Again, $\\omega_m$ is to be modeled by a Gaussian Random Variable with zero mean and standard deviation $\\sigma_{\\omega} = 0.01$.\nSo before solving this let's model the voltmeter. What parameters do we need? The true constant voltage, say variable true_voltage = 1, and a noise level as well (this is the voltmeter's error, so it will feature in the measurement error), say variable noise_level_sig = 0.2 (the KF still works when the noise estimates are off by a mark...).
Let's take measurements for 50 instances (stored in iter_max) and generate the measurement for each instant, each of which will be just a draw from $\\mathcal{N}(\\mathrm{true\\_voltage}, \\mathrm{noise\\_level\\_sig})$ (Think about this :))", "true_voltage = 1\nnoise_level_sig = 0.2\niter_max = 50\nmeasurements = []\ntrue_voltage_mat = []\n\nfor i in range(iter_max):\n measured = random.gauss(true_voltage , noise_level_sig)\n measurements.append(measured)\n true_voltage_mat.append(true_voltage)\n", "Let's plot how the measurements look as compared to the true voltage", "plt.plot(range(iter_max),true_voltage_mat,'b',range(iter_max), measurements,'r')\nplt.xlabel('Time Instances')\nplt.ylabel('Voltage')\nplt.legend(('true voltage', 'measured'), loc=4)", "Now starts the actual thing: we want to filter this :D using Kalman Filtering. But first we need to derive it, so get ready for a chunk of theory, but please be patient, as once we are done with this the standard Kalman Filter will be a piece of cake\n:)\nFiltering!!\nMath Math Math.\nThat being said, to solve this problem iteratively we need two major steps\n\nPredict the voltage at the next instant using the previous estimate (1)\nCorrect the estimate based on the measurements at that instant \n\nSo we define two variables, $\\hat{V}^-_m :$ the prior estimate of the voltage given only the knowledge of the process (equation (1)) and $\\hat{V}_m :$ the posterior estimate of the voltage at step m given the knowledge of the measurement $Z_m$ \nSo let's start firing equations; just one comment before this: we will also estimate the error in the estimate at every iteration, along with the estimate itself.\n$$e_m = V_m - \\hat{V}_m \\quad \\textrm{and} \\quad \\sigma^2_m = \\mathbb{E}[e_m^2]$$\nand \n$$e^-_m = V_m - \\hat{V}^-_m \\quad \\textrm{and} \\quad \\sigma^{2-}_m = \\mathbb{E}[(e_m^-)^2]$$\nWe have to minimize this $\\sigma^2_m$.
So now, as any sane person (ok, statistician) would do, we model the posterior estimate $\\hat{V}_m$ as a linear combination of the prior estimate $\\hat{V}^-_m$ and the deviation of the estimate from the measurement (also called the innovation term) given as\n$$y_m = Z_m - \\hat{V}^-_m \\qquad (3)$$\nPutting the above ramble into an equation we have \n$$\\hat{V}_m = \\hat{V}^-_m + k_my_m \\qquad (4)$$\nSubtracting $V_m$ from both sides we get\n$$\\hat{V}_m - V_m = \\hat{V}^-_m - V_m + k_m(Z_m - \\hat{V}^-_m)$$\nTo compute $k_m$ we take the square and its expectation, and then differentiate the quadratic in $k_m$ to get something like (try this yourself)\n$$k_m = \\frac{\\mathbb{E}[(V_m - \\hat{V}^-_m)(y_m)]}{\\mathbb{E}[y_m^2]}$$\nWhen the numerator and denominator are expanded, taking into account the independence of the random variables $Z_m$, $V_m$, and $\\hat{V}^-_m$ (think about this too), we get \n$$k_m = \\frac{\\sigma^{2-}_m}{\\sigma^{2-}_m + \\sigma^2_{\\nu}} \\qquad (5)$$\nAlong with this we also have, from equation (1), \n$$\\sigma_m^{2-} = \\sigma_{m-1}^2 + \\sigma_\\omega^2 \\qquad (6)$$\nNow substituting (5) in the quadratic for $\\mathbb{E}[e^2_m]$ we get the variance as\n$$\\sigma^2_m = (1 - k_m)\\sigma_m^{2-} \\qquad (7)$$\nNow we have everything :D, already! It will be clear when you look at the summary below\n* Start with an initial guess for $V = V_0$\n* Get the prior estimate of the voltage and its error ($\\hat{V}^-_m$ and $\\sigma_m^{2-}$) from the process equations (1) and (6)\n* Using the prior estimates and the measurement data at instant m, get the posterior (read corrected) estimates of the voltage and its error at instant m ($\\hat{V}_m$ and $\\sigma_m^2$)\n* Repeat this for several instances and we will converge to a solution (Yes! there exists a proof of convergence, you can google it up)\nSo hopefully you have got a hang of how this works.
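The "try this yourself" differentiation behind equation (5) can be sanity-checked numerically. This sketch is an addition (not part of the original notebook, standard library only): it evaluates the posterior error $J(k) = (1-k)^2\\sigma^{2-}_m + k^2\\sigma^2_{\\nu}$, which follows from $e_m = (1-k_m)e^-_m - k_m\\nu_m$ and independence, and confirms by grid search that the closed-form gain is the minimizer; the two variance values are arbitrary.

```python
# Numerical sanity check of equation (5), standalone.
# Posterior error as a function of the gain k:
# J(k) = (1 - k)^2 * sigma2_prior + k^2 * sigma2_nu

def posterior_error(k, sigma2_prior, sigma2_nu):
    return (1.0 - k) ** 2 * sigma2_prior + k ** 2 * sigma2_nu

def kalman_gain(sigma2_prior, sigma2_nu):
    # closed form from equation (5)
    return sigma2_prior / (sigma2_prior + sigma2_nu)

sigma2_prior, sigma2_nu = 0.5, 0.09  # arbitrary positive variances
k_star = kalman_gain(sigma2_prior, sigma2_nu)

# a fine grid search over [0, 1] should land on (almost) the same gain
grid = [i / 10000.0 for i in range(10001)]
k_grid = min(grid, key=lambda k: posterior_error(k, sigma2_prior, sigma2_nu))

print(k_star, k_grid)  # both ~ 0.8475
```

Any other positive pair of variances gives the same agreement, which is exactly what differentiating the quadratic predicts.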
So let's code it up", "# Initialize the parameters\n\ninitial_guess = 3\ninitial_guess_error = 1\nsig_nu = 0.3\nsig_omega = 0.01\nestimate_vector = []\nestimate_vector.append(initial_guess)\nerror_estimate_vector = []\nerror_estimate_vector.append(initial_guess_error)\n\n# Run the Filter\n\nfor i in range(iter_max-1):\n # first the prior estimation step\n \n volt_prior_est = estimate_vector[i]\n error_prior_est = error_estimate_vector[i] + sig_omega * sig_omega\n \n # estimate correction\n \n k = error_prior_est/(error_prior_est + sig_nu * sig_nu)\n volt_corrected_est = volt_prior_est + k * (measurements[i+1] - volt_prior_est)\n error_corrected_est = (1 - k) * error_prior_est\n estimate_vector.append(volt_corrected_est)\n error_estimate_vector.append(error_corrected_est)\n ", "Let us plot the things", "plt.figure()\nplt.plot(range(iter_max),true_voltage_mat,'b',range(iter_max), measurements,'r', range(iter_max), estimate_vector,'g')\nplt.xlabel('Time Instances')\nplt.ylabel('Voltage')\nplt.legend(('true voltage', 'measured', 'filtered'), loc=1)", "Did you have your voila moment? :D\nLets also look at the error for the estimate, lets plot it.", "plt.figure()\nplt.plot(range(iter_max),error_estimate_vector)\nplt.xlabel('Time Instances')\nplt.ylabel('Voltage Error')", "Kalman Filter\nYou might be wondering where is the control in all this. So now that's easily introduced in the actual formulation of Kalman filter which we will look at now. In the actual Kalman Filter we deal with multi-variable setting unlike that of the voltmeter example. 
So let me describe the inputs, the outputs and the parameters\nInputs\n* $\\textbf{Z}_m : $ The measurement vector at each instant m\n* $\\textbf{U}_m : $ This is new! It denotes the controls provided to the system at instant m (this is just like force being the control for the velocity of the car)\nOutputs\n* $\\textbf{X}_m : $ Newest estimate of the current state (the state can be thought of as a parameter vector)\n* $P_m :$ The newest estimate for the average error of the state\nParameters\n* A : State transition matrix, basically the constant matrix multiplied with the previous estimate in the process equation\n* B : Control matrix, the one to be multiplied with the control vector in the process equation\n* H : Observation matrix, the proportionality factor relating the state to the measurements\n* Q : Covariance matrix for the process error (this is again assumed to be known)\n* R : Covariance matrix for the measurement error (again known)\nNow we are ready to write the basic equations for the KF; don't worry, much of the above will be cleared up as you look at the equations.\nLet's start with the two basic equations, first the process equation\n$$\\textbf{X}_m = A\\textbf{X}_{m-1} + B\\textbf{U}_{m-1} + \\pmb{\\omega}_m \\qquad (8)$$\nand then we have the measurement equation\n$$\\textbf{Z}_m = H\\textbf{X}_m + \\pmb{\\nu}_m \\qquad (9)$$\nHere, we model the two error terms $\\pmb{\\omega}_m$ and $\\pmb{\\nu}_m$ as multi-variate Gaussian distributions with zero mean and covariance matrices Q and R respectively, i.e. $\\pmb{\\omega}_m \\sim \\mathcal{N}(0,Q)$\nand $\\pmb{\\nu}_m \\sim \\mathcal{N}(0,R)$\nSo now, following the exact same philosophy that we followed for the derivation in the voltmeter example, we get the update equations for the general Kalman Filter.
I will just list them down; if people are interested they can look up the references given below\n$$ P_m^- = AP_{m-1}A^T + Q \\qquad (10) $$\n$$ K_m = P_m^-H^T(HP_m^-H^T + R)^{-1} \\qquad (11) $$\n$$\\pmb{y}_m = \\pmb{Z}_m - H\\hat{\\pmb{X}_m^-} \\qquad (12)$$\n$$ \\hat{\\pmb{X}}_m = \\hat{\\pmb{X}_m^-} + K_m\\pmb{y}_m \\qquad (13)$$\n$$ P_m = (I - K_mH)P_m^- \\qquad (14)$$\nAgain summarizing\n\nStart with an initial guess for $\\pmb{X} = \\pmb{X}_0$\nGet the prior estimate of the state and its error ($\\hat{\\pmb{X}}^-_m$ and $P_m^{-}$) from the process equations (8) and (10)\nUsing the prior estimates and the measurement data at instant m, get the posterior (read corrected) estimates of the state and its error at instant m ($\\hat{\\pmb{X}}_m$ and $P_m$)\nRepeat this for several instances and we will converge to a solution (Yes! there exists a proof of convergence, you can google it up)\n\nNow enough of this rambling; I do hope you get this, but we are going to make a Kalman filter class and then see how this fits with our voltmeter example.
So lets first create a class", "class kalmanFilter:\n def __init__(self, X0, P0, A, B, H, Q, R):\n self.A = A # State Transition Matrix\n self.B = B # Control Matrix\n self.H = H # Observation Matrix\n self.Q = Q # Covariance for the process error\n self.R = R # Covariance for the measurements error\n self.current_estimate = X0 # this is the initial guess of the state\n self.current_error_estimate = P0 # initial guess for the state estimate error\n \n def getEstimate(self):\n # returns the current state estimate\n return self.current_estimate\n \n def getErrorEstimate(self):\n # returns the current state error estimate\n return self.current_error_estimate\n \n def iteration(self, U, Z):\n # here is where the updates happen\n # U = control vector\n # Z = measurements vector\n \n # prior prediction step\n prior_estimate = self.A * self.current_estimate + self.B * U\n prior_error_estimate = (self.A * self.current_error_estimate) * np.transpose(self.A) + self.Q\n \n # intermediate observation\n y = Z - self.H * prior_estimate\n y_covariance = self.H * prior_error_estimate * np.transpose(self.H) + self.R\n \n # Correction Step\n K = prior_error_estimate * np.transpose(self.H) * np.linalg.inv(y_covariance)\n self.current_estimate = prior_estimate + K * y\n # We need the size of the matrix so we can make an identity matrix.\n size = self.current_error_estimate.shape[0]\n # eye(n) = nxn identity matrix.\n self.current_error_estimate = (np.eye(size) - K * self.H) * prior_error_estimate\n", "Now we have this nice class set up, let's test it's correctness by applying it to the Voltmeter problem. 
First things first: for the voltmeter problem we set the parameters.", "A = np.matrix([1])\nB = np.matrix([0])\nH = np.matrix([1])\nQ = np.matrix([0.0001]) # the sigmas get squared\nR = np.matrix([0.09])\nX0 = np.matrix([3])\nP0 = np.matrix([1])\n\nKF = kalmanFilter(X0, P0, A, B, H, Q, R)\nestimate_vector_new = []\nestimate_vector_new.append(initial_guess)\nerror_estimate_vector_new = []\nerror_estimate_vector_new.append(initial_guess_error)\n\n# Run the Filter\n\nfor i in range(iter_max-1):\n U = np.matrix([0]) # there is no control here\n Z = np.matrix([measurements[i+1]])\n estimate_vector_new.append(KF.getEstimate()[0,0])\n error_estimate_vector_new.append(KF.getErrorEstimate()[0,0])\n KF.iteration(U,Z)\n ", "Now let's plot again to see whether we are good to go or not.", "plt.figure()\nplt.plot(range(iter_max),true_voltage_mat,'b',range(iter_max), measurements,'r', range(iter_max), estimate_vector_new,'g')\nplt.xlabel('Time Instances')\nplt.ylabel('Voltage')\nplt.legend(('true voltage', 'measured', 'filtered'), loc=1)", "Well, it should exactly match the previous plot :P duh, big deal. But now that we have this nice class we can start dealing with cooler applications. So after much search I have come up with this application to end this hopefully interesting notebook :D\nA simple, though I guess not real wow-factor, example is a ball thrown as a projectile: we can measure its (x,y) position with a camera system and also its velocity (vx,vy) with the sensors on the ball (used in cricket for the LBW system, without the velocity part). Now we know that these cameras are noisy, so in the end we need a filtered estimate of the state of the ball (here the state is the vector of x,y,vx,vy). So let's get on with the system modeling first.\nRemember projectiles! Remember the JEE physics which you all did. All right, I am not going to explain the kinematics equations; just have a look and you will understand.
We project at an initial velocity u and angle $\\theta$ wrt the horizontal, and we divide the measurements into time intervals of $\\Delta t$. \n$$Vx_{t} = Vx_{t-1}$$\n$$Vy_{t} = Vy_{t-1} - g\\Delta t$$\n$$x_{t} = x_{t-1} + Vx_{t-1}\\Delta t$$\n$$y_{t} = y_{t-1} + Vy_{t-1}\\Delta t - 0.5g\\Delta t^2$$\nThe state at instant t is given by the vector $\\pmb{X}_t = (x_t,Vx_t,y_t,Vy_t)$, and the control vector is the additional term $\\pmb{u}_t = (0,0,-0.5g\\Delta t^2, -g\\Delta t)$. Think about this! Now we start defining the matrices.\n\\begin{equation}\n\\pmb{X}_t = A\\pmb{X}_{t-1} + B\\pmb{u}_t \\qquad\nA=\\left(\\begin{array}{cccc}\n 1 & \\Delta t & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & \\Delta t \\\\\n 0 & 0 & 0 & 1\n\\end{array}\\right)\n\\qquad\nB = \\left(\\begin{array}{cccc}\n0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1\n\\end{array}\\right)\n\\end{equation}\nAs our measurements are directly the state there is no proportionality, and hence the H matrix is an identity matrix (H = $I_4$). Also we assume that our process has an error covariance of $Q = 0.0001I_4$ and the measurements have an error covariance of $R = 0.3I_4$ (the errors do not need to be equal, but are taken so here for convenience :D). Also we need values for $\\theta$ and $u$; let's say $\\theta = 45^{\\circ}$ and $u = 100\\,m/s$.\nNow we have all the matrices, so let's start solving this using our kalmanFilter class.", "# Physics\n# 1) sin(45)*100 = 70.710 and cos(45)*100 = 70.710\n# vf = vo + at\n# 0 = 70.710 + (-9.81)t\n# t = 70.710/9.81 = 7.208 seconds for half\n# 14.416 seconds for full journey\n# distance = 70.710 m/s * 14.416 sec = 1019.36796 m\n", "Due to the above calculations we have to be careful choosing $\\Delta t$ and the number of iterations. $\\Delta t = 0.1$ and max_iter = 145 makes sense (think).
So now we create our simulated data", "del_t = 0.1\nmax_iter = 145\nadded_noise = 25\ninit_vel = 100\ntheta = np.pi/4\n\n# now we define the measurement matrix\nmeasurements = np.zeros((4,max_iter))\ntrue_value = np.zeros((4,max_iter))\nux0 = init_vel * np.cos(theta)\nuy0 = init_vel * np.sin(theta)\n\nfor i in range(max_iter):\n # we generate this by projectile equations and adding noise to it\n t = i * del_t\n true_value[0,i] = ux0 * t\n true_value[1,i] = ux0\n true_value[2,i] = uy0 * t - 0.5 * 9.8 * t * t\n true_value[3,i] = uy0 - 9.8 * t\n measurements[0,i] = random.gauss(true_value[0,i],added_noise)\n measurements[1,i] = random.gauss(true_value[1,i],added_noise)\n measurements[2,i] = random.gauss(true_value[2,i],added_noise)\n measurements[3,i] = random.gauss(true_value[3,i],added_noise)", "Lets plot the position data and measurements", "plt.figure()\nplt.plot(true_value[0,:],true_value[2,:],'b',measurements[0,:],measurements[2,:],'r')\nplt.xlabel('X Position')\nplt.ylabel('Y Position')\nplt.legend(('true trajectory', 'measured trajectory'), loc=2)", "So this is how the cricket ball's trajectory is measured, now we go to the filtering part", "A = np.matrix([[1,del_t,0,0],[0,1,0,0],[0,0,1,del_t],[0,0,0,1]])\nB = np.matrix([[0,0,0,0],[0,0,0,0],[0,0,1,0],[0,0,0,1]])\nu = np.matrix([[0],[0],[-0.5*9.8*del_t*del_t],[-9.8*del_t]]) # control vector is constant does not depend on t\nH = np.eye(4)\nQ = 0.0001 * np.eye(4)\nR = 0.3 * np.eye(4)\nX0 = np.matrix([[0],[ux0],[500],[uy0]]) # set it little different than the orig initial just to show that KF will still work\nP0 = np.eye(4) # set arbitrary as identity\n\nestimate_matrix = np.zeros((4,max_iter))\nestimate_matrix[:,0] = np.asarray(X0)[:,0]\nestimate_error = P0\nKF = kalmanFilter(X0, P0, A, B, H, Q, R)\n \nfor i in range(max_iter-1):\n Z = np.matrix([[measurements[0,i+1]],[measurements[1,i+1]],[measurements[2,i+1]],[measurements[3,i+1]]])\n estimate_matrix[:,i+1] = np.asarray(KF.getEstimate())[:,0]\n 
KF.iteration(u,Z)\n\nplt.figure()\nplt.plot(true_value[0,:],true_value[2,:],'b',measurements[0,:],measurements[2,:],'r',estimate_matrix[0,:],estimate_matrix[2,:],'g')\nplt.xlabel('X Position')\nplt.ylabel('Y Position')\nplt.legend(('true trajectory', 'measured trajectory', 'filtered trajectory'), loc=1)", "Zoom in if you are not able to see the filtered curve clearly. Now I think you can also look at the error terms and velocity estimates by changing the code a little bit.\nThis concludes it! Hopefully well :D\nHere are some references\nReferences\n\nThis one is real cool. I took most of the stuff from here.\nBit involved but good.\nBeginner" ]
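As a closing numerical check (an addition, not from the original notebook), the scalar voltmeter filter can be rerun standalone with a fixed seed to confirm that the filtered estimate really does beat the raw measurements in RMS error. The noise values match the voltmeter section above; the initial guess of 0 (instead of the notebook's 3) and the seed are assumptions of this sketch.

```python
# Standalone check that the scalar filter reduces error, standard library only.
import math
import random

random.seed(42)
true_voltage, noise_sig = 1.0, 0.2         # simulation values used above
sig_nu2, sig_omega2 = 0.3 ** 2, 0.01 ** 2  # filter's assumed noise variances
n = 200

est, p = 0.0, 1.0                          # initial guess and its error
meas_sq_err = est_sq_err = 0.0
for _ in range(n):
    z = random.gauss(true_voltage, noise_sig)
    p_prior = p + sig_omega2               # equation (6)
    k = p_prior / (p_prior + sig_nu2)      # equation (5)
    est = est + k * (z - est)              # equation (4)
    p = (1 - k) * p_prior                  # equation (7)
    meas_sq_err += (z - true_voltage) ** 2
    est_sq_err += (est - true_voltage) ** 2

rmse_measured = math.sqrt(meas_sq_err / n)
rmse_filtered = math.sqrt(est_sq_err / n)
print(rmse_measured, rmse_filtered)        # the filtered RMSE is the smaller
```

The measured RMSE hovers near the true sensor noise of 0.2, while the filtered RMSE settles well below it, which is the whole point of the exercise.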
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dacostaortiz/Modelado-Matematico
Homework04/Homework 04 - Simplex.ipynb
mit
[ "Homework 04 - Simplex method\nIn Python we can find a set of libraries that have been implemented by a wide community of developers; let's use one of these.\nPymprog is one library which implements the simplex method.", "from pymprog import *", "Let's define our variables in a symbolic way.", "x, y = var('x,y')", "Now we are going to set our constraints.", "# maximize profit = 20x + 30y subject to the constraints below\nmaximize(20*x+30*y, 'profit')\n3*x + 6*y <= 150\nx + 0.5*y <= 22\nx + y <= 275\nsolve()\n\nprint('cakes of type P = %g;' % x.primal)\nprint('cakes of type Q = %g;' % y.primal)" ]
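The LP above can be cross-checked without any solver library (this block is an addition, not part of the homework). The optimum of a two-variable linear program lies at a vertex of the feasible region, which is the geometric fact the simplex method exploits, so enumerating pairwise intersections of the constraint boundaries and keeping the feasible ones recovers the same solution pymprog reports.

```python
# Cross-check of the LP by brute-force vertex enumeration (2 variables only).
from itertools import combinations

# constraints a*x + b*y <= c, including x >= 0 and y >= 0
cons = [(3, 6, 150), (1, 0.5, 22), (1, 1, 275), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel constraint boundaries, no intersection
    x = (c1 * b2 - c2 * b1) / det  # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        profit = 20 * x + 30 * y
        if best is None or profit > best[0]:
            best = (profit, x, y)

print(best)  # ~ (813.33, 12.67, 18.67)
```

The binding constraints at the optimum are the first two (dough and labour, say), while x + y <= 275 is slack, something the pymprog duals would also reveal.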
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
cathywu/flow
tutorials/tutorial08_network_templates.ipynb
mit
[ "Tutorial 08: Networks from Custom Templates\nIn the previous tutorial, we discussed how OpenStreetMap files can be simulated in Flow. These networks, however, may at times be imperfect, as we can see in the toll section of the Bay Bridge (see the figure below). The simulators SUMO and Aimsun both possess methods for augmenting the network after they have been imported, and store the changes in their own versions of the initial template (whether it was generated via a custom scenario class or a network imported from OpenStreetMap). In order to utilize these newly generated networks, we demonstrate in this tutorial how simulator-generated template files can be imported when running a simulation in Flow. \n<img src=\"img/osm_to_template.png\">\n<center> Figure 1: Example benefit of converting OpenStreetMap to a custom template </center>\nThe remainder of the tutorial is organized as follows. In section 1, we begin by importing the classic set of parameters. In section 2, we introduce the template files that are used as examples in this tutorial. In section 3, we present how custom SUMO network templates, i.e. the generated .net.xml files, can be modified and simulated in Flow for the purpose of improving network features. Finally, in section 4, we demonstrate how custom Aimsun network files can be simulated in Flow.\n1. Importing Modules\nBefore we begin, let us import all relevant Flow parameters as we have done for previous tutorials.
If you are unfamiliar with these parameters, you are encouraged to review tutorial 1.", "# the TestEnv environment is used to simply simulate the network\nfrom flow.envs import TestEnv\n\n# the Experiment class is used for running simulations\nfrom flow.core.experiment import Experiment\n\n# the base scenario class\nfrom flow.scenarios import Scenario\n\n# all other imports are standard\nfrom flow.core.params import VehicleParams\nfrom flow.core.params import NetParams\nfrom flow.core.params import InitialConfig\nfrom flow.core.params import EnvParams\n\n# create some default parameters\nenv_params = EnvParams()\ninitial_config = InitialConfig()\nvehicles = VehicleParams()\nvehicles.add('human', num_vehicles=1)", "2. Example Network\nIn this tutorial, we use the Luxembourg SUMO Traffic (LuST) Scenario as an example use case. This example consists of a well-calibrated model of vehicles in Luxembourg. A representation of the simulation can be seen in the figure below.\n<img src=\"img/LuST_network.png\" width=\"500\">\n<center><b>Figure 2</b>: Simulation of the LuST network </center>\nBefore continuing with this tutorial, please begin by cloning the LuST scenario repository by running the following command.\ngit clone https://github.com/lcodeca/LuSTScenario.git\n\nOnce you have cloned the repository, please modify the code snippet below to match the correct location of the repository's main directory.", "LuST_dir = \"/home/aboudy/LuSTScenario\"", "3. Sumo Network Files\nSumo generates several network and simulation-specific template files prior to starting a simulation. This procedure, when creating custom scenarios and scenarios from OpenStreetMap, is covered by the scenario class.
Three of these files (*.net.xml, *.rou.xml, and vtype.add.xml) can be imported once again via the scenario class to recreate a previously designed scenario.\nWe start by creating the simulation parameters:", "from flow.core.params import SumoParams\n\nsim_params = SumoParams(render=True, sim_step=1)", "3.1 Importing Network (*.net.xml) Files\nThe *.net.xml file covers the network geometry within a simulation, and can be imported independently of the SUMO route file (see section 1.2). This can be done through the template parameter within NetParams as follows:", "import os\n\nnet_params = NetParams(\n template=os.path.join(LuST_dir, \"scenario/lust.net.xml\"),\n)", "This network alone, similar to the OpenStreetMap file, does not cover the placement of vehicles or the routes vehicles can traverse. These, however, can be defined as they were in the previous tutorial for importing networks from OpenStreetMap. For the LuST network, this looks something like the following code snippet (note that the specific edges were not chosen for any particular reason).", "# specify the edges vehicles can originate on\ninitial_config = InitialConfig(\n edges_distribution=[\"-32410#3\"]\n)\n\n\n# specify the routes for vehicles in the network\nclass TemplateScenario(Scenario):\n\n def specify_routes(self, net_params):\n return {\"-32410#3\": [\"-32410#3\"]}", "The simulation can then be executed as follows:", "# create the scenario\nscenario = TemplateScenario(\n name=\"template\",\n net_params=net_params,\n initial_config=initial_config,\n vehicles=vehicles\n)\n\n# create the environment\nenv = TestEnv(\n env_params=env_params,\n sim_params=sim_params,\n scenario=scenario\n)\n\n# run the simulation for 1000 steps\nexp = Experiment(env=env)\n_ = exp.run(1, 1000)", "3.2 Importing Additional Files\nSumo templates will at times contain files other than the network templates that can be used to specify the positions, speeds, and properties of vehicles at the start of a simulation, as
well as the departure times of vehicles while the scenario is running, and the routes that all these vehicles are meant to traverse. All these files can also be imported under the template attribute in order to recreate the simulation in its entirety.\nWhen incorporating files other than the net.xml file into the simulation, the template attribute is treated as a dictionary instead, with a different element for each of the additional files that are meant to be imported. Starting with the net.xml file, it is added to the template attribute as follows:", "new_net_params = NetParams(\n template={\n # network geometry features\n \"net\": os.path.join(LuST_dir, \"scenario/lust.net.xml\")\n }\n)", "3.2.1 Vehicle Type (vtype.add.xml)\nThe vehicle types file describes the properties of different vehicle types in the network. These include parameters such as the max acceleration and comfortable deceleration of drivers. This file can be imported via the \"vtype\" attribute in template.\nNote that, when vehicle information is being imported from a template file, the VehicleParams object does not need to be modified, unless you would like additional vehicles to enter the network as well.", "new_net_params = NetParams(\n template={\n # network geometry features\n \"net\": os.path.join(LuST_dir, \"scenario/lust.net.xml\"),\n # features associated with the properties of drivers\n \"vtype\": os.path.join(LuST_dir, \"scenario/vtype.add.xml\")\n }\n)\n\n# we no longer need to specify anything in VehicleParams\nnew_vehicles = VehicleParams()", "3.2.2 Route (*.rou.xml)\nNext, the routes can be imported from the *.rou.xml files that are generated by SUMO. These files help define which cars enter the network at which point in time, whether it be at the beginning of a simulation or some time during its run. The route files are passed to the \"rou\" key in the template attribute.
Moreover, since the vehicle routes can be spread over multiple files, the \"rou\" key accepts a list of string filenames.", "new_net_params = NetParams(\n template={\n # network geometry features\n \"net\": os.path.join(LuST_dir, \"scenario/lust.net.xml\"),\n # features associated with the properties of drivers\n \"vtype\": os.path.join(LuST_dir, \"scenario/vtypes.add.xml\"),\n # features associated with the routes vehicles take\n \"rou\": [os.path.join(LuST_dir, \"scenario/DUARoutes/local.0.rou.xml\"),\n os.path.join(LuST_dir, \"scenario/DUARoutes/local.1.rou.xml\"),\n os.path.join(LuST_dir, \"scenario/DUARoutes/local.2.rou.xml\")]\n }\n)\n\n# we no longer need to specify anything in VehicleParams\nnew_vehicles = VehicleParams()", "3.2.3 Running the Modified Simulation\nFinally, the fully imported simulation can be run as follows. \nWarning: the network takes time to initialize while the departure positions and times of vehicles are specified.", "# create the scenario\nscenario = Scenario(\n name=\"template\",\n net_params=new_net_params,\n vehicles=new_vehicles\n)\n\n# create the environment\nenv = TestEnv(\n env_params=env_params,\n sim_params=sim_params,\n scenario=scenario\n)\n\n# run the simulation for 100000 steps\nexp = Experiment(env=env)\n_ = exp.run(1, 100000)", "4. Aimsun Network Files\nFlow can run templates that have been created in Aimsun and saved into an *.ang file. Although it is possible to have control over the network, for instance to add vehicles and monitor them directly from Flow, this tutorial only covers how to run the network.\nWe will use the template located at tutorials/networks/test_template.ang, which looks like this:\n<img src=\"img/test_template.png\">\n<center><b>Figure 3</b>: Simulation of <code>test_template.ang</code> in Aimsun</center>\nIt contains two input and three output centroids that define the centroid configuration Centroid Configuration 910.
The inflows are defined by two OD matrices, one for the type Car (in blue), the other for the type rl (in red). Note that there is no learning in this tutorial, so the two types both act as regular cars. The two OD matrices form the traffic demand Traffic Demand 925 that is used by the scenario Dynamic Scenario 927. Finally, the experiment Micro SRC Experiment 928 and the replication Replication 930 are created, and we will run this replication in the following.\nFirst, we create the Aimsun-specific simulation parameters:", "from flow.core.params import AimsunParams\n\nsim_params = AimsunParams(\n sim_step=0.1,\n render=True,\n emission_path='data',\n replication_name=\"Replication 930\",\n centroid_config_name=\"Centroid Configuration 910\"\n)", "As you can see, we need to specify the name of the replication we want to run as well as the centroid configuration that is to be used. There is another optional parameter, subnetwork_name, that can be specified if only part of the network should be simulated. Please refer to the documentation for more information. \nThe template can then be imported as follows:", "import os\nimport flow.config as config\n\nnet_params = NetParams(\n template=os.path.join(config.PROJECT_PATH,\n \"tutorials/networks/test_template.ang\")\n)", "Finally, we can run the simulation by specifying 'aimsun' as the simulator to be used:", "scenario = Scenario(\n name=\"template\", \n net_params=net_params,\n initial_config=initial_config,\n vehicles=vehicles\n)\n\nenv = TestEnv(\n env_params, \n sim_params, \n scenario, \n simulator='aimsun' \n)\n\nexp = Experiment(env)\nexp.run(1, 1000)" ]
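Running the tutorial above requires a full Flow, SUMO, or Aimsun installation, but the imported templates themselves are plain XML, so they can be inspected with the standard library alone. The snippet below (an addition to the tutorial) uses a tiny made-up net.xml fragment; real SUMO network files carry many more elements and attributes, and the edge IDs here simply echo the one used in the tutorial.

```python
# Inspect a (simplified, hypothetical) SUMO net.xml fragment: list each
# edge's ID and the length of its first lane.
import xml.etree.ElementTree as ET

sample_net = """<net>
  <edge id="-32410#3" from="n0" to="n1">
    <lane id="-32410#3_0" index="0" speed="13.89" length="85.3"/>
  </edge>
  <edge id="-32410#4" from="n1" to="n2">
    <lane id="-32410#4_0" index="0" speed="13.89" length="42.0"/>
  </edge>
</net>"""

root = ET.fromstring(sample_net)
edges = {
    edge.get("id"): float(edge.find("lane").get("length"))
    for edge in root.iter("edge")
}
print(edges)  # {'-32410#3': 85.3, '-32410#4': 42.0}
```

This sort of quick inspection is handy for picking valid values for edges_distribution or specify_routes before handing the template to Flow.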
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kringen/IOT-Back-Brace
data_collection/ProcessSensorReadings.ipynb
apache-2.0
[ "Script to Process the Sensor Readings - ProcessSensorReadings.py\nOverview:\nThis script uses Spark Streaming to read Kafka topics as the messages come in and then insert them into a MySQL database. There are two main methods:\n\n\nRead actual sensor readings: Kafka Topic (LumbarSensorReadings) -> writeLumbarReadings -> MySQL table: SensorReadings\n\n\nRead training sensor readings: Kafka Topic (LumbarSensorTrainingReadings) -> writeLumbarTrainingReadings -> MySQL table: SensorTrainingReadings\n\n\nThis script requires the JDBC driver in order to connect to a MySQL database.", "import json\nfrom pyspark.streaming import StreamingContext\nfrom pyspark.streaming.kafka import KafkaUtils\nfrom pyspark import SparkContext\nfrom pyspark.sql import SQLContext\nfrom pyspark.sql.functions import explode\nfrom pyspark.ml.feature import VectorAssembler\nfrom pyspark.mllib.tree import RandomForest, RandomForestModel\n\n#custom modules\nimport MySQLConnection\n\n\"\"\"\nIMPORTANT: MUST use class paths when using spark-submit\n$SPARK_HOME/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.2,mysql:mysql-connector-java:5.1.28 ProcessSensorReadings.py\n\"\"\"", "The \"writeLumbarReadings\" method takes the rdd received from Spark Streaming as an input. It then extracts the JSON data and converts it to a SQLContext dataframe.\nAfter this it creates a new column in the dataframe that contains the \"feature vector\" that will be used to predict the posture.\nThe prediction process uses a model that was created and saved previously;
it uses the feature vector to predict the posture.\nFinally, the extra feature column is dropped and the final dataframe is inserted into the MySQL database using JDBC.", "def writeLumbarReadings(time, rdd):\n\ttry:\n\t\t# Convert RDDs of the words DStream to DataFrame and run SQL query\n\t\tconnectionProperties = MySQLConnection.getDBConnectionProps('/home/erik/mysql_credentials.txt')\n\t\tsqlContext = SQLContext(rdd.context)\n\t\tif rdd.isEmpty() == False:\n\t\t\tlumbarReadings = sqlContext.jsonRDD(rdd)\n\t\t\tlumbarReadingsIntermediate = lumbarReadings.selectExpr(\"readingID\",\"readingTime\",\"deviceID\",\"metricTypeID\",\"uomID\",\"actual.y AS actualYaw\",\"actual.p AS actualPitch\",\"actual.r AS actualRoll\",\"setPoints.y AS setPointYaw\",\"setPoints.p AS setPointPitch\",\"setPoints.r AS setPointRoll\")\n\t\t\tassembler = VectorAssembler(\n\t\t\t\t\t\tinputCols=[\"actualPitch\"], # Must be in same order as what was used to train the model. Testing using only pitch since model has limited dataset.\n\t\t\t\t\t\toutputCol=\"features\")\n\t\t\tlumbarReadingsIntermediate = assembler.transform(lumbarReadingsIntermediate)\n\n\t\t\t\n\t\t\tpredictions = loadedModel.predict(lumbarReadingsIntermediate.map(lambda x: x.features))\n\t\t\tpredictionsDF = lumbarReadingsIntermediate.map(lambda x: x.readingID).zip(predictions).toDF([\"readingID\",\"positionID\"])\n\t\t\tcombinedDF = lumbarReadingsIntermediate.join(predictionsDF, lumbarReadingsIntermediate.readingID == predictionsDF.readingID).drop(predictionsDF.readingID)\n\t\t\t\n\t\t\tcombinedDF = combinedDF.drop(\"features\")\n\t\t\t\n\t\t\tcombinedDF.show()\n\n\n\t\t\tcombinedDF.write.jdbc(\"jdbc:mysql://localhost/biosensor\", \"SensorReadings\", properties=connectionProperties)\n\texcept:\n\t\tpass", "The \"writeLumbarTrainingReadings\" method also accepts an RDD from Spark Streaming but does not need to do any machine learning processing since we already know the posture from the JSON data.\nReadings are simply 
transformed to a SQLContext dataframe and then inserted into the MySQL training readings table.", "def writeLumbarTrainingReadings(time, rddTraining):\n\ttry:\n\t\t# Convert RDDs of the words DStream to DataFrame and run SQL query\n\t\tconnectionProperties = MySQLConnection.getDBConnectionProps('/home/erik/mysql_credentials.txt')\n\t\tsqlContext = SQLContext(rddTraining.context)\n\t\tif rddTraining.isEmpty() == False:\n\t\t\tlumbarTrainingReading = sqlContext.jsonRDD(rddTraining)\n\t\t\tlumbarTrainingReadingFinal = lumbarTrainingReading.selectExpr(\"deviceID\",\"metricTypeID\",\"uomID\",\"positionID\",\"actual.y AS actualYaw\",\"actual.p AS actualPitch\",\"actual.r AS actualRoll\",\"setPoints.y AS setPointYaw\",\"setPoints.p AS setPointPitch\",\"setPoints.r AS setPointRoll\")\n\t\t\tlumbarTrainingReadingFinal.write.jdbc(\"jdbc:mysql://localhost/biosensor\", \"SensorTrainingReadings\", properties=connectionProperties)\n\texcept:\n\t\tpass", "In the main part of the script the machine learning model is loaded and then two Spark StreamingContexts are created to listen for either actual device readings or training readings. 
The appropriate methods are then called upon receipt.", "if __name__ == \"__main__\":\n\tsc = SparkContext(appName=\"Process Lumbar Sensor Readings\")\n\tssc = StreamingContext(sc, 2) # 2 second batches\n\tloadedModel = RandomForestModel.load(sc, \"../machine_learning/models/IoTBackBraceRandomForest.model\")\n\n\t#Process Readings\n\tstreamLumbarSensor = KafkaUtils.createDirectStream(ssc, [\"LumbarSensorReadings\"], {\"metadata.broker.list\": \"localhost:9092\"})\n\tlineSensorReading = streamLumbarSensor.map(lambda x: x[1])\n\tlineSensorReading.foreachRDD(writeLumbarReadings)\n\t\n\t#Process Training Readings\n\tstreamLumbarSensorTraining = KafkaUtils.createDirectStream(ssc, [\"LumbarSensorTrainingReadings\"], {\"metadata.broker.list\": \"localhost:9092\"})\n\tlineSensorTrainingReading = streamLumbarSensorTraining.map(lambda x: x[1])\n\tlineSensorTrainingReading.foreachRDD(writeLumbarTrainingReadings)\n\n\t# Run and then wait for termination signal\n\tssc.start()\n\tssc.awaitTermination()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
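The notebook above depends on a running Spark, Kafka and MySQL stack, so it cannot be executed standalone. As a dependency-free sketch of just the JSON-flattening step that its `selectExpr` call performs (field names are taken from the notebook; the `flatten_reading` helper itself is hypothetical, not part of the original script):

```python
import json

def flatten_reading(raw):
    """Flatten one JSON sensor reading, mirroring the selectExpr step:
    the nested actual/setPoints orientation dicts become flat columns."""
    r = json.loads(raw)
    return {
        "readingID": r["readingID"],
        "deviceID": r["deviceID"],
        "actualYaw": r["actual"]["y"],
        "actualPitch": r["actual"]["p"],
        "actualRoll": r["actual"]["r"],
        "setPointYaw": r["setPoints"]["y"],
        "setPointPitch": r["setPoints"]["p"],
        "setPointRoll": r["setPoints"]["r"],
    }

raw = ('{"readingID": 1, "deviceID": "brace-01", '
       '"actual": {"y": 0.1, "p": 12.5, "r": -0.3}, '
       '"setPoints": {"y": 0.0, "p": 10.0, "r": 0.0}}')
row = flatten_reading(raw)
print(row["actualPitch"])  # -> 12.5
```

In the real pipeline this flattening happens inside Spark, so the VectorAssembler can then pick out `actualPitch` as the feature column.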
anilcs13m/MachineLearning_Mastering
sentiment analysis/Analyzing product sentiment.ipynb
gpl-2.0
[ "Predicting sentiment from product reviews\nLoad GraphLab", "import graphlab", "Read some product review data\nLoading reviews for a set of baby products.", "products = graphlab.SFrame('amazon_baby.gl/')", "Let's explore this data together\nThe data includes the product name, the review text and the rating of the review.", "products.head(5)", "Build the word count vector for each review from the products dataset", "products['word_count'] = graphlab.text_analytics.count_words(products['review'])\n\nproducts.head(5)", "Set the canvas target to ipynb so that graphs display inline in the IPython Notebook", "graphlab.canvas.set_target('ipynb')\n\nproducts['name'].show()", "Examining the reviews for the most-sold product: 'Vulli Sophie the Giraffe Teether'", "giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']\n\nlen(giraffe_reviews)", "There are 785 reviews in total for the Vulli Sophie the Giraffe Teether in the Amazon dataset", "giraffe_reviews['rating'].show(view='Categorical')", "Build a sentiment classifier", "products['rating'].show(view='Categorical')", "Define what's a positive and a negative sentiment\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. 
Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.", "#ignore all 3* reviews\nproducts = products[products['rating'] != 3]\n\n#positive sentiment = 4* or 5* reviews\nproducts['sentiment'] = products['rating'] >=4\n\nproducts.head(6)", "Let's train the sentiment classifier", "train_data,test_data = products.random_split(.8, seed=0)\n\nsentiment_model = graphlab.logistic_classifier.create(train_data,\n target='sentiment',\n features=['word_count'],\n validation_set=test_data)", "Evaluate the sentiment model", "sentiment_model.evaluate(test_data, metric='roc_curve')\n\nsentiment_model.show(view='Evaluation')", "Applying the learned model to understand sentiment for Giraffe", "giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')\n\ngiraffe_reviews.head(4)", "Sort the reviews based on the predicted sentiment and explore", "giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)\n\ngiraffe_reviews.head(5)", "Most positive reviews for the giraffe", "giraffe_reviews[0]['review']\n\ngiraffe_reviews[1]['review']", "Show most negative reviews for giraffe", "giraffe_reviews[-1]['review']\n\ngiraffe_reviews[-2]['review']", "use some normal feature\nWe used the word counts for all words in the reviews to train the sentiment classifier model. 
Now, we are going to follow a similar path, but only use this subset of the words:\nThe subset of words is as follows.", "selected_words = ['awesome', 'great', 'fantastic', 'amazing','love', 'horrible', \n                  'bad', 'terrible','awful', 'wow', 'hate']\n\nSelected_Frame = graphlab.SArray(selected_words)\n\nSelected_Frame", "Now we use only the words from each review that appear in the selected words; for that, the trimming step below cleans the reviews.", "bow = graphlab.text_analytics.count_words(products['review'])\n\n\n# Keep only the word counts for words that are present in Selected_Frame\n# and add a new column, words_clean, to hold them\nproducts['words_clean'] = bow.dict_trim_by_keys(Selected_Frame, exclude=False)\n\n## Remove the old column for the word counts\nproducts = products['name','review','rating','sentiment','words_clean']\n\nproducts.head(5)", "Let's train our model based on the clean words of the data frame", "train_data_clean,test_data_clean = products.random_split(.8, seed=0)\n\nsentiment_model_clean = graphlab.logistic_classifier.create(train_data_clean,\n                                                     target='sentiment',\n                                                     features=['words_clean'],\n                                                     validation_set=test_data_clean)", "Evaluate the Model", "sentiment_model_clean.evaluate(test_data_clean, metric='roc_curve')\n\nsentiment_model_clean.show(view='Evaluation')", "Applying the model learned from the clean words to understand sentiment for the Giraffe", "giraffe_reviews['predicted_sentiment'] = sentiment_model_clean.predict(giraffe_reviews, output_type='probability')\n\ngiraffe_reviews.head(4)", "Sort the reviews based on predicted sentiment", "giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)\n\ngiraffe_reviews.head(4)\n\nsentiment_model['coefficients']\n\nsentiment_model_clean['coefficients']", "THANKS" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
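The selected-words trimming in the notebook above relies on GraphLab Create's `count_words` and `dict_trim_by_keys`. A plain-Python sketch of the same idea — count only occurrences of the selected words in a review — that makes no use of GraphLab (the `selected_word_counts` helper is illustrative, not from the notebook):

```python
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible',
                  'bad', 'terrible', 'awful', 'wow', 'hate']

def selected_word_counts(review):
    # Lowercase, turn punctuation into spaces, then keep only tokens from the
    # selected list -- roughly what count_words + dict_trim_by_keys do together.
    tokens = ''.join(c if c.isalnum() else ' ' for c in review.lower()).split()
    counts = {}
    for tok in tokens:
        if tok in selected_words:
            counts[tok] = counts.get(tok, 0) + 1
    return counts

print(selected_word_counts("Great toy, my baby loves it. Great value!"))
# -> {'great': 2}
```

Note that exact-token matching misses inflected forms ("loves" does not count as "love"), which is one reason the trimmed model in the notebook performs worse than the full word-count model.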
mayankjohri/LetsExplorePython
Section 1 - Core Python/Chapter 11 - Exceptions/Chapter13_Exceptions.ipynb
gpl-3.0
[ "Chapter 13: Exceptions\n\nWhen a failure occurs in the program (such as division by zero, for example) at runtime, an exception is generated. If the exception is not handled, it will be propagated through function calls to the main program module, interrupting execution.", "print (10/0)", "The try instruction allows exception handling in Python. If an exception occurs in a block marked by try, it is possible to handle the exception through the instruction except. It is possible to have many except blocks for the same try block.", "try:\n print (1/0)\nexcept ZeroDivisionError:\n print ('Error trying to divide by zero.')\n\ntry:\n print (1/0)\nexcept:\n print ('Error trying to divide by zero.')\n\ntry:\n print (1/0)\nexcept Exception:\n # Please put the ex.message in some logfile instead of on the console\n print ('Error trying to divide by zero.', Exception)\n\ntry:\n print (1/0)\nexcept Exception as e:\n # Please put the ex.message in some logfile instead of on the console\n print ('Error trying to divide by zero.', e)\n print(dir(e))", "If except receives the name of an exception, only that exception will be handled. If no exception name is passed as a parameter, all exceptions will be handled.\nExample:", "import sys\n\ntry:\n print(\"... TESTing.. 
\")\n with open('myfile.txt', \"w\") as myFile:\n for a in [\"a\", \"b\", \"c\"]:\n myFile.write(str(a))\n for a in [1, 2, 3, 4, 5, \"6\"]:\n myFile.write(str(a))\n\n# f = open('myfile.txt')\n# s = f.readline()\n# i = int(s.strip())\n raise Exception(\"Test Exception\")\nexcept OSError as err:\n print(\"OS error: {0}\".format(err))\nexcept ValueError:\n print(\"Could not convert data to an integer.\")\n raise\nexcept:\n print(\"Unexpected error:\", sys.exc_info())\n try:\n print(1/0)\n except:\n print(\"Hallo, Ja\")\n raise\n\nimport sys\n\ntry:\n# print(1 > z)\n print(1 / a)\nexcept (OSError, NameError, ValueError, ZeroDivisionError) as err:\n print(\"Some common error: {0}\".format(err))\nexcept:\n print(\"Unexpected error:\", sys.exc_info())\n \n\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Aug 5 08:50:42 2016\n\n@author: mayankjohri@gmail.com\n\"\"\"\nimport traceback\n\n# Try to get a file name\ntry:\n# fn = input('File Name (temp.txt): ').strip()\n fn = \"temp.txt\"\n # Numbering lines\n for i, s in enumerate(open(fn)):\n print( i + 1,\"> \", s,)\n if i == 2:\n raise ValueError\n \n# If an error happens\nexcept:\n # Show it on the screen\n trace = traceback.format_exc()\n\n # And save it on a file\n print ('An error happened:\\n', trace)\n \n with open(\"trace_asd.log\", \"a+\") as file:\n file.write(trace)\n \n# file('trace_asd.log', 'a').write(trace)\n\n # end the program\n# raise SystemExit", "The module traceback offers functions for dealing with error messages. The function format_exc() returns the output of the last exception formatted in a string.\nThe handling of exceptions may have an else block, which will be executed when no exception occurs and a finally block, which will be executed anyway, whether an exception occurred or <span class=\"note\" title=\"The finally declaration may be used for freeing resources that were used in the try block, such as database connections or open files.\">not</span>. 
New types of exceptions may be defined through inheritance of the class Exception.\nSince version 2.6, the instruction with is available, that may replace the combination of try / finally in many situations. It is possible to define an object that will be used during the with block execution. The object will support the context management protocol, which means that it will need to have an __enter__() method, which will be executed at the beginning of the block, and another called __exit__(), which will be called at the end of the block.\nExample:", "def do_some_stuff():\n print(\"Doing some stuff\")\n\ndef do_some_stuff_e():\n print(\"Doing some stuff and will now raise error\")\n raise ValueError('A very specific bad thing happened')\n\ndef rollback():\n print(\"reverting the changes\")\n\ndef commit():\n print(\"commiting the changes\")\n \nprint(\"Testing\")\n\ntry:\n# do_some_stuff()\n do_some_stuff_e()\nexcept:\n rollback()\n# raise \nelse:\n commit()\nfinally:\n print(\"Exiting out\")\n \n# #### ERROR Condtion\n# Testing\n# try block\n# Doing some stuff and will now raise error\n# except block\n# reverting the changes\n# Finally block\n# Exiting out\n\n# NO ERROR\n# Testing\n# Try block\n# Doing some stuff\n# else block\n# commiting the changes\n# finally block\n# Exiting out\n\n", "Writing Exception Classes", "class HostNotFound(Exception):\n def __init__( self, host ):\n self.host = host\n Exception.__init__(self, 'Host Not Found exception: missing %s' % host)\n\ntry:\n raise HostNotFound(\"gitpub.com\")\nexcept HostNotFound as hcf:\n # Handle exception.\n print (hcf) # -> 'Host Not Found exception: missing taoriver.net'\n print (hcf.host) # -> 'gitpub.net'\n\ntry:\n fh = open(\"nonexisting.txt\", \"r\")\n try:\n fh.write(\"This is my test file for exception handling!!\")\n print(1/0)\n except:\n print(\"Caught error message\")\n finally:\n print (\"Going to close the file\")\n fh.close()\nexcept IOError:\n print (\"Error: can\\'t find file or read 
data\")\n\ntry:\n# fh = open(\"nonexisting.txt\", \"r\")\n try:\n fh.write(\"This is my test file for exception handling!!\")\n print(1/0)\n except:\n print(\"Caught error message\")\n raise\n finally:\n print (\"Going to close the file\")\n fh.close()\nexcept:\n print (\"Error: can\\'t find file or read data\")\n\ntry:\n# fh = open(\"nonexisting.txt\", \"r\")\n try:\n# fh.write(\"This is my test file for exception handling!!\")\n print(1/0)\n except:\n print(\"Caught error message\") \n finally:\n print (\"Going to close the file\")\n# fh.close()print(1/0)\n print(1/0)\nexcept :\n print (\"Error: can\\'t find file or read data\")\n raise", "Exception hierarchy\nThe class hierarchy for built-in exceptions is:\n```\nBaseException\n +-- SystemExit\n +-- KeyboardInterrupt\n +-- GeneratorExit\n +-- Exception\n +-- StopIteration\n +-- StopAsyncIteration\n +-- ArithmeticError\n | +-- FloatingPointError\n | +-- OverflowError\n | +-- ZeroDivisionError\n +-- AssertionError\n +-- AttributeError\n +-- BufferError\n +-- EOFError\n +-- ImportError\n +-- LookupError\n | +-- IndexError\n | +-- KeyError\n +-- MemoryError\n +-- NameError\n | +-- UnboundLocalError\n +-- OSError\n | +-- BlockingIOError\n | +-- ChildProcessError\n | +-- ConnectionError\n | | +-- BrokenPipeError\n | | +-- ConnectionAbortedError\n | | +-- ConnectionRefusedError\n | | +-- ConnectionResetError\n | +-- FileExistsError\n | +-- FileNotFoundError\n | +-- InterruptedError\n | +-- IsADirectoryError\n | +-- NotADirectoryError\n | +-- PermissionError\n | +-- ProcessLookupError\n | +-- TimeoutError\n +-- ReferenceError\n +-- RuntimeError\n | +-- NotImplementedError\n | +-- RecursionError\n +-- SyntaxError\n | +-- IndentationError\n | +-- TabError\n +-- SystemError\n +-- TypeError\n +-- ValueError\n | +-- UnicodeError\n | +-- UnicodeDecodeError\n | +-- UnicodeEncodeError\n | +-- UnicodeTranslateError\n +-- Warning\n +-- DeprecationWarning\n +-- PendingDeprecationWarning\n +-- RuntimeWarning\n +-- SyntaxWarning\n 
+-- UserWarning\n +-- FutureWarning\n +-- ImportWarning\n +-- UnicodeWarning\n +-- BytesWarning\n +-- ResourceWarning\n```", "import inspect\ninspect.getclasstree(inspect.getmro(Exception))\n\n# https://stackoverflow.com/questions/18296653/print-the-python-exception-error-hierarchy\ndef classtree(cls, indent=0):\n print ('.' * indent, cls.__name__)\n for subcls in cls.__subclasses__():\n classtree(subcls, indent + 3)\n\nclasstree(BaseException)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
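The try/except/else/finally ordering that the chapter above describes can be checked with a short self-contained example (the `run` helper is hypothetical; the logged order matches the chapter's ERROR and NO-ERROR traces):

```python
def run(divisor, log):
    log.append("try")
    try:
        result = 10 / divisor
    except ZeroDivisionError as exc:
        log.append("except: %s" % exc)
        result = None
    else:
        log.append("else")     # runs only when no exception occurred
    finally:
        log.append("finally")  # runs in both cases
    return result

log = []
assert run(2, log) == 5.0
assert log == ["try", "else", "finally"]

log = []
assert run(0, log) is None
assert log[0] == "try" and log[-1] == "finally"
```

The `else` block is skipped whenever the exception path is taken, while `finally` always executes, which is why it is the right place to release resources such as file handles.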
tensorflow/docs-l10n
site/en-snapshot/tutorials/text/image_captioning.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Image captioning with visual attention\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/text/image_captioning\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/image_captioning.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nGiven an image like the example below, your goal is to generate a caption such as \"a surfer riding on a wave\".\n\nImage Source; License: Public Domain\nTo accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.\n\nThe model architecture is similar to Show, 
Attend and Tell: Neural Image Caption Generation with Visual Attention.\nThis notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model.\nIn this example, you will train a model on a relatively small amount of data—the first 30,000 captions for about 20,000 images (because there are multiple captions per image in the dataset).", "import tensorflow as tf\n\n# You'll generate plots of attention in order to see which parts of an image\n# your model focuses on during captioning\nimport matplotlib.pyplot as plt\n\nimport collections\nimport random\nimport numpy as np\nimport os\nimport time\nimport json\nfrom PIL import Image", "Download and prepare the MS-COCO dataset\nYou will use the MS-COCO dataset to train your model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.\nCaution: large download ahead. 
You'll use the training set, which is a 13GB file.", "# Download caption annotation files\nannotation_folder = '/annotations/'\nif not os.path.exists(os.path.abspath('.') + annotation_folder):\n annotation_zip = tf.keras.utils.get_file('captions.zip',\n cache_subdir=os.path.abspath('.'),\n origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',\n extract=True)\n annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'\n os.remove(annotation_zip)\n\n# Download image files\nimage_folder = '/train2014/'\nif not os.path.exists(os.path.abspath('.') + image_folder):\n image_zip = tf.keras.utils.get_file('train2014.zip',\n cache_subdir=os.path.abspath('.'),\n origin='http://images.cocodataset.org/zips/train2014.zip',\n extract=True)\n PATH = os.path.dirname(image_zip) + image_folder\n os.remove(image_zip)\nelse:\n PATH = os.path.abspath('.') + image_folder", "Optional: limit the size of the training set\nTo speed up training for this tutorial, you'll use a subset of 30,000 captions and their corresponding images to train your model. 
Choosing to use more data would result in improved captioning quality.", "with open(annotation_file, 'r') as f:\n annotations = json.load(f)\n\n# Group all captions together having the same image ID.\nimage_path_to_caption = collections.defaultdict(list)\nfor val in annotations['annotations']:\n caption = f\"<start> {val['caption']} <end>\"\n image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (val['image_id'])\n image_path_to_caption[image_path].append(caption)\n\nimage_paths = list(image_path_to_caption.keys())\nrandom.shuffle(image_paths)\n\n# Select the first 6000 image_paths from the shuffled set.\n# Approximately each image id has 5 captions associated with it, so that will\n# lead to 30,000 examples.\ntrain_image_paths = image_paths[:6000]\nprint(len(train_image_paths))\n\ntrain_captions = []\nimg_name_vector = []\n\nfor image_path in train_image_paths:\n caption_list = image_path_to_caption[image_path]\n train_captions.extend(caption_list)\n img_name_vector.extend([image_path] * len(caption_list))\n\nprint(train_captions[0])\nImage.open(img_name_vector[0])", "Preprocess the images using InceptionV3\nNext, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. 
You will extract features from the last convolutional layer.\nFirst, you will convert the images into InceptionV3's expected format by:\n* Resizing the image to 299px by 299px\n* Preprocess the images using the preprocess_input method to normalize the image so that it contains pixels in the range of -1 to 1, which matches the format of the images used to train InceptionV3.", "def load_image(image_path):\n img = tf.io.read_file(image_path)\n img = tf.io.decode_jpeg(img, channels=3)\n img = tf.keras.layers.Resizing(299, 299)(img)\n img = tf.keras.applications.inception_v3.preprocess_input(img)\n return img, image_path", "Initialize InceptionV3 and load the pretrained Imagenet weights\nNow you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is 8x8x2048. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.\n\nYou forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).\nAfter all the images are passed through the network, you save the dictionary to disk.", "image_model = tf.keras.applications.InceptionV3(include_top=False,\n weights='imagenet')\nnew_input = image_model.input\nhidden_layer = image_model.layers[-1].output\n\nimage_features_extract_model = tf.keras.Model(new_input, hidden_layer)", "Caching the features extracted from InceptionV3\nYou will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 * 8 * 2048 floats per image. 
At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).\nPerformance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code.\nThe caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can: \n\n\nInstall tqdm:\n!pip install tqdm\n\n\nImport tqdm:\nfrom tqdm import tqdm\n\n\nChange the following line:\nfor img, path in image_dataset:\nto:\nfor img, path in tqdm(image_dataset):", "# Get unique images\nencode_train = sorted(set(img_name_vector))\n\n# Feel free to change batch_size according to your system configuration\nimage_dataset = tf.data.Dataset.from_tensor_slices(encode_train)\nimage_dataset = image_dataset.map(\n load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)\n\nfor img, path in image_dataset:\n batch_features = image_features_extract_model(img)\n batch_features = tf.reshape(batch_features,\n (batch_features.shape[0], -1, batch_features.shape[3]))\n\n for bf, p in zip(batch_features, path):\n path_of_feature = p.numpy().decode(\"utf-8\")\n np.save(path_of_feature, bf.numpy())", "Preprocess and tokenize the captions\nYou will transform the text captions into integer sequences using the TextVectorization layer, with the following steps:\n\nUse adapt to iterate over all captions, split the captions into words, and compute a vocabulary of the top 5,000 words (to save memory).\nTokenize all captions by mapping each word to its index in the vocabulary. 
All output sequences will be padded to length 50.\nCreate word-to-index and index-to-word mappings to display results.", "caption_dataset = tf.data.Dataset.from_tensor_slices(train_captions)\n\n# We will override the default standardization of TextVectorization to preserve\n# \"<>\" characters, so we preserve the tokens for the <start> and <end>.\ndef standardize(inputs):\n inputs = tf.strings.lower(inputs)\n return tf.strings.regex_replace(inputs,\n r\"!\\\"#$%&\\(\\)\\*\\+.,-/:;=?@\\[\\\\\\]^_`{|}~\", \"\")\n\n# Max word count for a caption.\nmax_length = 50\n# Use the top 5000 words for a vocabulary.\nvocabulary_size = 5000\ntokenizer = tf.keras.layers.TextVectorization(\n max_tokens=vocabulary_size,\n standardize=standardize,\n output_sequence_length=max_length)\n# Learn the vocabulary from the caption data.\ntokenizer.adapt(caption_dataset)\n\n# Create the tokenized vectors\ncap_vector = caption_dataset.map(lambda x: tokenizer(x))\n\n# Create mappings for words to indices and indices to words.\nword_to_index = tf.keras.layers.StringLookup(\n mask_token=\"\",\n vocabulary=tokenizer.get_vocabulary())\nindex_to_word = tf.keras.layers.StringLookup(\n mask_token=\"\",\n vocabulary=tokenizer.get_vocabulary(),\n invert=True)", "Split the data into training and testing", "img_to_cap_vector = collections.defaultdict(list)\nfor img, cap in zip(img_name_vector, cap_vector):\n img_to_cap_vector[img].append(cap)\n\n# Create training and validation sets using an 80-20 split randomly.\nimg_keys = list(img_to_cap_vector.keys())\nrandom.shuffle(img_keys)\n\nslice_index = int(len(img_keys)*0.8)\nimg_name_train_keys, img_name_val_keys = img_keys[:slice_index], img_keys[slice_index:]\n\nimg_name_train = []\ncap_train = []\nfor imgt in img_name_train_keys:\n capt_len = len(img_to_cap_vector[imgt])\n img_name_train.extend([imgt] * capt_len)\n cap_train.extend(img_to_cap_vector[imgt])\n\nimg_name_val = []\ncap_val = []\nfor imgv in img_name_val_keys:\n capv_len = 
len(img_to_cap_vector[imgv])\n img_name_val.extend([imgv] * capv_len)\n cap_val.extend(img_to_cap_vector[imgv])\n\nlen(img_name_train), len(cap_train), len(img_name_val), len(cap_val)", "Create a tf.data dataset for training\nYour images and captions are ready! Next, let's create a tf.data dataset to use for training your model.", "# Feel free to change these parameters according to your system's configuration\n\nBATCH_SIZE = 64\nBUFFER_SIZE = 1000\nembedding_dim = 256\nunits = 512\nnum_steps = len(img_name_train) // BATCH_SIZE\n# Shape of the vector extracted from InceptionV3 is (64, 2048)\n# These two variables represent that vector shape\nfeatures_shape = 2048\nattention_features_shape = 64\n\n# Load the numpy files\ndef map_func(img_name, cap):\n img_tensor = np.load(img_name.decode('utf-8')+'.npy')\n return img_tensor, cap\n\ndataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))\n\n# Use map to load the numpy files in parallel\ndataset = dataset.map(lambda item1, item2: tf.numpy_function(\n map_func, [item1, item2], [tf.float32, tf.int64]),\n num_parallel_calls=tf.data.AUTOTUNE)\n\n# Shuffle and batch\ndataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\ndataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)", "Model\nFun fact: the decoder below is identical to the one in the example for Neural Machine Translation with Attention.\nThe model architecture is inspired by the Show, Attend and Tell paper.\n\nIn this example, you extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).\nYou squash that to a shape of (64, 2048).\nThis vector is then passed through the CNN Encoder (which consists of a single Fully connected layer).\nThe RNN (here GRU) attends over the image to predict the next word.", "class BahdanauAttention(tf.keras.Model):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = 
tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n\n def call(self, features, hidden):\n # features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)\n\n # hidden shape == (batch_size, hidden_size)\n # hidden_with_time_axis shape == (batch_size, 1, hidden_size)\n hidden_with_time_axis = tf.expand_dims(hidden, 1)\n\n # attention_hidden_layer shape == (batch_size, 64, units)\n attention_hidden_layer = (tf.nn.tanh(self.W1(features) +\n self.W2(hidden_with_time_axis)))\n\n # score shape == (batch_size, 64, 1)\n # This gives you an unnormalized score for each image feature.\n score = self.V(attention_hidden_layer)\n\n # attention_weights shape == (batch_size, 64, 1)\n attention_weights = tf.nn.softmax(score, axis=1)\n\n # context_vector shape after sum == (batch_size, hidden_size)\n context_vector = attention_weights * features\n context_vector = tf.reduce_sum(context_vector, axis=1)\n\n return context_vector, attention_weights\n\nclass CNN_Encoder(tf.keras.Model):\n # Since you have already extracted the features and dumped it\n # This encoder passes those features through a Fully connected layer\n def __init__(self, embedding_dim):\n super(CNN_Encoder, self).__init__()\n # shape after fc == (batch_size, 64, embedding_dim)\n self.fc = tf.keras.layers.Dense(embedding_dim)\n\n def call(self, x):\n x = self.fc(x)\n x = tf.nn.relu(x)\n return x\n\nclass RNN_Decoder(tf.keras.Model):\n def __init__(self, embedding_dim, units, vocab_size):\n super(RNN_Decoder, self).__init__()\n self.units = units\n\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n self.fc1 = tf.keras.layers.Dense(self.units)\n self.fc2 = tf.keras.layers.Dense(vocab_size)\n\n self.attention = BahdanauAttention(self.units)\n\n def call(self, x, features, hidden):\n # defining attention as a separate model\n context_vector, 
attention_weights = self.attention(features, hidden)\n\n # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n x = self.embedding(x)\n\n # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n\n # passing the concatenated vector to the GRU\n output, state = self.gru(x)\n\n # shape == (batch_size, max_length, hidden_size)\n x = self.fc1(output)\n\n # x shape == (batch_size * max_length, hidden_size)\n x = tf.reshape(x, (-1, x.shape[2]))\n\n # output shape == (batch_size * max_length, vocab)\n x = self.fc2(x)\n\n return x, state, attention_weights\n\n def reset_state(self, batch_size):\n return tf.zeros((batch_size, self.units))\n\nencoder = CNN_Encoder(embedding_dim)\ndecoder = RNN_Decoder(embedding_dim, units, tokenizer.vocabulary_size())\n\noptimizer = tf.keras.optimizers.Adam()\nloss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, reduction='none')\n\n\ndef loss_function(real, pred):\n mask = tf.math.logical_not(tf.math.equal(real, 0))\n loss_ = loss_object(real, pred)\n\n mask = tf.cast(mask, dtype=loss_.dtype)\n loss_ *= mask\n\n return tf.reduce_mean(loss_)", "Checkpoint", "checkpoint_path = \"./checkpoints/train\"\nckpt = tf.train.Checkpoint(encoder=encoder,\n decoder=decoder,\n optimizer=optimizer)\nckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)\n\nstart_epoch = 0\nif ckpt_manager.latest_checkpoint:\n start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])\n # restoring the latest checkpoint in checkpoint_path\n ckpt.restore(ckpt_manager.latest_checkpoint)", "Training\n\nYou extract the features stored in the respective .npy files and then pass those features through the encoder.\nThe encoder output, hidden state(initialized to 0) and the decoder input (which is the start token) is passed to the decoder.\nThe decoder returns the predictions and the decoder hidden state.\nThe 
decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.\nUse teacher forcing to decide the next input to the decoder.\nTeacher forcing is the technique where the target word is passed as the next input to the decoder.\nThe final step is to calculate the gradients and apply it to the optimizer and backpropagate.", "# adding this in a separate cell because if you run the training cell\n# many times, the loss_plot array will be reset\nloss_plot = []\n\n@tf.function\ndef train_step(img_tensor, target):\n loss = 0\n\n # initializing the hidden state for each batch\n # because the captions are not related from image to image\n hidden = decoder.reset_state(batch_size=target.shape[0])\n\n dec_input = tf.expand_dims([word_to_index('<start>')] * target.shape[0], 1)\n\n with tf.GradientTape() as tape:\n features = encoder(img_tensor)\n\n for i in range(1, target.shape[1]):\n # passing the features through the decoder\n predictions, hidden, _ = decoder(dec_input, features, hidden)\n\n loss += loss_function(target[:, i], predictions)\n\n # using teacher forcing\n dec_input = tf.expand_dims(target[:, i], 1)\n\n total_loss = (loss / int(target.shape[1]))\n\n trainable_variables = encoder.trainable_variables + decoder.trainable_variables\n\n gradients = tape.gradient(loss, trainable_variables)\n\n optimizer.apply_gradients(zip(gradients, trainable_variables))\n\n return loss, total_loss\n\nEPOCHS = 20\n\nfor epoch in range(start_epoch, EPOCHS):\n start = time.time()\n total_loss = 0\n\n for (batch, (img_tensor, target)) in enumerate(dataset):\n batch_loss, t_loss = train_step(img_tensor, target)\n total_loss += t_loss\n\n if batch % 100 == 0:\n average_batch_loss = batch_loss.numpy()/int(target.shape[1])\n print(f'Epoch {epoch+1} Batch {batch} Loss {average_batch_loss:.4f}')\n # storing the epoch end loss value to plot later\n loss_plot.append(total_loss / num_steps)\n\n if epoch % 5 == 0:\n ckpt_manager.save()\n\n print(f'Epoch 
{epoch+1} Loss {total_loss/num_steps:.6f}')\n print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\\n')\n\nplt.plot(loss_plot)\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.title('Loss Plot')\nplt.show()", "Caption!\n\nThe evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.\nStop predicting when the model predicts the end token.\nAnd store the attention weights for every time step.", "def evaluate(image):\n attention_plot = np.zeros((max_length, attention_features_shape))\n\n hidden = decoder.reset_state(batch_size=1)\n\n temp_input = tf.expand_dims(load_image(image)[0], 0)\n img_tensor_val = image_features_extract_model(temp_input)\n img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0],\n -1,\n img_tensor_val.shape[3]))\n\n features = encoder(img_tensor_val)\n\n dec_input = tf.expand_dims([word_to_index('<start>')], 0)\n result = []\n\n for i in range(max_length):\n predictions, hidden, attention_weights = decoder(dec_input,\n features,\n hidden)\n\n attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()\n\n predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()\n predicted_word = tf.compat.as_text(index_to_word(predicted_id).numpy())\n result.append(predicted_word)\n\n if predicted_word == '<end>':\n return result, attention_plot\n\n dec_input = tf.expand_dims([predicted_id], 0)\n\n attention_plot = attention_plot[:len(result), :]\n return result, attention_plot\n\ndef plot_attention(image, result, attention_plot):\n temp_image = np.array(Image.open(image))\n\n fig = plt.figure(figsize=(10, 10))\n\n len_result = len(result)\n for i in range(len_result):\n temp_att = np.resize(attention_plot[i], (8, 8))\n grid_size = max(int(np.ceil(len_result/2)), 2)\n ax = fig.add_subplot(grid_size, grid_size, i+1)\n ax.set_title(result[i])\n img = ax.imshow(temp_image)\n 
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())\n\n plt.tight_layout()\n plt.show()\n\n# captions on the validation set\nrid = np.random.randint(0, len(img_name_val))\nimage = img_name_val[rid]\nreal_caption = ' '.join([tf.compat.as_text(index_to_word(i).numpy())\n for i in cap_val[rid] if i not in [0]])\nresult, attention_plot = evaluate(image)\n\nprint('Real Caption:', real_caption)\nprint('Prediction Caption:', ' '.join(result))\nplot_attention(image, result, attention_plot)", "Try it on your own images\nFor fun, below you're provided a method you can use to caption your own images with the model you've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!)", "image_url = 'https://tensorflow.org/images/surf.jpg'\nimage_extension = image_url[-4:]\nimage_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url)\n\nresult, attention_plot = evaluate(image_path)\nprint('Prediction Caption:', ' '.join(result))\nplot_attention(image_path, result, attention_plot)\n# opening the image\nImage.open(image_path)", "Next steps\nCongrats! You've just trained an image captioning model with attention. Next, take a look at this example Neural Machine Translation with Attention. It uses a similar architecture to translate between Spanish and English sentences. You can also experiment with training the code in this notebook on a different dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
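The teacher-forcing step in the training loop above — feeding the ground-truth token back into the decoder instead of its own prediction, as `evaluate` later does — can be illustrated with a toy next-token model. This is a hypothetical sketch for intuition only: the lookup-table "model" and its deliberately planted error are invented, not part of the tutorial code.

```python
# Toy "decoder": predicts the next token from the previous one via a lookup
# table. A wrong prediction is planted at 'b' to show how teacher forcing
# contains a mistake while free-running decoding propagates it.
MODEL = {'<start>': 'a', 'a': 'b', 'b': 'X', 'X': 'X', 'c': 'd'}
TARGET = ['a', 'b', 'c', 'd']

def decode(teacher_forcing):
    inp, out = '<start>', []
    for step in range(len(TARGET)):
        pred = MODEL[inp]
        out.append(pred)
        # teacher forcing feeds the ground-truth token back in;
        # free running feeds the model's own prediction back in
        inp = TARGET[step] if teacher_forcing else pred
    return out
```

With forcing, the planted mistake stays local to one step (`['a', 'b', 'X', 'd']`); free running derails every step after it (`['a', 'b', 'X', 'X']`), which is why training uses forcing while inference has to live with compounding errors.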
micahhausler/consul-demo
Consul.ipynb
mit
[ "Consul Intro\nPlease see the README.md and get your consul cluster up and running\nSetting a key/value with requests\nFor full documentation, see the Consul http-api", "import requests\n\nbase_url = 'http://192.168.59.103:8500/v1/kv/'\n\nresponse = requests.put(base_url + 'key1', data=\"value1\")\nprint(response.text)", "Getting a key/value with python-consul\nFor full documentation, see python-consul documentation", "from consul import Consul\n\nc = Consul('192.168.59.103')\n\nindex, data = c.kv.get('key1')\n\nprint(data['Value'].decode('utf8'))", "Other features:\n\nLocks\nDNS for service discovery\nLimit access to key/value subsets with ACL's and tokens" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
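One detail the Consul notebook above glosses over: when you GET a key through the HTTP KV endpoint with plain `requests` (rather than `python-consul`), the `Value` field in the JSON response comes back base64-encoded. Below is a sketch of the decoding step using a canned response instead of a live agent — the JSON literal is hand-written for illustration.

```python
import base64
import json

# canned stand-in for requests.get(base_url + 'key1').text;
# "dmFsdWUx" is the base64 encoding of "value1"
canned_response = '[{"Key": "key1", "Value": "dmFsdWUx", "Flags": 0}]'

entries = json.loads(canned_response)
value = base64.b64decode(entries[0]["Value"]).decode("utf8")
```

`python-consul` performs this base64 decoding for you and hands back raw bytes, which is why the notebook only needs `data['Value'].decode('utf8')`.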
diegocavalca/Studies
books/deep-learning-with-python/5.1-introduction-to-convnets.ipynb
cc0-1.0
[ "import keras\nkeras.__version__", "5.1 - Introduction to convnets\nThis notebook contains the code sample found in Chapter 5, Section 1 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.\n\nFirst, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been \nthrough in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its \naccuracy will still blow out of the water that of the densely-connected model from Chapter 2.\nThe 6 lines of code below show you what a basic convnet looks like. It's a stack of Conv2D and MaxPooling2D layers. We'll see in a \nminute what they do concretely.\nImportantly, a convnet takes as input tensors of shape (image_height, image_width, image_channels) (not including the batch dimension). \nIn our case, we will configure our convnet to process inputs of size (28, 28, 1), which is the format of MNIST images. We do this via \npassing the argument input_shape=(28, 28, 1) to our first layer.", "from keras import layers\nfrom keras import models\n\nmodel = models.Sequential()\nmodel.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))", "Let's display the architecture of our convnet so far:", "model.summary()", "You can see above that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width \nand height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to \nthe Conv2D layers (e.g. 
32 or 64).\nThe next step would be to feed our last output tensor (of shape (3, 3, 64)) into a densely-connected classifier network like those you are \nalready familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor. \nSo first, we will have to flatten our 3D outputs to 1D, and then add a few Dense layers on top:", "model.add(layers.Flatten())\nmodel.add(layers.Dense(64, activation='relu'))\nmodel.add(layers.Dense(10, activation='softmax'))", "We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network \nlooks like:", "model.summary()", "As you can see, our (3, 3, 64) outputs were flattened into vectors of shape (576,), before going through two Dense layers.\nNow, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter \n2.", "from keras.datasets import mnist\nfrom keras.utils import to_categorical\n\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\ntrain_images = train_images.reshape((60000, 28, 28, 1))\ntrain_images = train_images.astype('float32') / 255\n\ntest_images = test_images.reshape((10000, 28, 28, 1))\ntest_images = test_images.astype('float32') / 255\n\ntrain_labels = to_categorical(train_labels)\ntest_labels = to_categorical(test_labels)\n\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(train_images, train_labels, epochs=5, batch_size=64)", "Let's evaluate the model on the test data:", "test_loss, test_acc = model.evaluate(test_images, test_labels)\n\ntest_acc", "While our densely-connected network from Chapter 2 had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.3%: we \ndecreased our error rate by 68% (relative). Not bad!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
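The shrinking feature-map sizes reported by `model.summary()` above follow from two simple rules: a 3x3 convolution with 'valid' padding trims `kernel - 1 = 2` from each spatial dimension, and a 2x2 max pooling floor-divides it by 2. A small stdlib-only helper (hypothetical, not from the book) reproduces the chapter's numbers:

```python
def conv_out(size, kernel=3):
    # 'valid' convolution, stride 1: no padding, so each dim loses kernel - 1
    return size - kernel + 1

def pool_out(size, window=2):
    # non-overlapping max pooling: stride equals the window size
    return size // window

size = 28
size = pool_out(conv_out(size))  # Conv2D(32): 28 -> 26, MaxPooling: -> 13
size = pool_out(conv_out(size))  # Conv2D(64): 13 -> 11, MaxPooling: -> 5
size = conv_out(size)            # Conv2D(64): 5 -> 3

flattened = size * size * 64     # 3 * 3 * 64 = 576, matching the summary
```

This matches the `(3, 3, 64)` output tensor and the `(576,)` vectors seen after `Flatten()`.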
mjirik/lisa
devel/read_mat_and_wfdb_ecg_holter.ipynb
bsd-3-clause
[ "Files with Holter recordings\nThe WFDB format (.hea/.dat) appears to be the most useful one, since it contains the recording of 3 ECG leads.", "from matplotlib import pyplot as plt\nimport pprint\nimport numpy as np", "The .mat format\nA format for working in Matlab. The file with the .mat extension contains metadata about the acquired recording - date, time, name. It does not contain any ECG waveform records. The PEVENT item contains 7 numbers of unknown meaning.", "import scipy.io\nmat = scipy.io.loadmat('_.mat')\n\npprint.pprint(mat)\n\nbeatpos = mat['beatpos']\nlen(beatpos[0])\n\n\nlen(mat['PEVENT'][0])\n\nlen(mat['beats'][0])\n\nplt.plot(beatpos[0])\n\nplt.plot(mat['PEVENT'][0])", "Physiobank WFDB files (.hea/.dat)\nThis format contains the ECG recordings. The test file contains recordings from 3 leads. We load the first 5 seconds so that something can be seen in the figure. Presumably 'fs' stands for the sampling frequency.\nhttps://archive.physionet.org/physiotools/wfdb.shtml\nhttps://github.com/MIT-LCP/wfdb-python\nInstalling the Python package\nshell\npip install wfdb\nVisualizing the first 5 seconds", "import wfdb\nfn = \"_\"\nfrom wfdb import processing\nsig, fields = wfdb.rdsamp(fn, channels=None, sampto=None)\n# sig, fields = wfdb.rdsamp(fn, channels=None, sampto=5500)\nfields\n\nsig\n\n# remove noisy signal at the beginning\nsig = sig[300:]\n# cut 5 seconds\nsig = sig[:5000]\n\ntime = np.array(range(len(sig)))*1/fields['fs']\nplt.plot(time, sig)\nplt.legend(fields['sig_name'])\nplt.ylabel(fields['units'][0])\nplt.xlabel('s')\nplt.savefig(\"ekg.png\")", "Matlab export", "sig, fields = wfdb.rdsamp(fn, channels=None, sampto=10000)\nfields['sig'] = sig\ntm = fields['base_time']\ndt = fields['base_date']\nfields['base_time'] = [tm.hour, tm.minute, tm.second]\nfields['base_date'] = [dt.year, dt.month, dt.day]\nscipy.io.savemat(fn + 'wfdb_export.mat',fields)\n\n# verify the save by re-loading\nscipy.io.loadmat(fn + 'wfdb_export.mat')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
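In the ECG notebook above, the time axis is built as sample index times `1/fields['fs']`; in WFDB headers, 'fs' is indeed the sampling frequency in Hz. A stdlib-only sketch of the same arithmetic — the 1000 Hz value is an assumed example for illustration, not read from the test file:

```python
fs = 1000.0        # assumed sampling frequency in Hz (WFDB 'fs' field)
n_samples = 5000   # e.g. the sig[:5000] slice used in the notebook

# sample i occurs at i / fs seconds, so the axis runs 0 .. (n - 1) / fs
time_axis = [i / fs for i in range(n_samples)]
duration_s = n_samples / fs
```

At 1000 Hz, 5000 samples span exactly 5 seconds, which is why the notebook's `sig[:5000]` slice yields a 5-second plot if the record was sampled at that rate.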
dwhswenson/openpathsampling
examples/toy_model_mstis/toy_mstis_2_run.ipynb
mit
[ "Running an MSTIS simulation\nNow we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.\nTasks covered in this notebook:\n* Loading OPS objects from storage\n* Creating initial conditions for a path sampling simulation\n* Setting up and running path sampling simulations\n* Visualizing trajectories while the path sampling is running", "from __future__ import print_function\n%matplotlib inline\nimport openpathsampling as paths\nimport numpy as np", "Setting up the simulation\nLoading from storage\nFirst we'll reload some of the stuff we stored before. Of course, this starts with opening the file.", "old_store = paths.Storage(\"mstis_bootstrap.nc\", mode='r')", "A lot of information can be recovered from the old storage, and so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.", "print(\"PathMovers: \"+ str(len(old_store.pathmovers)))\nprint(\"Samples: \" + str(len(old_store.samples)))\nprint(\"Ensembles: \" + str(len(old_store.ensembles)))\nprint(\"SampleSets: \" + str(len(old_store.samplesets)))\nprint(\"Snapshots: \" + str(len(old_store.snapshots)))\nprint(\"Networks: \" + str(len(old_store.networks)))", "Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create.", "template = old_store.snapshots[0]", "Named objects can be found in storage by using their name as a dictionary key. 
This allows us to load our old engine, collective variables, and states.", "engine = old_store.engines['toy_engine']\n\nopA = old_store.cvs['opA']\nopB = old_store.cvs['opB']\nopC = old_store.cvs['opC']\n\nstateA = old_store.volumes['A']\nstateB = old_store.volumes['B']\nstateC = old_store.volumes['C']", "Creating new interface set, network, and move scheme", "# we could also load the interfaces, but it is just as easy:\ninterfacesA = paths.VolumeInterfaceSet(opA, 0.0,[0.2, 0.3, 0.4])\ninterfacesB = paths.VolumeInterfaceSet(opB, 0.0,[0.2, 0.3, 0.4])\ninterfacesC = paths.VolumeInterfaceSet(opC, 0.0,[0.2, 0.3, 0.4])", "Once again, we have everything we need to build the MSTIS network. Recall that this will create all the ensembles we need for the simulation. However, even though the ensembles are semantically the same, these are not the same objects. We'll need to deal with that later.", "ms_outers = paths.MSOuterTISInterface.from_lambdas(\n {ifaces: 0.5 \n for ifaces in [interfacesA, interfacesB, interfacesC]}\n)\nmstis = paths.MSTISNetwork(\n [(stateA, interfacesA),\n (stateB, interfacesB),\n (stateC, interfacesC)],\n ms_outers=ms_outers\n).named('mstis')", "Finally, we'll create the move scheme. For this, we'll use the default TIS move scheme:", "scheme = paths.DefaultScheme(mstis, engine=engine).named(\"scheme\")", "Preparing initial conditions\nThe OPS object called Sample is used to associate a trajectory with a replica ID and an ensemble. The trajectory needs to be associated with an ensemble so we know how to get correct statistics from the many ensembles that we might be sampling simultaneously. The trajectory needs to be associated with a replica ID so that replica exchange approaches can be analyzed.\nBecause we've created a new network (and therefore new ensembles), we need to associate the previous trajectories with the new ensembles. 
The main tool to map trajectories to ensembles is the initial_conditions_from_trajectories method of a move scheme.", "# load the sampleset we have saved before; there is only one in the file\nold_sampleset = old_store.samplesets[0]\nfrom_old_sset = scheme.initial_conditions_from_trajectories(old_sampleset)", "We are missing trajectories that satisfy the minus ensembles. In real simulations, you usually will have run trajectories in each state (you'll use those to create state/interface definitions). You can (and should) feed those trajectories to the initial_conditions_from_trajectories, instead of the complicated code below. That function is smart enough to select a section of the trajectory that satisfies the minus ensemble.\nHowever, in this case, we'll need to run dynamics to create such a trajectory. And because this notebook is also used in our tests, the code is a little more complicated to prevent test failures in unusual circumstances (like having a spontaneous transition in the innermost ensemble).", "# CODE IN THIS CELL IS NEEDED FOR TEST SUITE, NOT NORMAL USE\n\ndef shoot_until_A_to_A(initial_ensemble, desired_ensemble, sample, engine):\n # we only shoot forward because we know the final frame is the problem\n mover = paths.ForwardShootMover(ensemble=initial_ensemble,\n selector=paths.UniformSelector(),\n engine=engine)\n while not desired_ensemble(sample):\n change = mover.move_core([sample])\n if desired_ensemble(change.trials[0]):\n sample = change.trials[0]\n \n return sample\n \nminus_samples = []\nfor minus_ensemble in mstis.special_ensembles['minus']:\n # tis_ensemble allows A->B; desired_ensemble only allows A->A \n initial_state = minus_ensemble.state_vol\n tis_ensemble = mstis.from_state[initial_state].ensembles[0]\n desired_ensemble = paths.TISEnsemble(initial_state, initial_state,\n tis_ensemble.interface)\n initial_sample = from_old_sset[tis_ensemble]\n # ensure we're A->A, not A->B\n sample_A_to_A = shoot_until_A_to_A(tis_ensemble, 
desired_ensemble,\n initial_sample, engine)\n\n # with an A->A segment, just use this to extend into the minus ensemble\n sample = minus_ensemble.extend_sample_from_trajectories(\n sample_A_to_A,\n engine=engine,\n replica=-len(minus_samples) - 1\n )\n minus_samples.append(sample)", "Now that we have the necessary trajectories, we create a new sample set using initial_conditions_from_trajectories. By adding the sample_set keyword, we retain any assignments that exist in the given sample set.\nNote that initial_conditions_from_trajectories is very flexible about its input. It allows you to provide a sample set, a list of samples, a list of trajectories, or a single trajectory. Above we used a sample set; here we'll use a list of samples.", "init_conds = scheme.initial_conditions_from_trajectories(\n minus_samples,\n sample_set=from_old_sset\n)", "Now we have a sample set with a trajectory for all the ensembles required to start the simulation. We can (and should) double-check that everything is okay with a few simple checks:", "# verify that every trajectory satisfies its ensemble\ninit_conds.sanity_check()\n\n# verify that these initial conditions are valid for this move scheme\nscheme.assert_initial_conditions(init_conds)", "Equilibration\nIn molecular dynamics, you need to equilibrate if you don't start with an equilibrated snapshot (e.g., if you start with your system at an energy minimum, you should equilibrate before you start taking statistics). Similarly, if you start with a set of paths which are far from the path ensemble equilibrium, you need to equilibrate. This could either be because your trajectories are not from the real dynamics (generated with metadynamics, high temperature, etc.) or because your trajectories are not representative of the path ensemble (e.g., if you put transition trajectories into all interfaces).\nAs with MD, running equilibration can be the same process as running the total simulation. 
However, in path sampling, it doesn't have to be exactly the same: we can equilibrate without replica exchange moves or path reversal moves, for example. See the appendix at the end of this notebook for an example of that.\nFor a real simulation, we would probably want to store the equilibration phase. Here, we don't bother to save it, so we use storage=None.", "equilibration = paths.PathSampling(\n storage=None,\n sample_set=init_conds,\n move_scheme=scheme\n)", "When using one-way shooting, as we are, part of the trajectory is reused after each shooting move. Therefore, an absolute minimum requirement for equilibration is that all frames from each initial trajectory have been replaced by other frames. We refer to such trajectories as \"decorrelated,\" and OPS has a convenience for running a move scheme until all trajectories are decorrelated.\nFor a real simulation, it would be much better to equilibrate beyond the point where all paths are decorrelated.", "# NBVAL_SKIP\n# don't run this during testing\nequilibration.run_until_decorrelated()\n\n# get the final sample set; normally we'd save to a file and reload\nequilibrated = equilibration.sample_set", "Production\nNow we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll create a storage file, and we'll save a template snapshot. Then we'll create a new PathSampling simulation object. Note that we're using the same move scheme here as we did in the equilibration stage.", "# logging creates ops_output.log file with details of what is going on \n#import logging.config\n#logging.config.fileConfig(\"../resources/logging.conf\", \n# disable_existing_loggers=False)\n\nstorage = paths.Storage(\"mstis.nc\", \"w\")\n\nstorage.save(template)\n\nmstis_calc = paths.PathSampling(\n storage=storage,\n sample_set=equilibrated,\n move_scheme=scheme\n)\nmstis_calc.save_frequency = 50", "The next block sets up a live visualization. 
This is optional, and only recommended if you're using OPS interactively (which would only be for very small systems). Some of the same tools can be used to play back the behavior after the fact if you want to see the behavior for more complicated systems. You can create a background (here we use the PES contours), and the visualization will plot the trajectories.", "# NBVAL_SKIP\n# skip this during testing, but leave it for demo purposes\n# we use the %run magic because this isn't in a package\n%run ../resources/toy_plot_helpers.py\nxval = paths.FunctionCV(\"xval\", lambda snap : snap.xyz[0][0])\nyval = paths.FunctionCV(\"yval\", lambda snap : snap.xyz[0][1])\nmstis_calc.live_visualizer = paths.StepVisualizer2D(mstis, xval, yval,\n [-1.0, 1.0], [-1.0, 1.0])\nbackground = ToyPlot()\nbackground.contour_range = np.arange(-1.5, 1.0, 0.1)\nbackground.add_pes(engine.pes)\nmstis_calc.live_visualizer.background = background.plot()\n# increase update frequency to speed things up, but it isn't as pretty\nmstis_calc.status_update_frequency = 1", "Now everything is ready: let's run the simulation! We'll start by running it for 100 MC steps.", "mstis_calc.run(100)", "In RETIS, there are several different move types (shooting, replica exchange, etc.), and each move type can have a different probability of being selected. Moreover, different move types may have different numbers of specific moves (ensembles affected) within them.\nThis means that if you wanted to run enough steps that, on average, each shooting mover ran 1000 times, you would need to figure out how many trials (of any type) that corresponds to. OPS has tools to make that easy. First, we select a mover (we'll take the first shooting mover as a representative of all shooting movers), and then we use scheme.n_steps_for_trials to get the expected number of steps to get that many trials of that mover. 
Of course, this is only an expected value; the exact number of trials in the simulation will vary.", "representative_mover = scheme.movers['shooting'][0]\nn_steps = int(scheme.n_steps_for_trials(representative_mover, 1000))\nprint(n_steps)", "Finally, let's run for a lot longer to get enough statistics. Note that this time, we'll run the simulation using run_until, which picks up from where we left off, and finishes when a total of n_steps trials have been performed.", "# NBVAL_SKIP\n# don't run all those steps in testing!\nmstis_calc.live_visualizer = None # turn off the live visualization\nmstis_calc.run_until(n_steps)\n\nstorage.close()", "Appendix: Alternate equilibration scheme\nThe equilibration above uses the same move scheme as the simulation. As remarked, this is not required: you can equilibrate using a different move scheme that samples the same ensembles. For example, the minus move is run much less frequently in the default TIS scheme, which means that it is a long time before the minus ensemble decorrelates. Additionally, moves like path reversal and replica exchange don't do anything to help the initial decorrelation.\nHere's an example of how to create a custom move scheme that uses a shooting move for all ensembles. Note that we don't usually use a shooting move on the minus ensemble (the minus move does the dynamics), but nonetheless, shooting is possible.", "from openpathsampling import strategies", "We want to create a move scheme that consists of a shooting mover for each ensemble that is used in the move scheme called scheme, which we created above. The easiest way to get that is the scheme's find_used_ensembles method. 
However, we could get equivalent information from the network object mstis, by looking at its sampling_ensembles (normal TIS), ms_outers (multiple state outer), and minus_ensembles.", "all_ensembles = scheme.find_used_ensembles()\n\nshooting_strat = strategies.OneWayShootingStrategy(\n selector=paths.UniformSelector(),\n ensembles=all_ensembles,\n engine=engine\n)\n\n# all custom strategies need a global-level \"OrganizeBy\" strategy\n# this is the standard one to use\nglobal_strat = strategies.OrganizeByMoveGroupStrategy()\n\nequil_scheme = paths.MoveScheme(mstis)\nequil_scheme.append([shooting_strat, global_strat])\n\ncustom_equil = paths.PathSampling(\n storage=None,\n move_scheme=equil_scheme,\n sample_set=init_conds\n)\n\n# NBVAL_SKIP\ncustom_equil.run_until_decorrelated()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
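The `run_until_decorrelated` criterion used in the notebook above — stop once no frames of the initial trajectory survive in the current one — can be sketched abstractly. This is a conceptual stand-in using integers as frame identities, not the actual OpenPathSampling implementation:

```python
def is_decorrelated(initial, current):
    # trajectories count as decorrelated once they share no frames;
    # integers stand in for frame identities here
    return not set(initial) & set(current)

initial = [1, 2, 3, 4]
after_one_shot = [1, 2, 7, 8]    # a forward shot replaced only the tail
after_many_shots = [10, 11, 12]  # every original frame has been replaced
```

Because one-way shooting keeps part of the old path on every accepted move, full decorrelation typically takes several accepted shots per ensemble.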
sarahmid/programming-bootcamp-v2
lab7_exercises.ipynb
mit
[ "Programming Bootcamp 2016\nLesson 7 Exercises\n\n!!! This is the last graded problem set! And it's due early: 11pm on 9/29 !!!\n\nWinners will be announced after the final lecture on 9/30, and mugs will be distributed then\nIf you earn a mug but can't attend, you can pick it up from me later. Send me an email to arrange a time.\n\n\n Earning points (optional) \n- Enter your name below.\n- Email your .ipynb file to me (sarahmid@mail.med.upenn.edu) before 11:00 pm on 9/29. \n- You do not need to complete all the problems to get points. \n- I will give partial credit for effort when possible.\n- At the end of the course, everyone who gets at least 90% of the total points will get a prize (bootcamp mug!).\nName: \n\n1. Creating your function file (1pt)\nUp until now, we've created all of our functions inside the Jupyter notebook. However, in order to use your functions across different scripts, it's best to put them into a separate file. Then you can load this file from anywhere with a single line of code and have access to all your custom functions!\nDo the following:\n- Open up your plain-text editor. \n- Copy and paste all of the functions you created in lab6, question 3, into a blank file and save it as my_utils.py (don't forget the .py extension). Save it anywhere on your computer.\n- Edit the code below so that it imports your my_utils.py functions. You will need to change '../utilities/my_utils.py' to where you saved your file. \nSome notes:\n- You need to supply the path relative to where this notebook is. See the slides for some examples on how to specify paths.\n- You can use this method to import your functions from anywhere on your computer! 
In contrast, the regular import function will only find custom functions that are in the same directory as the current notebook/script.", "import imp\nmy_utils = imp.load_source('my_utils', '../utilities/my_utils.py') #CHANGE THIS PATH\n\n# test that this worked\nprint \"Test my_utils.gc():\", my_utils.gc(\"ATGGGCCCAATGG\")\nprint \"Test my_utils.reverse_compl():\", my_utils.reverse_compl(\"GGGGTCGATGCAAATTCAAA\")\nprint \"Test my_utils.read_fasta():\", my_utils.read_fasta(\"horrible.fasta\")\nprint \"Test my_utils.rand_seq():\", my_utils.rand_seq(23)\nprint \"Test my_utils.shuffle_nt():\", my_utils.shuffle_nt(\"AAAAAAGTTTCCC\")\n\nprint \"\\nIf the above produced no errors, then you're good!\"", "Feel free to use these functions (and any others you've created) to solve the problems below. You can see in the test code above how they can be accessed.\n\n2. Command line arguments (6pts)\nNote: Do the following in a SCRIPT, not in the notebook. You can not use command line arguments within Jupyter notebooks.\nAfter testing your code as a script, copy and paste it here for grading purposes only.\n(A) Write a script that expects 4 arguments, and prints those four arguments to the screen. Test this script by running it (on the command line) as shown in the lecture. Copy and paste the code below once you have it working.\n(B) Write a script that expects 3 numerical arguments (\"a\", \"b\", and \"c\") from the command line.\n- Check that the correct number of arguments is supplied (based on the length of sys.argv)\n- If not, print an error message and exit\n- Otherwise, go on to add the three numbers together and print the result. \nCopy and paste your code below once you have it working.\nNote: All command line arguments are read in as strings (just like with raw_input). To use them as numbers, you must convert them with float().\n(C) Here you will create a script that generates a random dataset of sequences. 
\nYour script should expect the following command line arguments, in this order. Remember to convert strings to ints when needed:\n1. outFile - string; name of the output file the generated sequences will be printed to\n1. numSeqs - integer; number of sequences to create\n1. minLength - integer; minimum sequence length\n1. maxLength - integer; maximum sequence length\nThe script should read in these arguments and check if the correct number of arguments is supplied (exit if not). \nIf all looks good, then print the indicated number of randomly generated sequences as follows:\n- the length of each individual sequence should be randomly chosen to be between minLength and maxLength (so that not all sequences are the same length)\n- each sequence should be given a unique ID (e.g. using a counter to make names like seq1, seq2, ...)\n- the output should be in fasta format (>seqID\\nsequence\\n)\n- the output should be printed to the indicated file\nThen, run your script to create a file called fake.fasta containing 100,000 random sequences of random length 50-500 nt.\nCopy and paste your code below once you have it working.\n\n3. time practice (7pts)\nFor the following problems, use the file you created in the previous problem (fake.fasta) and the time.time() function. (Note: there is also a copy of fake.fasta on Piazza if you need it.)\nNote: Do not include the time it takes to read the file in your time calculation! Loading files can take a while.\n(A) Initial practice with timing. Add code to the following cell to time how long it takes to run. Print the result.", "\nsillyList = []\nfor i in range(50000):\n sillyList.append(sum(sillyList))\n ", "(B) Counting characters. Is it faster to use the built-in function str.count() or to loop through a string and count characters manually? Compare the two by counting all the A's in all the sequences in fake.fasta using each method and comparing how long they take to run. 
\n(You do not need to output the counts)", "# Method 1 (Manual counting)\n\n# Method 2 (.count())", "Which was faster? Your answer: \n(C) Replacing characters. Is it faster to use the built-in function str.replace() or to loop through a string and replace characters manually? Compare the two by replacing all the T's with U's in all the sequences in fake.fasta using each method, and comparing how long they take to run. \n(You do not need to output the edited sequences)", "# Method 1 (Manual replacement)\n\n# Method 2 (.replace())", "Which was faster? Your answer: \n(D) Lookup speed in data structures. Is it faster to get unique IDs using a list or a dictionary? Read in fake.fasta, ignoring everything but the header lines. Count the number of unique IDs (headers) using a list or dictionary, and compare how long each method takes to run. \nBe patient; this one might take a while to run!", "# Method 1 (list)\n\n# Method 2 (dictionary)", "Which was faster? Your answer: \nIf you're curious, below is a brief explanation of the outcomes you should have observed:\n\n(B) The built-in method should be much faster! Most built in functions are pretty well optimized, so they will often (but not always) be faster.\n(C) Again, the built in function should be quite a bit faster.\n(D) If you did this right, then the dictionary should be faster by several orders of magnitude. When you use a dictionary, Python jumps directly to where the requested key should be, if it were in the dictionary. This is very fast (it's an O(1) operation, for those who are familiar with the terminology). With lists, on the other hand, Python will scan through the whole list until it finds the requested element (or until it reaches the end). This gets slower and slower on average as you add more elements (it's an O(n) operation). Just something to keep in mind if you start working with very large datasets!\n\n\n4. 
os and glob practice (6pts)\nUse horrible.fasta as a test fasta file for the following.\n(A) Write code that prompts the user (using raw_input()) for two pieces of information: an input file name (assumed to be a fasta file) and an output folder name (does not need to already exist). Then do the following:\n- Check if the input file exists\n- If it doesn't, print an error message\n- Otherwise, go on to check if the output folder exists\n- If it doesn't, create it\n(B) Add to the code above so that it also does the following after creating the output folder:\n- Read in the fasta file (ONLY if it exists) \n- Print each individual sequence to a separate file in the specified output folder. \n- The files should be named &lt;SEQID&gt;.fasta, where &lt;SEQID&gt; is the name of the sequence (from the fasta header)\n(C) Now use glob to get a list of all files in the output folder from part (B) that have a .fasta extension. For each file, print just the file name (not the file path) to the screen." ]
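A minimal sketch of the random-sequence generator specified in problem 2 above. The function names and layout are my own; the argument order, integer conversion, per-sequence random length, `seq1, seq2, ...` IDs, and fasta format follow the spec. `main(sys.argv)` is shown but not auto-run:

```python
import random
import sys

BASES = "ACGT"

def generate_fasta(out_file, num_seqs, min_length, max_length):
    """Write num_seqs random DNA sequences in fasta format to out_file."""
    with open(out_file, "w") as out:
        for i in range(1, num_seqs + 1):
            # pick a random length per sequence, so not all are the same size
            length = random.randint(min_length, max_length)
            seq = "".join(random.choice(BASES) for _ in range(length))
            out.write(">seq%d\n%s\n" % (i, seq))

def main(argv):
    # exit if the wrong number of arguments was supplied
    if len(argv) != 5:
        sys.exit("Usage: script.py outFile numSeqs minLength maxLength")
    # remember to convert the numeric arguments from strings to ints
    generate_fasta(argv[1], int(argv[2]), int(argv[3]), int(argv[4]))

# from the command line this would be driven by: main(sys.argv)
# here, a small demonstration run (the assignment uses 100,000 sequences):
generate_fasta("fake.fasta", 100, 50, 500)
```

Each record occupies exactly two lines (`>seqID` then the sequence), so the output file has `2 * numSeqs` lines.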
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dianafprieto/SS_2017
.ipynb_checkpoints/06_NB_VTKPython_Scalar-checkpoint.ipynb
mit
[ "<img src=\"imgs/header.png\">\nVisualization techniques for scalar fields in VTK + Python\nRecap: The VTK pipeline\n<img src=\"imgs/vtk_pipeline.png\", align=left>\n$~$\nVTK look-up tables and transfer functions", "%gui qt\nimport vtk\nfrom vtkviewer import SimpleVtkViewer\n#help(vtk.vtkRectilinearGridReader())", "1. Data input (source)", "# do not forget to call \"Update()\" at the end of the reader\nrectGridReader = vtk.vtkRectilinearGridReader()\nrectGridReader.SetFileName(\"data/jet4_0.500.vtk\")\nrectGridReader.Update()", "2. Filters\n\nFilter 1: vtkRectilinearGridOutlineFilter() creates wireframe outline for a rectilinear grid.", "%qtconsole\n\nrectGridOutline = vtk.vtkRectilinearGridOutlineFilter()\nrectGridOutline.SetInputData(rectGridReader.GetOutput())", "3. Mappers\n\nMapper: vtkPolyDataMapper() maps vtkPolyData to graphics primitives.", "rectGridOutlineMapper = vtk.vtkPolyDataMapper()\nrectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())", "4. Actors", "outlineActor = vtk.vtkActor()\noutlineActor.SetMapper(rectGridOutlineMapper)\noutlineActor.GetProperty().SetColor(0, 0, 0)", "5. 
Renderers and Windows", "#Option 1: Default vtk render window\nrenderer = vtk.vtkRenderer()\nrenderer.SetBackground(0.5, 0.5, 0.5)\nrenderer.AddActor(outlineActor)\nrenderer.ResetCamera()\n\nrenderWindow = vtk.vtkRenderWindow()\nrenderWindow.AddRenderer(renderer)\nrenderWindow.SetSize(500, 500)\nrenderWindow.Render()\n\niren = vtk.vtkRenderWindowInteractor()\niren.SetRenderWindow(renderWindow)\niren.Start()\n\n#Option 2: Using the vtk-viewer for Jupyter to interactively modify the pipeline\nvtkSimpleWin = SimpleVtkViewer()\nvtkSimpleWin.resize(1000,800)\nvtkSimpleWin.hide_axes()\n\n# build a surface actor for the grid so it can be added alongside the outline\ngridGeomFilter = vtk.vtkRectilinearGridGeometryFilter()\ngridGeomFilter.SetInputConnection(rectGridReader.GetOutputPort())\ngridGeomMapper = vtk.vtkPolyDataMapper()\ngridGeomMapper.SetInputConnection(gridGeomFilter.GetOutputPort())\ngridGeomActor = vtk.vtkActor()\ngridGeomActor.SetMapper(gridGeomMapper)\n\nvtkSimpleWin.add_actor(outlineActor)\nvtkSimpleWin.add_actor(gridGeomActor)\n\nvtkSimpleWin.ren.SetBackground(0.5, 0.5, 0.5)\nvtkSimpleWin.ren.ResetCamera()", "<font color='red'>Trick:</font> The autocomplete functionality in Jupyter is available by pressing the Tab button.\nUseful Resources\nhttp://www.vtk.org/Wiki/VTK/Examples/Python" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ModSimPy
examples/spiderman.ipynb
mit
[ "Spider-Man\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "In this case study we'll develop a model of Spider-Man swinging from a springy cable of webbing attached to the top of the Empire State Building. Initially, Spider-Man is at the top of a nearby building, as shown in this diagram.\n\nThe origin, O⃗, is at the base of the Empire State Building. The vector H⃗ represents the position where the webbing is attached to the building, relative to O⃗. The vector P⃗ is the position of Spider-Man relative to O⃗. 
And L⃗ is the vector from the attachment point to Spider-Man.\nBy following the arrows from O⃗, along H⃗, and along L⃗, we can see that \nH⃗ + L⃗ = P⃗\nSo we can compute L⃗ like this:\nL⃗ = P⃗ - H⃗\nThe goals of this case study are:\n\n\nImplement a model of this scenario to predict Spider-Man's trajectory.\n\n\nChoose the right time for Spider-Man to let go of the webbing in order to maximize the distance he travels before landing.\n\n\nChoose the best angle for Spider-Man to jump off the building, and let go of the webbing, to maximize range.\n\n\nI'll create a Params object to contain the quantities we'll need:\n\n\nAccording to the Spider-Man Wiki, Spider-Man weighs 76 kg.\n\n\nLet's assume his terminal velocity is 60 m/s.\n\n\nThe length of the web is 100 m.\n\n\nThe initial angle of the web is 45 degrees to the left of straight down.\n\n\nThe spring constant of the web is 40 N / m when the cord is stretched, and 0 when it's compressed.\n\n\nHere's a Params object with the parameters of the system.", "params = Params(height = 381, # m,\n g = 9.8, # m/s**2,\n mass = 75, # kg,\n area = 1, # m**2,\n rho = 1.2, # kg/m**3,\n v_term = 60, # m / s,\n length = 100, # m,\n angle = (270 - 45), # degree,\n k = 40, # N / m,\n t_0 = 0, # s,\n t_end = 30, # s\n )", "Compute the initial position", "def initial_condition(params):\n \"\"\"Compute the initial position and velocity.\n \n params: Params object\n \"\"\"\n H⃗ = Vector(0, params.height)\n theta = np.deg2rad(params.angle)\n x, y = pol2cart(theta, params.length)\n L⃗ = Vector(x, y)\n P⃗ = H⃗ + L⃗\n \n return State(x=P⃗.x, y=P⃗.y, vx=0, vy=0)\n\ninitial_condition(params)", "Now here's a version of make_system that takes a Params object as a parameter.\nmake_system uses the given value of v_term to compute the drag coefficient C_d.", "def make_system(params):\n \"\"\"Makes a System object for the given conditions.\n \n params: Params object\n \n returns: System object\n \"\"\"\n init = initial_condition(params)\n \n 
mass, g = params.mass, params.g\n rho, area, v_term = params.rho, params.area, params.v_term\n C_d = 2 * mass * g / (rho * area * v_term**2)\n \n return System(params, init=init, C_d=C_d)", "Let's make a System", "system = make_system(params)\n\nsystem.init", "Drag and spring forces\nHere's drag force, as we saw in Chapter 22.", "def drag_force(V⃗, system):\n \"\"\"Compute drag force.\n \n V⃗: velocity Vector\n system: `System` object\n \n returns: force Vector\n \"\"\"\n rho, C_d, area = system.rho, system.C_d, system.area\n \n mag = rho * vector_mag(V⃗)**2 * C_d * area / 2\n direction = -vector_hat(V⃗)\n f_drag = direction * mag\n return f_drag\n\nV⃗_test = Vector(10, 10)\n\ndrag_force(V⃗_test, system)", "And here's the 2-D version of spring force. We saw the 1-D version in Chapter 21.", "def spring_force(L⃗, system):\n \"\"\"Compute drag force.\n \n L⃗: Vector representing the webbing\n system: System object\n \n returns: force Vector\n \"\"\"\n extension = vector_mag(L⃗) - system.length\n if extension < 0:\n mag = 0\n else:\n mag = system.k * extension\n \n direction = -vector_hat(L⃗)\n f_spring = direction * mag\n return f_spring\n\nL⃗_test = Vector(0, -system.length-1)\n\nf_spring = spring_force(L⃗_test, system)\nf_spring", "Here's the slope function, including acceleration due to gravity, drag, and the spring force of the webbing.", "def slope_func(t, state, system):\n \"\"\"Computes derivatives of the state variables.\n \n state: State (x, y, x velocity, y velocity)\n t: time\n system: System object with g, rho, C_d, area, mass\n \n returns: sequence (vx, vy, ax, ay)\n \"\"\"\n x, y, vx, vy = state\n P⃗ = Vector(x, y)\n V⃗ = Vector(vx, vy)\n g, mass = system.g, system.mass\n \n H⃗ = Vector(0, system.height)\n L⃗ = P⃗ - H⃗\n \n a_grav = Vector(0, -g)\n a_spring = spring_force(L⃗, system) / mass\n a_drag = drag_force(V⃗, system) / mass\n \n A⃗ = a_grav + a_drag + a_spring\n \n return V⃗.x, V⃗.y, A⃗.x, A⃗.y", "As always, let's test the slope function with the 
initial conditions.", "slope_func(0, system.init, system)", "And then run the simulation.", "results, details = run_solve_ivp(system, slope_func)\ndetails.message", "Visualizing the results\nWe can extract the x and y components as Series objects.\nThe simplest way to visualize the results is to plot x and y as functions of time.", "def plot_position(results):\n results.x.plot(label='x')\n results.y.plot(label='y')\n\n decorate(xlabel='Time (s)',\n ylabel='Position (m)')\n \nplot_position(results)", "We can plot the velocities the same way.", "def plot_velocity(results):\n results.vx.plot(label='vx')\n results.vy.plot(label='vy')\n\n decorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n \nplot_velocity(results)", "Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.", "def plot_trajectory(results, label):\n x = results.x\n y = results.y\n make_series(x, y).plot(label=label)\n\n decorate(xlabel='x position (m)',\n ylabel='y position (m)')\n \nplot_trajectory(results, label='trajectory')", "Letting go\nNow let's find the optimal time for Spider-Man to let go. We have to run the simulation in two phases because the spring force changes abruptly when Spider-Man lets go, so we can't integrate through it.\nHere are the parameters for Phase 1, running for 9 seconds.", "params1 = params.set(t_end=9)\nsystem1 = make_system(params1)\nresults1, details1 = run_solve_ivp(system1, slope_func)\nplot_trajectory(results1, label='phase 1')", "The final conditions from Phase 1 are the initial conditions for Phase 2.", "t_0 = results1.index[-1]\nt_0\n\ninit = results1.iloc[-1]\ninit\n\nt_end = t_0 + 10", "Here is the System for Phase 2. 
We can turn off the spring force by setting k=0, so we don't have to write a new slope function.", "system2 = system1.set(init=init, t_0=t_0, t_end=t_end, k=0)", "Here's an event function that stops the simulation when Spider-Man reaches the ground.", "def event_func(t, state, system):\n \"\"\"Stops when y=0.\n \n state: State object\n t: time\n system: System object\n \n returns: height\n \"\"\"\n x, y, vx, vy = state\n return y", "Run Phase 2.", "results2, details2 = run_solve_ivp(system2, slope_func, \n events=event_func)\ndetails2.message", "Plot the results.", "plot_trajectory(results1, label='phase 1')\nplot_trajectory(results2, label='phase 2')", "Now we can gather all that into a function that takes t_release and V_0, runs both phases, and returns the results.", "def run_two_phase(t_release, params):\n \"\"\"Run both phases.\n \n t_release: time when Spider-Man lets go of the webbing\n \"\"\"\n params1 = params.set(t_end=t_release)\n system1 = make_system(params1)\n results1, details1 = run_solve_ivp(system1, slope_func)\n\n t_0 = results1.index[-1]\n t_end = t_0 + 10\n init = results1.iloc[-1]\n\n system2 = system1.set(init=init, t_0=t_0, t_end=t_end, k=0)\n results2, details2 = run_solve_ivp(system2, slope_func, \n events=event_func)\n\n return results1.append(results2)", "And here's a test run.", "t_release = 9 \nresults = run_two_phase(t_release, params)\nplot_trajectory(results, 'trajectory')\n\nx_final = results.iloc[-1].x\nx_final", "Animation\nHere's a draw function we can use to animate the results.", "from matplotlib.pyplot import plot\n\nxlim = results.x.min(), results.x.max()\nylim = results.y.min(), results.y.max()\n\ndef draw_func(t, state):\n plot(state.x, state.y, 'bo')\n decorate(xlabel='x position (m)',\n ylabel='y position (m)',\n xlim=xlim,\n ylim=ylim)\n\n# animate(results, draw_func)", "Maximizing range\nTo find the best value of t_release, we need a function that takes possible values, runs the simulation, and returns the range.", 
"def range_func(t_release, params):\n \"\"\"Compute the final value of x.\n \n t_release: time to release web\n params: Params object\n \"\"\"\n results = run_two_phase(t_release, params)\n x_final = results.iloc[-1].x\n print(t_release, x_final)\n return x_final", "We can test it.", "range_func(9, params)", "And run it for a few values.", "for t_release in linrange(3, 15, 3):\n range_func(t_release, params)", "Now we can use maximize_scalar to find the optimum.", "bounds = [6, 12]\nres = maximize_scalar(range_func, params, bounds=bounds)", "Finally, we can run the simulation with the optimal value.", "best_time = res.x\nresults = run_two_phase(best_time, params)\nplot_trajectory(results, label='trajectory')\n\nx_final = results.iloc[-1].x\nx_final" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
python/pandas_dataframe_examples.ipynb
mit
[ "Title: Simple Example Dataframes In Pandas\nSlug: pandas_dataframe_examples\nSummary: Simple Example Dataframes In Pandas\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nimport modules", "import pandas as pd", "Create dataframe", "raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], \n 'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'], \n 'age': [42, 52, 36, 24, 73], \n 'preTestScore': [4, 24, 31, 2, 3],\n 'postTestScore': [25, 94, 57, 62, 70]}\ndf = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])\ndf", "Create 2nd dataframe", "raw_data_2 = {'first_name': ['Sarah', 'Gueniva', 'Know', 'Sara', 'Cat'], \n 'last_name': ['Mornig', 'Jaker', 'Alom', 'Ormon', 'Koozer'], \n 'age': [53, 26, 72, 73, 24], \n 'preTestScore': [13, 52, 72, 26, 26],\n 'postTestScore': [82, 52, 56, 234, 254]}\ndf_2 = pd.DataFrame(raw_data_2, columns = ['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])\ndf_2", "Create 3rd dataframe", "raw_data_3 = {'first_name': ['Sarah', 'Gueniva', 'Know', 'Sara', 'Cat'], \n 'last_name': ['Mornig', 'Jaker', 'Alom', 'Ormon', 'Koozer'],\n 'postTestScore_2': [82, 52, 56, 234, 254]}\ndf_3 = pd.DataFrame(raw_data_3, columns = ['first_name', 'last_name', 'postTestScore_2'])\ndf_3" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quoniammm/mine-tensorflow-examples
fastAI/deeplearning1/nbs/statefarm.ipynb
mit
[ "Enter State Farm", "from theano.sandbox import cuda\ncuda.use('gpu0')\n\n%matplotlib inline\nfrom __future__ import print_function, division\npath = \"data/state/\"\n#path = \"data/state/sample/\"\nimport utils; reload(utils)\nfrom utils import *\nfrom IPython.display import FileLink\n\nbatch_size=64", "Setup batches", "batches = get_batches(path+'train', batch_size=batch_size)\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)", "Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)", "trn = get_data(path+'train')\nval = get_data(path+'valid')\n\nsave_array(path+'results/val.dat', val)\nsave_array(path+'results/trn.dat', trn)\n\nval = load_array(path+'results/val.dat')\ntrn = load_array(path+'results/trn.dat')", "Re-run sample experiments on full dataset\nWe should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. 
So let's see how they go - the models in this section are exact copies of the sample notebook models.\nSingle conv layer", "def conv1(batches):\n model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n\n model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])\n model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n model.optimizer.lr = 0.001\n model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n return model\n\nmodel = conv1(batches)", "Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.\nData augmentation", "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)\n\nmodel.optimizer.lr = 0.0001\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. 
With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.\nFour conv/pooling pairs + dropout\nUnfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.", "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(128,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(10, activation='softmax')\n ])\n\nmodel.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr=0.001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr=0.00001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "This is looking quite a bit better - the accuracy is similar, but the stability is higher. 
There's still some way to go however...\nImagenet conv features\nSince we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)", "vgg = Vgg16()\nmodel=vgg.model\nlast_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]\nconv_layers = model.layers[:last_conv_idx+1]\n\nconv_model = Sequential(conv_layers)\n\n# batches shuffle must be set to False when pre-computing features\nbatches = get_batches(path+'train', batch_size=batch_size, shuffle=False)\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)\n\nconv_feat = conv_model.predict_generator(batches, batches.nb_sample)\nconv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)\nconv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)\n\nsave_array(path+'results/conv_val_feat.dat', conv_val_feat)\nsave_array(path+'results/conv_test_feat.dat', conv_test_feat)\nsave_array(path+'results/conv_feat.dat', conv_feat)\n\nconv_feat = load_array(path+'results/conv_feat.dat')\nconv_val_feat = load_array(path+'results/conv_val_feat.dat')\nconv_val_feat.shape", "Batchnorm dense layers on pretrained conv layers\nSince we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. 
Let's try using a simplified version of VGG's dense layers.", "def get_bn_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]\n\np=0.8\n\nbn_model = Sequential(get_bn_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])\n\nbn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.01\n\nbn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.save_weights(path+'models/conv8.h5')", "Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.\nPre-computed data augmentation + dropout\nWe'll use our usual data augmentation parameters:", "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nda_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)", "We use those to create a dataset of convolutional features 5x bigger than the training set.", "da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)\n\nsave_array(path+'results/da_conv_feat2.dat', da_conv_feat)\n\nda_conv_feat = load_array(path+'results/da_conv_feat2.dat')", "Let's include the real training data as well in its non-augmented form.", "da_conv_feat = np.concatenate([da_conv_feat, conv_feat])", "Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.", "da_trn_labels = np.concatenate([trn_labels]*6)", "Based on some experiments the previous model works well, with bigger dense layers.", "def 
get_bn_da_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]\n\np=0.8\n\nbn_model = Sequential(get_bn_da_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])", "Now we can train the model as usual, with pre-computed augmented data.", "bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.01\n\nbn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.0001\n\nbn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "Looks good - let's save those weights.", "bn_model.save_weights(path+'models/da_conv8_1.h5')", "Pseudo labeling\nWe're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. 
At a later date we'll try using the test set.\nTo do this, we simply calculate the predictions of our model...", "val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)", "...concatenate them with our training labels...", "comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])\n\ncomb_feat = np.concatenate([da_conv_feat, conv_val_feat])", "...and fine-tune our model using that data.", "bn_model.load_weights(path+'models/da_conv8_1.h5')\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.00001\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.", "bn_model.save_weights(path+'models/bn-ps8.h5')", "Submit\nWe'll find a good clipping amount using the validation set, prior to submitting.", "def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)\n\nkeras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()\n\nconv_test_feat = load_array(path+'results/conv_test_feat.dat')\n\npreds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)\n\nsubm = do_clip(preds,0.93)\n\nsubm_name = path+'results/subm.gz'\n\nclasses = sorted(batches.class_indices, key=batches.class_indices.get)\n\nsubmission = pd.DataFrame(subm, columns=classes)\nsubmission.insert(0, 'img', [a[4:] for a in test_filenames])\nsubmission.head()\n\nsubmission.to_csv(subm_name, index=False, compression='gzip')\n\nFileLink(subm_name)", "This gets 0.534 on the leaderboard.\nThe \"things that didn't really work\" section\nYou can safely ignore everything from here on, because they didn't really help.\nFinetune some conv layers too", "for l in 
get_bn_layers(p): conv_model.add(l)\n\nfor l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):\n l2.set_weights(l1.get_weights())\n\nfor l in conv_model.layers: l.trainable =False\n\nfor l in conv_model.layers[last_conv_idx+1:]: l.trainable =True\n\ncomb = np.concatenate([trn, val])\n\ngen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, \n shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)\n\nbatches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)\n\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\n\nconv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])\n\nconv_model.fit_generator(batches, batches.N, nb_epoch=1, validation_data=val_batches, \n nb_val_samples=val_batches.N)\n\nconv_model.optimizer.lr = 0.0001\n\nconv_model.fit_generator(batches, batches.N, nb_epoch=3, validation_data=val_batches, \n nb_val_samples=val_batches.N)\n\nfor l in conv_model.layers[16:]: l.trainable =True\n\nconv_model.optimizer.lr = 0.00001\n\nconv_model.fit_generator(batches, batches.N, nb_epoch=8, validation_data=val_batches, \n nb_val_samples=val_batches.N)\n\nconv_model.save_weights(path+'models/conv8_ps.h5')\n\nconv_model.load_weights(path+'models/conv8_da.h5')\n\nval_pseudo = conv_model.predict(val, batch_size=batch_size*2)\n\nsave_array(path+'models/pseudo8_da.dat', val_pseudo)", "Ensembling", "drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')\ndrivers_ds.head()\n\nimg2driver = drivers_ds.set_index('img')['subject'].to_dict()\n\ndriver2imgs = {k: g[\"img\"].tolist() \n for k,g in drivers_ds[['subject', 'img']].groupby(\"subject\")}\n\ndef get_idx(driver_list):\n return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]\n\ndrivers = driver2imgs.keys()\n\nrnd_drivers = np.random.permutation(drivers)\n\nds1 = rnd_drivers[:len(rnd_drivers)//2]\nds2 = rnd_drivers[len(rnd_drivers)//2:]\n\nmodels=[fit_conv([d]) for d in 
drivers]\nmodels=[m for m in models if m is not None]\n\nall_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])\navg_preds = all_preds.mean(axis=0)\navg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)\n\nkeras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()\n\nkeras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()" ]
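The `do_clip` trick used before submitting (above) can be illustrated with plain NumPy: for 10 classes, clipping into [(1 − mx)/9, mx] keeps each row summing to about 1 and bounds the log-loss penalty for a confidently wrong prediction.

```python
import numpy as np

def do_clip(arr, mx):
    # same helper as in the notebook: floor of (1 - mx)/9 for 10 classes
    return np.clip(arr, (1 - mx) / 9, mx)

# a maximally confident 10-class prediction
preds = np.zeros((1, 10))
preds[0, 0] = 1.0

clipped = do_clip(preds, 0.93)
print(clipped[0, 0])   # 0.93
print(clipped[0, 1])   # about 0.00778
print(clipped.sum())   # about 1.0, so it is still roughly a distribution

# if the true class were index 1, the log-loss is bounded instead of infinite
print(-np.log(clipped[0, 1]))
```

Without the clip, a zero probability on the true class would make `log(0)` blow up the competition metric, which is why tuning the clip value on the validation set helps the leaderboard score.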
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session11/Day3/PSFphotometry.ipynb
mit
[ "PSF Photometry\nVersion 0.1\nWe're going to try to piece together the different elements of a PSF photometry pipeline from scratch. Getting that done in one notebook means we'll have to cut some corners, but the process should be illustrative.\nWe will start with an image that has already been processed by the LSST pipelines, so all the calibration steps like bias subtraction, flat fielding, background subtraction, etc (together often called \"instrumental signature removal\") have been performed, and the image is ready for measurement.\nPlease download the calibrated image. \n\nBy C Slater (University of Washington)", "import numpy as np\nfrom astropy.io import fits\nimport matplotlib.pyplot as plt\nimport astropy.convolution\n\nimport pandas as pd\n\nf = fits.open(\"calexp-0527247_10.fits\")\nimage = f[1].data", "0) Finding an example star\nI think a good way to work on a problem like this is to start with the core of the algorithm, working on just a single test case. After we have that working and tested, we can build out the infrastructure around it to run on the entire image.\nLet's display a small subset of the image, say 400x400 pixels. By default, imshow() will scale the colorbar to the minimum and maximum pixel values, so let's also set some more reasonable limits so we can see some stars.\nWe also need to use the extent= keyword argument to imshow() so that the labels on the X and Y axes correspond to the pixel coordinates that we've selected.\nYou can also open the images in ds9 if you like, for easier browsing.", "#Question\nplt.imshow(image[ # complete\n extent= # complete\n vmin=\n vmax= \n )", "Now let's select a smaller region around something that looks like a good, isolated star. Remember to update the extent so we know which pixels we're looking at.", "# Question\nplt.imshow(image[ # complete\n # complete\n )", "Ok, we need to cut down the image one more time, this time to give us a \"cutout\" image of a single star-like object. 
The cutout should only be about 20x20 pixels.", "# Question\ncutout = image[ # complete\nplt.imshow( #complete", "1) Centroiding\nNow that we have a test case to work on, let's find its position on the CCD.\nTo do that, we're going to need two arrays: one which has the same shape as cutout, but where each value is the X coordinate of the pixel, and another where each value is the Y coordinate of the pixel. Numpy has a function called meshgrid() that will give us this; we just need to supply an iterator for the X values, and an iterator for the Y values. It looks like this:", "xx, yy = np.meshgrid(range(2, 10), range(20, 30))\nprint(\"xx: \", xx)\nprint(\"yy: \", yy)", "Note how the values in a column are the same in xx, and all the values in a row are the same in yy.\nLet's make an xx and yy with the values corresponding to the pixel coordinates in your cutout image.", "# Question\nxx, yy = np.meshgrid( # complete", "Now we're ready to compute the centroid. Let's compute it first in x: we want the weighted mean of xx, with our cutout image as the weights. Remember to normalize by the sum of cutout values. The same formula will apply for y.", "# Question\nx_center = # complete\ny_center = # complete\n\nprint(x_center, y_center)", "Do the values you got make sense? Are they within the range of x and y coordinate values of the cutout? Does it roughly match where the star is? If not, are they possibly swapped, x-for-y and y-for-x? (It's very easy to get confused with the ordering of x and y indicies in Numpy, I make that mistake all the time).\nIf they make sense, try overplotting the coordinates on one of your larger cutout images.", "# Question\nplt.imshow(image[ # complete\n # complete\n )\nplt.axvline(x_center, color='r')\nplt.axhline(y_center, color='r')", "If your lines cross on your chosen star, great! 
You've completed the first step of doing photometry, centroiding the object.\nLet's take the code you prototyped in the notebook cells, and wrap it into a nice function we can use later. When we call this function, we need to tell it about the coordinates of the image we're providing, so we'll add the x_start and y_start parameters to convey that. We don't need to know the other two corners, because we can figure that out from the size of image_cutout.", "# Question\ndef centroid(image_cutout, x_start, y_start):\n x_size, y_size = image_cutout.shape\n xx, yy = # complete\n x_center = # complete\n y_center = # complete\n return (x_center, y_center)\n", "2) PSF Photometry\nWe needed the centroid first, because we're going to use that position to place our \"PSF\" model. Since we have not yet fit a real PSF model to the sources in the image, we'll use a Gaussian as an approximation.\nI'll give you the function for a normalized 2D Gaussian:", "def gaussian2D(radius, mu):\n return 1/(mu**2*2*np.pi)*np.exp(-0.5*((radius)/mu)**2)\n", "First just make an image of an example PSF, on the same grid as the cutout. \nNote that the Gaussian is parameterized in terms of a radius, which means you will need to compute that radius from the position of every pixel in your image. meshgrid is again the tool for this.\nYou can either use your centroid() function here, or for debugging it's fine to manually set x_center and y_center to specific values.", "xx, yy = np.meshgrid( # complete \nx_center, y_center = # complete\nradius = np.sqrt(( # complete\n + ( # complete\n )\n\npsf_size_pixels = 2.5\npsf_image = gaussian2D( # complete\nplt.imshow( # Complete ", "Just to be sure, we should check that the PSF image is normalized (approximately) by summing the pixel values.", "# Question\n# Complete", "Ok, now we can compute the actual PSF flux. 
Remember the formula from the lecture is:\n$$ f_{\\rm ML}(x, y) = \\frac{\\sum_i \\hat{f}_i p_i(x,y)}{\\sum_i p_i^2(x, y)}$$\nwhere $\\hat{f_i}$ are your image values, and $p_i$ are your PSF model values.", "# Question\npsf_flux = # complete\nprint(psf_flux)", "Double check that the PSF flux you get matches (approximately) the flux you get from aperture photometry. If your cutout image is small enough that there are no other sources in it, you can just sum the cutout itself. No need to apply a more restrictive aperture for a debugging check like this.", "# Question\naperture_flux = # complete\nprint(aperture_flux)", "If your psf_flux reasonably matches your aperture_flux, well done! You have a working PSF photometry measurement, now it just needs to get wrapped up in a convenient function for later use.", "# Question\n\n# We need to pass both the centroid x and y, and the image cutout start x,y because the star\n# isn't necessarily at the very center of the cutout.\n\ndef psf_flux_gaussian(image_cutout, centroid_x, centroid_y, radius, x_start, y_start):\n \n x_size, y_size = # complete\n xx, yy = # complete\n \n r = # complete\n psf_image = # complete\n psf_flux = # complete\n return psf_flux", "3) Object Detection\nNow that we have the core of the algorithm, we need to improve on our earlier step where we hand-picked a single source to measure. \nWe know from the talk on object detection that we need to convolve the image with the PSF to detect sources. Of course, we don't yet know what the PSF is, so we'll guess and use a Gaussian again.\nWith the convolved image, we now need to find \"peaks\". That is, we want to find pixels whose value is greater than all of their immediate neighbors. That's a relatively easy way to make sure we (mostly) only try to run photometry once on each star.\nWe are also applying a threshold; if a pixel value is below this threshold, we don't bother checking if it's a peak. 
That's useful \nto exclude faint background fluctuations that aren't statistically significant (below 5-sigma), or we might set the threshold higher if we want only bright stars for PSF determination.\nThe edges of the sensor often contain various artifacts, so you might want to exclude 5 to 10 pixels around each edge from the search.\nProgramming note: we're going to do a python loop over all the pixels in the image. This is a really slow way to do this, and you should try to avoid loops like this as much as possible in python. We're doing it this way only because 1) it's illustrative and 2) it takes less than a minute; acceptable for a notebook, but not how we process LSST.", "# Question\ndef find_peaks(image, threshold):\n # We are going to append the peaks we find to these two lists\n peak_x_values = [] \n peak_y_values = []\n\n for i in # complete \n for j in # complete\n pixel = image[i,j]\n\n # We want to skip over pixels that are below our threshold\n if(pixel # complete\n \n # We want to save pixel coordinates if the pixel is a \"peak\"\n if(pixel > # Complete\n and pixel > # complete\n and\n # complete\n ):\n # complete\n \n # Now that we're done appending to them, it will be easier if we turn the\n # lists into numpy arrays.\n return np.array(peak_x_values), np.array(peak_y_values)\n", "To use the peak-finder, we need to create a \"detection image\" by convolving the real image with the PSF. Of course, we don't know the PSF yet, so you can substitute a guess: try a Gaussian kernel, with a 2.5 pixel width.\nThe %%time \"magic\" will show us how long the convolution and peak-finding took.", "%%time\n\n# Question\n\nconvolved_image = astropy.convolution.convolve( # complete\n\npeak_x_values, peak_y_values = # complete", "Let's plot the positions of the peaks on the image, to make sure they look reasonable", "# Question\n\nplt.plot( # Complete", "A good debugging check is to look at a few cutouts centered on your newly-found detections. 
You can flip through a few of these by changing the value of n.", "# question\n\nn = 50\npeak_x = peak_x_values[n]\npeak_y = peak_y_values[n]\ncutout = image[ # complete\n \nplt.imshow(cutout)", "4) Photometry on all objects\nYou're almost finished, the only remaining task is to put together all the different pieces from above into one function that finds sources and measures their sizes and fluxes, and outputs a data table at the end.\nFor the moment, I will tell you that the Gaussian PSF size is 2 pixels. If you have more time, there's an \"extra credit\" problem at the end of the notebook that will show you how to measure the PSF size directly, which also lets you measure object sizes in general. But try to get the PSF photometry working first before going onto that.", "# Question\n\ndef run_photometry(image, threshold, psf_width):\n \n # Detect your sources\n \n \n # Setup any variables you need to store results.\n for # complete\n \n # Measure the centroid\n \n # Measure the flux\n \n # Measure the moments\n \n # Let's return a pandas DataFrame to make it easy to use the results\n return pd.DataFrame( # complete\n ", "With that function all filled in, let's run it on the image!", "%%time\n# Question\n\nphotometry_table = run_photometry( # complete\nprint(photometry_table)\n\nprint(photometry_table[:20])", "Did you get a table full of photometry? If so, great! If it's not working well, it's likely to be a problem with getting the right inputs to the different functions you're calling. You've tested all the steps separately, so they should be working. Getting the right indices on your image cutout is always a tricky part.\nIf you have extra time, try adding an aperture photometry function to the processing. You can plot the size (from the second moment) against flux to find what objects might be galaxies, and generate the cutout image to see if they're really galaxies. 
\nExtra Credit: Measuring the PSF\nOnce we have sources identified in an image, we want to identify which would be good for PSF determination, and then we want to measure their PSFs. In our case we're going to do both of these at once, we're going to measure sizes for all sources, and then use the mean size of those which we think are stars as our PSF model. In a more sophisticated pipeline, the object sizes might be used as a cut before passing to some more complicated PSF determination process.\nTo obtain object sizes, we're going to measure the \"second moment\".\nThis will look a lot like the centroid algorithm. The formula we want to implement is:\n$$I_{xx}^2 = \\frac{\\sum_i (\\hat{f_i} (x_i - x_{\\rm center}))^2}{\\sum_i \\hat{f_i}^2} $$\nLet's try building it directly in the function this time; if it gives you trouble, feel free to try it out in some notebook cells directly (so you can see the intermediate variables better) before putting it back in the function.", "# Question\ndef second_moment(image_cutout, centroid_x, centroid_y, start_x, start_y):\n x_size, y_size = # complete\n xx, yy = # complete\n x_width = # complete\n y_width = # complete\n return (x_width, y_width)\n", "Let's run the second moment estimator on one of the cutouts you made above.", "# Question\nsecond_moment(cutout, # complete", "Do the results look reasonable, compared to the image of the cutout you made above? Note that this is the Gaussian width, not the full-width at half-max that is typically quoted for PSF sizes.\nIf those look good, now we just need to run the second moment estimator over all the sources in your catalog. 
Our goal is to find if there's one particular size that fits lots of objects; that's likely to be our PSF size and the objects are likely to be stars.", "%%time\n# Question\n\n# We will put the x and y moments in these lists\nmoments_x = []\nmoments_y = []\n\nfor peak_x, peak_y in # complete\n image_cutout = image[ # complete\n start_x = int( # complete\n start_y = int( # complete\n \n centroid_x, centroid_y = # complete\n \n moment_x, moment_y = second_moment( # complete\n moments_x.append( # complete\n moments_y.append( # complete\n", "Because we have second moments in both X and Y directions, we should combine them into a single value as the square root of the sum of squares.", "# Question\nmoments_sq = # complete\n\nplt.hist( # complete\nplt.xlabel(\"Second Moment (pixels)\")", "If all went according to plan, you should have a nice histogram with a big peak at the PSF size. If it's not a big obvious peak, double check that the postage stamps that went into your second moment calculator are correct, and that the right centroid positions went into the calculator as well. 
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hmc-cs-nsuaysom/BigDataProject
Average_RM_YIELD_location.ipynb
mit
[ "# For cross-validation\nFRACTION_TRAINING = 0.5\n\n# For keeping output\ndetailed_output = {}\n\n# dictionary for keeping model\nmodel_dict = {}\n\nimport pandas as pd\nimport pickle\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn\nfrom matplotlib.colors import ListedColormap\nfrom sklearn import linear_model\n%matplotlib inline\nplt.style.use(\"ggplot\")\n\n\"\"\"\nData Preprocessing\n\nEXPERIMENT_DATA - Contain training data\nEVALUATION_DATA - Contain testing data \n\"\"\"\n\nEXPERIMENT_DATA = pickle.load(open('EXPERIMENT_SET_pandas.pkl', 'rb'))\nEVALUATION_SET = pickle.load(open('EVALUATION_SET_pandas.pkl', 'rb'))\n\n# Shuffle Data\nEXPERIMENT_DATA = sklearn.utils.shuffle(EXPERIMENT_DATA)\nEXPERIMENT_DATA = EXPERIMENT_DATA.reset_index(drop=True)\n\n\nTRAINING_DATA = EXPERIMENT_DATA[:int(FRACTION_TRAINING*len(EXPERIMENT_DATA))]\nTESTING_DATA = EXPERIMENT_DATA[int(FRACTION_TRAINING*len(EXPERIMENT_DATA)):]\n\n# Consider only graduated species\nTRAINING_DATA_GRAD = TRAINING_DATA[TRAINING_DATA[\"GRAD\"] == \"YES\"]\nTESTING_DATA_GRAD = TESTING_DATA[TESTING_DATA[\"GRAD\"] == \"YES\"]\n\nprint(\"Graduate Training Data size: {}\".format(len(TRAINING_DATA_GRAD)))\nprint(\"Graduate Testing Data size: {}\".format(len(TESTING_DATA_GRAD)))\n\ndef plotRegression(data):\n plt.figure(figsize=(8,8))\n \n ###########################\n # 1st plot: linear scale\n ###########################\n bagSold = np.asarray(data[\"BAGSOLD\"]).reshape(-1, 1).astype(np.float)\n rm = np.asarray(data[\"RM\"]).reshape(-1,1).astype(np.float)\n bias_term = np.ones_like(rm)\n x_axis = np.arange(rm.min(),rm.max(),0.01)\n \n # Linear Regression\n regr = linear_model.LinearRegression(fit_intercept=True)\n regr.fit(rm, bagSold)\n bagSold_prediction = regr.predict(rm)\n print(\"Coefficients = {}\".format(regr.coef_))\n print(\"Intercept = {}\".format(regr.intercept_))\n \n # Find MSE\n mse = sklearn.metrics.mean_squared_error(bagSold, bagSold_prediction)\n \n# plt.subplot(\"121\")\n 
plt.title('Linear Regression RM vs. Bagsold')\n true_value = plt.plot(rm, bagSold, 'ro', label='True Value')\n regression_line = plt.plot(x_axis, regr.intercept_[0] + regr.coef_[0][0]*x_axis, color=\"green\")\n plt.legend([\"true_value\", \"Regression Line\\nMSE = {:e}\".format(mse)])\n plt.xlabel(\"RM\")\n plt.ylabel(\"Bagsold\")\n plt.xlim(rm.min(),rm.max())\n detailed_output[\"MSE of linear regression on entire dataset (linear scale)\"] = mse\n plt.savefig(\"linear_reg_entire_dataset_linearscale.png\")\n plt.show()\n \n #######################\n # 2nd plot: log scale\n #######################\n plt.figure(figsize=(8,8))\n bagSold = np.log(bagSold)\n \n # Linear Regression \n regr = linear_model.LinearRegression()\n regr.fit(rm, bagSold)\n bagSold_prediction = regr.predict(rm)\n print(\"Coefficients = {}\".format(regr.coef_))\n print(\"Intercept = {}\".format(regr.intercept_))\n \n # Find MSE\n mse = sklearn.metrics.mean_squared_error(bagSold, bagSold_prediction)\n \n# plt.subplot(\"122\")\n plt.title('Linear Regression RM vs. 
log of Bagsold')\n true_value = plt.plot(rm,bagSold, 'ro', label='True Value')\n regression_line = plt.plot(x_axis, regr.intercept_[0] + regr.coef_[0][0]*x_axis, color=\"green\")\n plt.legend([\"true_value\", \"Regression Line\\nMSE = {:e}\".format(mse)])\n plt.xlabel(\"RM\")\n plt.ylabel(\"log Bagsold\")\n plt.xlim(rm.min(),rm.max())\n detailed_output[\"MSE of linear regression on entire dataset (log scale)\"] = mse\n# plt.savefig(\"linear_reg_entire_dataset_logscale.png\")\n plt.show()\n\n\n# Coefficients = [[-14956.36881671]]\n# Intercept = [ 794865.84758174]\nplotRegression(TRAINING_DATA_GRAD)", "Location-based algorithm", "location_map = list(set(TRAINING_DATA_GRAD[\"LOCATION\"]))\nlocation_map.sort()\n# print(location_map)\n\nlist_location = []\nlist_avg_rm = []\nlist_avg_yield = []\nfor val in location_map:\n avg_rm = np.average(TRAINING_DATA_GRAD[EXPERIMENT_DATA[\"LOCATION\"] == str(val)][\"RM\"])\n avg_yield = np.average(TRAINING_DATA_GRAD[EXPERIMENT_DATA[\"LOCATION\"] == str(val)][\"YIELD\"])\n list_location.append(str(val))\n list_avg_rm.append(avg_rm)\n list_avg_yield.append(avg_yield)\n # print(\"{} = {},{}\".format(val,avg_rm,avg_yield))\n\nplt.title(\"Average RM and YIELD for each location\")\nplt.plot(list_avg_rm, list_avg_yield, 'ro')\n\nfor i, txt in enumerate(list_location):\n if int(txt) <= 1000:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"blue\")\n elif int(txt) <= 2000:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"red\")\n elif int(txt) <= 3000:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"green\")\n elif int(txt) <= 4000:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"black\")\n elif int(txt) <= 5000:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"orange\")\n else:\n plt.annotate(txt, (list_avg_rm[i],list_avg_yield[i]), color=\"purple\")\nplt.show()\n", "Analysis\nFrom the preliminary analysis, we find that the number of different locations in the 
dataset is 140. The location in the dataset is encoded as a 4-digit number. We first expected that we could group the quality of the species based on the location parameters. We then plot the average of RM and YIELD for each location, which is shown below: \nLinear Regression on each group of location\nAccording to prior analysis, it appears that we can possibly categorize species based on location. The approach we decide to adopt is to use the first digit of the location number as a categorizer. The histogram in the previous section indicates that there are roughly 7 groups. Notice that the leftmost and rightmost columns seem to be outliers.", "# Calculate the number of possible locations\nlocation_set = set(TRAINING_DATA_GRAD[\"LOCATION\"])\nprint(\"The number of possible locations is {}.\".format(len(location_set)))\n\nlocation_histogram_list = []\nfor location in sorted(location_set):\n amount = len(TRAINING_DATA_GRAD[TRAINING_DATA_GRAD[\"LOCATION\"] == str(location)])\n for j in range(amount):\n location_histogram_list.append(int(location))\n# print(\"Location {} has {:>3} species\".format(location, amount))\n \nplt.title(\"Histogram of each location\")\nplt.xlabel(\"Location Number\")\nplt.ylabel(\"Amount\")\nplt.hist(location_histogram_list, bins=7, range=(0,7000))\nplt.savefig(\"location_histogram.png\")\nplt.show()\n\n# Convert location column to numeric\nTRAINING_DATA_GRAD[\"LOCATION\"] = TRAINING_DATA_GRAD[\"LOCATION\"].apply(pd.to_numeric)\n\n# Separate training dataset into 7 groups\ndataByLocation = []\nfor i in range(7):\n dataByLocation.append(TRAINING_DATA_GRAD[(TRAINING_DATA_GRAD[\"LOCATION\"] < ((i+1)*1000)) & (TRAINING_DATA_GRAD[\"LOCATION\"] >= (i*1000))])\n\nfor i in range(len(dataByLocation)):\n data = dataByLocation[i]\n bagSold = np.log(np.asarray(data[\"BAGSOLD\"]).reshape(-1,1).astype(np.float))\n rm = np.asarray(data[\"RM\"]).reshape(-1,1).astype(np.float)\n \n # Linear Regression\n regr = linear_model.LinearRegression()\n regr.fit(rm, 
bagSold)\n model_dict[i] = regr\n bagSold_prediction = regr.predict(rm)\n \n x_axis = np.arange(rm.min(), rm.max(), 0.01).reshape(-1,1)\n \n # Find MSE\n mse = sklearn.metrics.mean_squared_error(bagSold, bagSold_prediction)\n print(mse, np.sqrt(mse))\n detailed_output[\"number of data point on location {}xxx\".format(i)] = len(data)\n detailed_output[\"MSE on location {}xxx log scale\".format(i)] = mse\n \n plt.figure(figsize=(8,8))\n# plt.subplot(\"{}\".format(int(str(len(dataByLocation))+str(1)+str(i+1))))\n plt.title(\"Linear Regression RM vs. Log Bagsold on Location {}xxx\".format(i))\n \n true_value = plt.plot(rm,bagSold, 'ro', label='True Value')\n regression_line = plt.plot(x_axis, regr.predict(x_axis), color=\"green\")\n plt.legend([\"true_value\", \"Regression Line\\nMSE = {:e}\".format(mse)])\n# plt.show()\n plt.xlim(rm.min(),rm.max())\n plt.savefig(\"location{}.png\".format(i))\n", "Test with validation set", "# Test with validation set\nTESTING_DATA_GRAD = TESTING_DATA_GRAD.reset_index(drop=True)\n\nXtest = np.column_stack((TESTING_DATA_GRAD[\"LOCATION\"], \n TESTING_DATA_GRAD[\"RM\"], \n TESTING_DATA_GRAD[\"YIELD\"]))\nytest = TESTING_DATA_GRAD[\"BAGSOLD\"].astype(np.float)\nlog_ytest = np.log(ytest)\n\nypredicted = []\nfor row in Xtest:\n location = row[0]\n rm_val = row[1]\n yield_val = row[2]\n \n model = model_dict[int(location[0])]\n prediction = model.predict(rm_val)[0][0]\n ypredicted.append(prediction)\n \nypredicted = np.array(ypredicted)\n\n# MSE error\nsklearn.metrics.mean_squared_error(log_ytest, ypredicted)", "Testing Ridge Reg vs. Linear Reg\nBelow is not used. 
It's for testing the difference between Ridge Regression and Linear Regression.\nThe result is that the MSE is almost the same.", "bagSold = np.log(np.asarray(TRAINING_DATA_GRAD[\"BAGSOLD\"]).reshape(-1, 1).astype(np.float))\nrm = np.asarray(TRAINING_DATA_GRAD[\"RM\"]).reshape(-1,1).astype(np.float)\nyield_val = np.asarray(TRAINING_DATA_GRAD[\"YIELD\"]).reshape(-1,1).astype(np.float)\n\nx = np.column_stack((rm, yield_val))\n \n# Linear Regression\nregr = linear_model.LinearRegression(fit_intercept=True)\nregr.fit(x, bagSold)\nbagSold_prediction = regr.predict(x)\nprint(\"Coefficients = {}\".format(regr.coef_))\nprint(\"Intercept = {}\".format(regr.intercept_))\n \n# Find MSE\nmse = sklearn.metrics.mean_squared_error(bagSold, bagSold_prediction)\nprint(\"MSE = {}\".format(mse))\n\nbagSold = np.log(np.asarray(TRAINING_DATA_GRAD[\"BAGSOLD\"]).reshape(-1, 1).astype(np.float))\nrm = np.asarray(TRAINING_DATA_GRAD[\"RM\"]).reshape(-1,1).astype(np.float)\nyield_val = np.asarray(TRAINING_DATA_GRAD[\"YIELD\"]).reshape(-1,1).astype(np.float)\n\nx = np.column_stack((rm, yield_val))\n \n# Ridge Regression\nregr = linear_model.Ridge(alpha=20000)\nregr.fit(x, bagSold)\nbagSold_prediction = regr.predict(x)\nprint(\"Coefficients = {}\".format(regr.coef_))\nprint(\"Intercept = {}\".format(regr.intercept_))\n \n# Find MSE\nmse = sklearn.metrics.mean_squared_error(bagSold, bagSold_prediction)\nprint(\"MSE = {}\".format(mse))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Murali-group/PathLinker-Cytoscape
cytoscape-automation-example/lovastatin_analysis.ipynb
gpl-3.0
[ "Automation of the PathLinker App paper Lovastatin Analysis\n<img src=\"http://apps.cytoscape.org/media/pathlinker/logo.png.png\" alt=\"PathLinker Logo\">\nRequirements\n\nJava 8\nCytoscape 3.6.0+ (this notebook was run using Cytoscape 3.6)\ncyREST 3.6.0+\nPathLinker App 1.4+\npy2cytoscape 0.4.2+\n\nThis notebook is based on the following scripts:\n\nNew_wrapper_api_sample\nbasic1\n\nTODO Update this notebook with better descriptions and more py2cytoscape usage", "# necessary libraries and dependencies\nimport sys\nfrom py2cytoscape.data.cyrest_client import CyRestClient\nfrom py2cytoscape.data.style import StyleUtil\n\nimport networkx as nx\nimport pandas as pd\nimport json\nimport requests\n\nprint(\"python version: \" + sys.version)\n# The py2cytoscape module doesn't have a version. I installed it 2018-04-13\n#print(\"py2cytoscape version: \" + py2cytoscape.__version__)\nprint(\"networkx version: \" + nx.__version__)\nprint(\"pandas version: \" + pd.__version__)\nprint(\"requests version: \" + requests.__version__)\n\n# !!!!!!!!!!!!!!!!! 
Step 0: Start Cytoscape 3.6 with cyREST App !!!!!!!!!!!!!!!!!!!!!!!!!!\n# Cytoscape must be running to use the automation features\n\n# Step 1: create an instance of cyRest client\ncy = CyRestClient()\n\n# Reset the session\n#cy.session.delete()", "Create network using networkx\nThe PathLinker app paper uses the same interactome as the original PathLinker paper (available on the PathLinker supplementary website here: background-interactome-pathlinker-2015.txt).", "# Step 2: Import/Create the network that PathLinker will run on\nnetwork_file = 'background-interactome-pathlinker-2015.txt'\n\n# create a new network by importing the data from a sample using pandas\ndf = pd.read_csv(network_file, header=None, sep='\\t', lineterminator='\\n',\n names=[\"source\", 'target', 'weight', 'evidence'])\n\n# and create the networkx Graph from the pandas dataframe\n# this is a directed network, so I'm using the networkx DiGraph instead of the default undirected Graph\nG = nx.from_pandas_edgelist(df, \"source\", \"target\", edge_attr=['weight'], create_using=nx.DiGraph())\n \n# create the CyNetwork object from the networkx graph in Cytoscape\ncy_network = cy.network.create_from_networkx(G, name = 'background-interactome-pathlinker-2015', collection = 'F1000 PathLinker Lovastatin Use Case')\n\n# obtain the CyNetwork object SUID\ncy_network_suid = cy_network.get_id()\n\n# TODO the force-directed layout isn't working for some reason. All nodes are still left in the same position. 
\n# commenting out for now\n# # give the network some style and a layout\n# my_style = cy.style.create('default')\n\n# # copied from here: https://github.com/cytoscape/cytoscape-automation/blob/master/for-scripters/Python/basic-fundamentals.ipynb\n# basic_settings = { \n# 'NODE_FILL_COLOR': '#6AACB8',\n# 'NODE_SIZE': 55,\n# 'NODE_BORDER_WIDTH': 0,\n# 'NODE_LABEL_COLOR': '#555555',\n \n# 'EDGE_WIDTH': 2,\n# 'EDGE_TRANSPARENCY': 100,\n# 'EDGE_STROKE_UNSELECTED_PAINT': '#333333',\n \n# 'NETWORK_BACKGROUND_PAINT': '#FFFFEA'\n# }\n\n# my_style.update_defaults(basic_settings)\n\n# # Create some mappings\n# my_style.create_passthrough_mapping(column='name', vp='NODE_LABEL', col_type='String')\n\n# cy.layout.apply(name=\"force-directed\", network=cy_network)\n# cy.style.apply(my_style, cy_network)\n\n# TODO create a better view\n# for some reason, the force-directed layout isn't working.\n# for now, just delete the default uninformative layout\nheaders = {'Content-Type': 'application/json', 'Accept': 'application/json'}\n# delete the created view\nurl = \"http://localhost:1234/v1/networks/%s/views\" % (cy_network_suid)\nrequests.request(\"DELETE\", url, headers=headers)\n\n# # create a new one\n# url = \"http://localhost:1234/v1/networks/%s/views\" % (cy_network_suid)\n# requests.request(\"POST\", url, headers=headers)\n\n# # apply the default style\n# url = \"http://localhost:1234/v1/apply/styles/default/%s\" % (cy_network_suid)\n# requests.request(\"GET\", url, headers=headers)\n\n# url = \"http://localhost:1234/v1/apply/fit//%s\" % (cy_network_suid)\n# requests.request(\"GET\", url, headers=headers)\n\n# # apply the force directed layout\n# url = \"http://localhost:1234/v1/apply/layouts/force-directed/%s\" % (cy_network_suid)\n# requests.request(\"GET\", url, headers=headers)", "The network shown below will be generated in Cytoscape with the above code.\nLooking at the Edge Table in the Table Panel, the network consists of <b>'source' column, 'target' column, and 
'weight' column</b>. The 'weight' column will be used for the <b>'edgeWeightColumnName'</b> input for running the function. \n\nRun PathLinker using the API function\nThe function takes user sources, targets, and a set of parameters, and computes the k shortest paths. The function returns the paths in JSON format. Based on the user input, the function could generate a subnetwork (and view) containing those paths, and returns the computed paths and subnetwork/view SUIDs.\nAdditional descriptions of the parameters are available in the PathLinker app documentation.\nThe sources, targets and parameters used below are the same parameters used to run PathLinker in the paper \"The PathLinker app: Connect the dots in protein interaction networks\".\nTODO Include code to download and parse the ToxCast data, then use it to find the cellular receptors and transcription factors (TFs) perturbed by Lovastatin.", "# Step 3: Construct input data to pass to PathLinker API function\n\n# construct PathLinker input data parameters for API request\nparams = {}\n\n# the node names for the sources and targets are space separated \n# and must match the \"name\" column in the Node Table in Cytoscape\nparams[\"sources\"] = \"P35968 P00533 Q02763\"\nparams[\"targets\"] = \"Q15797 Q14872 Q16236 P14859 P36956\"\n\n# the number of shortest paths to compute, must be greater than 0\n# Default: 50\nparams[\"k\"] = 50\n\n# Edge weight type, must be one of the three: [UNWEIGHTED, ADDITIVE, PROBABILITIES]\nparams[\"edgeWeightType\"] = \"PROBABILITIES\" \n\n# Edge penalty. 
Not needed for UNWEIGHTED \n# Must be 0 or greater for ADDITIVE, and 1 or greater for PROBABILITIES \nparams[\"edgePenalty\"] = 1\n\n# The column name in the Edge Table in Cytoscape containing edge weight property, \n# column type must be numerical type \nparams[\"edgeWeightColumnName\"] = \"weight\"\n\n# The option to ignore directionality of edges when computing paths\n# Default: False\nparams[\"treatNetworkAsUndirected\"] = False\n\n# Allow source/target nodes to appear as intermediate nodes in computed paths\n# Default: False\nparams[\"allowSourcesTargetsInPaths\"] = False\n\n# Include more than k paths if the path length/score is equal to kth path length/score\n# Default: False\nparams[\"includeTiedPaths\"] = False\n\n# Option to disable the generation of the subnetwork/view, path rank column, and result panel\n# and only return the path result in JSON format\n# Default: False\nparams[\"skipSubnetworkGeneration\"] = False\n\n# perform REST API call\nheaders = {'Content-Type': 'application/json', 'Accept': 'application/json'}\n\n# construct REST API request url\nurl = \"http://localhost:1234/pathlinker/v1/\" + str(cy_network_suid) + \"/run\"\n# to just run on the network currently in view on cytoscape, use the following:\n#url = \"http://localhost:1234/pathlinker/v1/currentView/run\"\n\n# store request output\nresult_json = requests.request(\"POST\", \n url,\n data = json.dumps(params),\n params = None,\n headers = headers)\n#print(json.loads(result_json.content))", "The subnetwork shown below will be generated by running the function with the above input.\n\nOutput\nThe following section stores the subnetwork/view references and prints out the path output returned by the run function.\nThe output consists of the path result in JSON format, and based on user input: subnetwork SUID, subnetwork view SUID, and path rank column name.", "# Step 4: Store result, parse, and print\nresults = json.loads(result_json.content)\n\nprint(\"Output:\\n\")\n\n# access the suid, 
references, and path rank column name\nsubnetwork_suid = results[\"subnetworkSUID\"]\nsubnetwork_view_suid = results[\"subnetworkViewSUID\"]\n# The path rank column shows, for each edge, the rank of the first path in which it appears\npath_rank_column_name = results[\"pathRankColumnName\"]\n \nprint(\"subnetwork SUID: %s\" % (subnetwork_suid))\nprint(\"subnetwork view SUID: %s\" % (subnetwork_view_suid))\nprint(\"Path rank column name: %s\" % (path_rank_column_name))\nprint(\"\")\n\n# access the paths generated by PathLinker\npaths = results[\"paths\"]\n\n# print the first 3 paths out of the 50 paths\nfor path in paths[:3]:\n print(\"path rank: %d\" % (path['rank']))\n print(\"path score: %s\" % (str(path['score'])))\n print(\"path: %s\" % (\"|\".join(path['nodeList'])))\n \n# access network and network view references\nsubnetwork = cy.network.create(suid=subnetwork_suid)\n#subnetwork_view = subnetwork.get_first_view()\n\n# write the paths to a file\npaths_file = \"lovastatin-analysis-results/pathlinker-50-paths.txt\"\nprint(\"Writing paths to %s\" % (paths_file))\nwith open(paths_file, 'w') as out:\n out.write(\"path rank\\tpath score\\tpath\\n\")\n for path in paths:\n out.write('%d\\t%s\\t%s\\n' % (path['rank'], str(path['score']), \"|\".join(path['nodeList'])))\n\n# TODO map the nodes to gene names\n# cy.idmapper.map_column(subnetwork, ) ", "View the subnetwork and store the image", "# png\nsubnetwork_image_png = subnetwork.get_png()\nsubnetwork_image_file = 'lovastatin-analysis-results/pathlinker-50-paths.png'\nprint(\"Writing PNG to %s\" % (subnetwork_image_file))\nwith open(subnetwork_image_file, 'wb') as f:\n f.write(subnetwork_image_png)\n\nfrom IPython.display import Image\nImage(subnetwork_image_png)\n\n# # pdf\n# subnetwork_image_pdf = subnetwork.get_pdf()\n# subnetwork_image_file = subnetwork_image_file.replace('.png', '.pdf')\n# print(\"Writing PDF to %s\" % (subnetwork_image_file))\n# with open(subnetwork_image_file, 'wb') as f:\n# 
f.write(subnetwork_image_pdf)\n# # display the pdf in frame\n# from IPython.display import IFrame\n# IFrame('use_case_images/subnetwork_image.pdf', width=600, height=300)\n\n# # svg\n# subnetwork_image_svg = subnetwork.get_svg()\n\n# from IPython.display import SVG\n# SVG(subnetwork_image_svg)", "Analyze the Subnetwork\nNext, we need to recreate the functional enrichment analysis on the proteins in the subnetwork. \nTODO figure out how to run ClueGO" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
development/tutorials/dpdt.ipynb
gpl-3.0
[ "Period Change (dpdt)\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install \"phoebe>=2.4,<2.5\"", "As always, let's do imports and initialize a new Bundle.", "import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nb = phoebe.default_binary()", "In order to easily differentiate between the components in a light curve and in the orbits, we'll set the secondary temperature and the mass ratio.", "b.set_value('q', 0.8)\nb.set_value('teff', component='secondary', value=5000)", "and set dpdt to an unrealistically large value so that we can easily see the effect over just a few orbits.", "b.set_value('per0', 60)\nb.set_value('dpdt', 0.005*u.d/u.d)", "We'll add several light curve, RV, and orbit datasets, each covering successive cycles of the orbit so that we can differentiate between them later when plotting.", "for i in range(3):\n b.add_dataset('lc', compute_phases=phoebe.linspace(i,i+1,101))\n b.add_dataset('rv', compute_phases=phoebe.linspace(i,i+1,101))\n b.add_dataset('orb', compute_phases=phoebe.linspace(i,i+1,101))", "It is important to note that the dpdt parameter is the time-derivative of period (the anomalistic period). However, the anomalistic and sidereal periods are only different in the case of apsidal motion.", "print(b.get_parameter(qualifier='dpdt').description)", "Zero-point for orbital period\nThe orbital period itself, period, is defined at time t0 (in the system context). If the dataset times are far from t0@system then we begin to lose precision on the period parameter as a small change, coupled with the propagation of dpdt * (times - t0) can cause a large effect. It is important to try to define the system at a time t0@system that are near the dataset times (and near the other various t0s). 
By default, t0@system is set to 0 and we have set our dataset times to start at zero as well.", "print(b.filter('t0', context='system'))\n\nprint(b.get_parameter('t0', context='system').description)", "Considerations in compute_phases and mask_phases\nBy default, the mapping between compute_times and compute_phases will account for dpdt. In this case, we set compute_phases to cover successive orbits... so therefore the resulting compute_times will adjust as necessary.", "print(b.filter(qualifier='compute_times', kind='lc', context='dataset'))", "For the case of this tutorial, we would rather the compute_times be even cycles based on period alone, so that we can color by cycles of period and easily visualize the effect of dpdt. We could have set compute_times directly instead, but then we would need to keep the period fixed and know it in advance. Alternatively, we can set phases_dpdt = 'none' to tell this mapping to ignore dpdt.", "print(b.filter(qualifier='phases_dpdt'))\n\nprint(b.get_parameter(qualifier='phases_dpdt', dataset='lc01').description)", "As noted in the description, the phases_dpdt parameter will also affect phase-masking.", "b.set_value_all('phases_dpdt', 'none')", "Now we see that our resulting compute_times are direct multiples of the period (at time=t0@system).", "print(b.filter(qualifier='compute_times', kind='lc', context='dataset'))", "Contribution to Eclipse Timings in Light Curves\nNow we'll run the forward model, but with light travel time effects disabled, just to avoid any confusion with small contributions from the finite speed of light.", "b.run_compute(ltte=False)\n\n_ = b.plot(kind='lc', x='times', legend=True, show=True)", "By default, the phasing in plotting accounts for dpdt.", "_ = b.plot(kind='lc', x='phases', legend=True, show=True)", "To override this behavior, we can pass dpdt=0.0 so that we can see the eclipses spread across the phase-space. 
dpdt is passed directly to b.to_phase (see also: b.to_time and b.get_ephemeris)", "_ = b.plot(kind='lc', x='phases', dpdt=0.0, legend=True, show=True)", "Contribution to Orbits and Mass-Conservation\nAs the orbital period is instantaneously changing, the instantaneous semi-major axis of the orbit is also adjusted in order to conserve the total mass in the system (under Kepler's third law). This results in an automatic in or out-spiral of the system whenever dpdt != 0.0. Note that, like the period, sma is defined at t0@system.\nJust for visualization purposes, let's rerun our forward model, but this time with an even more exaggerated value for dpdt", "b.set_value('dpdt', 0.1*u.d/u.d)\n\nb.run_compute(ltte=False)\n\n_ = b.plot(kind='orb',\n x='us', y='ws', \n time=b.get_value('t0_supconj@component')+1*np.arange(0,3), \n linestyle={'primary': 'solid', 'secondary': 'dotted'}, \n color={'orb01': 'blue', 'orb02': 'orange', 'orb03': 'green'},\n #color='dataset', # TODO: we should support this to say color BY dataset\n legend=True,\n show=True)", "By plotting us vs times, we can see the position of the stars at integer periods (when we'd expect eclipses if it weren't for dpdt) as well as the times of the resulting eclipses (when the two stars cross at u=0, ignoring ltte, etc). 
Here we clearly see the increasing orbit size as a function of time.", "_ = b.plot(kind='orb',\n time=b.get_value('t0_supconj@component')+1*np.arange(0,3), \n linestyle={'primary': 'solid', 'secondary': 'dotted'}, \n color={'orb01': 'blue', 'orb02': 'orange', 'orb03': 'green'},\n x='times', y='us',\n show=True)", "Contributions to RVs\nDue to the changing size of the orbit due to mass conservation (increasing the RV amplitude for a positive dpdt), as well as the changing orbital period (decreasing the RV amplitude for a positive dpdt), the RVs will also have a change in amplitude as a function of time (in addition to the phase-effects seen for the light curve above).", "_ = b.plot(kind='rv', x='times', \n linestyle={'primary': 'solid', 'secondary': 'dotted'}, \n color={'rv01': 'blue', 'rv02': 'orange', 'rv03': 'green'},\n show=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
batfish/pybatfish
jupyter_notebooks/Analyzing Routing Policies.ipynb
apache-2.0
[ "Analyzing BGP Route Policies\nRoute policies for BGP are complex and error prone, which is why some of the biggest outages in the Internet involve misconfigured route policies that end up leaking routes or accepting routes they shouldn't (e.g., BGP Leak Causing Internet Outages in Japan and Beyond, How a Tiny Error Shut Off the Internet for Parts of the US, Telia engineer error to blame for massive net outage). While it is often clear to network engineers what the route policy should or should not do (e.g., see MANRS guidelines), ensuring that the route policy implementation is correct is notoriously hard.\nIn this notebook we show how you can use Batfish to validate your route policies. Batfish's testRoutePolicies question provides an easy way to test route-policy behavior---given a route, it shows how it is transformed (or denied) by the route policy. Batfish's searchRoutePolicies actively searches for routes that cause a policy to violate its intent.\nTo illustrate these capabilities, we'll use an example network with two border routers, named border1 and border2. Each router has a BGP session with a customer network and a BGP session with a provider network. 
Our goal in this notebook is to validate the in-bound route policy from the customer, called from_customer, and the out-bound route policy to the provider, called to_provider.\nThe intent of the from_customer route policy is:\n\nfilter private addresses\nonly permit routes to known prefixes if they have the correct origin AS\ntag permitted routes with an appropriate community, and update the local preference\n\nThe intent of the to_provider route policy is:\n\nadvertise all prefixes that we own\nadvertise all customer routes\ndon't advertise anything else\n\n\nWe'll start, as usual, by initializing the example network that we will use in this notebook.", "# Import packages\n%run startup.py\nfrom pybatfish.datamodel.route import BgpRouteConstraints\nbf = Session(host=\"localhost\")\n\n# Initialize a network and snapshot\nNETWORK_NAME = \"example_network\"\nSNAPSHOT_NAME = \"example_snapshot\"\n\nSNAPSHOT_PATH = \"networks/route-analysis\"\n\nbf.set_network(NETWORK_NAME)\nbf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)", "Example 1: Filter private addresses in-bound\nWhen peering with external entities, an almost universally desired policy is to filter out all announcements of the private IP address space. For our network, we'd like to ensure that the two from_customer route policies properly filter such announcements. \nTraditionally you might validate this policy through some form of testing that involves a production or lab device. 
With Batfish's testRoutePolicies question we can easily test a route policy's behavior without access to a device.", "# Create an example route to use for testing\ninRoute1 = BgpRoute(network=\"10.0.0.0/24\", \n originatorIp=\"4.4.4.4\", \n originType=\"egp\", \n protocol=\"bgp\")\n\n# Test how our policy treats this route\nresult = bf.q.testRoutePolicies(policies=\"from_customer\", \n direction=\"in\", \n inputRoutes=[inRoute1]).answer().frame()\n# Pretty print the result\nshow(result)", "The first line of code above creates a BgpRoute object that specifies the input route announcement to use for testing, which in this case announces the prefix 10.0.0.0/24, has an originator IP that we arbitrarily chose, and has default values for other parts of the announcement. The second line uses testRoutePolicies to test the behavior of the two from_customer route policies on this announcement.\nThe output of the question shows the results of the test. As we see in the Action column, in both border routers, the from_customer route policy properly denies this private address. The Trace column tell us that this happened because the input route matched clause 100 of the route map.\nThat result gives us some confidence in our route policies, but it is just a single test. We can run testRoutePolicies on more private addresses to ensure they are denied. \nHowever, how can we be sure that all private addresses are denied by the two in-bound route maps? For that, we will use the searchRoutePolicies question and change our perspective a bit. Instead of testing individual routes, we will ask Batfish to search for a route-policy behavior that violates our intent. If we get one or more results, then we've found a bug. 
If we get no results, then we can be sure that our configurations satisfy the intent, since Batfish explores all possible route-policy behaviors.", "# Define the space of private addresses\nprivateIps = [\"10.0.0.0/8:8-32\", \n \"172.16.0.0/28:28-32\", \n \"192.168.0.0/16:16-32\"]\n\n# Specify all route announcements for the private space\ninRoutes1 = BgpRouteConstraints(prefix=privateIps)\n\n# Verify that no such announcement is permitted by our policy\nresult = bf.q.searchRoutePolicies(policies=\"from_customer\", \n inputConstraints=inRoutes1, \n action=\"permit\").answer().frame()\n# Pretty print the result\nshow(result)", "The first line above specifies the space of all private IP prefixes. The second line creates a BgpRouteConstraints object, which is like the BgpRoute object we saw earlier but represents a set of announcements rather than a single one. In this case, we are interested in all announcements that announce a prefix in privateIps. Finally, the third line of code uses searchRoutePolicies to search for an announcement in the set inRoutes that is permitted by the from_customer route policy.\nThere are no results for border1, which means that its from_customer route policy properly filters all private addresses. However, the result for border2 shows that its version of from_customer permits an announcement for the prefix 192.168.0.0/32. The table also shows the route announcement that will be produced by from_customer in this case, along with a \"diff\" of the input and output announcements. \nInspecting the configurations, we see that both routers deny all announcements for prefixes in the prefix list private-ips. However, the definition of private-ips on border2 accidentally omitted the ge /16 clause, so only applied to /16 prefixes. 
Relevant parts of the config at border2 are:\n```\nip prefix-list private-ips seq 15 permit 192.168.0.0/16 // <-- missing ge /16\n...\nroute-map from_customer deny 100\n match ip address prefix-list private-ips\n!\n....\nroute-map from_customer permit 400\n set community 20:30\n set local-preference 300\n!\n```\nBatfish is able to correctly model the semantics of route maps and prefix lists and deduce that some prefix with private IPs will get past our policy.\nExample 2: Filter based on origin AS in-bound\nAnother common BGP policy is to make sure that announcements for certain prefixes (e.g., customer-owned prefixes) are acccepted only if they have a specific origin AS. \nFor our example, we assume that announcements for routes with any prefix in the range 5.5.5.0/24:24-32 should originate from the AS 44. We will use searchRoutePolicies to ask: Is there any permitted announcement for a prefix in the range 5.5.5.0/24:24-32 that does not originate from AS 44?", "# Define expected prefixes\nknownPrefixes = \"5.5.5.0/24:24-32\"\n\n# Define invalid AS-path -- all those that do not have 44 as the origin AS\nbadOrigin = \"!/( |^)44$/\"\n\n# Specify the route announcements we must not permit\ninRoutes2 = BgpRouteConstraints(prefix=knownPrefixes, asPath=badOrigin)\n\n# Verify that our policy does not permit any such announcement\nresult = bf.q.searchRoutePolicies(policies=\"from_customer\", \n inputConstraints=inRoutes2, \n action=\"permit\").answer().frame()\nshow(result)", "The first line above defines the known prefixes. The second line specifies that we are interested in AS-paths that do not end in 44, using the same syntax that Batfish uses for regular-expression specifiers. Specifically, a regular expression is surrounded by / characters, and the leading ! indicates that we are interested in AS-paths that do not match this regular expression. 
Since there are no results, we can be sure that the our intent is satisfied.\nExample 3: Set attributes in-bound\nAs the third and final policy for inbound routes, let's make sure that our from_customer route policies tag each permitted route with the community 20:30 and set the local preference to 300. To start, we can use testRoutePolicies to test this property on a specific route announcement.", "# Define a test route and test what the policy does to it\ninRoute3 = BgpRoute(network=\"2.0.0.0/8\", \n originatorIp=\"4.4.4.4\", \n originType=\"egp\", \n protocol=\"bgp\")\nresult = bf.q.testRoutePolicies(policies=\"from_customer\", \n direction=\"in\", \n inputRoutes=[inRoute3]).answer().frame()\nshow(result)", "The results show what each router does to the test route. This information can be seen in the Output_Route column, which shows the full output route announcement, as well as the Difference column, which shows the differences between the input and output route announcements. We see that there is an error in border1's configuration: the permitted route is tagged with community 20:30, but its local preference is not set to 300. However, border2 is doing the right thing. A look at the configuration for border1 reveals that the set local-preference line is accidentally omitted from one clause of the policy.\nThis example shows why testing is so important. However, we'd like to make sure that there aren't any other lurking bugs. We can use searchRoutePolicies for this purpose. First we'll check that all permitted routes are tagged with the community 20:30. To check this property, we will leverage the ability of searchRoutePolicies not only to search for particular input announcements, but also to search for particular output announcements. 
In this case, we will ask: Is there a permitted route whose output announcement is not tagged with community 20:30?", "# Define invalid communities -- those that do not contain 20:30\noutRoutes3a = BgpRouteConstraints(communities=\"!20:30\")\n# Verify that our policy does not output routes with such communities\nresult = bf.q.searchRoutePolicies(policies=\"from_customer\", \n action=\"permit\", \n outputConstraints=outRoutes3a).answer().frame()\nshow(result)", "There are no results, so we know that our intent is satisfied on both routers. \nNow let's do a similar thing to check that the local preference is properly set. We've already seen that border1 is not properly setting the local preference, so we'll just check that border2's configuration is correct.", "# Verify that all permitted routes have the expected local preference\noutRoutes3b = BgpRouteConstraints(localPreference=\"!300\")\nresult = bf.q.searchRoutePolicies(nodes=\"border2\", \n policies=\"from_customer\", \n action=\"permit\", \n outputConstraints=outRoutes3b).answer().frame()\nshow(result)", "There are no results for border2's from_customer policy, so we have the strong assurance that it is properly setting the local preference on all permitted routes.\nExample 4: Announce your own addresses out-bound\nOk, now let's validate the to_provider route policies. The first thing we want to ensure is that they allow all our addresses to be advertised. Lets assume that these are addresses in the ranges 1.2.3.0/24:24-32 and 1.2.4.0/24:24-32. We can use searchRoutePolicies to validate this property. 
Specifically, we ask: Is there an announcement for an address that we own that is denied by to_provider?", "# Verify that no route for our address space is ever denied by to_provider policies\nownedSpace=[\"1.2.3.0/24:24-32\", \"1.2.4.0/24:24-32\"]\ninRoutes4 = BgpRouteConstraints(prefix=ownedSpace)\nresult = bf.q.searchRoutePolicies(policies=\"to_provider\", \n inputConstraints=inRoutes4, \n action=\"deny\").answer().frame()\nshow(result)", "Since there are no results, this implies that no such announcement exists. Hurray!\nExample 5: Announce your customers' routes out-bound\nNext we will check that the to_provider route policies are properly announcing our customers' routes. These are identified by announcements that are tagged with the community 20:30, as we saw earlier.", "# Verify that no customer routes (i.e., those tagged with the community 20:30) is ever denied\ncustomerCommunities = \"20:30\"\ninRoutes5 = BgpRouteConstraints(communities=customerCommunities)\nresult = bf.q.searchRoutePolicies(policies=\"to_provider\", \n inputConstraints=inRoutes5, \n action=\"deny\").answer().frame()\nshow(result)", "There are no results for border1, so it permits announcement of all customer routes. But border2 has a bug since it denies some customer routes, a concrete example of which is shown in the Input_Route column. \nA look at the configuration reveals that someone has fat-fingered the definition of the community list:\nip community-list cust_community permit 2:30\nSuch mistakes are difficult to find with any other tool.\nExample 6: Don't advertise anything else out-bound\nLast, we want to make sure that our to_provider route policies don't announce any routes other than the ones we own and the ones that our customers own. 
We will use searchRoutePolicies to ask: Is there a permitted route whose prefix is not one we own and which is not tagged with the community 20:30?", "# Set of routes that are neither in our owned space nor have the customer community (20:30)\ninRoutes6 = BgpRouteConstraints(prefix=ownedSpace, \n complementPrefix=True, \n communities=\"!20:30\")\n\n# Verify that no such route is permitted\nresult = bf.q.searchRoutePolicies(policies=\"to_provider\", \n inputConstraints=inRoutes6, \n action=\"permit\").answer().frame()\nshow(result)", "The complementPrefix parameter above is used to indicate that we are interested in routes whose prefix is not in ownedSpace.\nSince there are no results for border1 we can be sure that it is not advertising any routes that it shouldn't be. We already saw in the previous example that border2 accidentally advertises routes tagged with 2:30, and that error shows up again here.\nCurrent Status\nThe testRoutePolicies question supports all of the vendors and route-policy features that are supported by Batfish. \nThe searchRoutePolicies question has been (at the time of this writing) newly added to Batfish. It supports all of the vendors that are supported by Batfish. The question supports a host of common route policy behaviors and intents, as shown above, but it does not currently support all routing constructs. See its documentation for details, and feel free to reach out to us with questions or specific needs. We'll continue to enhance its coverage.\nSummary\nIn this notebook we showed you two ways to use Batfish to check whether your route policies meet your intent:\n1. The testRoutePolicies question allows you to easily test the behavior of a route policy offline, without access to the live network.\n2. 
The searchRoutePolicies question allows you to search for violations of intent, identifying concrete errors if they exist and providing strong correctness guarantees if not.\n\nGet involved with the Batfish community\nJoin our community on Slack and GitHub." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Dima806/udacity-mlnd-capstone
capstone-preprocessing-final.ipynb
apache-2.0
[ "Udacity MLND Capstone Project\n\"Determination of students’ interaction patterns with an intelligent tutoring system and study of their correlation with successful learning\"\nPreprocessing step", "import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport time\nimport gzip\nimport shutil\nimport seaborn", "Load raw data from ASSISTments 2004-2005, 2005-2006 and 2006-2007 years datasets:", "print(\"Loading ds92 data:\")\nds92_data = pd.read_csv(\"wpi-assistments/math_2004_2005/ds92_tx_All_Data_172_2016_0504_081852.txt\", sep=\"\\t\", low_memory=False)\n#ds92_data = ds92_data[columns]\nprint(ds92_data.shape)\n\nprint(\"Loading ds120 data:\")\nds120_data = pd.read_csv(\"wpi-assistments/math_2005_2006/ds120_tx_All_Data_265_2017_0414_065125.txt\", sep=\"\\t\", low_memory=False)\n#ds120_data = ds120_data.copy()[columns]\nprint(ds120_data.shape)\n\nprint(\"Loading ds339 data:\")\nds339_data = pd.read_csv(\"wpi-assistments/math_2006_2007/ds339_tx_All_Data_1059_2015_0729_215742.txt\", sep=\"\\t\", low_memory=False)\n#ds339_data = ds339_data.copy()[columns]\nprint(ds339_data.shape)", "Transform selected features:", "i = 1\nfor df in [ds92_data, ds120_data, ds339_data]:\n print(\">> Processing dataset {}:\".format(i))\n df['Day'] = df['Time'].apply(lambda x: x.split(\" \")[0])\n df.drop(['Time'], axis=1, inplace=True)\n df['Duration (sec)'] = df['Duration (sec)'].replace({'.': 0}).astype(float)\n df['Student Response Type'] = df['Student Response Type'].replace({'ATTEMPT': 0, 'HINT_REQUEST': 1})\n df['Outcome'] = df['Outcome'].replace({'CORRECT': 0, 'INCORRECT': 1, 'HINT': 2})\n i += 1", "Look at descriptive statistics of initial datasets (only numerical columns):", "ds92_data.describe().dropna(axis=1)\n\nds120_data.describe().dropna(axis=1)\n\nds339_data.describe().dropna(axis=1)", "As we see, there are several common features present in each dataset. 
I will use all of them skipping only 'Help Level' (which gives the number of subsequent hints for hints and NaN for attempts).\nTransform and combine the selected features:", "data = pd.concat([ds92_data, ds120_data, ds339_data], ignore_index=True)\ndata.info()\n\ndata.head(20)\n\ndata.describe().dropna(axis=1)", "Look at correlations between different features:", "corr1 = data.corr().dropna(how='all', axis=1).dropna(how='all', axis=0)\ncorr1", "Visualise obtained correlations:", "seaborn.heatmap(corr1);", "As we see, there are some large (anti)correlations between the following features:\n- correlation (0.87) between 'Outcome' and 'Student Response Type' (hints contribute to the largest values in both features); \n- correlation (0.61) between 'Attempt At Step' and 'Help Level' (making more steps for attempts generally means making more hints);\n- correlation (0.49) between 'Help Level' and 'Total Num Hints' (number of subsequent hints clearly correlate with the total number of hints);\n- anti-correlation (-0.72) between 'Is Last Attempt' and 'Outcome';\n- anti-correlation (-0.48) between 'Is Last Attempt' and 'Student Response Type' (a product of correlation between 'Outcome' and 'Student Response Type' and anti-correlation between 'Is Last Attempt' and 'Outcome').\nSelect features chosen for further analysis:", "columns = ['Anon Student Id', \\\n 'Session Id', \\\n 'Duration (sec)', \\\n 'Student Response Type', \\\n 'Problem Name', \\\n 'Problem View', \\\n 'Attempt At Step', \\\n 'Outcome', \\\n 'Day']\n\ndata = data.copy()[columns]", "Add 'x' column:\nNote to reviewers: this algorithm is quite slow (~25 minutes), so you may consider adding 'x' variable (number of attempt) to a substantial subset of ASSISTments dataset (e.g. 
processing 100,000 rows takes only ~0.5 minutes).", "def adding_x(df):\n j = 0\n start_time = time.time()\n df['x'] = 0\n df_attempts = df[df['Student Response Type'] == 0].copy()\n stud_list = df_attempts['Anon Student Id'].unique()\n for student in stud_list:\n print(\"\\r\\t>>> Progress\\t:{:.4%}\".format((j + 1)/len(stud_list)), end='')\n j += 1\n stud = []\n stud.append(student)\n data_stud = df_attempts[np.in1d(df_attempts['Anon Student Id'], stud)].copy()\n for problem in data_stud['Problem Name'].unique():\n prob = []\n prob.append(problem)\n data_prob = data_stud[np.in1d(data_stud['Problem Name'], prob)].copy()\n data_stud.loc[data_prob.index,'x'] = range(1,len(data_prob)+1)\n df_attempts.loc[data_stud.index,'x'] = data_stud['x']\n end_time = time.time()\n print(\"\\n\\t>>> Exec. time\\t:{}s\".format(end_time-start_time))\n return df_attempts\n\n#data_x = adding_x(data.head(100000).copy())\ndata_x = adding_x(data.copy())\ndata['x'] = 0\ndata.loc[data_x.index,'x'] = data_x['x']\ndata[data['x'] > 0].shape", "Write data to hdf, read back and compare\nI read from and write to compressed hdf, see performance comparison:", "def hdf_fixed_write_compress(df):\n df.to_hdf('data.hdf','test',mode='w',complib='blosc')\n return\n\ndef hdf_fixed_read_compress():\n df = pd.read_hdf('data.hdf','test')\n return df\n\nhdf_fixed_write_compress(data)\n\ndata1 = hdf_fixed_read_compress()\nne = data[data != data1]\nne.dropna(axis=0, how='all', inplace=True)\nne.shape[0]", "This file is too large to upload to Githib:", "! ls -lh data.hdf", ", so I gzipped it:", "with open('data.hdf', 'rb') as f_in, gzip.open('data.hdf.gz', 'wb') as f_out:\n shutil.copyfileobj(f_in, f_out)", "The obtained data.hdf.gz file is smaller than 25M, so I upload it to my Github:", "! 
ls -lh data.hdf.gz", "Create visualisation:", "s1 = data[data['Outcome'] <= 1].groupby(['x']).agg(len)['Problem Name']\n\ns2 = data[data['Outcome'] == 1].groupby(['x']).agg(len)['Problem Name']\n\ns1[8] = s1.loc[8:].sum()\nfor i in range(9, int(s1.index.max()+1)):\n try:\n s1.drop(i, inplace=True)\n except ValueError:\n pass\n\ns2[8] = s2.loc[8:].sum()\nfor i in range(9, int(s2.index.max()+1)):\n try:\n s2.drop(i, inplace=True)\n except ValueError:\n pass\n\n# In case of wrong x labelling, simply run this cell 2 times:\n\nfig, ax1 = plt.subplots()\nfig_size = plt.rcParams[\"figure.figsize\"]\nfig_size[0] = 8.3\nfig_size[1] = 4.7\nplt.rcParams[\"figure.figsize\"] = fig_size\nplt.xlim(0.5,8.5)\nplt.bar(s1.index, s1, width=0.9)\n#plt.bar(s2.index, s2, width=0.9)\n#plt.legend(['CORRECT', 'INCORRECT'])\n\nplt.xlabel(\"Attempt number\", size=14)\nplt.ylabel(\"Number of attempts\", size=14)\nax1.tick_params(axis ='both', which='major', length=0, labelsize =14, color='black')\nax1.tick_params(axis ='both', which='minor', length=0)\nlabels = [item.get_text() for item in ax1.get_xticklabels()]\nlabels = ['0', '1', '2', '3', '4', '5', '6', '7', '8+']\n#print(labels)\n\nax2 = ax1.twinx()\nax2.plot(s1.index, s2/s1, 'r-o')\nax2.set_ylabel('Fraction of incorrect attempts', size=14, color='r')\nax2.tick_params('y', colors='r')\nax2.tick_params(axis ='both', which='minor', length=0)\nax2.tick_params(axis ='both', which='major', length=0, labelsize =14, color='red')\n\nax1.set_xticklabels(labels)\n\nplt.show()\nfig.savefig('data-visualisation.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jamesmarva/maths-with-python
09-exceptions-testing.ipynb
mit
[ "Things go wrong when programming all the time. Some of these \"problems\" are errors that stop the program from making sense. Others are problems that stop the program from working in specific, special cases. These \"problems\" may be real, or we may want to treat them as special cases that don't stop the program from running.\nThese special cases can be dealt with using exceptions.\nExceptions\nLet's define a function that divides two numbers.", "from __future__ import division\n\ndef divide(numerator, denominator):\n \"\"\"\n Divide two numbers.\n \n Parameters\n ----------\n \n numerator: float\n numerator\n denominator: float\n denominator\n \n Returns\n -------\n \n fraction: float\n numerator / denominator \n \"\"\"\n return numerator / denominator\n\nprint(divide(4.0, 5.0))", "But what happens if we try something really stupid?", "print(divide(4.0, 0.0))", "So, the code works fine until we pass in input that we shouldn't. When we do, this causes the code to stop. To show how this can be a problem, consider the loop:", "denominators = [1.0, 0.0, 3.0, 5.0]\nfor denominator in denominators:\n print(divide(4.0, denominator))", "There are three sensible results, but we only get the first.\nThere are many more complex, real cases where it's not obvious that we're doing something wrong ahead of time. In this case, we want to be able to try running the code and catch errors without stopping the code. This can be done in Python:", "try:\n print(divide(4.0, 0.0))\nexcept ZeroDivisionError:\n print(\"Dividing by zero is a silly thing to do!\")\n\ndenominators = [1.0, 0.0, 3.0, 5.0]\nfor denominator in denominators:\n try:\n print(divide(4.0, denominator))\n except ZeroDivisionError:\n print(\"Dividing by zero is a silly thing to do!\")", "The idea here is given by the names. Python will try to execute the code inside the try block. 
This is just like an if or a for block: each command that is indented in that block will be executed in order.\nIf, and only if, an error arises then the except block will be checked. If the error that is produced matches the one listed then instead of stopping, the code inside the except block will be run instead.\nTo show how this works with different errors, consider a different silly error:", "try:\n    print(divide(4.0, \"zero\"))\nexcept ZeroDivisionError:\n    print(\"Dividing by zero is a silly thing to do!\")", "We see that, as it makes no sense to divide by a string, we get a TypeError instead of a ZeroDivisionError. We could catch both errors:", "try:\n    print(divide(4.0, \"zero\"))\nexcept ZeroDivisionError:\n    print(\"Dividing by zero is a silly thing to do!\")\nexcept TypeError:\n    print(\"Dividing by a string is a silly thing to do!\")", "We could catch any error:", "try:\n    print(divide(4.0, \"zero\"))\nexcept:\n    print(\"Some error occurred\")", "This doesn't give us much information, and may lose information that we need in order to handle the error. We can capture the exception to a variable, and then use that variable:", "try:\n    print(divide(4.0, \"zero\"))\nexcept (ZeroDivisionError, TypeError) as exception:\n    print(\"Some error occurred: {}\".format(exception))", "Here we have caught two possible types of error within the tuple (which must, in this case, have parentheses) and captured the specific error in the variable exception. This variable can then be used: here we just print it out.\nNormally best practice is to be as specific as possible about the error you are trying to catch.\nExtending the logic\nSometimes you may want to perform an action only if an error did not occur. 
For example, let's suppose we wanted to store the result of dividing 4 by a divisor, and also store the divisor, but only if the divisor is valid.\nOne way of doing this would be the following:", "denominators = [1.0, 0.0, 3.0, \"zero\", 5.0]\nresults = []\ndivisors = []\nfor denominator in denominators:\n try:\n result = divide(4.0, denominator)\n except (ZeroDivisionError, TypeError) as exception:\n print(\"Error of type {} for denominator {}\".format(exception, denominator))\n else:\n results.append(result)\n divisors.append(denominator)\nprint(results)\nprint(divisors)", "The statements in the else block are only run if the try block succeeds. If it doesn't - if the statements in the try block raise an exception - then the statements in the else block are not run.\nExceptions in your own code\nSometimes you don't want to wait for the code to break at a low level, but instead stop when you know things are going to go wrong. This is usually because you can be more informative about what's going wrong. Here's a slightly artificial example:", "def divide_sum(numerator, denominator1, denominator2):\n \"\"\"\n Divide a number by a sum.\n \n Parameters\n ----------\n \n numerator: float\n numerator\n denominator1: float\n Part of the denominator\n denominator2: float\n Part of the denominator\n \n Returns\n -------\n \n fraction: float\n numerator / (denominator1 + denominator2)\n \"\"\"\n \n return numerator / (denominator1 + denominator2)\n\ndivide_sum(1, 1, -1)", "It should be obvious to the code that this is going to go wrong. 
Rather than letting the code hit the ZeroDivisionError exception automatically, we can raise it ourselves, with a more meaningful error message:", "def divide_sum(numerator, denominator1, denominator2):\n \"\"\"\n Divide a number by a sum.\n \n Parameters\n ----------\n \n numerator: float\n numerator\n denominator1: float\n Part of the denominator\n denominator2: float\n Part of the denominator\n \n Returns\n -------\n \n fraction: float\n numerator / (denominator1 + denominator2)\n \"\"\"\n \n if (denominator1 + denominator2) == 0:\n raise ZeroDivisionError(\"The sum of denominator1 and denominator 2 is zero!\")\n \n return numerator / (denominator1 + denominator2)\n\ndivide_sum(1, 1, -1)", "There are a large number of standard exceptions in Python, and most of the time you should use one of those, combined with a meaningful error message. One is particularly useful: NotImplementedError.\nThis exception is used when the behaviour the code is about to attempt makes no sense, is not defined, or similar. For example, consider computing the roots of the quadratic equation, but restricting to only real solutions. Using the standard formula\n\\begin{equation}\n x_{\\pm} = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}\n\\end{equation}\nwe know that this only makes sense if $b^2 \\ge 4ac$. We put this in code as:", "from math import sqrt\n \ndef real_quadratic_roots(a, b, c):\n \"\"\"\n Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.\n \n Parameters\n ----------\n \n a : float\n Coefficient of x^2\n b : float\n Coefficient of x^1\n c : float\n Coefficient of x^0\n \n Returns\n -------\n \n roots : tuple\n The roots\n \n Raises\n ------\n \n NotImplementedError\n If the roots are not real.\n \"\"\"\n \n discriminant = b**2 - 4.0*a*c\n if discriminant < 0.0:\n raise NotImplementedError(\"The discriminant is {}<0. 
\"\n \"No real roots exist.\".format(discriminant))\n \n x_plus = (-b + sqrt(discriminant)) / (2.0*a)\n x_minus = (-b - sqrt(discriminant)) / (2.0*a)\n \n return x_plus, x_minus\n\nprint(real_quadratic_roots(1.0, 5.0, 6.0))\n\nreal_quadratic_roots(1.0, 1.0, 5.0)", "Testing\nHow do we know if our code is working correctly? It is not when the code runs and returns some value: as seen above, there may be times where it makes sense to stop the code even when it is correct, as it is being used incorrectly. We need to test the code to check that it works.\nUnit testing is the idea of writing many small tests that check if simple cases are behaving correctly. Rather than trying to prove that the code is correct in all cases (which could be very hard), we check that it is correct in a number of tightly controlled cases (which should be more straightforward). If we later find a problem with the code, we add a test to cover that case.\nConsider a function solving for the real roots of the quadratic equation again. This time, if there are no real roots we shall return None (to say there are no roots) instead of raising an exception.", "from math import sqrt\n \ndef real_quadratic_roots(a, b, c):\n \"\"\"\n Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.\n \n Parameters\n ----------\n \n a : float\n Coefficient of x^2\n b : float\n Coefficient of x^1\n c : float\n Coefficient of x^0\n \n Returns\n -------\n \n roots : tuple or None\n The roots\n \"\"\"\n \n discriminant = b**2 - 4.0*a*c\n if discriminant < 0.0:\n return None\n \n x_plus = (-b + sqrt(discriminant)) / (2.0*a)\n x_minus = (-b + sqrt(discriminant)) / (2.0*a)\n \n return x_plus, x_minus", "First we check what happens if there are imaginary roots, using $x^2 + 1 = 0$:", "print(real_quadratic_roots(1, 0, 1))", "As we wanted, it has returned None. 
We also check what happens if the roots are zero, using $x^2 = 0$:", "print(real_quadratic_roots(1, 0, 0))", "We get the expected behaviour. We also check what happens if the roots are real, using $x^2 - 1 = 0$ which has roots $\\pm 1$:", "print(real_quadratic_roots(1, 0, -1))", "Something has gone wrong. Looking at the code, we see that the x_minus line has been copied and pasted from the x_plus line, without changing the sign correctly. So we fix that error:", "from math import sqrt\n \ndef real_quadratic_roots(a, b, c):\n \"\"\"\n Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.\n \n Parameters\n ----------\n \n a : float\n Coefficient of x^2\n b : float\n Coefficient of x^1\n c : float\n Coefficient of x^0\n \n Returns\n -------\n \n roots : tuple or None\n The roots\n \"\"\"\n \n discriminant = b**2 - 4.0*a*c\n if discriminant < 0.0:\n return None\n \n x_plus = (-b + sqrt(discriminant)) / (2.0*a)\n x_minus = (-b - sqrt(discriminant)) / (2.0*a)\n \n return x_plus, x_minus", "We have changed the code, so now have to re-run all our tests, in case our change broke something else:", "print(real_quadratic_roots(1, 0, 1))\nprint(real_quadratic_roots(1, 0, 0))\nprint(real_quadratic_roots(1, 0, -1))", "As a final test, we check what happens if the equation degenerates to a linear equation where $a=0$, using $x + 1 = 0$ with solution $-1$:", "print(real_quadratic_roots(0, 1, 1))", "In this case we get an exception, which we don't want. 
We fix this problem:", "from math import sqrt\n \ndef real_quadratic_roots(a, b, c):\n \"\"\"\n Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.\n \n Parameters\n ----------\n \n a : float\n Coefficient of x^2\n b : float\n Coefficient of x^1\n c : float\n Coefficient of x^0\n \n Returns\n -------\n \n roots : tuple or float or None\n The root(s) (two if a genuine quadratic, one if linear, None otherwise)\n \n Raises\n ------\n \n NotImplementedError\n If the equation has trivial a and b coefficients, so isn't solvable.\n \"\"\"\n \n discriminant = b**2 - 4.0*a*c\n if discriminant < 0.0:\n return None\n \n if a == 0:\n if b == 0:\n raise NotImplementedError(\"Cannot solve quadratic with both a\"\n \" and b coefficients equal to 0.\")\n else:\n return -c / b\n \n x_plus = (-b + sqrt(discriminant)) / (2.0*a)\n x_minus = (-b - sqrt(discriminant)) / (2.0*a)\n \n return x_plus, x_minus", "And we now must re-run all our tests again, as the code has changed once more:", "print(real_quadratic_roots(1, 0, 1))\nprint(real_quadratic_roots(1, 0, 0))\nprint(real_quadratic_roots(1, 0, -1))\nprint(real_quadratic_roots(0, 1, 1))", "Formalizing tests\nThis small set of tests covers most of the cases we are concerned with. However, by this point it's getting hard to remember\n\nwhat each line is actually testing, and\nwhat the correct value is meant to be.\n\nTo formalize this, we write each test as a small function that contains this information for us. 
Let's start with the $x^2 - 1 = 0$ case where the roots are $\\pm 1$:", "from numpy.testing import assert_equal, assert_allclose\n\ndef test_real_distinct():\n \"\"\"\n Test that the roots of x^2 - 1 = 0 are \\pm 1.\n \"\"\"\n \n roots = (1.0, -1.0)\n assert_equal(real_quadratic_roots(1, 0, -1), roots,\n err_msg=\"Testing x^2-1=0; roots should be 1 and -1.\")\n\ntest_real_distinct()", "What this function does is check that the results of the function call match the expected value, here stored in roots. If it didn't match the expected value, it would raise an exception:", "def test_should_fail():\n \"\"\"\n Comparing the roots of x^2 - 1 = 0 to (1, 1), which should fail.\n \"\"\"\n \n roots = (1.0, 1.0)\n assert_equal(real_quadratic_roots(1, 0, -1), roots,\n err_msg=\"Testing x^2-1=0; roots should be 1 and 1.\"\n \" So this test should fail\")\n\ntest_should_fail()", "Testing that one floating point number equals another can be dangerous. Consider $x^2 - 2 x + (1 - 10^{-10}) = 0$ with roots $1 \\pm 10^{-5}$:", "from math import sqrt\n\ndef test_real_distinct_irrational():\n \"\"\"\n Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \\pm 1e-5.\n \"\"\"\n \n roots = (1 + 1e-5, 1 - 1e-5)\n assert_equal(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,\n err_msg=\"Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.\")\n \ntest_real_distinct_irrational()", "We see that the solutions match to the first 14 or so digits, but this isn't enough for them to be exactly the same. In this case, and in most cases using floating point numbers, we want the result to be \"close enough\": to match the expected precision. 
There is an assertion for this as well:", "from math import sqrt\n\ndef test_real_distinct_irrational():\n \"\"\"\n Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \\pm 1e-5.\n \"\"\"\n \n roots = (1 + 1e-5, 1 - 1e-5)\n assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,\n err_msg=\"Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.\")\n \ntest_real_distinct_irrational()", "The assert_allclose statement takes options controlling the precision of our test.\nWe can now write out all our tests:", "from math import sqrt\nfrom numpy.testing import assert_equal, assert_allclose\n\ndef test_no_roots():\n \"\"\"\n Test that the roots of x^2 + 1 = 0 are not real.\n \"\"\"\n \n roots = None\n assert_equal(real_quadratic_roots(1, 0, 1), roots,\n err_msg=\"Testing x^2+1=0; no real roots.\")\n\ndef test_zero_roots():\n \"\"\"\n Test that the roots of x^2 = 0 are both zero.\n \"\"\"\n \n roots = (0, 0)\n assert_equal(real_quadratic_roots(1, 0, 0), roots,\n err_msg=\"Testing x^2=0; should both be zero.\")\n\ndef test_real_distinct():\n \"\"\"\n Test that the roots of x^2 - 1 = 0 are \\pm 1.\n \"\"\"\n \n roots = (1.0, -1.0)\n assert_equal(real_quadratic_roots(1, 0, -1), roots,\n err_msg=\"Testing x^2-1=0; roots should be 1 and -1.\")\n \ndef test_real_distinct_irrational():\n \"\"\"\n Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \\pm 1e-5.\n \"\"\"\n \n roots = (1 + 1e-5, 1 - 1e-5)\n assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,\n err_msg=\"Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.\")\n \ndef test_real_linear_degeneracy():\n \"\"\"\n Test that the root of x + 1 = 0 is -1.\n \"\"\"\n \n root = -1.0\n assert_equal(real_quadratic_roots(0, 1, 1), root,\n err_msg=\"Testing x+1=0; root should be -1.\")\n\ntest_no_roots()\ntest_zero_roots()\ntest_real_distinct()\ntest_real_distinct_irrational()\ntest_real_linear_degeneracy()", "Nose\nWe now have a set of tests - a testsuite, as it is sometimes called - 
encoded in functions, with meaningful names, which give useful error messages if the test fails. Every time the code is changed, we want to re-run all the tests to ensure that our change has not broken the code. This can be tedious. A better way would be to run a single command that runs all tests. nosetests is that command.\nThe easiest way to use it is to put all tests in the same file as the function being tested. So, create a file quadratic.py containing\n```python\nfrom math import sqrt\nfrom numpy.testing import assert_equal, assert_allclose\ndef real_quadratic_roots(a, b, c):\n \"\"\"\n Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.\nParameters\n----------\n\na : float\n Coefficient of x^2\nb : float\n Coefficient of x^1\nc : float\n Coefficient of x^0\n\nReturns\n-------\n\nroots : tuple or float or None\n The root(s) (two if a genuine quadratic, one if linear, None otherwise)\n\nRaises\n------\n\nNotImplementedError\n If the equation has trivial a and b coefficients, so isn't solvable.\n\"\"\"\n\ndiscriminant = b**2 - 4.0*a*c\nif discriminant < 0.0:\n return None\n\nif a == 0:\n if b == 0:\n raise NotImplementedError(\"Cannot solve quadratic with both a\"\n \" and b coefficients equal to 0.\")\n else:\n return -c / b\n\nx_plus = (-b + sqrt(discriminant)) / (2.0*a)\nx_minus = (-b - sqrt(discriminant)) / (2.0*a)\n\nreturn x_plus, x_minus\n\ndef test_no_roots():\n \"\"\"\n Test that the roots of x^2 + 1 = 0 are not real.\n \"\"\"\nroots = None\nassert_equal(real_quadratic_roots(1, 0, 1), roots,\n err_msg=\"Testing x^2+1=0; no real roots.\")\n\ndef test_zero_roots():\n \"\"\"\n Test that the roots of x^2 = 0 are both zero.\n \"\"\"\nroots = (0, 0)\nassert_equal(real_quadratic_roots(1, 0, 0), roots,\n err_msg=\"Testing x^2=0; should both be zero.\")\n\ndef test_real_distinct():\n \"\"\"\n Test that the roots of x^2 - 1 = 0 are \\pm 1.\n \"\"\"\nroots = (1.0, -1.0)\nassert_equal(real_quadratic_roots(1, 0, -1), roots,\n
err_msg=\"Testing x^2-1=0; roots should be 1 and -1.\")\n\ndef test_real_distinct_irrational():\n \"\"\"\n Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \\pm 1e-5.\n \"\"\"\nroots = (1 + 1e-5, 1 - 1e-5)\nassert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,\n err_msg=\"Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.\")\n\ndef test_real_linear_degeneracy():\n \"\"\"\n Test that the root of x + 1 = 0 is -1.\n \"\"\"\nroot = -1.0\nassert_equal(real_quadratic_roots(0, 1, 1), root,\n err_msg=\"Testing x+1=0; root should be -1.\")\n\n```\nThen, in a terminal or command window, switch to the directory containing this file. Then run\nnosetests quadratic.py\nYou should see output similar to\n```\nnosetests quadratic.py \n.....\n\nRan 5 tests in 0.006s\nOK\n```\nEach dot corresponds to a test. If a test fails, nose will report the error and move on to the next test. nose automatically runs every function that starts with test, or every file in a module starting with test, or more. The documentation gives more details about using nose in more complex cases.\nTo summarize: when trying to get code working, tests are essential. Tests should be simple and cover as many of the easy cases and as much of the code as possible. By writing tests as functions that raise exceptions, and using a testing framework such as nose, all tests can be run rapidly, saving time.\nTest Driven Development\nThere are many ways of writing code to solve problems. Most involve planning in advance how the code should be written. An alternative is to say in advance what tests the code should pass. This Test Driven Development (TDD) has advantages (the code always has a detailed set of tests, features in the code are always relevant to some test, it's easy to start writing code) and some disadvantages (it can be overkill for small projects, it can lead down blind alleys). 
A detailed discussion is given by Beck's book, and a more recent discussion in this series of conversations.\nEven if TDD does not work for you, testing itself is extremely important." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
VectorBlox/PYNQ
Pynq-Z1/notebooks/examples/overlay_integration.ipynb
bsd-3-clause
[ "Writing Python for new Overlay\nThis example will show how to interface to an overlay or hardware library from Python. \nIn this example, we will assume a new overlay has been created with an accelerator that receives data from Python, processes it, and returns the results. \nA command and data will be sent to the accelerator from Python, the accelerator will process the data, return the results to memory, and acknowledge the transaction has completed.\nRather than go through the process of creating a new overlay, for the purposes of this example, the Base overlay will be used to illustrate the process. The IOP1 memory will be used to act like the accelerator memory, although no processing will be carried out on the data.\nFor this example, we will define the following addresses in the overlay, which are in the IOP1 memory space, and are accessible from Python:\n|Address | Name | Memory Location |\n|---------------------------|-----------------|-----------------| \n|Accelerator address | BASE_ADDRESS | 0x40000000 | \n|Command Address offset | CMD_OFFSET | 0x800 |\n|Acknowledge Address offset | ACK_OFFSET | 0x804 |\n|Raw Data Address offset | RAW_DATA_OFFSET | 0x0 |\n|Processed Data Address offset | PROCESSED_DATA_OFFSET | 0x400 |\nAssume we only have the following commands for this simple accelerator:\n|Command | Value| \n|--------|------| \n|Idle | 0x0 |\n|Process | 0x1 | \nCreate a new Python module\nThe pynq MMIO module will be used to read and write to memory, or memory mapped peripherals in the Overlay. First MMIO is imported, and then the new class for this module is defined. 
\n```python\nfrom pynq import MMIO\nclass my_new_accelerator:\n \"\"\"Brief description of Module goes here\nAttributes\n----------\narray_size : int\n Describe parameters used in this module's functions.\n\"\"\"\n\n```\nInstantiate the MMIO\nNext the MMIO will be instantiated inside the new module.\n```python\n mmio = MMIO(0x40000000,0x808)\n array_length = 0\n```\nNote that a variable, array_length, for this module will also be declared. You will see how this is used later.\nAssume that the accelerator will check the command address when it starts.\nThe Python module must first initialize the command location (BASE_ADDRESS + CMD_OFFSET) to 0x0 (\"idle\"). \nDeclare an initialization function\nDeclare the function and write zero to the command location:\n```python\n def __init__(self):\n self.mmio.write(CMD_OFFSET, 0)\n```\nDefine the API\nFor this example, we will define two functions: load_data() and process(). \nload_data() will write data to the accelerator memory. \nprocess() will send the start command to the accelerator, wait for an acknowledge, and read back the processed data.\n\n0x1 will be written to the command location from Python\nThe accelerator will write 0x1 to the acknowledge location when processing is complete.\n\nNote how the array_length variable is used.\n```python\ndef load_data(self, raw_data):\n self.array_length = len(raw_data)\n for i in range(0 , self.array_length):\n self.mmio.write(RAW_DATA_OFFSET, raw_data[i])\ndef process(self): \n # Send start command to accelerator\n self.mmio.write(CMD_OFFSET, 0x1)\n processed_data = [0] *self.array_length\n # Wait for the accelerator to write 0x1 to the ACK offset\n while (self.mmio.read(ACK_OFFSET)) != 0x1:\n pass\n # Ack has been received\nfor i in range(0 , self.array_length):\n processed_data[i] = self.mmio.read(PROCESSED_DATA_OFFSET)\n\n# Reset Ack\nself.mmio.write(ACK_OFFSET, 0) \nreturn processed_data\n\n```\nFinal code\nThe complete code can be found below, and can be executed and tested in this 
notebook by running the cells below. The code could be copied to a python file, and run directly on the board.", "BASE_ADDRESS = 0x40000000\nCMD_OFFSET = 0x800\nACK_OFFSET = 0x804\nRAW_DATA_OFFSET = 0\nPROCESSED_DATA_OFFSET = 0x400\n \nfrom pynq import MMIO\n \nclass my_new_accelerator:\n \"\"\"Brief description of Module goes here.\n \n Attributes\n ----------\n array_size : int\n Describe parameters used in this module's functions.\n raw_data : int\n Input Data\n processed_data : int\n Return data\n \n \"\"\"\n mmio = MMIO(0x40000000,0x808)\n array_length = 0\n \n def __init__(self):\n self.mmio.write(CMD_OFFSET, 0)\n \n def load_data(self, raw_data):\n self.array_length = len(raw_data)\n for i in range(0 , self.array_length):\n self.mmio.write(RAW_DATA_OFFSET, raw_data[i])\n \n def process(self): \n # Send start command to accelerator\n self.mmio.write(CMD_OFFSET, 0x1)\n processed_data = [0] *self.array_length\n \n # Wait for the accelerator to write 0x1 to the ACK offset\n while (self.mmio.read(ACK_OFFSET)) != 0x1:\n pass\n # Ack has been received\n\n for i in range(0 , self.array_length):\n processed_data[i] = self.mmio.read(PROCESSED_DATA_OFFSET)\n \n # Reset Ack\n self.mmio.write(ACK_OFFSET, 0) \n return processed_data", "Executing the cell above loads the module into this notebook. This is the equivalent of importing the module (import my_new_accelerator) if it was included as part of the pynq package.\nAs explained previously, this notebook does not show you how to create a custom accelerator, however, the python code can be tested with the Base overlay. In the Base overlay, the IOP memory (starting at 0x40000000) will be used to simulate writing to an accelerator, and reading back from the accelerator. Notice how the code writes to one area of memory (BASE_ADDRESS + RAW_DATA_OFFSET), and expects to read back results from another area in memory (BASE_ADDRESS + PROCESSED_DATA_OFFSET). 
\nExecute the cell below to load the Base overlay, instantiate the accelerator, and send some data to the accelerator.", "from pynq import Overlay\nOverlay(\"base.bit\").download()\n\n# instantiate the accelerator\nacc = my_new_accelerator()\nraw_data = [1]*20\nprint(\"Some data to be sent to the accelerator:\", raw_data)\nacc.load_data(raw_data)", "As the accelerator doesn't exist, any data loaded to memory won't be processed, and the acknowledge will not be written. \nExecute the cell below to use the MMIO to manually write some data to the results area of the memory to simulate data being processed, and to write 0x1 to the acknowledge address. \nThe MMIO can be very useful to peek and poke memory and memory mapped peripherals in the overlay to debug Python code.", "from pynq import MMIO\n \nmmio = MMIO(0x40000000,2056)\n\nfor i in range (0,len(raw_data)):\n mmio.write(PROCESSED_DATA_OFFSET, raw_data[i]+1)\n\nfor i in range (0,len(raw_data)):\n mmio.write(ACK_OFFSET, 1)", "The process() function can now send a start command, read the acknowledge (which has already been set manually in the cell above), and read back data from the processed data area. You can change the code above to write different data to the processed data area, or to set the acknowledge to 0 (which will cause the code below to hang).", "processed_data = acc.process()\nprint(\"Input Data : \", raw_data)\nprint(\"Processed Data : \", processed_data)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
intel-analytics/BigDL
python/chronos/colab-notebook/chronos_autots_nyc_taxi.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/chronos/colab-notebook/chronos_experimental_autots_nyc_taxi.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n\nCopyright 2016 The BigDL Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#", "Environment Preparation\nInstall bigdl-chronos\nYou can install the latest pre-release version with automl support using pip install --pre --upgrade bigdl-chronos[all].", "# Install latest pre-release version of bigdl-chronos \n# Installing bigdl-chronos from pip will automatically install pyspark, bigdl, and their dependencies.\n!pip install --pre --upgrade bigdl-chronos[all]\n!pip uninstall -y torchtext # uninstall torchtext to avoid version conflict\nexit() # restart the runtime to refresh installed pkg", "Distributed automl for time series forecasting using Chronos AutoTS\nIn this guide we will demonstrate how to use Chronos AutoTS for automated time series forecasting in 5 simple steps.\nStep 0: Prepare dataset\nWe use the NYC taxi passengers dataset from the Numenta Anomaly Benchmark (NAB) for the demo, which contains 10320 records, each indicating the total number of taxi passengers in NYC at a corresponding time spot.", "# download the dataset\n!wget https://raw.githubusercontent.com/numenta/NAB/v1.0/data/realKnownCause/nyc_taxi.csv\n\n# load the dataset. 
The downloaded dataframe contains two columns, \"timestamp\" and \"value\".\nimport pandas as pd\ndf = pd.read_csv(\"nyc_taxi.csv\", parse_dates=[\"timestamp\"])", "Step 1: Init Orca Context", "# import necessary libraries and modules\nfrom bigdl.orca import init_orca_context, stop_orca_context\nfrom bigdl.orca import OrcaContext", "This is the only place where you need to specify local or distributed mode. View Orca Context for more details. Note that the argument init_ray_on_spark must be True for Chronos.", "# recommended to set it to True when running bigdl-chronos in Jupyter notebook \nOrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).\n\ninit_orca_context(cluster_mode=\"local\", cores=4, init_ray_on_spark=True)", "Step 2: Data transformation and feature engineering using Chronos TSDataset\nTSDataset is our abstraction of a time series dataset for data transformation and feature engineering. Here we use it to preprocess the data.", "from bigdl.chronos.data import TSDataset\nfrom sklearn.preprocessing import StandardScaler\n\ntsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df, # the dataframe to load\n dt_col=\"timestamp\", # the column name specifying datetime\n target_col=\"value\", # the column name to predict\n with_split=True, # split the dataset into 3 parts\n val_ratio=0.1, # validation set ratio\n test_ratio=0.1) # test set ratio\n\n# for each tsdataset, we \n# 1. generate datetime feature columns.\n# 2. impute the dataset with the last observed value.\n# 3. 
scale the dataset with standard scaler, fit = true for train data.\nstandard_scaler = StandardScaler()\nfor tsdata in [tsdata_train, tsdata_val, tsdata_test]:\n tsdata.gen_dt_feature()\\\n .impute(mode=\"last\")\\\n .scale(standard_scaler, fit=(tsdata is tsdata_train))", "Step 3: Create an AutoTSEstimator\nAutoTSEstimator is our Automated TimeSeries Estimator for time series forecasting task.", "import bigdl.orca.automl.hp as hp\nfrom bigdl.chronos.autots import AutoTSEstimator\nauto_estimator = AutoTSEstimator(model='lstm', # the model name used for training\n search_space='normal', # a default hyper parameter search space\n past_seq_len=hp.randint(1, 10)) # hp sampling function of past_seq_len for auto-tuning", "Step 4: Fit with AutoTSEstimator", "# fit with AutoTSEstimator for a returned TSPipeline\nts_pipeline = auto_estimator.fit(data=tsdata_train, # train dataset\n validation_data=tsdata_val, # validation dataset\n epochs=5) # number of epochs to train in each trial", "Step 5: Further deployment with TSPipeline\nTSPipeline is our E2E solution for time series forecasting task.", "# predict with the best trial\ny_pred = ts_pipeline.predict(tsdata_test)\n\n# evaluate the result pipeline\nmse, smape = ts_pipeline.evaluate(tsdata_test, metrics=[\"mse\", \"smape\"])\nprint(\"Evaluate: the mean square error is\", mse)\nprint(\"Evaluate: the smape value is\", smape)\n\n# plot the result\nimport matplotlib.pyplot as plt\n\nlookback = auto_estimator.get_best_config()['past_seq_len']\ngroundtruth_unscale = tsdata_test.unscale().to_pandas()[lookback - 1:]\n\nplt.figure(figsize=(16,6))\nplt.plot(groundtruth_unscale[\"timestamp\"], y_pred[:,0,0])\nplt.plot(groundtruth_unscale[\"timestamp\"], groundtruth_unscale[\"value\"])\nplt.legend([\"prediction\", \"ground truth\"])\n\n# save the pipeline\nmy_ppl_file_path = \"/tmp/saved_pipeline\"\nts_pipeline.save(my_ppl_file_path)\n\n# restore the pipeline for further deployment\nfrom bigdl.chronos.autots import 
TSPipeline\nloaded_ppl = TSPipeline.load(my_ppl_file_path)\n\n# Stop orca context when your program finishes\nstop_orca_context()\n\n# show a tensorboard view\n%load_ext tensorboard\n%tensorboard --logdir /tmp/autots_estimator/autots_estimator_leaderboard/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
opesci/notebooks
AcousticFWI/MultiOrder_2d-3d.ipynb
bsd-3-clause
[ "from __future__ import print_function\nfrom sympy import *\nfrom sympy.abc import *\nfrom sympy.galgebra.ga import *\nimport numpy as np\nfrom numpy import linalg as LA\ninit_printing()", "Multi order stencil for the 2D/3D acoustic isotropic wave equation", "# Choose dimension (2 or 3)\ndim = 2\n# Choose order\ntime_order = 6\nspace_order = 12\n\n# half width for indexes, goes from -half to half\nwidth_t = int(time_order/2)\nwidth_h = int(space_order/2)\n\n# Define functions and symbols\np=Function('p')\ns,h = symbols('s h')\nif dim==2:\n m=M(x,z)\n q=Q(x,z,t)\n d=D(x,z,t)\n solvep = p(x,z,t+width_t*s)\n solvepa = p(x,z,t-width_t*s)\nelse :\n m=M(x,y,z)\n q=Q(x,y,z,t)\n d=D(x,y,z,t)\n solvep = p(x,y,z,t+width_t*s)\n solvepa = p(x,y,z,t-width_t*s)\n\n# Finite differences coefficients, not necessary here but good to have somewhere\ndef fd_coeff_1(order):\n if order==16:\n coeffs = [0.000010, -0.000178, 0.001554, -0.008702, 0.035354, -0.113131, 0.311111, -0.888889, 0.000000,\n 0.888889, -0.311111, 0.113131, -0.035354, 0.008702, -0.001554, 0.000178, -0.000010]\n \n if order==14:\n coeffs = [-0.000042, 0.000680, -0.005303, 0.026515, -0.097222, 0.291667, -0.875000, 0.000000,\n 0.875000, -0.291667, 0.097222, -0.026515, 0.005303, -0.000680, 0.000042]\n \n if order==12:\n coeffs = [ 0.0002, -0.0026, 0.0179, -0.0794, 0.2679, -0.8571, -0.0000, 0.8571, -0.2679, 0.0794, -0.0179, 0.0026, -0.0002]\n \n if order==10:\n coeffs =[-0.0008 , 0.0099, -0.0595, 0.2381, -0.8333, 0.0000, 0.8333, -0.2381, 0.0595, -0.0099, 0.0008]\n \n if order==8:\n coeffs = [1.0/280, -4.0/105, 1.0/5, -4.0/5, 0, 4.0/5, -1.0/5,4.0/105,-1.0/280]\n \n if order==6:\n coeffs = [-1.0/60, 3.0/20, -3.0/4, 0, 3.0/4, -3.0/20, 1.0/60]\n \n if order==4:\n coeffs = [1.0/12, -2.0/3, 0, 2.0/3, -1.0/12]\n \n if order==2:\n coeffs = [-0.5, 0, 0.5]\n \n return coeffs\n \ndef fd_coeff_2(order):\n if order==16:\n coeffs = [-0.000002, 0.000051, -0.000518, 0.003481, -0.017677, 0.075421, -0.311111, 1.777778, -3.054844,\n 
1.777778, -0.311111, 0.075421, -0.017677, 0.003481, -0.000518, 0.000051, -0.000002]\n \n if order==14:\n coeffs = [0.000012, -0.000227, 0.002121, -0.013258, 0.064815, -0.291667, 1.750000, -3.023594, \n 1.750000, -0.291667, 0.064815, -0.013258, 0.002121, -0.000227, 0.000012]\n \n if order==12:\n coeffs = [-0.000060, 0.001039, -0.008929, 0.052910, -0.267857, 1.714286, -2.982778,\n 1.714286, -0.267857, 0.052910, -0.008929, 0.001039, -0.000060]\n \n if order==10:\n coeffs = [0.000317, -0.004960, 0.039683, -0.238095, 1.666667, -2.927222,\n 1.666667, -0.238095, 0.039683, -0.004960, 0.000317]\n \n if order==8:\n coeffs = [-0.001786, 0.025397, -0.200000, 1.600000, -2.847222, 1.600000, -0.200000, 0.025397, -0.001786]\n \n if order==6:\n coeffs = [0.011111, -0.150000, 1.500000, -2.722222, 1.500000, -0.150000, 0.01111]\n \n if order==4:\n coeffs = [-0.083333, 1.333333, -2.500000, 1.333333, -0.08333]\n \n if order==2:\n coeffs = [1, -2, 1]\n \n return coeffs\n\n# Indexes for finite differences\nindx = []\nindy = []\nindz = []\nindt = []\nfor i in range(-width_h,width_h+1):\n indx.append(x + i * h)\n indy.append(y + i * h)\n indz.append(z + i* h)\n \nfor i in range(-width_t,width_t+1):\n indt.append(t + i * s)\n\n# Finite differences\nif dim==2:\n dtt=as_finite_diff(p(x,z,t).diff(t,t),indt)\n dxx=as_finite_diff(p(x,z,t).diff(x,x), indx) \n dzz=as_finite_diff(p(x,z,t).diff(z,z), indz)\n dt=as_finite_diff(p(x,z,t).diff(t), indt)\n lap = dxx + dzz\nelse:\n dtt=as_finite_diff(p(x,y,z,t).diff(t,t),indt)\n dxx=as_finite_diff(p(x,y,z,t).diff(x,x), indx) \n dyy=as_finite_diff(p(x,y,z,t).diff(y,y), indy) \n dzz=as_finite_diff(p(x,y,z,t).diff(z,z), indz)\n dt=as_finite_diff(p(x,y,z,t).diff(t), indt)\n lap = dxx + dyy + dzz\n\n# Argument list for lambdify\narglamb=[]\narglamba=[]\nif dim==2:\n for i in range(-width_t,width_t):\n arglamb.append( p(x,z,indt[i+width_t]))\n arglamba.append( p(x,z,indt[i+width_t+1]))\n \n for i in range(-width_h,width_h+1):\n for j in 
range(-width_h,width_h+1):\n arglamb.append( p(indx[i+width_h],indz[j+width_h],t))\n arglamba.append( p(indx[i+width_h],indz[j+width_h],t))\nelse:\n for i in range(-width_t,width_t):\n arglamb.append( p(x,y,z,indt[i+width_t]))\n arglamba.append( p(x,y,z,indt[i+width_t+1]))\n \n for i in range(-width_h,width_h+1):\n for j in range(-width_h,width_h+1):\n for k in range(-width_h,width_h+1):\n arglamb.append( p(indx[i+width_h],indy[j+width_h],indz[k+width_h],t))\n arglamba.append( p(indx[i+width_h],indy[j+width_h],indz[k+width_h],t))\n \narglamb.extend((q , m, s, h, e))\narglamb=tuple(arglamb)\narglamba.extend((q , m, s, h, e))\narglamba=tuple(arglamba)\n\nsolvepa", "Forward and adjoint stencil\n2D-3D is automatic from the setup", "# Forward wave equation\nwave_equation = m*dtt - lap - q + e*dt\nstencil = solve(wave_equation,solvep)[0]\nts=lambdify(arglamb,stencil,\"numpy\")\nstencil\n\n# Adjoint wave equation\nwave_equationA = m*dtt - lap - d - e*dt\nstencilA = solve(wave_equationA,solvepa)[0]\ntsA=lambdify(arglamba,stencilA,\"numpy\")\nstencilA" ]
[ "code", "markdown", "code", "markdown", "code" ]
ML4DS/ML4all
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering_professor.ipynb
mit
[ "Lab Session: Clustering algorithms for Image Segmentation\nAuthor: Jesús Cid Sueiro\nJan. 2017", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread", "1. Introduction\nIn this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.\n1.1. Load Image\nSeveral images are provided with this notebook:\n\nBinarySeeds.png\nbirds.jpg\nblood_frog_1.jpg\ncKyDP.jpg\nMatricula.jpg\nMatricula2.jpg\nSeeds.png\n\nSelect and visualize image birds.jpg from file and plot it in grayscale", "# Select one of the provided images\n# name = \"birds.jpg\"\nname = \"Seeds.jpg\"\n\nbirds = imread(\"Images/\" + name)\nbirdsG = np.sum(birds, axis=2)\n\n# <SOL>\nplt.imshow(birdsG, cmap=plt.get_cmap('gray'))\nplt.grid(False)\nplt.axis('off')\nplt.show()\n# </SOL>\n", "2. Thresholding\nSelect an intensity threshold by manual inspection of the image histogram", "# <SOL>\nplt.hist(birdsG.ravel(), bins=256) \nplt.show()\n# </SOL>\n", "Plot the binary image after thresholding.", "# <SOL>\nif name == \"birds.jpg\":\n th = 256\nelif name == \"Seeds.jpg\":\n th = 650\n\nbirdsBN = birdsG > th\n\n# If there are more white than black pixels, reverse the image\nif np.sum(birdsBN) > float(np.prod(birdsBN.shape)/2):\n birdsBN = 1-birdsBN\nplt.imshow(birdsBN, cmap=plt.get_cmap('gray'))\nplt.grid(False)\nplt.axis('off')\nplt.show()\n# </SOL>\n", "3. 
Dataset generation\nExtract pixel coordinates dataset from image and plot them in a scatter plot.", "# <SOL>\n(h, w) = birdsBN.shape\nbW = birdsBN * range(w)\nbH = birdsBN * np.array(range(h))[:,np.newaxis]\npSet = [t for t in zip(bW.ravel(), bH.ravel()) if t!=(0,0)]\nX = np.array(pSet)\n# </SOL>\n\nprint(X)\nplt.scatter(X[:, 0], X[:, 1], s=5);\nplt.axis('equal')\nplt.show()", "4. k-means clustering algorithm\nUse the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.", "from sklearn.cluster import KMeans\n\n# <SOL>\nest = KMeans(100) # 100 clusters\nest.fit(X)\ny_kmeans = est.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=5, cmap='rainbow',\n linewidth=0.0)\nplt.axis('equal')\nplt.show()\n# </SOL>\n", "5. Spectral clustering algorithm\n5.1. Affinity matrix\nCompute and visualize the affinity matrix for the given dataset, using an rbf kernel with $\\gamma=5$.", "from sklearn.metrics.pairwise import rbf_kernel\n\n# <SOL>\ngamma = 5 # overridden below\nsf = 4\nXsub = X[0::sf]\nprint(Xsub.shape)\n\ngamma = 0.001\nK = rbf_kernel(Xsub, Xsub, gamma=gamma)\n# </SOL>\n\n# Visualization\n# <SOL>\nplt.imshow(K, cmap='hot')\nplt.colorbar()\nplt.title('RBF Affinity Matrix for gamma = ' + str(gamma))\nplt.grid(False)\nplt.show()\n# </SOL>", "5.2. Spectral clustering\nApply the spectral clustering algorithm, and show the clustering results using a scatter plot.", "# <SOL>\nfrom sklearn.cluster import SpectralClustering\n\nspc = SpectralClustering(n_clusters=100, gamma=gamma, affinity='rbf')\ny_spc = spc.fit_predict(Xsub)\n# </SOL>\n\nplt.scatter(Xsub[:,0], Xsub[:,1], c=y_spc, s=5, cmap='rainbow', linewidth=0.0)\nplt.axis('equal')\nplt.show()", "Try now with other images in the dataset. You will need to re-adjust some free parameters to get better performance." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-lme/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: NORESM2-LME\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lme', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. 
Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. 
Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. 
Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. 
Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. 
Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
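\n# NOTE (illustrative only, not generated template content): for a property \n# whose Cardinality is 1.N, each selected choice is recorded with its own \n# call, for example: \n#     DOC.set_value(\"CO2\") \n#     DOC.set_value(\"CH4\") 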
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. 
Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! 
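\n# NOTE (illustrative only, not generated template content): BOOLEAN and \n# INTEGER properties take an unquoted Python value, for example \n# DOC.set_value(True) or DOC.set_value(2), whereas STRING and ENUM \n# properties take a quoted string. 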
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. 
Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
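\n# NOTE (illustrative only, not generated template content): a Cardinality \n# of 0.1 or 0.N means the property is optional and may be left unset; a \n# 0.N ENUM may also be given several of the listed choices, for example: \n#     DOC.set_value(\"atmosphere_radiation\") \n#     DOC.set_value(\"atmosphere_turbulence_convection\") 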
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. 
Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. 
Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. 
Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. 
Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pyoceans/erddapy
notebooks/quick_intro.ipynb
bsd-3-clause
[ "Quick introduction\nerddapy can be installed with conda\nshell\nconda install --channel conda-forge erddapy\nor pip\nshell\npip install erddapy\nFirst we need to instantiate the ERDDAP URL constructor for a server.\nIn this example we will use https://gliders.ioos.us/erddap.", "from erddapy import ERDDAP\n\n\ne = ERDDAP(\n server=\"https://gliders.ioos.us/erddap\",\n protocol=\"tabledap\",\n response=\"csv\",\n)", "Now we can populate the object with a dataset id, the variables of interest, and \nits constraints. We can download the csvp response with the .to_pandas method.", "e.dataset_id = \"whoi_406-20160902T1700\"\n\ne.variables = [\n \"depth\",\n \"latitude\",\n \"longitude\",\n \"salinity\",\n \"temperature\",\n \"time\",\n]\n\ne.constraints = {\n \"time>=\": \"2016-07-10T00:00:00Z\",\n \"time<=\": \"2017-02-10T00:00:00Z\",\n \"latitude>=\": 38.0,\n \"latitude<=\": 41.0,\n \"longitude>=\": -72.0,\n \"longitude<=\": -69.0,\n}\n\n\ndf = e.to_pandas(\n index_col=\"time (UTC)\",\n parse_dates=True,\n).dropna()\n\ndf.head()\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nfig, ax = plt.subplots(figsize=(17, 2))\ncs = ax.scatter(\n df.index,\n df[\"depth (m)\"],\n s=15,\n c=df[\"temperature (Celsius)\"],\n marker=\"o\",\n edgecolor=\"none\"\n)\n\nax.invert_yaxis()\nax.set_xlim(df.index[0], df.index[-1])\nxfmt = mdates.DateFormatter(\"%H:%Mh\\n%d-%b\")\nax.xaxis.set_major_formatter(xfmt)\n\ncbar = fig.colorbar(cs, orientation=\"vertical\", extend=\"both\")\ncbar.ax.set_ylabel(\"Temperature ($^\circ$C)\")\nax.set_ylabel(\"Depth (m)\");", "Longer introduction\nLet's explore the methods and attributes available in the ERDDAP object.", "from erddapy import ERDDAP\n\n\ne = ERDDAP(server=\"https://gliders.ioos.us/erddap\")\n\n[method for method in dir(e) if not method.startswith(\"_\")]", "All the get_<methods> will return a valid ERDDAP URL for the requested response and options. 
For example, a search for all datasets available.", "url = e.get_search_url(search_for=\"all\", response=\"csv\")\n\nprint(url)", "There are many responses available; see the docs for griddap and\ntabledap respectively.\nThe most useful ones for Pythonistas are the .csv and .nc that can be read with pandas and netCDF4-python respectively.\nLet's load the csv response directly with pandas.", "import pandas as pd\n\n\ndf = pd.read_csv(url)\nprint(\n f'We have {len(set(df[\"tabledap\"].dropna()))} '\n f'tabledap, {len(set(df[\"griddap\"].dropna()))} '\n f'griddap, and {len(set(df[\"wms\"].dropna()))} wms endpoints.'\n)", "We can refine our search by providing some constraints.\nLet's narrow the search area, time span, and look for sea_water_temperature.", "from erddapy.utilities import show_iframe\n\n\nkw = {\n \"standard_name\": \"sea_water_temperature\",\n \"min_lon\": -72.0,\n \"max_lon\": -69.0,\n \"min_lat\": 38.0,\n \"max_lat\": 41.0,\n \"min_time\": \"2016-07-10T00:00:00Z\",\n \"max_time\": \"2017-02-10T00:00:00Z\",\n \"cdm_data_type\": \"trajectoryprofile\"\n}\n\n\nsearch_url = e.get_search_url(response=\"html\", **kw)\nshow_iframe(search_url)", "Note that the search form was populated with the constraints we provided.\nChanging the response from html to csv, we can load it into a data frame.", "search_url = e.get_search_url(response=\"csv\", **kw)\nsearch = pd.read_csv(search_url)\ngliders = search[\"Dataset ID\"].values\n\ngliders_list = \"\\n\".join(gliders)\nprint(f\"Found {len(gliders)} Glider Datasets:\\n{gliders_list}\")", "Now that we know the Dataset IDs, we can explore their metadata with the get_info_url method.", "glider = gliders[-1]\n\ninfo_url = e.get_info_url(dataset_id=glider, response=\"html\")\n\nshow_iframe(src=info_url)", "We can manipulate the metadata and find the variables that have the cdm_profile_variables attribute using the csv response.", "info_url = e.get_info_url(dataset_id=glider, response='csv')\n\ninfo = 
pd.read_csv(info_url)\ninfo.head()\n\n\"\".join(info.loc[info[\"Attribute Name\"] == \"cdm_profile_variables\", \"Value\"])", "Selecting variables by their attributes is such a common operation that erddapy brings its own method to simplify this task.\nThe get_var_by_attr method was inspired by netCDF4-python's get_variables_by_attributes. However, because erddapy is operating on remote servers, it will return the variable names instead of the actual variables.\nHere we check what is/are the variable(s) associated with the standard_name used in the search.\nNote that get_var_by_attr caches the last response in case the user needs to make multiple requests.\n(See the execution times below.)", "%%time\n\n# First one, slow.\ne.get_var_by_attr(\n dataset_id=\"whoi_406-20160902T1700\",\n standard_name=\"sea_water_temperature\"\n)\n\n%%time\n\n# Second one on the same glider, a little bit faster.\ne.get_var_by_attr(\n dataset_id=\"whoi_406-20160902T1700\",\n standard_name=\"sea_water_practical_salinity\"\n)\n\n%%time\n\n# New one, slow again.\ne.get_var_by_attr(\n dataset_id=\"cp_336-20170116T1254\",\n standard_name=\"sea_water_practical_salinity\"\n)", "Another way to browse datasets is via the categorize URL. In the example below we can get all the standard_names available in the dataset with a single request.", "url = e.get_categorize_url(\n categorize_by=\"standard_name\",\n response=\"csv\"\n)\n\npd.read_csv(url)[\"Category\"]", "We can also pass a value to filter the categorize results.", "url = e.get_categorize_url(\n categorize_by=\"institution\",\n value=\"woods_hole_oceanographic_institution\",\n response=\"csv\"\n)\n\ndf = pd.read_csv(url)\nwhoi_gliders = df.loc[~df[\"tabledap\"].isnull(), \"Dataset ID\"].tolist()\nwhoi_gliders", "Let's create a map of all the glider tracks from WHOI.\n(We are downloading a lot of data! 
Note that we will use joblib to parallelize the for loop and get the data faster.)", "from joblib import Parallel, delayed\nimport multiprocessing\n\n\ndef request_whoi(dataset_id):\n e.constraints = None\n e.protocol = \"tabledap\"\n e.variables = [\"longitude\", \"latitude\", \"temperature\", \"salinity\"]\n e.dataset_id = dataset_id\n # Drop units in the first line and NaNs.\n df = e.to_pandas(response=\"csv\", skiprows=(1,)).dropna()\n return (dataset_id, df)\n \n\nnum_cores = multiprocessing.cpu_count()\ndownloads = Parallel(n_jobs=num_cores)(\n delayed(request_whoi)(dataset_id) for dataset_id in whoi_gliders\n)\n\ndfs = {glider: df for (glider, df) in downloads}", "Finally let's see some figures!", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter\n\n\ndef make_map():\n fig, ax = plt.subplots(\n figsize=(9, 9),\n subplot_kw=dict(projection=ccrs.PlateCarree())\n )\n ax.coastlines(resolution=\"10m\")\n lon_formatter = LongitudeFormatter(zero_direction_label=True)\n lat_formatter = LatitudeFormatter()\n ax.xaxis.set_major_formatter(lon_formatter)\n ax.yaxis.set_major_formatter(lat_formatter)\n\n return fig, ax\n\n\nfig, ax = make_map()\nlons, lats = [], []\nfor glider, df in dfs.items():\n lon, lat = df[\"longitude\"], df[\"latitude\"]\n lons.extend(lon.array)\n lats.extend(lat.array)\n ax.plot(lon, lat)\n\ndx = dy = 0.25\nextent = min(lons)-dx, max(lons)+dx, min(lats)+dy, max(lats)+dy\nax.set_extent(extent)\n\nax.set_xticks([extent[0], extent[1]], crs=ccrs.PlateCarree())\nax.set_yticks([extent[2], extent[3]], crs=ccrs.PlateCarree());\n\ndef glider_scatter(df, ax):\n ax.scatter(df[\"temperature\"], df[\"salinity\"],\n s=10, alpha=0.25)\n\nfig, ax = plt.subplots(figsize=(9, 9))\nax.set_ylabel(\"salinity\")\nax.set_xlabel(\"temperature\")\nax.grid(True)\n\nfor glider, df in dfs.items():\n glider_scatter(df, ax)\n\nax.axis([5.5, 30, 30, 38]);", "Extra convenience 
methods for common responses\nOPeNDAP", "from netCDF4 import Dataset\n\n\ne.constraints = None\ne.protocol = \"tabledap\"\ne.dataset_id = \"whoi_406-20160902T1700\"\n\nopendap_url = e.get_download_url(\n    response=\"opendap\",\n)\n\nprint(opendap_url)\nwith Dataset(opendap_url) as nc:\n    print(nc.summary)", "netCDF Climate and Forecast", "e.response = \"nc\"\ne.variables = [\"longitude\", \"latitude\", \"temperature\", \"salinity\"]\n\nnc = e.to_ncCF()\n\nprint(nc.Conventions)\nprint(nc[\"temperature\"])", "xarray", "ds = e.to_xarray(decode_times=False)\n\nds", "Tabledap represents all data in tabular form and the next steps, while a bit awkward, are necessary to match the dimensions properly. The griddap response (unsupported at the moment) does not have this limitation.", "row_size = ds[\"rowSize\"].values\nlon = ds[\"longitude\"].values\nlat = ds[\"latitude\"].values\n\nlons, lats = [], []\nfor x, y, r in zip(lon, lat, row_size):\n    lons.extend([x]*r)\n    lats.extend([y]*r)\n\nimport numpy as np\n\n\ndata = ds[\"temperature\"].values\ndepth = ds[\"depth\"].values\n\nmask = ~np.ma.masked_invalid(depth).mask\n\ndata = data[mask]\ndepth = depth[mask]\nlons = np.array(lons)[mask]\nlats = np.array(lats)[mask]\n\nmask = depth <= 5\n\ndata = data[mask]\ndepth = depth[mask]\nlons = lons[mask]\nlats = lats[mask]\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\n\n\ndx = dy = 1.5\nextent = (\n    ds.geospatial_lon_min-dx, ds.geospatial_lon_max+dx,\n    ds.geospatial_lat_min-dy, ds.geospatial_lat_max+dy\n)\nfig, ax = make_map()\n\ncs = ax.scatter(lons, lats, c=data, s=50, alpha=0.5, edgecolor=\"none\")\ncbar = fig.colorbar(cs, orientation=\"vertical\",\n                    fraction=0.1, shrink=0.9, extend=\"both\")\nax.set_extent(extent)\nax.coastlines(\"10m\");", "iris", "import warnings\n\n# Iris warnings are quite verbose!\nwith warnings.catch_warnings():\n    warnings.simplefilter(\"ignore\")\n    cubes = 
e.to_iris()\n\nprint(cubes)\n\ncubes.extract_strict(\"sea_water_temperature\")", "This example is written in a Jupyter Notebook\nclick here\nto download the notebook so you can run it locally, or click here to run a live instance of this notebook." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
modin-project/modin
examples/tutorial/jupyter/execution/pandas_on_ray/local/exercise_3.ipynb
apache-2.0
[ "<center><h2>Scale your pandas workflows by changing one line of code</h2>\nExercise 3: Not Implemented\nGOAL: Learn what happens when a function is not yet supported in Modin as well as how to extend Modin's functionality using the DataFrame Algebra.\nWhen functionality has not yet been implemented, we default to pandas\n\nWe convert a Modin dataframe to pandas to do the operation, then convert it back once it is finished. These operations will have a high overhead due to the communication involved and will take longer than pandas.\nWhen this is happening, a warning will be given to the user to inform them that this operation will take longer than usual. For example, DataFrame.mask is not yet implemented. In this case, when a user tries to use it, they will see this warning:\nUserWarning: `DataFrame.mask` defaulting to pandas implementation.\nConcept for exercise: Default to pandas\nIn this section of the exercise we will see first-hand how the runtime is affected by operations that are not implemented.", "import modin.pandas as pd\nimport pandas\nimport numpy as np\nimport time\n\nframe_data = np.random.randint(0, 100, size=(2**18, 2**8))\ndf = pd.DataFrame(frame_data).add_prefix(\"col\")\n\npandas_df = pandas.DataFrame(frame_data).add_prefix(\"col\")\n\nmodin_start = time.time()\n\nprint(df.mask(df < 50))\n\nmodin_end = time.time()\nprint(\"Modin mask took {} seconds.\".format(round(modin_end - modin_start, 4)))\n\npandas_start = time.time()\n\nprint(pandas_df.mask(pandas_df < 50))\n\npandas_end = time.time()\nprint(\"pandas mask took {} seconds.\".format(round(pandas_end - pandas_start, 4)))", "Concept for exercise: Register custom functions\nModin's user-facing API is pandas, but it is possible that we do not yet support your favorite or most-needed functionalities. Your user-defined function may also be able to be executed more efficiently if you pre-define the type of function it is (e.g. map, reduce, etc.) using the DataFrame Algebra. 
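The round trip described above (gather to pandas, run the pandas implementation, convert back) can be sketched generically. This is only an illustration of the idea, not Modin's actual internals; `ToyFrame` and `default_to_pandas` are hypothetical names invented for the sketch.

```python
import pandas


class ToyFrame:
    """A stand-in for a distributed dataframe (hypothetical, for illustration)."""

    def __init__(self, pandas_df):
        self._pdf = pandas_df

    def to_pandas(self):
        # In a real engine this would gather all partitions: expensive!
        return self._pdf.copy()

    @classmethod
    def from_pandas(cls, pandas_df):
        # In a real engine this would re-partition the data.
        return cls(pandas_df)


def default_to_pandas(frame, func, *args, **kwargs):
    """Apply an unsupported operation by round-tripping through pandas."""
    pdf = frame.to_pandas()                  # gather
    result = func(pdf, *args, **kwargs)      # run the pandas implementation
    if isinstance(result, pandas.DataFrame):
        return ToyFrame.from_pandas(result)  # scatter back
    return result


frame = ToyFrame(pandas.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}))
masked = default_to_pandas(frame, pandas.DataFrame.mask, lambda df: df < 3)
print(masked.to_pandas())
```

The two conversion steps are exactly where the extra overhead (and the `UserWarning`) comes from.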
To solve either case, it is possible to register a custom function to be applied to your data.\nRegistering a custom function for all query compilers\nTo register a custom function for a query compiler, we first need to import it:\npython\nfrom modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler\nThe PandasQueryCompiler is responsible for defining and compiling the queries that can be operated on by Modin, and is specific to the pandas storage format. Any queries defined here must also both be compatible with and result in a pandas.DataFrame. Many functionalities are very simply implemented, as you can see in the current code: Link.\nIf we want to register a new function, we need to understand what kind of function it is. In our example, we will try to implement a kurtosis on the unary negation of the values in the dataframe, which is a map (unary negation of each cell) followed by a reduce. So we next want to import the function type so we can use it in our definition:\npython\nfrom modin.core.dataframe.algebra import TreeReduce\nThen we can just use the TreeReduce.register classmethod and assign it to the PandasQueryCompiler:\npython\nPandasQueryCompiler.neg_kurtosis_custom = TreeReduce.register(lambda cell_value, **kwargs: ~cell_value, pandas.DataFrame.kurtosis)\nWe include **kwargs to the lambda function since the query compiler will pass all keyword arguments to both the map and reduce functions.\nFinally, we want a handle to it from the DataFrame, so we need to create a way to do that:\n```python\ndef neg_kurtosis_func(self, **kwargs):\n    # The constructor allows you to pass in a query compiler as a keyword argument\n    return self.__constructor__(query_compiler=self._query_compiler.neg_kurtosis_custom(**kwargs))\npd.DataFrame.neg_kurtosis_custom = neg_kurtosis_func\n```\nAnd then you can use it like you usually would:\npython\ndf.neg_kurtosis_custom()", "from modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler\nfrom 
modin.core.dataframe.algebra import TreeReduce\n\nPandasQueryCompiler.neg_kurtosis_custom = TreeReduce.register(lambda cell_value, **kwargs: ~cell_value,\n pandas.DataFrame.kurtosis)\n\nfrom pandas._libs import lib\n# The function signature came from the pandas documentation:\n# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.kurtosis.html\ndef neg_kurtosis_func(self, axis=lib.no_default, skipna=True, level=None, numeric_only=None, **kwargs):\n # We need to specify the axis for the query compiler\n if axis in [None, lib.no_default]:\n axis = 0\n # The constructor allows you to pass in a query compiler as a keyword argument\n # Reduce dimension is used for reduces\n # We also pass all keyword arguments here to ensure correctness\n return self._reduce_dimension(\n self._query_compiler.neg_kurtosis_custom(\n axis=axis, skipna=skipna, level=level, numeric_only=numeric_only, **kwargs\n )\n )\n\npd.DataFrame.neg_kurtosis_custom = neg_kurtosis_func", "Speed improvements\nIf we were to try and replicate this functionality using the pandas API, we would need to call df.applymap with our unary negation function, and subsequently df.kurtosis on the result of the first call. 
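To make the chained approach concrete, here is a standalone plain-pandas sketch on a small made-up frame; the `~frame` step stands in for the notebook's `applymap(lambda x: ~x)`.

```python
import numpy as np
import pandas as pd

# A tiny integer frame to demonstrate the map-then-reduce chain.
frame = pd.DataFrame(np.arange(24).reshape(6, 4), columns=list("abcd"))

# Step 1: the map -- unary (bitwise) negation of every cell.
# (Equivalent to frame.applymap(lambda x: ~x) in the notebook.)
negated = ~frame

# Step 2: the reduce -- kurtosis of each column.
chained = negated.kurtosis()

# Since ~x == -x - 1 is an affine transform, excess kurtosis is unchanged.
assert np.allclose(chained, frame.kurtosis())
print(chained)
```

The fused `TreeReduce` version computes the same result in one pass per partition instead of materializing the negated intermediate frame.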
Let's see how this compares with our new, custom function!", "start = time.time()\n\nprint(pandas_df.applymap(lambda cell_value: ~cell_value).kurtosis())\n\nend = time.time()\npandas_duration = end - start\nprint(\"pandas unary negation kurtosis took {} seconds.\".format(pandas_duration))\n\nstart = time.time()\n\nprint(df.applymap(lambda x: ~x).kurtosis())\n\nend = time.time()\nmodin_duration = end - start\nprint(\"Modin unary negation kurtosis took {} seconds.\".format(modin_duration))\n\ncustom_start = time.time()\n\nprint(df.neg_kurtosis_custom())\n\ncustom_end = time.time()\nmodin_custom_duration = custom_end - custom_start\nprint(\"Modin neg_kurtosis_custom took {} seconds.\".format(modin_custom_duration))\n\nfrom IPython.display import Markdown, display\n\ndisplay(Markdown(\"### As expected, Modin is {}x faster than pandas when chaining the functions; however we see that our custom function is even faster than that - beating pandas by {}x, and Modin (when chaining the functions) by {}x!\".format(round(pandas_duration / modin_duration, 2), round(pandas_duration / modin_custom_duration, 2), round(modin_duration / modin_custom_duration, 2))))", "Congratulations! You have just implemented new DataFrame functionality!\nConsider opening a pull request: https://github.com/modin-project/modin/pulls\nFor a complete list of what is implemented, see the Supported APIs section.\nTest your knowledge: Add a custom function for another tree reduce: finding DataFrame.mad after squaring all of the values\nSee the pandas documentation for the correct signature: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mad.html", "modin_mad_custom_start = time.time()\n\n# Implement your function here! 
Put the result of your custom squared `mad` in the variable `modin_mad_custom`\n# Hint: Look at the kurtosis walkthrough above\n\nmodin_mad_custom = ...\nprint(modin_mad_custom)\n\nmodin_mad_custom_end = time.time()\n\n# Evaluation code, do not change!\nmodin_mad_start = time.time()\nmodin_mad = df.applymap(lambda x: x**2).mad()\nprint(modin_mad)\nmodin_mad_end = time.time()\n\nassert modin_mad_end - modin_mad_start > modin_mad_custom_end - modin_mad_custom_start, \\\n \"Your implementation was too slow, or you used the chaining functions approach. Try again\"\nassert modin_mad._to_pandas().equals(modin_mad_custom._to_pandas()), \"Your result did not match the result of chaining the functions, try again\"", "Now that you are able to create custom functions, you know enough to contribute to Modin!\nPlease move on to Exercise 4 when you are ready" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mathemage/h2o-3
h2o-py/demos/kmeans_aic_bic_diagnostics.ipynb
apache-2.0
[ "Much data produced is unlabeled data, data where the target value or class is unknown. Unsupervised learning gives us the tools to find hidden structure in unlabeled data. There are many techniques, one of which is clustering of the data. Clustering is a method that places objects that are more similar into the same group. One of the steps in clustering is to determine the \"correct\" number of clusters. There are several diagnostics for this step, three of which will be shown in this post. In doing this exercise, we will demonstrate the use of kmeans clustering using H2O's Python API and how to retrieve results of the modeling work from H2O.\nOur first step is to import the H2O Python library and start the H2O engine. H2O.ai provides detailed documentation about the Python API and the rest of H2O. When H2O is started it provides summary information about the amount of memory and number of cores being used by the H2O engine.", "import h2o\nimport imp\nfrom h2o.estimators.kmeans import H2OKMeansEstimator\n\n# Start a local instance of the H2O engine.\nh2o.init();", "The next step of using H2O is to parse and load data into H2O's in-memory columnar compressed storage. Today we will be using the Iris flower data set.", "iris = h2o.import_file(path=\"https://github.com/h2oai/h2o-3/raw/master/h2o-r/h2o-package/inst/extdata/iris_wheader.csv\")", "H2O provides convenient commands to understand the H2OFrame object, the data structure for data that will be used by H2O's machine learning algorithms. Because H2O is often used for very large datasets and in a cluster computing configuration, information about how much the data is compressed in memory and the distribution of the data across the H2O nodes, along with standard summary statistics on the data in the H2OFrame, is provided.", "iris.describe()", "The iris data set is labeled into three classes; there are four measurements that were taken for each iris. 
While we will not be using the labeled data for clustering, it does provide a convenient comparison and visualization of the data as it was provided. In this example I use Seaborn for the visualization of the data.\n<sub><sup>(As an aside, the approach taken here of using all the data for visualization does not scale to large datasets. One approach to dealing with large data sets is to sample the data in H2O and then transfer the sample of data to the Python environment for plotting).</sup></sub>", "try:\n    imp.find_module('pandas')\n    can_pandas = True\n    import pandas as pd\nexcept:\n    can_pandas = False\n    \ntry:\n    imp.find_module('seaborn')\n    can_seaborn = True\n    import seaborn as sns\nexcept:\n    can_seaborn = False\n\n%matplotlib inline\n\nif can_seaborn:\n    sns.set()\n\n\nif can_seaborn:\n    sns.set_context(\"notebook\")\n    sns.pairplot(iris.as_data_frame(True), vars=[\"sepal_len\", \"sepal_wid\", \"petal_len\", \"petal_wid\"], hue=\"class\");", "The next step is to model the data using H2O's kmeans algorithm. We will do this across a range of cluster options and collect each H2O model object as an element in an array. In this example the initial position of the cluster centers is selected at random and the random number seed is set for reproducibility. Because H2O is designed for high performance it is quick and easy to explore many different hyper-parameter settings during modeling to find the model that best suits your needs.", "results = [H2OKMeansEstimator(k=clusters, init=\"Random\", seed=2, standardize=True) for clusters in range(2,13)]\nfor estimator in results:\n    estimator.train(x=iris.col_names[0:-1], training_frame = iris)", "There are three diagnostics that will be demonstrated to help with determining the number of clusters: total within cluster sum of squares, AIC, and BIC.\nTotal within cluster sum of squares sums the squared distance from each point in a cluster to that point's assigned cluster center. This is the minimization criterion of kmeans. 
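As a reference point for that quantity, the total within-cluster sum of squares can be computed by hand for a toy clustering. The points and centers below are made up and independent of H2O; this is only a sketch of the definition.

```python
# Two clusters of 2-D points with their assigned centers (made-up values).
clusters = {
    (0.0, 0.0): [(1.0, 0.0), (-1.0, 0.0), (0.0, 2.0)],
    (10.0, 10.0): [(9.0, 10.0), (11.0, 10.0)],
}

# Total within-cluster sum of squares: the squared distance of each point
# to its assigned cluster center, summed over all points.
tot_withinss = 0.0
for center, points in clusters.items():
    for point in points:
        tot_withinss += sum((p - c) ** 2 for p, c in zip(point, center))

print(tot_withinss)  # 1 + 1 + 4 + 1 + 1 = 8.0
```

This is the value `model.tot_withinss()` reports, computed by H2O over the full training frame.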
The standard guideline for picking the number of clusters is to look for a 'knee' in the plot, showing where the total within sum of squares stops decreasing rapidly. Total within cluster sum of squares can be difficult to interpret, with the criterion being to look for an arbitrary knee in the plot. \nWith this challenge from total within cluster sum of squares, we will also use two merit statistics for determining the number of clusters. AIC and BIC are both measures of the relative quality of a statistical model. AIC and BIC introduce penalty terms for the number of parameters in the model to counter the problem of overfitting; BIC has a larger penalty term than AIC. With these merit statistics one is to look for the number of clusters that minimizes the statistic.\nHere we build a method for extracting the inputs for each diagnostic and calculating the AIC and BIC values on a model. Each model is then inspected by the method and the results plotted for quick analysis.", "import math\n\ndef diagnostics_from_clusteringmodel(model):\n    total_within_sumofsquares = model.tot_withinss()\n    number_of_clusters = len(model.centers())\n    number_of_dimensions = len(model.centers()[0])\n    number_of_rows = sum(model.size())\n    \n    aic = total_within_sumofsquares + 2 * number_of_dimensions * number_of_clusters\n    bic = total_within_sumofsquares + math.log(number_of_rows) * number_of_dimensions * number_of_clusters\n    \n    return {'Clusters':number_of_clusters,\n            'Total Within SS':total_within_sumofsquares, \n            'AIC':aic, \n            'BIC':bic}\n\nif can_pandas:\n    diagnostics = pd.DataFrame( [diagnostics_from_clusteringmodel(model) for model in results])\n    diagnostics.set_index('Clusters', inplace=True)", "From the plot below, to me, it is difficult to find a 'knee' in the rate of decrease of the total within cluster sum of squares. It might be at 4 clusters, it might be 7. 
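The penalty structure can be checked on made-up numbers. The values below are illustrative only, not taken from the iris models; they just mirror the formulas used in `diagnostics_from_clusteringmodel`.

```python
import math

# Illustrative inputs: within-cluster SS, clusters, dimensions, rows.
W, k, d, n = 100.0, 4, 4, 150

aic = W + 2 * d * k
bic = W + math.log(n) * d * k

print(aic, bic)
# With n = 150 rows, ln(n) is about 5.0 > 2, so BIC penalizes the
# same number of parameters more heavily than AIC, as noted above.
assert bic > aic
```

Whenever n > e^2 (about 8 rows), the BIC penalty per parameter exceeds AIC's, which is why BIC tends to select fewer clusters.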
AIC is minimized at 7 clusters, and BIC is minimized at 4 clusters.", "if can_pandas:\n    diagnostics.plot(kind='line');", "For demonstration purposes, I will select the number of clusters to be 4. I will use the H2O model for 4 clusters previously created, and use that to assign cluster membership to each of the original data points. This predicted cluster assignment is then added to the original iris data frame as a new vector (mostly to make plotting easy).", "clusters = 4\npredicted = results[clusters-2].predict(iris)\niris[\"Predicted\"] = predicted[\"predict\"].asfactor()", "Finally, I will plot the predicted cluster membership using the same layout as on the original data earlier in the notebook.", "if can_seaborn:\n    sns.pairplot(iris.as_data_frame(True), vars=[\"sepal_len\", \"sepal_wid\", \"petal_len\", \"petal_wid\"], hue=\"Predicted\");", "This IPython notebook is available for download. Grab the latest version of H2O to try it out and be sure to check out other H2O Python demos. \n-- Hank" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
svdwulp/da-programming-1
week_07_oefeningen-uitwerkingen.ipynb
gpl-2.0
[ "Opmerking bij de onderstaande oefeningen\nProbeer bij het maken van de onderstaande oefeningen programma's te schrijven die generiek zijn. Dat wil zeggen, die niet alleen het juiste antwoord geven voor het bij de opgave gegeven voorbeeld, maar het juiste antwoord in alle mogelijke, op het voorbeeld gelijkende, gevallen.\nDus het is beter als je programma bij 1b niet alleen werkt voor de gegeven list A = [[1, 2, 3], [4, 5, 6], [], [7, 8], [9]], maar ook voor andere lists die er ongeveer hetzelfde uitzien.\nOefening 1\nSchrijf een programma om ... \n<span>a.</span> de verwachtingswaarde van het totaal aantal ogen te bepalen als je gooit met een 6-zijdige dobbelsteen.\n<span>b.</span> de verwachtingswaarde van het totaal aantal ogen te bepalen als je gooit met drie 6-zijdige dobbelstenen.\n<span>c.</span> de verwachtingswaarde van het totaal aantal ogen te bepalen als je gooit met vier dobbelstenen: een 5-zijdige, een 8-zijdige, een 10-zijdige en een 12-zijdige?\n<span>d.</span> de verwachtingswaarde van het totaal aantal ogen te bepalen als je gooit met een opgegeven aantal dobbelstenen met een opgegeven aantal zijden.\npython\ndice = [5, 8, 10, 12] # als in oefening 1c. 
of:\ndice = [6, 6, 8, 10, 12]", "# 1a\npi_times_xi = []\nfor d1 in range(1, 7):\n pi_times_xi.append(d1 / 6)\nexpected_value = sum(pi_times_xi)\nprint(\"Expected value:\", expected_value)\n\n# 1b\npi_times_xi = []\nfor d1 in range(1, 7):\n for d2 in range(1, 7):\n for d3 in range(1, 7):\n pi_times_xi.append((d1 + d2 + d3) / (6**3))\nexpected_value = sum(pi_times_xi)\nprint(\"Expected value:\", expected_value)\n\n# 1c\npi_times_xi = []\nfor d1 in range(1, 6):\n for d2 in range(1, 9):\n for d3 in range(1, 11):\n for d4 in range(1, 13):\n pi_times_xi.append((d1 + d2 + d3 + d4) * (1/5 * 1/8 * 1/10 * 1/12))\nexpected_value = sum(pi_times_xi)\nprint(\"Expected value:\", expected_value)\n\n# 1d: optie 1 - uitrekenen (easy way out)\ndice = [6, 6, 8, 10, 12]\nexpected_values = []\nfor die in dice:\n total = 0\n for value in range(1, die + 1):\n total += value\n expected_values.append(total / die)\nprint(\"Expected value:\", sum(expected_values))\n\n# 1d: optie 2 - alle combinaties genereren zonder recursie (the hard way)\ndice = [6, 6, 8, 10, 12]\nvalues = dice.copy()\npi = 1\nfor value in values:\n pi *= 1 / value\npi_times_xi = 0\nwhile values[0] > 0:\n xi = sum(values)\n pi_times_xi += xi * pi\n i = len(values) - 1\n values[i] -= 1\n while i > 0 and values[i] == 0:\n values[i] = dice[i]\n i -= 1\n values[i] -= 1\nprint(\"Expected value:\", pi_times_xi)\n\n# 1d: optie 3 - alle combinaties genereren met recursie (the easy hard way)\n\ndef dice_combinations(dice):\n result = []\n if len(dice) > 0:\n for i in range(1, dice[0] + 1):\n rest_results = dice_combinations(dice[1:])\n if len(rest_results) > 0:\n for rest_result in rest_results:\n result.append((i,) + rest_result)\n else:\n result.append((i,))\n return result\n\ndice = [6, 6, 8, 10, 12]\ncombinations = dice_combinations(dice)\nexpected_value = 0\nfor combination in combinations:\n expected_value += sum(combination) / len(combinations)\nprint(\"Expected value:\", expected_value)", "Oefening 2\nSchrijf een programma 
dat ... \n<span>a.</span> het aantal geneste lists van een list geeft.\n```python\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\nbepaal aantal geneste lists in A (=4)\n```\n<span>b.</span> het totaal aantal elementen van een list plus het aantal elementen van alle geneste lists in een list geeft.\n```python\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\nbepaal het totaal aantal elementen in A (incl. geneste lists) (=9)\n```\n<span>c.</span> de som van alle elementen in een list, inclusief de elementen in alle geneste lists, geeft.\n```python\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\nbepaal de som van alle elementen in A (incl. geneste lists) (=45)\n```", "# 2a\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\ncounter = 0\nfor elem in A:\n    if isinstance(elem, list):\n        counter += 1\nprint(\"Aantal geneste lists:\", counter)\n\n# 2b\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\ncounter = 0\nfor elem in A:\n    if isinstance(elem, list):\n        for e in elem:\n            counter += 1\n    else:\n        counter += 1\nprint(\"Aantal elementen:\", counter)\n\n# 2c\nA = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]\ntotal = 0\nfor elem in A:\n    if isinstance(elem, list):\n        for e in elem:\n            total += e\n    else:\n        total += elem\nprint(\"Som van de elementen:\", total)", "Oefening 3\nSchrijf een programma dat ... \n<span>a.</span> een platte list maakt met alle elementen van een list, inclusief de elementen in geneste lists.\n```python\nA = [[1, 2, 3], 4, 5, [6], [7, 8], 9]\nmaak een 'platte' list met de elementen van A\n(= [1, 2, 3, 4, 5, 6, 7, 8, 9])\n```\n<span>b.</span> alle opeenvolgende duplicaten van elementen in een list verwijdert.\n```python\nA = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]\nverwijder opeenvolgende duplicaten van elementen in A\n(= [1, 2, 3, 4, 1])\n```\n<span>c.</span> een run-length encoded versie van een list maakt. 
Daarbij maak je een nieuwe list, waarin alle opeenvolgende duplicaten van de elementen van de list zijn vervangen door een tuple met daarin het element en het aantal duplicaten.\n```python\nA = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]\nmaak een run-length encoded versie van A\n(= [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)])\n```\n<span>d.</span> een run-length encoded list (zie 2c.) uitpakt naar een platte list.\n```python\nA = [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)]\npak de run-length encoded list A uit\n(= [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1])\n```", "# 3a\nA = [[1, 2, 3], 4, 5, [6], [7, 8], 9]\nresult = []\nfor elem in A:\n if isinstance(elem, list):\n for e in elem:\n result.append(e)\n else:\n result.append(elem)\nprint(\"Flattened list:\", result)\n\n# 3b\nA = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]\nresult = []\nlast_elem = None\nfor elem in A:\n if elem != last_elem:\n result.append(elem)\n last_elem = elem\nprint(\"Duplicaten verwijderd:\", result)\n\n# 3c\nA = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]\nresult = []\nlast_elem = None\nelem_count = 0\nfor elem in A:\n if elem != last_elem:\n if elem_count > 0:\n result.append((last_elem, elem_count))\n last_elem = elem\n elem_count = 1\n else:\n elem_count += 1\nif elem_count > 0:\n result.append((last_elem, elem_count))\nprint(\"Run-length encoded versie:\", result)\n\n# 3d\nA = [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)]\n# pak de run-length encoded list A uit\n# (= [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1])\nresult = []\nfor elem, elem_count in A:\n result.extend([elem] * elem_count)\nprint(\"Uitgepakte versie van run-length encoded list:\", result)", "Oefening 4\nSchrijf een programma dat ... \n<span>a.</span> alle elementen in een list dupliceert in dezelfde list, dus zonder een nieuwe list te maken.\n```python\nA = [1, 2, 3, 4]\ndupliceer de elementen in A (zonder een nieuwe list te maken)\n(A = [1, 1, 2, 2, 3, 3, 4, 4])\n```\n<span>b.</span> de elementen van een list een opgegeven aantal plaatsen naar links 'draait'. 
Elementen die er aan de linkerzijde af zouden vallen, komen rechts terug in de list.\n```python\nA = list(\"abcdefgh\")\npositions = 3\ndraai de list A over positions elementen naar links\n(= [\"d\", \"e\", \"f\", \"g\", \"h\", \"a\", \"b\", \"c\"])\n```", "# 4a\nA = [1, 2, 3, 4]\nfor i in range(len(A)):\n    A.insert(2*i, A[2*i])\nprint(\"Lijst met gedupliceerde elementen:\", A)\n\n# 4b\nA = list(\"abcdefgh\")\npositions = 3\n# draai de list A over positions elementen naar links\n# (= [\"d\", \"e\", \"f\", \"g\", \"h\", \"a\", \"b\", \"c\"])\nresult = A[positions:] + A[:positions]\nprint(\"Geroteerde lijst:\", result)", "Oefening 5\nSchrijf een programma dat ... \n<span>a.</span> een opgegeven aantal, toevallig gekozen elementen uit een list trekt (lotto).\n```python\nA = [1, 3, 5, 7, 8, 9]\ncount = 3\ntrek count toevallig gekozen elementen uit A\n(= [7, 3, 5] of [1, 8, 3] etc.)\n```", "# 5a\nimport random\n\nA = [1, 3, 5, 7, 8, 9]\ncount = 3\nresult = []\nfor i in range(count):\n    index = random.randrange(len(A))\n    result.append(A[index])\n    del A[index]\nprint(\"Willekeurig gekozen elementen:\", result)", "Oefening 6\n<span>a.</span> Schrijf een programma dat twee matrices bij elkaar optelt.\n```python\nA = [[1, 2, 3], [4, 5, 6]]\nB = [[9, 8, 7], [6, 5, 4]]\nbepaal C = A + B\n```\n<span>b.</span> Schrijf een programma dat twee matrices vermenigvuldigt.\n```python\nA = [[1, 2, 3], [4, 5, 6]]\nB = [[9, 8], [7, 6], [5, 4]]\nbepaal C = A x B\n```", "# 6a\nA = [[1, 2, 3], [4, 5, 6]]\nB = [[9, 8, 7], [6, 5, 4]]\nresult = []\nfor i in range(len(A)):\n    row = []\n    for j in range(len(A[i])):\n        row.append(A[i][j] + B[i][j])\n    result.append(row)\nprint(\"Som matrix:\", result)\n\n# 6b\nA = [[1, 2, 3], [4, 5, 6]]\nB = [[9, 8], [7, 6], [5, 4]]\nresult = []\nfor i in range(len(A)):\n    row = []\n    for j in range(len(B[0])):\n        total = 0\n        for k in range(len(B)):\n            total += A[i][k] * B[k][j]\n        row.append(total)\n    result.append(row)\nprint(\"Product matrix:\", result)", "Oefening 7\nAls we een matrix 
niet met een geneste list zouden modelleren, \nmaar met een platte list, hoe zou je dan een element van de matrix \nkunnen aanwijzen?\nDe afmeting van de matrix is bekend.\nBijvoorbeeld:\n$$\nA = \\begin{bmatrix}\n 1 & 2 & 3 \\\n 4 & 5 & 6 \\\n 7 & 8 & 9 \\end{bmatrix}\n$$\nHoe vind ik nu $A_{r,c}$ ?\nIn Python:\n```python\nA = [1, 2, 3, 4, 5, 6, 7, 8, 9]\nrows, cols = (3, 3) # matrix dimensions\nr, c = (1, 2) # 0-based row and column index\nhoe vind ik het element op A(r, c)?\n```", "A = [1, 2, 3, 4, 5, 6, 7, 8, 9]\nrows, cols = (3, 3) # matrix dimensions\nr, c = (1, 2) # 0-based row and column index\nindex = r * cols + c\nprint(\"A(r,c):\", A[index])", "Oefening 8\nIn het 8-koninginnen-probleem probeer je zoveel mogelijk koninginnen op een schaakbord te plaatsen. De koninginnen mogen niet de mogelijkheid hebben \nelkaar te slaan.\nEen koningin mag op het schaakbord zowel horizontaal, verticaal als diagonaal lopen.\nEen voorbeeld van een geldige oplossing:\n\n<span>a.</span> Op hoeveel mogelijke manieren kun je de 8 koninginnen op de vakjes van het schaakbord plaatsen als je geen rekening hoeft te houden met het slaan?\n<span>b.</span> Schrijf een Python programma dat een geldige oplossing van het probleem zoekt en vindt.", "# 8a\nfrom IPython.display import display, Latex\nfrom scipy.special import comb\n\ncombinations = comb(64, 8, exact=True)\ndisplay(Latex(r\"Er zijn 8 koninginnen te verdelen over 64 vakjes:\"))\ndisplay(Latex(r\"$\\binom{{64}}{{8}} = {}$\".format(combinations)))\n\n# 8b\n# We zouden het schaakbord kunnen modelleren met een \n# geneste list, waarbij iedere sublist een rij voorstelt,\n# en deze dan vullen met een Q bij ieder vakje waar een \n# koningin staat.\n# Maar aangezien koninginnen toch nooit in dezelfde kolom \n# kunnen staan, kunnen we net zo goed alleen de rijnummers\n# van de 8 koninginnen opslaan.\n\nboard = [0] * 8\nwhile True:\n    # oplossing controleren\n    # koninginnen mogen niet op dezelfde rij staan:\n    rows_valid = True\n    for col in 
range(len(board) - 1):\n        if board[col] in board[col+1:]:\n            rows_valid = False\n            break\n    if rows_valid:\n        # koninginnen mogen niet in dezelfde diagonaal staan,\n        # top-left to bottom-right:\n        diagonal_tlbr = []\n        for col in range(len(board)):\n            # calculate diagonal:\n            diagonal_tlbr.append(col - board[col])\n        diagonal_tlbr_valid = True\n        for d in range(len(diagonal_tlbr) - 1):\n            if diagonal_tlbr[d] in diagonal_tlbr[d+1:]:\n                diagonal_tlbr_valid = False\n                break\n        if diagonal_tlbr_valid:\n            # koninginnen mogen niet in dezelfde diagonaal staan,\n            # bottom-left to top-right:\n            diagonal_bltr = []\n            for col in range(len(board)):\n                # calculate diagonal:\n                diagonal_bltr.append(col + board[col])\n            diagonal_bltr_valid = True\n            for d in range(len(diagonal_bltr) - 1):\n                if diagonal_bltr[d] in diagonal_bltr[d+1:]:\n                    diagonal_bltr_valid = False\n                    break\n            if diagonal_bltr_valid:\n                print(\"Valid solution:\", board)\n                break\n    # set up next board:\n    col = -1\n    board[col] += 1\n    while col > -len(board) and board[col] >= len(board):\n        board[col] = 0\n        col -= 1\n        board[col] += 1\n    if board[col] >= len(board):\n        print(\"No valid solution found!\")\n        break", "Oefening 9\nSchrijf een programma dat ... 
\n<span>a.</span> de geneste lists in een list sorteert op lengte van de \ngeneste list.\n```python\nA = [\n    [\"a\", \"b\", \"c\"], [\"d\", \"e\"],\n    [\"f\", \"g\", \"h\"], [\"d\", \"e\"],\n    [\"i\", \"j\", \"k\", \"l\"], [\"m\", \"n\"],\n    [\"o\"]\n]\nmaak een nieuwe lijst, waarin de elementen van A\ngesorteerd zijn op lengte van de list\n(= [[\"o\"], [\"d\", \"e\"], [\"d\", \"e\"], [\"m\", \"n\"],\n[\"a\", \"b\", \"c\"], [\"f\", \"g\", \"h\"], [\"i\", \"j\", \"k\", \"l\"]])\n```", "# 9a\nA = [\n    [\"a\", \"b\", \"c\"], [\"d\", \"e\"],\n    [\"f\", \"g\", \"h\"], [\"d\", \"e\"],\n    [\"i\", \"j\", \"k\", \"l\"], [\"m\", \"n\"],\n    [\"o\"]\n]\n\nsort_order = []\nfor i in range(len(A)):\n    sort_order.append((len(A[i]), i))\nsort_order.sort()\nresult = []\nfor length, i in sort_order:\n    result.append(A[i])\nprint(\"Sorted by length of sublist:\", result)" ]
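Het sorteren op (lengte, index)-tuples uit 9a kan worden gecontroleerd met Pythons ingebouwde sortering op sleutel; omdat `sorted` stabiel is, geeft `key=len` dezelfde volgorde:

```python
A = [
    ["a", "b", "c"], ["d", "e"],
    ["f", "g", "h"], ["d", "e"],
    ["i", "j", "k", "l"], ["m", "n"],
    ["o"]
]

# sorted() is stabiel: bij gelijke lengte blijft de oorspronkelijke
# volgorde behouden, net als bij sorteren op (lengte, index)-tuples.
result = sorted(A, key=len)
print(result)

verwacht = [["o"], ["d", "e"], ["d", "e"], ["m", "n"],
            ["a", "b", "c"], ["f", "g", "h"], ["i", "j", "k", "l"]]
assert result == verwacht
```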
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/alphalens
alphalens/examples/intraday_factor.ipynb
apache-2.0
[ "Alphalens: intraday factor\nIn this notebook we use Alphalens to analyse the performance of an intraday factor, which is computed daily but the stocks are bought at marker open and sold at market close with no overnight positions.", "%pylab inline --no-import-all\nimport alphalens\nimport pandas as pd\nimport numpy as np\nimport datetime\n\nimport warnings\nwarnings.filterwarnings('ignore')", "Below is a simple mapping of tickers to sectors for a small universe of large cap stocks.", "sector_names = {\n 0 : \"information_technology\",\n 1 : \"financials\",\n 2 : \"health_care\",\n 3 : \"industrials\",\n 4 : \"utilities\", \n 5 : \"real_estate\", \n 6 : \"materials\", \n 7 : \"telecommunication_services\", \n 8 : \"consumer_staples\", \n 9 : \"consumer_discretionary\", \n 10 : \"energy\" \n}\n\nticker_sector = {\n \"ACN\" : 0, \"ATVI\" : 0, \"ADBE\" : 0, \"AMD\" : 0, \"AKAM\" : 0, \"ADS\" : 0, \"GOOGL\" : 0, \"GOOG\" : 0, \n \"APH\" : 0, \"ADI\" : 0, \"ANSS\" : 0, \"AAPL\" : 0, \"AMAT\" : 0, \"ADSK\" : 0, \"ADP\" : 0, \"AVGO\" : 0,\n \"AMG\" : 1, \"AFL\" : 1, \"ALL\" : 1, \"AXP\" : 1, \"AIG\" : 1, \"AMP\" : 1, \"AON\" : 1, \"AJG\" : 1, \"AIZ\" : 1, \"BAC\" : 1,\n \"BK\" : 1, \"BBT\" : 1, \"BRK.B\" : 1, \"BLK\" : 1, \"HRB\" : 1, \"BHF\" : 1, \"COF\" : 1, \"CBOE\" : 1, \"SCHW\" : 1, \"CB\" : 1,\n \"ABT\" : 2, \"ABBV\" : 2, \"AET\" : 2, \"A\" : 2, \"ALXN\" : 2, \"ALGN\" : 2, \"AGN\" : 2, \"ABC\" : 2, \"AMGN\" : 2, \"ANTM\" : 2,\n \"BCR\" : 2, \"BAX\" : 2, \"BDX\" : 2, \"BIIB\" : 2, \"BSX\" : 2, \"BMY\" : 2, \"CAH\" : 2, \"CELG\" : 2, \"CNC\" : 2, \"CERN\" : 2,\n \"MMM\" : 3, \"AYI\" : 3, \"ALK\" : 3, \"ALLE\" : 3, \"AAL\" : 3, \"AME\" : 3, \"AOS\" : 3, \"ARNC\" : 3, \"BA\" : 3, \"CHRW\" : 3,\n \"CAT\" : 3, \"CTAS\" : 3, \"CSX\" : 3, \"CMI\" : 3, \"DE\" : 3, \"DAL\" : 3, \"DOV\" : 3, \"ETN\" : 3, \"EMR\" : 3, \"EFX\" : 3,\n \"AES\" : 4, \"LNT\" : 4, \"AEE\" : 4, \"AEP\" : 4, \"AWK\" : 4, \"CNP\" : 4, \"CMS\" : 4, \"ED\" : 4, \"D\" : 4, \"DTE\" : 4,\n \"DUK\" : 4, 
\"EIX\" : 4, \"ETR\" : 4, \"ES\" : 4, \"EXC\" : 4, \"FE\" : 4, \"NEE\" : 4, \"NI\" : 4, \"NRG\" : 4, \"PCG\" : 4,\n \"ARE\" : 5, \"AMT\" : 5, \"AIV\" : 5, \"AVB\" : 5, \"BXP\" : 5, \"CBG\" : 5, \"CCI\" : 5, \"DLR\" : 5, \"DRE\" : 5,\n \"EQIX\" : 5, \"EQR\" : 5, \"ESS\" : 5, \"EXR\" : 5, \"FRT\" : 5, \"GGP\" : 5, \"HCP\" : 5, \"HST\" : 5, \"IRM\" : 5, \"KIM\" : 5,\n \"APD\" : 6, \"ALB\" : 6, \"AVY\" : 6, \"BLL\" : 6, \"CF\" : 6, \"DWDP\" : 6, \"EMN\" : 6, \"ECL\" : 6, \"FMC\" : 6, \"FCX\" : 6,\n \"IP\" : 6, \"IFF\" : 6, \"LYB\" : 6, \"MLM\" : 6, \"MON\" : 6, \"MOS\" : 6, \"NEM\" : 6, \"NUE\" : 6, \"PKG\" : 6, \"PPG\" : 6,\n \"T\" : 7, \"CTL\" : 7, \"VZ\" : 7, \n \"MO\" : 8, \"ADM\" : 8, \"BF.B\" : 8, \"CPB\" : 8, \"CHD\" : 8, \"CLX\" : 8, \"KO\" : 8, \"CL\" : 8, \"CAG\" : 8,\n \"STZ\" : 8, \"COST\" : 8, \"COTY\" : 8, \"CVS\" : 8, \"DPS\" : 8, \"EL\" : 8, \"GIS\" : 8, \"HSY\" : 8, \"HRL\" : 8,\n \"AAP\" : 9, \"AMZN\" : 9, \"APTV\" : 9, \"AZO\" : 9, \"BBY\" : 9, \"BWA\" : 9, \"KMX\" : 9, \"CCL\" : 9, \n \"APC\" : 10, \"ANDV\" : 10, \"APA\" : 10, \"BHGE\" : 10, \"COG\" : 10, \"CHK\" : 10, \"CVX\" : 10, \"XEC\" : 10, \"CXO\" : 10,\n \"COP\" : 10, \"DVN\" : 10, \"EOG\" : 10, \"EQT\" : 10, \"XOM\" : 10, \"HAL\" : 10, \"HP\" : 10, \"HES\" : 10, \"KMI\" : 10\n}\n\nimport pandas_datareader.data as web\n\ntickers = list(ticker_sector.keys())\npan = web.DataReader(tickers, \"google\", datetime.datetime(2017, 1, 1), datetime.datetime(2017, 6, 1))", "Our example factor ranks the stocks based on their overnight price gap (yesterday close to today open price). We'll see if the factor has some alpha or if it is pure noise.", "today_open = pan['Open']\ntoday_close = pan['Close']\nyesterday_close = today_close.shift(1)\n\nfactor = (today_open - yesterday_close) / yesterday_close", "The pricing data passed to alphalens should contain the entry price for the assets so it must reflect the next available price after a factor value was observed at a given timestamp. 
Those prices must not be used in the calculation of the factor values for that time. Always double check to ensure you are not introducing lookahead bias to your study.\nThe pricing data must also contain the exit price for the assets: for period 1 the price at the next timestamp will be used, for period 2 the price after 2 timestamps will be used, and so on.\nThere are no restrictions/assumptions on the time frequencies a factor should be computed at, nor on the specific time a factor should be traded (trading at the open vs trading at the close vs intraday trading); it is only required that factor and price DataFrames are properly aligned given the rules above.\nIn our example, we want to buy the stocks at market open, so we need the open price at the exact timestamps of the factor values, and we want to sell the stocks at market close, so we will add the close prices too, which will be used to compute period 1 forward returns as they appear just after the factor value timestamps. The returns computed by Alphalens will therefore be based on the difference between open and close asset prices.\nIf we had other prices we could compute other period returns, for example one hour after market open, two hours, and so on. We could have added those prices right after the open prices and instructed Alphalens to compute 1, 2, 3... 
periods too and not only period 1 like in this example.", "# Fix time as the data source doesn't set it\ntoday_open.index += pd.Timedelta('9h30m')\ntoday_close.index += pd.Timedelta('16h')\n# pricing will contain both open and close\npricing = pd.concat([today_open, today_close]).sort_index()\n\npricing.head()\n\n# Align factor to open price\nfactor.index += pd.Timedelta('9h30m')\nfactor = factor.stack()\nfactor.index = factor.index.set_names(['date', 'asset'])\n\nfactor.unstack().head()", "Run Alphalens\nPeriod 1 will show returns from market open to market close, while period 2 will show returns from today's open to tomorrow's open.", "non_predictive_factor_data = alphalens.utils.get_clean_factor_and_forward_returns(factor, \n pricing, \n periods=(1,2),\n groupby=ticker_sector,\n groupby_labels=sector_names)\n\nalphalens.tears.create_returns_tear_sheet(non_predictive_factor_data)\n\nalphalens.tears.create_event_returns_tear_sheet(non_predictive_factor_data, pricing)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
doc/notebooks/automaton.sum.ipynb
gpl-3.0
[ "automaton.sum(aut,algo=\"auto\")\nBuild an automaton whose behavior is the sum of the behaviors of the input automata.\nThe algorithm has to be one of these:\n\n\"auto\": default parameter, same as \"standard\" if parameters fit the standard preconditions, \"general\" otherwise.\n\"general\": general addition or union, no additional preconditions.\n\"standard\": standard addition.\n\nPreconditions:\n- \"standard\": automaton has to be standard.\nPostconditions:\n- \"standard\": the result automaton is standard.", "import vcsn\nctx = vcsn.context('lal_char, q')\naut = lambda e: ctx.expression(e).standard()", "Examples\nSum of standard automata.", "aut('a') + aut('b')\n\naut('a').sum(aut('b'), \"general\")", "Sum of non standard automata.", "%%automaton -s a\ncontext = \"lal_char, q\"\n$ -> 0\n1 -> 0 b\n0 -> 1 a\n1 -> $ <2>\n\na+a" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
voyagenius/ds
EDA.ipynb
mit
[ "Data Ingestion & Exploratory Analysis of the UFO Database\nUnidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts and hence people all over the United States report such findings online at National UFO Report Center (NUFORC). Some of these reports are hoax and amongst those that seem legitimate, there isn’t currently an established method to confirm that they indeed are events related to flying objects from aliens in outer space. However, the database provides a wealth of information that can be exploited to provide various analyses and insights such as social reporting, identifying real-time spatial events and much more. We perform analysis to localize these time-series geospatial events and correlate with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and also are time-based. We present a scheme consisting of feature extraction by filtering related datasets over a time-band of 24 hrs and use multi-dimensional textual summaries along with geospatial information to determine best clusters of UFO activity. Later, we look at cluster density and data visualization to search the space of various cluster realizations to decide best probable clusters that provide us information about proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best possible features available such as region, week, time-period and duration. 
Lastly, we show the performance of the scheme on various days and discover interesting correlations with real-time events!", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport statsmodels.api as sm\nimport pandas as pd\nimport numpy as np\nimport geocoder\nimport re\nimport math", "The UFO database as you can see below has the following columns:\n * Date/Time of the event.\n * City where the event was reported.\n * State of the city.\n * Shape that the observer thought they saw.\n * Duration of the event.\n * Summary - description of the UFO event.\n * Date posted by the UFO website.", "ufo_data = pd.read_csv(\"data/ufo/ufo_data.csv\", sep='\\t')\nufo_data.head()", "The data used in this research is collected and made public by the National UFO Reporting Center launched in 1974. The NUFORC site hosts an extensive database of UFO sighting reports that are submitted either online or through a 24-hour telephone hotline. The data undergoes an internal quality check by the NUFORC staff before being made public and, at the moment, presents one of the most comprehensive UFO report databases available online. It provides the following information: Date/Time, City, State, Shape, Duration, Summary, and Posting date. The data occasionally gets used for local news reports as well as broader-level reporting. \nThe Date/Time needs to be parsed to extract the date components. The datetime utility cannot be easily used as the Date format doesn't come with padded 0s for single digits.", "ufo_data['Month'] = [int(r.split('/')[0]) for r in ufo_data['Date/Time']]\nufo_data['Day'] = [int(r.split('/')[1]) for r in ufo_data['Date/Time']]\nufo_data['Date'] = [(r.split(' ')[0]) for r in ufo_data['Date/Time']]\nufo_data['Time'] = [(r.split(' ')[-1]) for r in ufo_data['Date/Time']]", "Let us bin the events by time, since the time of reporting is continuous and by the minute. 
A binning of the events helps analyse statistics such as: how many events occurred around noon?", "def time_period(time):\n '''\n Convert time into periods 1,2,...,12. If the time is hh:mm \n and 2i <= hh < 2(i+1), then hh:mm belongs to period i+1.\n Suppose the time is 6:30am; since 2*3 <= 6 < 2*4, it belongs to period 4. \n \n Args:\n time (time): Time period.\n Returns:\n periods (time): Formatted time period.\n \n '''\n periods=[]\n for t in time:\n try:\n p = int(t.split(':')[0])\n for i in range(12):\n if(p>=2*i) & (p<2*(i+1)):\n periods.append(i+1)\n except ValueError:\n periods.append(-1)\n return periods \n\nufo_data['TimePeriod'] = time_period(ufo_data['Time'])\nufo_data.head(1)", "Let us look at events where durations are reported as null.", "null_data = ufo_data[ufo_data.Duration.isnull()]\nnull_data.head(1)\n\ndef duration_sec(duration_text):\n '''\n Add a duration column with normalized units of measurement (seconds). Extracts the duration in seconds\n by inferring the duration from the text.\n Args:\n text (str): String of text.\n Returns:\n (int): Time duration in seconds.\n \n '''\n try:\n metric_text = [\"second\",\"s\",\"Second\",\"minute\",\"m\",\"min\",\"Minute\",\"hour\",\"h\",\"Hour\"]\n metric_seconds = [1,1,1,60,60,60,60,3600,3600,3600]\n for m,st in zip(metric_text, metric_seconds):\n regex = \"\\s*(\\d+)\\+?\\s*{}s?\".format(m)\n a = re.findall(regex, duration_text)\n if len(a)>0:\n return int(int(a[0]) * st)\n else:\n return None\n except:\n return None", "Impute the duration column with the mean", "ufo_data[\"Duration_Sec\"] = ufo_data[\"Duration\"].apply(duration_sec)\nufo_data[\"Duration_Sec\"] = ufo_data.Duration_Sec.fillna(int(ufo_data.Duration_Sec.mean()))\nufo_data['Duration_Sec'].unique()", "From the above you can see that there are no null values in Duration_Sec. 
\nLet us look at the unique states and see which states report the most UFO events.", "sns.set(style=\"darkgrid\")\nplt.figure(figsize=(8, 12))\nsns.countplot(y=\"State\", data=ufo_data, palette=\"Greens_d\");", "Let us look at how many of these are in the United States.", "all_states = ufo_data['State'].value_counts(dropna=False) \nUS = [\"AL\", \"AK\", \"AZ\", \"AR\", \"CA\", \"CO\", \"CT\", \"DC\", \"DE\",\n \"FL\", \"GA\", \"HI\", \"ID\", \"IL\", \"IN\", \"IA\", \"KS\", \"KY\",\n \"LA\", \"ME\", \"MD\", \"MA\", \"MI\", \"MN\", \"MS\", \"MO\", \"MT\",\n \"NE\", \"NV\", \"NH\", \"NJ\", \"NM\", \"NY\", \"NC\", \"ND\", \"OH\",\n \"OK\", \"OR\", \"PA\", \"RI\", \"SC\", \"SD\", \"TN\", \"TX\", \"UT\",\n \"VT\", \"VA\", \"WA\", \"WV\", \"WI\", \"WY\"]\n\nprint([state for state in all_states.index if state not in US])", "We can see from the above that many states are outside of the United States. Since we are looking to study events that occurred in the United States, let us create a column \"US\".", "def is_US(state):\n '''\n Check if the state is in the United States or not.\n Args:\n state (str): state reported in UFO data.\n Returns:\n (boolean): True if state is in US or False otherwise.\n \n '''\n if state in US:\n return True\n else:\n return False\n\nufo_data['US'] = ufo_data['State'].apply(is_US)\nufo_data.head(1)", "Let us see how much data we stand to lose by considering only data inside the US, by plotting a count plot with a hue.", "plt.figure(figsize=(8, 12))\ng = sns.countplot(y=\"State\", hue=\"US\", data=ufo_data)", "We can see from the count plot that there is only a small percentage of data that we stand to lose by ignoring states outside of the United States. Let us keep only states within the US for our initial analysis.", "ufo_data = ufo_data[ufo_data.US == 1] ", "Let us explore how the city column looks. It is important to take a look at lots of values in the dataset to check for anomalies or data with noise. 
For example, the city data has noise such as text containing additional information within () and other such artifacts.", "#print(ufo_data['City'].unique()[0:1000])", "Transform the City column to exclude all irrelevant text entries (e.g., additional comments).", "def clean_city_data(city_name):\n '''\n Cleans the city string of additional comments and irrelevant data.\n \n Args:\n city_name (str): Name of the city\n Returns:\n (str): correct city name.\n \n '''\n try:\n city_name = city_name.split('/')[0]\n city_name = city_name.split('(')[0]\n city_name = city_name.split(',')[0]\n city_name = city_name.split('?')[0]\n return city_name\n except AttributeError:\n return 'Unknown'\n \nufo_data['City'] = ufo_data['City'].apply(clean_city_data)", "Latitude & Longitude of reported events.\nThese events are reported at various cities. We need the latitude and longitude information to perform geospatial analysis. The process of converting an address to a latitude and longitude is called forward geocoding. The geopy library is useful for forward geocoding. It connects to a network, looks up the address, and returns the latitude and longitude information. The code below is used to determine the coordinates. We have already run the code below and generated a file. Hence, do not run this portion of the code below. Also, it takes a while to look up all the addresses. 
\nWarning\nDon't run this code as it is a placeholder; instead use the data exported to a csv: data_coord.csv\nThis code was used to extract coordinates for each state-city combination.", "# from geopy.geocoders import ArcGIS\n# from geopy.exc import GeocoderTimedOut\n\n# Latitude=[]\n# Longitude=[]\n# geolocator = ArcGIS()\n# fails=[]\n# for i in range(len(df)):\n# try:\n# location = geolocator.geocode(df.iloc[i,1]+','+df.iloc[i,2])\n# df.iloc[i,-2] = location.latitude\n# df.iloc[i,-1] = location.longitude\n# print (i, location.address, df.iloc[i,-2], df.iloc[i,-1])\n# df.to_csv('data_coord_1.csv',sep='\\t', encoding='utf-8', index=False)\n# except (AttributeError, GeocoderTimedOut) as e:\n# df.to_csv('data_coord_1.csv',sep='\\t', encoding='utf-8', index=False)\n# print ('exception:', i)", "Now that we have cleaned and processed the data, let us extract the right columns from the dataframe that are useful and shall be our features for modeling.", "features = ['Date', 'Month', 'Day', 'Time', 'TimePeriod', 'City', 'State', 'Lat', 'Long',\n 'Shape', 'Duration', 'Duration_Sec', 'Summary', 'Posted', 'US']\n\nufo_data = pd.DataFrame(ufo_data, columns = features)\nufo_data.head(1)\n\nufo_data['Lat'] = np.nan\nufo_data['Long'] = np.nan\n\nufo_data = pd.read_csv(\"data/ufo/data_coord.csv\", sep='\\t')\nufo_data = ufo_data[(~ufo_data.Long.isnull()) & (~ufo_data.Lat.isnull())]\n\nufo_data['Date'] = pd.to_datetime(ufo_data['Date'])\nufo_data.set_index('Date', inplace=True)\nufo_data = ufo_data.reset_index()\nufo_data['WeekDay'] = ufo_data['Date'].dt.dayofweek\nufo_data['Week'] = ufo_data['Date'].dt.weekofyear\nufo_data['Quarter'] = ufo_data['Date'].dt.quarter\nufo_data['Year'] = ufo_data['Date'].dt.year\n\nufo_data.columns", "Adding other data sources\nIn order to have a better understanding of the UFO reports, let us add the following external-data sources: \n* dates of astronomical events in CY 2014-2015\n* dates of national holidays in CY 2014-2015\n* US state population and 
share of active military population per year per each state. \nUS state population and share of active military population per year per each state.", "ufo_stats = pd.read_excel(\"data/ufo/stats2.xlsx\")\nufo_stats.head(2)", "We can now merge the two datasets on state abbreviations.", "ufo_stats['State'] = ufo_stats['state abbr']\nufo_data = pd.merge(ufo_data, ufo_stats, on='State')\nufo_data.columns\n\nufo_data.loc[ufo_data['Year'] == 2014, 'Pop'] = ufo_data['2014 Popualtion'][ufo_data['Year'] == 2014]\nufo_data.loc[ufo_data['Year'] == 2015, 'Pop'] = ufo_data['2015 Population'][ufo_data['Year'] == 2015]\nufo_data.loc[ufo_data['Year'] == 2014, 'Milit_Share'] = ufo_data['Number of Active Duty members 2014'][ufo_data['Year'] == 2014]/ufo_data['2014 Popualtion'][ufo_data['Year'] == 2014]\nufo_data.loc[ufo_data['Year'] == 2015, 'Milit_Share'] = ufo_data['Number of Active Duty members 2014'][ufo_data['Year'] == 2015]/ufo_data['2015 Population'][ufo_data['Year'] == 2015]", "Hoax Prediction\nThe inevitable presence of IFO reports in the dataset can, in fact, be considered an added value, since the non-UFO reports are still indicative of actual events taking place. Therefore, our analysis focuses on the events that are reported as UFOs, regardless of them being an alien activity or in future recognized as an IFO. 
In addition to general reporting trends, the analysis of NUFORC data can offer insight into the UFO perception and their validity as some of the latter are labeled to be hoax reports by NUFORC.", "#Adding a Hoax column derived from the Summary column\npattern = '|'.join([\"HOAX\",\"NUFORC Note\"])\nufo_data['Validity'] = ufo_data.Summary.str.contains(pattern)\n\ndef binary_convert(value):\n if value==True:\n return 0\n else:\n return 1\n \nufo_data['Validity'] = ufo_data['Validity'].apply(lambda x: binary_convert(x))\n\nufo_data_hoax = ufo_data[['Summary','Validity']]\nufo_data_hoax[ufo_data_hoax.Validity==0].shape", "Average reports during the day per state grouped by Time.", "# sns.set(style=\"white\", palette=\"muted\", color_codes=True)\ng = sns.distplot(ufo_data.groupby(['State'])['Time'].count().nlargest(10), color=\"r\")", "Violin plots of reports on weekdays vs weekends\nCreate a categorical variable column called WeekEnd. Violin plots showcase the distribution of events that aren't hoax over the weekdays vs weekends. 
This will be a large plot as we can get information about the density of reports in all states.", "ufo_data['WeekEnd'] = \"WeekDay\"\nufo_data.ix[ufo_data['WeekDay']>4, 'WeekEnd'] = \"WeekEnd\"", "Let us look at the states which reported the highest UFO events and look at their violin plots.", "ufo_states = ufo_data.groupby('State')['Year'].count().nlargest(10)\n\nplt.figure(figsize=(10, 10))\nsns.set(style=\"whitegrid\", palette=\"pastel\", color_codes=True)\nsns.violinplot(y=\"State\", x=\"Validity\", hue=\"WeekEnd\", \n data=ufo_data.loc[ufo_data['State'].isin(ufo_states.index.tolist())], \n split=True, inner=\"quart\", palette={\"WeekDay\": \"b\", \"WeekEnd\": \"y\"})", "Reports of various shapes in a violin plot on a weekday vs weekend.\nLet us look at the largest shapes reported.", "ufo_shapes = ufo_data.groupby('Shape')['Year'].count().nlargest(10)\nprint(ufo_shapes)\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set(style=\"whitegrid\")\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(10, 8))\n\n# Draw a violinplot with a narrower bandwidth than the default\nsns.violinplot(y=\"Validity\", x=\"Shape\", hue=\"WeekEnd\", data=ufo_data.loc[ufo_data['Shape'].isin(ufo_shapes.index.tolist())],\n split=True, inner=\"quart\", palette=\"Set3\", bw=.2, cut=2, linewidth=1)\n\n# Finalize the figure\nsns.despine(left=True, bottom=True)", "Remove the rows where State or City is unknown. Alternatively impute the rows for missing values.", "ufo_data = ufo_data[~ufo_data.State.isnull() & ~ufo_data.City.isnull()]\nufo_data.isnull().sum()\n\nimport folium\n\n# Get a basic world map.\nUFOmap = folium.Map(location=[30, 0], zoom_start=2)\n# Draw markers on the map.\nfor name, row in ufo_data.iterrows():\n UFOmap.circle_marker(location=[row[\"Lat\"], row[\"Long\"]], popup=row[\"City\"])\n# Create and show the map.\nUFOmap.create_map('UFOmap.html')\nUFOmap" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GuidoBR/python-for-finance
python-for-finance-investment-fundamentals-data-analytics/2 - Measuring Investment Risk/Covariance and Correlation.ipynb
mit
[ "Covariance and Correlation\n\\begin{eqnarray}\nCovariance Matrix: \\ \\ \n\\Sigma = \\begin{bmatrix}\n \\sigma_{1}^2 \\ \\sigma_{12} \\ \\dots \\ \\sigma_{1I} \\\n \\sigma_{21} \\ \\sigma_{2}^2 \\ \\dots \\ \\sigma_{2I} \\\n \\vdots \\ \\vdots \\ \\ddots \\ \\vdots \\\n \\sigma_{I1} \\ \\sigma_{I2} \\ \\dots \\ \\sigma_{I}^2\n \\end{bmatrix}\n\\end{eqnarray}\nThe Covariance Matrix is a representation of how two or more variables relate to each other. The covariance between a \nvariable to itself, is the variance of that same variables.", "import numpy as np\nimport pandas as pd\nfrom pandas_datareader import data as wb\nimport matplotlib.pyplot as plt\n\ntickers = ['PG', 'GOOG']\n\nsec_data = pd.DataFrame()\n\nfor t in tickers:\n sec_data[t] = wb.DataReader(t, data_source='google', start='2007-1-1')['Close']\n\nsec_returns = np.log(sec_data / sec_data.shift(1))\n\nPG_var_a = sec_returns['PG'].var() * 250\nPG_var_a\n\nGOOG_var_a = sec_returns['GOOG'].var() * 250\nGOOG_var_a\n\ncov_matrix = sec_returns.cov() * 250\ncov_matrix\n\ncorr_matrix = sec_returns.corr()\ncorr_matrix\n\nweights = np.array([0.5, 0.5]) # 50% Google, 50% P&G", "Portfolio Variance", "pfolio_var = np.dot(weights.T, np.dot(sec_returns.cov() * 250, weights))\npfolio_var", "Portfolio Volatillity", "pfolio_vol = pfolio_var ** 0.5\nprint('{}%'.format(round(pfolio_vol, 4) * 100))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mbakker7/ttim
pumpingtest_benchmarks/6_test_of_schroth.ipynb
mit
[ "Confined Aquifer Test\nThis test is taken from the examples presented in the MLU tutorial.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom ttim import *", "The test was conducted in a fully confined two-aquifer system. Both the pumping well and the observation piezometer are screened in the second aquifer.\nSet basic parameters:", "Q = 82.08 #constant discharge in m^3/d\nzt0 = -46 #top boundary of upper aquifer in m\nzb0 = -49 #bottom boundary of upper aquifer in m\nzt1 = -52 #top boundary of lower aquifer in m\nzb1 = -55 #bottom boundary of lower aquifer in m\nrw = 0.05 #well radius in m", "Load data of two observation wells:", "data1 = np.loadtxt('data/schroth_obs1.txt', skiprows = 1)\nt1 = data1[:, 0]\nh1 = data1[:, 1]\nr1 = 0 \ndata2 = np.loadtxt('data/schroth_obs2.txt', skiprows = 1)\nt2 = data2[:, 0]\nh2 = data2[:, 1]\nr2 = 46 #distance between observation well2 and pumping well", "Create a single-layer model (overlying aquifer and aquitard are excluded):", "ml_0 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, tmin=1e-4, tmax=1)\nw_0 = Well(ml_0, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)])\nml_0.solve()\n\nca_0 = Calibrate(ml_0)\nca_0.set_parameter(name='kaq0', initial=10)\nca_0.set_parameter(name='Saq0', initial=1e-4)\nca_0.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)\nca_0.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)\nca_0.fit(report=True)\n\ndisplay(ca_0.parameters)\nprint('RMSE:', ca_0.rmse())\n\nhm1_0 = ml_0.head(r1, 0, t1)\nhm2_0 = ml_0.head(r2, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, h1, '.', label='obs1')\nplt.semilogx(t2, h2, '.', label='obs2')\nplt.semilogx(t1, hm1_0[-1], label='ttim1')\nplt.semilogx(t2, hm2_0[-1], label='ttim2')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one1.eps');", "To improve the model's performance, rc & res are added:", "ml_1 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, 
tmin=1e-4, tmax=1)\nw_1 = Well(ml_1, xw=0, yw=0, rw=rw, rc=0, res=5, tsandQ = [(0, Q), (1e+08, 0)])\nml_1.solve()\n\nca_1 = Calibrate(ml_1)\nca_1.set_parameter(name='kaq0', initial=10)\nca_1.set_parameter(name='Saq0', initial=1e-4)\nca_1.set_parameter_by_reference(name='rc', parameter=w_1.rc[:], initial=0.2)\nca_1.set_parameter_by_reference(name='res', parameter=w_1.res[:], initial=3)\nca_1.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)\nca_1.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)\nca_1.fit(report=True)\n\ndisplay(ca_1.parameters)\nprint('RMSE:', ca_1.rmse())\n\nhm1_1 = ml_1.head(r1, 0, t1)\nhm2_1 = ml_1.head(r2, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, h1, '.', label='obs1')\nplt.semilogx(t2, h2, '.', label='obs2')\nplt.semilogx(t1, hm1_1[-1], label='ttim1')\nplt.semilogx(t2, hm2_1[-1], label='ttim2')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one2.eps');", "Create three-layer conceptual model:", "ml_2 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\\\n Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)\nw_2 = Well(ml_2, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)], layers=1)\nml_2.solve()\n\nca_2 = Calibrate(ml_2)\nca_2.set_parameter(name= 'kaq0', initial=20, pmin=0)\nca_2.set_parameter(name='kaq1', initial=1, pmin=0)\nca_2.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca_2.set_parameter(name='Saq1', initial=1e-5, pmin=0)\nca_2.set_parameter_by_reference(name='Sll', parameter=ml_2.aq.Sll[:],\\\n initial=1e-4, pmin=0)\nca_2.set_parameter(name='c1', initial=100, pmin=0)\nca_2.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)\nca_2.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)\nca_2.fit(report=True)\n\ndisplay(ca_2.parameters)\nprint('RMSE:',ca_2.rmse())\n\nhm1_2 = ml_2.head(r1, 0, t1)\nhm2_2 = ml_2.head(r2, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, h1, '.', label='obs1')\nplt.semilogx(t2, h2, '.', 
label='obs2')\nplt.semilogx(t1, hm1_2[-1], label='ttim1')\nplt.semilogx(t2, hm2_2[-1], label='ttim2')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three1.eps');", "Try adding res & rc:", "ml_3 = ModelMaq(kaq=[19, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[4e-4, 1e-5],\\\n Sll=1e-4, topboundary='conf', tmin=1e-4, tmax=0.5)\nw_3 = Well(ml_3, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \\\n layers=1)\nml_3.solve()\n\nca_3 = Calibrate(ml_3)\nca_3.set_parameter(name= 'kaq0', initial=20, pmin=0)\nca_3.set_parameter(name='kaq1', initial=1, pmin=0)\nca_3.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca_3.set_parameter(name='Saq1', initial=1e-5, pmin=0)\nca_3.set_parameter_by_reference(name='Sll', parameter=ml_3.aq.Sll[:],\\\n initial=1e-4, pmin=0)\nca_3.set_parameter(name='c1', initial=100, pmin=0)\nca_3.set_parameter_by_reference(name='res', parameter=w_3.res[:], initial=0, pmin=0)\nca_3.set_parameter_by_reference(name='rc', parameter=w_3.rc[:], initial=0.2, pmin=0)\nca_3.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)\nca_3.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)\nca_3.fit(report=True)\n\ndisplay(ca_3.parameters)\nprint('RMSE:', ca_3.rmse())\n\nhm1_3 = ml_3.head(r1, 0, t1)\nhm2_3 = ml_3.head(r2, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, h1, '.', label='obs1')\nplt.semilogx(t2, h2, '.', label='obs2')\nplt.semilogx(t1, hm1_3[-1], label='ttim1')\nplt.semilogx(t2, hm2_3[-1], label='ttim2')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three2.eps');", "Calibrate with fitted characters for upper aquifer:", "ml_4 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\\\n Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)\nw_4 = Well(ml_4, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \\\n layers=1)\nml_4.solve()", "The optimized value 
of res is very close to the minimum limitation, thus res has little effect on the performance of the model. res is removed in this calibration.", "ca_4 = Calibrate(ml_4)\nca_4.set_parameter(name='kaq1', initial=1, pmin=0)\nca_4.set_parameter(name='Saq1', initial=1e-5, pmin=0)\nca_4.set_parameter(name='c1', initial=100, pmin=0)\nca_4.set_parameter_by_reference(name='rc', parameter=w_4.rc[:], initial=0.2, pmin=0)\nca_4.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)\nca_4.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)\nca_4.fit(report=True)\n\ndisplay(ca_4.parameters)\nprint('RMSE:', ca_4.rmse())\n\nhm1_4 = ml_4.head(r1, 0, t1)\nhm2_4 = ml_4.head(r2, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, h1, '.', label='obs1')\nplt.semilogx(t2, h2, '.', label='obs2')\nplt.semilogx(t1, hm1_4[-1], label='ttim1')\nplt.semilogx(t2, hm2_4[-1], label='ttim2')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three3.eps');", "Summary of values simulated by MLU\nResults of calibrations done with three-layer model of ttim are presented below.", "t = pd.DataFrame(columns=['k0[m/d]','k1[m/d]','Ss0[1/m]','Ss1[1/m]','Sll[1/m]','c[d]',\\\n 'res', 'rc'], \\\n index=['MLU', 'MLU-fixed k1','ttim','ttim-rc','ttim-fixed upper'])\nt.loc['ttim-rc'] = ca_3.parameters['optimal'].values\nt.iloc[2,0:6] = ca_2.parameters['optimal'].values\nt.iloc[4,5] = ca_4.parameters['optimal'].values[2]\nt.iloc[4,7] = ca_4.parameters['optimal'].values[3]\nt.iloc[4,0] = 17.28\nt.iloc[4,1] = ca_4.parameters['optimal'].values[0]\nt.iloc[4,2] = 1.2e-4\nt.iloc[4,3] = ca_4.parameters['optimal'].values[1]\nt.iloc[4,4] = 3e-5\nt.iloc[0, 0:6] = [17.424, 6.027e-05, 1.747, 6.473e-06, 3.997e-05, 216]\nt.iloc[1, 0:6] = [2.020e-04, 9.110e-04, 3.456, 6.214e-05, 7.286e-05, 453.5]\nt['RMSE'] = [0.023452, 0.162596, ca_2.rmse(), ca_3.rmse(), ca_4.rmse()]\nt" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
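The calibrations in the notebook above are compared by the RMSE between observed and simulated heads (the `ca.rmse()` calls). As a minimal, library-independent sketch of that metric (the head values below are made up for illustration, not taken from the notebook):

```python
import math

def rmse(observed, simulated):
    """Root-mean-square error between two equal-length series of heads (m)."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

# A perfect fit has RMSE 0; symmetric residuals of 3 m give RMSE 3.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(rmse([0.0, 0.0], [3.0, -3.0]))           # 3.0
```

A lower RMSE is what motivates dropping pinned parameters such as res and refitting, as done above.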
hhain/sdap17
notebooks/robin_ue2/mustererkennung_in_funkmessdaten.ipynb
mit
[ "Pattern Recognition in Radio Measurement Data\nTask 1: Loading the database in a Jupyter notebook", "# imports\nimport re\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pprint as pp", "We open the database and print the keys of the individual tables. \n", "hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')\nprint(hdf.keys)", "Task 2: Inspecting a single dataframe\nWe load the frame x1_t1_trx_1_4 and look at its dimensions.", "df_x1_t1_trx_1_4 = hdf.get('/x1/t1/trx_1_4')\nprint(\"Rows:\", df_x1_t1_trx_1_4.shape[0])\nprint(\"Columns:\", df_x1_t1_trx_1_4.shape[1])", "Next, we examine the attribute composition for two receiver-sender groups as examples.", "# first inspection of columns from df_x1_t1_trx_1_4\ndf_x1_t1_trx_1_4.head(5)", "We define some helper functions for analyzing the frames.", "# Little function to retrieve sender-receiver tuples from df columns\ndef extract_snd_rcv(df):\n regex = r\"trx_[1-4]_[1-4]\"\n # creates a set containing the different pairs\n snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)}\n return [(x[0],x[-1]) for x in snd_rcv]\n\n# Sums the number of columns for each sender-receiver tuple\ndef get_column_counts(snd_rcv, df):\n col_counts = {}\n for snd,rcv in snd_rcv:\n col_counts['Columns for pair {} {}:'.format(snd, rcv)] = len([i for i, word in enumerate(list(df.columns)) if word.startswith('trx_{}_{}'.format(snd, rcv))])\n return col_counts\n\n# Analyze the column composition of a given measurement.\ndef analyse_columns(df):\n df_snd_rcv = extract_snd_rcv(df)\n cc = get_column_counts(df_snd_rcv, df)\n\n for x in cc:\n print(x, cc[x])\n print(\"Sum of pair related columns: %i\" % sum(cc.values()))\n print()\n print(\"Other columns are:\")\n for att in [col for col in df.columns if 'ifft' not in col and 'ts' not in col]:\n print(att)\n\n# Analyze the values of the target column.\ndef analyze_target(df):\n 
print(df['target'].unique())\n print(\"# Unique values in target: %i\" % len(df['target'].unique()))", "Now determine the column composition of df_x1_t1_trx_1_4.", "analyse_columns(df_x1_t1_trx_1_4)", "Look at the contents of the \"target\" column of df_x1_t1_trx_1_4.", "analyze_target(df_x1_t1_trx_1_4)", "Next we load the frame x3_t2_trx_3_1 and look at its dimensions.", "df_x3_t2_trx_3_1 = hdf.get('/x3/t2/trx_3_1')\nprint(\"Rows:\", df_x3_t2_trx_3_1.shape[0])\nprint(\"Columns:\", df_x3_t2_trx_3_1.shape[1])", "Followed by an analysis of its column composition and its \"target\" values.", "analyse_columns(df_x3_t2_trx_3_1)\n\nanalyze_target(df_x3_t2_trx_3_1)", "Question: What do you observe regarding the \"receiver-number_sender-number\" combinations? Are they identical? Which distinct values do you find in the \"target\" column? \nAnswer: We see that while one pair transmits, the other two nodes listen and measure their links to the currently transmitting nodes (i.e. 6 pairs in each dataframe). If, for example, pair 3 1 transmits, node 1 measures link 1-3, node 3 measures link 3-1, and nodes 2 and 4 measure links 2-1 and 2-3, and 4-1 and 4-3, respectively. The 10 distinct values of the \"target\" column are shown above.\nTask 3: Visualizing the measurement series of the dataset\nWe visualize the raw data with various heatmaps in order to visually validate the integrity of the data and to develop ideas for possible features. 
Here we show the data of frame df_x1_t1_trx_1_4 as an example.", "vals = df_x1_t1_trx_1_4.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values\n\n# one big heatmap\nplt.figure(figsize=(14, 12))\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')\nplt.show()", "We look at how different color schemes highlight different characteristics of our raw data.", "# compare different heatmaps\nplt.figure(1, figsize=(12,10))\n\n# nipy_spectral_r scheme\nplt.subplot(221)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')\n\n# terrain scheme\nplt.subplot(222)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='terrain')\n\n# Vega10 scheme\nplt.subplot(223)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Vega10')\n\n# Wistia scheme\nplt.subplot(224)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Wistia')\n\n# Adjust the subplot layout, because the logit one may take more space\n# than usual, due to y-tick labels like \"1 - 10^{-3}\"\nplt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,\n wspace=0.2)\n\n\nplt.show()", "Task 3: Adjusting the ground-truth labels", "# Iterating over hdf data and creating interim data presentation stored in data/interim/testmessungen_interim.hdf\n# Interim data representation contains additional binary class (binary_target - encoding 0=empty and 1=not empty)\n# and multi class target 
(multi_target - encoding 0-9 for each possible class)\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\n\ninterim_path = '../../data/interim/01_testmessungen.hdf'\n\ndef binary_mapper(df):\n \n def map_binary(target):\n if target.startswith('Empty'):\n return 0\n else:\n return 1\n \n df['binary_target'] = pd.Series(map(map_binary, df['target']))\n \n \ndef multiclass_mapper(df):\n le.fit(df['target'])\n df['multi_target'] = le.transform(df['target'])\n \nfor key in hdf.keys():\n df = hdf.get(key)\n binary_mapper(df)\n multiclass_mapper(df)\n df.to_hdf(interim_path, key)\n\nhdf.close()", "Check the newly labeled dataframe \"/x1/t1/trx_3_1\". As results, we expect \"Empty\" (i.e. 0) for measurement 5 at the beginning of the experiment and \"Not Empty\" (i.e. 1) for measurement 120 in the middle of the experiment.", "hdf = pd.HDFStore('../../data/interim/01_testmessungen.hdf')\ndf_x1_t1_trx_3_1 = hdf.get('/x1/t1/trx_3_1')\nprint(\"binary_target for measurement 5:\", df_x1_t1_trx_3_1['binary_target'][5])\nprint(\"binary_target for measurement 120:\", df_x1_t1_trx_3_1['binary_target'][120])\nhdf.close()", "Task 4: A simple recognizer with hold-out validation\nWe follow the steps in Task 4 and test a simple recognizer.", "from evaluation import *\nfrom filters import *\nfrom utility import *\nfrom features import *", "Opening the HDF with pandas", "# raw data to achieve target values\nhdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')", "Example recognizer\nPreparing the datasets", "# generate datasets\ntst = ['1','2','3']\ntst_ds = []\n\nfor t in tst:\n\n df_tst = hdf.get('/x1/t'+t+'/trx_3_1')\n lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]\n \n #df_tst_cl,_ = distortion_filter(df_tst_cl)\n \n groups = get_trx_groups(df_tst)\n df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')\n df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)\n df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p 
feature\n \n df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature\n \n df_all = cf_std_window(df_all, window=4, label='target')\n \n df_tst_sum = generate_class_label_presence(df_all, state_variable='target')\n \n # remove index column\n df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]\n print('Columns in Dataset:',t)\n print(df_tst_sum.columns)\n \n tst_ds.append(df_tst_sum.copy())\n\n# holdout validation\nprint(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1))", "Closing the HDF store", "hdf.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
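The ground-truth relabeling in the notebook above collapses each target label to a binary class by checking for the Empty prefix (0 = room empty, 1 = not empty). A self-contained sketch of that mapping, without the pandas/HDF machinery (the non-Empty label names below are made up for illustration):

```python
def map_binary(target):
    """Collapse a ground-truth label: 0 if the room was empty, else 1."""
    return 0 if target.startswith('Empty') else 1

# Hypothetical label values; only the 'Empty' prefix matters.
labels = ['Empty_1', 'Walk_1_2', 'Stand_3', 'Empty_2']
print([map_binary(t) for t in labels])  # [0, 1, 1, 0]
```

In the notebook this function is applied column-wise with `pd.Series(map(map_binary, df['target']))` to create the `binary_target` column.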
ES-DOC/esdoc-jupyterhub
notebooks/csir-csiro/cmip6/models/sandbox-3/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CSIR-CSIRO\nSource ID: SANDBOX-3\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:54\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. 
Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. 
Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. 
Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. 
&amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. 
Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. 
Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Advection --&gt; Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momentum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momentum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? 
(if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. 
Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. 
Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. 
Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. 
Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. 
Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. 
Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. 
Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. 
Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. 
Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. 
From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liganega/Gongsu-DataSci
previous/y2017/Wextra/GongSu26_Pandas_Introduction_2.ipynb
gpl-3.0
[ "Pandas 소개 2\nGonsSu24 내용에 이어서 Pandas 라이브러리를 소개한다.\n먼저 GongSu24를 임포트 한다.", "from GongSu24_Pandas_Introduction_1 import *", "색인(Index) 클래스\nPandas에 정의된 색인(Index) 클래스는 Series와 DataFrame 자료형의 행과 열을 구분하는 이름들의 목록을 저장하는 데에 사용된다. \nSeries 객체에서 사용되는 Index 객체\n\nindex 속성\n\n아래와 같이 Series 객체를 생성한 후에 index를 확인해보자.", "s6 = Series(range(3), index=['a', 'b', 'c'])\ns6", "index의 자료형이 Index 클래스의 객체임을 확인할 수 있다.", "s6_index = s6.index\ns6_index", "Index 객체에 대해 인덱싱과 슬라이싱을 리스트의 경우처럼 활용할 수 있다.", "s6_index[2]\n\ns6_index[1:]", "Index 객체는 불변(immutable) 자료형이다.", "s6_index[1] = 'd'", "색인 객체는 변경될 수 없기에 자료 구조 사이에서 안전하게 공유될 수 있다.", "an_index = pd.Index(np.arange(3))\nan_index", "앞서 선언된 an_index를 새로운 Series 나 DataFrame 을 생성하는 데에 사용할 수 있으며, 사용된 index가 무엇인지를 확인할 수도 있다.", "s7= Series([1.5, -2.5, 0], index=an_index)\ns7.index is an_index", "DataFrame 객체에서 사용되는 Index 객체\n\nindex 속성\ncolumns 속성", "df3", "columns와 index 속성 모두 Index 객체이다.", "df3.columns\n\ndf3.index\n\ndf3.columns[:2]", "in 연산자 활용하기\nin 연산자를 활용하여 index 와 columns에 사용된 행과 열의 이름의 존재여부를 확인할 수 있다.", "'debt' in df3.columns\n\n'four' in df3.index", "각각의 색인은 담고 있는 데이터에 대한 정보를 취급하는 여러 가지 메서드와 속성을 가지고 있다. [표 5-3]을 참고하자.\nSeries와 DataFrame 관련 연산 및 주요 메소드\nSeries나 DataFrame 형식으로 저장된 데이터를 다루는 주요 연산 및 기능을 설명한다.\n재색인(reindex) 메소드\nreindex() 메소드는 지정된 색인을 사용해서 새로운 Series나 DataFrame 객체를 생성한다.\nSeries의 경우 재색인", "s8 = Series([4.3, 9.2, 8.1, 3.9], index= ['b', 'c', 'a', 'd'])\ns8", "reindex() 메소드를 이용하여 인덱스를 새로 지정할 수 있다. \n주의: 새로 사용되는 항목이 index에 추가되면 NaN이 값으로 사용된다.", "s9 = s8.reindex(['a', 'b', 'c', 'd', 'e', 'f'])\ns9", "누락된 값을 지정된 값으로 채울 수도 있다.", "s8.reindex(['a','b','c','d','e', 'f'], fill_value=0.0)", "method 옵션\n시계열(time series) 등과 데이터 처럼 어떠 순서에 따라 정렬된 데이터를 재색인할 때 \n보간법을 이용하여 누락된 값들을 채워 넣어야 하는 경우가 있다. 
\n이런 경우 method 옵션을 이용하며, ffill, bfill, nearest 등을 옵션값으로 활용한다.", "s9 = Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])\ns9\n\ns9.reindex(range(6))\n\ns9.reindex(range(6), method='ffill')\n\ns9.reindex(range(6), method='bfill')\n\ns9.reindex(range(6), method='nearest')", "DataFrame의 경우 재색인\n행과 열에 대해 모두 사용이 가능하다.", "data = np.arange(9).reshape(3, 3)\ndata\n\ndf6 = DataFrame(data, index=['a', 'b', 'd'], columns= ['Ohio', 'Texas', 'California'])\ndf6", "index 속성의 재색인은 Series의 경우와 동일하다.", "df7 = df6.reindex(['a', 'b', 'c', 'd'])\ndf7", "columns 속성의 재색인은 키워드(예약어)를 사용한다.", "states = ['Texas', 'Utah', 'California']\ndf6.reindex(columns=states)", "method 옵션을 이용한 보간은 행 대해서만 이루어진다.", "df6.reindex(index=['a', 'b', 'c', 'd'], method='ffill')\n\ndf6.reindex(index=['a', 'b', 'c', 'd'], method='bfill')\n\ndf6.reindex(index=['a', 2, 3, 4])", "method='nearest'는 인덱스가 모두 숫자인 경우에만 적용할 수 있다.", "df6.reindex(index=['a', 'b', 'c', 'd'], method='nearest')", "주의\nreindex는 기존 자료를 변경하지 않는다.", "df6", "loc 메소드를 이용한 재색인\nloc 메소드를 이용하여 재색인이 가능하다.", "states\n\ndf6.loc[['a', 'b', 'c', 'd'], states]", "5.2.2 하나의 로우 또는 칼럼 제외하기: drop 메소드\ndrop 메서드를 사용하여 지정된 행 또는 열을 제외하여 새로운 Series나 DataFrame을 생성할 수 있다.", "obj = Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])\nobj\n\nnew_obj = obj.drop('c')\nnew_obj\n\nobj.drop(['d', 'c'])", "DataFrame에서는 로우와 칼럼 모두에서 값을 삭제할 수 있다.", "df7", "행 삭제", "df7.drop('a', axis=0)\n\ndf7.drop('Ohio', axis=1)", "drop() 메소드는 기존의 자료를 건드리지 않는다.", "df7\n\ndata.drop('two', axis=1)\n\ndata.drop(['two', 'four'], axis=1)", "5.2.2 하나의 로우 또는 칼럼 삭제하기: del 메소드\ndel 메서드를 사용하여 지정된 행 또는 열을 삭제할 수 있다. \n5.2.3 색인하기, 선택하기, 거르기\nSeries의 색인 (obj[...])은 NumPy 배열의 색인과 유사하게 동작하는데, Series의 색인은 정수가 아니어도 된다는 점이 다르다. 
\n라벨 이름으로 슬라이싱하는 것은 시작점과 끝점을 포함한다는 점이 일반 파이선에서 슬라이싱과 다른 점이다.", "obj = Series(np.arange(4.), index=['a', 'b', 'c', 'd'])\nobj['b':'c']", "슬라이싱 문법으로 선택된 영역에 값을 대입하는 것은 예상한 대로 동작한다.", "obj['b':'c'] = 5\nobj", "앞에서 확인한대로 색인으로 DataFrame에서 칼럼의 값을 하나 이상 가져올 수 있다.", "data = DataFrame(np.arange(16).reshape((4, 4)),\n index=['Ohio', 'Colorado', 'Utah', 'New York'],\n columns = ['one', 'two', 'three', 'four'])\ndata\n\ndata['two']\n\ndata[['three', 'one']]", "슬라이싱으로 로우를 선택하거나 불리언 배열로 칼럼을 선택할 수 있다.", "data[:2]\n\ndata[data['three'] > 5]", "이 문법에 모순이 있다고 생각할 수 있지만, 실용성에 기인한 것일 뿐이다.\n또 다른 사례는 스칼라 비교를 통해 생성된 불리언 DataFrame을 사용해서 값을 선택하는 것이다.", "data < 5\n\ndata[data < 5] = 0\ndata", "이 예제는 DataFrame을 ndarray와 문법적으로 비슷하게 보이도록 의도한 것이다.\nDataFrame의 칼럼에 대해 라벨로 색인하는 방법으로, 특수한 색인 필드인 ix를 소개한다. ix는 NumPy와 비슷한 방식에 추가적으로 축의 라벨을 사용하여 DataFrame의 로우와 칼럼을 선택할 수 있도록 한다. 앞에서 언급했듯이 이 방법은 재색인을 좀 더 간단하게 할 수 있는 방법이다.", "data.ix['Colorado', ['two', 'three']]\n\ndata.ix[['Colorado', 'Utah'], [3,0,1]]\n\ndata.ix[2]\n\ndata.ix[:'Utah', 'two']\n\ndata.ix[data.three > 5, :3]", "지금까지 살펴봤듯이 pandas 객체에서 데이터를 선택하고 재배열하는 방법은 여러 가지가 있다. [표 5-6]에 다양한 방법을 정리해두었다. 나중에 살펴볼 계층적 색인을 이용하면 좀 더 다양한 방법을 사용할 수 있다.\n5.2.4 산술연산과 데이터 정렬\npandas에서 중요한 기능은 색인이 다른 객체 간의 산술연산이다. 객체를 더할 때 짝이 맞지 않는 색인이 있다면 결과에 두 색인이 통합된다.", "s1 = Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd','e'])\ns2 = Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])\ns1 + s2", "서로 겹치는 색인이 없다면 데이터는 NA 값이 된다. 
산술연산 시 누락된 값은 전파되며, DataFrame에서는 로우와 칼럼 모두에 적용된다.", "df1 = DataFrame(np.arange(9.).reshape((3, 3)), columns=list('bcd'),\n index=['Ohio', 'Texas', 'Colorado'])\ndf2 = DataFrame(np.arange(12.).reshape((4,3)), columns=list('bde'),\n index=['Utah', 'Ohio', 'Texas', 'Oregon'])\n\ndf1 + df2", "산술연산 메서드에 채워 넣을 값 지정하기\n서로 다른 색인을 가지는 객체 간의 산술연산에서 존재하지 않는 축의 값을 특수한 값( 0 같은)으로 지정하고 싶을 때는 다음과 같이 할 수 있다.", "df1 = DataFrame(np.arange(12.).reshape((3,4)), columns=list('abcd'))\ndf2 = DataFrame(np.arange(20.).reshape((4,5)), columns=list('abcde'))\n\ndf1\n\ndf2\n\ndf1 + df2", "이 둘을 더했을 때 겹치지 않는 부분의 값이 NA값이 된 것을 알 수 있다.\ndf1의 add메서드로 df2와 fill_value 값을 인자로 전달한다.", "df1.add(df2, fill_value=0)\n\ndf1.reindex(columns=df2.columns, fill_value=0)", "Series나 DataFrame을 재색인할 때 역시 fill_value를 지정할 수 있다.\nDataFrame과 Series 간의 연산\nNumPy 배열의 연산처럼 DataFrame과 Series 간의 연산도 잘 정의되어 있다. 먼저 2차원 배열과 그 배열 중 한 칼럼의 차이에 대해서 생각할 수 있는 예제를 살펴보자.", "arr = np.arange(12).reshape(3, 4)\n\narr\n\narr[0]\n\narr - arr[0]", "이 예제는 브로드캐스팅에 대한 예제로 자세한 내용은 12장에서 살펴볼 것이다. 
DataFrame과 Series간의 연산은 이와 유사하다.", "frame = DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'),\n index=['Utah', 'Ohio', 'Texas', 'Oregon'])\nseries = frame.ix[0]\nframe\n\nseries", "기본적으로 DataFrame과 Series 간의 산술 연산은 Series의 색인을 DataFrame의 칼럼에 맞추고 아래 로우로 전파한다.", "frame - series", "만약 색인 값을 DataFrame의 칼럼이나 Series의 색인에서 찾을 수 없다면 그 객체는 형식을 맞추기 위해 재색인된다.", "series2 = Series(range(3), index = list('bef'))\nframe + series2", "만약 각 로우에 대해 연산을 수행하고 싶다면 산술연산 메서드를 사용하면 된다.", "series3 = frame['d']\n\nframe\n\nseries3\n\nframe.sub(series3, axis=0)", "5.2.5 함수 적용과 매핑\npandas 객체에도 NumPy의 유니버설 함수( 배열의 각 원소에 적용되는 메서드)를 적용할 수 있다.", "frame = DataFrame(np.random.randn(4,3), columns=list('bde'),\n index=['Utah', 'Ohio', 'Texas', 'Oregon'])\nframe\n\nnp.abs(frame) #절대값", "자주 사용되는 또 다른 연산은 각 로우나 칼럼의 1차원 배열에 함수를 적용하는 것이다.\nDataFrame의 apply 메서드를 통해 수행할 수 있다.", "f = lambda x: x.max() - x.min()\nframe.apply(f)\n\nframe.apply(f, axis=1)", "배열의 합계나 평균 같은 일반적인 통계는 DataFrame의 메서드로 있으므로 apply 메서드를 사용해야만 하는 것은 아니다.\napply 메서드에 전달된 함수는 스칼라 값을 반환할 필요가 없으며, Series 또는 여러 값을 반환해도 된다.", "def f(x):\n return Series([x.min(), x.max()], index=['min', 'max'])\n\nframe.apply(f)", "배열의 각 원소에 적용되는 파이썬의 함수를 사용할 수도 있다. frame 객체에서 실수 값을 문자열 포맷으로 변환하고 싶다면 applymap을 이용해서 다음과 같이 해도 된다.", "format = lambda x: '%.2f' % x\nframe.applymap(format)", "이 메서드의 이름이 applymap인 이유는 Series가 각 원소에 적용할 함수를 지정하기 위한 map 메서드를 가지고 있기 때문이다.", "frame['e'].map(format)", "5.2.6 정렬과 순위\n어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반화하는 sort_index 메서드를 사용하면 된다.", "frame['e'].map(format)", "5.2.6\n어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 
로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반환하는 sort_index 메서드를 사용하면 된다.", "obj = Series(range(4), index=['d', 'a', 'b', 'c'])\nobj.sort_index()", "DataFrame은 로우나 칼럼 중 하나의 축을 기준으로 정렬할 수 있다.", "frame = DataFrame(np.arange(8).reshape((2,4)), index = ['three', 'one'], columns = ['d', 'a', 'b', 'c'])\n\nframe.sort_index()\n\nframe.sort_index(axis=1)", "데이터는 기본적으로 오름차순으로 정렬되지만 내림차순으로 정렬할 수도 있다.", "frame.sort_index(axis=1, ascending=False)", "Series 객체를 값에 따라 정렬하고 싶다면 sort_values 메서드를 사용한다.", "obj.sort_values()\n\nobj = Series([4, 7, -3, 2])\nobj.sort_values()", "정렬할 때 비어있는 값은 기본적으로 Series 객체에서 가장 마지막에 위치한다.\nobj = Series([4, np.nan, 7, np.nan, -3, 2])\nobj.sort_values()\nDataFrame에서는 하나 이상의 칼럼에 있는 값으로 정렬이 필요할 수 있다. 이럴 때는 by 옵션에 필요한 칼럼의 이름을 넘기면 된다.", "frame = DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})\nframe\n\nframe.sort_values(by='b')", "여러 개의 칼럼을 정렬하려면 칼럼의 이름이 담긴 리스트를 전달하면 된다.", "frame.sort_values(by=['a','b'])", "순위는 정렬과 거의 흡사하며, 1부터 배열의 유효한 데이터 개수까지 순위를 매긴다. 또한 순위는 numpy.argsort에서 반환하는 간접 정렬 색인과 유사한데, 동률인 순위를 처리하는 방식이 다르다. 기본적으로 Series와 DataFrame의 rank 메서드는 동점인 항목에 대해서는 평균 순위를 매긴다.", "obj = Series([7, -5, 7, 4, 2, 0 ,4])\nobj.rank()", "데이터 상에서 나타나는 순서에 따라 순위를 매길 수도 있다.", "obj.rank(method='first')", "내림차순으로 순위를 매길 수도 있다.", "# 'max' 는 같은 값을 가지는 그룹을 높은 순위로 매긴다.\nobj.rank(ascending=False, method='max')", "5.2.7 중복 색인\n지금까지 살펴본 모든 예제는 모두 축의 이름(색인 값)이 유일했다.\npandas의 많은 함수(reindex 같은) 에서 색인 값은 유일해야 하지만 강제 사항은 아니다. 이제 색인 값이 중복된 Series객체를 살펴보자.", "obj = Series(range(5), index=['a', 'a', 'b', 'b', 'c'])\nobj", "색인의 is_unique 속성은 해당 값이 유일한지 아닌지 알려준다.", "obj.index.is_unique", "중복되는 색인 값이 있으면 색인을 이용한 데이터 선택은 다르게 동작하고 하나의 Series 객체를 반환한다. 하지만 중복되는 색인 값이 없으면 색인을 이용한 데이터 선택은 스칼라 값을 반환한다.", "obj['a']\n\nobj['c']", "DataFrame에서 로우를 선택하는 것도 동일하다.", "df = DataFrame(np.random.randn(4, 3), index=['a', 'a', 'b','b'])\ndf\n\ndf.ix['b']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jacekfilipczuk/Interview-Tests
Nightclub_thieves/Night Club Thieves.ipynb
gpl-3.0
[ "Stolen phones in a nightclub\nScenario\nYour friend owns a nightclub, and the nightclub is suffering an epidemic of stolen phones. At least one thief has been frequenting her club and stealing her visitors' phones. Her club has a licence scanner at its entrance, that records the name and date-of-birth of everyone who enters the club - so she should have the personal details of the thief or thieves; it's just mixed in with the details of her honest customers. She heard you call yourself a \"data scientist\", so has asked you to come up with a ranked list of up to 20 suspects to give to the police.\nShe's given you:\nvisitor_log.csv - details of who visited the club and on what day (those visiting 2AM Tuesday are counted as visiting on Monday).\n`theft_log.csv' - a list of days on which thefts were reported to occur (again, thefts after midnight are counted as the previous day - we're being nice to you)\nShe wants from you:\n- A list of ID details for the 20 most suspicious patrons, ranked from most-suspicious to least-suspicious.\n- If you think there are fewer than 20 thieves, a list of ID details for everyone that you think is a thief.\nPlease just follow the flow of the notebook!", "import pandas as pd\nfrom matplotlib.pyplot import plot\nimport numpy as np\nfrom scipy.stats import zscore,normaltest\n%pylab inline\npylab.rcParams['figure.figsize'] = (20, 15)\n\n# First things first lets load the data\nvisitor_log = pd.read_csv('visitor_log.csv')\nprint visitor_log.shape\n\n# After I loaded the first time the thief_log I noticed that there is no header, so I have changed the reading\n# process adding the column name for the dates\nthief_log = pd.read_csv('theft_log.csv', header=None,names=['theft_date'])\nprint thief_log.shape\n\n# looking at the data\nvisitor_log.head()\n\nthief_log.head()\n\n# In a previous version of pandas there was a function called 'convert_objects', that function was used to\n# infer automatically the type of the data in the columns\n# 
Now that function is deprecated, so I have to cast all the data to the correct type\nvisitor_log.visit_date = pd.to_datetime(visitor_log.visit_date)\nvisitor_log.dob = pd.to_datetime(visitor_log.dob)\nvisitor_log.name = visitor_log.name.astype(basestring)\nvisitor_log.dtypes", "thief_log.theft_date = pd.to_datetime(thief_log.theft_date)\nthief_log.dtypes", "# Let's see if there is any duplicate or missing data and remove it\nvisitor_log = visitor_log.dropna().drop_duplicates()\nthief_log = thief_log.dropna().drop_duplicates()\nprint visitor_log.shape\nprint thief_log.shape", "All good! Now we can proceed", "#Since there is a date record in each data set, it makes sense to merge them on the date.\n# In this way we can later see who was in the nightclub when there was a robbery\nlog_merged = visitor_log.merge(thief_log,how='left', left_on='visit_date',right_on='theft_date')\n\n# In order to plot, we need numerical values, so I am adding a boolean value that is 1 if there was a robbery and \n# zero otherwise\nlog_merged['theft_bool']= log_merged['theft_date'].apply(lambda x: 0 if pd.isnull(x) else 1)\n\n#Now it is possible to group the data by the visitor name and count how many times that visitor was at the club\n# when there was a robbery.\n# I sort the data in descending order and plot it\nlog_merged.groupby('name').theft_bool.sum().sort_values(ascending=False).plot(legend=False)", "It seems we have some interesting insight here.\nFrom the plot we can see that Karen Keeney is the visitor who was most often at the club when there was a robbery.", "# Let's have a look at the density distribution of this data\nlog_merged.groupby('name').theft_bool.sum().sort_values(ascending=False).plot(legend=False,kind='kde')", "Looks like a normal distribution!", "# Now let's count how many times a visitor was at the club during a robbery\n# and remove the visitors who were never at the club during a robbery\ntheft_sum = 
log_merged.groupby('name').theft_bool.sum().sort_values(ascending=False).reset_index()\ntheft_sum = theft_sum[theft_sum.theft_bool!=0]\ntheft_sum.columns = ['name','robbery_count']\ntheft_sum.shape\n\n# Karen Keeney seems to be our thief! But let's not jump to conclusions\ntheft_sum.head(20)", "Since there were in total 34 theft days, it is clear that there is more than one thief.\nWe will come back to this later on.", "#Let's have a look at the visitors who were at the club when there was no robbery\nnot_theft = log_merged[log_merged.theft_bool!=1]\nnot_theft.groupby('name').theft_bool.count().sort_values().plot(legend=False)\n\n# Let's store the previous count into a dataframe\nnot_theft_count = not_theft.groupby('name').theft_bool.count().sort_values().reset_index()\nnot_theft_count.columns=['name','not_robbery_count']\n\n#Now it's time to calculate some stats for the visitors!!\n# I add the counts to the original dataset and make some new ones, the names should be self-explanatory\nfinal_stats = log_merged.merge(not_theft_count,how='left',left_on='name',right_on='name')\nfinal_stats = final_stats.merge(theft_sum,how='left',left_on='name',right_on='name')\n# Here I replace missing values with zeros; this creates an 'artificial' value for the column theft_date\n# but it doesn't matter since there is the theft_bool column now\nfinal_stats.fillna(0,inplace=True)\nfinal_stats['total_visits'] = final_stats.robbery_count + final_stats.not_robbery_count\nfinal_stats['robbery_freq'] = final_stats.robbery_count/final_stats.total_visits\nfinal_stats.head()\n\n#Let's have a look at the robbery frequency distribution\nfinal_stats.robbery_freq.plot(legend=True,kind='density')\n\n#let's remove visitors who are clearly innocent\nfinal_stats = final_stats[final_stats.robbery_freq!=0]\n\n# Let's have another look at the 
distribution\nfinal_stats.robbery_freq.plot(legend=True,kind='density')\n\nfinal_stats.sort_values(['robbery_freq','robbery_count'],ascending=False).head()\n\nfinal_stats.describe()\n\n# Now we want to confirm in a statistical way which visitors could be thieves\n# Let's select just the numerical columns and calculate the z-score\nnumeric_cols = final_stats.drop('theft_date',axis=1).select_dtypes(include=[np.number]).columns\nzscore_vals = final_stats[numeric_cols].apply(zscore).sort_values(['robbery_count','robbery_freq'],ascending=False)\nzscore_vals.head()\n\n# The results have the original index of the visitor in the dataframe; let's show the top 20\nidxs = zscore_vals.drop_duplicates().index.values\nfor i in idxs[:20]:\n print final_stats.ix[i]\n\nresults = final_stats.join(zscore_vals,rsuffix='_zscore').sort_values(['robbery_count','theft_bool'],ascending=False).drop_duplicates('name')\n\nresults.sort_values(['robbery_count','robbery_freq'],ascending=False).head()", "Now that we have some more confirmation on which visitors could be thieves, let's find out who the members of the gang are", "#First let's select just the data from when there was a theft and do the crosstab\nrobbery_dates = final_stats[final_stats.theft_bool==1]\ncross_tab = pd.crosstab(robbery_dates['name'],robbery_dates['visit_date'], margins=True)\n\n#Basically here we see a matrix-like dataset. 
Every cell tells us whether a visitor was at the club on that date\ncross_tab.head()\n\n# We do the crosstab because it is easier to find the gang members with data in this format!\ncross_tab_mat=cross_tab.reset_index().values\n\n#With data in this format we can calculate the correlation between visitors\n#We need to transpose the dataframe because the correlation is calculated between the columns\ncross_tab.transpose().corr().head()", "Unfortunately the data is too sparse and the correlation metric is not really helpful for understanding which visitors could be accomplices.\nThat is why we need to calculate the correlation manually.\nWe calculate how many times two visitors were at the club when there was a theft, and we do this for every pair of visitors", "#Let's count how many times two visitors were at the club on the same date, when a phone was stolen\nfrom itertools import combinations\ncriminal_friends = []\nfor r1,r2 in combinations([x for x in cross_tab_mat],2):\n matches = [i for i,j in zip(r1,r2) if i==j and i!=0]\n criminal_friends.append((r1[0],r2[0], len(matches)))\n\n#We can clearly see that Karen Keeney is the boss, and with her there are some other thieves!\ncriminal_friends_ranking = sorted(criminal_friends, key=lambda x:x[2],reverse=True)\ncriminal_friends_ranking[:50]\n\n# Let's select the top names as possible thieves\n# I use 21 as a threshold because the number of names retrieved is less than 20\ncriminal_names = set()\nfor t in criminal_friends_ranking:\n if t[2]>21:\n criminal_names.add(t[0])\n criminal_names.add(t[1])\nprint len(criminal_names)\n\n#Finally, here is the list of possible thieves of the nightclub!!\n# If I have to choose, I would say that Karen Keeney is the boss! 
And the following names are most\n# likely her accomplices\ncriminals = results[results.name.isin(criminal_names)].sort_values(['robbery_count','robbery_freq'],ascending=False)\n\ncriminals.head()\n\ncriminals[['name','dob']]", "Conclusion\nThe proposed list shows the possible thieves, but there is another road that could be explored.\nIt is possible that the thieves went to the club just once and stole the phones.\nAs a future exercise it would be interesting to cover this other scenario as well." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rsignell-usgs/notebook
HOPS/hops_velocity3.ipynb
mit
[ "The problem: CF compliant readers cannot read HOPS dataset directly.\nThe solution: read with the netCDF4-python raw interface and create a CF object from the data.\nNOTE: Ideally this should be a nco script that could be run as a CLI script and fix the files.\nHere I am using Python+iris. That works and could be written as a CLI script too.\nThe main advantage is that it takes care of the CF boilerplate.\nHowever, this approach is to \"heavy-weight\" to be applied in many variables and files.", "from netCDF4 import Dataset\n\n#url = ('http://geoport.whoi.edu/thredds/dodsC/usgs/data2/rsignell/gdrive/'\n# 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')\nurl = ('/usgs/data2/rsignell/gdrive/'\n 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')\nnc = Dataset(url)", "Extract lon, lat variables from vgrid2 and u, v variables from vbaro.\nThe goal is to split the joint variables into individual CF compliant phenomena.", "vtime = nc['time']\ncoords = nc['vgrid2']\nvbaro = nc['vbaro']", "Using iris to create the CF object.\nNOTE: ideally lon, lat should be DimCoord like time and not AuxCoord,\nbut iris refuses to create 2D DimCoord. Not sure if CF enforces that though.\nFirst the Coordinates.\nFIXME: change to a full time slice later!", "import iris\niris.FUTURE.netcdf_no_unlimited = True\n\nlongitude = iris.coords.AuxCoord(coords[:, :, 0],\n var_name='vlat',\n long_name='lon values',\n units='degrees')\n\nlatitude = iris.coords.AuxCoord(coords[:, :, 1],\n var_name='vlon',\n long_name='lat values',\n units='degrees')\n\n# Dummy Dimension coordinate to avoid default names.\n# (This is either a bug in CF or in iris. 
We should not need to do this!)\nlon = iris.coords.DimCoord(range(866),\n var_name='x',\n long_name='lon_range',\n standard_name='longitude')\n\nlat = iris.coords.DimCoord(range(1032),\n var_name='y',\n long_name='lat_range',\n standard_name='latitude')", "Now the phenomena.\nNOTE: You don't need the broadcast_to trick if saving more than 1 time step.\nHere I just wanted the single time snapshot to have the time dimension to create a full example.", "vbaro.shape\n\nimport numpy as np\n\nu_cubes = iris.cube.CubeList()\nv_cubes = iris.cube.CubeList()\n\n\nfor k in range(vbaro.shape[0]): # vbaro.shape[0]\n time = iris.coords.DimCoord(vtime[k],\n var_name='time',\n long_name=vtime.long_name,\n standard_name='time',\n units=vtime.units)\n \n u = vbaro[k, :, :, 0]\n u_cubes.append(iris.cube.Cube(np.broadcast_to(u, (1,) + u.shape),\n units=vbaro.units,\n long_name=vbaro.long_name,\n var_name='u',\n standard_name='barotropic_eastward_sea_water_velocity',\n dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)],\n aux_coords_and_dims=[(latitude, (1, 2)),\n (longitude, (1, 2))]))\n\n v = vbaro[k, :, :, 1]\n v_cubes.append(iris.cube.Cube(np.broadcast_to(v, (1,) + v.shape),\n units=vbaro.units,\n long_name=vbaro.long_name,\n var_name='v',\n standard_name='barotropic_northward_sea_water_velocity',\n dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)],\n aux_coords_and_dims=[(longitude, (1, 2)),\n (latitude, (1, 2))]))", "Join the individual CF phenomena into one dataset.", "u_cube = u_cubes.concatenate_cube()\nv_cube = v_cubes.concatenate_cube()\n\ncubes = iris.cube.CubeList([u_cube, v_cube])", "Save the CF-compliant file!", "iris.save(cubes, 'hops.nc')\n\n!ncdump -h hops.nc" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
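The `np.broadcast_to` trick the notebook uses to give each single-time snapshot a time dimension is worth isolating; this numpy-only sketch (with a made-up 2×3 field standing in for a `(1032, 866)` velocity slice) shows that the result is a cheap view rather than a copy:

```python
import numpy as np

# A single time snapshot of u has shape (ny, nx). Prepending a length-1
# time axis with np.broadcast_to yields a read-only *view* shaped
# (1, ny, nx) -- no data are copied -- which is what lets each snapshot
# be wrapped in a cube with a time dimension and later concatenated.
u = np.arange(6.0).reshape(2, 3)          # toy (ny=2, nx=3) field
u_t = np.broadcast_to(u, (1,) + u.shape)  # shape (1, 2, 3)
print(u_t.shape)
```

Because the broadcast result shares memory with the original array, building hundreds of per-timestep cubes this way stays memory-cheap.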
tensorflow/docs-l10n
site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Load pandas dataframes using tf.data\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">Download notebook</a>\n </td>\n</table>\n\nThis tutorial shows how to load pandas dataframes into a tf.data.Dataset.\nIt uses a small dataset provided by the Cleveland Clinic Foundation for Heart Disease. The CSV contains several hundred rows; each row describes a patient and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which is a binary classification problem.\nRead data using pandas", "!pip install tensorflow-gpu==2.0.0-rc1\nimport pandas as pd\nimport tensorflow as tf", "Download the csv file containing the heart dataset.", "csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')", "Read the csv file using pandas.", "df = pd.read_csv(csv_file)\n\ndf.head()\n\ndf.dtypes", "Convert the thal column, which is an object in the dataframe, to a discrete numerical value.", "df['thal'] = pd.Categorical(df['thal'])\ndf['thal'] = df.thal.cat.codes\n\ndf.head()", "Load data using tf.data.Dataset\nUse tf.data.Dataset.from_tensor_slices to read the values from a pandas dataframe.\nOne of the advantages of using a tf.data.Dataset is that it allows you to write simple yet highly efficient data pipelines. Read the loading data guide to find out more.", "target = df.pop('target')\n\ndataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))\n\nfor feat, targ in dataset.take(5):\n print ('Features: {}, Target: {}'.format(feat, targ))", "Since pd.Series implements the __array__ protocol, it can be used transparently nearly anywhere you would use np.array or tf.Tensor.", "tf.constant(df['thal'])", "Shuffle and batch the dataset.", "train_dataset = dataset.shuffle(len(df)).batch(1)", "Create and train a model", "def get_compiled_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid')\n ])\n\n model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n return model\n\nmodel = get_compiled_model()\nmodel.fit(train_dataset, epochs=15)", "Alternative to feature columns\nPassing a dictionary as an input to a model is as easy as creating a matching dictionary of tf.keras.layers.Input layers, applying any pre-processing and using the functional api. You can use this as an alternative to feature columns.", "inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}\nx = tf.stack(list(inputs.values()), axis=-1)\n\nx = tf.keras.layers.Dense(10, activation='relu')(x)\noutput = tf.keras.layers.Dense(1, activation='sigmoid')(x)\n\nmodel_func = tf.keras.Model(inputs=inputs, outputs=output)\n\nmodel_func.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])", "When used with tf.data, the easiest way to preserve the column structure of a pd.DataFrame is to convert the pd.DataFrame to a dict and slice that dictionary.", "dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)\n\nfor dict_slice in dict_slices.take(1):\n print (dict_slice)\n\nmodel_func.fit(dict_slices, epochs=15)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
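The dict-slicing idea in the last cells of the tutorial can be seen without TensorFlow: `DataFrame.to_dict('list')` keeps the column names as keys, which is exactly the structure that `tf.data.Dataset.from_tensor_slices` turns into dict-shaped dataset elements. A minimal pandas-only sketch, using two invented rows in the style of the heart data:

```python
import pandas as pd

# Two made-up patients with a couple of the heart-dataset columns.
df = pd.DataFrame({"age": [63, 67], "thal": [2, 3]})

# orient='list' maps each column name to the list of its values,
# preserving the column structure that feature-dict models rely on.
as_dict = df.to_dict("list")
print(as_dict)
```

Passing `as_dict` (instead of `df.values`) to `from_tensor_slices` is what makes the dataset yield `{'age': ..., 'thal': ...}` dictionaries that match the model's named `Input` layers.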
ShubhamDebnath/Coursera-Machine-Learning
Course 4/Autonomous driving application - Car detection - v3.ipynb
mit
[ "Autonomous driving - Car detection\nWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242). \nYou will learn to:\n- Use object detection on a car detection dataset\n- Deal with bounding boxes\nRun the following cell to load the packages and dependencies that are going to be useful for your journey!", "import argparse\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\nimport scipy.io\nimport scipy.misc\nimport numpy as np\nimport pandas as pd\nimport PIL\nimport tensorflow as tf\nfrom keras import backend as K\nfrom keras.layers import Input, Lambda, Conv2D\nfrom keras.models import load_model, Model\nfrom yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes\nfrom yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body\n\n%matplotlib inline", "Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).\n1 - Problem Statement\nYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. \n<center>\n<video width=\"400\" height=\"200\" src=\"nb_images/road_video_compressed2.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! 
Drive.ai is a company building the brains of self-driving vehicles.\n</center></caption>\n<img src=\"nb_images/driveai.png\" style=\"width:100px;height:100;\">\nYou've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.\n<img src=\"nb_images/box_label.png\" style=\"width:500px;height:250;\">\n<caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption>\nIf you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. \nIn this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. \n2 - YOLO\nYOLO (\"you only look once\") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm \"only looks once\" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.\n2.1 - Model details\nFirst things to know:\n- The input is a batch of images of shape (m, 608, 608, 3)\n- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. \nWe will use 5 anchor boxes. 
So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).\nLet's look in greater detail at what this encoding represents. \n<img src=\"nb_images/architecture.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption>\nIf the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.\nSince we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.\nFor simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).\n<img src=\"nb_images/flatten.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 3 </u>: Flattening the last two dimensions<br> </center></caption>\nNow, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.\n<img src=\"nb_images/probability_extraction.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption>\nHere's one way to visualize what YOLO is predicting on an image:\n- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). 
\n- Color that grid cell according to what object that grid cell considers the most likely.\nDoing this results in this picture: \n<img src=\"nb_images/proba_map.png\" style=\"width:300px;height:300;\">\n<caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>\nNote that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. \nAnother way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: \n<img src=\"nb_images/anchor_map.png\" style=\"width:200px;height:200;\">\n<caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>\nIn the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: \n- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)\n- Select only one box when several boxes overlap with each other and detect the same object.\n2.2 - Filtering with a threshold on class scores\nYou are going to apply a first filter by thresholding. You would like to get rid of any box for which the class \"score\" is less than a chosen threshold. \nThe model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. 
It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:\n- box_confidence: tensor of shape $(19 \\times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.\n- boxes: tensor of shape $(19 \\times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.\n- box_class_probs: tensor of shape $(19 \\times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.\nExercise: Implement yolo_filter_boxes().\n1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: \npython\na = np.random.randn(19*19, 5, 1)\nb = np.random.randn(19*19, 5, 80)\nc = a * b # shape of c will be (19*19, 5, 80)\n2. For each box, find:\n - the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)\n - the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)\n3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] &lt; 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep. \n4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. 
(Hint)\nReminder: to call a Keras function, you should use K.function(...).", "# GRADED FUNCTION: yolo_filter_boxes\n\ndef yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):\n \"\"\"Filters YOLO boxes by thresholding on object and class confidence.\n \n Arguments:\n box_confidence -- tensor of shape (19, 19, 5, 1)\n boxes -- tensor of shape (19, 19, 5, 4)\n box_class_probs -- tensor of shape (19, 19, 5, 80)\n threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box\n \n Returns:\n scores -- tensor of shape (None,), containing the class probability score for selected boxes\n boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes\n classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes\n \n Note: \"None\" is here because you don't know the exact number of selected boxes, as it depends on the threshold. \n For example, the actual output size of scores would be (10,) if there are 10 boxes.\n \"\"\"\n \n # Step 1: Compute box scores\n ### START CODE HERE ### (≈ 1 line)\n box_scores = box_confidence * box_class_probs\n ### END CODE HERE ###\n \n # Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score\n ### START CODE HERE ### (≈ 2 lines)\n box_classes = K.argmax(box_scores, axis = -1)\n box_class_scores = K.max(box_scores, axis = -1)\n ### END CODE HERE ###\n \n # Step 3: Create a filtering mask based on \"box_class_scores\" by using \"threshold\". 
The mask should have the\n # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)\n ### START CODE HERE ### (≈ 1 line)\n filtering_mask = box_class_scores >= threshold\n ### END CODE HERE ###\n \n # Step 4: Apply the mask to scores, boxes and classes\n ### START CODE HERE ### (≈ 3 lines)\n scores = tf.boolean_mask(box_class_scores, filtering_mask)\n boxes = tf.boolean_mask(boxes, filtering_mask)\n classes = tf.boolean_mask(box_classes, filtering_mask)\n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_a:\n box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)\n boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)\n box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)\n scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.shape))\n print(\"boxes.shape = \" + str(boxes.shape))\n print(\"classes.shape = \" + str(classes.shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 10.7506\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [ 8.42653275 3.27136683 -0.5313437 -4.94137383]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n 7\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (?,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (?, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (?,)\n </td>\n </tr>\n\n</table>\n\n2.3 - Non-max suppression\nEven after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). 
\n<img src=\"nb_images/non-max-suppression.png\" style=\"width:500px;height:400;\">\n<caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>\nNon-max suppression uses the very important function called \"Intersection over Union\", or IoU.\n<img src=\"nb_images/iou.png\" style=\"width:500px;height:400;\">\n<caption><center> <u> Figure 8 </u>: Definition of \"Intersection over Union\". <br> </center></caption>\nExercise: Implement iou(). Some hints:\n- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.\n- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1).\n- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:\n - xi1 = maximum of the x1 coordinates of the two boxes\n - yi1 = maximum of the y1 coordinates of the two boxes\n - xi2 = minimum of the x2 coordinates of the two boxes\n - yi2 = minimum of the y2 coordinates of the two boxes\n- In order to compute the intersection area, you need to make sure the height and width of the intersection are positive, otherwise the intersection area should be zero. 
Use max(height, 0) and max(width, 0).\nIn this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner.", "# GRADED FUNCTION: iou\n\ndef iou(box1, box2):\n \"\"\"Implement the intersection over union (IoU) between box1 and box2\n \n Arguments:\n box1 -- first box, list object with coordinates (x1, y1, x2, y2)\n box2 -- second box, list object with coordinates (x1, y1, x2, y2)\n \"\"\"\n\n # Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.\n ### START CODE HERE ### (≈ 5 lines)\n xi1 = np.max([box1[0], box2[0]])\n yi1 = np.max([box1[1], box2[1]])\n xi2 = np.min([box1[2], box2[2]])\n yi2 = np.min([box1[3], box2[3]])\n inter_area = np.max([yi2 - yi1, 0]) * np.max([xi2 - xi1, 0])\n ### END CODE HERE ###\n\n # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)\n ### START CODE HERE ### (≈ 3 lines)\n box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])\n box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])\n union_area = box1_area + box2_area - inter_area\n ### END CODE HERE ###\n \n # compute the IoU\n ### START CODE HERE ### (≈ 1 line)\n iou = inter_area / union_area\n ### END CODE HERE ###\n \n return iou\n\nbox1 = (2, 1, 4, 3)\nbox2 = (1, 2, 3, 4) \nprint(\"iou = \" + str(iou(box1, box2)))", "Expected Output:\n<table>\n <tr>\n <td>\n **iou = **\n </td>\n <td>\n 0.14285714285714285\n </td>\n </tr>\n\n</table>\n\nYou are now ready to implement non-max suppression. The key steps are: \n1. Select the box that has the highest score.\n2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.\n3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.\nThis will remove all boxes that have a large overlap with the selected boxes. 
Only the \"best\" boxes remain.\nExercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation):\n- tf.image.non_max_suppression()\n- K.gather()", "# GRADED FUNCTION: yolo_non_max_suppression\n\ndef yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):\n \"\"\"\n Applies Non-max suppression (NMS) to a set of boxes\n \n Arguments:\n scores -- tensor of shape (None,), output of yolo_filter_boxes()\n boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)\n classes -- tensor of shape (None,), output of yolo_filter_boxes()\n max_boxes -- integer, maximum number of predicted boxes you'd like\n iou_threshold -- real value, \"intersection over union\" threshold used for NMS filtering\n \n Returns:\n scores -- tensor of shape (, None), predicted score for each box\n boxes -- tensor of shape (4, None), predicted box coordinates\n classes -- tensor of shape (, None), predicted class for each box\n \n Note: The \"None\" dimension of the output tensors obviously has to be less than max_boxes. Note also that this\n function will transpose the shapes of scores, boxes, classes. 
This is done for convenience.\n \"\"\"\n \n max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()\n K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor\n \n # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep\n ### START CODE HERE ### (≈ 1 line)\n nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)\n ### END CODE HERE ###\n \n # Use K.gather() to select only nms_indices from scores, boxes and classes\n ### START CODE HERE ### (≈ 3 lines)\n scores = K.gather(scores, nms_indices)\n boxes = K.gather(boxes, nms_indices)\n classes = K.gather(classes, nms_indices)\n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_b:\n scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)\n boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)\n classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)\n scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.eval().shape))\n print(\"boxes.shape = \" + str(boxes.eval().shape))\n print(\"classes.shape = \" + str(classes.eval().shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 6.9384\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [-5.299932 3.13798141 4.45036697 0.95942086]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n -2.24527\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (10, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n\n</table>\n\n2.4 Wrapping up the filtering\nIt's time to implement a 
function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. \nExercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): \npython\nboxes = yolo_boxes_to_corners(box_xy, box_wh)\nwhich converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes\npython\nboxes = scale_boxes(boxes, image_shape)\nYOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. \nDon't worry about these two functions; we'll show you where they need to be called.", "# GRADED FUNCTION: yolo_eval\n\ndef yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):\n \"\"\"\n Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.\n \n Arguments:\n yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:\n box_confidence: tensor of shape (None, 19, 19, 5, 1)\n box_xy: tensor of shape (None, 19, 19, 5, 2)\n box_wh: tensor of shape (None, 19, 19, 5, 2)\n box_class_probs: tensor of shape (None, 19, 19, 5, 80)\n image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) 
(has to be float32 dtype)\n max_boxes -- integer, maximum number of predicted boxes you'd like\n score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box\n iou_threshold -- real value, \"intersection over union\" threshold used for NMS filtering\n \n Returns:\n scores -- tensor of shape (None, ), predicted score for each box\n boxes -- tensor of shape (None, 4), predicted box coordinates\n classes -- tensor of shape (None,), predicted class for each box\n \"\"\"\n \n ### START CODE HERE ### \n \n # Retrieve outputs of the YOLO model (≈1 line)\n box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs\n\n # Convert boxes to be ready for filtering functions \n boxes = yolo_boxes_to_corners(box_xy, box_wh)\n\n # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)\n scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)\n \n # Scale boxes back to original image shape.\n boxes = scale_boxes(boxes, image_shape)\n\n # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)\n scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)\n \n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_b:\n yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))\n scores, boxes, classes = yolo_eval(yolo_outputs)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.eval().shape))\n print(\"boxes.shape = \" + 
str(boxes.eval().shape))\n print(\"classes.shape = \" + str(classes.eval().shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 138.791\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n 54\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (10, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n\n</table>\n\n<font color='blue'>\nSummary for YOLO:\n- Input image (608, 608, 3)\n- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. \n- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):\n - Each cell in a 19x19 grid over the input image gives 425 numbers. \n - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. \n - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect\n- You then select only a few boxes based on:\n - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold\n - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes\n- This gives you YOLO's final output. \n3 - Test YOLO pretrained model on images\nIn this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.", "sess = K.get_session()", "3.1 - Defining classes, anchors and image shape.\nRecall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files \"coco_classes.txt\" and \"yolo_anchors.txt\". 
Let's load these quantities into the model by running the next cell. \nThe car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.", "class_names = read_classes(\"model_data/coco_classes.txt\")\nanchors = read_anchors(\"model_data/yolo_anchors.txt\")\nimage_shape = (720., 1280.) ", "3.2 - Loading a pretrained model\nTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in \"yolo.h5\". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the \"YOLOv2\" model, but we will more simply refer to it as \"YOLO\" in this notebook.) Run the cell below to load the model from this file.", "yolo_model = load_model(\"model_data/yolo.h5\")", "This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.", "yolo_model.summary()", "Note: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.\nReminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).\n3.3 - Convert output of the model to usable bounding box tensors\nThe output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.", "yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))", "You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.\n3.4 - Filtering boxes\nyolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. 
Let's now call yolo_eval, which you had previously implemented, to do this.", "scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)", "3.5 - Run the graph on an image\nLet the fun begin. You have created a (sess) graph that can be summarized as follows:\n\n<font color='purple'> yolo_model.input </font> is given to yolo_model. The model is used to compute the output <font color='purple'> yolo_model.output </font>\n<font color='purple'> yolo_model.output </font> is processed by yolo_head. It gives you <font color='purple'> yolo_outputs </font>\n<font color='purple'> yolo_outputs </font> goes through a filtering function, yolo_eval. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>\n\nExercise: Implement predict() which runs the graph to test YOLO on an image.\nYou will need to run a TensorFlow session to have it compute scores, boxes, classes.\nThe code below also uses the following function:\npython\nimage, image_data = preprocess_image(\"images/\" + image_file, model_image_size = (608, 608))\nwhich outputs:\n- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.\n- image_data: a numpy-array representing the image. This will be the input to the CNN.\nImportant note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.", "def predict(sess, image_file):\n \"\"\"\n Runs the graph stored in \"sess\" to predict boxes for \"image_file\". 
Prints and plots the predictions.\n \n Arguments:\n sess -- your tensorflow/Keras session containing the YOLO graph\n image_file -- name of an image stored in the \"images\" folder.\n \n Returns:\n out_scores -- tensor of shape (None, ), scores of the predicted boxes\n out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes\n out_classes -- tensor of shape (None, ), class index of the predicted boxes\n \n Note: \"None\" actually represents the number of predicted boxes; it varies between 0 and max_boxes. \n \"\"\"\n\n # Preprocess your image\n image, image_data = preprocess_image(\"images/\" + image_file, model_image_size = (608, 608))\n\n # Run the session with the correct tensors and choose the correct placeholders in the feed_dict.\n # You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})\n ### START CODE HERE ### (≈ 1 line)\n out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict = {yolo_model.input: image_data, K.learning_phase(): 0})\n ### END CODE HERE ###\n\n # Print predictions info\n print('Found {} boxes for {}'.format(len(out_boxes), image_file))\n # Generate colors for drawing bounding boxes.\n colors = generate_colors(class_names)\n # Draw bounding boxes on the image file\n draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)\n # Save the predicted bounding box on the image\n image.save(os.path.join(\"out\", image_file), quality=90)\n # Display the results in the notebook\n output_image = scipy.misc.imread(os.path.join(\"out\", image_file))\n imshow(output_image)\n \n return out_scores, out_boxes, out_classes", "Run the following cell on the \"test.jpg\" image to verify that your function is correct.", "out_scores, out_boxes, out_classes = predict(sess, \"test.jpg\")", "Expected Output:\n<table>\n <tr>\n <td>\n **Found 7 boxes for test.jpg**\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.60 (925, 285) (1045, 374)\n </td>\n </tr>\n <tr>\n <td>\n 
**car**\n </td>\n <td>\n 0.66 (706, 279) (786, 350)\n </td>\n </tr>\n <tr>\n <td>\n **bus**\n </td>\n <td>\n 0.67 (5, 266) (220, 407)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.70 (947, 324) (1280, 705)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.74 (159, 303) (346, 440)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.80 (761, 282) (942, 412)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.89 (367, 300) (745, 648)\n </td>\n </tr>\n</table>\n\nThe model you've just run is actually able to detect 80 different classes listed in \"coco_classes.txt\". To test the model on your own images:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go to your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the code cell above\n 4. Run the code and see the output of the algorithm!\nIf you were to run your session in a for loop over all your images, here's what you would get:\n<center>\n<video width=\"400\" height=\"200\" src=\"nb_images/pred_video_compressed2.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks drive.ai for providing this dataset! </center></caption>\n<font color='blue'>\nWhat you should remember:\n- YOLO is a state-of-the-art object detection model that is fast and accurate\n- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume. \n- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.\n- You filter through all the boxes using non-max suppression. 
Specifically: \n - Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes\n - Intersection over Union (IoU) thresholding to eliminate overlapping boxes\n- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise. \nReferences: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's github repository. The pretrained weights used in this exercise came from the official YOLO website. \n- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - You Only Look Once: Unified, Real-Time Object Detection (2015)\n- Joseph Redmon, Ali Farhadi - YOLO9000: Better, Faster, Stronger (2016)\n- Allan Zelener - YAD2K: Yet Another Darknet 2 Keras\n- The official YOLO website (https://pjreddie.com/darknet/yolo/) \nCar detection dataset:\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\" /></a><br /><span xmlns:dct=\"http://purl.org/dc/terms/\" property=\"dct:title\">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0 International License</a>. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
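As a footnote to the YOLO notebook above: the two filtering steps in its summary (score-thresholding and non-max suppression) can be sketched in plain Python. This is an illustrative sketch only, not the `yolo_eval` implementation used in the notebook (which operates on TensorFlow tensors); boxes are assumed here to be given as `(x1, y1, x2, y2)` corner coordinates.

```python
def iou(box1, box2):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2) corners."""
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop remaining boxes that
    overlap it by more than iou_threshold, and repeat. Returns kept indices."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 2, 2), (0, 0, 2, 1.9), (5, 5, 6, 6)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # second box overlaps the first too much -> [0, 2]
```

Greedy NMS like this keeps the highest-scoring box, discards every remaining box whose IoU with it exceeds the threshold, and repeats until no boxes are left.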
ramseylab/networkscompbio
class06_degdist_python3.ipynb
apache-2.0
[ "CS446/546 - Class Session 6 - Degree Distribution\nIn this class session we are going to plot the degree distribution of the undirected human\nprotein-protein interaction network (PPI), without using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder) and we'll \nmanually compute the degree of each vertex (protein) in the network. We'll then compute\nthe count N(k) of vertices that have a given vertex degree k, for all k values.\nFinally, we'll plot the degree distribution and discuss whether it is consistent with the \nresults obtained in the Jeong et al. article for the yeast PPI. \nWe'll start by loading all of the Python modules that we will need for this notebook. Because we'll be calling a bunch of functions from numpy and matplotlib.pyplot, we'll alias them as np and plt, respectively.", "import pandas\nimport collections\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats\nimport igraph", "Step 1: load in the SIF file as a pandas data frame using pandas.read_csv. Make sure the column names of your data frame are species1, interaction_type, and species2. Save the data frame as the object sif_data.", "sif_data = pandas.read_csv(\"shared/pathway_commons.sif\",\n sep=\"\\t\", names=[\"species1\",\"interaction_type\",\"species2\"])", "Step 2: restrict the interactions to protein-protein undirected (\"in-complex-with\", \"interacts-with\"). The restricted data frame should be called interac_ppi. Then we will make a copy using copy so interac_ppi is independent of sif_data which will be convenient for this exercise.", "interaction_types_ppi = set([\"interacts-with\",\n \"in-complex-with\"])\ninterac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)].copy()", "Step 3: for each interaction, reorder species1 and species2 (if necessary) so that\nspecies1 &lt; species2 (in terms of the species names, in lexicographic order). 
You can make a boolean vector boolean_vec containing (for each row of the data frame interac_ppi) True if species2 &gt; species1 (by lexicographic order) for that row, or False otherwise. You can then use the loc method on the data frame, to select rows based on boolean_vec and the two columns that you want (species1 and species2). Thanks to Garrett Bauer for suggesting this approach (which is more elegant than looping over all rows):", "boolean_vec = interac_ppi['species1'] > interac_ppi['species2']\ninterac_ppi.loc[boolean_vec, ['species1', 'species2']] = interac_ppi.loc[boolean_vec, ['species2', 'species1']].values", "Since iterating is reasonably fast in Python, you could also do this using a for loop through all of the rows of the data frame, swapping species1 and species2 entries as needed (and in-place in the data frame) so that in the resulting data frame interac_ppi satisfies species1 &lt; species2 for all rows.", "for rowtuple in interac_ppi.head().iterrows():\n row = rowtuple[1]\n rowid = rowtuple[0]\n print(rowid)\n if row['species1'] > row['species2']:\n interac_ppi['species1'][rowid] = row['species2'] \n interac_ppi['species2'][rowid] = row['species1']\n\ntype(interac_ppi.head())\n\nfor i in range(0, interac_ppi.shape[0]):\n if interac_ppi.iat[i,0] > interac_ppi.iat[i,2]:\n temp_name = interac_ppi.iat[i,0]\n interac_ppi.set_value(i, 'species1', interac_ppi.iat[i,2])\n interac_ppi.set_value(i, 'species2', temp_name)", "Step 4: Restrict the data frame to only the columns species1 and species2. Use the drop_duplicates method to subset the rows of the resulting two-column data frame to only unique rows. Assign the resulting data frame object to have the name interac_ppi_unique. 
This is basically selecting only unique pairs of proteins, regardless of interaction type.", "interac_ppi_unique = interac_ppi[[\"species1\",\"species2\"]].drop_duplicates()", "Step 5: compute the degree of each vertex (though we will not associate the vertex degrees with vertex names here, since for this exercise we only need the vector of vertex degree values, not the associated vertex IDs). You'll want to create an object called vertex_degrees_ctr which is of class collections.Counter. You'll want to name the final list of vertex degrees, vertex_degrees.", "vertex_degrees_ctr = collections.Counter()\nallproteins = interac_ppi_unique[\"species1\"].tolist() + interac_ppi_unique[\"species2\"].tolist()\nfor proteinname in allproteins:\n vertex_degrees_ctr.update([proteinname])\nvertex_degrees = list(vertex_degrees_ctr.values())", "Let's print out the vertex degrees of the first 10 vertices, in whatever the key order is. Pythonistas -- anyone know of a less convoluted way to do this?", "dict(list(dict(vertex_degrees_ctr).items())[0:9])", "Let's print out the first ten entries of the vertex_degrees list. Note that we don't expect it to be in the same order as the output from the previous command above, since dict changes the order in the above.", "vertex_degrees[0:9]", "Step 6: Calculate the histogram of N(k) vs. k, using 30 bins, using plt.hist. You'll probably want to start by making a numpy.array from your vertex_degrees. Call the resulting object from plt.hist, hist_res. Obtain a numpy array of the bin counts as element zero from hist_res (name this object hist_counts) and obtain a numpy array of the bin centers (which are k values) as element one from hist_res (name this object hist_breaks). Finally, you want the k values of the centers of the bins, not the breakpoint values. So you'll have to do some arithmetic to go from the 31 k values of the bin breakpoints, to a numpy array of the 30 k values of the centers of the bins. 
You should call that object kvals.", "nbins=30\nhist_res = plt.hist(np.array(vertex_degrees), bins=nbins)\nhist_counts = hist_res[0]\nhist_breaks = hist_res[1]\nkvals = 0.5*(hist_breaks[0:nbins]+hist_breaks[1:(nbins+1)])", "Let's print the k values of the bin centers:", "kvals", "Let's print the histogram bin counts:", "hist_counts", "Step 7: Plot N(k) vs. k, on log-log scale (using only the first 14 points, which is plenty sufficient to see the approximately scale-free degree distribution and where it becomes exponentially suppressed at high k). For this you'll use plt.loglog. You'll probably want to adjust the x-axis limits using plt.gca().set_xlim(). To see the plot, you'll have to do plt.show().", "plt.loglog(kvals[1:14],\n hist_counts[1:14], \"o\")\nplt.xlabel(\"k\")\nplt.ylabel(\"N(k)\")\nplt.gca().set_xlim([50, 2000])\nplt.show()", "Step 8: Do a linear fit to the log10(N(k)) vs. log10(k) data (just over the range in which the relationship appears to be linear, which is the first three points). You'll want to use scipy.stats.linregress to do the linear regression. Don't forget to log10-transform the data using np.log10.", "scipy.stats.linregress(np.log10(kvals[0:3]), np.log10(hist_counts[0:3]))", "Slope is -1.87 with SE 0.084, i.e., gamma = 1.87 with a 95% CI of about +/- 0.17.\nNow let's compute the slope for the degree distribution Fig. 1b in the Jeong et al. article, for the yeast PPI. The change in ordinate over the linear range is about -6.5 in units of natural logarithm. The change in abscissa over the linear range is approximately log(45)-log(2), so we can compute the Jeong et al. slope thus:", "jeong_slope = -6.5/(np.log(45)-np.log(2))\nprint(\"%.2f\" % jeong_slope)", "How close was your slope from the human PPI, to the slope for the yeast PPI from the Jeong et al. 
article?\nNow we'll do the same thing in just a few lines of igraph code", "g = igraph.Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)\n\nxs, ys = zip(*[(left, count) for left, _, count in \n g.degree_distribution().bins()])\nplt.loglog(xs, ys)\nplt.show()\n\nigraph.statistics.power_law_fit(g.degree())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
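As a footnote to the degree-distribution notebook above: Steps 5 and 6 (counting vertex degrees and tabulating N(k)) can be condensed into a single helper. A minimal sketch, assuming the edge list already contains unique undirected pairs, as in `interac_ppi_unique`:

```python
import collections

def degree_distribution(edges):
    """Given unique undirected edges as (u, v) pairs,
    return ({vertex: degree}, {k: N(k)})."""
    degree = collections.Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # N(k): how many vertices have degree exactly k
    nk = collections.Counter(degree.values())
    return dict(degree), dict(nk)

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d")]
deg, nk = degree_distribution(edges)
print(deg)  # {'a': 3, 'b': 2, 'c': 2, 'd': 1}
print(nk)   # {3: 1, 2: 2, 1: 1}
```

The `nk` dictionary is exactly what gets binned and plotted on log-log axes in Steps 6 and 7.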
brucefan1983/GPUMD
examples/nep_potentials/PbTe/train/nep_tutorial.ipynb
gpl-3.0
[ "NEP tutorial\n1. Introduction\n\nIn this tutorial, we will show how to use the nep executable to train a NEP potential and check the results.\nThe NEP potential was first introduced in GPUMD-v2.6 and this version will be called NEP1. It corresponds to the following paper: \nZheyong Fan, Zezhu Zeng, Cunzhi Zhang, Yanzhou Wang, Keke Song, Haikuan Dong, Yue Chen, and Tapio Ala-Nissila, Neuroevolution machine learning potentials: Combining high accuracy and low cost in atomistic simulations and application to heat transport, Phys. Rev. B 104, 104309 (2021). https://doi.org/10.1103/PhysRevB.104.104309\nFrom GPUMD-v2.7 to GPUMD-v3.0 we have updated the NEP potential to NEP2, which corresponds to the following paper:\nZheyong Fan, Improving the accuracy of the neuroevolution machine learning potential for multi-component systems, Journal of Physics: Condensed Matter 34 125902 (2022). https://doi.org/10.1088/1361-648X/ac462b\nNow we are using GPUMD-v3.1, in which we have updated the NEP potential to NEP3. A corresponding manuscript is in preparation.\n\n2. Prerequisites\n\nYou need to have access to an Nvidia GPU with compute capability >= 3.5. CUDA 9.0 or higher is also needed to compile the GPUMD package.\nYou can download GPUMD-v3.1, unpack it, go to the src directory, and type make -j to compile. Then, you should get two executables: gpumd and nep in the src directory.\n\n3. Train a NEP potential for PbTe\n\nTo train a NEP potential, one must prepare three input files: train.in, test.in and nep.in. The train.in file contains all the training data, the test.in file contains all the testing data, and the nep.in file contains some controlling parameters.\n\n3.1. 
Prepare the train.in and test.in files\n\nThe train.in file should be prepared according to the documentation: https://gpumd.zheyongfan.org/index.php/The_train.in_input_file\nFor the example in this tutorial, we have already prepared the train.in file in the current folder.\nThis is the training data set for PbTe as used in the NEP1 and NEP2 papers.\nIn this example, there are only energy and force training data. In general, one can also have virial in train.in.\nWe used the VASP code to calculate the training data, but nep does not care about this. You can generate the data in train.in using any method that works for you. For example, before doing new DFT calculations, you can first check if there are training data publicly available and then try to convert them into the format as required by train.in. There are quite a few tools (https://github.com/brucefan1983/GPUMD/tree/master/tools/nep_related) available for this purpose.\nFor simplicity, the test.in file has the same data as in the train.in file.\n\n3.2. Prepare the nep.in file.\n\nFirst, study the document about this input file: https://gpumd.zheyongfan.org/index.php/The_nep.in_input_file\nFor the example in this tutorial, we have prepared the nep.in file in the current folder. \nThe nep.in file reads:\ntype 2 Te Pb\nversion 3\ncutoff 8 4\nn_max 4 4\nl_max 4 2\nneuron 30\nbatch 25\ngeneration 10000\nWe explain it line by line below:\nThere are $N_{\\rm typ}=2$ atom types in the training set: Te and Pb. The user can also write Pb first and Te next. The code will remember the order of the atom types here and record it into the NEP potential file nep.txt. 
Therefore, the user does not need to remember the order of the atom types here.\nThe NEP version is 3, which means NEP3 as introduced in GPUMD-v3.1.\nThe radial and angular cutoff distances are $r_{\\rm c}^{\\rm R}=8$ angstrom and $r_{\\rm c}^{\\rm A}=4$ angstrom, respectively.\nThe $n_{\\rm max}$ parameters for the radial and angular descriptor parts are $n_{\\rm max}^{\\rm R}=4$ and $n_{\\rm max}^{\\rm A}=4$, respectively.\nThe $l_{\\rm max}$ parameters for the 3-body and 4-body descriptor parts are $l_{\\rm max}^{\\rm 3b}=4$ and $l_{\\rm max}^{\\rm 4b}=2$, respectively.\nThe number of neurons in the hidden layer (there is only one hidden layer in NEP) is $N_{\\rm neu}=30$.\nThe batch size is $N_{\\rm bat}=25$.\nThe maximum number of generations is $N_{\\rm gen}=10^4$.\nAll the other parameters will have default values (please check the screen output).\n\n3.3. Run the nep executable\n\n\nWe have prepared a driver input file input_nep.txt in the examples/nep_potentials folder. This driver input file reads:\n 1\nexamples/nep_potentials/PbTe/train\n\n\nNow one can run the nep executable to train a NEP potential for PbTe. To do this, first go to the directory where you can see src. \n\nIn Linux, type this to run:\nsrc/nep &lt; examples/nep_potentials/input_nep.txt\n\nIn Windows, type this to run:\nsrc\\nep &lt; examples/nep_potentials/input_nep.txt\n\n\nThis example takes about 11 min to run using my laptop with a GeForce RTX 2070 GPU card. I have restarted the run twice (see the end of this tutorial for information about restarting).\n\n\n4. Check the training results\n\nAfter running the nep executable, there will be some output on the screen. We encourage the user to read the screen output carefully. It can help to understand the calculation flow of nep. \nSome files will be generated in the folder containing the train.in and nep.in files and will be updated every 100 generations (some will be updated every 1000 generations). 
Therefore, one can check the results even before finishing the maximum number of generations as set in nep.in. \nIf the user has not studied the documentation of the output files generated by nep, it is time to read it here: https://gpumd.zheyongfan.org/index.php/The_output_files_for_the_nep_executable\nWe will use Python to visualize the results in some of the output files next. We first load pylab, which we will need.", "from pylab import *", "4.1. Checking the loss.out file.\n\nWe see that the $\\mathcal{L}_1$ and $\\mathcal{L}_2$ regularization loss functions first increase and then decrease, which indicates the effectiveness of the regularization.\nThe energy loss is the root mean square error (RMSE) of energy per atom, which converges to about 0.4 meV/atom. \nThe force loss is the RMSE of force components, which converges to about 36 meV/Angstrom.", "loss = loadtxt('loss.out')\nloglog(loss[:, 1:6])\nloglog(loss[:, 7:9])\nxlabel('Generation/100')\nylabel('Loss')\nlegend(['Total', 'L1-regularization', 'L2-regularization', 'Energy-train', 'Force-train', 'Energy-test', 'Force-test'])\ntight_layout()", "4.2. Checking the energy_test.out file\n\nThe dots are the raw data and the line represents the identity function used to guide the eyes.", "energy_test = loadtxt('energy_test.out')\nplot(energy_test[:, 1], energy_test[:, 0], '.')\nplot(linspace(-3.85,-3.69), linspace(-3.85,-3.69), '-')\nxlabel('DFT energy (eV/atom)')\nylabel('NEP energy (eV/atom)')\ntight_layout()", "4.3. Checking the force_test.out file\n\nThe dots are the raw data and the line represents the identity function used to guide the eyes.", "force_test = loadtxt('force_test.out')\nplot(force_test[:, 3:6], force_test[:, 0:3], '.')\nplot(linspace(-4,4), linspace(-4,4), '-')\nxlabel('DFT force (eV/A)')\nylabel('NEP force (eV/A)')\nlegend(['x direction', 'y direction', 'z direction'])\ntight_layout()", "4.4. 
Checking the virial_test.out file\n\nIn general, one can similarly check the virial_test.out file, but in this particular example, virial data were not used in the training process (no virial data exist in train.in). \nHowever, virial_test.out still contains the predicted virial data for each structure in the training set.", "virial_test = loadtxt('virial_test.out')\nplot(virial_test[:, 1], virial_test[:, 0], '.')\nplot(linspace(-2,2), linspace(-2,2), '-')\nxlabel('DFT virial (eV/atom)')\nylabel('NEP virial (eV/atom)')\ntight_layout()", "4.5. 
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
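As a footnote to the NEP tutorial above: the energy and force losses it quotes are root mean square errors, which can be recomputed directly from the `*_test.out` files. A minimal sketch; the column layout (NEP predictions in the first columns, DFT references after) is taken from the plotting cells above.

```python
import numpy as np

def rmse(predicted, reference):
    """Root mean square error, the quantity reported for the energy and force losses."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Toy numbers here; with the real files one would do, e.g.:
#   force_test = np.loadtxt('force_test.out')
#   rmse(force_test[:, 0:3], force_test[:, 3:6])
print(rmse([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) = 3.5355...
```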
karthikrangarajan/intro-to-sklearn
07.Pipelines and Parameter Tuning.ipynb
bsd-3-clause
[ "Search for best parameters and create a pipeline\nEasy reading...create and use a pipeline\n\n<b>Pipelining</b> (as an aside to this section)\n* Pipeline(steps=[...]) - where steps can be a list of processes through which to put data or a dictionary which includes the parameters for each step as values\n* For example, here we do a transformation (SelectKBest) and a classification (SVC) all at once in a pipeline we set up.\n\nSee a full example here\nNote: If you wish to perform <b>multiple transformations</b> in your pipeline try FeatureUnion", "from sklearn.cross_validation import train_test_split\nfrom sklearn.svm import SVC\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_selection import SelectKBest, chi2\nfrom sklearn.datasets import load_iris\n\niris = load_iris()\nX, y = iris.data, iris.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)\n\n# a feature selection instance\nselection = SelectKBest(chi2, k = 2)\n\n# classification instance\nclf = SVC(kernel = 'linear')\n\n# make a pipeline\npipeline = Pipeline([(\"feature selection\", selection), (\"classification\", clf)])\n\n# train the model\npipeline.fit(X, y)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef plot_fit(X_train, y_train, X_test, y_pred):\n plt.plot(X_test, y_pred, label = \"Model\")\n #plt.plot(X_test, fun, label = \"Function\")\n plt.scatter(X_train, y_train, label = \"Samples\")\n plt.xlabel(\"x\")\n plt.ylabel(\"y\")\n plt.xlim((0, 1))\n plt.ylim((-2, 2))\n\nimport numpy as np\n\ny_pred = pipeline.predict(X_test)\n\n#plot_fit(X_train, y_train, X_test, y_pred)", "Last, but not least, Searching Parameter Space with GridSearchCV", "from sklearn.grid_search import GridSearchCV\n\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\n\npoly = PolynomialFeatures(include_bias = False)\nlm = LinearRegression()\n\npipeline = Pipeline([(\"polynomial_features\", poly),\n 
(\"linear_regression\", lm)])\n\nparam_grid = dict(polynomial_features__degree = list(range(1, 30, 2)),\n linear_regression__normalize = [False, True])\n\ngrid_search = GridSearchCV(pipeline, param_grid=param_grid)\n# fit on a single feature so that PolynomialFeatures receives a 2-D (n_samples, 1) array\ngrid_search.fit(X[:, 0:1], y)\nprint(grid_search.best_params_)", "Created by a Microsoft Employee.\nThe MIT License (MIT)<br>\nCopyright (c) 2016 Micheleen Harris" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
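As a footnote to the pipelines notebook above: it mentions `FeatureUnion` for multiple transformations but gives no example. A sketch (not part of the original notebook): PCA and SelectKBest run side by side and their outputs are concatenated column-wise before classification.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target

# two transformers applied in parallel; their outputs are stacked column-wise
combined = FeatureUnion([("pca", PCA(n_components=2)),
                         ("kbest", SelectKBest(chi2, k=1))])
combined.fit(X, y)
print(combined.transform(X).shape)  # (150, 3): 2 PCA components + 1 selected feature

# the union can itself be a step in a pipeline
pipeline = Pipeline([("features", combined), ("svc", SVC(kernel="linear"))])
pipeline.fit(X, y)
print(round(pipeline.score(X, y), 2))
```

This keeps the combined transformation compatible with `GridSearchCV`, so parameters such as `features__pca__n_components` could also be tuned.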
tuanavu/coursera-university-of-washington
machine_learning/4_clustering_and_retrieval/assigment/week2/0_nearest-neighbors-features-and-metrics_graphlab.ipynb
mit
[ "Nearest Neighbors\nWhen exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically\n* Decide on a notion of similarity\n* Find the documents that are most similar \nIn the assignment you will\n* Gain intuition for different notions of similarity and practice finding similar documents. \n* Explore the tradeoffs with representing documents using raw word counts and TF-IDF\n* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.\nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nImport necessary packages\nAs usual we need to first import the Python packages that we will need.", "import graphlab\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline", "Load Wikipedia dataset\nWe will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).", "wiki = graphlab.SFrame('people_wiki.gl')\n\nwiki", "Extract word count vectors\nAs we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.", "wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])\n\nwiki", "Find nearest neighbors\nLet's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. 
For this, again we will use a GraphLab Create implementation of nearest neighbor search.", "model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],\n method='brute_force', distance='euclidean')", "Let's look at the top 10 nearest neighbors by performing the following query:", "model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)", "All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.\n\nFrancisco Barrio is a Mexican politician, and a former governor of Chihuahua.\nWalter Mondale and Don Bonker are Democrats who made their careers in the late 1970s.\nWynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.\nAndy Anstett is a former politician in Manitoba, Canada.\n\nNearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.\nFor instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:", "def top_words(name):\n \"\"\"\n Get a table of the most frequent words in the given person's wikipedia page.\n \"\"\"\n row = wiki[wiki['name'] == name]\n word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])\n return word_count_table.sort('count', ascending=False)\n\nobama_words = top_words('Barack Obama')\nobama_words\n\nbarrio_words = top_words('Francisco Barrio')\nbarrio_words
The join operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details.\nFor instance, running\nobama_words.join(barrio_words, on='word')\nwill extract the rows from both tables that correspond to the common words.", "combined_words = obama_words.join(barrio_words, on='word')\ncombined_words", "Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.", "combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})\ncombined_words", "Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.", "combined_words.sort('Obama', ascending=False)", "Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?\nHint:\n* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.\n* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task.\n - Convert the list of top 5 words into set using the syntax\nset(common_words)\n where common_words is a Python list. 
See this link if you're curious about Python sets.\n - Extract the list of keys of the word count dictionary by calling the keys() method.\n - Convert the list of keys into a set as well.\n - Use issubset() method to check if all 5 words are among the keys.\n* Now apply the has_top_words function on every row of the SFrame.\n* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.", "common_words = combined_words['word'][0:5] # YOUR CODE HERE\n\ndef has_top_words(word_count_vector):\n # extract the keys of word_count_vector and convert it to a set\n unique_words = set(word_count_vector.keys()) # YOUR CODE HERE\n # return True if common_words is a subset of unique_words\n # return False otherwise\n return set(common_words).issubset(unique_words) # YOUR CODE HERE\n\nwiki['has_top_words'] = wiki['word_count'].apply(has_top_words)\n\n# use has_top_words column to answer the quiz question\nwiki['has_top_words'].sum() # YOUR CODE HERE\n\nwiki.head(5)", "Checkpoint. Check your has_top_words function on two random articles:", "print 'Output from your function:', has_top_words(wiki[32]['word_count'])\nprint 'Correct output: True'\nprint 'Also check the length of unique_words. It should be 167'\n\nprint 'Output from your function:', has_top_words(wiki[33]['word_count'])\nprint 'Correct output: False'\nprint 'Also check the length of unique_words. It should be 188'", "Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?\nHint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.", "wiki['word_count'][wiki['name']=='Barack Obama'][0]\n\nprint graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='Barack Obama'][0], \n wiki['word_count'][wiki['name']=='George W. 
Bush'][0])\nprint graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='Barack Obama'][0], \n wiki['word_count'][wiki['name']=='Joe Biden'][0])\nprint graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='George W. Bush'][0], \n wiki['word_count'][wiki['name']=='Joe Biden'][0])", "Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.", "def get_common_words(name1, name2, num_of_words=10):\n words1 = top_words(name1)\n words2 = top_words(name2)\n combined_words = words1.join(words2, on='word')\n return combined_words.sort('count', ascending=False)[0:num_of_words]\n\nget_common_words('Barack Obama', 'George W. Bush')", "Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.\nTF-IDF to the rescue\nMuch of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as \"the\", \"and\", and \"his\". So nearest neighbors is recommending plausible results sometimes for the wrong reasons. \nTo retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. 
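Before reaching for the library call, the weighting itself is easy to state: a word's weight in a document is its in-document count scaled down by how many documents contain the word. A minimal pure-Python sketch over word-count dictionaries, using the common tf * log(N / df) form (GraphLab's exact variant may differ in details, and the toy documents are invented):

```python
import math

# Toy corpus of word-count dictionaries, invented for illustration.
docs = [
    {'the': 10, 'obama': 3, 'law': 2},
    {'the': 12, 'barrio': 4, 'law': 1},
    {'the': 8, 'guitar': 5},
]

N = len(docs)
# Document frequency: in how many documents each word appears.
df = {}
for d in docs:
    for w in d:
        df[w] = df.get(w, 0) + 1

def tf_idf(word_counts):
    # Weight = raw count scaled down by how widespread the word is.
    return {w: c * math.log(N / df[w]) for w, c in word_counts.items()}

weights = tf_idf(docs[0])
# 'the' appears in every document, so log(N / df) = log(1) = 0:
print(weights['the'])  # 0.0
```

A word that is frequent in one article but rare across the corpus (like 'obama' above) keeps a high weight, while ubiquitous words are driven to zero, which is the behavior the text describes.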
Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:", "wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])\n\nmodel_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n method='brute_force', distance='euclidean')\n\nmodel_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)", "Let's determine whether this list makes sense.\n* With the notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.\n* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.\nClearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.", "wiki.head(3)\n\ndef top_words_tf_idf(name):\n row = wiki[wiki['name'] == name]\n word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])\n return word_count_table.sort('weight', ascending=False)\n\nobama_tf_idf = top_words_tf_idf('Barack Obama')\nobama_tf_idf\n\nschiliro_tf_idf = top_words_tf_idf('Phil Schiliro')\nschiliro_tf_idf", "Using the join operation we learned earlier, try your hand at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.", "combined_tf_idf_words = obama_tf_idf.join(schiliro_tf_idf, on='word')\ncombined_tf_idf_words.sort('weight', ascending=False)", "The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.\nQuiz Question. 
Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?", "common_words = combined_tf_idf_words['word'][0:5] # YOUR CODE HERE\n\ndef has_top_words(word_count_vector):\n # extract the keys of word_count_vector and convert it to a set\n unique_words = set(word_count_vector.keys()) # YOUR CODE HERE\n # return True if common_words is a subset of unique_words\n # return False otherwise\n return set(common_words).issubset(unique_words) # YOUR CODE HERE\n\nwiki['has_top_words'] = wiki['word_count'].apply(has_top_words)\n\n# use has_top_words column to answer the quiz question\nwiki['has_top_words'].sum() # YOUR CODE HERE", "Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.\nChoosing metrics\nYou may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.\nQuiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.", "print graphlab.distances.euclidean(wiki['tf_idf'][wiki['name']=='Barack Obama'][0],\n wiki['tf_idf'][wiki['name']=='Joe Biden'][0])", "The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:", "model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)", "But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. 
Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.", "def compute_length(row):\n return len(row['text'])\n\nwiki['length'] = wiki.apply(compute_length) \n\nwiki.head(3)\n\nnearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)\nnearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})\n\nnearest_neighbors_euclidean.sort('rank')", "To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.", "plt.figure(figsize=(10.5,4.5))\nplt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,\n label='Entire Wikipedia', zorder=3, alpha=0.8)\nplt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,\n label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)\nplt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,\n label='Length of Barack Obama', zorder=2)\nplt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,\n label='Length of Joe Biden', zorder=1)\nplt.axis([1000, 5500, 0, 0.004])\n\nplt.legend(loc='best', prop={'size':15})\nplt.title('Distribution of document length')\nplt.xlabel('# of words')\nplt.ylabel('Percentage')\nplt.rcParams.update({'font.size':16})\nplt.tight_layout()", "Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). 
Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long. \nNote: Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.\nTo remove this bias, we turn to cosine distances:\n$$\nd(\\mathbf{x},\\mathbf{y}) = 1 - \\frac{\\mathbf{x}^T\\mathbf{y}}{\\|\\mathbf{x}\\| \\|\\mathbf{y}\\|}\n$$\nCosine distances let us compare word distributions of two articles of varying lengths.\nLet us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.", "model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n method='brute_force', distance='cosine')\n\nnearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)\nnearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})\n\nnearest_neighbors_cosine.sort('rank')", "From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. 
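The formula above is easy to implement directly for sparse word-weight dictionaries. The key property is that rescaling a vector leaves the distance unchanged, which is exactly why cosine distance removes the length bias (the tiny vectors below are invented for illustration):

```python
import math

# Cosine distance between sparse word-weight dictionaries,
# d(x, y) = 1 - x.y / (||x|| ||y||), as in the formula above.
def cosine_distance(x, y):
    dot = sum(v * y.get(k, 0.0) for k, v in x.items())
    nx = math.sqrt(sum(v * v for v in x.values()))
    ny = math.sqrt(sum(v * v for v in y.values()))
    return 1.0 - dot / (nx * ny)

short = {'law': 1.0, 'senate': 2.0}
long_ = {k: 10.0 * v for k, v in short.items()}  # same distribution, 10x longer

# Unlike Euclidean distance, cosine distance ignores overall length:
print(cosine_distance(short, long_))  # ~0.0
```

Two documents with the same word distribution are at distance 0 regardless of how long they are, while documents sharing no words are at the maximum distance of 1.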
This list looks even more plausible as nearest neighbors of Barack Obama.\nLet's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.", "plt.figure(figsize=(10.5,4.5))\nplt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,\n label='Entire Wikipedia', zorder=3, alpha=0.8)\nplt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,\n label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)\nplt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,\n label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)\nplt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,\n label='Length of Barack Obama', zorder=2)\nplt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,\n label='Length of Joe Biden', zorder=1)\nplt.axis([1000, 5500, 0, 0.004])\nplt.legend(loc='best', prop={'size':15})\nplt.title('Distribution of document length')\nplt.xlabel('# of words')\nplt.ylabel('Percentage')\nplt.rcParams.update({'font.size': 16})\nplt.tight_layout()", "Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.\nMoral of the story: In deciding the features and distance measures, check if they produce results that make sense for your particular application.\nProblem with cosine distances: tweets vs. long articles\nHappily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. 
For instance, consider the following (admittedly contrived) example.\n+--------------------------------------------------------+\n| +--------+ |\n| One that shall not be named | Follow | |\n| @username +--------+ |\n| |\n| Democratic governments control law in response to |\n| popular act. |\n| |\n| 8:05 AM - 16 May 2016 |\n| |\n| Reply Retweet (1,332) Like (300) |\n| |\n+--------------------------------------------------------+\nHow similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)", "sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})\nsf['word_count'] = graphlab.text_analytics.count_words(sf['text'])\n\nencoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')\nencoder.fit(wiki)\nsf = encoder.transform(sf)\nsf", "Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.", "tweet_tf_idf = sf[0]['tf_idf.word_count']\ntweet_tf_idf", "Now, compute the cosine distance between the Barack Obama article and this tweet:", "obama = wiki[wiki['name'] == 'Barack Obama']\nobama_tf_idf = obama[0]['tf_idf']\ngraphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)", "Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:", "model2_tf_idf.query(obama, label='name', k=10)", "With cosine distances, the tweet is \"nearer\" to Barack Obama than everyone else, except for Joe Biden! This probably is not something we want. If someone is reading the Barack Obama Wikipedia page, would you want to recommend they read this tweet? Ignoring article lengths completely resulted in nonsensical results. 
In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from The Atlantic, you wouldn't recommend a tweet to them." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
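The closing remark of the notebook above suggests enforcing maximum or minimum document lengths before recommending. A minimal sketch of such a guard; the band limits, names, and document lengths here are all illustrative assumptions, not part of the original analysis:

```python
# Hypothetical sketch: keep only neighbor candidates whose word count falls
# within a band around the query document's length, so a 10-word tweet is
# never recommended for a multi-thousand-word article.
def within_length_band(query_len, candidate_len, lo=0.5, hi=2.0):
    return lo * query_len <= candidate_len <= hi * query_len

candidates = [('tweet', 10), ('short bio', 1500), ('long profile', 4500)]
query_len = 2500  # e.g., roughly the length of a long Wikipedia article

kept = [name for name, n in candidates if within_length_band(query_len, n)]
print(kept)  # ['short bio', 'long profile']
```

The band would then be applied as a pre-filter before the cosine-distance query, combining the length sanity of Euclidean distance with the scale invariance of cosine.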
BrainIntensive/OnlineBrainIntensive
resources/matplotlib/Examples/lineplots.ipynb
mit
[ "Sebastian Raschka\nback to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery\nLink the matplotlib gallery at https://github.com/rasbt/matplotlib-gallery", "%load_ext watermark\n\n%watermark -u -v -d -p matplotlib,numpy", "<font size=\"1.5em\">More info about the %watermark extension</font>", "%matplotlib inline", "<br>\n<br>\nLineplots in matplotlib\nSections\n\n\nSimple line plot\n\n\nLine plot with error bars\n\n\nLine plot with x-axis labels and log-scale\n\n\nGaussian probability density functions\n\n\nCumulative Plots\n\n\nCumulative Sum\n\n\nAbsolute Count\n\n\n\n\nColormaps\n\n\nMarker styles\n\n\nLine styles\n\n\n<br>\n<br>\nSimple line plot\n[back to top]", "import matplotlib.pyplot as plt\n\nx = [1, 2, 3]\n\ny_1 = [50, 60, 70]\ny_2 = [20, 30, 40]\n\nplt.plot(x, y_1, marker='x')\nplt.plot(x, y_2, marker='^')\n\nplt.xlim([0, len(x)+1])\nplt.ylim([0, max(y_1+y_2) + 10])\nplt.xlabel('x-axis label')\nplt.ylabel('y-axis label')\nplt.title('Simple line plot')\nplt.legend(['sample 1', 'sample2'], loc='upper left')\n\nplt.show()", "<br>\n<br>\nLine plot with error bars\n[back to top]", "import matplotlib.pyplot as plt\n\nx = [1, 2, 3]\n\ny_1 = [50, 60, 70]\ny_2 = [20, 30, 40]\n\ny_1_err = [4.3, 4.5, 2.0] \ny_2_err = [2.3, 6.9, 2.1] \n\nx_labels = [\"x1\", \"x2\", \"x3\"]\n\nplt.errorbar(x, y_1, yerr=y_1_err, fmt='-x')\nplt.errorbar(x, y_2, yerr=y_2_err, fmt='-^')\n\nplt.xticks(x, x_labels)\nplt.xlim([0, len(x)+1])\nplt.ylim([0, max(y_1+y_2) + 10])\nplt.xlabel('x-axis label')\nplt.ylabel('y-axis label')\nplt.title('Line plot with error bars')\nplt.legend(['sample 1', 'sample2'], loc='upper left')\n\nplt.show()", "<br>\n<br>\nLine plot with x-axis labels and log-scale\n[back to top]", "import matplotlib.pyplot as plt\n\nx = [1, 2, 3]\n\ny_1 = [0.5,7.0,60.0]\ny_2 = [0.3,6.0,30.0]\n\n\n\nx_labels = [\"x1\", \"x2\", \"x3\"]\n\nplt.plot(x, y_1, marker='x')\nplt.plot(x, y_2, marker='^')\n\nplt.xticks(x, 
x_labels)\nplt.xlim([0,4])\nplt.xlabel('x-axis label')\nplt.ylabel('y-axis label')\nplt.yscale('log')\nplt.title('Line plot with x-axis labels and log-scale')\nplt.legend(['sample 1', 'sample2'], loc='upper left')\n\nplt.show()", "<br>\n<br>\nGaussian probability density functions\n[back to top]", "import numpy as np\nfrom matplotlib import pyplot as plt\nimport math\n\ndef pdf(x, mu=0, sigma=1):\n \"\"\"\n Calculates the normal distribution's probability density \n function (PDF). \n \n \"\"\"\n term1 = 1.0 / ( math.sqrt(2*np.pi) * sigma )\n term2 = np.exp( -0.5 * ( (x-mu)/sigma )**2 )\n return term1 * term2\n\n\nx = np.arange(0, 100, 0.05)\n\npdf1 = pdf(x, mu=5, sigma=2.5**0.5)\npdf2 = pdf(x, mu=10, sigma=6**0.5)\n\nplt.plot(x, pdf1)\nplt.plot(x, pdf2)\nplt.title('Probability Density Functions')\nplt.ylabel('p(x)')\nplt.xlabel('random variable x')\nplt.legend(['pdf1 ~ N(5,2.5)', 'pdf2 ~ N(10,6)'], loc='upper right')\nplt.ylim([0,0.5])\nplt.xlim([0,20])\n\nplt.show()", "<br>\n<br>\nCumulative Plots\n[back to top]\n<br>\n<br>\nCumulative Sum\n[back to top]", "import numpy as np\nimport matplotlib.pyplot as plt\n\nA = np.arange(1, 11)\nB = np.random.randn(10) # 10 rand. values from a std. norm. 
distr.\nC = B.cumsum()\n\nfig, (ax0, ax1) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10,5))\n\n## A) via plt.step()\n\nax0.step(A, C, label='cumulative sum') # cumulative sum via numpy.cumsum()\nax0.scatter(A, B, label='actual values')\nax0.set_ylabel('Y value')\nax0.legend(loc='upper right')\n\n\n## B) via plt.plot()\n\nax1.plot(A, C, label='cumulative sum') # cumulative sum via numpy.cumsum()\nax1.scatter(A, B, label='actual values')\nax1.legend(loc='upper right')\n\nfig.text(0.5, 0.04, 'sample number', ha='center', va='center')\nfig.text(0.5, 0.95, 'Cumulative sum of 10 samples from a random normal distribution', ha='center', va='center')\n\nplt.show()", "<br>\n<br>\nAbsolute Count\n[back to top]", "import numpy as np\nimport matplotlib.pyplot as plt\n\nA = np.arange(1, 11)\nB = np.random.randn(10) # 10 rand. values from a std. norm. distr.\n\nplt.figure(figsize=(10,5))\n\n\nplt.step(np.sort(B), A) \nplt.ylabel('sample count')\nplt.xlabel('x value')\nplt.title('Number of samples at a certain threshold')\n\nplt.show()", "<br>\n<br>\nColormaps\n[back to top]\nMore color maps are available at http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps", "import numpy as np\nimport matplotlib.pyplot as plt\n\n\nfig, (ax0, ax1) = plt.subplots(1,2, figsize=(14, 7))\nsamples = range(1,16)\n\n# Default Color Cycle\n\nfor i in samples:\n ax0.plot([0, 10], [0, i], label=i, lw=3) \n\n# Colormap \n \ncolormap = plt.cm.Paired\nplt.gca().set_color_cycle([colormap(i) for i in np.linspace(0, 0.9, len(samples))])\n\nfor i in samples:\n ax1.plot([0, 10], [0, i], label=i, lw=3) \n \n# Annotation \n \nax0.set_title('Default color cycle')\nax1.set_title('plt.cm.Paired colormap')\nax0.legend(loc='upper left')\nax1.legend(loc='upper left')\n\nplt.show()", "<br>\n<br>\nMarker styles\n[back to top]", "\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmarkers = [\n\n'.', # point\n',', # pixel\n'o', # circle\n'v', # triangle down\n'^', # triangle up\n'<', # 
triangle_left\n'>', # triangle_right\n'1', # tri_down\n'2', # tri_up\n'3', # tri_left\n'4', # tri_right\n'8', # octagon\n's', # square\n'p', # pentagon\n'*', # star\n'h', # hexagon1\n'H', # hexagon2\n'+', # plus\n'x', # x\n'D', # diamond\n'd', # thin_diamond\n'|', # vline\n\n]\n\nplt.figure(figsize=(13, 10))\nsamples = range(len(markers))\n\n\nfor i in samples:\n plt.plot([i-1, i, i+1], [i, i, i], label=markers[i], marker=markers[i], markersize=10) \n\n\n# Annotation \n \nplt.title('Matplotlib Marker styles', fontsize=20)\nplt.ylim([-1, len(markers)+1])\nplt.legend(loc='lower right')\n\n\nplt.show()\n", "<br>\n<br>\nLine styles\n[back to top]", "import numpy as np\nimport matplotlib.pyplot as plt\n\nlinestyles = ['-.', '--', 'None', '-', ':']\n\nplt.figure(figsize=(8, 5))\nsamples = range(len(linestyles))\n\n\nfor i in samples:\n plt.plot([i-1, i, i+1], [i, i, i], \n label='\"%s\"' %linestyles[i], \n linestyle=linestyles[i],\n lw=4\n ) \n\n# Annotation \n \nplt.title('Matplotlib line styles', fontsize=20)\nplt.ylim([-1, len(linestyles)+1])\nplt.legend(loc='lower right')\n\n\nplt.show()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
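As a small addendum to the Gaussian section of the notebook above: the hand-rolled `pdf` helper can be sanity-checked against the closed form, since at x = mu the exponential factor is 1 and the density reduces to 1/(sqrt(2*pi)*sigma). Below is a standalone copy of the same formula, written with the math module only so it runs without numpy (that substitution is ours):

```python
import math

# Same normal PDF as the notebook's helper, math-module only.
def pdf(x, mu=0.0, sigma=1.0):
    term1 = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    term2 = math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return term1 * term2

# At the mean, the exponential term is 1, so the peak height is term1 alone.
mu, sigma = 5.0, 2.5 ** 0.5
peak = pdf(mu, mu, sigma)
print(abs(peak - 1.0 / (math.sqrt(2 * math.pi) * sigma)) < 1e-12)  # True
```

The curve is also symmetric about the mean, which gives a second cheap correctness check on the implementation.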
teoguso/sentdex
finance/Data-visualization.ipynb
mit
[ "Part 8\nHere we continue with the python for finance series and visualize the big dataframe we just created.", "%matplotlib qt\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nimport numpy as np\nimport pandas as pd\n\nstyle.use('ggplot')\n\ndf = pd.read_csv('data/sp500_joined_closes.csv', index_col=0)\n\ndef visualize_data(df):\n df_corr = df.corr()\n# print(df_corr.head())\n \n data = df_corr.values\n fig = plt.figure()\n ax = fig.add_subplot(1,1,1)\n \n heatmap = ax.pcolor(data, cmap=plt.cm.RdYlGn)\n fig.colorbar(heatmap)\n ax.set_xticks(np.arange(data.shape[0]) + 0.5, minor=False)\n ax.set_yticks(np.arange(data.shape[1]) + 0.5, minor=False)\n ax.invert_yaxis()\n ax.xaxis.tick_top()\n \n column_labels = df_corr.columns\n row_labels = df_corr.index\n \n ax.set_xticklabels(column_labels)\n ax.set_yticklabels(row_labels)\n plt.xticks(rotation=90)\n heatmap.set_clim(-1, 1)\n plt.tight_layout()\n \n# df['AAPL'].plot()\n\n# Careful! This will produce a BIG graph.\nvisualize_data(df)", "Part 9\nHere we start preparing the data for ML", "import pickle\n\ndef process_data_for_labels(ticker, df):\n hm_days = 7 # How many days in the future are we looking\n tickers = df.columns.values\n df.fillna(0, inplace=True)\n \n for i in range(1, hm_days+1):\n df['{}_{}d'.format(ticker, i)] =\\\n (df[ticker].shift(-i) - df[ticker]) / df[ticker]\n \n df.fillna(0, inplace=True)\n return tickers, df\n\nprocess_data_for_labels('XOM', df)", "Part 10\nNext we start creating the labels for future supervised learning by creating a helper function.", "def buy_sell_hold(*args):\n cols = [c for c in args]\n requirement = 0.028 # This is the threshold for buying/selling\n for col in cols:\n if col > requirement:\n return 1\n elif col < -requirement:\n return -1\n # Hold only if no day crossed the threshold in either direction\n return 0 ", "Part 11\nNow we use our helper function to map our data to buy/sell/hold accordingly.", "from collections import Counter\ndef extract_featuresets(ticker, df):\n tickers, df = 
process_data_for_labels(ticker, df)\n \n hm_days = 7 # How many days in the future are we looking\n# for i in range(1, hm_days+1):\n# df['{}_{}d'.format(ticker, i)] =\\\n# (df[ticker].shift(-i) - df[ticker]) / df[ticker]\n df['{}_target'.format(ticker)] = list(\n map(buy_sell_hold, *[df['{}_{}d'.format(ticker, i)] for i in range(1, hm_days+1)]))\n \n vals = df['{}_target'.format(ticker)].values.tolist()\n str_vals = [str(i) for i in vals]\n print('Data spread: {}'.format(Counter(str_vals)))\n \n df.fillna(0, inplace=True)\n df.replace([np.inf, -np.inf], np.nan, inplace=True) # assign in place so dropna can remove infinities\n df.dropna(inplace=True)\n \n df_vals = df[[ticker for ticker in tickers]].pct_change()\n df_vals = df_vals.replace([np.inf, -np.inf], 0)\n df_vals.fillna(0, inplace=True)\n \n X, y = df_vals.values, df['{}_target'.format(ticker)].values\n \n return X, y, df\n\n# extract_featuresets('XOM', df) # Just to check if it works", "Part 12\nHere we use our created features to train a classifier with scikit-learn.", "from sklearn import svm, neighbors, cross_validation\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.ensemble import VotingClassifier, RandomForestClassifier\n\ndef do_ml(ticker, df):\n X, y, df = extract_featuresets(ticker, df)\n \n X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.25)\n \n# clf = neighbors.KNeighborsClassifier()\n# print(X_train.shape, y_train.shape); exit()\n clf = VotingClassifier([('lsvc', svm.LinearSVC()),\n ('knn', neighbors.KNeighborsClassifier()),\n ('rfor', RandomForestClassifier())], n_jobs=-1)\n \n clf.fit(X_train, y_train)\n \n confidence = clf.score(X_test, y_test)\n print(\"Accuracy:\", confidence)\n predictions = clf.predict(X_test)\n print(\"Predicted spread: {}\".format(Counter(predictions)))\n \n return confidence\n\ndo_ml('BAC', df)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
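As an addendum to Part 10 of the notebook above: the threshold rule maps a window of forward returns to a buy/sell/hold label. A standalone variant of that rule, written here so that hold is returned only after every forward-return column has been checked, behaves as follows on hand-made return sequences:

```python
def buy_sell_hold(*args, requirement=0.028):
    # 1 = buy if any forward return clears the threshold,
    # -1 = sell if any falls below it, 0 = hold otherwise.
    for col in args:
        if col > requirement:
            return 1
        if col < -requirement:
            return -1
    return 0

print(buy_sell_hold(0.01, 0.05, 0.00))    # 1  (one day gains 5%)
print(buy_sell_hold(-0.03, 0.01))         # -1 (one day loses 3%)
print(buy_sell_hold(0.002, -0.001, 0.0))  # 0  (nothing clears 2.8%)
```

Note the asymmetry in the rule: whichever threshold is crossed first in the window wins, so a +3% day followed by a -5% day still labels the row a buy.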
Hackerfarm/awarenode
notebook/Workshop - 04 - Cleaning up data (WIP).ipynb
agpl-3.0
[ "Ok, let's get serious\nFirst, let's import a few libs we will use to make the graphs:", "%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pprint import pprint\n", "Then, let's open the raw data file and read all the content. The beginning is pretty simple: I replace all the | and = and ; with spaces and then split along spaces to get an array of the single values of each record.\nThen we clean a bit. I need to explain the source of some bad data:\n\nI messed up the recording and ended up with two times the data (I thought the file recording had failed at first). \nIt sends all its history every time, and this includes some history I did not erase where I was testing the sensors or the data dump and they will typically be records that are much closer to each other than the normal 30 minutes\nThe first line will typically be garbled, maybe a bug in the FTDI software I use. \n\nWe will correct 1. at a later time but at this stage, we can parse the timestamp (that was set a bit creatively. I am surprised the clock accepts '0' as a valid month number or '31' as an hour) and check when less than 1000 seconds separate two records (the normal time should be 1800 seconds, or 30 minutes). This will happen on some valid occasions: the saboten was reset or the date changed, so in that case, we suppress a good packet. But it will mainly remove packets from the testing time.\nFor 3. I just remove lines with not enough records in them.", "f = open(\"awanode-farmlab-2017-08-14.txt\")\ndata=[]\ntimes = list()\nprevt=0\nlinei=0\nfor l in f.read().split('\\n'):\n a=l.replace(\"|\",\" \").replace(\"=\",\" \").replace(\";\",\" \").split(\" \")\n if len(a)>11:\n tim = [int(num) for num in a[1].split(':')]\n curt = tim[2]+tim[1]*60+tim[0]*3600\n if curt-prevt>1000:\n data.append((a[2],a[4],a[6],a[8],a[10],a[12], linei))\n prevt = curt\n linei+=1\n", "Let's examine the data. 
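The dedup logic in the parsing loop above hinges on turning an HH:MM:SS stamp into seconds and comparing against the previously kept record. The same idea in isolation, with made-up stamps:

```python
# Standalone illustration of the gap filter used in the loop above:
# parse 'HH:MM:SS', convert to seconds, and keep a record only when more
# than 1000 seconds separate it from the previously kept one.
def to_seconds(stamp):
    h, m, s = (int(num) for num in stamp.split(':'))
    return h * 3600 + m * 60 + s

stamps = ['00:00:10', '00:05:00', '00:30:10', '01:00:10']  # made-up times
kept, prevt = [], 0
for st in stamps:
    curt = to_seconds(st)
    if curt - prevt > 1000:
        kept.append(st)
        prevt = curt
print(kept)  # ['00:30:10', '01:00:10']
```

As in the real loop, records arriving within the 1000-second window of the last kept record are dropped, which is what removes the closely spaced test packets.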
First we need to convert it into 1-dimensional arrays that matplotlib can use:", "ytemp = [int(row[1]) for row in data]\n", "Then we call the plotting functions:", "plt.plot(range(len(ytemp)), ytemp)\nplt.show()", "And we see that the data actually repeats itself 3 times! By exploring the dataset a bit, we can find the start of the last repeat, conveniently indicated by these erroneous zero readings.", "pprint((data[1132:])[:20])\n", "Here we are. Let's select only the data starting from after this series of zeros:", "dww = data_we_want = data[1138:]", "And let's make a more full-featured graph:", "x = range(len(dww))\ntemp_soil = [float(row[1])/100.0 for row in dww]\ntemp_air = [float(row[2]) for row in dww]\nhum = [float(row[3]) for row in dww]\nvbat = [float(row[4])/50.0 for row in dww]\nvsol = [float(row[5])/50.0 for row in dww]\n\n\nxmin=0\nxmax=len(x)\nymin = -1\nymax= 140\nplt.plot(x, temp_soil, label=\"Soil temperature\")\nplt.plot(x, temp_air, label=\"Air temperature\")\nplt.plot(x, hum, label=\"Air humidity\")\nplt.plot(x, vbat, label=\"Battery volts (100=5V)\")\nplt.plot(x, vsol, label=\"Solar volts (100=5V)\")\nplt.legend(loc='upper center', bbox_to_anchor=(1.3, 0.5))\naxes = plt.gca()\naxes.set_xlim([xmin,xmax])\naxes.set_ylim([ymin,ymax])\nplt.show()\n\nfout = open(\"awa_clean.csv\", \"w\")\nfor d in dww:\n fout.write(str(d[1]))\n for rec in d[2:-1]:\n fout.write(\", \"+str(rec))\n fout.write(\"\\n\")\nfout.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CS109-comment/NYTimes-Comment-Popularity-Prediction
IPython Process Notebook.ipynb
mit
[ "<img src=\"header2.png\">\nAbout This IPython Process Book\nWelcome! Below you will find our IPython process book for our AC209 (Data Science) final project at Harvard University. This process book details our steps in developing our solution: the data collection process we used, the statistical methods we applied, and the insights we found. Specifically, this process book follows this outline:\n\n\n<a href='#overview'><strong>Overview and Motivation</strong></a>\n\n\n<a href='#related'><strong>Related Work</strong></a>\n\n\n<a href='#questions'><strong>Initial Questions</strong></a>\n\n\n<a href='#data'><strong>The Data</strong></a>\n\n\n<a href='#exploratory'><strong>Exploratory Data Analysis</strong></a>\n\n\n<a href='#final'><strong>Final Analysis</strong></a>\n\n\n<a href='#conclusion'><strong>Conclusion</strong></a>\n\n\n<a id='overview'></a>\nOverview and Motivation\n\nAs one of the most popular online news entities, The New York Times (NYT) attracts thousands of unique visitors each day to its website, nytimes.com. Users who visit the site can provide their thoughts and reactions to published content in the form of comments. \nThe website receives around 9,000 submitted comments per day, over 60,000 unique contributors per month, and approximately two million comment recommendations (i.e., \"likes\") each month. There is a dedicated staff committed to reviewing each submission and even hand-selecting the very best comments as \"NYT Picks.\"\n<img src=\"NYT Pick ex.png\" width=\"600\">\nThe Times embraces this personal, intimate approach to comment moderation based on the hypothesis that \"readers of The Times would demand an elevated experience.\" Thus, we aim to examine the relationship between comment success (i.e., the number of recommendations it receives from other users and whether it is selected as a NYT Pick) and various features of the comment itself. This way, we will be able to produce a model that can predict the success of a given comment. 
\nWe envision this model as a complementary tool used in the moderators' daily review of each comment. Perhaps there is a comment they are unsure about; they could run our model to see the comment's predicted success.\nThis tool could also benefit the commenters themselves. An effective prediction system could be used in an automated comment recommender to help steer users toward higher quality content.\n<a id='related'></a>\nRelated Work\nWe are all avid readers of The New York Times and find the comment section to be a great launching pad for further discussion and debate. Moreover, Andrew was on the leadership board of The Harvard Crimson, so he has experienced journalism first-hand.\nWhile we have not encountered any work that specifically looks at what makes a successful comment on a news site such as that of the NYT, there has been some recent analysis by the NYT on their top 14 commenters. Their methodology to select the top 14 was to divide Total No. of Recommendations by Total No. of Comments and add a small bonus for each \"NYT Pick\" designation. The feature on the top 14 commenters themselves can be found here, and the description about the methodology can be found here.\n<img src=\"Top Commenters.png\" width=\"600\">\nSentiment Analysis\nIn our project, we will employ sentiment analysis. Below are summaries of some interesting past work using sentiment analysis.\nAgarwal et al. focused on the sentiment analysis of Twitter data. Specifically, they built models for classifying tweets into positive, negative, and neutral sentiment. To do so, three separate models were used. In their work, they found that standard natural language processing tools are useful even in a genre which is quite different from the genre on which they were trained. 
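The top-commenter methodology quoted above (total recommendations divided by total comments, plus a small bonus per NYT Pick designation) can be sketched as a scoring function. Note that the size of the bonus is our own illustrative guess, since the Times does not publish it:

```python
# Hypothetical scoring in the spirit of the NYT top-commenter methodology:
# average recommendations per comment plus a small per-Pick bonus.
# The bonus value (5.0) is an illustrative guess, not the Times' number.
def commenter_score(total_recs, total_comments, nyt_picks, pick_bonus=5.0):
    return total_recs / float(total_comments) + pick_bonus * nyt_picks

print(commenter_score(12000, 300, 4))  # 40.0 + 20.0 = 60.0
print(commenter_score(9000, 450, 0))   # 20.0
```

Dividing by comment volume rewards consistently well-received commenters rather than prolific ones, which matches the ranking behavior the methodology describes.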
Specifically, their process is as follows: they label sentences as either subjective or objective, discarding the objective sentences as they go. Then they apply a machine learning classifier. They show that subjectivity alone can accurately represent the sentiment information.\n<a id='questions'></a>\nInitial Questions\nWe approached this project with the following two main questions in mind:\n\n\nCan we predict how many recommendations a comment will receive? \n\n\nCan we predict if a comment will be selected as a NYT Pick?\n\n\nAdditionally, we aim to quantitatively examine what makes a successful and highly rated comment. For example, do longer comments fare better? Does average word or sentence length play a role? Does the sentiment of the comment have an effect?\n<a id='data'></a>\nThe Data\nWe obtained the comment data from The New York Times API. The API operates similarly to the Huffington Post API that was used earlier in the course. Initially, we planned to gather 300 comments per day (i.e., 12 requests per day and 25 comments per request) from Nov 1, 2014 to Nov 15, 2015. However, we ran into issues with the API, which frequently and unpredictably returned a link to the New York Times generic error page. Note that this returns an HTTP response code of 200 (OK), in contrast to errors resulting from exceeding rate limits or server errors, which return 400, 404, or 500 response codes. Often, trying a specific query again would succeed, but, for several dates, we found ourselves totally unable to extract any comments at all.\nThe code below is highly robust against these sorts of errors. For each search, it tries four times to get a valid response, with short waits in between each try. Failing that, it moves on to the next date in the range, dumping that day's comments, if any, into a file. This ensures that if the script crashes during execution, it will lose at most one day's worth of results. 
This produces one JSON file for each day, so we then combine all these files into one large file. Finally, we put all the comments into a data frame.", "# Import packages, libraries, and modules to be used\n\nfrom datetime import date, datetime, timedelta\nimport requests, time, simplejson, sys, re\nimport numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LinearRegression, LogisticRegression\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.feature_extraction import text\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import confusion_matrix, roc_curve\nfrom scipy.sparse import csr_matrix as csr\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib.ticker import FuncFormatter\nsns.set_style(\"darkgrid\")\n%matplotlib inline", "Scraping", "# Yields an iterator that allows iterating through dates.\n# This function draws from http://stackoverflow.com/a/10688060\n\ndef perdelta(start, end, delta):\n curr = start\n while curr < end:\n yield curr\n curr += delta\n\n# Scrape 300 comments per day\n# For each search, the loop tries 4 times to get a valid response. \n# If all 4 tries fail, the loop moves on to the next day, dumping that day's comments, if any, into a file. 
\n# Outputs a JSON file for each day.\n\nfor da in perdelta(date(2015, 2, 21), date(2015, 11, 1), timedelta(days=1)):\n comments = []\n print da\n skip = False\n gotany = True\n \n # Collect 25 comments at a time for 12 times (25*12 = 300 comments)\n for i in range(12): \n if not skip:\n success = False\n count = 0\n \n # Need to include your own API key here\n url = ('http://api.nytimes.com/svc/community/v3/user-content/' +\n 'by-date.json?api-key=KEY&date=' + str(da) +\n '&offset=' + str(25*i))\n \n while not success:\n comments_data = requests.get(url)\n try:\n data = simplejson.loads(comments_data.content)\n success = True # go to the next offset\n for d in data['results']['comments']:\n comments.append(d)\n time.sleep(2)\n except:\n print 'error on {}'.format(str(da))\n print url\n count += 1\n if count > 3:\n success = True \n #skip to the next day\n skip = True \n if i == 0:\n # If we didn't get any comments from that day\n gotany = False \n time.sleep(2)\n \n # Save data pulled into JSON file \n if gotany: \n filestr = 'comments {}.json'.format(str(da))\n with open(filestr, 'w') as f:\n simplejson.dump(comments, f)\n\n# Combine all the JSON files into a single JSON file\n\nallcomments = []\nfor d in perdelta(date(2014, 1, 1), date(2015, 12, 31), timedelta(days=1)):\n # Don't have to worry about failed comment collections thanks to try/except. \n # If we didn't collect the comments for a given day, the file load fails and it moves on.\n try:\n with open('json_files/comments {}.json'.format(str(d))) as f:\n c = simplejson.load(f)\n allcomments.extend(c)\n except Exception:\n pass\n\n# Save JSON file\n\n# Note: commented out as the file has already been created. 
Uncomment if you need to start over.\n\n#with open ('comment_data.json', 'w') as f:\n #simplejson.dump(allcomments, f)\n\n# Load JSON file\nwith open('comment_data.json', 'r') as f:\n comments = simplejson.load(f)", "Parsing the data\nNow that we have our data, we can parse it and store it in a Pandas dataframe. More columns will be added to this data frame later in this IPython Notebook, but for now we will start with the basic features that we want to extract.", "#Convert data into a dataframe by creating a dataframe out of a list of dictionaries.\ncommentsdicts=[]\n\n# Loop through every comment\nfor c in comments:\n \n d={}\n d['approveDate']=c['approveDate']\n d['assetID']=c['assetID']\n d['assetURL']=c['assetURL']\n d['commentBody']=c['commentBody'].replace(\"<br/>\",\" \")\n \n # Calculate word count by splitting on spaces. Treating two, three, etc... spaces as single space.\n d['commentWordCount'] = len(c['commentBody'].replace(\"<br/><br/>\",\" \").replace(\" \",\" \").replace(\" \",\" \").replace(\" \",\" \").split(\" \"))\n \n # Count number of letters in each word, divide by word count. Treating two, three, etc... 
spaces as single space.\n d['averageWordLength'] = float(len(c['commentBody'].replace(\"%\",\"\").replace(\"&\",\"\").replace(\"!\",\"\").replace(\"?\",\"\").replace(\",\",\"\").replace(\"'\",\"\").replace(\".\",\"\").replace(\":\",\"\").replace(\";\",\"\").replace(\" \",\" \").replace(\" \",\" \").replace(\" \",\" \").replace(\" \",\"\")))/d[\"commentWordCount\"]\n \n d['commentID']=c['commentID']\n d['commentSequence']=c['commentSequence']\n d['commentTitle']=c['commentTitle']\n d['createDate']=c['createDate']\n d['editorsSelection']=c['editorsSelection']\n d['lft']=c['lft']\n d['parentID']=c['parentID']\n d['recommendationCount']=c['recommendationCount']\n d['replies']=c['replies']\n d['replyCount']=c['replyCount']\n d['rgt']=c['rgt']\n d['status']=c['status']\n d['statusID']=c['statusID'] \n d['updateDate']=c['updateDate'] \n d['userDisplayName']=c['userDisplayName']\n d['userID']=c['userID']\n d['userLocation']=c['userLocation']\n d['userTitle']=c['userTitle']\n d['userURL']=c['userURL'] \n \n commentsdicts.append(d) \n\ncommentsdf=pd.DataFrame(commentsdicts)", "Let's take a look at the first 5 rows of our initial data frame.", "commentsdf.head()", "<a id='exploratory'></a>\nExploratory Data Analysis\nThe first thing we did in our Exploratory Data Analysis (EDA) was call the describe method to get a high level understanding of the data. From the below, we can see that we have <strong>~180,000 comments</strong>, where the <strong>average comment is 83 words</strong> in length. Based on our number of comments per user ID calculations, we see that the <strong>majority of users only write a single comment</strong> (though, there is one outlier who has written 820!). In terms of recommendation count, we see that the <strong>average comment receives 24 recommendations</strong> and the maximum number of recommendations received by any single comment was 3064. 
With respect to the NYT Pick designation, a binary indicator, we see that the mean is 0.026, which implies that <strong>just under 3% of our comments received a NYT Pick</strong> designation. \nDescribe the data", "# Describe the recommendation count data\ncommentsdf[\"recommendationCount\"].describe()\n\n# Describe the NYT Pick data\ncommentsdf[\"editorsSelection\"].describe()\n\n# Describe the comment word count data\ncommentsdf[\"commentWordCount\"].describe()\n\n# Investigate number of comments per user\ngroupByUser = commentsdf.groupby(\"userID\")\ncommentsPerUser = [i for i in groupByUser.count().commentID]\nprint \"Mean Comments per User: \", np.round(np.mean(commentsPerUser),decimals=2)\nprint \"Median Comments per User: \", np.median(commentsPerUser)\nprint \"Minimum Comments per User: \", min(commentsPerUser)\nprint \"Maximum Comments per User: \", max(commentsPerUser)", "Plot Histograms\nNext, we plotted several histograms to gain a better understanding of the distribution of the data. 
The plots below support some of the above insights: we see that 75% of comments have 16 or fewer recommendations.", "# Plot histogram of number of recommendations a comment receives\n\n# Format Y axis to remove 000's\ndef thousands(x, pos):\n 'The two args are the value and tick position'\n return '%1.f' % (x*1e-3)\n\nformatter = FuncFormatter(thousands)\n\n# Plot\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nax.yaxis.set_major_formatter(formatter)\nplt.hist(commentsdf[\"recommendationCount\"],alpha = .7, bins = 20)\nplt.title(\"Recommendations per Comment\", fontsize=14)\nplt.ylabel(\"Count (000's)\", fontsize=14)\nplt.xlabel(\"Number of Recommendations\", fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.show()\n\n# Focus our histogram to recommendation counts <1000 since the above plot is not very informative\ndata = commentsdf[commentsdf[\"recommendationCount\"] < 1000]\n\n# Plot\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nax.yaxis.set_major_formatter(formatter)\nplt.hist(data.recommendationCount,alpha = .7, bins =20)\nplt.title(\"Recommendations per Comment (<1000)\", fontsize=14)\nplt.ylabel(\"Count (000's)\", fontsize=14)\nplt.xlabel(\"Number of Recommendations\", fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.show()\n\n# Focus our histogram even more to recommendation counts <50 for best visibility into the majority of the data\ndata = commentsdf[commentsdf[\"recommendationCount\"] < 50]\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nax.yaxis.set_major_formatter(formatter)\nplt.hist(data.recommendationCount, bins=20,alpha=.7)\nplt.axvline(23.73,color = 'r',alpha = .5,label = 'Mean = 24')\nplt.axvline(5,color = 'g',alpha = .5,label = 'Median = 5')\nplt.title(\"Recommendations per Comment (<50)\", fontsize=14)\nplt.ylabel(\"Count (000's)\", fontsize=14)\nplt.xlabel(\"Number of Recommendations\", 
fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.legend(fontsize=14)\nplt.show()", "From these histograms, we see that while there are some outliers on the far end of the spectrum, most of the mass is situated at <24 recommendations.", "# Plot histogram of comment word count to get a sense of how long comments are\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nax.yaxis.set_major_formatter(formatter)\nplt.hist(commentsdf[\"commentWordCount\"], bins=20, alpha = .7)\nplt.axvline(89,color = 'r',alpha = .5,label = 'Mean = 89')\nplt.axvline(61,color = 'g',alpha = .5,label = 'Median = 61')\nplt.title(\"Comment Word Count\", fontsize=14)\nplt.ylabel(\"Count (000's)\", fontsize=14)\nplt.xlabel(\"Number of Words\", fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.legend(fontsize=14)\nplt.show()", "From this plot, we can see that the average and median word counts are below 100. Specifically, the mean word count is 89 words, which equates to several sentences.", "# Plot a Pairplot of Recommendation Count vs Comment Word Count\n\npicks = commentsdf[commentsdf.editorsSelection ==1]\nnot_picks = commentsdf[commentsdf.editorsSelection == 0]\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nplt.scatter(not_picks.commentWordCount,not_picks.recommendationCount, c = 'r',label = \"Not a NYT Pick\")\nplt.scatter(picks.commentWordCount,picks.recommendationCount, c = 'g',label = \"NYT Pick\")\n\nplt.xlim(0,350)\nplt.ylim(0,3500)\nplt.title(\"Recommendation Count vs. 
Comment Word Count\", fontsize=15)\nplt.ylabel(\"Recommendation Count\", fontsize=14)\nplt.xlabel(\"Comment Word Count\", fontsize=14)\nplt.legend(bbox_to_anchor=(1.4, 1),fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.show()", "The plot above shows the similarity between NYT Picks and non-Picks with respect to both recommendation count and comment word count.\nFeature Selection\nWe use a variety of features for our modeling:\n * Comment Word Count\n * Average Word Length in a Comment\n * Sentiment of comment \n * Term Frequency - Inverse Document Frequency (tf-idf) \n * Binary bag of words \nComment word count and average word length have already been calculated earlier. They are shown again below as a refresher. We suspect that longer comments may be more thorough, and thus may receive more recommendations. Similarly, perhaps comments with a longer average word length indicate more thoughtful, better written comments.\nIt is also important to keep in mind that our models will take a new comment as input. Thus, certain aspects from our data set cannot be used. For example, reply count denotes the number of replies a comment has received. A new comment will have, of course, no replies before it has been posted. Thus, we will not train our models using reply count as a feature.", "word_features = commentsdf[['commentWordCount','averageWordLength']]\nword_features.head()", "Sentiment analysis\nWe used sentiment analysis to extract a positive and negative sentiment score for every comment. We hypothesized that comment recommendations may depend on sentiment, as posts with a strong sentiment are likely to be more controversial than neutral posts.\nThe sentiment analysis was performed using a number of steps. 
Firstly, we obtained the SentiWordNet database, which is a list of English words that have been ranked for positive and negative sentiment (where their complement determines the 'neutrality' of words). After removing the file's comments, we read it into a Pandas data frame. Subsequently, we combined words that are the same into one score by taking the mean across all entries. Then we saved the sentiment per word in a dictionary, and wrote functions to calculate the average sentiment for a comment. By applying these functions to the comment bodies we obtained scores for every comment in the data frame.", "# Read in the SentiWordNet database (without comments at the top)\n\nsentimentdf = pd.read_csv('SentiWordNet_prepared.txt', sep='\\t') # We stripped comments and the last newline\nsentimentdf.head()\n\n# Clean up different meanings of words into one (the mean score for a word)\n\nsentimentdf.SynsetTerms = sentimentdf.SynsetTerms.apply(lambda words: words.split(' '))\nsentimentdf.SynsetTerms = sentimentdf.SynsetTerms.apply(lambda words: [word[:-2] for word in words])\nsentimentdf.drop(['POS', 'ID', 'Gloss'], axis=1, inplace=True)\n\nrebuilt = []\n\nfor row in sentimentdf.as_matrix():\n positive = row[0]\n negative = row[1]\n words = row[2]\n for word in words:\n entry = (positive, negative, word)\n rebuilt.append(entry)\n\nsentimentdf = pd.DataFrame(rebuilt, columns=['positive', 'negative', 'word'])\nsentimentdf = sentimentdf.groupby('word').agg({'positive': np.mean, 'negative': np.mean})\nsentimentdf.head(4)\n\n# Define function to calculate score per comment (avg. 
positive and negative scores over words)\n\nsentiment = sentimentdf.to_dict(orient='index')\ndelete_characters = re.compile('\\W')\n\ndef positive_score(comment):\n commentlist = comment.split(' ')\n commentlist = map(lambda s: re.sub(delete_characters, '', s).lower(), commentlist)\n score = 0.0\n number = 0\n for word in commentlist:\n if word in sentiment:\n score +=sentiment[word]['positive']\n number += 1\n if number > 0:\n return score/number\n else:\n return 0\n\ndef negative_score(comment):\n commentlist = comment.split(' ')\n commentlist = map(lambda s: re.sub(delete_characters, '', s).lower(), commentlist)\n score = 0.0\n number = 0\n for word in commentlist:\n if word in sentiment:\n score +=sentiment[word]['negative']\n number += 1\n if number > 0:\n return score/number\n else:\n return 0", "Example:", "print sentiment['exquisite']", "Application to commentsdf", "# Now we calculate the sentiment score for each comment\n\ncommentsdf['positive_sentiment'] = commentsdf.commentBody.apply(positive_score)\ncommentsdf['negative_sentiment'] = commentsdf.commentBody.apply(negative_score)\nsenti_commentsdf = commentsdf[['commentBody','positive_sentiment','negative_sentiment']]\nsenti_commentsdf.head()\n\n# Checking if the feature looks valid\n\npair = sns.pairplot(commentsdf, x_vars=['positive_sentiment', 'negative_sentiment'], \n y_vars=['recommendationCount'], hue='editorsSelection', size=7);\naxes = pair.axes;\naxes[0, 0].set_xlim(0, 1);\naxes[0, 0].set_ylim(0, 3250);\naxes[0, 1].set_xlim(0, 1);\naxes[0, 1].set_ylim(0, 3250);\naxes[0, 0].set_xlabel(\"Positive Sentiment\", fontsize=14)\naxes[0, 1].set_xlabel(\"Negative Sentiment\", fontsize=14)\naxes[0, 0].set_ylabel(\"Recomendation Count\", fontsize=14)\naxes[0, 0].set_title(\"Recommendation Count vs. Positive Sentiment\", fontsize=14)\naxes[0, 1].set_title(\"Recommendation Count vs. 
Negative Sentiment\", fontsize=14)\nplt.show()", "As we can see from the above plots, most recommendations are given to posts with a 0.1 sentiment score, either positive or negative. We can also plot negative sentiment against positive sentiment to see if there is any correlation between the two. From the below plot, we see that the two features are not strongly correlated, which is good.", "ax = sns.regplot('positive_sentiment', 'negative_sentiment', data=commentsdf, scatter_kws={'alpha':0.3})\nax.set_ylim(0, 1.0)\nax.set_xlim(0, 1.0)\nax.set_title('Sentiment features')\nax.set_ylabel('Negative Sentiment')\nax.set_xlabel('Positive Sentiment');", "Binary Bag of Words\nWe use a binary bag of words feature that encodes which of the $n$ most popular words appear in a comment. In order to determine the size of the bag, we examine how many comments have no words in the bag for various bag sizes.", "stop_words = text.ENGLISH_STOP_WORDS.union(['didn', 've', 'don']) # add additional stop words\ncorpus = commentsdf.commentBody.tolist()\nword_in_bag_percent = list()\n\nfor f in np.arange(100, 301, 20):\n vectorizer = CountVectorizer(stop_words=stop_words, max_features=f, binary=True)\n bowmat = vectorizer.fit_transform(corpus) # bag of words matrix\n words = vectorizer.vocabulary_.keys()\n word_in_bag = np.zeros(commentsdf.shape[0])\n\n for i in range(f):\n word_in_bag = word_in_bag + bowmat[:, i].toarray().flatten()\n word_in_bag_percent.append(1. * np.sum(word_in_bag == 0) / word_in_bag.shape[0])\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nplt.xlim(100, 300)\nplt.ylim(0, 0.1)\nplt.title(\"Bag-of-Words Size vs.\\nComments With No Words in Bag\", fontsize=15)\nplt.ylabel(\"Proportion of Comments\\nWith No Words in Bag\", fontsize=14)\nplt.xlabel(\"Bag-of-Words size\", fontsize=14)\nplt.plot(np.arange(100, 301, 20), word_in_bag_percent);", "A bag of words of size 100 is sufficient to reduce the proportion of comments with no words in the bag below 0.1. 
As expected, increasing the size of the bag results in diminishing returns. The choice of bag-of-words size is somewhat arbitrary, but we will use a size of 200, which results in about 95% of posts having at least one word in the bag. Heuristically, this seems to provide a reasonable balance between having informative features and having too many features.", "stop_words = text.ENGLISH_STOP_WORDS.union(['didn', 've', 'don']) # add additional stop words\ncorpus = commentsdf.commentBody.tolist()\nvectorizer = CountVectorizer(stop_words=stop_words, max_features=200, binary=True)\nbowmat = vectorizer.fit_transform(corpus) # bag of words matrix\nwords = vectorizer.vocabulary_.keys()\nword_in_bag = np.zeros(commentsdf.shape[0])\nfor i in range(200):\n commentsdf['word{}_{}'.format(i, words[i])] = bowmat[:, i].toarray().flatten()", "Tf-Idf\nWe add a column to the dataframe composed of the average tf-idf (term frequency-inverse document frequency) score over all words in a comment. The tf score for term $t$ in document $d$ is given by the number of times $t$ appears in $d$. The idf score for $t$ in a corpus $D$ is given by\n$$\log \frac{N}{|\{d \in D: t \in d\}|}$$\nwhere $N$ is the total number of documents, and $|\{d \in D: t \in d\}|$ is the total number of documents containing $t$. The tf-idf score for $t$ in $d$ is the $tf$ score multiplied by the $idf$ score.\nIntuitively, tf-idf measures how important a word is in a document compared with how important the word is to the corpus as a whole. Words that appear frequently in a given document but rarely across the corpus receive high scores.", "tfidf_vectorizer = TfidfVectorizer(stop_words=stop_words)\ntfidfmat = tfidf_vectorizer.fit_transform(corpus)\ncommentsdf['tfidf'] = csr.sum(tfidfmat, axis=1)\ncommentsdf['tfidf'] = commentsdf['tfidf'].div(commentsdf['commentWordCount'], axis='index')\n\nax = sns.regplot(x='tfidf', y='recommendationCount', data=commentsdf, fit_reg=False)\nax.set_title('Recommendation Count vs. 
Tf-idf')\nax.set_ylabel('Recommendation Count')\nax.set_xlabel('Tf-idf')\nplt.xlim(0, 3.25)\nplt.ylim(0, 3250);", "Transforming variables\nWe apply log and arcsinh transforms to tf-idf and sentiment scores on the basis that a very negative comment is not much different from a moderately negative comment. We also apply these transforms to the recommendation counts. As seen below, these transformations seem to do a good job transforming the \"recommendation count vs. tf-idf\" relationship from negative exponential to something better behaved. However, these transforms end up not being terribly effective in the end.", "# Log transformations\ncommentsdf['logrecommendationCount'] = np.log(commentsdf.recommendationCount + 1) # + 1 to deal with log(0)\ncommentsdf['logtfidf'] = np.log(commentsdf.tfidf)\ncommentsdf['logpositive'] = np.log(commentsdf.positive_sentiment)\ncommentsdf['lognegative'] = np.log(commentsdf.negative_sentiment)\n\n# Arcsinh transformations\ncommentsdf['srecommendationCount'] = np.arcsinh(commentsdf.recommendationCount)\ncommentsdf['stfidf'] = np.arcsinh(commentsdf.tfidf)\ncommentsdf['spositive'] = np.arcsinh(commentsdf.positive_sentiment)\ncommentsdf['snegative'] = np.arcsinh(commentsdf.negative_sentiment)\n\nfig, axes = plt.subplots(1, 3)\nfig.set_size_inches(15, 5)\nsns.regplot('tfidf', 'logrecommendationCount', data=commentsdf, fit_reg=False, ax=axes[0], scatter_kws={'alpha':0.3});\nsns.regplot('logtfidf', 'recommendationCount', data=commentsdf, fit_reg=False, ax=axes[1], scatter_kws={'alpha':0.3});\nsns.regplot('logtfidf', 'logrecommendationCount', data=commentsdf, fit_reg=False, ax=axes[2], scatter_kws={'alpha':0.3});\naxes[0].set_title('log(recommendationCount) vs. tf-idf')\naxes[0].set_xlabel('tf-idf')\naxes[0].set_ylabel('log(recommendationCount)')\naxes[0].set_xlim(0, 3.25);\naxes[0].set_ylim(0, 9);\naxes[1].set_title('recommendation count vs. 
log(tf-idf)')\naxes[1].set_xlabel('log(tf-idf)')\naxes[1].set_ylabel('recommendation count')\naxes[1].set_xlim(-5, 1.5);\naxes[1].set_ylim(0, 3250);\naxes[2].set_title('log(recommendationCount) vs. log(tf-idf)')\naxes[2].set_xlabel('log(tf-idf)')\naxes[2].set_ylabel('log(recommendationCount)')\naxes[2].set_xlim(-5, 1.5);\naxes[2].set_ylim(0, 9);\n\nfig, axes = plt.subplots(1, 3)\nfig.set_size_inches(15, 5)\nsns.regplot('tfidf', 'srecommendationCount', data=commentsdf, fit_reg=False, ax=axes[0], scatter_kws={'alpha':0.3});\nsns.regplot('stfidf', 'recommendationCount', data=commentsdf, fit_reg=False, ax=axes[1], scatter_kws={'alpha':0.3});\nsns.regplot('stfidf', 'srecommendationCount', data=commentsdf, fit_reg=False, ax=axes[2], scatter_kws={'alpha':0.3});\naxes[0].set_title('arcsinh(recommendationCount) vs. tf-idf')\naxes[0].set_xlabel('tf-idf')\naxes[0].set_ylabel('arcsinh(recommendationCount)')\naxes[0].set_xlim(0, 3.25);\naxes[0].set_ylim(0, 9);\naxes[1].set_title('recommendation count vs. arcsinh(tf-idf)')\naxes[1].set_xlabel('arcsinh(tf-idf)')\naxes[1].set_ylabel('recommendation count')\naxes[1].set_xlim(0, 2);\naxes[1].set_ylim(0, 3250);\naxes[2].set_title('arcsinh(recommendationCount) vs. arcsinh(tf-idf)')\naxes[2].set_xlabel('arcsinh(tf-idf)')\naxes[2].set_ylabel('arcsinh(recommendationCount)')\naxes[2].set_xlim(0, 2);\naxes[2].set_ylim(0, 9);", "<a id='final'></a>\nFinal Analysis\nWe tried a variety of methods to solve our original problem of predicting recommendation count / editor's selection. As seen below, none of them worked, so we decided instead to try to solve the somewhat easier problem of classifying comments as \"good\" (i.e. above a certain recommendation threshold) and \"not good\" (below the threshold).", "def rmse(test, pred):\n return np.sqrt(((test - pred)**2).mean())", "Split data\nThroughout, we split our data into train, validation, and test sets. 
We test our models on the train and validation sets, saving the test sets for when we select a final, most effective model out of the ones we try.", "data = commentsdf.ix[:, [3] + [8] + np.arange(25, 227).tolist()].as_matrix() # select relevant columns\nlabel = commentsdf.recommendationCount.as_matrix()\nXtrain, Xtest, ytrain, ytest = train_test_split(data, label, test_size=0.2)\nXtrain_val, Xval, ytrain_val, yval = train_test_split(Xtrain, ytrain, test_size=0.2)", "We used linear regression and a random forest regressor to predict recommendation counts.\nLinear Regression", "regression = LinearRegression()\nregression.fit(Xtrain_val, ytrain_val)\nypred = regression.predict(Xval)\nprint 'RMSE:', rmse(ypred, yval), '\\nRMSE of predicting zero:', rmse(ypred, np.zeros_like(ypred))", "Predicting that no comment will receive any recommendations does about three times as well as linear regression.\nRandom Forest Regressor", "rf = RandomForestRegressor()\nrf.fit(Xtrain_val, ytrain_val)\nypred = rf.predict(Xval)\nprint 'RMSE:', rmse(ypred, yval)", "Again, we do much worse than predicting zero.\nPart of the reason we do badly may be because using regression to predict discrete outputs (recommendation counts can't be non-integers) is not a viable strategy. 
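A brief aside on the log and arcsinh transforms tried next: log needs the `+ 1` offset because recommendation counts can be zero, whereas `arcsinh(x) = log(x + sqrt(x^2 + 1))` is already defined at zero and behaves like `log(2x)` for large counts. A quick standalone numerical check (independent of our data):

```python
import math

# arcsinh is defined at zero, unlike log (log(0) is -inf)
print(math.asinh(0.0))

# For large x, arcsinh(x) is approximately log(2x)
x = 1000.0
print(math.asinh(x))    # ~7.6009
print(math.log(2 * x))  # ~7.6009
```

This is why the raw recommendation counts get a `+ 1` before the log transform but no offset before the arcsinh transform.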
Below, we try linear regression on log(recommendation count) and arcsinh(recommendation count), but they also both fail.\nLinear Regression", "label_log = commentsdf.logrecommendationCount.as_matrix()\nXtrain_log, Xtest_log, ytrain_log, ytest_log = train_test_split(data, label_log, test_size=0.2)\nXtrain_val_log, Xval_log, ytrain_val_log, yval_log = train_test_split(Xtrain_log, ytrain_log, test_size=0.2)\n\n# Log-linear regression\nregression = LinearRegression()\nregression.fit(Xtrain_val_log, ytrain_val_log)\nypred = regression.predict(Xval_log)\nprint \"RMSE:\", rmse(ypred, yval_log), '\\nRMSE of predicting zero:', rmse(ypred, np.zeros_like(ypred))\nprint 'RMSE of predicting mean:', rmse(ypred, ytrain_val_log.mean())\n\nlabel_asin = commentsdf.srecommendationCount.as_matrix()\nXtrain_asin, Xtest_asin, ytrain_asin, ytest_asin = train_test_split(data, label_asin, test_size=0.2)\nXtrain_val_asin, Xval_asin, ytrain_val_asin, yval_asin = train_test_split(Xtrain_asin, ytrain_asin, test_size=0.2)\n\n# Arcsinh regression\nregression = LinearRegression()\nregression.fit(Xtrain_val_asin, ytrain_val_asin)\nypred = regression.predict(Xval_asin)\nprint \"RMSE:\", rmse(ypred, yval_asin), '\\nRMSE of predicting zero:', rmse(ypred, np.zeros_like(ypred))\nprint 'RMSE of predicting mean:', rmse(ypred, ytrain_val_asin.mean())", "We are able to beat predicting zero, but predicting the mean of the training set still produces a much better result.\nNext, we try logistic regression and random forest classification on editor's selection.\nLogistic Regression & Random Forest Classifier on Editor's Selection", "label_ed = commentsdf.editorsSelection.as_matrix()\nXtrain_ed, Xtest_ed, ytrain_ed, ytest_ed = train_test_split(data, label_ed, test_size=0.2)\nXtrain_val_ed, Xval_ed, ytrain_val_ed, yval_ed = train_test_split(Xtrain_ed, ytrain_ed, test_size=0.2)\n\n# Logistic regression\nlogreg = LogisticRegression(max_iter=100, verbose=1, n_jobs=-1)\nlogreg.fit(Xtrain_val_ed, ytrain_val_ed)\nypred 
= logreg.predict(Xval_ed)\n\nprint \"Actual number of editor's choices:\", np.sum(yval_ed)\nprint \"Predicted number of editor's choices:\", np.sum(ypred)\nprint 'Confusion matrix:\\n', confusion_matrix(ypred, yval_ed)\n\n# Try adjusting the threshold\ndef t_repredict(est, t, xtest):\n probs=est.predict_proba(xtest)\n p0 = probs[:,0]\n p1 = probs[:,1]\n ypred = (p1 > t)*1\n return ypred\n\nprint \"Actual number of editor's choices\",np.sum(yval_ed)\nprint \"Predicted number of editor's choices t = 0.95: \", np.sum(t_repredict(logreg, .95, Xval_ed))\nprint \"Predicted number of editor's choices t = 0.5: \", np.sum(t_repredict(logreg, 0.5, Xval_ed))\nprint \"Predicted number of editor's choices t = 0.1: \", np.sum(t_repredict(logreg, 0.1, Xval_ed))\nprint \"Predicted number of editor's choices t = 0.075: \", np.sum(t_repredict(logreg, 0.075, Xval_ed))\nprint \"Predicted number of editor's choices t = 0.05: \", np.sum(t_repredict(logreg, 0.05, Xval_ed))\nprint \"Predicted number of editor's choices t = 0.025: \", np.sum(t_repredict(logreg, 0.025, Xval_ed))\n\n# We see most of the predicted positives are false positives. Only 164 true positives. 
\nconfusion_matrix(yval_ed, t_repredict(logreg, 0.075, Xval_ed))\n\nrfc = RandomForestClassifier(n_estimators=100)\nrfc.fit(Xtrain_val_ed, ytrain_val_ed)\nypred = rfc.predict(Xval_ed)\nprint confusion_matrix(ypred, yval_ed)", "The dataset is very imbalanced, so predicting that there will be no editor's selections is a viable strategy to minimize error.\nLog-Arcsinh Regression", "# This model uses arcsinh to deal with zero values on tfidf and sentiment\ndata_ll = commentsdf.ix[:, [3] + [8] + range(27, 227) + range(233, 236)].as_matrix()\nlabel_ll = commentsdf.logrecommendationCount.as_matrix()\nXtrain_ll, Xtest_ll, ytrain_ll, ytest_ll = train_test_split(data_ll, label_ll)\nXtrain_val_ll, Xval_ll, ytrain_val_ll, yval_ll = train_test_split(Xtrain_ll, ytrain_ll)\n\nregression = LinearRegression()\nregression.fit(Xtrain_val_ll, ytrain_val_ll)\nypred = regression.predict(Xval_ll)\nprint \"RMSE:\", rmse(ypred, yval_ll), 'RMSE from predicting zero:', rmse(ypred, np.zeros_like(ypred))\nprint 'RMSE from predicting training mean:', rmse(ypred, ytrain_val_ll.mean())", "It appears that the transformations of the features that we tried do not improve performance.\nClassifying Comments as \"Good\" and \"Not Good\"\nAs described above, predicting recommendation counts appears intractable given the data and our feature selection abilities. Therefore, we consider an easier problem: classifying comments as \"good\" and \"not good\". For our purposes, \"good\" comments are in the top 25% in terms of recommendations.\nThe problem still ends up being quite difficult, and we are forced to decide which are worse: false positives or false negatives. This is a subjective decision, but in this case, we think that false positives are worse. Suppose this method was used in a content recommendation system. If a good comment is not recommended, it's not a big deal, and users may still see it by scrolling down. They could also just enjoy other good recommended comments. 
However, if a bad comment is recommended, it would hurt the user experience. Obviously trollish comments are deleted by moderators, but a comment section full of mediocre comments could still drive readers away.\nWe use a linear SVC and a random forest classifier. We normalize the data before feeding it into the SVC.", "# Helper functions\ndef split(rec_cutoffs, response):\n \"\"\"\n Split the data according to recommendation cutoffs.\n rec_cutoffs: A vector [k1, ..., kn] of recommendation counts where the split is [k1, k2), [k2, k3), ...\n Note: kn should be greater than the largest element in response.\n \"\"\"\n cat = np.zeros_like(response)\n for i in range(len(rec_cutoffs) - 1):\n cat = cat + i * np.logical_and(rec_cutoffs[i] <= response, response < rec_cutoffs[i+1])\n return cat\n\ndef getstats(conf):\n tp, fp, fn, tn = conf[0, 0], conf[0, 1], conf[1, 0], conf[1, 1]\n total = tp + fp + fn + tn\n accuracy = 1. * (tp + tn) / total\n fdr = 1. * fp / (tp + fp) # false discovery rate\n precision = 1. * tp / (tp + fp)\n recall = 1. * tp / (tp + fn)\n fnr = 1. 
* fn / (fn + tp) # false negative rate\n print ('accuracy: {} false discovery rate: {} precision: {} recall: {} false negative rate: {}' \\\n .format(accuracy, fdr, precision, recall, fnr))\n\nq = commentsdf[\"recommendationCount\"].quantile(q=0.75)\nprint '75% of recommendations are below', q\n\n# Scale the data\nscale = StandardScaler()\ndata = commentsdf.ix[:, [3] + [8] + np.arange(25, 227).tolist()].as_matrix()\nlabel = commentsdf.recommendationCount.as_matrix()\nlabel_categorical = split([0, q, 9999], label)\nXtrain_cat, Xtest_cat, ytrain_cat, ytest_cat = train_test_split(data, label_categorical)\nXtrain_val_cat, Xval_cat, ytrain_val_cat, yval_cat = train_test_split(Xtrain_cat, ytrain_cat)\nXtrain_val_scaled = scale.fit_transform(Xtrain_val_cat)\nXval_scaled = scale.transform(Xval_cat)\n\n# Support vector classifier\nsvm = LinearSVC(verbose=1, fit_intercept=False)\nsvm.fit(Xtrain_val_scaled, ytrain_val_cat)\nypred = svm.predict(Xval_scaled)\nprint confusion_matrix(yval_cat, ypred)\ngetstats(confusion_matrix(yval_cat, ypred))\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nplt.title(\"ROC Curve\", fontsize=15)\nplt.ylabel(\"True Positive Rate\", fontsize=14)\nplt.xlabel(\"False Positive Rate\", fontsize=14)\nfpr, tpr, thresholds = roc_curve(yval_cat, svm.decision_function(Xval_scaled))\nplt.plot(fpr, tpr);\nplt.plot([0, 1], [0, 1], 'k--');", "There are a great many false negatives and false positives here; our false discovery rate and false negative rate are both quite high. However, the results are promising.", "# Random forest classifier\nrfc = RandomForestClassifier(n_estimators=100, verbose=1, n_jobs=-1)\nrfc.fit(Xtrain_val_cat, ytrain_val_cat)\nypred = rfc.predict(Xval_cat)\nprint confusion_matrix(yval_cat, ypred)\ngetstats(confusion_matrix(yval_cat, ypred))", "(Moderate) success! The random forest classifier gives a reasonable accuracy, precision, etc. There are more false negatives than the SVC, but the false positive rate is much lower. 
Since we prefer false negatives to false positives, this is not too bad.\nWe now test the prediction on our test set and see that the results are similar to those on the validation set.", "rfc = RandomForestClassifier(n_estimators=100, verbose=1, n_jobs=-1)\nrfc.fit(Xtrain_cat, ytrain_cat)\nypred = rfc.predict(Xtest_cat)\nprint confusion_matrix(ytest_cat, ypred)\ngetstats(confusion_matrix(ytest_cat, ypred))", "<a id='conclusion'></a>\nWhy Did We Have So Much Trouble, and What Could We Do Better?\nWe clearly had a great deal of trouble generating an even moderately useful prediction. There were two main reasons for this: (1) highly unbalanced data, and (2) the difficulty of natural language processing.\n\n\nUnbalanced data: As we showed in the exploratory data analysis section, the vast majority of comments have very few recommendations, and only a small proportion of comments are designated editor's choices. This results in a dataset where predicting zero recommendations and no editor's choices is effective at minimizing error. It is, in general, hard to make any sort of good predictions when the data is this unbalanced. One straightforward, but time-consuming, approach to ameliorating this problem is to get more data. This would likely be the first step in a future analysis. 180,000 comments is only a small proportion of the total comments posted each year, and collecting more data would give us more popular comments on which to train our models.\n\n\nNLP: NLP is a deep and complicated field, and since we did not have prior experience, we were able to perform only rudimentary feature selection. Given more time, we could research and implement more sophisticated feature selection techniques and engineer features that carry more information about the comments.\n\n\nWe could further improve our model by exploring how an article relates to its comments. 
As a simple example, positive sentiment sentiment comments on restaurant reviews might fare better than positive comments on highly politicized editorials, but the true relationships are likely much more intricate. With a larger sample of comments and article data, we could use a deep learning approach to derive insights from the complicated relationships between articles and comments, and between comments and other comments. Building a model that incorporates article text and metadata could be very powerful; unfortunately, it would also require much more data scraping and much more sophisticated methods, both of which are time-prohibitive.\nExtra Visualizations", "commentsdf[\"GoodOrBad\"] = label_categorical\n\nbad_comments = commentsdf[commentsdf.GoodOrBad == 0]\ngood_comments = commentsdf[commentsdf.GoodOrBad == 1]\n\nsns.set_style('ticks')\nfig, ax = plt.subplots()\nfig.set_size_inches(10, 8)\n\nsns.regplot('tfidf', 'positive_sentiment', bad_comments, fit_reg=False, label=\"> 16 recommendations\", scatter_kws={'alpha': 0.8})\nsns.regplot('tfidf', 'positive_sentiment', good_comments, fit_reg=False, label=\"< 16 recommendations\", scatter_kws={'alpha': 0.8})\n\nplt.xlim(0, 1.1)\nplt.ylim(0, 1.0)\nplt.title(\"Specific & positive comments get more recommendations\", fontsize=18)\nplt.ylabel(\"Positive Sentiment of Comment\", fontsize=14)\nplt.xlabel(\"Avg. 
Word Specificity in Comment\", fontsize=14)\nplt.legend(bbox_to_anchor=(1.4, 1), fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14);\n\ndata = commentsdf[commentsdf[\"recommendationCount\"] < 50]\n\nfig,ax = plt.subplots(nrows=1, ncols=1, figsize=(7,5))\nax.yaxis.set_major_formatter(formatter)\nplt.hist(data.recommendationCount, bins=20,alpha=.5)\nplt.axvline(23.73,color = 'r',alpha = .5,label = 'Mean = 24')\nplt.axvline(5,color = 'g',alpha = .5,label = 'Median = 5')\nplt.title(\"Recommendations per Comment Histogram\", fontsize=16)\nplt.ylabel(\"Count (000's)\", fontsize=14)\nplt.xlabel(\"Number of Recommendations\", fontsize=14)\nplt.xticks(fontsize=14)\nplt.yticks(fontsize=14)\nplt.legend(fontsize=14);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
InsightSoftwareConsortium/SimpleITK-Notebooks
Python/11_Progress.ipynb
apache-2.0
[ "Progress Reporting and Command Observers <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F11_Progress.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nSimpleITK filters and other classes derived from ProcessObject have the ability to execute user code when certain events occur. This is the Command and Observer design pattern for implementing user callbacks. It allows processes to be monitored and aborted while they are executing.\nConsider the following image source, which takes a few seconds to execute. It would be nice to quickly know how long you're going to need to wait, to know if you can go get a cup of coffee.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport SimpleITK as sitk\n\nprint(sitk.Version())\n\nimport sys\nimport os\nimport threading\n\nfrom myshow import myshow\nfrom myshow import myshow3d\n\nsize = 256 # if this is too fast increase the size\nimg = sitk.GaborSource(\n sitk.sitkFloat32,\n size=[size] * 3,\n sigma=[size * 0.2] * 3,\n mean=[size * 0.5] * 3,\n frequency=0.1,\n)\nmyshow3d(img, zslices=[int(size / 2)], dpi=40);\n\nmyshow(img);", "We need to add a command to display the progress reported by the ProcessObject::GetProgress method during the sitkProgressEvent. 
This involves three components:\n\nEvents\nProcessObject's methods\nCommands\n\nWe'll look at some examples after a brief explanation of these components.\nEvents\nThe available events to observe are defined in a namespace enumeration.\n<table>\n <tr><td>sitkAnyEvent</td><td>Occurs for all event types.</td></tr>\n <tr><td>sitkAbortEvent</td><td>Occurs after the process has been aborted, but before exiting the Execute method.</td></tr>\n <tr><td>sitkDeleteEvent</td><td>Occurs when the underlying itk::ProcessObject is deleted.</td></tr>\n <tr><td>sitkEndEvent</td><td>Occurs at the end of normal processing.</td></tr>\n <tr><td>sitkIterationEvent</td><td>Occurs with some algorithms that run for a fixed or undetermined number of iterations.</td></tr>\n <tr><td>sitkProgressEvent</td><td>Occurs when the progress changes in most process objects.</td></tr>\n <tr><td>sitkStartEvent</td><td>Occurs when the itk::ProcessObject is starting.</td></tr>\n <tr><td>sitkUserEvent</td><td>Other events may fall into this enumeration.</td></tr>\n</table>\n\nThe convention of prefixing enums with \"sitk\" is continued, although it's getting a little crowded. \nC++ is more strongly typed than Python; it allows implicit conversion from an enum type to an int, but not from an int to an enum type. Care needs to be taken to ensure the correct enum value is passed in Python.\nProcessObject's methods\nTo be able to interface with the ProcessObject during execution, the object-oriented interface must be used to access the methods of the ProcessObject. 
While any constant member function can be called during a command callback, there are two common methods:\n\nProcessObject::GetProgress()\nProcessObject::Abort()\n\nThese methods are only valid during the Command while a process is being executed, or when the process is not in the Execute method.\nAdditionally, it should be noted that the following methods cannot be called during a command or from another thread during execution: Execute and RemoveAllCommands. In general, the ProcessObject should not be modified during execution.\nCommands\nThe command design pattern is used to allow user code to be executed when an event occurs. It is implemented in the Command class. The Command class provides an Execute method to be overridden in derived classes. \nThere are three ways to define a command with SimpleITK in Python.\n\nDerive from the Command class.\nUse the PyCommand class' SetCallbackPyCallable method.\nUse an inline lambda function in ProcessObject::AddCommand.", "help(sitk.Command)\n\nclass MyCommand(sitk.Command):\n def __init__(self):\n # required\n super(MyCommand, self).__init__()\n\n def Execute(self):\n print(\"MyCommand::Execute Called\")\n\n\ncmd = MyCommand()\ncmd.Execute()\n\nhelp(sitk.PyCommand)\n\ncmd = sitk.PyCommand()\ncmd.SetCallbackPyCallable(lambda: print(\"PyCommand Called\"))\ncmd.Execute()", "Back to watching the progress of our Gabor image source. First let's create the filter as an object.", "size = 256\nfilter = sitk.GaborImageSource()\nfilter.SetOutputPixelType(sitk.sitkFloat32)\nfilter.SetSize([size] * 3)\nfilter.SetSigma([size * 0.2] * 3)\nfilter.SetMean([size * 0.5] * 3)\nfilter.SetFrequency(0.1)\nimg = filter.Execute()\nmyshow3d(img, zslices=[int(size / 2)], dpi=40);", "The ProcessObject interface for the Invoker or Subject\nSimpleITK doesn't have a large hierarchy of inheritance. It has been kept to a minimum, so there is no common Object or LightObject base class as ITK has. 
As most of the goals for the events have to do with observing processes, the \"Subject\" interface of the Observer pattern, or the \"Invoker\" part of the Command design pattern, has been added to a ProcessObject base class for filters.\nThe ProcessObject base class has the following methods for handling commands: AddCommand, RemoveAllCommands, and HasCommand.\nThese functionalities are not available in SimpleITK's procedural interface. They are only available through the object-oriented interface, and they break the method-chaining interface.", "help(sitk.ProcessObject)", "Deriving from the Command class\nThe traditional way of using Commands in ITK involves deriving from the Command class and adding it to the ProcessObject.", "class MyCommand(sitk.Command):\n def __init__(self, msg):\n # required\n super(MyCommand, self).__init__()\n self.msg = msg\n\n def __del__(self):\n print(f'MyCommand being deleted: \"{self.msg}\"')\n\n def Execute(self):\n print(self.msg)\n\ncmd1 = MyCommand(\"Start\")\ncmd2 = MyCommand(\"End\")\nfilter.RemoveAllCommands() # this line is here so we can easily re-execute this code block\nfilter.AddCommand(sitk.sitkStartEvent, cmd1)\nfilter.AddCommand(sitk.sitkEndEvent, cmd2)\nfilter.Execute()", "A reference to the Command object must be maintained, or else it will be removed from the ProcessObject.", "filter.AddCommand(sitk.sitkStartEvent, MyCommand(\"stack scope\"))\nprint(\"Before Execution\")\nfilter.Execute()", "Using a lambda function as the Command\nIn Python, AddCommand has been extended to accept PyCommand objects and to implicitly create a PyCommand from a callable Python argument. 
This is really useful.", "filter.RemoveAllCommands() # this line is here so we can easily re-execute this code block\nfilter.AddCommand(sitk.sitkStartEvent, lambda: print(\"Starting...\", end=\"\"))\nfilter.AddCommand(sitk.sitkStartEvent, lambda: sys.stdout.flush())\nfilter.AddCommand(sitk.sitkEndEvent, lambda: print(\"Done\"))\nfilter.Execute()", "Access to ITK data during command execution\nThe commands are not too useful unless you can query the filter through the SimpleITK interface. A couple of status variables and methods are exposed in the SimpleITK ProcessObject through the polymorphic interface of the same ITK class.", "filter.RemoveAllCommands()\nfilter.AddCommand(\n sitk.sitkProgressEvent,\n lambda: print(f\"\\rProgress: {100*filter.GetProgress():03.1f}%...\", end=\"\"),\n)\nfilter.AddCommand(sitk.sitkProgressEvent, lambda: sys.stdout.flush())\nfilter.AddCommand(sitk.sitkEndEvent, lambda: print(\"Done\"))\nfilter.Execute()", "Utilizing Jupyter Notebooks and Commands\nUtilization of commands and events frequently occurs with advanced integration into graphical user interfaces. Let us now bring this advanced integration into Jupyter Notebooks.\nJupyter notebooks support displaying output as HTML, and execution of JavaScript on demand. 
Together this can produce animation.", "import uuid\nfrom IPython.display import HTML, Javascript, display\n\ndivid = str(uuid.uuid4())\n\nhtml_progress = f\"\"\"\n<p style=\"margin:5px\">FilterName:</p>\n<div style=\"border: 1px solid black;padding:1px;margin:5px\">\n <div id=\"{divid}\" style=\"background-color:blue; width:0%\">&nbsp;</div>\n</div>\n\"\"\"\n\n\ndef command_js_progress(processObject):\n p = processObject.GetProgress()\n display(Javascript(\"$('div#%s').width('%i%%')\" % (divid, int(p * 100))))\n\nfilter.RemoveAllCommands()\nfilter.AddCommand(sitk.sitkStartEvent, lambda: display(HTML(html_progress)))\nfilter.AddCommand(sitk.sitkProgressEvent, lambda: command_js_progress(filter))\n\nfilter.Execute()", "Support for Bi-directional JavaScript\nIt's possible to get a button in HTML to execute Python code...", "import uuid\nfrom IPython.display import HTML, Javascript, display\n\ng_Abort = False\ndivid = str(uuid.uuid4())\n\nhtml_progress_abort = f\"\"\"\n<div style=\"background-color:gainsboro; border:2px solid black;padding:15px\">\n<p style=\"margin:5px\">FilterName:</p>\n<div style=\"border: 1px solid black;padding:1px;margin:5px\">\n <div id=\"{divid}\" style=\"background-color:blue; width:0%\">&nbsp;</div>\n</div>\n<button onclick=\"set_value()\" style=\"margin:5px\" >Abort</button>\n</div>\n\"\"\"\n\njavascript_abort = \"\"\"\n<script type=\"text/Javascript\">\n function set_value(){\n var command = \"g_Abort=True\"\n console.log(\"Executing Command: \" + command);\n \n var kernel = IPython.notebook.kernel;\n kernel.execute(command);\n }\n</script>\n\"\"\"\n\n\ndef command_js_progress_abort(processObject):\n p = processObject.GetProgress()\n display(Javascript(\"$('div#%s').width('%i%%')\" % (divid, int(p * 100))))\n if g_Abort:\n processObject.Abort()\n\n\ndef command_js_start_abort():\n global g_Abort\n g_Abort = False\n\ng_Abort = False\nfilter.RemoveAllCommands()\nfilter.AddCommand(sitk.sitkStartEvent, command_js_start_abort)\nfilter.AddCommand(\n 
sitk.sitkStartEvent, lambda: display(HTML(html_progress_abort + javascript_abort))\n)\nfilter.AddCommand(sitk.sitkProgressEvent, lambda: command_js_progress_abort(filter))", "A caveat with this approach is that the IPython kernel must continue to execute while the filter is running. So we must place the filter in a thread.", "import threading\n\nthreading.Thread(target=lambda: filter.Execute()).start()", "While the lambda commands are convenient, the lack of an object to hold data can still be problematic. For example, in the above code the uuid is used to uniquely identify the HTML element, so if the filter is executed multiple times the JavaScript update will be confused about what to update.", "#### The following shows a failure that you will want to avoid.\nthreading.Thread(target=lambda: filter.Execute()).start()", "A Reusable class for IPython Progress\nThere are currently too many caveats to support Abort. Let us create a reusable class which will automatically generate the UUID and just display the progress.", "import uuid\nfrom IPython.display import HTML, Javascript, display\n\n\nclass HTMLProgressWatcher:\n def __init__(self, po):\n self.processObject = po\n self.abort = False\n\n po.AddCommand(sitk.sitkStartEvent, lambda: self.cmdStartEvent())\n po.AddCommand(sitk.sitkProgressEvent, lambda: self.cmdProgressEvent())\n po.AddCommand(sitk.sitkEndEvent, lambda: self.cmdEndEvent())\n\n def cmdStartEvent(self):\n global sitkIPythonProgress_UUID\n self.abort = False\n self.divid = str(uuid.uuid4())\n\n try:\n sitkIPythonProgress_UUID[self.divid] = self\n except NameError:\n sitkIPythonProgress_UUID = {self.divid: self}\n\n html_progress_abort = f\"\"\"\n<p style=\"margin:5px\">{self.processObject.GetName()}:</p>\n<div style=\"border: 1px solid black;padding:1px;margin:5px\">\n <div id=\"{self.divid}\" style=\"background-color:blue; width:0%\">&nbsp;</div>\n</div>\n\"\"\"\n\n display(HTML(html_progress_abort + javascript_abort))\n\n def 
cmdProgressEvent(self):\n p = self.processObject.GetProgress()\n display(Javascript(\"$('div#%s').width('%i%%')\" % (self.divid, int(p * 100))))\n if self.abort:\n self.processObject.Abort()\n\n def cmdEndEvent(self):\n global sitkIPythonProgress_UUID\n del sitkIPythonProgress_UUID[self.divid]\n del self.divid\n\nfilter.RemoveAllCommands()\nwatcher = HTMLProgressWatcher(filter)\n\nfilter.Execute()\n\n?threading.Thread.start" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dlsun/symbulate
docs/joint.ipynb
mit
[ "Symbulate Documentation\nMultiple Random Variables and Joint Distributions\n<a id='contents'></a>\n\nDefining multiple random variables\nSimulating pairs (or tuples) of values with &amp;\nVisualizing simulation results with .plot()\nCommonly used joint distributions \nDefining independent random variables\nRandom vectors\n\"Unpacking\"\nMarginal distributions\nCovariance\nCorrelation\nTransformations\nA caution about working with multiple random variables\n\n< Commonly used probability models | Contents | Conditioning >\nBe sure to import Symbulate using the following commands.\n<a id='joint'></a>", "from symbulate import *\n%matplotlib inline", "<a id='joint'></a>\n<a id='def_mult_rand'></a>\nDefining multiple random variables\nMany problems involve several random variables defined on the same probability space. Of interest are properties of the joint distribution which describe the relationship between the random variables. \nIn the context of multiple random variables, the distribution of any single random variable is referred to as a marginal distribution. Joint distributions fully specify the corresponding marginal distributions; however, the converse is not true (unless the random variables are independent.)\nExample. Roll two fair six-sided dice. Let $X$ be the sum of the two dice and $Y$ the larger of the two rolls.", "die = list(range(1, 6 + 1))\nP = BoxModel(die, size=2)\nX = RV(P, sum)\nY = RV(P, max)", "<a id='ampersand'></a>\nSimulating pairs (or tuples) of values with &amp;\nJoining X and Y with an ampersand &amp; and calling .sim() simultaneously simulates the pair of (X, Y) values for each simulated outcome of the probability space. The simulated results can be used to approximate the joint distribution of X and Y which describes the possible pairs of values and their relative likelihoods. Likewise, tuples of values of multiple random variables can be simulated simultaneously using the ampersand &amp;. 
Simulation tools like .sim(), .tabulate(), etc. work as before.\nExample. Roll two fair six-sided dice. Let $X$ be the sum of the two dice and $Y$ the larger of the two rolls.", "die = list(range(1, 6 + 1))\nP = BoxModel(die, size=2)\nX = RV(P, sum)\nY = RV(P, max)\n(X & Y).sim(10000).tabulate(normalize=True)", "<a id='plot'></a>\nVisualizing simulation results with .plot()\nCalling .plot() for simulated (X, Y) pairs produces a scatterplot summary of the simulated values. When the variables are discrete, it is recommended to use the jitter=True option to better visualize relative frequencies. The alpha= parameter controls the level of transparency.\nExample. Roll two fair six-sided dice. Let $X$ be the sum of the two dice and $Y$ the larger of the two rolls.", "die = list(range(1, 6 + 1))\nP = BoxModel(die, size=2)\nX = RV(P, sum)\nY = RV(P, max)\n(X & Y).sim(10000).plot(jitter = True, alpha = 0.01)", "See the section on Symbulate graphics for more details on plotting options and functionality. \n<a id='common_joint_dist'></a>\nCommonly used joint distributions\nRecall that a RV can be defined by specifying its distribution directly. Similarly, multiple RVs can be defined by specifying the joint distribution directly. Several commonly used joint distributions are built in to Symbulate. For example, a multivariate normal distribution is a joint distribution parametrized by a mean vector and a covariance matrix.", "covmatrix = [[1, -0.5],\n [-0.5, 4]]\nX, Y = RV(MultivariateNormal(mean = [0, 1], cov = covmatrix)) # see below for notes on \"unpacking\"", "Custom joint distributions can be specified using ProbabilitySpace. For example, it is possible to specify a joint distribution via conditional and marginal distributions.\n<a id='def_ind_ran'></a>\nDefining independent random variables\nIntuitively, a collection of random variables are independent if knowing the values of some does not influence the joint distribution of the others. 
Random variables $X$ and $Y$ are independent if and only if the joint distribution factors into the product of the corresponding marginal distributions. That is, for independent RVs the joint distribution is fully specified by the marginal distributions.\nRecall that a RV can be defined by specifying its distribution directly. When dealing with multiple random variables it is common to specify the marginal distribution of each and assume independence. In Symbulate, independence of distributions is represented by the asterisk *. The * syntax reflects that under independence joint objects (e.g. cdf, pdf) are products of the corresponding marginal objects.\nExample. Let $X$, $Y$, and $Z$ be independent, with $X$ having a Binomial(5, 0.5) distribution, $Y$ a Normal(0,1) distribution, and $Z$ a Uniform(0,1) distribution.", "X, Y, Z = RV(Binomial(5, 0.5) * Normal(0, 1) * Uniform(0, 1)) # see below for notes on \"unpacking\"\n(X & Y & Z).sim(10000)", "The product syntax emphasizes that the random variables are defined on the same probability space (a product space). It is also possible to define each random variable separately and then use the AssumeIndependent command. The following code is equivalent to the above code. Either syntax has the effect of creating an unspecified probability space upon which random variables $X, Y, Z$ are defined via unspecified functions such that $X$, $Y$, and $Z$ are independent and have the specified marginal distributions.", "X = RV(Binomial(5, 0.5))\nY = RV(Normal(0, 1))\nZ = RV(Uniform(0, 1)) \nX, Y, Z = AssumeIndependent(X, Y, Z)", "Random variables are independent and identically distributed (i.i.d.) when they are independent and have a common marginal distribution. For example, if V represents the number of heads in two flips of a penny and W the number of heads in two flips of a dime, then V and W are i.i.d., with a common marginal Binomial(n=2, p=0.5) distribution. For i.i.d. 
random variables, defining the joint distribution using the \"exponentiation\" notation ** makes the code a little more compact.\nExample. Let $X$ and $Y$ be i.i.d. Normal(0, 1) random variables.", "X, Y = RV(Normal(0,1) ** 2) # see below for notes on \"unpacking\"\n(X & Y).sim(10000).plot(alpha = 0.01)", "<a id='rv'></a>\nRandom vectors\nRecall that a random variable maps an outcome in a probability space to a real number. A random vector maps an outcome in a probability space to a vector of values. In other words, a random vector is a vector of random variables.\nEach realization of a random vector is a tuple of values, rather than a single value. For example, a roll of two dice could return the pair of values (sum of the rolls, larger of the rolls). The RV class can be used to define random vectors as well as random variables.\nExample. Suppose that a random vector X is formed by drawing two values independently, the first from a Binomial(5, 0.5) distribution and the second from a Normal(0, 1) distribution. Note that calling .sim() on X below generates pairs of values.", "X = RV(Binomial(5, 0.5) * Normal(0, 1))\nX.sim(3)", "Components of a random vector X can be accessed using brackets []. Note that Python starts the index at 0, so the first entry of a vector X is X[0], the second entry is X[1], etc. Each component of a random vector is a random variable so indexing using brackets produces a random variable which can be manipulated accordingly.", "X = RV(Binomial(5, 0.5) * Normal(0, 1))\nX[0].sim(10000).plot()", "Brackets can be used to access components of the random vector itself, or the simulated values of a random vector", "X = RV(Binomial(5, 0.5) * Normal(0, 1))\nX.sim(10000)[1].plot()", "<a id='unpack'></a>\n\"Unpacking\"\nIndividual components of a random vector X can be accessed using brackets, e.g. X[0], X[1], etc. When a problem involves only a few random variables, it is typical to denote them as e.g. X, Y, Z (rather than X[0], X[1], X[2]). 
Components of a random vector can be \"unpacked\" in this way when defining an RV, allowing for more compact syntax.\nExample. Let $X$, $Y$, and $Z$ be independent, with $X$ having a Binomial(5, 0.5) distribution, $Y$ a Normal(0,1) distribution, and $Z$ a Uniform(0,1) distribution. The following two cells provide two ways this situation can be defined. The first version is the \"unpacked\" definition which defines the three random variables. The second defines a random vector and then accesses each of its components with brackets.", "# unpacked version\nX, Y, Z = RV(Binomial(5, 0.5) * Normal(0,1) * Uniform(0,1))\nY.sim(10000).plot()\n(X & Y & Z).sim(4)\n\n# vector version\nXYZ = RV(Binomial(5, 0.5) * Normal(0,1) * Uniform(0,1))\nX = XYZ[0]\nY = XYZ[1]\nZ = XYZ[2]\nY.sim(10000).plot()\nXYZ.sim(4)", "<a id='marginal'></a>\nMarginal distributions\nEach component of a random vector is a random variable, so unpacking or indexing using brackets produces random variables which can be manipulated accordingly to describe their marginal distribution.\nWhen multiple random variables are simulated, applying .mean(), .var(), or .sd() returns the marginal means, variances, and standard deviations, respectively, of each of the random variables involved.\nExample. A vector of independent random variables.", "X = RV(Binomial(5, 0.5) * Normal(0, 1) * Poisson(4))\nX[2].sim(10000).plot()\n\nX.sim(10000).mean()\n\nX.sim(10000).sd()", "Example. A multivariate normal example, with \"unpacking\".", "covmatrix = [[1, -0.5],\n [-0.5, 4]]\nX, Y = RV(MultivariateNormal(mean = [0, 1], cov = covmatrix))\nxy = (X & Y).sim(10000)\nxy.mean()\n\nxy.var()", "<a id='cov'></a>\nCovariance\nThe covariance between random variables $X$ and $Y$, defined as\n$$\nCov(X,Y) = E[(X-E(X))(Y-E(Y))],\n$$\nmeasures the degree of linear dependence between $X$ and $Y$. Covariance can be approximated by simulating many pairs of values of the random variables and using .cov().\nExample. 
Let $X$ be the minimum and $Y$ the maximum of two independent Uniform(0,1) random variables. It can be shown that $Cov(X,Y) = 1/36$ (and the correlation is 1/2).", "P = Uniform(a=0, b=1) ** 2\nX = RV(P, min)\nY = RV(P, max)\nxy = (X & Y).sim(10000)\nplot(xy, alpha = 0.01)\nxy.cov()", "Example. A multivariate normal example.", "covmatrix = [[1, -0.5],\n [-0.5, 4]]\nX, Y = RV(MultivariateNormal(mean=[0, 1], cov=covmatrix)) # see below for notes on \"unpacking\"\nxy = (X & Y).sim(10000)\nxy.cov()", "When simulating more than two random variables, applying .cov() returns the covariance matrix of covariances between each pair of values (with the variances on the diagonal).", "(X & Y & X+Y).sim(10000).cov()", "<a id='corr'></a>\nCorrelation\nThe correlation coefficient is a standardized measure of linear dependence which takes values in $[-1, 1]$.\n$$\nCorr(X,Y) = \\frac{Cov(X,Y)}{\\sqrt{Var(X)}\\sqrt{Var(Y)}} = Cov\\left(\\frac{X - E(X)}{SD(X)},\\frac{Y - E(Y)}{SD(Y)}\\right)\n$$\nThe correlation coefficient can be approximated by simulating many pairs of values and using .corr().\nExample. A bivariate normal example.", "X, Y = RV(BivariateNormal(mean1=0, mean2=1, sd1=1, sd2=2, corr=-0.25 ))\nxy = (X & Y).sim(10000)\nxy.corr()", "When simulating more than two random variables, applying .corr() returns the correlation matrix of correlations between each pair of values (with 1s on the diagonal since a variable is perfectly correlated with itself).", "(X & Y & X+Y).sim(10000).corr()", "<a id='transform'></a>\nTransformations of random variables\nRandom variables are often defined as functions of other random variables. In particular, arithmetic operations like addition, subtraction, multiplication, and division can be applied to random variables defined on the same probability space.\nExample. 
Two soccer teams score goals independently of each other, team A according to a Poisson distribution with mean 2.3 goals per game and team B according to a Poisson distribution with mean 1.7 goals per game. Produce a plot of the approximate distribution of the total number of goals scored in a game.", "X, Y = RV(Poisson(lam=2.3) * Poisson(lam=1.7))\nZ = X + Y\nZ.sim(10000).plot()", "Example. The coordinates of a \"random point\" in the $(x, y)$ plane are random variables $X$ and $Y$ chosen independently of each other, each according to a Normal(0, 1) distribution. Produce a plot of the approximate distribution of $Z$, the distance of the $(X, Y)$ point from the origin.", "X, Y = RV(Normal(0, 1) ** 2)\nZ = sqrt(X ** 2 + Y ** 2)\nZ.sim(10000).plot()", "Example. Let $X$ and $Y$ be i.i.d. Exponential(1) random variables. Produce a plot of the approximate joint distribution of $W = X + Y$ and $Z = X / W$.", "X, Y = RV(Exponential(1)**2)\nW = X + Y\nZ = X / W\n(W & Z).sim(10000).plot(alpha = 0.05)", "<a id='caution'></a>\nA caution about working with multiple random variables\nIn order to manipulate multiple random variables simultaneously, they must be defined on the same probability space. Otherwise, it would not be possible to determine the relationship between the random variables. Note that the following code would produce an error because the random variables are not explicitly defined on the same probability space. In particular, Symbulate has no way of determining the joint distribution of $X$ and $Y$. The error can be fixed by adding X, Y = AssumeIndependent(X, Y) before the last line, after which the code would be equivalent to the code in the soccer example above.\nX = RV(Poisson(2.3)) \nY = RV(Poisson(1.7)) \n(X + Y).sim(10000).plot()\nIf it is desired to define independent random variables, the independence must be made explicit, either with the product syntax * (or **) or with AssumeIndependent. 
(The AssumeIndependent command has the effect of defining the random variables involved on the same probability space, a product space formed from the marginal distributions.)\n< Commonly used probability models | Contents | Conditioning >" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
atlury/deep-opencl
cs480/25 K-Means Clustering.ipynb
lgpl-3.0
[ "K-Means Clustering\nThe UCI Machine Learning Repository is a \"go-to\" place for fascinating data sets often used in machine learning education and research. \nLet's check out the Stone Flakes data set. Find and download the StoneFlakes.dat file by clicking here or by running wget like below.", "!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00299/StoneFlakes.dat", "Let's look at the first few lines.", "!head StoneFlakes.dat", "Read about the column names and the meaning of the ID values at the data set's web site. \nNotice that values are separated by commas, except for the first column. Also notice that there are question marks where data is missing. How can we read this? Well, the usual answer is to \"google\" for the answer. Try searching for \"read data set numpy\"\nnumpy includes functions for reading csv files. However, the pandas package offers more powerful functions for parsing data. Let's try its read_csv function.", "import pandas\n\nd = pandas.read_csv(open('StoneFlakes.dat'))\n\nd[:5]\n\nd = pandas.read_csv(open('StoneFlakes.dat'),sep=',')\nd[:5]", "Let's just replace spaces with commas, using unix. Read about the tr unix command at Linux TR Command Examples", "! tr -s ' ' ',' < StoneFlakes.dat > StoneFlakes2.dat\n! head StoneFlakes2.dat\n\nd = pandas.read_csv(open('StoneFlakes2.dat'))\nd[:5]\n\nd = pandas.read_csv(open('StoneFlakes2.dat'),na_values='?')\nd[:5]\n\nd = pandas.read_csv(open('StoneFlakes2.dat'),na_values='?',error_bad_lines=False)\nd[:5]\n\nd[:5].isnull()\n\nd[:5].isnull().any(axis=1)\n\nd[:5].isnull().any(axis=1) == False\n\nprint(d.shape)\nd = d[d.isnull().any(axis=1)==False]\nprint(d.shape)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndata = d.iloc[:,1:].values\ndata.shape\n\ndata[:5,:]", "To see this data, let's try plotting each column as a separate curve on the same axes.", "plt.plot(data);", "Each sample has 8 attributes, so each sample is a point in 8-dimensional space. 
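The boolean-mask idiom used above (keeping rows where isnull().any(axis=1) is False) is equivalent to pandas' dropna; a small sketch on toy data rather than the stone-flakes file:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})

# Mask idiom from the notebook: keep rows with no missing values.
masked = toy[toy.isnull().any(axis=1) == False]

# Same result via dropna.
dropped = toy.dropna()

print(masked.equals(dropped))  # True: both keep only the first row
```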
I wonder how well the samples \"clump\" in those 8 dimensions. Let's try clustering them with the k-means algorithm.\nFirst, let's try to find two clusters, so $k=2$. We must initialize the two means of the two clusters. Let's just pick two samples at random.", "np.random.choice(range(data.shape[0]),2, replace=False) # data.shape[0] is number of rows, or samples\n\nnp.random.choice(range(data.shape[0]),2, replace=False)\n\ncentersIndex = np.random.choice(range(data.shape[0]),2, replace=False)\ncentersIndex\n\ncenters = data[centersIndex,:]\ncenters", "Now we must find all samples that are closest to the first center, and those that are closest to the second sample.\nCheck out the wonders of numpy broadcasting.", "a = np.array([1,2,3])\nb = np.array([10,20,30])\na, b\n\na-b", "But what if we want to subtract every element of a with every element of b?", "np.resize(a,(3,3))\n\nnp.resize(b, (3,3))\n\nnp.resize(a,(3,3)).T\n\nnp.resize(a,(3,3)).T - np.resize(b,(3,3))", "However, we can ask numpy to do this duplication for us if we reshape a to be a column vector and leave b as a row vector.\n$$ \\begin{pmatrix}\n1\\\n2\\\n3\n\\end{pmatrix}\n-\n\\begin{pmatrix}\n10 & 20 & 30\n\\end{pmatrix}\n\\;\\; = \\;\\;\n\\begin{pmatrix}\n1 & 1 & 1\\\n2 & 2 & 2\\\n3 & 3 & 3\n\\end{pmatrix}\n-\n\\begin{pmatrix}\n10 & 20 & 30\\\n10 & 20 & 30\\\n10 & 20 & 30\n\\end{pmatrix}\n$$", "a[:,np.newaxis]\n\na.reshape((-1,1))\n\na.shape, a[:,np.newaxis].shape\n\na[:,np.newaxis] - b\n\na = np.array([1,2,3])\nb = np.array([[10,20,30],[40,50,60]])\nprint(a)\nprint(b)\n\nb-a", "The single row vector a is duplicated for as many rows as there are in b! 
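To confirm that the broadcasting trick matches an explicit double loop, here is a quick check on random toy data (independent of the flakes data set):

```python
import numpy as np

rng = np.random.default_rng(1)
centers = rng.normal(size=(2, 4))   # 2 centers in 4 dimensions
points = rng.normal(size=(5, 4))    # 5 samples

# Broadcasting: (2,1,4) - (5,4) -> (2,5,4); summing over axis 2 -> (2,5)
fast = np.sum((centers[:, np.newaxis, :] - points) ** 2, axis=2)

# Explicit loops computing the same 2 x 5 matrix of squared distances.
slow = np.zeros((2, 5))
for i in range(2):
    for j in range(5):
        slow[i, j] = np.sum((centers[i] - points[j]) ** 2)

print(np.allclose(fast, slow))  # True
```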
We can use this to calculate the squared distance between a center and every sample.", "centers[0,:]\n\nnp.sum((centers[0,:] - data)**2, axis=1)\n\nnp.sum((centers[1,:] - data)**2, axis=1) > np.sum((centers[0,:] - data)**2, axis=1)\n\ncenters\n\ncenters[:,np.newaxis,:].shape, data.shape\n\n(centers[:,np.newaxis,:] - data).shape\n\nnp.sum((centers[:,np.newaxis,:] - data)**2, axis=2).shape\n\nnp.argmin(np.sum((centers[:,np.newaxis,:] - data)**2, axis=2), axis=0)\n\ncluster = np.argmin(np.sum((centers[:,np.newaxis,:] - data)**2, axis=2), axis=0)\ncluster\n\ndata[cluster==0,:].mean(axis=0)\n\ndata[cluster==1,:].mean(axis=0)\n\nk = 2\nfor i in range(k):\n centers[i,:] = data[cluster==i,:].mean(axis=0)\n\ncenters\n\ndef kmeans(data, k = 2, n = 5):\n # Initial centers\n centers = data[np.random.choice(range(data.shape[0]),k, replace=False), :]\n # Repeat n times\n for iteration in range(n):\n # Which center is each sample closest to?\n closest = np.argmin(np.sum((centers[:,np.newaxis,:] - data)**2, axis=2), axis=0)\n # Update cluster centers\n for i in range(k):\n centers[i,:] = data[closest==i,:].mean(axis=0)\n return centers\n\nkmeans(data,2)\n\nkmeans(data,2)", "Let's define $J$ from the book, which is a performance measure being minimized by k-means. 
It is defined on page 424 as\n$$\nJ = \\sum_{n=1}^N \\sum_{k=1}^K r_{nk} ||\\mathbf{x}_n - \\mathbf{\\mu}_k||^2\n$$\nwhere $N$ is the number of samples, $K$ is the number of cluster centers, $\\mathbf{x}_n$ is the $n^{th}$ sample and $\\mathbf{\\mu}_k$ is the $k^{th}$ center, each being an element of $\\mathbf{R}^p$ where $p$ is the dimensionality of the data.\nThe sums can be computed using python for loops, but for loops are much slower than matrix operations in python, as the following cells show.", "a = np.linspace(0,10,30).reshape(3,10)\na\n\nb = np.arange(30).reshape(3,10)\nb\n\nresult = np.zeros((3,10))\nfor i in range(3):\n for j in range(10):\n result[i,j] = a[i,j] + b[i,j]\nresult\n\n%%timeit\nresult = np.zeros((3,10))\nfor i in range(3):\n for j in range(10):\n result[i,j] = a[i,j] + b[i,j]\n\nresult = a + b\nresult\n\n%%timeit\nresult = a + b", "So, the matrix form is 10 times faster!\nNow, back to our problem. How can we use matrix operations to calculate the squared distance between two centers and, say, five data samples? 
Let's say both are two-dimensional.", "centers = np.array([[1,2],[5,4]])\ncenters\n\ndata = np.array([[3,2],[4,6],[7,3],[4,6],[1,8]])\ndata", "This will be a little weird, and hard to understand, but by adding an empty dimension to the centers array, numpy broadcasting does all the work for us.", "centers[:,np.newaxis,:]\n\ncenters[:,np.newaxis,:].shape\n\ndata.shape\n\ndiffsq = (centers[:,np.newaxis,:] - data)**2\ndiffsq\n\ndiffsq.shape\n\nnp.sum(diffsq,axis=2)", "Now we have a 2 x 5 array with the first row containing the squared distance from the first center to each of the five data samples, and the second row containing the squared distances from the second center to each of the five data samples.\nNow we just have to find the smallest distance in each column and sum them up.", "np.min(np.sum(diffsq,axis=2), axis=0)\n\nnp.sum(np.min(np.sum(diffsq,axis=2), axis=0))", "Let's define a function named calcJ to do this calculation.", "def calcJ(data,centers):\n diffsq = (centers[:,np.newaxis,:] - data)**2\n return np.sum(np.min(np.sum(diffsq,axis=2), axis=0))\n\ncalcJ(data,centers)\n\ndef kmeans(data, k = 2, n = 5):\n # Initialize centers and list J to track performance metric\n centers = data[np.random.choice(range(data.shape[0]),k,replace=False), :]\n J = []\n \n # Repeat n times\n for iteration in range(n):\n \n # Which center is each sample closest to?\n sqdistances = np.sum((centers[:,np.newaxis,:] - data)**2, axis=2)\n closest = np.argmin(sqdistances, axis=0)\n \n # Calculate J and append to list J\n J.append(calcJ(data,centers))\n \n # Update cluster centers\n for i in range(k):\n centers[i,:] = data[closest==i,:].mean(axis=0)\n \n # Calculate J one final time and return results\n J.append(calcJ(data,centers))\n return centers,J,closest\n\ncenters,J,closest = kmeans(data,2)\n\nJ\n\nplt.plot(J);\n\ncenters,J,closest = kmeans(data,2,10)\nplt.plot(J);\n\ncenters,J,closest = kmeans(data,3,10)\nplt.plot(J);\n\nsmall = np.array([[8,7],[7,6.6],[9.2,8.3],[6.8,9.2], 
[1.2,3.2],[4.8,2.3],[3.4,3.2],[3.2,5.6],[1,4],[2,2.2]])\n\nplt.scatter(small[:,0],small[:,1]);\n\nc,J,closest = kmeans(small,2,n=2)\n\nc\n\nclosest\n\nplt.scatter(small[:,0], small[:,1], s=80, c=closest, alpha=0.5);\nplt.scatter(c[:,0],c[:,1],s=80,c=\"green\",alpha=0.5);\n\nc,J,closest = kmeans(small,2,n=2)\nplt.scatter(small[:,0], small[:,1], s=80, c=closest, alpha=0.5);\nplt.scatter(c[:,0],c[:,1],s=80,c=\"green\",alpha=0.5);\n\nJ\n\nimport gzip\nimport pickle\n\nwith gzip.open('mnist.pkl.gz', 'rb') as f:\n train_set, valid_set, test_set = pickle.load(f, encoding='latin1')\n # zero = train_set[0][1,:].reshape((28,28,1))\n # one = train_set[0][3,:].reshape((28,28,1))\n # two = train_set[0][5,:].reshape((28,28,1))\n # four = train_set[0][20,:].reshape((28,28,1))\n\nX = train_set[0]\nT = train_set[1].reshape((-1,1))\n\nXtest = test_set[0]\nTtest = test_set[1].reshape((-1,1))\n\nX.shape, T.shape, Xtest.shape, Ttest.shape\n\nc,J,closest = kmeans(X, k=10, n=20)\n\nplt.plot(J)\n\nc.shape\n\nfor i in range(10):\n plt.subplot(2,5,i+1)\n plt.imshow(-c[i,:].reshape((28,28)), interpolation='nearest', cmap='gray')\n plt.axis('off')\n\nc,J,closest = kmeans(X, k=10, n=20)\nplt.plot(J)\nplt.figure()\nfor i in range(10):\n plt.subplot(2,5,i+1)\n plt.imshow(-c[i,:].reshape((28,28)), interpolation='nearest', cmap='gray')\n plt.axis('off')\n\nc,J,closest = kmeans(X, k=20, n=20)\nplt.plot(J)\nplt.figure()\nfor i in range(20):\n plt.subplot(4,5,i+1)\n plt.imshow(-c[i,:].reshape((28,28)), interpolation='nearest', cmap='gray')\n plt.axis('off')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/autoregressive_distributed_lag.ipynb
bsd-3-clause
[ "Autoregressive Distributed Lag (ARDL) models\nARDL Models\nAutoregressive Distributed Lag (ARDL) models extend Autoregressive models with lags of explanatory variables. While ARDL models are technically AR-X models, the key difference is that ARDL models focus on the exogenous variables and selecting the correct lag structure from both the endogenous variable and the exogenous variables. ARDL models are also closely related to Vector Autoregressions, and a single ARDL is effectively one row of a VAR. The key distinction is that an ARDL assumes that the exogenous variables are exogenous in the sense that it is not necessary to include the endogenous variable as a predictor of the exogenous variables.\nThe full specification of ARDL models is\n$$\nY_t = \\underset{\\text{Constant and Trend}}{\\underbrace{\\delta_0 + \\delta_1 t + \\ldots + \\delta_k t^k}} \n + \\underset{\\text{Seasonal}}{\\underbrace{\\sum_{i=0}^{s-1} \\gamma_i S_i}}\n + \\underset{\\text{Autoregressive}}{\\underbrace{\\sum_{p=1}^P \\phi_p Y_{t-p}}}\n + \\underset{\\text{Distributed Lag}}{\\underbrace{\\sum_{k=1}^M \\sum_{j=0}^{Q_k} \\beta_{k,j} X_{k, t-j}}}\n + \\underset{\\text{Fixed}}{\\underbrace{Z_t \\Gamma}} + \\epsilon_t\n$$\nThe terms in the model are:\n\n$\\delta_i$: constant and deterministic time regressors. Set using trend.\n$S_i$ are seasonal dummies which are included if seasonal=True.\n$X_{k,t-j}$ are the exogenous regressors. There are a number of formats that can be used to specify which lags are included. Note that the included lag lengths do not need to be the same. If causal=True, then the lags start with lag 1. Otherwise lags begin with 0 so that the model includes the contemporaneous relationship between $Y_t$ and $X_t$.\n$Z_t$ are any other fixed regressors that are not part of the distributed lag specification. 
In practice these regressors may be included when they do not contribute to the long-run relationship between $Y_t$ and the vector of exogenous variables $X_t$.\n${\\epsilon_t}$ is assumed to be a White Noise process", "import numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nsns.set_style(\"darkgrid\")\nsns.mpl.rc(\"figure\", figsize=(16, 6))\nsns.mpl.rc(\"font\", size=14)", "Data\nThis notebook makes use of money demand data from Denmark, as first used in S. Johansen and K. Juselius (1990). The key variables are:\n\nlrm: Log of real money measured using M2\nlry: Log of real income\nibo: Interest rate on bonds\nide: Interest rate of bank deposits\n\nThe standard model uses lrm as the dependent variable and the other three as exogenous drivers.\nJohansen, S. and Juselius, K. (1990), Maximum Likelihood Estimation and Inference on Cointegration – with Applications to the Demand for Money, Oxford Bulletin of Economics and Statistics, 52, 2, 169–210.\nWe start by loading the data and examining it.", "from statsmodels.datasets.danish_data import load\nfrom statsmodels.tsa.api import ARDL\nfrom statsmodels.tsa.ardl import ardl_select_order\n\ndata = load().data\ndata = data[[\"lrm\", \"lry\", \"ibo\", \"ide\"]]\ndata.tail()", "We plot the demeaned data so that all series appear on the same scale. The lrm series appears to be non-stationary, as does lry. The stationarity of the other two is less obvious.", "_ = (data - data.mean()).plot()", "Model Selection\nardl_select_order can be used to automatically select the order. Here we use the minimum AIC among all models that consider up to 3 lags of the endogenous variable and 3 lags of each exogenous variable. 
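The idea behind information-criterion order selection can be sketched without statsmodels: fit candidate AR orders by least squares on synthetic data and compare AIC values. This is a toy illustration of the principle, not the ardl_select_order implementation, and the coefficients are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(2) series: y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + e_t
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    # Build the lag matrix, fit by OLS, then AIC = n*log(SSR/n) + 2*(p+1).
    Y = y[p:]
    lags = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(Y)), lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    ssr = np.sum((Y - X @ beta) ** 2)
    return len(Y) * np.log(ssr / len(Y)) + 2 * (p + 1)

aic1 = ar_aic(y, 1)
aic2 = ar_aic(y, 2)
print(aic2 < aic1)  # True: the true order-2 model fits far better than AR(1)
```

ardl_select_order applies the same comparison jointly over the AR and distributed-lag orders.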
trend=\"c\" indicates that a constant should be included in the model.", "sel_res = ardl_select_order(\n data.lrm, 3, data[[\"lry\", \"ibo\", \"ide\"]], 3, ic=\"aic\", trend=\"c\"\n)\nprint(f\"The optimal order is: {sel_res.model.ardl_order}\")", "The optimal order is returned as the number of lags of the endogenous variable followed by each of the exogenous regressors. The attribute model on sel_res contains the model ARDL specification which can be used to call fit. Here we look at the summary where the L# indicates the lag length (e.g., L0 is no lag, i.e., $X_{k,t}$, L2 is 2 lags, i.e., $X_{k,t-2}$).", "res = sel_res.model.fit()\nres.summary()", "Global searches\nThe selection criterion can be switched to the BIC, which chooses a smaller model. Here we also use the glob=True option to perform a global search which considers models with any subset of lags up to the maximum lag allowed (3 here). This option lets the model selection choose non-contiguous lag specifications.", "sel_res = ardl_select_order(\n data.lrm, 3, data[[\"lry\", \"ibo\", \"ide\"]], 3, ic=\"bic\", trend=\"c\", glob=True\n)\nsel_res.model.ardl_order", "While the ardl_order shows the largest included lag of each variable, ar_lags and dl_lags show the specific lags included. The AR component is regular in the sense that all 3 lags are included. The DL component is not, since ibo selects only lags 0 and 3 and ide selects only lag 2.", "sel_res.model.ar_lags\n\nsel_res.model.dl_lags", "We can take a look at the best performing models according to the BIC which are stored in the bic property. ibo at lags 0 and 3 is consistently selected, as is ide at either lag 2 or 3, and lry at lag 0. The selected AR lags vary more, although all of the best specifications select some.", "for i, val in enumerate(sel_res.bic.head(10)):\n print(f\"{i+1}: {val}\")", "Direct Parameterization\nARDL models can be directly specified using the ARDL class. The first argument is the endogenous variable ($Y_t$). 
The second is the AR lags. It can be a constant, in which case lags 1, 2, ..., $P$ are included, or a list of specific lag indices to include (e.g., [1, 4]). The third are the exogenous variables, and the fourth is the list of lags to include. This can be one of\n\nAn int: Include lags 0, 1, ..., Q\nA dict with column names when exog is a DataFrame or numeric column locations when exog is a NumPy array (e.g., {0:1, 1: 2, 2:3} would match the specification below if a NumPy array was used).\nA dict with column names (DataFrames) or integers (NumPy arrays) that contains a list of specific lags to include (e.g., {\"lry\":[0,2], \"ibo\":[1,2]}).\n\nThe specification below matches the model selected by ardl_select_order.", "res = ARDL(\n data.lrm, 2, data[[\"lry\", \"ibo\", \"ide\"]], {\"lry\": 1, \"ibo\": 2, \"ide\": 3}, trend=\"c\"\n).fit()\nres.summary()", "NumPy Data\nBelow we see how the specification of ARDL models differs when using NumPy arrays. The key difference is that the keys in the dictionary are now integers which indicate the column of x to use. This model is identical to the previously fit model and all key values match exactly (e.g., Log Likelihood).", "y = np.asarray(data.lrm)\nx = np.asarray(data[[\"lry\", \"ibo\", \"ide\"]])\nres = ARDL(y, 2, x, {0: 1, 1: 2, 2: 3}, trend=\"c\").fit()\nres.summary()", "Causal models\nUsing the causal=True flag eliminates lag 0 from the DL components, so that all variables included in the model are known at time $t-1$ when modeling $Y_t$.", "res = ARDL(\n data.lrm,\n 2,\n data[[\"lry\", \"ibo\", \"ide\"]],\n {\"lry\": 1, \"ibo\": 2, \"ide\": 3},\n trend=\"c\",\n causal=True,\n).fit()\nres.summary()", "Unconstrained Error Correction Models (UECM)\nUnconstrained Error Correction Models reparameterize the ARDL model to focus on the long-run component of a time series. 
The reparameterized model is\n$$\n\\Delta Y_t = \\underset{\\text{Constant and Trend}}{\\underbrace{\\delta_0 + \\delta_1 t + \\ldots + \\delta_k t^k}} \n + \\underset{\\text{Seasonal}}{\\underbrace{\\sum_{i=0}^{s-1} \\gamma_i S_i}}\n + \\underset{\\text{Long-Run}}{\\underbrace{\\lambda_0 Y_{t-1} + \\sum_{b=1}^M \\lambda_i X_{b,t-1}}}\n + \\underset{\\text{Autoregressive}}{\\underbrace{\\sum_{p=1}^P \\phi_p \\Delta Y_{t-p}}}\n + \\underset{\\text{Distributed Lag}}{\\underbrace{\\sum_{k=1}^M \\sum_{j=0}^{Q_k} \\beta_{k,j} \\Delta X_{k, t-j}}}\n + \\underset{\\text{Fixed}}{\\underbrace{Z_t \\Gamma}} + \\epsilon_t\n$$\nMost of the components are the same. The key differences are:\n\nThe levels only enter at lag 1\nAll other lags of $Y_t$ or $X_{k,t}$ are differenced\n\nDue to their structure, UECM models do not support irregular lag specifications, and so lags specifications must be integers. The AR lag length must be an integer or None, while the DL lag specification can be an integer or a dictionary of integers. Other options such as trend, seasonal, and causal are identical.\nBelow we select a model and then using the class method from_ardl to construct the UECM. The parameter estimates prefixed with D. are differences.", "from statsmodels.tsa.api import UECM\n\nsel_res = ardl_select_order(\n data.lrm, 3, data[[\"lry\", \"ibo\", \"ide\"]], 3, ic=\"aic\", trend=\"c\"\n)\n\necm = UECM.from_ardl(sel_res.model)\necm_res = ecm.fit()\necm_res.summary()", "Cointegrating Relationships\nBecause the focus is on the long-run relationship, the results of UECM model fits contains a number of properties that focus on the long-run relationship. These are all prefixed ci_, for cointegrating. 
ci_summary contains the normalized estimates of the cointegrating relationship and associated estimated values.", "ecm_res.ci_summary()", "ci_resids contains the long-run residual, which is the error that drives changes in $\\Delta Y_t$.", "_ = ecm_res.ci_resids.plot(title=\"Cointegrating Error\")", "Seasonal Dummies\nHere we add seasonal terms, which appear to be statistically significant.", "ecm = UECM(data.lrm, 2, data[[\"lry\", \"ibo\", \"ide\"]], 2, seasonal=True)\nseasonal_ecm_res = ecm.fit()\nseasonal_ecm_res.summary()", "All deterministic terms are included in the ci_ prefixed terms. Here we see the normalized seasonal effects in the summary.", "seasonal_ecm_res.ci_summary()", "The residuals are somewhat more random in appearance.", "_ = seasonal_ecm_res.ci_resids.plot(title=\"Cointegrating Error with Seasonality\")", "The relationship between Consumption and Growth\nHere we look at an example from Greene's Econometric analysis which focuses on the long-run relationship between consumption and growth. We start by downloading the raw data.\nGreene, W. H. (2000). Econometric analysis 4th edition. International edition, New Jersey: Prentice Hall, 201-215.", "greene = pd.read_fwf(\"http://www.stern.nyu.edu/~wgreene/Text/Edition7/TableF5-2.txt\")\ngreene.head()", "We then transform the index to be a pandas DatetimeIndex so that we can easily use seasonal terms.", "index = pd.to_datetime(\n greene.Year.astype(\"int\").astype(\"str\")\n + \"Q\"\n + greene.qtr.astype(\"int\").astype(\"str\")\n)\ngreene.index = index\ngreene.index.freq = greene.index.inferred_freq\ngreene.head()", "We define g as the log of real gdp and c as the log of real consumption.", "greene[\"c\"] = np.log(greene.realcons)\ngreene[\"g\"] = np.log(greene.realgdp)", "Lag Length Selection\nThe selected model contains 5 lags of consumption and 2 of growth (0 and 1). 
Here we include seasonal terms although these are not significant.", "sel_res = ardl_select_order(\n greene.c, 8, greene[[\"g\"]], 8, trend=\"c\", seasonal=True, ic=\"aic\"\n)\nardl = sel_res.model\nardl.ardl_order\n\nres = ardl.fit(use_t=True)\nres.summary()", "from_ardl is a simple way to get the equivalent UECM specification. Here we rerun the selection without the seasonal terms.", "sel_res = ardl_select_order(greene.c, 8, greene[[\"g\"]], 8, trend=\"c\", ic=\"aic\")\n\nuecm = UECM.from_ardl(sel_res.model)\nuecm_res = uecm.fit()\nuecm_res.summary()", "We see that for every % increase in consumption, we need a 1.05% increase in gdp. In other words, the saving rate is estimated to be around 5%.", "uecm_res.ci_summary()\n\n_ = uecm_res.ci_resids.plot(title=\"Cointegrating Error\")", "Direct Specification of UECM models\nUECM can be used to directly specify model lag lengths.", "uecm = UECM(greene.c, 2, greene[[\"g\"]], 1, trend=\"c\")\nuecm_res = uecm.fit()\nuecm_res.summary()", "The changes in the lag structure make little difference in the estimated long-run relationship.", "uecm_res.ci_summary()", "Bounds Testing\nUECMResults expose the bounds test of Pesaran, Shin, and Smith (2001). This test facilitates testing whether there is a level relationship between a set of variables without identifying which variables are I(1). This test provides two sets of critical and p-values. If the test statistic is below the critical value for the lower bound, then there appears to be no levels relationship irrespective of the order or integration in the $X$ variables. If it is above the upper bound, then there appears to be a levels relationship again, irrespective of the order of integration of the $X$ variables. 
There are 5 cases covered in the paper that include different combinations of deterministic regressors in the model or the test.\n$$\\Delta Y_{t}=\\delta_{0} + \\delta_{1}t + Z_{t-1}\\beta + \\sum_{j=0}^{P}\\Delta X_{t-j}\\Gamma + \\epsilon_{t}$$\nwhere $Z_{t-1}$ includes both $Y_{t-1}$ and $X_{t-1}$.\nThe cases determine which deterministic terms are included in the model and which are tested as part of the test.\n\nNo deterministic terms\nConstant included in both the model and the test\nConstant included in the model but not in the test\nConstant and trend included in the model, only trend included in the test\nConstant and trend included in the model, neither included in the test\n\nHere we run the test on the Danish money demand data set. Here we see the test statistic is above the 95% critical value for both the lower and upper bounds.\nPesaran, M. H., Shin, Y., & Smith, R. J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of applied econometrics, 16(3), 289-326.", "ecm = UECM(data.lrm, 3, data[[\"lry\", \"ibo\", \"ide\"]], 3, trend=\"c\")\necm_fit = ecm.fit()\nbounds_test = ecm_fit.bounds_test(case=4)\nbounds_test\n\nbounds_test.crit_vals", "Case 3 also rejects the null of no levels relationship.", "ecm = UECM(data.lrm, 3, data[[\"lry\", \"ibo\", \"ide\"]], 3, trend=\"c\")\necm_fit = ecm.fit()\nbounds_test = ecm_fit.bounds_test(case=3)\nbounds_test" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scoaste/showcase
machine-learning/regression/week-2-multiple-regression-assignment-2-complete.ipynb
mit
[ "Regression Week 2: Multiple Regression (gradient descent)\nIn the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.\nIn this notebook we will cover estimating multiple regression weights via gradient descent. You will:\n* Add a constant column of 1's to a graphlab SFrame to account for the intercept\n* Convert an SFrame into a Numpy array\n* Write a predict_output() function using Numpy\n* Write a numpy function to compute the derivative of the regression weights with respect to a single feature\n* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.\n* Use the gradient descent function to estimate regression weights for multiple features\nFire up graphlab create\nMake sure you have the latest version of graphlab (>= 1.7)", "import graphlab", "Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.", "sales = graphlab.SFrame('../Data/kc_house_data.gl/')", "If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.\nConvert to Numpy Array\nAlthough SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional \"array\").\nRecall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. 
Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the \"feature matrix\" by the \"weight vector\". \nFirst we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built-in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Pandas' .as_matrix() to convert the dataframe into a numpy matrix.", "import numpy as np # note this allows us to refer to numpy as np instead ", "Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things:\n* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')\n* A numpy array containing the values of the output\nWith this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)\nPlease note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!", "def get_numpy_data(data_sframe, features, output):\n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n\n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n\n # the following will convert the SArray 
into a numpy array by first converting it to a list\n output_array = output_sarray.to_numpy()\n return(feature_matrix, output_array)", "For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list\nprint example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'\nprint example_output[0] # and the corresponding output\n\nprint [sales[0]['constant'],sales[0]['sqft_living']]\nprint sales[0]['price']", "Predicting output given regression weights\nSuppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0; this is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:", "my_weights = np.array([1., 1.]) # the example weights\nmy_features = example_features[0,] # we'll use the first data point\npredicted_value = np.dot(my_features, my_weights)\nprint predicted_value", "np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. 
With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:", "def predict_output(feature_matrix, weights):\n # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array\n # create the predictions vector by using np.dot()\n predictions = np.dot(feature_matrix, weights)\n\n return(predictions)", "If you want to test your code run the following cell:", "test_predictions = predict_output(example_features, my_weights)\nprint test_predictions[0] # should be 1181.0\nprint test_predictions[1] # should be 2571.0", "Computing the Derivative\nWe are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.\nSince the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:\n(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2\nWhere we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:\n2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]\nThe term inside the parenthesis is just the error (difference between prediction and output). So we can re-write this as:\n2*error*[feature_i]\nThat is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!\nRecall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. 
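A one-line numeric check of that identity on toy vectors (names chosen to mirror the notebook's, but any two equal-length arrays work):

```python
import numpy as np

errors = np.array([1.0, -2.0, 3.0])
feature = np.array([0.5, 1.5, 2.5])

# Twice the sum of elementwise products equals twice the dot product.
print(np.isclose(2 * np.sum(errors * feature), 2 * np.dot(errors, feature)))  # True
```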
Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. \nWith this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).", "def feature_derivative(errors, feature):\n # Assume that errors and feature are both numpy arrays of the same length (number of data points)\n # compute twice the dot product of these vectors as 'derivative' and return the value\n derivative = 2 * np.dot(errors,feature)\n\n return(derivative)", "To test your feature derivative run the following:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') \nmy_weights = np.array([0., 0.]) # this makes all the predictions 0\ntest_predictions = predict_output(example_features, my_weights) \n# just like SFrames 2 numpy arrays can be elementwise subtracted with '-': \nerrors = test_predictions - example_output # prediction errors in this case is just the -example_output\nfeature = example_features[:,0] # let's compute the derivative with respect to 'constant', the \":\" indicates \"all rows\"\nderivative = feature_derivative(errors, feature)\nprint derivative\nprint -np.sum(example_output)*2 # should be the same as derivative", "Gradient Descent\nNow we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. \nThe amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum.
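As a sanity check on the `2 * (errors · feature)` identity, here is a standalone sketch with made-up numbers (not the graded sales data) comparing the dot-product form against the explicit per-point sum:

```python
import numpy as np

def feature_derivative(errors, feature):
    # derivative of the squared-error cost w.r.t. one weight:
    # twice the dot product of the error vector and that feature's column
    return 2 * np.dot(errors, feature)

errors = np.array([1.0, -2.0, 3.0])
feature = np.array([10.0, 20.0, 30.0])

# explicit sum form: 2 * sum_i(errors_i * feature_i)
explicit = 2 * sum(e * f for e, f in zip(errors, feature))
print(feature_derivative(errors, feature), explicit)  # → 120.0 120.0
```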
We define this by requiring the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.\nWith this in mind, complete the gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criterion.", "from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)\n\ndef regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):\n converged = False \n weights = np.array(initial_weights) # make sure it's a numpy array\n while not converged:\n # compute the predictions based on feature_matrix and weights using your predict_output() function\n predictions = predict_output(feature_matrix, weights)\n\n # compute the errors as predictions - output\n errors = predictions - output\n\n gradient_sum_squares = 0 # initialize the gradient sum of squares\n # while we haven't reached the tolerance yet, update each feature's weight\n for i in range(weights.size):\n # for i in range(len(weights)): # loop over each weight\n # Recall that feature_matrix[:, i] is the feature column associated with weights[i]\n # compute the derivative for weight[i]:\n derivative = feature_derivative(errors, feature_matrix[:, i])\n\n # add the squared value of the derivative to the gradient magnitude (for assessing convergence)\n gradient_sum_squares += np.power(derivative, 2)\n\n # subtract the step size times the derivative from the current weight\n weights[i] -= step_size * derivative\n \n # compute the square-root of the gradient sum of squares to get the gradient magnitude:\n gradient_magnitude = sqrt(gradient_sum_squares)\n if gradient_magnitude < tolerance:\n converged = True\n return(weights)", "A few things to note before we run the gradient descent.
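The same loop can be sketched in fully vectorized form on a tiny synthetic problem where the exact minimizer (intercept 3, slope 2) is known in advance. This is an illustration in Python 3 syntax, not the graded implementation; note that `2 * X.T @ errors` computes all the per-feature derivatives at once, replacing the inner loop:

```python
import numpy as np
from math import sqrt

def gradient_descent_sketch(X, y, weights, step_size, tolerance):
    w = np.array(weights, dtype=float)
    while True:
        errors = np.dot(X, w) - y
        gradient = 2 * np.dot(X.T, errors)   # all feature derivatives at once
        w -= step_size * gradient
        if sqrt(np.sum(gradient ** 2)) < tolerance:  # gradient magnitude
            return w

# data generated from y = 3 + 2*x, so the minimizer is exactly [3, 2]
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.dot(X, np.array([3.0, 2.0]))
w = gradient_descent_sketch(X, y, [0.0, 0.0], step_size=1e-2, tolerance=1e-9)
print(np.round(w, 6))  # → [3. 2.]
```

On this well-conditioned toy problem a step size of `1e-2` converges in about a thousand iterations; on the housing data the much smaller step size set below is needed because the feature values are large.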
Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect \"tolerance\" to be small, small is only relative to the size of the features. \nFor similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.\nRunning the Gradient Descent as Simple Regression\nFirst let's split the data into training and test data.", "train_data,test_data = sales.random_split(.8,seed=0)", "Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:", "# let's test out the gradient descent\nsimple_features = ['sqft_living']\nmy_output = 'price'\n(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)\ninitial_weights = np.array([-47000., 1.])\nstep_size = 7e-12\ntolerance = 2.5e7", "Next run your gradient descent with the above parameters.", "simple_weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)\nprint simple_weights", "How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
\nQuiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?", "print np.round(simple_weights[1:2],1)", "Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first:", "(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)", "Now compute your predictions using test_simple_feature_matrix and your weights from above.", "test_simple_predictions = predict_output(test_simple_feature_matrix, simple_weights)", "Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?", "print np.round(test_simple_predictions[0:1],0)", "Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).", "test_simple_residuals = test_simple_predictions - test_data['price']\ntest_simple_rss = sum(pow(test_simple_residuals,2))\nprint test_simple_rss", "Running a multiple regression\nNow we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:", "model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. \nmy_output = 'price'\n(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)\ninitial_weights = np.array([-100000., 1., 1.])\nstep_size = 4e-12\ntolerance = 1e9", "Use the above parameters to estimate the model weights. 
Record these values for your quiz.", "multi_weights = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)\nprint multi_weights", "Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!", "(multi_feature_matrix, multi_output) = get_numpy_data(test_data, model_features, my_output)\nmulti_predictions = predict_output(multi_feature_matrix, multi_weights)", "Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?", "print np.round(multi_predictions[0:1],0)", "What is the actual price for the 1st house in the test data set?", "print test_data[0]['price']", "Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?\nmodel 1\nNow use your predictions and the output to compute the RSS for model 2 on TEST data.", "multi_residuals = multi_predictions - test_data['price']\nmulti_rss = sum(pow(multi_residuals,2))\nprint multi_rss", "Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data? \nmodel 2" ]
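The RSS computation used for both quiz comparisons above can be isolated into a small helper — a sketch with toy numbers rather than the graded test set:

```python
import numpy as np

def rss(predictions, outputs):
    # residual sum of squares: sum of squared (prediction - output)
    residuals = predictions - outputs
    return np.sum(residuals ** 2)

preds = np.array([1181.0, 2571.0])
actual = np.array([1200.0, 2500.0])
print(rss(preds, actual))  # (-19)^2 + 71^2 = 361 + 5041 → 5402.0
```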
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wmvanvliet/neuroscience_tutorials
preassignment.ipynb
bsd-2-clause
[ "Pre-assignment for the MEG analysis lecture\nCongratulations on making it this far! You now have a working Python environment.\n\nDuring the lecture, we will do a small exercise on data analysis of magnetoencephalography (MEG) data. We'll be using Python. The main exercise requires you to be able to do these five things:\n\nRun a code cell\nUse a variable\nCall a Python function\nCall a Python method\nAccess the help text for a function or method\n\nI'll walk you through these five things. I'll be oversimplifying things a little to shield you from some underlying complexity and just give you the absolute bare minimum knowledge required to tackle the exercise during the lecture. For a proper introduction to Python, I recommend reading \"A Whirlwind Tour of Python\".\n1. Run a code cell\nThis notebook environment consists of \"cells\". Cells can either contain text (like this one) or Python programming code (like the cell below). To execute a cell, place your cursor in it and press CTRL+Enter. Try it now:", "print('Hello, world!')", "2. Use a variable\nVariables allow us to assign names to chunks of data, so we can keep it in the computer's memory and refer to it later. Python defines different types of \"chunks of data\". Here are some types that are relevant to us, and how to write them in Python:\n\nA number: 42, 3.1415, 1e-6, 1_000_000\nA (unicode) string: 'Hello, world!', 'banana', 'π ≈ 3.1415', '😄😄😄'\nA boolean \"truth\" value: True, False. These are commonly used as an on/off switch.\nA list of things (things can be of any data type):\n[1, 2, 3, 4], ['apple', 'banana', 'strawberry'], [1, 'banana', 2]\n\nAssigning a chunk of data to a variable is done with the = symbol:\npython\nx = 2\nname = 'Marijn van Vliet'\nfibonacci = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]\n(A note on Python comments)\nIn Python, anything following the # symbol is ignored. This allows us to add comments to Python code:\n```python\nLook at this beautiful piece of Python code.
I am so very proud of it!\ncolor = 'yellow' # Define the color of our submarine to be yellow\nWatch the magic of variables produce a wonderful song:\nprint('We all live in a', color, 'submarine,', color, 'submarine,', color, 'submarine.')\n```\n(ok, back to the lesson)\nUsing the above knowledge, write some Python code in the cell below that will create a variable planets that holds a list of all the names of the planets in our solar system:", "# Write your Python code here\n\n# If your code in the cell above was correct, executing this cell will display all the planets of our solar system\nprint('The planets in our solar system are:', ', '.join(planets))", "3. Call a Python function\nGood news! The majority of all the code you'll ever need is already written for you by others. You just have to know how to use it.\nThe most common way to re-use code that other people wrote is to call Python functions.\nFor example, the print function has been written for you by the people who build Python. You can \"call\" this function like this:\npython\nprint('Hello, world!') # Displays a string\nprint(planets) # Displays a variable\nThe amount of functions already written is nearly endless. Here is how to call the max function that computes the maximum of two values:\npython\nmax(10, 42) # Produces 42\nVery often, a function will produce a result. In programming lingo, we say it \"returns\" a result. For example max(10, 42) will return 42. Unless we assign this result to a variable, it gets lost. So here is a more useful example of using the max function:\npython\nmax_value = max(10, 42) # Assigns 42 to the max_value variable\nFunctions can have optional parameters, that you can specify, but if you don't, they have a default value assigned that will be used instead. 
By convention, you specify optional parameters using parameter=value:\npython\nprint(*planets, sep=' | ') # mercury | venus | earth | ...\nOk, your turn: in the cell below, write some Python code that calls the sorted function with numbers as a parameter and assigns the result to the numbers_sorted variable:", "numbers = [1, 6, 5, 2, 3, 7, 4]\n# Replace this line with your code to sort the `numbers` variable\n\n# If your code in the cell above was correct, executing this cell should display a sorted list of numbers\nprint(numbers_sorted)", "4. Call a Python method\nA \"method\" is a function that is attached to a variable. In Python, all variables are \"objects\", which is programming lingo meaning they do not only contain a chunk of data, but also meta-data (called \"properties\" in programming lingo) and tools to manipulate the chunk of data (called \"methods\" in programming lingo).\nFor example, here are some methods that a variable that holds a Python list has to offer you:\npython\nnumbers = [1, 2, 3, 4, 5] # Make a list of numbers and assign it to a variable\nnumbers.sort() # After calling this, you'll find the list is now sorted\nnumbers.reverse() # The list is now reversed\nnumbers.count(3) # Returns the number of 3's in the list\nYour turn: use the .sort() method to sort the list of planets you coded earlier:", "# Write your Python code here\n\n# If your code in the cell above was correct, executing this cell should display the planets in alphabetical order\nprint('The planets in alphabetical order:', ', '.join(planets))", "5. Access the help text for a function or method\nTo find out what a function or method does, which parameters it wants, and what it returns, append a question mark ? to the name of the function/method. Try it by executing the cell below, which should pop up the documentation on the print function:", "print?", "Ok, final test: sort the planets variable using its planets.sort() method.
Give the .sort() method an optional parameter to make it sort in reverse order. To find out the name of this optional parameter, you'll need to read the method's help text.", "# Write your Python code here\n\n# If your code in the cell above was correct, executing this cell\n# should display the planets in reverse alphabetical order\nprint('The planets in reverse alphabetical order:', ', '.join(planets))" ]
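For reference, a sketch of the kind of call the final exercise is after — the optional keyword parameter of `list.sort` that reverses the order is named `reverse` (consult the help text before peeking):

```python
planets = ['mercury', 'venus', 'earth', 'mars',
           'jupiter', 'saturn', 'uranus', 'neptune']

planets.sort(reverse=True)  # `reverse` is the optional keyword parameter
print(planets)              # venus first, earth last
```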
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kompgraf/course-material
notebooks/06-bezier-felulet/06-bezier-felulet.ipynb
mit
[ "from IPython.core.display import HTML\nfrom string import Template\ndef jsConfig():\n src = \"\"\"\n <script>require.config({ baseUrl: 'https://rawgit.com/kompgraf/course-material/master/assets/' });</script>\n \"\"\"\n return HTML(src)\ndef addScript(script, identifier):\n src = Template(\"\"\"\n <div id=\"${identifier}-container\"></div>\n <script>require(['${script}'], main => main($$(\"#${identifier}-container\"), '${identifier}'));</script>\n \"\"\")\n return HTML(src.substitute(script = script, identifier = identifier))\njsConfig()\n", "Bézier surface\nIntroduction\nHaving become acquainted with a number of different curve types, it is time to step up a dimension and start working with surfaces as well. This notebook presents a general pattern with which we can construct parametric surfaces. Among these, we will take a detailed look at the Bézier surface.\nHow do we make a surface out of a curve?\nOur goal is to build on what we have learned so far to create surfaces. The basic idea is accordingly very simple. Take an arbitrary space curve determined by the (three-dimensional) control points $P_0, P_1, \\ldots, P_n$. If we change the positions of these control points in space, we obtain a new curve. Producing new curves one after another, we obtain a family of curves that together define a surface. The simplest example of this is translating the control points of a space curve given in the $xy$-plane along the $z$-axis. \nCreating a surface by translation\nLet us take a more general example than the previous one! Let an arbitrary parametric curve be given as follows:\n$$\n\\gamma(t) = \\sum\\limits_{i=0}^{n} b_i(t)P_i \\qquad t \\in [0, 1].\n$$\nLet a point $Q$ also be given.
A $\\gamma(t)$ függvény által képzett görbét toljuk végig a $P_0$ és $Q$ pontok közötti szakaszon, így egy felületet képezve!\nEz azt jelenti, hogy a szakasz mentén haladva újabb és újabb görbéket kell létrehoznunk a $\\gamma(t)$ függvény segítségével. Egy adott kontrollpontra alkalmazandó eltolás mértékét a következő képlettel számolhatjuk:\n$$\np(s) = s(Q - P_0) \\qquad s \\in [0, 1]\n$$\nMost már tehát a súlyfüggvénnyel nem az eredeti $P_i$ pontok valamelyikét, hanem mindig az eltolással képzett pontok egyikét kell megszoroznunk ahhoz, hogy valóban végighaladjunk a szakasz mentén. Ehhez definiáljuk az $i$-edik kontrollpont $s$ paraméter szerinti eltoltját a következőképpen:\n$$\np_i(s) = P_i + s(Q - P_0) \\qquad s \\in [0, 1].\n$$\n$p_i$ birtokában az eredeti görbét már leírhatjuk\n$$\n\\gamma(t) = \\sum\\limits_{i=0}^{n} b_i(t)p_i(0) \\qquad t \\in [0, 1]\n$$\nformában. Vegyük észre, hogy $p_i$ paraméterét rögzítettük a $0$ értékre, mely azt jelenti, hogy az eltolás nem játszik szerepet. Ha bevezetünk egy új változót, ezzel kétváltozóssá téve a függvényt, akkor kapjuk a teljes felületet leíró kifejezést:\n$$\n\\gamma(s, t) = \\sum\\limits_{i=0}^{n} b_i(t)p_i(s) \\qquad s \\in [0, 1], \\quad t \\in [0, 1].\n$$\nAz eredmény tehát nem más, mint görbék egy olyan családja, melyek kontrollpontjait egy függvény állítja elő.\nDemonstráció\nA demonstráció az előző ötletet szemléleti. A vezérléshez mind az egérre, mind a billentyűzetre szükség van. Ha rákattintunk a kék téglalapra, akkor az megkapja a fókuszt, és el tudja kapni a billentyűeseményeket. 
We can control the camera with the following keys:\n\n<kbd>W</kbd> - move the camera upward on the cylinder's lateral surface,\n<kbd>S</kbd> - move the camera downward on the cylinder's lateral surface,\n<kbd>D</kbd> - move the camera to the right on the cylinder's lateral surface,\n<kbd>A</kbd> - move the camera to the left on the cylinder's lateral surface,\n<kbd>Numpad+</kbd> - increase the radius of the cylinder (when no control point is selected),\n<kbd>Numpad-</kbd> - decrease the radius of the cylinder (when no control point is selected).\n\nWe can select a control point by clicking on it. The currently selected control point is drawn in green. If we click on an empty area, the selection is cleared. When a control point is selected, the <kbd>X</kbd>, <kbd>Y</kbd> and <kbd>Z</kbd> keys choose the axis along which we want to move the point, and the <kbd>Numpad+</kbd> and <kbd>Numpad-</kbd> keys move the control point along the selected axis.\nOf the five control points, four determine a Bézier curve, while the fifth is responsible for specifying the magnitude and direction of the translation.", "addScript(\"js/bezier-along-line\", \"bezier-along-line\")", "Tensor product surfaces\nIf we take a closer look at the formula describing the surface produced by translation, we can see that the function $p_i(s)$ introduced in place of the control points can in practice be any vector-valued function. Returning to the original idea, what if we moved the control points of our original curve along a curve instead of a segment? Surfaces obtained this way are called tensor product surfaces (tensor product spline patches).\nLet us derive the general form of tensor product surfaces, starting from the familiar function $\\gamma(t)$, but this time using $j$ for indexing:\n$$\n\\gamma(t) = \\sum\\limits_{j} b_j(t)P_j \\qquad t \\in [0, 1].\n$$\nLet us replace each control point $P_j$ with a function\n$$\np_j: [0, 1] \\rightarrow \\mathbb{R}^3.\n$$
In this way we obtain a family of curves, whose first member\n$$\n\\gamma(t) = \\sum\\limits_{j} b_j(t)p_j(0) \\qquad t \\in [0, 1]\n$$\ngives back the original curve. The whole surface is once again produced by a two-variable function:\n$$\n\\gamma: [0, 1] \\times [0, 1] \\rightarrow \\mathbb{R}^3,\n$$\nwhere\n$$\n\\gamma(s, t) = \\sum\\limits_{j} b_j(t)p_j(s).\n$$\nSo far we stand exactly where we did for the surface translated along a segment. Now, however, the control points determining the individual members of the curve family are derived from points given on curves. For example, the first control point will move along the curve\n$$\np_0(s) = \\sum\\limits_{i}Q_{i0}q_i(s),\n$$\nwhere the points $Q_{i0}$ are the control points determining this curve and $q(s)$ is some weight function. In general, therefore,\n$$\np_j(s) = \\sum\\limits_{i}Q_{ij}q_i(s).\n$$\nWith this in hand, let us expand the function $\\gamma(s, t)$:\n$$\n\\begin{align}\n\\gamma(s, t) &= \\sum\\limits_{j} b_j(t)p_j(s) \\\n &= \\sum\\limits_{j} b_j(t) \\bigg(\\sum\\limits_{i}Q_{ij}q_i(s)\\bigg) \\\n &= \\sum\\limits_{j}\\bigg(\\sum\\limits_{i} Q_{ij}q_i(s)b_j(t)\\bigg) \\\n\\end{align}\n$$\nWith this we are done: we have obtained the general formula of a tensor product surface. We can see that in order to create the surface we must specify the control points $Q_{ij}$, along with one curve in the $s$ direction and one in the $t$ direction. These curves, however, need not be identical. We may work with a Bézier curve in the $s$ direction and a B-spline in the $t$ direction. This provides enormous flexibility, but most often we use identical curves in both directions (for example cubic Bézier curves).\nDemonstration\nThe demonstrations below show, respectively, bicubic (cubic in both the $s$ and $t$ directions) Bézier, B-Spline and Catmull-Rom spline surfaces.
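The double sum $\gamma(s,t)=\sum_j \sum_i Q_{ij}\,q_i(s)\,b_j(t)$ can be evaluated directly. A minimal sketch follows, using Bernstein weight functions in both directions (so $q_i = b_i$), on a bilinear 2×2 control grid where the result is easy to verify by hand; this is an illustration of the formula, not the WebGL demo's implementation:

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    # Bernstein basis polynomial B_{i,n}(t)
    return comb(n, i) * t**i * (1 - t)**(n - i)

def tensor_product_point(Q, s, t):
    # evaluate gamma(s, t) = sum_j sum_i Q[i, j] * q_i(s) * b_j(t)
    # with Bernstein weight functions in both parameter directions
    n, m = Q.shape[0] - 1, Q.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += Q[i, j] * bernstein(i, n, s) * bernstein(j, m, t)
    return point

# 2x2 bilinear control grid spanning the unit square in the z = 0 plane
Q = np.array([[[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
              [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]]])
print(tensor_product_point(Q, 0.5, 0.5))  # the bilinear midpoint (0.5, 0.5, 0)
```

At the parameter corners the surface interpolates the corner control points, which is the expected Bézier behaviour.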
We can operate the camera and manipulate the control points in the same way as in the previous example.\nCompare the different surface types and examine what properties they inherited from the curve type they were derived from!\nBézier", "addScript(\"js/bezier-surface\", \"bezier-surface\")", "B-Spline", "addScript(\"js/b-spline-surface\", \"b-spline-surface\")", "Catmull-Rom", "addScript(\"js/catmull-rom-spline-surface\", \"catmull-rom-spline-surface\")\n\ndef styling():\n styles = open(\"../../styles/custom.html\", \"r\").read()\n return HTML(styles)\nstyling()\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
holla2040/valvestudio
simulations/inductorMeasurement/notchFilter.ipynb
mit
[ "Measuring inductance using a series resonance circuit\n<img src=\"notchSchematic.png\" width=\"35%\">\n$$\\frac{V_o}{V_i} = \\frac{(\\frac{1}{j \\omega C} + R_l + j \\omega L)}{R_s + (\\frac{1}{j \\omega C} + R_l + j \\omega L)}$$\n<br>\n<center>at resonance,</center>\n$$ \\left|\\frac{1}{j \\omega C}\\right| = \\left|j \\omega L\\right|$$\n$$\\omega = \\frac{1}{\\sqrt{L C}}$$\n$$L = \\frac{1}{\\omega^2 C}, \\omega = 2 \\pi f$$\n$$L = \\frac{1}{(2 \\pi f)^2 C}$$", "%matplotlib inline\n\nfrom cmath import phase, pi\nfrom math import degrees,sqrt\nimport matplotlib.pyplot as plt\n\nRs = 39000\nC = 0.1e-6\nL = 10\nRl = 384\nplotpoints = 5000\n\ndef Vo(f, Vi = 1):\n RLC = 1/(2*pi*f*C*1j) + Rl + (2*pi*f*L*1j)\n return Vi * (RLC/(Rs+RLC))\n \nfrequencies = [ pow(10,4.0*i/plotpoints) for i in range(plotpoints)]\nmags = []\nphases = []\n\nfmin = 1\nmagmin = 2\n\nfor f in frequencies:\n vo = Vo(f)\n mag = abs(vo)\n mags.append(mag)\n phases.append(phase(vo))\n if mag < magmin:\n fmin = f\n magmin = mag\n\nprint \"Expect Voutmin=%0.4f at %.2fHz\"%(float(Rl)/(Rl+Rs),1/(2*pi*sqrt(L*C)))\nprint \"Calculated Vout=%0.4f at %.2fHz\"%(magmin,fmin)\n \nplt.figure()\nplt.figure(figsize=(12, 6))\nplt.semilogx(frequencies,mags)\nplt.semilogx(frequencies,phases)\nplt.grid(True)\nplt.show()\n", "with $f_{resonance}$ and C, calculate $L = \\frac{1}{(2 \\pi f)^2 C}$\nPut your values of f and C here to calculate your L", "f = 160.6\nC = 0.1e-6\n\nL = 1/(pow((2*pi*f),2)*C)\nprint \"L=%0.1f\"%L", "Further reading<br>\nhttp://www.allaboutcircuits.com/textbook/alternating-current/chpt-6/resonance-series-parallel-circuits/<br>\nhttp://www.qsl.net/i0jx/supply.html<br>\nhttp://www.dos4ever.com/inductor/inductor.html<br>\nLTSpice Results\n<img src=\"Series_RCL_Circuit.png\"><br><img src=\"Series_RCL_Circuit_Plots.png\"><br>" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
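The measurement formula from the inductor notebook above, $L = \frac{1}{(2 \pi f)^2 C}$, packaged as a small helper using the notebook's own example values ($f = 160.6\,$Hz, $C = 0.1\,\mu$F):

```python
from math import pi

def inductance_from_resonance(f_resonance, C):
    # series RLC resonance: f = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f)^2 * C)
    return 1.0 / ((2 * pi * f_resonance) ** 2 * C)

L = inductance_from_resonance(160.6, 0.1e-6)
print(round(L, 1))  # → 9.8 (henries)
```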
mne-tools/mne-tools.github.io
0.13/_downloads/plot_read_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading an inverse operator\nThe inverse operator's source space is shown in 3D.", "# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname = data_path\nfname += '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\n\ninv = read_inverse_operator(fname)\n\nprint(\"Method: %s\" % inv['methods'])\nprint(\"fMRI prior: %s\" % inv['fmri_prior'])\nprint(\"Number of sources: %s\" % inv['nsource'])\nprint(\"Number of channels: %s\" % inv['nchan'])", "Show result on 3D source space", "lh_points = inv['src'][0]['rr']\nlh_faces = inv['src'][0]['use_tris']\nrh_points = inv['src'][1]['rr']\nrh_faces = inv['src'][1]['use_tris']\nfrom mayavi import mlab # noqa\n\nmlab.figure(size=(600, 600), bgcolor=(0, 0, 0))\nmesh = mlab.triangular_mesh(lh_points[:, 0], lh_points[:, 1], lh_points[:, 2],\n lh_faces, colormap='RdBu')\nmesh.module_manager.scalar_lut_manager.reverse_lut = True\n\nmesh = mlab.triangular_mesh(rh_points[:, 0], rh_points[:, 1], rh_points[:, 2],\n rh_faces, colormap='RdBu')\nmesh.module_manager.scalar_lut_manager.reverse_lut = True" ]
[ "code", "markdown", "code", "markdown", "code" ]
fantasycheng/udacity-deep-learning-project
tutorials/reinforcement/Q-learning-cart.ipynb
mit
[ "Deep Q-learning\nIn this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.\n\nWe can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.", "import gym\nimport tensorflow as tf\nimport numpy as np", "Note: Make sure you have OpenAI Gym cloned into the same directory as this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull the contents into the gym repo.", "# Create the Cart-Pole game environment\nenv = gym.make('CartPole-v0')", "We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.\nRun the code below to watch the simulation run.", "env.reset()\nrewards = []\nfor _ in range(100):\n env.render()\n state, reward, done, info = env.step(env.action_space.sample()) # take a random action\n rewards.append(reward)\n if done:\n rewards = []\n env.reset()", "To shut the window showing the simulation, use env.close().\nIf you ran the simulation above, we can look at the rewards:", "print(rewards[-20:])", "The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0.
The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.\nQ-Network\nWe train our Q-learning agent using the Bellman Equation:\n$$\nQ(s, a) = r + \\gamma \\max{Q(s', a')}\n$$\nwhere $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.\nBefore we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.\n<img src=\"assets/deep-q-learning.png\" width=450px>\nNow, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.\n<img src=\"assets/q-network.png\" width=550px>\nAs I showed before, we can define our targets for training as $\\hat{Q}(s,a) = r + \\gamma \\max{Q(s', a')}$. Then we update the weights by minimizing $(\\hat{Q}(s,a) - Q(s,a))^2$. \nFor this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.\nBelow is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. 
Feel free to try it out.", "class QNetwork:\n def __init__(self, learning_rate=0.01, state_size=4, \n action_size=2, hidden_size=10, \n name='QNetwork'):\n # state inputs to the Q-network\n with tf.variable_scope(name):\n self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')\n \n # One hot encode the actions to later choose the Q-value for the action\n self.actions_ = tf.placeholder(tf.int32, [None], name='actions')\n one_hot_actions = tf.one_hot(self.actions_, action_size)\n \n # Target Q values for training\n self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')\n \n # ReLU hidden layers\n self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)\n self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)\n\n # Linear output layer\n self.output = tf.contrib.layers.fully_connected(self.fc2, action_size, \n activation_fn=None)\n \n ### Train with loss (targetQ - Q)^2\n # output has length 2, for two actions. This next line chooses\n # one value from output (per row) according to the one-hot encoded actions.\n self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)\n \n self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))\n self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)", "Experience replay\nReinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. \nHere, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.\nBelow, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue.
You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.", "from collections import deque\nclass Memory():\n def __init__(self, max_size = 1000):\n self.buffer = deque(maxlen=max_size)\n \n def add(self, experience):\n self.buffer.append(experience)\n \n def sample(self, batch_size):\n idx = np.random.choice(np.arange(len(self.buffer)), \n size=batch_size, \n replace=False)\n return [self.buffer[ii] for ii in idx]", "Exploration - Exploitation\nTo learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\\epsilon$ (epsilon). That is, with some probability $\\epsilon$ the agent will make a random action and with probability $1 - \\epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\\epsilon$-greedy policy.\nAt first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.\nQ-Learning training algorithm\nPutting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once we meet that goal. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. 
Now, to train the agent:\n\nInitialize the memory $D$\nInitialize the action-value network $Q$ with random weights\nFor episode = 1, $M$ do\nFor $t$ = 1, $T$ do\nWith probability $\\epsilon$ select a random action $a_t$, otherwise select $a_t = \\mathrm{argmax}_a Q(s,a)$\nExecute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$\nStore transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$\nSample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$\nSet $\\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\\hat{Q}_j = r_j + \\gamma \\max_{a'}{Q(s'_j, a')}$\nMake a gradient descent step with loss $(\\hat{Q}_j - Q(s_j, a_j))^2$\n\n\nendfor\nendfor\n\nHyperparameters\nOne of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.", "train_episodes = 1000 # max number of episodes to learn from\nmax_steps = 200 # max steps in an episode\ngamma = 0.99 # future reward discount\n\n# Exploration parameters\nexplore_start = 1.0 # exploration probability at start\nexplore_stop = 0.01 # minimum exploration probability \ndecay_rate = 0.0001 # exponential decay rate for exploration prob\n\n# Network parameters\nhidden_size = 64 # number of units in each Q-network hidden layer\nlearning_rate = 0.0001 # Q-network learning rate\n\n# Memory parameters\nmemory_size = 10000 # memory capacity\nbatch_size = 20 # experience mini-batch size\npretrain_length = batch_size # number experiences to pretrain the memory\n\ntf.reset_default_graph()\nmainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)", "Populate the experience memory\nHere I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. 
This will help the agent with exploring the game.", "# Initialize the simulation\nenv.reset()\n# Take one random step to get the pole and cart moving\nstate, reward, done, _ = env.step(env.action_space.sample())\n\nmemory = Memory(max_size=memory_size)\n\n# Make a bunch of random actions and store the experiences\nfor ii in range(pretrain_length):\n # Uncomment the line below to watch the simulation\n # env.render()\n\n # Make a random action\n action = env.action_space.sample()\n next_state, reward, done, _ = env.step(action)\n\n if done:\n # The simulation fails so no next state\n next_state = np.zeros(state.shape)\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n \n # Start new episode\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n else:\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n state = next_state", "Training\nBelow we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. 
But, it's cool to watch the agent get better at the game.", "# Now train with experiences\nsaver = tf.train.Saver()\nrewards_list = []\nwith tf.Session() as sess:\n # Initialize variables\n sess.run(tf.global_variables_initializer())\n \n step = 0\n for ep in range(1, train_episodes):\n total_reward = 0\n t = 0\n while t < max_steps:\n step += 1\n # Uncomment this next line to watch the training\n # env.render() \n \n # Explore or Exploit\n explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step) \n if explore_p > np.random.rand():\n # Make a random action\n action = env.action_space.sample()\n else:\n # Get action from Q-network\n feed = {mainQN.inputs_: state.reshape((1, *state.shape))}\n Qs = sess.run(mainQN.output, feed_dict=feed)\n action = np.argmax(Qs)\n \n # Take action, get new state and reward\n next_state, reward, done, _ = env.step(action)\n \n total_reward += reward\n \n if done:\n # the episode ends so no next state\n next_state = np.zeros(state.shape)\n t = max_steps\n \n print('Episode: {}'.format(ep),\n 'Total reward: {}'.format(total_reward),\n 'Training loss: {:.4f}'.format(loss),\n 'Explore P: {:.4f}'.format(explore_p))\n rewards_list.append((ep, total_reward))\n \n # Add experience to memory\n memory.add((state, action, reward, next_state))\n \n # Start new episode\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n\n else:\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n state = next_state\n t += 1\n \n # Sample mini-batch from memory\n batch = memory.sample(batch_size)\n states = np.array([each[0] for each in batch])\n actions = np.array([each[1] for each in batch])\n rewards = np.array([each[2] for each in batch])\n next_states = np.array([each[3] for each in batch])\n \n # Train network\n target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})\n \n # Set target_Qs to 0 for states 
where episode ends\n episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)\n target_Qs[episode_ends] = (0, 0)\n \n targets = rewards + gamma * np.max(target_Qs, axis=1)\n\n loss, _ = sess.run([mainQN.loss, mainQN.opt],\n feed_dict={mainQN.inputs_: states,\n mainQN.targetQs_: targets,\n mainQN.actions_: actions})\n \n saver.save(sess, \"checkpoints/cartpole.ckpt\")\n", "Visualizing training\nBelow I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef running_mean(x, N):\n cumsum = np.cumsum(np.insert(x, 0, 0)) \n return (cumsum[N:] - cumsum[:-N]) / N \n\neps, rews = np.array(rewards_list).T\nsmoothed_rews = running_mean(rews, 10)\nplt.plot(eps[-len(smoothed_rews):], smoothed_rews)\nplt.plot(eps, rews, color='grey', alpha=0.3)\nplt.xlabel('Episode')\nplt.ylabel('Total Reward')", "Testing\nLet's checkout how our trained agent plays the game.", "test_episodes = 10\ntest_max_steps = 400\nenv.reset()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n for ep in range(1, test_episodes):\n t = 0\n while t < test_max_steps:\n env.render() \n \n # Get action from Q-network\n feed = {mainQN.inputs_: state.reshape((1, *state.shape))}\n Qs = sess.run(mainQN.output, feed_dict=feed)\n action = np.argmax(Qs)\n \n # Take action, get new state and reward\n next_state, reward, done, _ = env.step(action)\n \n if done:\n t = test_max_steps\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n\n else:\n state = next_state\n t += 1\n\nenv.close()", "Extending this\nSo, Cart-Pole is a pretty simple game. However, the same model can be used to train an agent to play something much more complicated like Pong or Space Invaders. 
Instead of a state like we're using here though, you'd want to use convolutional layers to get the state from the screen images.\n\nI'll leave it as a challenge for you to use deep Q-learning to train an agent to play Atari games. Here's the original paper which will get you started: http://www.davidqiu.com:8888/research/nature14236.pdf." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
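The training loop in the Cart-Pole notebook above builds its regression targets from sampled experience. As a framework-free illustration of just that step, the sketch below samples a mini-batch from a deque-based replay memory and forms the Bellman targets $\hat{Q} = r + \gamma \max_{a'} Q(s', a')$ in plain NumPy, zeroing the bootstrap term on terminal transitions the same way the notebook's loop does. Everything here is a hypothetical stand-in: `q_fn` is a fixed random linear map, not a trained network, and the transitions are synthetic rather than drawn from the gym simulator.

```python
import random
from collections import deque

import numpy as np

# Tiny replay memory, mirroring the Memory class in the notebook above.
memory = deque(maxlen=1000)

# Hypothetical transitions: (state, action, reward, next_state, done).
rng = np.random.default_rng(0)
for _ in range(50):
    s, s2 = rng.normal(size=4), rng.normal(size=4)
    memory.append((s, rng.integers(2), 1.0, s2, False))
memory.append((rng.normal(size=4), 0, 1.0, np.zeros(4), True))  # terminal

def bellman_targets(batch, q_fn, gamma=0.99):
    """r + gamma * max_a' Q(s', a'), with the bootstrap zeroed when done."""
    rewards = np.array([t[2] for t in batch])
    next_states = np.array([t[3] for t in batch])
    dones = np.array([t[4] for t in batch])
    next_q = q_fn(next_states)          # shape (batch, n_actions)
    next_q[dones] = 0.0                 # no future reward past a terminal state
    return rewards + gamma * next_q.max(axis=1)

# Stand-in for the Q-network: a fixed linear map instead of trained weights.
W = rng.normal(size=(4, 2))
q_fn = lambda states: states @ W

batch = random.sample(list(memory), 20)
targets = bellman_targets(batch, q_fn)
print(targets.shape)  # (20,)
```

For a terminal transition the target collapses to the reward alone, which is exactly what the `target_Qs[episode_ends] = (0, 0)` line in the notebook's training cell achieves.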
RNAer/Calour
doc/source/notebooks/microbiome_manipulation.ipynb
bsd-3-clause
[ "Microbiome data manipulation tutorial\nThis is a jupyter notebook example of how to sort, filter and handle sample metadata\nSetup", "import calour as ca\nca.set_log_level(11)\n%matplotlib notebook", "Load the data\nwe use two datasets:\nthe Chronic fatigue syndrome data from:\nGiloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.\nReduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.\nMicrobiome, 4(1), p.30.", "cfs=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',\n 'data/chronic-fatigue-syndrome.sample.txt',\n normalize=10000,min_reads=1000)\n\nprint(cfs)", "Moving pictures dataset, from:\nCaporaso, J.G., Lauber, C.L., Costello, E.K., Berg-Lyons, D., Gonzalez, A., Stombaugh, J., Knights, D., Gajer, P., Ravel, J., Fierer, N. and Gordon, J.I., 2011.\nMoving pictures of the human microbiome.\nGenome biology, 12(5), p.R50.", "movpic=ca.read_amplicon('data/moving_pic.biom',\n 'data/moving_pic.sample.txt',\n normalize=10000,min_reads=1000)\n\nprint(movpic)", "sorting the samples based on a metadata field (sort_samples)\nSort the samples of the experiment based on the values in the given field.\nis the original data sorted by the Subject field?", "print(cfs.sample_metadata['Subject'].is_monotonic_increasing)\n\ncfs=cfs.sort_samples('Subject')", "and is the new data sorted?", "print(cfs.sample_metadata['Subject'].is_monotonic_increasing)", "consecutive sorting using different fields\nKeeps the order of the previous fields if values for the new field are tied.\nFor the moving pictures dataset, we want the data to be sorted by individual, and within each individual to be sorted by timepoint", 
"movpic=movpic.sort_samples('DAYS_SINCE_EXPERIMENT_START')\nmovpic=movpic.sort_samples('HOST_SUBJECT_ID')\n\nprint(movpic.sample_metadata['DAYS_SINCE_EXPERIMENT_START'].is_monotonic_increasing)\n\nprint(movpic.sample_metadata['HOST_SUBJECT_ID'].is_monotonic_increasing)", "filter samples based on metadata field (filter_samples)\nKeep only samples matching the values we supply for the selected metadata field.\nlet's keep only samples from participant F4", "tt=movpic.filter_samples('HOST_SUBJECT_ID','F4')\nprint('* original:\\n%s\\n\\n* filtered:\\n%s' % (movpic, tt))", "we can supply a list of values instead of only one value\nnow let's only keep skin and fecal samples", "print(movpic.sample_metadata['BODY_HABITAT'].unique())\n\nyy=tt.filter_samples('BODY_HABITAT', ['UBERON:skin', 'UBERON:feces'])\nprint(yy)", "we can also reverse the filtering (removing samples with the supplied values)\nWe use the negate=True parameter\nlet's keep just the non-skin and non-feces samples", "yy=tt.filter_samples('BODY_HABITAT', ['UBERON:skin', 'UBERON:feces'], negate=True)\nprint(yy)", "filter low abundance features (filter_abundance)\nRemove all features (bacteria) whose total number of reads (summed over all samples, after normalization) falls below a chosen threshold.\nThis is useful for getting rid of non-interesting features. Note that differently from filtering based on the fraction of samples where a feature is present (filter_prevalence), this method (filter_abundance) will also keep features present in a small fraction of the samples, but in high frequency.", "tt=cfs.filter_abundance(25)\nprint('* original:\\n%s\\n\\n* filtered:\\n%s' % (cfs, tt))", "Keeping the low abundance bacteria instead\nBy default, the function removes the low abundance feature. This can be reversed (i.e. 
keep low abundance features) by using the negate=True parameter", "tt=cfs.filter_abundance(25, negate=True)\nprint('* original:\\n%s\\n\\n* filtered:\\n%s' % (cfs,tt))", "filter non-common bacteria (filter_prevalence)\nRemove bacteria based on the fraction of the samples where the bacteria are present.", "# remove bacteria present in less than half of the samples\ntt=cfs.filter_prevalence(0.5)\nprint('* original:\\n%s\\n\\n* filtered:\\n%s' % (cfs, tt))", "Filter bacteria based on the mean frequency over all samples (filter_mean)\nRemove bacteria which have a mean (over all samples) lower than the desired threshold.", "# keep only high frequency bacteria (mean over all samples > 1%)\ntt=cfs.filter_mean(0.01)\nprint('* original:\\n%s\\n\\n* filtered:\\n%s' % (cfs, tt))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
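The two filtering criteria in the Calour notebook above can be made concrete on a plain feature table. The sketch below is a rough reimplementation of the idea with pandas, not Calour's actual code: the column names and the Poisson-distributed counts are made up purely for illustration, and the real `filter_prevalence`/`filter_abundance` methods operate on a Calour `Experiment`, not a `DataFrame`.

```python
import numpy as np
import pandas as pd

# Hypothetical feature table: rows are samples, columns are bacteria (features).
rng = np.random.default_rng(42)
counts = pd.DataFrame(rng.poisson(lam=[0.1, 2.0, 10.0, 0.5], size=(20, 4)),
                      columns=['rare', 'common', 'dominant', 'patchy'])

def filter_prevalence(table, fraction):
    """Keep features present (count > 0) in at least `fraction` of samples."""
    prevalence = (table > 0).mean(axis=0)
    return table.loc[:, prevalence >= fraction]

def filter_abundance(table, min_total, negate=False):
    """Keep features whose total count over all samples reaches `min_total`."""
    keep = table.sum(axis=0) >= min_total
    return table.loc[:, ~keep if negate else keep]

print(filter_prevalence(counts, 0.5).columns.tolist())
print(filter_abundance(counts, 25).columns.tolist())
```

This makes the distinction in the notebook visible: a high-count feature that appears in only a few samples survives the abundance filter but not the prevalence filter.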
kadircet/CENG
783/HW2/task1_denoising_autoencoder.ipynb
gpl-3.0
[ "Implementing a denoising autoencoder\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the HW page on the course website.\nIn this exercise we will develop a denoising autoencoder, and test it out on the MNIST dataset.", "# A bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))", "We will use the class DenoisingAutoencoder in the file METU/denoising_autoencoder.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. 
Below, we initialize toy data and a toy model that we will use to develop your implementation.", "from METU.denoising_autoencoder import DenoisingAutoencoder\nfrom METU.Noise import Noise, GaussianNoise\n\n# Create a small net and some toy data to check your implementations.\n# Note that we set the random seed for repeatable experiments.\n\ninput_size = 4\nhidden_size = 2\nnum_inputs = 100\n# Outputs are equal to the inputs\nnetwork_size = (input_size, hidden_size, input_size)\n\ndef init_toy_model(num_inputs, input_size):\n np.random.seed(0)\n net = DenoisingAutoencoder((input_size, hidden_size, input_size))\n net.init_weights()\n return net\n\ndef init_toy_data(num_inputs, input_size):\n np.random.seed(1)\n X = np.random.randn(num_inputs, input_size)\n return X\n\nnet = init_toy_model(num_inputs, input_size)\nX = init_toy_data(num_inputs, input_size)\nprint \"Ok, now we have a toy network\"", "Forward pass: compute loss\nOpen the file METU/denoising_autoencoder.py and look at the method DenoisingAutoencoder.loss. This function is very similar to the loss functions you have written in the first HW: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. \nImplement the first part of the forward pass which uses the weights and biases to compute the scores for the corrupted input. In the same function, implement the second part that computes the data and the regularization losses.", "loss,_ = net.loss(GaussianNoise(0.5)(X), X, reg=3e-3, activation_function='sigmoid')\n\ncorrect_loss = 2.42210627243\nprint 'Your loss value:' + str(loss)\n\nprint 'Difference between your loss and correct loss:'\nprint np.sum(np.abs(loss - correct_loss))", "Backward pass\nImplement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) 
have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:", "from METU.gradient_check import eval_numerical_gradient\n\nreg = 3e-3\n\n# Use numeric gradient checking to check your implementation of the backward pass.\n# If your implementation is correct, the difference between the numeric and\n# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.\n\nnet.init_weights()\nnoisy_X = GaussianNoise(0.5)(X)\nloss, grads = net.loss(noisy_X, X, reg, activation_function='tanh')\n\n# these should all be less than 1e-5 or so\nf = lambda W: net.loss(noisy_X, X, reg, activation_function='tanh')[0]\nW1_grad = eval_numerical_gradient(f, net.weights[1]['W'], verbose=False)\nprint '%s max relative error: %e' % (\"W1\", rel_error(W1_grad, grads[1]['W']))\nW0_grad = eval_numerical_gradient(f, net.weights[0]['W'], verbose=False)\nprint '%s max relative error: %e' % (\"W0\", rel_error(W0_grad, grads[0]['W']))\nb1_grad = eval_numerical_gradient(f, net.weights[1]['b'], verbose=False)\nprint '%s max relative error: %e' % (\"b1\", rel_error(b1_grad, grads[1]['b']))\nb0_grad = eval_numerical_gradient(f, net.weights[0]['b'], verbose=False)\nprint '%s max relative error: %e' % (\"b0\", rel_error(b0_grad, grads[0]['b']))", "Train the network\nTo train the network we will use stochastic gradient descent (SGD). Look at the function DenoisingAutoencoder.train_with_SGD and fill in the missing sections to implement the training procedure. This should be very similar to the training procedures you used in the first HW. \nOnce you have implemented the method, run the code below to train the network on toy data. 
You should achieve a training loss less than 2.0.", "net = init_toy_model(num_inputs, input_size)\nreg = 3e-3\nstats = net.train_with_SGD(X, noise=GaussianNoise(sd=0.5),\n learning_rate=0.02, learning_rate_decay=0.95, \n reg=reg, batchsize=100, num_iters=500, verbose=False, activation_function='sigmoid')\n\nprint 'Final training loss: ', stats['loss_history'][-1]\n# plot the loss history\nplt.plot(stats['loss_history'])\nplt.xlabel('iteration')\nplt.ylabel('training loss')\nplt.title('Training Loss history')\nplt.show()", "Load the data\nNow that you have implemented a DAE network that passes gradient checks and works on toy data, it's time to load up the MNIST dataset so we can use it to train DAE on a real dataset. Make sure that you have run \"cs231n/datasets/get_datasets.sh\" script before you continue with this step.", "from cs231n.data_utils import load_mnist\n\nX_train, y_train, X_val, y_val, X_test, y_test = load_mnist()\nX_train = X_train.reshape(X_train.shape[0], -1)\nX_val = X_val.reshape(X_val.shape[0], -1)\nX_test = X_test.reshape(X_test.shape[0], -1)\n \nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\n\n#Visualize some samples\n\nx = np.reshape(X_train[100], (28,28))\n\nplt.imshow(x)\nplt.title(y_train[0])\nplt.show()\n\nplt.imshow(GaussianNoise(rate=0.5,sd=0.5)(x))\nplt.show()\n# Yes, DAE will learn to reconstruct from such corrupted data", "Train a network\nTo train our network we will use SGD with momentum. 
In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.", "import time\n\ninput_size = 28 * 28\nhidden_size = 300 # Try also sizes bigger than 28*28\n\nreg = 0.003 # 3e-3\n\nnet = DenoisingAutoencoder((input_size, hidden_size, input_size))\nnet.init_weights()\n\n# Train with SGD\ntic = time.time()\nstats = net.train_with_SGD(X_train, noise=GaussianNoise(rate=0.5,sd=0.5),\n learning_rate=0.4, learning_rate_decay=0.99, \n reg=reg, num_iters=1000, batchsize=128, momentum='classic', mu=0.9, verbose=True, \n activation_function='sigmoid')\ntoc = time.time()\nprint toc-tic, 'sec elapsed'", "Debug the training\nWith the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.\nOne strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.\nAnother strategy is to visualize the weights that were learned in the first layer of the network. 
In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.", "# Plot the loss function and train / validation accuracies\nplt.subplot(2, 1, 1)\nplt.plot(stats['loss_history'])\nplt.title('Loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\n\nplt.show()\n\n#from cs231n.vis_utils import visualize_grid\n#from cs231n.vis_utils import visualize_grid_2D\n\n# SHOW SOME WEIGHTS\nW0 = net.weights[0]['W']\nW0 = W0.T\nnum_of_samples=100\nfor i in range(0,10):\n for j in range(0,10):\n plt.subplot(10, 10, i*10+j+1)\n rand_index = np.random.randint(0,W0.shape[0]-1,1)\n plt.imshow(W0[rand_index].reshape(28,28))\n plt.axis('off')\nplt.show()\n\n# SHOW SOME RECONSTRUCTIONS\nplt_index=1\nfor i in range(0,10):\n rand_index = np.random.randint(0,X_train.shape[0]-1,1)\n x = X_train[rand_index]\n x_noisy = GaussianNoise(rate=0.5,sd=0.5)(x)\n x_recon = net.predict(x_noisy)\n #x_loss,_ = net.loss(x_noisy, x, reg=0.0, activation_function='sigmoid')\n \n plt.subplot(10,3,plt_index)\n plt.imshow(x.reshape(28,28))\n plt.axis('off')\n if i == 0: plt.title('input')\n plt_index+=1\n plt.subplot(10,3,plt_index)\n plt.imshow(x_noisy.reshape(28,28))\n plt.axis('off')\n if i == 0: plt.title('corrupted input')\n plt_index+=1\n plt.subplot(10,3,plt_index)\n plt.imshow(x_recon.reshape(28,28))\n plt.axis('off')\n if i == 0: plt.title('reconstruction')\n plt_index+=1", "Tune your hyperparameters\nWhat's wrong?. Look at the visualizations above and try to come up with strategies for improving your training. With some effort, I came up with the following weights (which are also not perfect) and reconstructions (which are quite good):\n<img src=\"dae_learned_representation.png\">\n<img src=\"dae_learned_representation_demo.png\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
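Independent of the course's METU module (whose exact loss convention produces the `correct_loss` constant checked in the notebook above), the forward pass of a one-hidden-layer denoising autoencoder is short enough to sketch directly in NumPy. The shapes follow the toy setup (4 inputs, 2 hidden units, reconstruct the clean input from the corrupted one), but the particular mean-squared-error plus L2 formulation here is an illustrative assumption, not necessarily the assignment's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_loss(x_noisy, x_clean, W1, b1, W2, b2, reg=3e-3):
    """Forward pass of a one-hidden-layer denoising autoencoder.

    Encodes the *corrupted* input, decodes a reconstruction, and scores it
    against the *clean* input: mean squared error plus L2 regularization.
    """
    h = sigmoid(x_noisy @ W1 + b1)       # hidden representation
    x_hat = sigmoid(h @ W2 + b2)         # reconstruction
    data_loss = np.mean(np.sum((x_hat - x_clean) ** 2, axis=1))
    reg_loss = 0.5 * reg * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return data_loss + reg_loss, x_hat

X = rng.random((100, 4))                 # clean toy data in [0, 1)
X_noisy = np.clip(X + rng.normal(0, 0.5, X.shape), 0, 1)

W1 = 0.01 * rng.normal(size=(4, 2)); b1 = np.zeros(2)
W2 = 0.01 * rng.normal(size=(2, 4)); b2 = np.zeros(4)

loss, X_hat = dae_loss(X_noisy, X, W1, b1, W2, b2)
print(round(loss, 4))
```

The key point the sketch isolates is the asymmetry of the loss: corruption goes into the encoder, but the target is always the uncorrupted input, which is what forces the hidden layer to learn denoising features.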
henchc/Rediscovering-Text-as-Data
04-Stylometry/01-Ad-Hoc-Stylometry.ipynb
mit
[ "Stylometry\nThis notebook is designed to reproduce several findings from Emily Thornbury's chapter \"The Poet Alone\" in her book Becoming a Poet in Anglo-Saxon England. In particular, Fig. 4.5 on page 170.\nFirst, however, we're going to think about what we might do with lists of strings. After all, how else can we count features of a string unless we can somehow make a list of items out of it?\nLists\nHere's a list:", "[\"þæt\", \"wearð\", \"underne\"]", "How do I know?", "type([\"þæt\", \"wearð\", \"underne\"])", "We can assign these to variables too!", "first_hemistich = [\"þæt\", \"wearð\", \"underne\"]\nsecond_hemistich = [\"eorðbuendum\"]\nprint(first_hemistich)\nprint(second_hemistich)", "And perform mathematical operations:", "print(first_hemistich + second_hemistich)", "Let's assign that to first_line:", "first_line = first_hemistich + second_hemistich\nprint(first_line)", "You can get the length of a list using the len function:", "len(first_line)", "You can index lists with brackets [], let's get the first word of the first line:", "print(first_line[1])", "<div class=\"alert alert-danger\">\nDon't forget, Python (and many other languages) start counting from 0.\n</div>", "print(first_line[0])", "You can get ranges using a colon :", "print(first_line[:2])\nprint(type(first_line[:2]))", "Challenge 1\n\nConcatenate the first three lines of Christ and Satan.\nRetrieve the third element from the combined list.\nRetrieve the fourth through sixth elements from the combined list.", "first_line = ['þæt', 'wearð', 'underne', 'eorðbuendum,']\nsecond_line = ['þæt', 'meotod', 'hæfde', 'miht', 'and', 'strengðo']\nthird_line = ['ða', 'he', 'gefestnade', 'foldan', 'sceatas.']", "List Comprehension\nFor now, think of a list comprehension as a fast way to sift out items from a list, instead of writing a for loop that appends to a new one.", "[word for word in first_line if \"e\" in word]", "INSTEAD OF", "has_e = []\n\nfor word in first_line:\n if \"e\" in word:\n 
has_e.append(word)\n\nhas_e", "Now you know why list comprehensions are one of the best parts of Python!\nEspecially for text analysis, these will come in handy when we want to parse and sift through text.\nChallenge 2\n\nConcatenate the first three lines of Christ and Satan.\nCreate a new list that contains only the words whose last letter is \"e\"\nCreate a new list that contains the first letter of each word.\nCreate a new list that contains only words longer than two letters.\n\n\nWord Frequencies", "with open('data/christ-and-satan.txt', 'r') as f:\n christ_and_satan = f.read()\n\ntokens = christ_and_satan.split()\n\ntokens", "Looks like a decent start. But we still have verse numbering in there, as well as some punctuation. What if we just want the words?", "from string import punctuation, digits\n\npunctuation\n\ndigits", "Does it feel like time for a list comprehension? It should.\nChallenge 3\nWrite a list comprehension to remove line numbers and punctuation.\n\nPython comes with the convenient Counter method from the collections library. 
It returns a dictionary like object that will return the frequency of a particular key.", "from collections import Counter\ncs_dict = Counter(tokens)\n\ncs_dict\n\ncs_dict.keys()\n\ncs_dict.values()\n\ncs_dict.most_common()", "Believe it or not, even 1000 years ago \"and\" was still used all the time :) .\nChallenge 4\n\nA common measure of lexical diversity for a given text is its Type-Token Ratio: the ratio of unique words (type) to number of all words (tokens) in the text.\nCalculate the Type-Token Ratio for Christ and Satan.\n\n\nVisualization", "%matplotlib inline\nfrom datascience import *\nimport numpy as np\n\nwords, frequency = zip(*cs_dict.items())\n\nt = Table([\"Words\", \"Frequency\"])\nt.append_column(\"Words\", words)\nt.append_column(\"Frequency\", frequency)\ntop_table = t.sort(\"Frequency\", descending=\"True\").take(np.arange(5))\ntop_table.bar(column_for_categories=\"Words\")", "Ad Hoc Stylometry\nWe can now put together our knowledge of strings, list comprehensions, and plotting frequencies to look at frequency of alliteration letters. Remember: Alliteration is the repetition of a sound at the beginning of two or more words in the same line.\nLet's start by looking at the first letter of every word in the whole text:", "cs_tokens = christ_and_satan.lower().split()\nfirst_letters = [x[0] if x[0] not in ['a','e','i','o','u','y'] else 'a' for x in cs_tokens]\nfirst_l_dict = Counter(first_letters)\nfirst_l_freq = first_l_dict.most_common()\nprint(first_l_freq)\n\n# plot\nletters, frequency = zip(*first_l_dict.items())\n\nt = Table([\"Letters\", \"Frequency\"])\nt.append_column(\"Letters\", letters)\nt.append_column(\"Frequency\", frequency)\ntop_table = t.sort(\"Frequency\", descending=\"True\").take(np.arange(5))\ntop_table.bar(column_for_categories=\"Letters\")", "Cool! But we need it within a line, and Thornbury specifically does it for each Fitt. What's a \"Fitt\"? It's a further division in poetry constituted by a group of lines. 
Luckily this is nicely delimited by double line breaks (\\n\\n).", "cs_fitts = christ_and_satan.split('\\n\\n')\ncs_fitts\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize = (10,10))\n\n# iterate through fitts\nfor i in range(len(cs_fitts)):\n \n # lowercase the string and get the tokens for each line back\n fitt_tokens = [l.split() for l in cs_fitts[i].lower().split('\\n')]\n \n # collect letter of most freq alliteration\n most_freq_allit = []\n \n # cycle through lines\n for l in fitt_tokens:\n \n # get first letter of all words in line\n first_letters = [x[0] if x[0] not in ['a','e','i','o','u','y'] else 'a' for x in l]\n \n # count freq of all first letters\n allit_freq = Counter(first_letters).most_common()\n try:\n # append most freq letter (alliterated letter) to list for all lines\n most_freq_allit.append(allit_freq[0][0])\n except:\n pass\n \n # use Counter to get the most common alliterations\n allit_freq = Counter(most_freq_allit).most_common()\n\n # need keys for x axis\n common_keys = [x[0] for x in allit_freq]\n \n # need values for y axes\n common_values = [x[1] for x in allit_freq]\n \n # normalize so we can compare across Fitts despite different number of words\n normed_values = [x[1]/sum(common_values) for x in allit_freq]\n \n # add up to get cumulative alliteration of the four most preferred patterns\n cumulative_values = np.cumsum(normed_values)\n\n # add the Fitt to the plot\n plt.xticks(range(4), ['1st','2nd','3rd','4th'], rotation='vertical')\n plt.plot(cumulative_values[:4], color = plt.cm.bwr(i*.085), lw=3)\nplt.legend(labels=['Fitt '+str(i+1) for i in range(12)], loc=0)\nplt.show()", "Homework: Acrostics\nIn poetry, an acrostic is a message created by taking certain letters in a pattern over lines. One 9th century German writer, Otfrid of Weissenburg, was notorious for his early use of acrostics, one instance of which is in the text below: Salomoni episcopo Otfridus. 
His message can be found by taking the first character of every other line. Print Otfrid's message!\nSource: http://titus.uni-frankfurt.de/texte/etcs/germ/ahd/otfrid/otfri.htm", "text = '''si sálida gimúati sálomones gúati, \n ther bíscof ist nu édiles kóstinzero sédales; \n allo gúati gidúe thio sín, thio bíscofa er thar hábetin, \n ther ínan zi thiu giládota, in hóubit sinaz zuívalta! \n lékza ih therera búachi iu sentu in suábo richi, \n thaz ir irkíaset ubar ál, oba siu frúma wesan scal; \n oba ir hiar fíndet iawiht thés thaz wírdig ist thes lésannes: \n iz iuer húgu irwállo, wísduames fóllo. \n mir wárun thio iuo wízzi ju ófto filu núzzi, \n íueraz wísduam; thes duan ih míhilan ruam. \n ófto irhugg ih múates thes mánagfalten gúates, \n thaz ír mih lértut hárto íues selbes wórto. \n ni thaz míno dohti giwérkon thaz io móhti, \n odo in thén thingon thio húldi so gilángon; \n iz datun gómaheiti, thio íues selbes gúati, \n íueraz giráti, nales míno dati. \n emmizen nu ubar ál ih druhtin férgon scal, \n mit lón er iu iz firgélte joh sínes selbes wórte; \n páradyses résti gébe iu zi gilústi; \n ungilónot ni biléip ther gotes wízzode kleip. \n in hímilriches scóne so wérde iz iu zi lóne \n mit géltes ginúhti, thaz ír mir datut zúhti. \n sínt in thesemo búache, thes gómo theheiner rúache; \n wórtes odo gúates, thaz lích iu iues múates: \n chéret thaz in múate bi thia zúhti iu zi gúate, \n joh zellet tház ana wánc al in íuweran thanc. \n ofto wírdit, oba gúat thes mannes júngoro giduat, \n thaz es líwit thráto ther zúhtari gúato. \n pétrus ther rícho lono iu es blídlicho, \n themo zi rómu druhtin gráp joh hús inti hóf gap; \n óbana fon hímile sént iu io zi gámane \n sálida gimýato selbo kríst ther gúato! 
\n oba ih irbálden es gidár, ni scal ih firlázan iz ouh ál, \n nub ih ío bi iuih gerno gináda sina férgo, \n thaz hóh er iuo wírdi mit sínes selbes húldi, \n joh iu féstino in thaz múat thaz sinaz mánagfalta gúat; \n firlíhe iu sines ríches, thes hohen hímilriches, \n bi thaz ther gúato hiar io wíaf joh émmizen zi góte riaf; \n rihte íue pédi thara frúa joh míh gifúage tharazúa, \n tház wir unsih fréwen thar thaz gotes éwiniga jár, \n in hímile unsih blíden, thaz wízi wir bimíden; \n joh dúe uns thaz gimúati thúruh thio síno guati! \n dúe uns thaz zi gúate blídemo múate! \n mit héilu er gibóran ward, ther io thia sálida thar fand, \n uuanta es ni brístit furdir (thes gilóube man mír), \n nirfréwe sih mit múatu íamer thar mit gúatu. \n sélbo krist ther guato firlíhe uns hiar gimúato, \n wir íamer fro sin múates thes éwinigen gúates!'''\n\n# HINT: remember what % does, (maybe) lookup enumerate", "Otfrid was more skillful than to settle for the first letter of every other line. What happens if you extract the last letter of the last word of each line, for every other line starting on the second line?", "# HINT: first remove punctuation, tab is represented by \\t\nfrom string import punctuation" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Zhenxingzhang/AnalyticsVidhya
Articles/Python_List_Comprehension/List and Dictionary Comprehension-final.ipynb
apache-2.0
[ "Python List Comprehension\nNote: The run times might be different from those presented in the article because the codes were re-run on a different machine, which is much faster than the older one. But the trend should be the same.\nWhat is LC?\nAnalogous to set-builder form", "[x**2 for x in range(0,10)]\n\n[x for x in range(1,20) if x%2==0 ]\n\n[x for x in 'MATHEMATICS' if x in ['A','E','I','O','U']]\n\nfor i in range(1,101):\n if int(i**0.5)==i**0.5:\n print i\n\n[i for i in range(1,101) if int(i**0.5)==i**0.5]\n\nimport numpy as np", "Eg1: Matrix Flatten:", "# matrix = [ range(0,5), range(5,10), range(10,15) ]\n# print matrix\n\ndef eg1_for(matrix):\n flat = []\n for row in matrix:\n for x in row:\n flat.append(x)\n return flat\n\ndef eg1_lc(matrix):\n return [x for row in matrix for x in row ]\n\n\nmatrix = [ range(0,5), range(5,10), range(10,15) ]\nprint "Original Matrix: " + str(matrix)\nprint "FOR-loop result: " + str(eg1_for(matrix))\nprint "LC result : " + str(eg1_lc(matrix))\n\n%timeit eg1_for(matrix)\n\n%timeit eg1_lc(matrix)", "Eg2: Removing vowels from a sentence:", "def eg2_for(sentence):\n vowels = 'aeiou'\n filtered_list = []\n for l in sentence:\n if l not in vowels:\n filtered_list.append(l)\n return ''.join(filtered_list)\neg2_for('My name is Aarshay Jain!')\n\ndef eg2_lc(sentence):\n vowels = 'aeiou'\n return ''.join([ l for l in sentence if l not in vowels])\neg2_lc('My name is Aarshay Jain!')\n\nsentence = 'My name is Aarshay Jain!'\nprint "FOR-loop result: " + eg2_for(sentence)\nprint "LC result : " + eg2_lc(sentence)\n\n%timeit eg2_for('My name is Aarshay Jain!')\n\n%timeit eg2_lc('My name is Aarshay Jain!')", "Eg3: Dictionary Comprehension:", "country = ['India', 'Pakistan', 'Nepal', 'Bhutan', 'China', 'Bangladesh']\ncapital = ['New Delhi', 'Islamabad','Kathmandu', 'Thimphu', 'Beijing', 'Dhaka']\n\ndef eg3_for(keys, values):\n dic = {}\n for i in range(len(keys)):\n dic[keys[i]] = values[i]\n return 
dic\neg3_for(country,capital)\n\ndef eg3_lc(keys, values):\n return { keys[i] : values[i] for i in range(len(keys)) }\neg3_lc(country,capital)\n\ncountry = ['India', 'Pakistan', 'Nepal', 'Bhutan', 'China', 'Bangladesh']\ncapital = ['New Delhi', 'Islamabad','Kathmandu', 'Thimphu', 'Beijing', 'Dhaka']\nprint \"FOR-loop result: \" + str(eg3_for(country, capital))\nprint \"LC result : \" + str(eg3_lc(country, capital))\n\n%timeit eg3_for(country,capital)\n\n%timeit eg3_lc(country,capital)", "Additional examples (mentioned as exercise for users)\nEg: Prime numbers:", "#FOR:\ndef eg4_for(N):\n non_primes = [] \n for i in range(2,int(N**0.5)+1):\n for j in range(i,N,i):\n# print j\n non_primes.append(j)\n primes = []\n for i in range(2,N):\n if i not in non_primes:\n primes.append(i)\n return primes\nprint eg4_for(100)\n%timeit eg4_for(100)\n\n#LC:\ndef eg4_lc(N):\n non_primes = [ j for i in range(2,int(N**0.5)+1) for j in range(i,N,i)]\n return [ i for i in range(2,N) if i not in non_primes]\nprint eg4_lc(100)\n%timeit eg4_lc(100)", "Eg: Matrix Multiplication:", "mat1 = [ range(0,5), range(5,10) ]\nmat2 = [ range(0,2), range(2,4), range(4,6), range(6,8), range(8,10) ]\nprint mat1 , mat2\n\ndef eg2_for(mat1, mat2):\n mat1_row = len(mat1)\n mat2_row = len(mat2) #also num of col of mat1\n mat2_col = len(mat2[0])\n matm2 = [ [0]*mat2_col for i in range(mat1_row) ]\n for row in range(mat1_row):\n for col in range(mat2_col):\n for i in range(mat2_row):\n matm2[row][col] += (mat1[row][i]*mat2[i][col])\n return matm2\nprint eg2_for(mat1,mat2)\n%timeit eg2_for(mat1,mat2)\n\ndef eg2_lc(mat1, mat2):\n mat1_row = len(mat1)\n mat2_row = len(mat2) #also num of col of mat1\n mat2_col = len(mat2[0])\n matm = [ sum( [mat1[row][i]*mat2[i][col] for i in range(mat2_row)] ) for row in range(mat1_row) for col in range(mat2_col) ]\n return matm\nprint eg2_lc(mat1,mat2)\n%timeit eg2_lc(mat1,mat2)\n\n%timeit eg2_for(mat1,mat2)\n\n%timeit eg2_lc(mat1,mat2)", "Eg: Find all possible triangles:\nWe 
are given an integer N and we have to find all possible triangles with unique lengths that can be formed using side lengths <=N. Let's compare the for and LC cases:", "def tri_for(N):\n L=[]\n for i in range(1,N-2):\n for j in range(i+1,N-1):\n for k in range(j+1, N):\n # keep only side lengths that satisfy the triangle inequality\n if (i+j>k) & (i+k>j) & (j+k>i):\n L.append((i,j,k))\n return L \n\ndef tri_lc(N):\n return [(i,j,k) for i in range(1,N-2) for j in range(i+1,N-1) for k in range(j+1,N) if ((i+j>k) & (i+k>j) & (j+k>i))]\n# [ (i,j,k) for i in range(1,N-2) for j in range(i+1,N-1) for k in range(j+1,N) ]\n\nprint tri_for(10)\n%timeit tri_for(10)\n\nprint tri_lc(10)\n%timeit tri_lc(10)", "The Time Advantage\nMap Function Review:\nUsed to apply a function to each element of a list or any other iterable. \nSyntax: map(function, iterable)\nFor example, we can multiply each element of a list of integers by the next number.", "arr = range(10) #contains [0,1,...,9]\nmap(lambda x: x*(x+1), arr)", "Here we have used Python's anonymous lambda function. This can be replaced with a standard Python function or a user-defined function declared earlier.\nA simple example", "#Method 1: For-Loop\ndef square_for(arr):\n result = []\n for i in arr:\n result.append(i**2)\n return result\nprint square_for(range(1,11))\n\n#Method 2: Map Function\ndef square_map(arr):\n return map(lambda x: x**2, arr)\nprint square_map(range(1,11))\n\n#Method 3: List comprehension:\ndef square_lc(arr):\n return [i**2 for i in arr]\nprint square_lc(range(1,11))", "Though the three techniques produce the same result, we can see that LC is the most elegant and readable technique. You might argue that even the map function is not bad in this case. But map has its own limitations which are not evident in this example.\nTaking a step forward\nLet's include a catch here. What if we want the square of only even numbers in the list? 
The three functions would look like:", "#Method 1: For-Loop\ndef square_even_for(arr):\n result = []\n for i in arr:\n if i%2 == 0:\n result.append(i**2)\n return result\nprint square_even_for(range(1,11))\n\n#Method 2: Map Function\ndef square_even_map(arr):\n return filter(lambda x: x is not None,map(lambda x: x**2 if x%2==0 else None, arr))\nprint square_even_map(range(1,11))\n\n#Method 3: List comprehension:\ndef square_even_lc(arr):\n return [i**2 for i in arr if i%2==0]\nprint square_even_lc(range(1,11))", "It is clearly evident that with the slight increase in complexity, both the for and map routines became bulkier and less readable. However, the LC routine is still concise and required only a minor modification.\nBefore going into more complex examples, let's try to appreciate another advantage of using LC - lower computational time!\nComparing run-times:\nLet us compare the time taken for each of the above functions to run. We'll be using the %timeit magic function of the IPython notebook to determine the runtime. Alternatively, you can use the time or timeit modules. \nNow you will be able to appreciate the importance of writing each code fragment as a function. Also, we shall focus on the relative run times and not the absolute values because they are subject to the machine specs. FYI, I am using a Dell XPS 14Z system with the following specs: \n2nd Gen i5 (2.5GHz) | 4GB RAM | 64-bit OS | Windows 7 Home Premium\nLet's compare the time for the first example:", "%timeit square_for(range(1,11))\n\n%timeit square_map(range(1,11))\n\n%timeit square_lc(range(1,11))", "Here we can see that in this case LC is ~30% faster than the for-loop and ~45% faster than the map function.\nLet's check the second example:", "%timeit square_even_for(range(1,11))\n\n%timeit square_even_map(range(1,11))\n\n%timeit square_even_lc(range(1,11))", "In this case, LC is ~20% faster than the for-loop and ~65% faster than the map function.\nNow this is something incredible. 
Not only is LC more elegant but also faster than its counterparts. Yes, even I want to get into advanced applications of LC. But hang on! I am not convinced. Why is LC faster? Will it be faster in all scenarios or are these special cases? Let's try to find out!\nWhy is LC fast?\nI would not doubt your intellectual skills at this point if you are still wondering why LC is faster. After all it's following the same process:\n1. Iterating over the list\n2. Modifying each element\n3. Storing the result\nLet's try to inspect each step one by one. Let's simply call a function that does nothing and check the iteration times:", "#Method 1: For-loop:\ndef empty_for(arr):\n for i in arr:\n pass\n%timeit empty_for(range(1,11))\n\n#Method 2: Map\ndef empty_map(arr):\n map(lambda x: None,arr)\n%timeit empty_map(range(1,11))\n\n#Method 3: LC\ndef empty_lc(arr):\n [None for i in arr]\n%timeit empty_lc(range(1,11))", "Here we see that the for-loop is fastest. This is because in a for-loop, we need not return an element and can just move on to the next iteration using \"pass\".\nIn both LC and map, returning an element is necessary. The codes here return None. But still map takes more than twice the time. Intuitively, we can think that map involves a definite function call at each iteration, which can be the reason behind the extra time.\nNow, let's perform a simple operation of multiplying the number by 2, but without storing the result:", "#Method 1: For-loop:\ndef x2_for(arr):\n for i in arr:\n i*2\n%timeit x2_for(range(1,11))\n\n#Method 2: Map\ndef x2_map(arr):\n map(lambda x: x*2,arr)\n%timeit x2_map(range(1,11))\n\n#Method 3: LC\ndef x2_lc(arr):\n [i*2 for i in arr]\n%timeit x2_lc(range(1,11))", "Here we see a similar trend as before. So up to the point of iterating and making slight modifications, the for-loop is the clear winner.\nLC is close to the for-loop, but again map takes around twice as much time. 
Note that here the difference in run times will also depend on the complexity of the function being applied to each element.\nAnother intuition for the higher run time of map and LC can be that in both cases, it is compulsory to store information, and we are actually performing all 3 steps for LC and map. So let's check the runtime of the for-loop with step 3:", "def store_for(arr):\n result=[]\n for i in arr:\n result.append(i*2)\n return result\n%timeit store_for(range(1,11))", "This is interesting! So the runtime almost doubles just because of storing the information. The reason is that we have to define an empty list and append the result to it in each iteration.\nAfter all 3 steps, LC seems to be the clear winner. But are you 100% sure why? Not sure about you, but I am not convinced. My intuition says that map is probably slower because it has to make a function call at each step. LC might just be calculating the value of the same expression for all elements. \nWe can quickly check this out. Let's make a function call in LC as well:", "def x2_lc(arr):\n def mul(x):\n return x*2\n [mul(i) for i in arr]\n%timeit x2_lc(range(1,11))", "Aha! So the guess was right. When we force LC to make function calls, it ends up being more expensive than the map function. \nSo I guess the bottom line is that LC is faster in cases where simple expressions are applied to each element. But if complex functions are required, map and LC would be nearly the same. 
We can choose the one which works best.\nAs promised, let's think of a slightly advanced application of LC:\nUsing LC as generators:", "def my_first_gen(n):\n for i in range(n):\n yield i\n\nprint my_first_gen(10)\n\ngen = my_first_gen(3)\n\nprint gen.next()\n\ndef flow_of_info_gen(N):\n print 'function runs for first time'\n for i in range(N):\n print 'execution before yielding value %d' % i\n yield i\n print 'execution after yielding value %d' % i\n print 'function runs for last time'\n\ngen2 = flow_of_info_gen(3)\ngen2.next()\n\ngen2.next()\n\ngen2.next()\n\ngen2.next()\n\ngen3 = my_first_gen(10)\ngen3.next()\ngen3.next()\ngen3.next()\ngen3.next()\nsum(gen3)\n\n#LC returning a list\n[x for x in range(10)]\n\n#LC working as a generator\n(x for x in range(10))\n\nsum(x for x in range(10))\n\ndef sum_list(N):\n return sum([x for x in range(N)])\n\ndef sum_gen(N):\n return sum((x for x in range(N)))\n\nN=1000\nprint 'Time for LC : ',\n%timeit sum_list(N)\nprint '\\nTime for Generator : ',\n%timeit sum_gen(N)\n\nN=100000 #100K\nprint 'Time for LC : ',\n%timeit sum_list(N)\nprint '\\nTime for Generator : ',\n%timeit sum_gen(N)\n\nN=10000000 #10Mn\nprint 'Time for LC : ',\n%timeit sum_list(N)\nprint '\\nTime for Generator : ',\n%timeit sum_gen(N)\n\nN=100000000 #100Mn\nprint '\\nTime for Generator : ',\n%timeit sum_gen(N)\nprint 'Time for LC : ',\n%timeit sum_list(N)", "Data Science Examples:\nExample 4: Reading list of list:", "import pandas as pd\ndata = pd.read_csv(\"skills.csv\")\nprint data\n\n#Split text with the separator ';'\ndata['skills_list'] = data['skills'].apply(lambda x: x.split(';'))\nprint data['skills_list']\n\n#Initialize the set\nskills_unq = set()\n#Update each entry into set. 
Since it takes only unique value, duplicates will be ignored automatically.\nskills_unq.update( (sport for l in data['skills_list'] for sport in l) )\nprint skills_unq\n\n#Convert set to list:\nskills_unq = list(skills_unq)\nsport_matrix = [ [1 if skill in row else 0 for skill in skills_unq] for row in data['skills_list'] ]\nsport_matrix\n\ndata = pd.concat([data, pd.DataFrame(sport_matrix,columns=skills_unq)],axis=1)\nprint data", "Eg5: Creating powers for Polynomial regression", "data2 = pd.DataFrame([1,2,3,4,5], columns=['number'])\nprint data2\n\ndeg = 6\ncols = ['power_%d'%i for i in range(2,deg+1)]\nprint cols\n\npower_matrix = [ [i**p for p in range(2,deg+1) ] for i in data2['number'] ] \npower_matrix\n\ndata2 = pd.concat([data2, pd.DataFrame(power_matrix,columns=cols)],axis=1)\n\nprint data2", "Eg6: Filtering columns:", "cols = ['a','b','c','d','a_transform','b_transform','c_transform','d_power2','d_power3','d_power4','d_power5','temp1','temp2']\n#Here a,b,c,d are original variables; transform are transformation, power are for polynomial reg, temp are intermediate\n\n#Select only variables with 'transform':\ncol_set1 = [x for x in cols if x.endswith('transform')]\ncol_set2 = [x for x in cols if 'power' in x]\ncol_set3 = [x for x in cols if (x.endswith('transform')) | ('power' in x)]\ncol_set4 = [x for x in cols if x not in ['temp1','temp2']]\n\nprint 'Set1: ', col_set1\nprint 'Set2: ', col_set2\nprint 'Set3: ', col_set3\nprint 'Set4: ', col_set4" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
piscataway/datascience
lab/02 Regression-v2.ipynb
mit
[ "Additional Lab work - Your turn!\n\nPrasanna Joshi\nRakesh Babu\n\nData source:\n\nhttps://www.eia.gov/totalenergy/data/browser/index.php?tbl=T09.01#/?f=A&start=1949&end=2016&charted=0-6\n../data/data2.csv\n\nTask at hand:\n\nData: Crude oil prices\nLoad data\nClean data\nVisualize data\nFit data\nPrediction - predict the crude price for a given year-month\n\nTime ~ 15 minutes\nSet up environment\nFirst, make sure that you import the libraries you'll need. In this case, you will want NumPy, Pandas, MatPlotLib.PyPlot, and sklearn.linear_model.LinearRegression. It's easiest to just copy the import statements from the regression1 notebook.", "# Put your code to import the libraries here.\n", "Next, you'll want to import the csv file using pandas. You should assign the file \"../data/data2.csv\" to a variable called oilData.", "# Put your code to load the data from our csv file here.\n\n\n# This is some data cleaning which we haven't gone over.\n# You can ignore this for now, we'll revisit it in a future session.\n\n# Convert the Value field to a numeric.\noilData[['Value']] = oilData[['Value']].apply(pd.to_numeric, errors=\"coerce\")\n\n# Cast the YYYYMM field to a date-time.\noilData['YYYYMM'] = pd.to_datetime(oilData['YYYYMM'], format='%Y%m', errors='coerce')\n\n# Get rid of rows missing data. Most of the stuff we'll do from now on REALLY doesn't like having missing data\noilData2 = oilData.dropna()\n\n# Print out a nice summary of the columns\noilData2.describe(include = \"all\")\n", "Visually Explore the Data\nNow that we have the data, we'd like to see what it looks like, to make sure that a linear regression is a good idea. 
To do this, we'll plot YYYYMM as the x axis, and Value as the y axis.", "# This is a way to plot data directly from the data frame without having to transform anything.\n# We'll learn more about it later.\n\noilData2.plot(x='YYYYMM', y='Value')\nplt.show()", "Create the linear regression\nYou can copy the example from the regression1 notebook and adapt it to work with our data here. Once again, X should be YYYYMM and y axis should be Value.\nDon't forget to change the column indexes when you assign values to X and y!", "# Put your code to define X and y here.\n\n\n# You'll run into a little bit of trouble when trying to plot the data directly.\n# The code in this cell converts the datetimes into a numeric so we can run the linear regression.\n# This isn't strictly the correct thing to do, but it's quick and easy.\n# We'll learn the \"right\" way to do this in another session.\n\nX2 = np.zeros(X.shape)\nfor i in range(0,X.shape[0]):\n X2[i, 0] = X[i,0] - X[0,0]\n\n# Put your linear regression training and plotting code here.\n# Make sure that you change X to X2 everywhere it exists!\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
junhwanjang/DataSchool
Lecture/14. 선형 회귀 분석/7) 분산 분석 기반의 카테고리 분석.ipynb
mit
[ "ANOVA-Based Categorical Variable Analysis\nWhen an independent variable in a regression analysis is a categorical variable, the continuous dependent variable y changes with the category value. In such cases, analysis of variance (ANOVA) can be used to quantitatively analyze the effect of the category values. Since this can also be viewed as the regression model itself changing with the category value, it can be used for model comparison as well.\nCategorical independent variables and dummy variables\nA category value represents one of several distinct states. For convenience during analysis, such values are expressed as integers like 0 and 1, but even when category values are written as numbers like 1, 2, 3, these are merely numeric stand-ins for labels such as \"A\", \"B\", \"C\", and one must keep in mind that they carry no actual notion of magnitude. That is, a value of 2 does not mean twice as large as 1, and a value of 3 does not mean three times as large as 1.\nTherefore, if category values are used as plain integers, there is a risk that the regression model will treat them as numbers with magnitude, so they must be converted into dummy variables through one-hot-encoding or a similar method.\nA dummy variable is an independent variable expressed only as 0 or 1, indicating whether a certain factor is present or absent. It is also known by the following names:\n\nindicator variable\ndesign variable\nBoolean indicator\nbinary variable\ntreatment", "from sklearn.preprocessing import OneHotEncoder\nencoder = OneHotEncoder()\n\nx0 = np.random.choice(3, 10)\nx0\n\nencoder.fit(x0[:, np.newaxis])\nX = encoder.transform(x0[:, np.newaxis]).toarray()\nX\n\ndfX = pd.DataFrame(X, columns=encoder.active_features_)\ndfX", "Dummy variables and model comparison\nUsing dummy variables is effectively the same as using several regression models at the same time.\nDummy variable example 1\n$$ Y = \\alpha_{1} + \\alpha_{2} D_2 + \\alpha_{3} D_3 $$\n\nIf $D_2 = 0, D_3 = 0$ then $Y = \\alpha_{1} $\nIf $D_2 = 1, D_3 = 0$ then $Y = \\alpha_{1} + \\alpha_{2} $\nIf $D_2 = 0, D_3 = 1$ then $Y = \\alpha_{1} + \\alpha_{3} $\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/6/61/Anova_graph.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">\nDummy variable example 2\n$$ Y = \\alpha_{1} + \\alpha_{2} D_2 + \\alpha_{3} D_3 + \\alpha_{4} X $$\n\nIf $D_2 = 0, D_3 = 0$ then $Y = \\alpha_{1} + \\alpha_{4} X $\nIf $D_2 = 1, D_3 = 0$ then $Y = \\alpha_{1} + \\alpha_{2} + \\alpha_{4} X $\nIf $D_2 = 0, D_3 = 1$ then $Y = \\alpha_{1} + \\alpha_{3} + \\alpha_{4} X $\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/2/20/Ancova_graph.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">\nDummy variable example 3\n$$ Y = \\alpha_{1} + \\alpha_{2} D_2 + \\alpha_{3} D_3 + \\alpha_{4} X + \\alpha_{5} D_4 X + \\alpha_{6} D_5 X $$\n\nIf $D_2 = 0, D_3 = 0$ then $Y = \\alpha_{1} + \\alpha_{4} X $\nIf $D_2 = 1, D_3 = 0$ then $Y = \\alpha_{1} + \\alpha_{2} + (\\alpha_{4} + \\alpha_{5}) X $\nIf $D_2 = 0, D_3 = 1$ then $Y = \\alpha_{1} + \\alpha_{3} + (\\alpha_{4} + \\alpha_{6}) X $\n\n<img src=\"https://docs.google.com/drawings/d/1U1ahMIzvOq74T90ZDuX5YOQJ0YnSJmUhgQhjhV4Xj6c/pub?w=1428&h=622\" style=\"width:90%; margin: 0 auto 0 auto;\">\nDummy variable example 4: Boston Dataset", "from sklearn.datasets import load_boston\nboston = load_boston()\ndfX0_boston = pd.DataFrame(boston.data, columns=boston.feature_names)\ndfy_boston = pd.DataFrame(boston.target, columns=[\"MEDV\"])\n\nimport statsmodels.api as sm\ndfX_boston = sm.add_constant(dfX0_boston)\n\ndf_boston = pd.concat([dfX_boston, dfy_boston], axis=1)\ndf_boston.tail()\n\ndfX_boston.CHAS.plot()\ndfX_boston.CHAS.unique()\n\nmodel = sm.OLS(dfy_boston, dfX_boston)\nresult = model.fit()\nprint(result.summary())\n\nparams1 = result.params.drop(\"CHAS\")\nparams1\n\nparams2 = params1.copy()\nparams2[\"const\"] += result.params[\"CHAS\"]\nparams2\n\ndf_boston.boxplot(\"MEDV\", \"CHAS\")\nplt.show()\n\nsns.stripplot(x=\"CHAS\", y=\"MEDV\", data=df_boston, jitter=True, alpha=.3)\nsns.pointplot(x=\"CHAS\", y=\"MEDV\", data=df_boston, dodge=True, color='r')\nplt.show()", "Model comparison using ANOVA\nTo assess the effect of a dummy variable taking $K$ category values, ANOVA comparing multiple models through an F-test can be used. \nIn this case, the meaning of each variance used in the ANOVA is as follows.\n\n\nESS: the variance of the group means (Between-Group Variance) \n $$ BSS = \\sum_{k=1}^K (\\bar{x} - \\bar{x}_k)^2 $$\n\n\nRSS: the sum of the error variances within each group (Within-Group Variance)\n $$ WSS = \\sum_{k=1}^K \\sum_{i} (x_{i} - \\bar{x}_k)^2 $$\n\n\nTSS: the variance of the total errors\n $$ TSS = \\sum_{i} (x_{i} - \\bar{x})^2 $$\n\n\n| | source | degree of freedom | mean square | F statistics | \n|-|-|-|-|-|\n| Between | $$\\text{BSS}$$ | $$K-1$$ | $$\\dfrac{\\text{ESS}}{K-1}$$ | $$F$$ |\n| Within | $$\\text{WSS}$$ | $$N-K$$ | $$\\dfrac{\\text{RSS}}{N-K}$$ | |\n| Total | $$\\text{TSS}$$ | $$N-1$$ | $$\\dfrac{\\text{TSS}}{N-1}$$ | |\n| $R^2$ | $$\\text{BSS} / \\text{TSS}$$ | | | |\nHere the null hypothesis of the F-test is $\\text{BSS}=0$, i.e. $\\text{WSS}=\\text{TSS}$; that is, the case where there is no difference between the groups.", "import statsmodels.api as sm\nmodel = sm.OLS.from_formula(\"MEDV ~ C(CHAS)\", data=df_boston)\nresult = model.fit()\ntable = sm.stats.anova_lm(result)\ntable\n\nmodel1 = sm.OLS.from_formula(\"MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT\", data=df_boston)\nmodel2 = sm.OLS.from_formula(\"MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT + C(CHAS)\", data=df_boston)\nresult1 = model1.fit()\nresult2 = model2.fit()\ntable = sm.stats.anova_lm(result1, result2)\ntable" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]