Nachtfeuer/concept-py
notebooks/2d-math-primer.ipynb
mit
import math len_vector = lambda vector: math.sqrt(vector[0]**2 + vector[1]**2) vector = [3, 4] print("vector is %s" % vector) print("vector length is %g" % len_vector(vector)) """ Explanation: Welcome to the 2d math primer The underlying repository uses Python, but this math can certainly be implemented in other languages as well. I hope you find it useful. Note that you need a fairly up-to-date Jupyter version to work with this document. Let's start with the 2d vector. The 2d vector Please check the repository for concept/math/vector.py, which contains the Python class wrapping the math logic and simplifying its usage. A vector basically provides two pieces of information: * direction * length The vector looks like $\vec{v} = \begin{pmatrix}v_x\\v_y\end{pmatrix}$ and the length of the vector is calculated by $|\vec{v}| = \sqrt{v_x \cdot v_x + v_y \cdot v_y}$ End of explanation """ sum_vector = lambda a, b: [a[0]+b[0], a[1]+b[1]] diff_vector = lambda a, b: [a[0]-b[0], a[1]-b[1]] vector_a = [1, 2] vector_b = [3, 4] print("%s + %s = %s" % (vector_a, vector_b, sum_vector(vector_a, vector_b))) print("%s - %s = %s" % (vector_a, vector_b, diff_vector(vector_a, vector_b))) """ Explanation: Sum and difference The sum of two vectors: $\vec{a} + \vec{b} = \begin{pmatrix}a_x + b_x\\a_y + b_y\end{pmatrix}$.
For the difference: $\vec{a} - \vec{b} = \begin{pmatrix}a_x - b_x\\a_y - b_y\end{pmatrix}$ End of explanation """ scale_vector = lambda f, v: [f * v[0], f * v[1]] scalar_product = lambda a, b: a[0]*b[0] + a[1]*b[1] cross_product = lambda a, b: a[0]*b[1] - a[1]*b[0] print("%g * %s = %s" % (3, vector_a, scale_vector(3, vector_a))) print("%s * %s = %s (scalar product)" % (vector_a, vector_b, scalar_product(vector_a, vector_b))) print("%s x %s = %s (cross product)" % (vector_a, vector_b, cross_product(vector_a, vector_b))) """ Explanation: Products There are different kinds of products: * Scaling a vector: $f \cdot \vec{v} = \begin{pmatrix}f \cdot v_x\\f \cdot v_y\end{pmatrix}$. * The scalar product of two vectors is $\vec{a} \cdot \vec{b} = a_x \cdot b_x + a_y \cdot b_y$ * The cross product of two vectors is $\vec{a} \times \vec{b} = a_x \cdot b_y - a_y \cdot b_x$ End of explanation """ turned_left = lambda v: [-v[1], v[0]] turned_right = lambda v: [v[1], -v[0]] vector = [0, 1] print("%s rotated 90 degrees counter-clockwise: %s" % (vector, turned_left(vector))) print("%s rotated 90 degrees clockwise: %s" % (vector, turned_right(vector))) """ Explanation: Orthogonal vector For a given vector there are two orthogonal vectors: $\vec{v} = \begin{pmatrix}v_x\\v_y\end{pmatrix}$, $\vec{v_{left}} = \begin{pmatrix}-v_y\\v_x\end{pmatrix}$, $\vec{v_{right}} = \begin{pmatrix}v_y\\-v_x\end{pmatrix}$.
End of explanation """ vector_angle = lambda u, v: math.atan2(u[1], u[0]) - math.atan2(v[1], v[0]) vector_a = [0, 1] vector_b = [1, 0] print("Angle between %s and %s is %g degrees" \ % (vector_a, vector_b, vector_angle(vector_a, vector_b) * 180.0/math.pi)) """ Explanation: Angle between vectors The angle between two vectors is calculated as $\mathrm{atan2}(u_y, u_x) - \mathrm{atan2}(v_y, v_x)$ End of explanation """ vector_rotated = lambda v, a: [round(v[0] * math.cos(a) - v[1] * math.sin(a), 5),\ round(v[0] * math.sin(a) + v[1] * math.cos(a), 5)] vector = [1, 0] print("Vector %s rotated 90 degrees to the left is %s" \ %(vector, vector_rotated(vector, 90 * math.pi / 180.0))) print("Vector %s rotated 90 degrees to the right is %s" \ %(vector, vector_rotated(vector, -90 * math.pi / 180.0))) """ Explanation: Rotation of a vector The rotation of a vector by an angle $\alpha$ is calculated with $\vec{v}_{\alpha} = \begin{pmatrix}v_x \cdot \cos{\alpha} - v_y \cdot \sin{\alpha}\\v_x \cdot \sin{\alpha} + v_y \cdot \cos{\alpha}\end{pmatrix}$ End of explanation """
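The formulas above can be tied together in a small, self-contained sketch. It uses plain tuples instead of the repository's vector class (an assumption for illustration only):

```python
import math

# Length, angle and rotation of a 2d vector, matching the formulas above.
def length(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def angle_between(u, v):
    # signed angle from v to u, in radians
    return math.atan2(u[1], u[0]) - math.atan2(v[1], v[0])

def rotated(v, alpha):
    # counter-clockwise rotation by alpha radians
    return (v[0] * math.cos(alpha) - v[1] * math.sin(alpha),
            v[0] * math.sin(alpha) + v[1] * math.cos(alpha))

w = rotated((1.0, 0.0), math.pi / 2)
print(length((3.0, 4.0)))              # 5.0
print(round(w[0], 5), round(w[1], 5))  # close to (0, 1)
```

Rotating by a negative angle gives the clockwise variant used in the cells above.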
Benedicto/ML-Learning
Classifier_1_linear_regression.ipynb
gpl-3.0
from __future__ import division import graphlab import math import string """ Explanation: Predicting sentiment from product reviews The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions. In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative. Use SFrames to do some feature engineering Train a logistic regression model to predict the sentiment of product reviews. Inspect the weights (coefficients) of a trained logistic regression model. Make a prediction (both class and probability) of sentiment for a new product review. Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model. Inspect the coefficients of the logistic regression model and interpret their meanings. Compare multiple logistic regression models. Let's get started! Fire up GraphLab Create Make sure you have the latest version of GraphLab Create. End of explanation """ products = graphlab.SFrame('amazon_baby.gl/') """ Explanation: Data preparation We will use a dataset consisting of baby product reviews on Amazon.com. End of explanation """ products """ Explanation: Now, let us see a preview of what the dataset looks like. End of explanation """ products[269] """ Explanation: Build the word count vector for each review Let us explore a specific example of a baby product. End of explanation """ def remove_punctuation(text): import string return text.translate(None, string.punctuation) review_without_punctuation = products['review'].apply(remove_punctuation) products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation) """ Explanation: Now, we will perform 2 simple data transformations: Remove punctuation using Python's built-in string functionality. Transform the reviews into word-counts. Aside.
In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation. End of explanation """ products[269]['word_count'] """ Explanation: Now, let us explore what the example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs. End of explanation """ products = products[products['rating'] != 3] len(products) products = products.filter_by([3.], 'rating', exclude=True) len(products) """ Explanation: Extract sentiments We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. End of explanation """ products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1) products """ Explanation: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with a rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label. End of explanation """ train_data, test_data = products.random_split(.8, seed=1) print len(train_data) print len(test_data) """ Explanation: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1). Split data into training and test sets Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation """ sentiment_model = graphlab.logistic_classifier.create(train_data, target = 'sentiment', features=['word_count'], validation_set=None) sentiment_model """ Explanation: Train a sentiment classifier with logistic regression We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain the same results as everyone else. Note: This line may take 1-2 minutes. End of explanation """ weights = sentiment_model.coefficients weights.column_names() """ Explanation: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above. Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows: End of explanation """ num_positive_weights = len(weights[weights['value']>=0]) num_negative_weights = len(weights[weights['value']<0]) print "Number of positive weights: %s " % num_positive_weights print "Number of negative weights: %s " % num_negative_weights """ Explanation: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).
End of explanation """ sample_test_data = test_data[10:13] print sample_test_data['rating'] sample_test_data """ Explanation: Quiz question: How many weights are >= 0? Making predictions with logistic regression Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data. End of explanation """ sample_test_data[0]['review'] """ Explanation: Let's dig deeper into the first row of the sample_test_data. Here's the full review: End of explanation """ sample_test_data[1]['review'] """ Explanation: That review seems pretty positive. Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative. End of explanation """ scores = sentiment_model.predict(sample_test_data, output_type='margin') print scores sentiment_model.predict? """ Explanation: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as: $$ \mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i) $$ where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf]. End of explanation """ print "Class predictions according to GraphLab Create:" print sentiment_model.predict(sample_test_data) """ Explanation: Predicting sentiment These scores can be used to make class predictions as follows: $$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \end{array} \right.
$$ Using scores, write code to calculate $\hat{y}$, the class predictions: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create. End of explanation """ def probability(score): import math return 1./(1+math.exp(-score)) for score in scores: print probability(score) """ Explanation: Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create. Probability predictions Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$ Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1]. End of explanation """ print "Class predictions according to GraphLab Create:" print sentiment_model.predict(sample_test_data, output_type='probability') """ Explanation: Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create. End of explanation """ probabilities = sentiment_model.predict(test_data, output_type='probability') test_data['prediction'] = probabilities top20 = test_data.topk('prediction', 20)['name'] for product in ['Snuza Portable Baby Movement Monitor', 'MamaDoo Kids Foldable Play Yard Mattress Topper, Blue', 'Britax Decathlon Convertible Car Seat, Tiffany', 'Safety 1st Exchangeable Tip 3 in 1 Thermometer']: print product, product in top20 """ Explanation: Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review? Find the most positive (and negative) review We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance. 
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews." To calculate these top-20 reviews, use the following steps: 1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.) 2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.) End of explanation """ worst20 = test_data.sort('prediction')[0:20]['name'] for product in ['The First Years True Choice P400 Premium Digital Monitor, 2 Parent Unit', 'JP Lizzy Chocolate Ice Classic Tote Set', 'Peg-Perego Tatamia High Chair, White Latte', 'Safety 1st High-Def Digital Monitor']: print product, product in worst20 """ Explanation: Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice] Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order. End of explanation """ def get_classification_accuracy(model, data, true_labels): # First get the predictions ## YOUR CODE HERE predictions = model.predict(data) # Compute the number of correctly classified examples ## YOUR CODE HERE correct = (predictions == true_labels).sum() # Then compute accuracy by dividing num_correct by total number of examples ## YOUR CODE HERE accuracy = correct * 1. / len(data) return accuracy """ Explanation: Quiz Question: Which of the following products are represented in the 20 most negative reviews?
[multiple choice] Compute accuracy of the classifier We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{\# correctly classified examples}}{\mbox{\# total examples}} $$ This can be computed as follows: Step 1: Use the trained model to compute class predictions (Hint: Use the predict method) Step 2: Count the number of data points where the predicted class labels match the ground truth labels (called true_labels below). Step 3: Divide the total number of correct predictions by the total number of data points in the dataset. Complete the function below to compute the classification accuracy: End of explanation """ get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']) """ Explanation: Now, let's compute the classification accuracy of the sentiment_model on the test_data. End of explanation """ significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', 'work', 'product', 'money', 'would', 'return'] len(significant_words) """ Explanation: Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76). Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better? Learn another classifier with fewer words There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with.
These are: End of explanation """ train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False) test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False) """ Explanation: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set. End of explanation """ train_data[0]['review'] """ Explanation: Let's see what the first example of the dataset looks like: End of explanation """ print train_data[0]['word_count'] """ Explanation: The word_count column we had been working with before looks like the following: End of explanation """ print train_data[0]['word_count_subset'] """ Explanation: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review. End of explanation """ simple_model = graphlab.logistic_classifier.create(train_data, target = 'sentiment', features=['word_count_subset'], validation_set=None) simple_model """ Explanation: Train a logistic regression model on a subset of data We will now build a classifier with word_count_subset as the feature and sentiment as the target. End of explanation """ get_classification_accuracy(simple_model, test_data, test_data['sentiment']) """ Explanation: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
End of explanation """ simple_model.coefficients """ Explanation: Now, we will inspect the weights (coefficients) of the simple_model: End of explanation """ simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21) """ Explanation: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment. End of explanation """ coefficients = simple_model.coefficients coefficients_words = coefficients[coefficients['name'] != '(intercept)'] (coefficients_words['value'] > 0).sum() """ Explanation: Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model? End of explanation """ joined = coefficients_words.join(sentiment_model.coefficients, 'index') print (joined['value']>0) print (joined['value.1']>0) """ Explanation: Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model? End of explanation """ get_classification_accuracy(sentiment_model, train_data, train_data['sentiment']) """ Explanation: Comparing models We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above. First, compute the classification accuracy of the sentiment_model on the train_data: End of explanation """ get_classification_accuracy(simple_model, train_data, train_data['sentiment']) """ Explanation: Now, compute the classification accuracy of the simple_model on the train_data: End of explanation """ get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']) """ Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set? 
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data: End of explanation """ get_classification_accuracy(simple_model, test_data, test_data['sentiment']) """ Explanation: Next, we will compute the classification accuracy of the simple_model on the test_data: End of explanation """ num_positive = (train_data['sentiment'] == +1).sum() num_negative = (train_data['sentiment'] == -1).sum() print num_positive print num_negative """ Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set? Baseline: Majority class prediction It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority classifier model predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier, otherwise, the model is (usually) pointless. What is the majority class in the train_data? End of explanation """ num_positive = (test_data['sentiment'] == +1).sum() num_negative = (test_data['sentiment'] == -1).sum() print num_positive print num_negative num_positive * 1. / len(test_data) """ Explanation: Now compute the accuracy of the majority class classifier on test_data. Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76). End of explanation """
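The GraphLab calls used throughout this notebook hide very little math. Here is a standalone sketch of the score-to-probability mapping, the class decision rule, the accuracy metric, and the majority-class baseline — written Python-3-style (unlike the Python 2 notebook) on made-up toy labels, not the Amazon reviews:

```python
import math

def probability(score):
    # P(y = +1 | x, w) = 1 / (1 + exp(-score))
    return 1.0 / (1.0 + math.exp(-score))

def classify(score):
    # y_hat = +1 if the score (margin) is positive, else -1
    return +1 if score > 0 else -1

scores = [2.0, -1.5, 0.3]          # toy margins, not real model output
true_labels = [+1, -1, -1]

predictions = [classify(s) for s in scores]
accuracy = sum(p == t for p, t in zip(predictions, true_labels)) / float(len(true_labels))

# majority-class baseline: always predict the most frequent label
majority = +1 if true_labels.count(+1) >= true_labels.count(-1) else -1
baseline = sum(majority == t for t in true_labels) / float(len(true_labels))
print(predictions, accuracy, baseline)
```

On this toy data the trained-model accuracy and the baseline happen to tie, which is exactly the situation the majority-class comparison is meant to flag.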
IBMDecisionOptimization/docplex-examples
examples/mp/jupyter/pasta_production.ipynb
apache-2.0
import sys try: import docplex.mp except: raise Exception('Please install docplex. See https://pypi.org/project/docplex/') """ Explanation: The Pasta Production Problem This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer. When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics. This notebook is part of Prescriptive Analytics for Python. It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account and you can start using IBM Cloud Pak for Data as a Service right away). CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>: - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used: - <i>Python 3.x</i> runtime: Community edition - <i>Python 3.x + DO</i> runtime: full edition - <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO add-on in Watson Studio Premium for the full edition Table of contents: Describe the business problem How decision optimization (prescriptive analytics) can help Use decision optimization Step 1: Import the library Step 2: Model the data Step 3: Prepare the data Step 4: Set up the prescriptive model Define the decision variables Express the business constraints Express the objective Solve with Decision Optimization Step 5: Investigate the solution and run an example analysis Summary Describe the business problem This notebook describes how to use CPLEX Modeling for Python to manage the production of pasta to meet demand with your resources. The model aims at minimizing the production cost for a number of products while satisfying customer demand. Each product can be produced either inside the company or outside, at a higher cost.
The inside production is constrained by the company's resources, while outside production is considered unlimited. The model first declares the products and the resources. The data consists of the description of the products (the demand, the inside and outside costs, and the resource consumption) and the capacity of the various resources. The variables for this problem are the inside and outside production for each product. How decision optimization can help Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. <br/> <u>With prescriptive analytics, you can:</u> Automate the complex decisions and trade-offs to better manage your limited resources. Take advantage of a future opportunity or mitigate a future risk. Proactively update recommendations based on changing events. Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Import the library Run the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp). 
End of explanation """ products = [("kluski", 100, 0.6, 0.8), ("capellini", 200, 0.8, 0.9), ("fettucine", 300, 0.3, 0.4)] # resources are a list of simple tuples (name, capacity) resources = [("flour", 20), ("eggs", 40)] consumptions = {("kluski", "flour"): 0.5, ("kluski", "eggs"): 0.2, ("capellini", "flour"): 0.4, ("capellini", "eggs"): 0.4, ("fettucine", "flour"): 0.3, ("fettucine", "eggs"): 0.6} """ Explanation: Step 2: Model the data The data consists of the description of the products (the demand, the inside and outside costs, and the resource consumption) and the capacity of the various resources. End of explanation """ from docplex.mp.model import Model mdl = Model(name="pasta") """ Explanation: Step 3: Prepare the data The data is very simple and ready to use without any cleansing, massaging, or refactoring. Step 4: Set up the prescriptive model Create the DOcplex model The model contains all the business constraints and defines the objective. We now use CPLEX Modeling for Python to build a Mixed Integer Programming (MIP) model for this problem. End of explanation """ inside_vars = mdl.continuous_var_dict(products, name='inside') outside_vars = mdl.continuous_var_dict(products, name='outside') """ Explanation: Define the decision variables End of explanation """
End of explanation """ total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products) total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products) mdl.minimize(total_inside_cost + total_outside_cost) """ Explanation: Express the objective Minimizing the production cost for a number of products while satisfying customer demand End of explanation """ mdl.solve() """ Explanation: Solve with Decision Optimization Now we have everything we need to solve the model, using Model.solve(). The following cell solves using your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable). End of explanation """ obj = mdl.objective_value print("* Production model solved with objective: {:g}".format(obj)) print("* Total inside cost=%g" % total_inside_cost.solution_value) for p in products: print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value)) print("* Total outside cost=%g" % total_outside_cost.solution_value) for p in products: print("Outside production of {product}: {out_var}".format(product=p[0], out_var=outside_vars[p].solution_value)) """ Explanation: Step 5: Investigate the solution and then run an example analysis End of explanation """
Cat-n-Dog/follow-m
Restaurant.ipynb
mit
train_df = pd.read_csv('train.csv', parse_dates=[1]) #print train_df.head(n=5) train_df.City = train_df.City.astype('category') train_df.Type = train_df.Type.astype('category') train_df['City Group'] = train_df['City Group'].astype('category') #train_df.dtypes #d = train_df['Open Date'] #d.map( lambda x : x.year ) train_df['Open_Year'] = train_df['Open Date'].map( lambda x : x.year) train_df['Open_Month'] = train_df['Open Date'].map( lambda x : x.month) print train_df.head() train_df['City Group'].unique() train_df_grouped_by_city = train_df.groupby(by='City') city_avg = train_df_grouped_by_city.revenue.aggregate(np.average) city_avg.plot(kind='bar') train_df_grouped_by_city = train_df.groupby(by='City') city_count = train_df_grouped_by_city.revenue.aggregate(np.count_nonzero) #print city_count city_count.plot(kind='bar') train_df_grouped_by_city_group = train_df.groupby(by='City Group') city_group_avg = train_df_grouped_by_city_group.revenue.aggregate(np.average) print city_group_avg print train_df_grouped_by_city_group.City.value_counts() city_group_avg.plot(kind='bar') train_df_grouped_by_year = train_df.groupby(by='Open_Year') year_avg = train_df_grouped_by_year.revenue.aggregate(np.average) print train_df_grouped_by_year.Open_Year.value_counts() year_avg.plot(kind='line') print train_df[ train_df.Open_Year == 2000 ] print train_df.iloc[[16, 85]]['P29'] train_df.P29.plot(kind='box') train_df.P29.plot() train_df.P29.plot(kind='hist') train_df.plot(x='P29', y='revenue', kind='scatter') """ Explanation: Restaurant Revenue Prediction ---- A Kaggle Competition In this notebook, we'll try to tackle a Kaggle problem. The object is to predict annual restaurant sales based on objective measurements. This is an interesting problem to me because it's set up in a classic regression setting. So all classic regression techniques should be applicable to the problem. 
The trick, though, as with all machine learning problems, is to carefully test the various models and select the most generalizable one. Competition Description With over 1,200 quick service restaurants across the globe, TFI is the company behind some of the world's most well-known brands: Burger King, Sbarro, Popeyes, Usta Donerci, and Arby’s. They employ over 20,000 people in Europe and Asia and make significant daily investments in developing new restaurant sites. Right now, deciding when and where to open new restaurants is largely a subjective process based on the personal judgement and experience of development teams. This subjective data is difficult to accurately extrapolate across geographies and cultures. New restaurant sites take large investments of time and capital to get up and running. When the wrong location for a restaurant brand is chosen, the site closes within 18 months and operating losses are incurred. Finding a mathematical model to increase the effectiveness of investments in new restaurant sites would allow TFI to invest more in other important business areas, like sustainability, innovation, and training for new employees. Using demographic, real estate, and commercial data, this competition challenges you to predict the annual restaurant sales of 100,000 regional locations. Data Exploration End of explanation """
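The groupby/aggregate pattern used for the exploration plots in this section reduces to a running-sum dictionary. A pandas-free sketch on made-up rows (the real figures live in train.csv):

```python
# Per-city average revenue, the same aggregation as
# train_df.groupby(by='City').revenue.aggregate(np.average).
rows = [("Istanbul", 5.0), ("Ankara", 3.0), ("Istanbul", 7.0)]

totals = {}
for city, revenue in rows:
    s, n = totals.get(city, (0.0, 0))
    totals[city] = (s + revenue, n + 1)

city_avg = {city: s / n for city, (s, n) in totals.items()}
print(city_avg)  # {'Istanbul': 6.0, 'Ankara': 3.0}
```

Swapping the accumulator (count instead of sum, min/max, and so on) gives the other aggregations used below.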
End of explanation """ import datetime opendate = trainingdf['Open Date'] dates = [datetime.datetime.strptime(date, '%m/%d/%Y') for date in opendate] mindate = min(dates) ageByDays = [(item-mindate).days for item in dates] print ageByDays[:10] """ Explanation: convert date strings to days from the first restaurant opened End of explanation """ import numpy as np city = trainingdf['City'] b,c = np.unique(city, return_inverse=True) print c[:10] citygroup = trainingdf['City Group'] b,citygroupcat = np.unique(citygroup, return_inverse=True) print citygroupcat[:10] t = trainingdf['Type'] b,tcat = np.unique(t, return_inverse=True) print tcat[:10] trainingdf['Open Date'] = ageByDays trainingdf['City'] = c trainingdf['City Group'] = citygroupcat trainingdf['Type'] = tcat X_train = trainingdf[trainingdf.columns[1:-1]] y_train = trainingdf['revenue'] print X_train.head() """ Explanation: convert string categories to integer categories End of explanation """ train_df_without_id = pd.read_csv('train.csv', parse_dates=[1], index_col=0) print train_df_without_id train_df_without_id_revenue = train_df_without_id.drop('revenue', axis=1) print train_df_without_id_revenue.shape train_df_without_id_revenue_unique = train_df_without_id_revenue.drop_duplicates() print train_df_without_id_revenue_unique.shape two_row = train_df_without_id_revenue.iloc[[16,85],:] p_list = ['P' + str(i + 1) for i in xrange(37)] print two_row.drop_duplicates(subset=['Open Date', 'City', 'Type'] + p_list) """ Explanation: Remove Duplicates End of explanation """ from sklearn import linear_model lin_reg = linear_model.LinearRegression() lin_reg_fit = lin_reg.fit(X_train, y_train) in_sample_diff = (lin_reg_fit.predict(X_train) - y_train) in_sample_diff2 = in_sample_diff.map(lambda x : x * x) np.sqrt(in_sample_diff2.mean()) """ Explanation: Shit... 
They are **duplicates**! First cut, linear regression End of explanation """ from sklearn.linear_model import LinearRegression k = 5 features = [] columns = list(X_train.columns) for i in range(k): rSquare = [] for j in range(len(columns)): model = LinearRegression() X = X_train[columns[j]] # fixed: iterate over the remaining candidate columns, not all of X_train's columns f = pd.DataFrame(X_train, columns=features) f['newX'] = X model.fit(f, y_train) rscore = model.score(f, y_train) rSquare.append(rscore) feature = columns[np.argmax(map(abs, rSquare))] # fixed: index into the candidate list features.append(feature) columns.remove(feature) print features """ Explanation: forward stepwise selection select five features using forward stepwise selection, at each step keeping the feature that yields the highest linear regression R². TODO: note that some of the variables are categorical, and dummy variables should be added for them. End of explanation """ import statsmodels.formula.api as sm df = pd.DataFrame({'y':y_train, 'x':X_train['Open Date']}) result = sm.ols(formula="y ~ x", data = df).fit() print result.summary() import matplotlib.pyplot as plt model = LinearRegression() # standardize and feature1 were undefined in the original cell; plausible definitions: standardize = lambda s: (s - s.mean()) / s.std() feature1 = 'Open Date' X = standardize(X_train[[feature1]]) # double brackets keep X two-dimensional for sklearn model.fit(X, standardize(y_train)) predy = model.predict(X) * y_train.std() + y_train.mean() plt.scatter(X_train['Open Date'], y_train, color='black') plt.plot(X_train['Open Date'], predy, color='blue', linewidth=3) plt.title('regress revenue on opendate') plt.xlabel('open date diff first open date') plt.ylabel('revenue') """ Explanation: simple regression on Open Date End of explanation """ from sklearn.decomposition import PCA import statsmodels.formula.api as sm pcolumns = ['P'+str(i) for i in range(1,38)] X_P = pd.DataFrame(X_train, columns=pcolumns) pca = PCA(n_components=3) train_xp = pca.fit_transform(X_P) print 'the explained variance ratio of pca is ', pca.explained_variance_ratio_ df = pd.DataFrame({'y':y_train, 'x1':train_xp[:,0], 'x2':train_xp[:,1], 'x3':train_xp[:,2]}) result = sm.ols(formula='y~x1+x2+x3',data=df).fit() print result.summary() # todo # looks like P1-P37 can be represented by two or three
# variables by PCA # and also column City has 37 categories, which also need to be classified into two or three categories, # so that there are not so many dummy variables to be added # select five features by forward stepwise selection (AIC? BIC? adjusted R²?) """ Explanation: factor analysis on Ps -- regress revenue on three factor loadings of P1 to P37 The result looks pretty sad: a simple regression on P2 alone would give an R² of 0.037, yet the three factors decrease the R², and the adjusted R² even goes negative... End of explanation """ from sklearn.linear_model import LinearRegression from sklearn.feature_selection import RFE model = LinearRegression() rfe = RFE(model, 5) rfe = rfe.fit(X_train, y_train) print rfe.support_ print rfe.ranking_ """ Explanation: feature selection with the scikit-learn package End of explanation """
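The dummy-variable TODOs above can be handled with one-hot encoding before refitting any of these models; a minimal sketch using pandas' `get_dummies` (the column names follow this dataset, but the sample values here are made up):

```python
import pandas as pd

# Tiny made-up frame mimicking the integer-coded categorical columns above
X = pd.DataFrame({'City Group': [0, 1, 0],
                  'Type': [2, 0, 1],
                  'P1': [4, 5, 4]})

# Expand each categorical column into 0/1 indicator columns,
# leaving the numeric P-columns untouched
X_dummies = pd.get_dummies(X, columns=['City Group', 'Type'])

print(sorted(X_dummies.columns))
```

With the indicator columns in place, the stepwise loop or RFE above could be rerun on `X_dummies` instead of the raw integer codes.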
mohanprasath/Course-Work
coursera/data_science_methodology/DS0103EN-2-2-1-From-Requirements-to-Collection-v1.0.ipynb
gpl-3.0
# check Python version !python -V """ Explanation: <a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a> <h1 align=center><font size = 5>From Requirements to Collection</font></h1> Introduction In this lab, we will continue learning about the data science methodology, and focus on the Data Requirements and the Data Collection stages. Table of Contents <div class="alert alert-block alert-info" style="margin-top: 20px"> 1. [Data Requirements](#0)<br> 2. [Data Collection](#2)<br> </div> <hr> Data Requirements <a id="0"></a> <img src="https://ibm.box.com/shared/static/dv60vn9nq3kb3n7efc5lxtxfgp11stfz.png" width=500> In the videos, we learned that the chosen analytic approach determines the data requirements. Specifically, the analytic methods to be used require certain data content, formats and representations, guided by domain knowledge. In the From Problem to Approach Lab, we determined that automating the process of determining the cuisine of a given recipe or dish is potentially possible using the ingredients of the recipe or the dish. In order to build a model, we need extensive data of different cuisines and recipes. Identifying the required data fulfills the data requirements stage of the data science methodology. Data Collection <a id="2"></a> <img src = "https://ibm.box.com/shared/static/hgocgq6no4d09pbr140hmt3jtjizy2da.png" width=500> In the initial data collection stage, data scientists identify and gather the available data resources. These can be in the form of structured, unstructured, and even semi-structured data relevant to the problem domain. 
Web Scraping of Online Food Recipes A researcher named Yong-Yeol Ahn scraped tens of thousands of food recipes (cuisines and ingredients) from three different websites, namely: <img src = "https://ibm.box.com/shared/static/4fruwan7wmjov3gywiz3swlojw0srv54.png" width=500> www.allrecipes.com <img src = "https://ibm.box.com/shared/static/cebfdbr22fjxa47lltp0bs533r103g0z.png" width=500> www.epicurious.com <img src = "https://ibm.box.com/shared/static/epk727njg7xrz49pbkpkzd05cm5ywqmu.png" width=500> www.menupan.com For more information on Yong-Yeol Ahn and his research, you can read his paper on Flavor Network and the Principles of Food Pairing. Luckily, we will not need to carry out any data collection as the data that we need to meet the goal defined in the business understanding stage is readily available. We have already acquired the data and placed it on an IBM server. Let's download the data and take a look at it. <strong>Important note:</strong> Please note that you are not expected to know how to program in Python. The following code is meant to illustrate the stage of data collection, so it is totally fine if you do not understand the individual lines of code. We have a full course on programming in Python, <a href="http://cocl.us/PY0101EN_DS0103EN_LAB2_PYTHON_Coursera"><strong>Python for Data Science</strong></a>, which is also offered on Coursera. So make sure to complete the Python course if you are interested in learning how to program in Python. Using this notebook: To run any of the following cells of code, you can type Shift + Enter to execute the code in a cell. Get the version of Python installed.
End of explanation """ import pandas as pd # import the pandas library to read data into a dataframe pd.set_option('display.max_columns', None) recipes = pd.read_csv("https://ibm.box.com/shared/static/5wah9atr5o1akuuavl2z9tkjzdinr1lv.csv") print("Data read into dataframe!") # takes about 30 seconds """ Explanation: Read the data from the IBM server into a pandas dataframe. End of explanation """ recipes.head() """ Explanation: Show the first few rows. End of explanation """ recipes.shape """ Explanation: Get the dimensions of the dataframe. End of explanation """
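Right after loading, it is also worth checking a frame for missing values, in the same spirit as the `head()` and `shape` checks above; a small sketch on a stand-in frame (the column names here are hypothetical, not necessarily those of the recipes dataset):

```python
import pandas as pd

# Stand-in frame; the real recipes frame is loaded above with pd.read_csv
df = pd.DataFrame({'cuisine': ['Vietnamese', 'Korean', None],
                   'almond': ['No', 'Yes', 'No']})

print(df.shape)           # (rows, columns), same check as recipes.shape
print(df.isnull().sum())  # count of missing values per column
```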
LivingProgram/kaggle-sea-lion-data
Correct Coordinates & Sea Lion Counts.ipynb
cc0-1.0
# imports import numpy as np import pandas as pd import os import cv2 import matplotlib.pyplot as plt import skimage.feature from tqdm import tqdm # nice progress bars %matplotlib inline # constants TRAIN_PATH = '../data/Train/' DOTTED_PATH = '../data/TrainDotted/' OUT_PATH = '../output/' ALL_FILE_NAMES = os.listdir(DOTTED_PATH) # all our training file names ALL_FILE_NAMES = sorted(ALL_FILE_NAMES, key = lambda item: int(item.partition('.')[0])) """ Explanation: Correct Coordinates & Sea Lion Counts by LivingProgram The Explanation Introduction In this competition we were given dotted training files as labels for the sea lions in each image instead of coordinates. Not only are the dotted training files useless without coordinates, but many of the reported numbers in train.csv are incorrect. The goal of this kernel is to illustrate exactly what I did to generate a coordinates file, and a corrected train.csv. These steps do not need to be performed again; you can just download the coordinates file here and the count file here, then check my results using the coordinate marking function in the "Visualizing Coordinates" section. What I will cover: - Setting Up Folder Structure - Creation of Initial Coordinates File, using Radu's Method - Visualizing Coordinates, using Radu's Method - Correcting Initial Coordinates File - Correcting train.csv Setting Up Folder Structure You can set up your folders however you want, but to be compatible with the code below, I will illustrate my folder structure. I have 3 folders: 1. the "code" folder contains this kernel 2. the "data" folder contains the extracted "Test", "Train" and "TrainDotted" data folders 3.
the "output" folder is where the "correct_coordinates.csv" and "correct_train.csv" file will be created <img style="float: left;border:5px solid black;" src="folder_structure.jpg"> Creation of Initial Coordinates File Radu's Method was implemented here to extract the coordinates End of explanation """ MISMATCHED_TRAIN = [3, 7, 9, 21, 30, 34, 71, 81, 89, 97, 151, 184, 215, 234, 242, 268, 290, 311, 331, 344, 380, 384, 406, 421, 469, 475, 490, 499, 507, 530, 531, 605, 607, 614, 621, 638, 644, 687, 712, 721, 767, 779, 781, 794, 800, 811, 839, 840, 869, 882, 901, 903, 905, 909, 913, 927, 946] FILE_NAMES = [] for filename in ALL_FILE_NAMES: if int(filename.partition('.')[0]) in MISMATCHED_TRAIN: pass else: FILE_NAMES.append(filename) # create FILE_NAMES without MISMATCHED_TRAIN images """ Explanation: Due to the fact some images are mismatched in the training set, and will not work for this method (reference to datacanary's post), I removed those images from the entire list of training files. End of explanation """ count_df = pd.DataFrame(index = FILE_NAMES, columns = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"]).fillna(0) """ Explanation: Now I created two pandas dataframes that will later be saved to csv files. One of them (count_df), will record the number of each category of sea lion the method was able to discover in the image. End of explanation """ coordinates_df = pd.DataFrame(columns = ["filename", "y_coord", "x_coord", "category"]).fillna(0) """ Explanation: The other dataframe (coordinates_df), will record the coordinates of every discovered sea lion (y_coord, x_coord), the image the sea lion is from (filename), and the category of the sea lion. 
End of explanation """ for filename in tqdm(FILE_NAMES): img_dotted = cv2.imread(DOTTED_PATH + filename) img_train = cv2.imread(TRAIN_PATH + filename) img_diff = cv2.absdiff(img_train , img_dotted) mask_1 = cv2.cvtColor(img_dotted, cv2.COLOR_BGR2GRAY) mask_1[mask_1 < 20] = 0 mask_1[mask_1 > 0] = 255 mask_2 = cv2.cvtColor(img_train, cv2.COLOR_BGR2GRAY) mask_2[mask_2 < 20] = 0 mask_2[mask_2 > 0] = 255 img_diff = cv2.bitwise_or(img_diff, img_diff, mask=mask_1) img_diff = cv2.bitwise_or(img_diff, img_diff, mask=mask_2) img_diff = cv2.cvtColor(img_diff, cv2.COLOR_BGR2GRAY) blobs = skimage.feature.blob_log(img_diff, min_sigma=3, max_sigma=4, num_sigma=1, threshold=0.02) for blob in blobs: y, x, s = blob b,g,r = img_dotted[int(y)][int(x)][:] if r > 204 and g < 29 and b < 26: # RED count_df["adult_males"][filename] += 1 new_row = pd.Series([filename, int(y), int(x), "adult_males"], index=["filename", "y_coord", "x_coord", "category"]) coordinates_df = coordinates_df.append(new_row, ignore_index=True) elif r > 220 and g < 25 and b > 204: # MAGENTA count_df["subadult_males"][filename] += 1 new_row = pd.Series([filename, int(y), int(x), "subadult_males"], index=["filename", "y_coord", "x_coord", "category"]) coordinates_df = coordinates_df.append(new_row, ignore_index=True) elif 6 < r < 64 and 156 < g < 199 and b < 52: # GREEN count_df["pups"][filename] += 1 new_row = pd.Series([filename, int(y), int(x), "pups"], index=["filename", "y_coord", "x_coord", "category"]) coordinates_df = coordinates_df.append(new_row, ignore_index=True) elif r < 78 and 31 < g < 85 and 124 < b < 221: # BLUE count_df["juveniles"][filename] += 1 new_row = pd.Series([filename, int(y), int(x), "juveniles"], index=["filename", "y_coord", "x_coord", "category"]) coordinates_df = coordinates_df.append(new_row, ignore_index=True) elif 59 < r < 115 and 19 < g < 80 and b < 49: # BROWN count_df["adult_females"][filename] += 1 new_row = pd.Series([filename, int(y), int(x), "adult_females"], index=["filename", 
"y_coord", "x_coord", "category"]) coordinates_df = coordinates_df.append(new_row, ignore_index=True) count_df.to_csv(OUT_PATH + 'initial_count.csv') coordinates_df.to_csv(OUT_PATH + 'initial_coordinates.csv') """ Explanation: Then I iterated over all the training files, and extracted the y_coord, x_coord, and category according to Radu's Method, and saved the count_df dataframe to initial_count.csv, and the coordinates_df dataframe to initial_coordinates.csv End of explanation """ def report_error(count_file): # checking that the generated "initial_count.csv" matches "train.csv" true sea lion numbers count_df = pd.read_csv(OUT_PATH + count_file, index_col=0) true_count_df = pd.read_csv(TRAIN_PATH + 'train.csv') categories = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"] wrong_files_dict = {} for filename, row in count_df.iterrows(): train_id = int(filename.partition('.')[0]) wrong_list = [] for category in categories: predicted_val = int(row[category]) true_val = int(true_count_df[category][train_id]) if predicted_val != true_val: wrong_list.append([category, predicted_val, true_val]) if len(wrong_list) != 0: wrong_files_dict[int(filename.partition('.')[0])] = wrong_list wrong_files_list = list(wrong_files_dict.keys()) wrong_files_list = sorted(wrong_files_list, key=int) for img_id in wrong_files_list: filename = str(img_id) + '.jpg' wrong_categories = wrong_files_dict[img_id] print(filename) for item in wrong_categories: category = item[0] predicted_val = item[1] true_val = item[2] print(' ' + category + ': predicted=' + str(predicted_val) + ', True=' + str(true_val)) report_error('initial_count.csv') """ Explanation: Generating an Error Report Here I compared the number of sea lions counted by Radu's method, to the "true" counts given in train.csv. The function below takes in the count file (initial_count.csv) and compares it to train.csv. 
It then prints a list of images along with the specific categories of sea lions and the predicted values by Radu's method, versus the "true" values from train.csv. End of explanation """ def graph_coord_circles(FILE_NAMES, coord_file): coordinates_df = pd.read_csv(OUT_PATH + coord_file) for filename in FILE_NAMES: new_df = coordinates_df.loc[coordinates_df['filename'] == filename] dotted_img = cv2.imread(DOTTED_PATH + filename) for index, row in new_df.iterrows(): if row['category'] == 'adult_males': cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (0,0,255), 2) elif row['category'] == 'subadult_males': cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (250,10,250), 2) elif row['category'] == 'pups': cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (20,180,35), 2) elif row['category'] == 'juveniles': cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (180,60,30), 2) elif row['category'] == 'adult_females': cv2.circle(dotted_img, (int(row['x_coord']), int(row['y_coord'])), 8, (0,42,84), 2) cv2.imwrite(OUT_PATH + str(filename.partition('.')[0]) + '_marked.jpg', dotted_img) # uncomment the line below and run this cell to generate marked images for all the training files # graph_coord_circles(FILE_NAMES, 'initial_coordinates.csv') """ Explanation: I noticed that there were many images with discrepancies between our generated counts and the train.csv counts. The only way to verify whether the generated counts or the train.csv counts were correct was to manually check the coordinates for the images with discrepancies. Note: Images without discrepancies between generated and train.csv counts will not be manually checked, and will be assumed to have accurate coordinates and counts. Other images may be manually checked later; check the bottom for "Edits" to the data files. In order to check the validity of the coordinates, they must be visualized on the image.
Visualizing Coordinates Radu's Method was implemented here to visualize the extracted coordinates on the dotted training images. Using the function defined below (graph_coord_circles), all we need to do is give it a list of file names and the name of the coordinates file generated before (initial_coordinates.csv), and the function will draw circles around the coordinates of every file, and save a new jpg file to the output folder with those circles on the image. This will allow us to check if the coordinates are centered correctly on the sea lions. End of explanation """ # first load in the data from initial_coordinates.csv correct_coordinates_df = pd.read_csv(OUT_PATH + 'initial_coordinates.csv', index_col=0) """ Explanation: Manually Checking Images Using the images with circles on the coordinates, I checked the images with discrepancies for a few things: 1. Each circle is centered on a sea lion, not off in the sea (ensuring that the coordinates are on target) 2. Each sea lion has a circle on them (ensuring that every sea lion has coordinates that point towards it, and no sea lion is unmarked) 3. The color of the circle, which indicates the category of sea lion, matches the appearance of the sea lion (ensuring that the sea lions are labeled with not just the correct coordinates but the correct category) Based on the conditions above, the data (coordinates and counts) for a specific image may be incorrect. The correct data is assumed to be the initial_coordinates.csv, and I modify the coordinates in initial_coordinates.csv by adding coordinates for missing sea lions and removing coordinates for non-sea lions.
End of explanation """ import ast # needed for ast.literal_eval below (missing from the original cell) # getting list of good image ids IMG_IDS = [] for filename in FILE_NAMES: IMG_IDS.append(int(filename.partition('.')[0])) # function to apply changes, and get correct coordinates and counts def apply_all_changes(): changes_df = pd.read_csv('./changes.csv', index_col='img_id') # start from the initial coordinates loaded above (new_coord_df was unbound in the original) new_coord_df = correct_coordinates_df.copy() # getting all image ids img_ids = list(changes_df.index) for img_id in img_ids: # first change new_coord_df filename = str(img_id) + '.jpg' mini_changes_df = changes_df.loc[int(img_id)] # only 1 row coord_add_list = ast.literal_eval(mini_changes_df.iloc[0]) coord_remove_list = ast.literal_eval(mini_changes_df.iloc[1]) for coord_add in coord_add_list: if len(coord_add) == 0: continue y_coord = int(coord_add[0]) x_coord = int(coord_add[1]) category = coord_add[2] # changing new_coord_df to add coordinate new_row = pd.Series([filename, y_coord, x_coord, category], index=["filename", "y_coord", "x_coord", "category"]) new_coord_df = new_coord_df.append(new_row, ignore_index=True) for coord_remove in coord_remove_list: if len(coord_remove) == 0: continue y_coord = coord_remove[0] x_coord = coord_remove[1] category = coord_remove[2] # changing new_coord_df to remove coordinate mask = (new_coord_df['filename'] == filename) & (new_coord_df['y_coord'] == y_coord) & (new_coord_df['x_coord'] == x_coord) & (new_coord_df['category'] == category) new_coord_df = new_coord_df[~mask] new_coord_df.to_csv(OUT_PATH + 'correct_coordinates.csv') # save correct coordinates # next create a new file with correct counts of sea lions new_counts_df = pd.DataFrame(index = IMG_IDS, columns = ["adult_males", "subadult_males", "adult_females", "juveniles", "pups"]).fillna(0) for row in new_coord_df.iterrows(): filename = row[1]['filename'] file_id = int(filename.partition('.')[0]) category = row[1]['category'] new_counts_df[category][file_id] += 1 new_counts_df.to_csv(OUT_PATH + 'correct_train.csv', index_label='train_id') apply_all_changes() """ Explanation: My changes to the images were recorded in the changes.csv
file located in this repository. In changes.csv I made lists of coordinates to add (coord_add column of csv) and coordinates to remove (coord_remove column of csv) for every single image. To apply the changes from the changes.csv, coordinates must be added and removed from initial_coordinates.csv, and the counts of sea lions in each image readjusted. End of explanation """
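The bookkeeping in apply_all_changes boils down to appending rows for added coordinates and dropping rows by a boolean mask for removed ones; a minimal sketch of the removal step (same column names as the coordinates file, made-up data):

```python
import pandas as pd

coords = pd.DataFrame([['0.jpg', 10, 20, 'pups'],
                       ['0.jpg', 30, 40, 'juveniles']],
                      columns=['filename', 'y_coord', 'x_coord', 'category'])

# Build a mask matching exactly one coordinate listed for removal...
mask = ((coords['filename'] == '0.jpg') &
        (coords['y_coord'] == 10) &
        (coords['x_coord'] == 20) &
        (coords['category'] == 'pups'))

# ...and keep everything that does NOT match it
coords = coords[~mask]

print(len(coords))  # 1
```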
planetlabs/notebooks
jupyter-notebooks/analytics/user-guide/01_getting_started_with_the_planet_analytics_api.ipynb
apache-2.0
# Here, we've already stored our Planet API key as an environment variable on our system # We use the `os` package to read it into the notebook. import os API_KEY = os.environ['PL_API_KEY'] # Alternatively, you can just set your API key directly as a string variable: # API_KEY = "YOUR_PLANET_API_KEY_HERE" # Use our API key as the basic authentication username apiAuth = (API_KEY, '') # Alternatively, you can use your Planet username and password # apiAuth = ("me@planet.com", 'mypassword') """ Explanation: Planet Analytics API Tutorial <h1 style="margin-top:10px;">Getting Started with Planet Analytics API</h1> Overview Introduction Introduction to Planet Analytics capabilities and concepts: Feeds, Subscriptions, Results. API Mechanics Overview of available API endpoints, specifications, documentation, and authentication. We'll also introduce the Requests library. Making our first request to the Planet Analytics API Use the Requests python library to authenticate, connect, and download data from the Planet Analytics API Working with Planet Analytics Data Explore and visualize Planet Analytics data using GeoPandas and GeoViews 1. Introduction The Planet Analytics API leverages computer vision to transform Planet imagery into analytic feeds that detect and classify objects, identify geographic features, and monitor change over time across the globe. This tutorial series corresponds with the Analytics Feeds User Guide, and is intended to help developers access the Planet Analytics API in order to build applications and solutions on top of this unique dataset. This installment of the tutorial series will focus on connecting to the API and serve as an overview of the concepts of Feeds and Subscriptions, the two basic building blocks of an "analytics feed". Feeds A Feed represents an analytic derived from Planet imagery. 
Each feed essentially represents an analytic capability that has been uniquely configured to optimize performance and quality, and each have unique types of outputs. For example, a Road Detection Feed represents roads detected on monthly Planet Basemaps, and outputs raster "segmentation mask" data. Several types of Feeds are currently available on the Planet Analytics API, and we plan on releasing new feeds in the future. Subscriptions Users have subscriptions to feeds in a specific Area of Interest (AOI) and Time Interval of Interest (TOI). For example, a Subscription could be Road Detections over 12 months in San Francisco, California. "Subscribing" to a Feed is how a user can initiate the process of leveraging an analytic capability over an AOI/TOI and generate analytics datasets called "Results". We'll be covering how to access the available Feeds and Subscriptions in this tutorial. Results When new Planet imagery is published that intersects a Subscription's AOI and TOI, Planet’s computer vision models process the imagery and the output is added to a "collection" of Results associated with the Subscription. The next tutorial in this series will examine results in more detail. Visit us at planet.com to learn more about Planet's unique analytics capabilities and offerings. 2. API mechanics API Endpoints The Planet Analytics API can be accessed at the following base URL: api.planet.com/analytics The main endpoints available correspond to the three types of data exposed on the API: Feeds, Subscriptions, and Results /feeds - Feeds /subscriptions - Subscriptions /collections - Subscription Results "Collections" /collections/{COLLECTION ID}/items - Subscription Results "Features" (Detections) In this tutorial, we'll make some example requests to the Planet Analytics API programmatically to demonstrate how the Feeds and Subscriptions endpoints work and what data they can provide us. 
The next tutorials in this series will cover Results endpoints and working with analytics data in more detail. Before we dive into working with these endpoints, let's go over some more API mechanics and make sure we know how to access the documentation! Documentation The documentation extensively lists all the available endpoints and their available HTTP methods, as well as any options or query parameters that are available for us to control. For a full listing and more information on each endpoint, view the interactive API documentation website: developers.planet.com/docs/analytics There you can view in depth listings on all available endpoints, query parameters, and response payloads and statuses. API Specifications The Planet Analytics API follows a RESTful (Representational State Transfer) API interface over HTTP: HTTP Basics Communicating with the Planet Analytics API is achieved via Hypertext Transfer Protocol (HTTP) by sending "HTTP requests" from a "client" (your computer or another server) to the Planet Analytics API server, which will issue "responses" back to the requester. There are many ways to make an HTTP request including command line programs such as cURL and libraries written for specific programming languages like httplib in Python. Requests can also be made from your favorite web browser, or other graphical user interfaces (GUI) such as Insomnia. We can even use QGIS to request our Planet Analytics API results. To facilitate ease-of-use and ensure quality and compliance for a wide variety of applications, the Planet Analytics API implements two standardized specifications: * OpenAPI * WFS 3.0 OpenAPI <img src="https://www.openapis.org/wp-content/uploads/sites/3/2018/02/OpenAPI_Logo_Pantone-1.png" align="right" style="margin-left: 20px" width="100"></img> The Planet Analytics API conforms to the OpenAPI Specfication. The OpenAPI Specification, formerly known as the Swagger Specification, is the world’s standard for defining RESTful interfaces. 
The OAS enables developers to design a technology-agnostic API interface that forms the basis of their API development and consumption. WFS 3.0 <img src="http://www.opengeospatial.org/pub/www/files/OGC_Logo_2D_Blue_x_0_0.png" align="right" style="margin-left: 20px" width="100"></img> The Planet Analytics API's Results (collections and items) endpoints follow the Open Geospatial Consortium's (OGC) Web Feature Service 3.0 (WFS) specification. A Web Feature Service (WFS) is a standard API that represents collections of geospatial data. Conformance information for the Planet Analytics API is available at https://api.planet.com/analytics/conformance You can view the the Planet Analytics API spec swagger definition document at https://api.planet.com/analytics/swagger.json Using the Requests library In this tutorial, we'll use the Requests python library to make our HTTP requests to the Planet Analytics API. <img src="imgs/requests-logo.png" align="right" style="margin-right: 20px" width="50"></img> Requests is an elegant and simple HTTP library for Python, built for human beings. Remember, any libraries or applications that can perform HTTP requests can be used to access the Planet Analytics API. The mechanics will be fairly identical to how we use the Requests library here, so feel free to try your favorite client! API Authentication Setup authentication for Requests Some of the content on the Planet Analytics API is only be available to specific users. We'll need to authenticate each of our requests to the Planet Analytics API in order to access the content that is available to us. We can do this by setting an Authorization HTTP header on our requests to one of the three available authentication types: api-key Basic Bearer (JWT - JSON Web Token) To find more information on each of these authentication approaches, see the authentication section in the Planet Analytics API documentation. 
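As an alternative to Requests' auth= shorthand, the Authorization header can also be set explicitly; here is a sketch that builds (but does not send) a request carrying the api-key style header — the key value is a placeholder, and the exact header scheme should be checked against the authentication section of the docs:

```python
import requests

API_KEY = "YOUR_PLANET_API_KEY_HERE"  # placeholder, not a real key

# Prepare a GET request with an explicit api-key Authorization header
req = requests.Request(
    'GET',
    'https://api.planet.com/analytics/feeds',
    headers={'Authorization': 'api-key ' + API_KEY},
).prepare()

print(req.headers['Authorization'])
```

Sending it would then be a matter of `requests.Session().send(req)`.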
Basic Authentication We'll use Basic Authentication to authenticate with the Planet Analytics API, via Requests' HTTPBasicAuth package. Using the Requests library's helpful shorthand argument auth=('USERNAME','PASSWORD') makes it very easy to send an authenticated HTTP request! We can use either our Planet username and password, or simply pass in our Planet API key as the username. To find your Planet API key, you can visit your Planet Account Page. If you don't have an account yet, sign up for one here ! Let's set our API key in a variable called API_KEY. End of explanation """ # Planet Analytics API base url PAA_BASE_URL = "https://api.planet.com/analytics/" """ Explanation: After we've got our API credentials set up, we're ready to continue to the next section and make our first request! 3. Making our first request to the Planet Analytics API Let's start to explore the Feeds and Subscriptions data available to us by interacting directly with the API and making our first request! Configure the base URL Once we have our authentication variable set up, we can create a variable that holds the "base URL" of the Planet Analytics API. This is the root URL onto which we add endpoints. End of explanation """ # Define our endpoint to point to "feeds" feeds_endpoint = "feeds" # Construct the URL for the HTTP request #(Planet Analytics API base URL + desired endpoint) request_url = PAA_BASE_URL + feeds_endpoint """ Explanation: First request: Get a list of available Feeds Let's make a request to get information on all our available Feeds. The request should go to the following address: https://api.planet.com/analytics/feeds Setup the Request Endpoint End of explanation """ # Import the requests library import requests # Make the GET request response = requests.get(request_url, auth=apiAuth) print(response) """ Explanation: Make the Request Since we're making a GET request, we'll use Requests' .get method. 
Now, let's create our request by passing our request URL and auth variable. Running the next cell should make a call out to the Planet Analytics API. End of explanation """ # Decode the response JSON body to a python dict response_json = response.json() print(response_json) """ Explanation: If our request call above was successful we should get back a response with a 200 OK HTTP status code! If we get any other type response, we may be doing something wrong and be running into an error. There should be a message and an HTTP status code in the response with more information to help us debug. All expected response codes and their messages are listed in the Response Schema section for each endpoint in the Planet Analytics API documentation. Most importantly, our successful response also delivers a payload in the response body that should contain JSON data. Congratulations! You've just made your first request to the Planet Analytics API! Next, let's take a look at the data we received. Reading Response Data We need to decode our response JSON to inspect the payload. Once we do, we should see a data property in the payload that contains a list or array of Feed objects: End of explanation """ import json # "Beautify" JSON with json.dumps beautified_json = json.dumps(response_json, sort_keys=True, indent=4) print(beautified_json) """ Explanation: If the response field 'data' property is an empty array, this means you don't have any Analaytic subscriptions in your account. Please contact your customer support manager or contact sales at https://www.planet.com/contact-sales/ to resolve this problem. Further on in this tutorial we'll cover the specifics of this response, but for now let's focus on the format of our responses. JSON Responses If you're used to working with JSON then you should be able to understand the above output. JSON contains key-value pairs of data. 
All responses from the Planet Analytics API should return JSON data, either single JSON objects or sometimes nested lists of objects along with other properties. The raw, unformatted output above is a bit hard to read for most humans though... Using the json python package, we can "beautify" the response data to make it easier to read: End of explanation """ # Use the pprint module import pprint pp = pprint.PrettyPrinter(indent=4) # Beautify our response_json dict with pp pp.pprint(response_json) """ Explanation: Alternatively, we can also use python's pprint module to "beautify" our dict End of explanation """ # Write a new .json file to disk with open('feeds.json', 'w') as file: file.write(beautified_json) # Alternatively, you could write the data directly from the response without beautifying it first # with open('subscriptions.json', 'w') as file: # file.write(response.text) """ Explanation: Finally, let's export the "beautified" data into a JSON file that we'll call feeds.json: End of explanation """ # Get a feed ID the user has access to feed_id = requests.get("https://api.planet.com/analytics/feeds", auth=(API_KEY,'')).json()['data'][0]['id'] print('feed_id: {}'.format(feed_id)) # URL to request a single feed single_feed_url = PAA_BASE_URL + feeds_endpoint + "/" + feed_id print('single_feed_url: {}'.format(single_feed_url)) """ Explanation: The code in the cell above should save a new file with our feeds list. Interpreting Response Data Our response from the feeds endpoint request contains a property called "data", which in this case contains a list of individual Feed objects available on the API. The subscriptions endpoint also contains a data property with analogous objects for Subscriptions. The data property in the top level response object contains most of the data we'd expect to be looking for, while the other properties like links are meta data. 
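Since the decoded payload is now a plain Python dict, pulling out the Feed list and the metadata is ordinary dict indexing. A small sketch, using a stand-in dict shaped like the response (a real script would index response_json instead, and the exact "links" structure can vary by endpoint):

```python
# Stand-in payload mimicking the response shape; a real script would use
# response.json() here instead of this hand-built dict.
sample_payload = {
    "data": [{"id": "abc-123", "title": "Example Feed"}],
    "links": [{"rel": "self", "href": "https://api.planet.com/analytics/feeds"}],
}

# The "data" property holds the list of Feed objects
feeds = sample_payload["data"]
print("number of feeds:", len(feeds))    # number of feeds: 1
print("first feed id:", feeds[0]["id"])  # first feed id: abc-123
```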
Links We also see the OpenAPI style links property which is useful to get direct links to the request we made. We'll see similar links properties for both Subscriptions and Results that will give us an easy way to follow pagination and get related assets as well! Since the Results section of the API follows the WFS3 spec, the responses will look a bit different. We'll go in depth on Results in the second tutorial in this series. Query Parameters Many endpoints also expose query parameters like before and limit, that can help us paginate through responses or look for specific types of responses. We'll cover some of these later on, but you can always refer the documentation to see which query parameters are available for each endpoint. Putting it all together We've broken the steps down above, but making this request is as easy as this simple one-liner in python code: requests.get("https://api.planet.com/analytics/feeds", auth=(API_KEY,'')).json() 4. Working with Planet Analytics API data Let's take a closer look at Feeds and Subscriptions by looking at a single Feed and a list of Subscriptions for a Feed. Feed Info The response from the /feeds/ endpoint gives us a list of available Feeds under that data property. We'll discuss what the data for each of these Feeds means next, but first let's use the id property to request an individual Feed directly. The id is a Universally Unique Identifier (UUID) string, and as the name suggests, is unique to each element. 
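As a sketch of how the query parameters mentioned above compose into a request URL (built with the standard library only — nothing is sent, and the parameter value is a placeholder):

```python
from urllib.parse import urlencode, urljoin

# Compose a paginated feeds URL using the `limit` query parameter.
# The value 5 is an arbitrary placeholder.
base = "https://api.planet.com/analytics/"
url = urljoin(base, "feeds") + "?" + urlencode({"limit": 5})
print(url)  # https://api.planet.com/analytics/feeds?limit=5
```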
Here's how we can construct the URL to a single Feed using its id:
End of explanation
"""

# Get a feed ID the user has access to
feed_id = requests.get("https://api.planet.com/analytics/feeds", auth=(API_KEY,'')).json()['data'][0]['id']
print('feed_id: {}'.format(feed_id))

# URL to request a single feed
single_feed_url = PAA_BASE_URL + feeds_endpoint + "/" + feed_id
print('single_feed_url: {}'.format(single_feed_url))
"""
Explanation: Now let's make a request using the single Feed URL:
End of explanation
"""

# Get the single Feed from our GET request response
single_feed = requests.get(single_feed_url, auth=(API_KEY,'')).json()
print(json.dumps(single_feed, indent=1))
"""
Explanation: We can see that the Feed includes information like the title, description, id, and dates indicating when the Feed was created and updated. We also see a target property containing type, which will let us know what kind of Results the Feed generates (collections of features vs raster mosaics). Under the source property, we see the configuration for the source imagery that the Feed operates on. The query property under source config should be familiar if you've worked with the Planet Data API, and we can see which Planet item type the Feed is configured to use (ex. PSScene). Finally the links property is also available as we've seen before. In the next section, let's take a look at a Subscription associated with a particular Feed. Working with Subscriptions We can get a list of available Subscriptions in the same way we did for Feeds, by making a request to the /subscriptions endpoint. We can also get a single Subscription using its id by appending it to the Subscriptions endpoint: /subscriptions/{SUBSCRIPTION_ID}.
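The single-Subscription URL follows the same pattern as the single-Feed URL above. A sketch with a placeholder UUID (not a real subscription id):

```python
# Build /subscriptions/{SUBSCRIPTION_ID} from a placeholder id.
PAA_BASE_URL = "https://api.planet.com/analytics/"
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
single_subscription_url = PAA_BASE_URL + "subscriptions/" + subscription_id
print(single_subscription_url)
```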
End of explanation
"""

# Define the subscriptions endpoint
subscriptions_endpoint = 'subscriptions'

# Construct a URL to list all available subscriptions
feed_subscriptions_url = PAA_BASE_URL + subscriptions_endpoint
print(feed_subscriptions_url)
"""
Explanation: The /subscriptions endpoint additionally exposes a query parameter that lets us get a list of available Subscriptions that are associated with a particular Feed, using the Feed's id. The parameter is called feedID and takes a valid Feed id UUID. Let's make a request that lists all Subscriptions for the Feed we just looked at:
End of explanation
"""

# Set query parameters for the request
# Use the feedID
feed_subscriptions_params = {"feedID": feed_id}

# Make the request to the api
feed_subscriptions_response = requests.get(feed_subscriptions_url, params=feed_subscriptions_params, auth=(API_KEY,'')).json()

# Get the list of subscriptions from the 'data' property of the response
subscriptions = feed_subscriptions_response['data']

# Print the number of subscriptions found for the given feed
print("{} subscriptions found for Feed with id:\n{}\n".format(len(subscriptions), feed_id))

# Print the subscriptions list
print(json.dumps(subscriptions, indent=1))
"""
Explanation: We should now have a list of Subscriptions associated with the Feed from the previous section! Subscription Info Once again, we can see that each Subscription object in the list contains properties for id, title, description, links, and timestamps for created and updated. For Subscriptions, we also see a feedID property which contains the UUID of the associated Feed. Time of Interest (TOI) There are also two additional timestamps available in our Subscription data, under the startTime and endTime properties. These two timestamps are used to indicate the "Time of Interest" (TOI) for the Subscription, meaning the Subscription will process over Planet imagery that was collected or published (according to the Feed configuration) over that time span. No endTime property means that the Subscription will continue to run indefinitely.
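The TOI logic above can be sketched in a few lines. This is a hedged, minimal helper — it assumes ISO-8601 UTC timestamps like "2019-01-01T00:00:00Z" (real responses may include fractional seconds, which this sketch does not handle):

```python
from datetime import datetime

# Sketch: decide whether a subscription's Time of Interest is still open.
def toi_is_active(start_time, end_time, now=None):
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    now = now or datetime.utcnow()
    start = datetime.strptime(start_time, fmt)
    if end_time is None:
        return now >= start  # no endTime: the subscription runs indefinitely
    return start <= now <= datetime.strptime(end_time, fmt)

print(toi_is_active("2019-01-01T00:00:00Z", None))  # True for any current date
```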
Subscription Geometry (AOI) The Subscription's geometry property is a GeoJSON geometry object, and indicates the shape and location of the "Area of Interest" (AOI) where the Feed is processing Planet imagery and making new detections. Valid geometry types for subscriptions are: Point LineString Polygon MultiPoint MultiLineString MultiPolygon Here's an example of what a Subscription's geometry looks like:
End of explanation
"""

subscriptions[0]['geometry']
"""
Explanation: We can see that the geometry object is made up of longitude and latitude coordinate pairs, and has a GeoJSON type property, in this case "Polygon". You may already have a sense of what part of the Earth our Subscription AOI covers from these coordinates, but let's see how we can use Python to visualize the AOI and also explore our data in the next section. Exploring and visualizing Planet Analytics Data The Python data science ecosystem is teeming with useful libraries, packages, and tools to help us explore our data. Let's use Pandas to take a closer look. Pandas In order to make our Planet Analytics API data a little easier to work with in our Jupyter Notebook, we can take our Subscriptions list from the response data and convert it to a Pandas DataFrame. Python Data Analysis Library (pandas) <img src="https://pandas.pydata.org/static/img/pandas.svg" align="right" style="margin-left: 20px" width="150"></img> pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. It's pretty easy to create a new DataFrame from our JSON data. We just need to pass in our Subscriptions array/list:
End of explanation
"""

import pandas as pd

# Create a Pandas DataFrame with our subscriptions list data
df = pd.DataFrame(subscriptions)

# Same as:
# pd.DataFrame.from_dict(subscriptions)

# Show the first 5 rows of our DataFrame in a table
df.head()
"""
Explanation: Great!
We now have a neat table output that's easy to read and comes with some built in data manipulation functionality. This should be familiar to anyone that's done some data-science with Python and Jupyter Notebooks. We can see all the properties we discussed earlier are now our DataFrame columns. Let's browse the titles and descriptions for our Subscriptions: End of explanation """ import geopandas as gpd from shapely.geometry import shape # Create a new GeoPandas DataFrame from our subscriptions data gdf = gpd.GeoDataFrame(subscriptions) # Transform geometry column values into shapely objects gdf.set_geometry(gdf['geometry'].apply(shape), inplace=True) # Show first 5 subscriptions gdf.head() """ Explanation: GeoPandas Since one of the most important aspects of our Subscription data is the location/AOI or geometry which it covers, we can also use GeoPandas to work with our Subscriptions. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on Fiona for file access and descartes and matplotlib for plotting. In order to take advantage of GeoPanda's geographic functionality, we'll need to convert each subscription geometry from a GeoJSON geometry object to a shapely geometry object. GeoPandas will be able to understand the geometry natively from shapely and create a GeoSeries, which it uses to unlock the additional geospatial capabilities it brings to Pandas. End of explanation """ # Get the type for the geometry column type(gdf['geometry']) """ Explanation: Our geometry data type has now changed to a GeoSeries: End of explanation """ # Select the first subscription row as a DataFrame subscription_1 = gdf.iloc[[0]] # Print out the GeoDataFrame for the first subscription row subscription_1 """ Explanation: Let's take a look at a single subscription in a bit more detail. 
End of explanation """ # Check our initial CRS initial_crs = subscription_1.crs print("Initial CRS: {}".format(initial_crs)) # Should be 'None' at first """ Explanation: Geospatial applications with GeoPandas GeoPandas allows us to manipulate and do calculations on our Subscription geometry. Example: Getting the Subscription AOI's Area For starters, let's use GeoPandas to get the area of our geometry (in km²)! Before we do, we need to understand how GeoPandas handles our data in a bit more detail. Geometry CRS The coordinate pairs that make up our original GeoJSON geometries are all expressed in Longitude and Latitude. But the geometric shapes we created with shapely in our GeoDataFrame's GeoSeries are currently just a collection of coordinates in an arbitrary space. Before we can do geospatial queries on our DataFrame, we need to set a coordinate reference system (CRS) for the GeoSeries to ensure our calculations are done in the correct units, for example meters. For more about how GeoPandas handles projections, visit the GeoPandas documentation here. Let's start by checking for a CRS in our row's GeoSeries: End of explanation """ from fiona.crs import from_epsg # Get a projection definition using Fiona's `from_epsg` to access the proj4 definition proj_def = from_epsg('4326')['init'] print("Projection Definition: {}".format(proj_def)) """ Explanation: We didn't specify a CRS when we created our GeoSeries, so None is expected here. So what CRS should we use then? According to the GeoJSON spec, the CRS used by GeoJSON is the World Geodetic System 1984 (WGS84). The coordinate reference system for all GeoJSON coordinates is a geographic coordinate reference system, using the World Geodetic System 1984 (WGS84) datum, with longitude and latitude units of decimal degrees. This means our original geometry data uses WGS84. The EPSG code for WGS84 is EPSG:4326. This will be true for any geographic (GeoJSON) data available via the Planet Analytics API. 
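Before assigning EPSG:4326 it can be handy to sanity-check that coordinates really look like WGS84 (longitude, latitude) pairs: longitudes fall in [-180, 180] and latitudes in [-90, 90]. A small sketch with hypothetical coordinates:

```python
# Check that coordinate pairs look like (longitude, latitude) in WGS84 bounds.
def looks_like_lonlat(coords):
    return all(-180 <= lon <= 180 and -90 <= lat <= 90 for lon, lat in coords)

# Hypothetical polygon ring around part of the San Francisco area
ring = [(-122.51, 37.70), (-122.35, 37.70), (-122.35, 37.83), (-122.51, 37.70)]
print(looks_like_lonlat(ring))           # True
print(looks_like_lonlat([(200.0, 0.0)])) # False: 200 is not a valid longitude
```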
Let's get the CRS definition for EPSG:4326 using a helper function from the Fiona library (represented as a mapping of proj.4 parameters):
End of explanation
"""

from fiona.crs import from_epsg

# Get a projection definition using Fiona's `from_epsg` to access the proj4 definition
proj_def = from_epsg('4326')['init']
print("Projection Definition: {}".format(proj_def))
"""
Explanation: Now that we've got the definition, let's set the initial CRS on our GeoSeries to WGS84 using the proj_def from the previous cell:
End of explanation
"""

# Set the GeoSeries CRS
subscription_1.crs = proj_def

# Check our NEW CRS
print("New CRS: {}".format(subscription_1.crs))
"""
Explanation: Perfect! Our GeoSeries now uses the correct CRS! Now that we've understood what's going on under the hood, there's actually a way we could have set the CRS when we initially transformed our GeoJSON geometries into shapely objects to create the GeoSeries by using the crs argument and passing in our projection definition proj_def: gdf.set_geometry(gdf['geometry'].apply(shape), inplace=True, crs=proj_def) Either way, it's important that an initial CRS is set on our GeoSeries* so that we can re-project the data if we need to, which we'll see is the case when calculating the area. Undoubtedly these concepts will be important when we take other approaches as well. * So far in this notebook, we've only set the CRS on the first subscription row. The alternative method described in this cell would set the CRS for all rows' geometries. Projected Coordinate Systems Now that we have an initial CRS set to WGS84, we know that the current units of our GeoSeries are in (decimal) degrees.
Since we're interested in getting the area in square kilometers, first we'll need to reproject our geometry to a Cartesian projected coordinate system like EPSG:3857, whose units are expressed in meters:
End of explanation
"""

# Re-project row geometry to EPSG:3857
projected = subscription_1.to_crs(epsg=3857)

# Display the CRS of our re-projected row geometry
projected.crs
"""
Explanation: Calculate the area of the subscription geometry Finally we can use the .area GeoSeries attribute on the re-projected GeoSeries to get the area of our subscription geometry.
End of explanation
"""

# Get the area (will be in square meters)
area_m = projected.area.values[0]

# Convert the area to square kilometers and round to two decimal places
area_km2 = round(area_m / 1000000, 2)

# Print the subscription area in km2
print("\n Subscription Area: {} km\xB2".format(area_km2))
"""
Explanation: Visualizing the subscription geometry Let's visually inspect our first subscription's geometry. We can use GeoPandas' built-in .plot() method to render a matplotlib chart:
End of explanation
"""

# Plot the subscription geometry
subscription_1.plot()
"""
Explanation: The above gives us the shape and location of the subscription geometry, but doesn't provide much context... Interactive Visualizations with GeoViews Let's use GeoViews to create an interactive web map with the subscription geometry so we can see a bit more context around the geometry. <img src="https://geoviews.org/_static/logo_horizontal.png" align="right" style="margin-left: 20px" width="100"></img> GeoViews is built on HoloViews and uses Cartopy, matplotlib and Bokeh for visualizations. It is released as part of the PyViz suite. GeoViews is a Python library that makes it easy to explore and visualize geographical, meteorological, and oceanographic datasets, such as those used in weather, climate, and remote sensing research.
First, let's import GeoViews and set the rendering backends: End of explanation """ # Get the geometry type from our subscription `GeoDataFrame` geo_type = subscription_1.geometry.type.values[0] print("Geometry Type: {}".format(geo_type)) """ Explanation: Although we already know that the subscription geometry's shape is a polygon from both the GeoJSON geometry type property and our previous plot, let's see how we can find the geometry type using GeoPandas. This can help us understand how to use GeoViews to plot our geometry. End of explanation """ # Use GeoViews to render a polygon polygon = gv.Polygons(subscription_1).opts(padding=0.1) polygon """ Explanation: Again, the geometry type is a Polygon, so we can use GeoView's gv.Polygons to easily render our GeoDataFrame as an interactive plot using bokeh as the rendering backend: End of explanation """ # Create the label from the subscription title label = "Subscription Geometry for '{}'".format(subscription_1['title'].values[0]) # Create a GeoViews shape object for our subscription plot subscription_plot = gv.Shape(subscription_1['geometry'][0], label=label).opts(style=dict(alpha=0.3)) """ Explanation: We can also use the GeoViews' Shape object to automatically render our geometry without explicitly using the gv.Polygons object, by passing the shapely object which underlies our GeoSeries. The gv.Shape object wraps around any shapely geometry End of explanation """ # List built in GeoViews tile sources # help(gv.tile_sources) # Create a basemap basemap = gv.tile_sources.CartoLight # Create a "web map" by combining a tile layer plot with the subscription plot into a gv.Overlay object webmap = basemap * subscription_plot # Render the webmap using some options webmap.opts(width=800, height=400, padding=0.2) """ Explanation: Using the * operator, we can combine (overlay) various GeoViews (and Holoviews) plots. End of explanation """
usantamaria/iwi131
ipynb/18-Actividad-ListasYTuplas/Actividad3ListasYTuplas.ipynb
cc0-1.0
olim2015 = [('Aleman', 'ajedrez', 8, 224),
            ('Pasteur', 'pinpon', 12, 38),
            ('Wilquimvoe', 'ajedrez', 5, 134),
            ('Mariano', 'natacion', 5, 500),
            ('LuisCampino', 'ajedrez', 10, 45),
            ('Wilquimvoe', 'pinpon',7, 434),
            # ...
           ]

olim2014 = [('Aleman', 13),
            ('Comercial', 8),
            ('Pasteur', 4),
            ('RubenCastro', 10),
            ('LuisCampino', 5),
            # ...
           ]
"""
Explanation: <header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" align="left"/> <img src="images/inf.png" alt="" align="right"/> </header> <br/><br/><br/><br/><br/> IWI131 Programación de Computadores Sebastián Flores http://progra.usm.cl/ https://www.github.com/usantamaria/iwi131 Activity 3 - Tuples and Lists The national school Olympics took place in Viña del Mar in 2015; the information was stored in a list of tuples of the form (school, discipline, participants, points). For example: olim2015 = [('Aleman', 'ajedrez', 8, 224), ('Pasteur', 'pinpon', 12, 38), ('Wilquimvoe', 'ajedrez', 5, 134), ('Mariano', 'natacion', 5, 500), ('LuisCampino', 'ajedrez', 10, 45), ('Wilquimvoe', 'pinpon',7, 434), # ... ] Activity 3 - Tuples and Lists For the 2014 Olympics, only the school and the number of participants were stored: olim2014 = [('Aleman', 13), ('Comercial', 8), ('Pasteur', 4), ('RubenCastro', 10), ('LuisCampino', 5), # ... ] The data shown here are only examples; in reality there are many more. Now, the IWI-131 students at UTFSM are asked to develop the functions needed to satisfy the following requirements.
End of explanation
"""

def disciplina_impopular(olim2015):
    # Collect all the disciplines
    lista_disciplinas = []
    for colegio,disc,part,p in olim2015:
        if disc not in lista_disciplinas:
            lista_disciplinas.append(disc)
    # Sum the participants for each discipline
    lista_disciplina_y_partipantes = []
    for disciplina in lista_disciplinas:
        suma = 0
        for colegio,disc,part,p in olim2015:
            if disc==disciplina:
                suma += part
        lista_disciplina_y_partipantes.append( (disciplina, suma) )
    # Find the minimum
    min_part = float("inf")
    min_disc = ""
    for disc, part in lista_disciplina_y_partipantes:
        if part < min_part:
            min_disc = disc
            min_part = part
    # Return the minimum
    return (min_disc, min_part)

# Testing the function
disc_impopular, part_impopular = disciplina_impopular(olim2015)
print "The discipline with the fewest participants is", disc_impopular, "with", part_impopular, "people"
"""
Explanation: Activity 3 - Tuples and Lists Question 1 Indicate which sports discipline has the fewest participants and how many there are. Activity 3 - Tuples and Lists Analysis of Question 1 Analysis: * Easiest way to solve the problem? -> A function. * Input: * List olim2015 with elements in the format (school, discipline, participants, points) * Output: * Return a tuple with the sports discipline that has the fewest participants and how many there are.
End of explanation
"""

# Question 2
def disciplina_popular_por_colegio(olim2015, colegio_buscado):
    max_disc_nombre = ""
    max_disc_participantes = -float("inf")
    for colegio, disciplina, participantes, puntos in olim2015:
        if participantes>max_disc_participantes and colegio==colegio_buscado:
            max_disc_participantes = participantes
            max_disc_nombre = disciplina
    return (max_disc_nombre, max_disc_participantes)

# Testing the function
colegio = "Mariano" # "Mariano", etc..
disciplina, participantes = disciplina_popular_por_colegio(olim2015, colegio)
print "For school", colegio,
print "the discipline with the most participants is", disciplina,
print "with", participantes, "people"
"""
Explanation: Activity 3 - Tuples and Lists Question 2 For a given school, indicate the sports discipline with the most participants and the number of participants. Activity 3 - Tuples and Lists Analysis of Question 2 Easiest way to solve the problem? -> A function. Input: List olim2015 with elements in the format (school, discipline, participants, points) Output: A tuple with the sports discipline with the most participants for a given school and how many there are.
End of explanation
"""

# Question 3
def colegios_aumento_participantes(olim2015, olim2014):
    listado = []
    for colegio2014,part2014 in olim2014:
        suma=0
        for colegio2015,actividad,part2015,puntaje in olim2015:
            if colegio2014==colegio2015:
                suma+=part2015
        if suma>part2014:
            listado.append(colegio2014) # Careful: colegio2014, not colegio2015!
    return listado

# Testing the function
listado = colegios_aumento_participantes(olim2015, olim2014)
print listado
"""
Explanation: Activity 3 - Tuples and Lists Question 3 List the schools that have increased their number of participants since the previous edition. Activity 3 - Tuples and Lists Analysis of Question 3 Easiest way to solve the problem? -> A function. Input: List olim2015 with elements in the format (school, discipline, participants, points) List olim2014 with elements in the format (school, total_participants) Output: Print on screen the schools that have increased their number of participants since the previous edition.
End of explanation
"""
google/qkeras
notebook/QRNNTutorial.ipynb
apache-2.0
units = 64 embedding_dim = 64 loss = 'binary_crossentropy' def create_model(batch_size=None): x = x_in = Input(shape=(maxlen,), batch_size=batch_size, dtype=tf.int32) x = Embedding(input_dim=max_features, output_dim=embedding_dim)(x) x = Activation('linear', name='embedding_act')(x) x = Bidirectional(LSTM(units))(x) x = Dense(1)(x) x = Activation('sigmoid')(x) model = tf.keras.Model(inputs=[x_in], outputs=[x]) return model tf.keras.backend.clear_session() with strategy.scope(): model = create_model(BATCH_SIZE) custom_objects = {} model.compile( optimizer=Adam(learning_rate=0.01), loss=loss, metrics=['acc']) model.summary() print('Train...') model.fit( train_dataset, epochs=10, batch_size=BATCH_SIZE, validation_data=test_dataset, verbose=2) """ Explanation: Using QKeras QKeras works by tagging all variables and weights/bias created by Keras as well as output of arithmetic layers by quantized functions. Quantized functions can be instantiated directly in QSimpleRNN/QLSTM/QGRU/QBidirectional/QDense/QConv2D/QSeparableConv2D functions, and they can be passed to QActivation, which act as a merged quantization and activation function. In order to successfully quantize a model, users need to replace layers that create variables (trainable or not) (LSTM, Conv2D, etc) by their equivalent ones in QKeras (QLSTM/QDense, etc), and any layers that perform math operations need to be quantized afterwards. Quantized values are clipped between their maximum and minimum quantized representation (which may be different than $[-1.0, 1.0]$), although for po2 type of quantizers, we still recommend the users to specify the parameter for max_value. An example of a very simple recurrent network is given below in Keras. 
End of explanation """ def create_qmodel(batch_size=None): x = x_in = Input(shape=(maxlen,), batch_size=batch_size, dtype=tf.int32) x = Embedding(input_dim=max_features, output_dim=embedding_dim)(x) x = QActivation('binary', name='embedding_act')(x) x = QLSTM( units, activation='quantized_tanh(4)', recurrent_activation='quantized_relu(4,0,1)', kernel_quantizer='stochastic_ternary("auto")', recurrent_quantizer='quantized_bits(2,1,1,alpha=1.0)', bias_quantizer='quantized_bits(4,0,1)')(x) x = QDense( 1, kernel_quantizer="quantized_bits(4,0,1)", bias_quantizer='quantized_bits(4,0,1)')(x) x = QActivation('sigmoid')(x) model = tf.keras.Model(inputs=[x_in], outputs=[x]) return model tf.keras.backend.clear_session() with strategy.scope(): qmodel = create_qmodel(BATCH_SIZE) custom_objects = {} qmodel.compile( optimizer=Adam(learning_rate=0.01), loss=loss, metrics=['acc']) qmodel.summary() print('Train...') qmodel.fit(train_dataset, batch_size=BATCH_SIZE, epochs=10, verbose=2, validation_data=test_dataset) """ Explanation: Replacing with quantized layers End of explanation """ bits = 4 quantizer_config = { "bidirectional": { 'activation' : f"quantized_tanh({bits})", 'recurrent_activation' : f"quantized_relu(4,0,1)", 'kernel_quantizer' : f"quantized_bits({bits}, alpha='auto')", 'recurrent_quantizer' : f"quantized_bits({bits}, alpha='auto')", 'bias_quantizer' : f"quantized_bits({bits}, alpha='auto')", }, "dense": { 'kernel_quantizer' : f"quantized_bits({bits}), alpha='auto'", 'bias_quantizer' : f"quantized_bits({bits}), alpha='auto'" }, "embedding_act": f"quantized_bits({bits}), alpha='auto'", } tf.keras.backend.clear_session() with strategy.scope(): model = create_model(BATCH_SIZE) custom_objects = {} qmodel = model_quantize(model, quantizer_config, bits, custom_objects) qmodel.compile( optimizer=Adam(learning_rate=0.01), loss=loss, metrics=['acc']) qmodel.summary() print('Train...') qmodel.fit(train_dataset, batch_size=BATCH_SIZE, epochs=10, verbose=2, 
validation_data=test_dataset) """ Explanation: Converting a Model Automatically In addition to the drop-in replacement of Keras functions, we have written the following function to assist anyone who wants to quantize a network. model_quantize(model, quantizer_config, activation_bits, custom_objects=None, transfer_weights=False) This function converts an non-quantized model (such as the one from model in the previous example) into a quantized version, by applying a configuration specified by the dictionary quantizer_config, and activation_bits specified for unamed activation functions, with this parameter probably being removed in future versions. The parameter custom_objects specifies object dictionary unknown to Keras, required when you copy a model with lambda layers, or customized layer functions, for example, and if transfer_weights is True, the returned model will have as initial weights the weights from the original model, instead of using random initial weights. The dictionary specified in quantizer_config can be indexed by a layer name or layer class name. In the example below, conv2d_1 corresponds to the first convolutional layer of the example, while QConv2D corresponds to the default behavior of two dimensional convolutional layers. The reader should note that right now we recommend using QActivation with a dictionary to avoid the conversion of activations such as softmax and linear. In addition, although we could use activation field in the layers, we do not recommend that. { "conv2d_1": { "kernel_quantizer": "stochastic_ternary", "bias_quantizer": "quantized_po2(4)" }, "QConv2D": { "kernel_quantizer": "stochastic_ternary", "bias_quantizer": "quantized_po2(4)" }, "QDense": { "kernel_quantizer": "quantized_bits(3,0,1)", "bias_quantizer": "quantized_bits(3)" }, "act_1": "quantized_relu(2)", "QActivation": { "relu": "quantized_relu(2)" } } In the following example, we will quantize the model using a different strategy. 
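As an aside, to build intuition for what a quantizer string like quantized_bits(4,0,1) specifies, here is a rough pure-Python sketch of the uniform rounding and clipping such a signed fixed-point quantizer performs on a single value (ignoring alpha scaling and the stochastic variants; the real QKeras quantizers operate on tensors):

```python
def fake_quantized_bits(x, bits=4, integer=0):
    # Signed fixed-point grid: `bits` total bits (sign included), `integer`
    # integer bits, so the step size is 2**(integer - bits + 1).
    step = 2.0 ** (integer - bits + 1)
    lo = -(2.0 ** integer)
    hi = 2.0 ** integer - step
    return max(lo, min(hi, round(x / step) * step))

print(fake_quantized_bits(0.30))  # 0.25: snapped to the nearest 1/8 step
print(fake_quantized_bits(3.00))  # 0.875: clipped to the largest representable value
```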
End of explanation """ tf.keras.backend.clear_session() with strategy.scope(): model = create_model(BATCH_SIZE) custom_objects = {} model.compile( optimizer=Adam(learning_rate=0.01), loss=loss, metrics=['acc']) quantization_config = { "kernel": { "stochastic_binary": 1, "stochastic_ternary": 2, "quantized_bits(4,0,1,alpha=1.0)": 4, "quantized_po2(4,1)": 4 }, "recurrent_kernel": { "stochastic_binary": 1, "stochastic_ternary": 2, "quantized_bits(4,0,1,alpha=1.0)": 4, "quantized_po2(4,1)": 4 }, "recurrent_activation": { "quantized_relu(4,0,1)": 4 }, "bias": { "quantized_bits(4,0,1)": 4, "quantized_po2(4,1)": 4 }, "activation" : { "stochastic_ternary('auto')": 2, "quantized_tanh(4)" : 4, "quantized_relu_po2(4,1)": 4, "quantized_relu(4,2)": 4, }, "linear": { "stochastic_ternary('auto')" : 2, "quantized_tanh(4)" : 4, "quantized_relu_po2(4,1)": 4, "quantized_relu(3,1)": 3, "quantized_relu(4,2)": 4, } } limit = { "Dense": [4], "Bidirectional": [4], "Activation": [4], "default" : [4]*4 } goal = { "type": "bits", "params": { "delta_p": 8.0, "delta_n": 8.0, "rate": 2.0, "stress": 1.0, "input_bits": 4, "output_bits": 4, "ref_bits": 4, "config": { "default": ["parameters", "activations"] } } } run_config = { "output_dir": tempfile.mkdtemp(), "goal": goal, "quantization_config": quantization_config, "learning_rate_optimizer": False, "transfer_weights": False, "mode": "random", "seed": 42, "limit": limit, "tune_filters": "layer", "tune_filters_exceptions": "^dense", "distribution_strategy": strategy, "layer_indexes": range(2, len(model.layers) - 1), "max_trials": 1000 } print("quantizing layers:", [model.layers[i].name for i in run_config["layer_indexes"]]) autoqk = AutoQKeras(model, metrics=["acc"], custom_objects={}, **run_config) autoqk.fit( train_dataset, validation_data=test_dataset, batch_size=BATCH_SIZE, epochs=10, verbose=2) qmodel = autoqk.get_best_model() qmodel.save_weights("qmodel.h5") print_qmodel_summary(qmodel) print(get_quantization_dictionary(qmodel)) """ 
Explanation: Quantizing a Model With AutoQKeras To quantize this model with AutoQKeras, we need to define the quantization for kernels, biases and activations; forgiving factors and quantization strategy. Below we define which quantizers are allowed for kernel, bias, activations and linear. Linear is a proxy that we use to capture Activation("linear") to apply quantization without applying a non-linear operation. In some networks, we found that this trick may be necessary to better represent the quantization space. End of explanation """
napjon/ds-nd
p0-intro/Data_Analyst_ND_Project0.ipynb
mit
import pandas as pd # pandas is a software library for data manipulation and analysis # We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd. # hit shift + enter to run this cell or block of code path = r'chopstick-effectiveness.csv' # Change the path to the location where the chopstick-effectiveness.csv file is located on your computer. # If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer. dataFrame = pd.read_csv(path) dataFrame.head() """ Explanation: Chopsticks! A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC). An investigation for determining the optimum length of chopsticks. Link to Abstract and Paper the abstract below was adapted from the link Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. 
In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost. For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students. Download the data set for the adults, then answer the following questions based on the abstract and the data set. If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text. 1. What is the independent variable in the experiment? You can either double click on this cell to add your answer in this cell, or use the plus sign in the toolbar (Insert cell below) to add your answer in a new cell. 2. What is the dependent variable in the experiment? 3. How is the dependent variable operationally defined? 4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled. Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.) One great advantage of IPython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
End of explanation """ dataFrame['Food.Pinching.Efficiency'].mean() """ Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths. End of explanation """ meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index() meansByChopstickLength # reset_index() changes Chopstick.Length from an index to a column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5. """ Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below. End of explanation """ # Causes plots to display within the notebook rather than in a new window %matplotlib inline import matplotlib.pyplot as plt plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency']) plt.xlabel("Length in mm") plt.ylabel("Efficiency in PPPC") plt.title("Average Food Pinching Efficiency by Chopstick Length") plt.show() """ Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students? End of explanation """ dataFrame['Food.Pinching.Efficiency'].mean() # .loc selects the rows where Chopstick.Length is 240 mm dataFrame.loc[dataFrame['Chopstick.Length'] == 240, 'Food.Pinching.Efficiency'].mean() 31*6 # total observations: 31 participants x 6 chopstick lengths """ Explanation: 6. Based on the scatterplot created from the code above, interpret the relationship you see. What do you notice? In the abstract the researchers stated that their results showed food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 mm long were optimal for adults. 7a.
Based on the data you have analyzed, do you agree with the claim? 7b. Why? End of explanation """
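Since question 7 asks whether we agree with the significance claim, one way to probe it is a one-way ANOVA across the six length groups. The sketch below computes the F statistic by hand on made-up numbers — it is an illustration of the technique, not part of the original assignment, and with the real data one would pass the six per-length slices of `dataFrame` instead:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group vs. within-group variance."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy example with two clearly separated groups
f_stat = one_way_anova_f([np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])])
```

A large F relative to the critical value of the F distribution supports the researchers' claim; in practice scipy.stats.f_oneway does the same computation and also returns a p-value.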
robertoalotufo/ia898
master/FerramentasdeEdicaoHTML.ipynb
mit
# Ajuste de largura do notebook no display from IPython.core.display import display, HTML display(HTML("<style>.container { width:95% !important; }</style>")) """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Jupyter-Notebook:-Ferramentas-de-edição-multimídia" data-toc-modified-id="Jupyter-Notebook:-Ferramentas-de-edição-multimídia-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Jupyter Notebook: Ferramentas de edição multimídia</a></div><div class="lev2 toc-item"><a href="#Títulos-e-subtítulos" data-toc-modified-id="Títulos-e-subtítulos-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Títulos e subtítulos</a></div><div class="lev3 toc-item"><a href="#Subtítulo-de-terceiro-nível" data-toc-modified-id="Subtítulo-de-terceiro-nível-111"><span class="toc-item-num">1.1.1&nbsp;&nbsp;</span>Subtítulo de terceiro nível</a></div><div class="lev4 toc-item"><a href="#Subítulo-de-quarto-nível" data-toc-modified-id="Subítulo-de-quarto-nível-1111"><span class="toc-item-num">1.1.1.1&nbsp;&nbsp;</span>Subítulo de quarto nível</a></div><div class="lev2 toc-item"><a href="#Negrito,-itálico-e-literal" data-toc-modified-id="Negrito,-itálico-e-literal-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Negrito, itálico e literal</a></div><div class="lev2 toc-item"><a href="#Texto-colorido" data-toc-modified-id="Texto-colorido-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Texto colorido</a></div><div class="lev2 toc-item"><a href="#Colocando-links-Internet" data-toc-modified-id="Colocando-links-Internet-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Colocando links Internet</a></div><div class="lev2 toc-item"><a href="#Itemização-de-texto" data-toc-modified-id="Itemização-de-texto-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Itemização de texto</a></div><div class="lev2 toc-item"><a href="#Linha-de-separação" data-toc-modified-id="Linha-de-separação-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Linha de separação</a></div><div 
class="lev2 toc-item"><a href="#Equações-LaTeX" data-toc-modified-id="Equações-LaTeX-17"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Equações LaTeX</a></div><div class="lev2 toc-item"><a href="#Mostrando-imagens-na-célula-markdown" data-toc-modified-id="Mostrando-imagens-na-célula-markdown-18"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Mostrando imagens na célula markdown</a></div><div class="lev3 toc-item"><a href="#Na-forma-de-link-HTML" data-toc-modified-id="Na-forma-de-link-HTML-181"><span class="toc-item-num">1.8.1&nbsp;&nbsp;</span>Na forma de link HTML</a></div><div class="lev3 toc-item"><a href="#Na-forma-de-link-Jupyter" data-toc-modified-id="Na-forma-de-link-Jupyter-182"><span class="toc-item-num">1.8.2&nbsp;&nbsp;</span>Na forma de link Jupyter</a></div><div class="lev2 toc-item"><a href="#Inserindo-vídeo-e-website" data-toc-modified-id="Inserindo-vídeo-e-website-19"><span class="toc-item-num">1.9&nbsp;&nbsp;</span>Inserindo vídeo e website</a></div><div class="lev2 toc-item"><a href="#Forma-de-baixar-um-arquivo-de-dados-para-computador-local" data-toc-modified-id="Forma-de-baixar-um-arquivo-de-dados-para-computador-local-110"><span class="toc-item-num">1.10&nbsp;&nbsp;</span>Forma de baixar um arquivo de dados para computador local</a></div> # Jupyter Notebook: multimedia editing tools This document illustrates the main tools for editing the notebook using Markdown text cells: End of explanation """ from IPython.display import YouTubeVideo # a talk about IPython at Sage Days at U. Washington, Seattle. # Video credit: William Stein. YouTubeVideo('1j_HxD4iLn8') """ Explanation: Titles and subtitles Titles and subtitles are created with #, ##, ### or more # characters, as follows Third-level subtitle Fourth-level subtitle Bold, italic and literal Bold is written with two asterisks. For example: - the text **This text in bold** becomes This text in bold Italic text is written with a single asterisk.
For example - the text *This text in italics* appears as This text in italics To show original (verbatim) text, wrap it in backticks. For example - this text appears with fixed spacing: This is raw text A block of original text can be created with lines of three backticks: This is a literal block Spacing is preserved, {as well as &lt;&lt;&lt;[] } and special characters have no meaning Colored text A simple way to create colored text is with HTML syntax: - writing &lt;code style="color:red"&gt;red&lt;/code&gt; makes the text appear as <code style="color:red">red</code> Adding Internet links Write [Link para divulgação curso DL](http://adessowiki.fee.unicamp.br/rnpi) and it appears as: Link para divulgação curso DL To make a link to a section inside the notebook itself, use a link with the HTML syntax for in-file references. Remember to replace the spaces in the subtitle with dashes. For example: Link to [Negrito e itálico](#Negrito-e-itálico) becomes a link to the bold-and-italic section Itemized text Itemization uses characters such as -, * and numbers. To create nested items, use deeper indentation. See these examples: item 1 item 1.1 item 2 Numbered itemization item 1 item 1.a this text belongs to item 1.a item 2 Separator line A separator line is made by placing three _ characters at the beginning of a line: LaTeX equations LaTeX equations can be inserted with the \$ character, inline: $ a = \sum i $ or in display form: $$ A = \sum_{i=0}^{n-1} a_i $$ Showing images in a markdown cell Images saved to disk or available on the Internet can be shown either as HTML elements using &lt;img src= link&gt; or with Jupyter's special image link using the ! character. In the first case, the advantage is greater control over positioning and size.
As an HTML link In the first example below, an image is used with a width of 256 pixels &lt;img src=link, width=256&gt;: <img src="https://lh5.googleusercontent.com/-4z1aOb-x0Co/AAAAAAAAAAI/AAAAAAAADXc/sAeBMo6pz_k/photo.jpg",width=256> As a Jupyter link This other image, by contrast, is shown as an IPython link in the format ![text](link): Inserting video and websites Inserting video and websites works only in Python code cells. IPython's display module has a dedicated helper for importing YouTube videos: End of explanation """ from IPython.display import IFrame IFrame('http://wikipedia.com', width=700, height=350) """ Explanation: To embed a website, IPython's display module supports showing an IFrame, where you can place the address of a website: End of explanation """
tensorflow/tfx
docs/tutorials/data_validation/tfdv_basic.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ try: import colab !pip install --upgrade pip except: pass """ Explanation: TensorFlow Data Validation An Example of a Key Component of TensorFlow Extended Note: You can run this example right now in a Jupyter-style notebook, no setup required! Just click "Run in Google Colab" <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basic"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/data_validation/tfdv_basic.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/data_validation/tfdv_basic.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/data_validation/tfdv_basic.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td> </table></div> This example colab notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize 
your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent. We'll use data from the Taxi Trips dataset released by the City of Chicago. Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk. Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI. Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about ML fairness. 
The columns in the dataset are: <table> <tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr> <tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr> <tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr> <tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr> <tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr> <tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr> </table> Install and import packages Install the packages for TensorFlow Data Validation. Upgrade Pip To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately. End of explanation """ print('Installing TensorFlow Data Validation') !pip install --upgrade 'tensorflow_data_validation[visualization]<2' """ Explanation: Install Data Validation packages Install the TensorFlow Data Validation packages and dependencies, which takes a few minutes. You may see warnings and errors regarding incompatible dependency versions, which you will resolve in the next section. End of explanation """ import pkg_resources import importlib importlib.reload(pkg_resources) """ Explanation: Import TensorFlow and reload updated packages The prior step updates the default packages in the Google Colab environment, so you must reload the package resources to resolve the new dependencies. Note: This step resolves the dependency error from the installation. If you are still experiencing code execution problems after running this code, restart the runtime (Runtime > Restart runtime ...). End of explanation """ import tensorflow as tf import tensorflow_data_validation as tfdv print('TF version:', tf.__version__) print('TFDV version:', tfdv.version.__version__) """ Explanation: Check the versions of TensorFlow and TensorFlow Data Validation before proceeding.
End of explanation """ import os import tempfile, urllib, zipfile # Set up some globals for our file paths BASE_DIR = tempfile.mkdtemp() DATA_DIR = os.path.join(BASE_DIR, 'data') OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output') TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv') EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv') SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv') # Download the zip file from GCP and unzip it zip, headers = urllib.request.urlretrieve('https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/chicago_data.zip') zipfile.ZipFile(zip).extractall(BASE_DIR) zipfile.ZipFile(zip).close() print("Here's what we downloaded:") !ls -R {os.path.join(BASE_DIR, 'data')} """ Explanation: Load the dataset We will download our dataset from Google Cloud Storage. End of explanation """ train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA) """ Explanation: Compute and visualize statistics First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings) TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation. End of explanation """ # docs-infra: no-execute tfdv.visualize_statistics(train_stats) """ Explanation: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data: Notice that numeric features and categorical features are visualized separately, and that charts are displayed showing the distributions for each feature.
Notice that features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature. Notice that there are no examples with values for pickup_census_tract. This is an opportunity for dimensionality reduction! Try clicking "expand" above the charts to change the display Try hovering over bars in the charts to display bucket ranges and counts Try switching between the log and linear scales, and notice how the log scale reveals much more detail about the payment_type categorical feature Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages End of explanation """ schema = tfdv.infer_schema(statistics=train_stats) tfdv.display_schema(schema=schema) """ Explanation: <!-- <img class="tfo-display-only-on-site" src="images/statistics.png"/> --> Infer a schema Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics. Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it. 
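As a toy illustration of what a schema's domain constraint means — this is not TFDV's implementation, just a sketch of the idea with made-up values — validating categorical values against a domain amounts to:

```python
# Hypothetical mini-schema: each categorical feature lists its accepted values.
toy_schema = {"payment_type": {"Cash", "Credit Card", "No Charge", "Dispute"}}

def find_domain_anomalies(rows, schema):
    """Return (row_index, feature, value) for values outside the schema's domain."""
    anomalies = []
    for i, row in enumerate(rows):
        for feature, domain in schema.items():
            value = row.get(feature)
            if value is not None and value not in domain:
                anomalies.append((i, feature, value))
    return anomalies

rows = [{"payment_type": "Cash"}, {"payment_type": "Prcard"}]
found = find_domain_anomalies(rows, toy_schema)
```

TFDV does this kind of check (and much more) for us automatically once the schema is inferred.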
End of explanation """ # Compute stats for evaluation data eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA) # docs-infra: no-execute # Compare evaluation data with training data tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats, lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET') """ Explanation: Check evaluation data for errors So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface. Notice that each feature now includes statistics for both the training and evaluation datasets. Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them. Notice that the charts now include a percentages view, which can be combined with log or the default linear scales. Notice that the mean and median for trip_miles are different for the training versus the evaluation datasets. Will that cause problems? Wow, the max tips is very different for the training versus the evaluation datasets. Will that cause problems? Click expand on the Numeric Features chart, and select the log scale. Review the trip_seconds feature, and notice the difference in the max. Will evaluation miss parts of the loss surface? End of explanation """ # Check eval data for errors by validating the eval data stats using the previously inferred schema. 
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema) tfdv.display_anomalies(anomalies) """ Explanation: <!-- <img class="tfo-display-only-on-site" src="images/statistics_eval.png"/> --> Check for evaluation anomalies Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values. Key Point: What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset? End of explanation """ # Relax the minimum fraction of values that must come from the domain for feature company. company = tfdv.get_feature(schema, 'company') company.distribution_constraints.min_domain_mass = 0.9 # Add new value to the domain of feature payment_type. payment_type_domain = tfdv.get_domain(schema, 'payment_type') payment_type_domain.value.append('Prcard') # Validate eval stats after updating the schema updated_anomalies = tfdv.validate_statistics(eval_stats, schema) tfdv.display_anomalies(updated_anomalies) """ Explanation: Fix evaluation anomalies in the schema Oops! It looks like we have some new values for company in our evaluation data, that we didn't have in our training data. We also have a new value for payment_type. These should be considered anomalies, but what we decide to do about them depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset. Key Point: How would our evaluation results be affected if we did not fix these problems? Unless we change our evaluation dataset we can't fix everything, but we can fix things in the schema that we're comfortable accepting. 
That includes relaxing our view of what is and what is not an anomaly for particular features, as well as updating our schema to include missing values for categorical features. TFDV has enabled us to discover what we need to fix. Let's make those fixes now, and then review one more time. End of explanation """ serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies) """ Explanation: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;) Schema Environments We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary. Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment. For example, in this dataset the tips feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly. End of explanation """ options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True) serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies) """ Explanation: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. 
It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation. In this case, we can safely convert INT values to FLOATs, so we want to tell TFDV to use our schema to infer the type. Let's do that now. End of explanation """ # All features are by default in both TRAINING and SERVING environments. schema.default_environment.append('TRAINING') schema.default_environment.append('SERVING') # Specify that 'tips' feature is not in SERVING environment. tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING') serving_anomalies_with_env = tfdv.validate_statistics( serving_stats, schema, environment='SERVING') tfdv.display_anomalies(serving_anomalies_with_env) """ Explanation: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that. End of explanation """ # Add skew comparator for 'payment_type' feature. payment_type = tfdv.get_feature(schema, 'payment_type') payment_type.skew_comparator.infinity_norm.threshold = 0.01 # Add drift comparator for 'company' feature. company=tfdv.get_feature(schema, 'company') company.drift_comparator.infinity_norm.threshold = 0.001 skew_anomalies = tfdv.validate_statistics(train_stats, schema, previous_statistics=eval_stats, serving_statistics=serving_stats) tfdv.display_anomalies(skew_anomalies) """ Explanation: Check for drift and skew In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema. 
Drift Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation. Skew TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew. Schema Skew Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema. Feature Skew Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when: A data source that provides some feature values is modified between training and serving time There is different logic for generating features between training and serving. For example, if you apply some transformation only in one of the two code paths. Distribution Skew Distribution skew occurs when the distribution of the training dataset is significantly different from the distribution of the serving dataset. One of the key causes for distribution skew is using different code or different data sources to generate the training dataset. Another reason is a faulty sampling mechanism that chooses a non-representative subsample of the serving data to train on. 
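Since drift here is expressed as an L-infinity distance between category distributions, the computation itself is small enough to sketch directly. The counts below are made up for illustration; TFDV computes this internally from the dataset statistics:

```python
import numpy as np

def linf_distance(counts_a, counts_b):
    """Max absolute difference between two normalized category distributions."""
    cats = sorted(set(counts_a) | set(counts_b))
    p_a = np.array([counts_a.get(c, 0) for c in cats], dtype=float)
    p_b = np.array([counts_b.get(c, 0) for c in cats], dtype=float)
    p_a /= p_a.sum()
    p_b /= p_b.sum()
    return float(np.max(np.abs(p_a - p_b)))

# Hypothetical 'payment_type' frequencies in two consecutive spans of data
span_n = {"Cash": 700, "Credit Card": 280, "No Charge": 20}
span_n1 = {"Cash": 650, "Credit Card": 330, "No Charge": 20}
drift = linf_distance(span_n, span_n1)
```

With these counts the distance is 0.05, so a threshold of 0.01 would raise a drift warning while a threshold of 0.1 would not — which is exactly the knob the comparators expose.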
End of explanation """ from tensorflow.python.lib.io import file_io from google.protobuf import text_format file_io.recursive_create_dir(OUTPUT_DIR) schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt') tfdv.write_schema_text(schema, schema_file) !cat {schema_file} """ Explanation: In this example we do see some drift, but it is well below the threshold that we've set. Freeze the schema Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. End of explanation """
Tsiems/machine-learning-projects
In_Class/ICA1_MachineLearning_PartA.ipynb
mit
from sklearn.datasets import load_diabetes import numpy as np from __future__ import print_function ds = load_diabetes() # this holds the continuous feature data # because ds.data is a matrix, there are some special properties we can access (like 'shape') print('features shape:', ds.data.shape, 'format is:', ('rows','columns')) # there are 442 instances and 10 features per instance print('range of target:', np.min(ds.target),np.max(ds.target)) from pprint import pprint # we can set the fields inside of ds and set them to new variables in python pprint(ds.data) # prints out elements of the matrix pprint(ds.target) # prints the vector (all 442 items) """ Explanation: Enter Team Member Names here (double click to edit): Name 1: Name 2: Name 3: In Class Assignment One In the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the rendered notebook) before the end of class (or right after class). The initial portion of this notebook is given before class and the remainder is given during class. Please answer the initial questions before class, to the best of your ability. Once class has started you may rework your answers as a team for the initial part of the assignment. <a id="top"></a> Contents <a href="#Loading">Loading the Data</a> <a href="#linearnumpy">Linear Regression</a> For in class session (not included here): <a href="#sklearn">Using Scikit Learn for Regression</a> <a href="#classification">Linear Classification</a> <a id="Loading"></a> <a href="#top">Back to Top</a> Loading the Data Please run the following code to read in the "diabetes" dataset from sklearn's data loading module. This will load the data into the variable ds. ds is a bunch object with fields like ds.data and ds.target. The field ds.data is a numpy matrix of the continuous features in the dataset. The object is not a pandas dataframe. 
It is a numpy matrix. Each row is a set of observed instances, each column is a different feature. It also has a field called ds.target that is a continuous value we are trying to predict. Each entry in ds.target is a label for each row of the ds.data matrix. End of explanation """ # Enter your answer here (or write code to calculate it) # 11 (one weight per feature, 10 features, plus 1 bias weight) """ Explanation: <a id="linearnumpy"></a> <a href="#top">Back to Top</a> Using Linear Regression In the videos, we derived the formula for calculating the optimal values of the regression weights (you must be connected to the internet for this equation to show up properly): $$ w = (X^TX)^{-1}X^Ty $$ where $X$ is the matrix of values with a bias column of ones appended onto it. For the diabetes dataset one could construct this $X$ matrix by stacking a column of ones onto the ds.data matrix. $$ X=\begin{bmatrix} & \vdots & & 1 \\ \dotsb & \text{ds.data} & \dotsb & \vdots \\ & \vdots & & 1 \end{bmatrix} $$ Question 1: For the diabetes dataset, how many elements will the vector $w$ contain? End of explanation """
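A quick numerical check of Question 1 can be sketched with the normal equation itself. The arrays `data` and `target` below are synthetic stand-ins with the same shape as `ds.data` and `ds.target` (442 x 10), so the snippet runs even without scikit-learn installed.

```python
import numpy as np

# Synthetic stand-ins with the same shape as ds.data / ds.target (442 x 10),
# so this sketch runs without scikit-learn installed.
rng = np.random.RandomState(0)
data = rng.rand(442, 10)
target = rng.rand(442)

# Stack a bias column of ones onto the feature matrix, as in the X above
X = np.hstack((data, np.ones((data.shape[0], 1))))

# w = (X^T X)^{-1} X^T y  -- one weight per feature plus one bias weight
w = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(target)
print(w.shape)  # (11,)
```

With 10 features plus the appended column of ones, `w` has 11 elements, matching the answer above.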
mtetkosk/European_Soccer_Prediction
notebooks/20170122_Data_Exploration.ipynb
mit
import pandas as pd import numpy as np from matplotlib import pyplot as plt %matplotlib inline """ Explanation: Data Exploration This notebook will perform exploratory analysis on the european soccer dataset before new feature creation. Additional exploration of new features is located within the feature creation notebook. End of explanation """ matches = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_matches.csv') print len(matches) print matches.head() matches.columns[:11] # Columns 1 - 10 identify the match and the number of goals scored by each team matches.columns[85:] # Columns 85 - 115 are betting odds from different websites matches.columns[11:55] # Columns 11-55 are (X,Y) coordinates for players on the pitch - Describing formation matches.columns[55:85] # Columns 55 - 77 give the player names. Columns 77-84 give some statistics based on the match. """ Explanation: First step is to read in the csv files created by the extraction notebook End of explanation """ matches_reduced = matches.copy() removecols = matches.columns[11:85] removecols_other = ['country_id','league_id'] for col in matches_reduced.columns: if col in removecols or col in removecols_other: del matches_reduced[col] print matches_reduced.shape #Reduced from 115 columns to 106 columns matches_reduced.season.value_counts() #Equal numer of matches per-season # What does the 'stage' variable mean? matches_reduced[matches_reduced.season=='2008/2009'].stage.value_counts() """ Explanation: Lets remove any variables from matches df that we won't need for this analysis End of explanation """ matches_reduced.head() """ Explanation: 'Stage' variable must mean 'week' of the season. Each 'stage' consists of 10 matches. This is a way to group matches by date. 
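One way to sanity-check the "each stage holds 10 matches" reading is a quick groupby count. A minimal sketch on a synthetic frame (the real csv lives on a local path, so hypothetical rows are used here):

```python
import pandas as pd

# Synthetic stand-in for matches_reduced: a 20-team league plays 10 matches
# per stage, so every stage should show exactly 10 rows.
df = pd.DataFrame({'season': ['2008/2009'] * 20,
                   'stage': [1] * 10 + [2] * 10})
matches_per_stage = df.groupby('stage').size()
print(matches_per_stage)
```

On the real matches_reduced frame, the same expression should return 10 for all 38 stages of a season.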
End of explanation """ null_dict = {} for col in matches_reduced.columns[4:]: nulls = matches_reduced[col].isnull().sum() if nulls > 0: null_dict[col] = nulls null_dict """ Explanation: Now let's check for missing values End of explanation """ for key in null_dict.keys(): if null_dict[key] > 10: del matches_reduced[key] matches_reduced.shape matches_reduced.to_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_Matches_Reduced.csv',index= False) team_attributes = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_team_attributes.csv') print len(team_attributes) print team_attributes.head() team_attributes['date'] team_attributes.columns null_dict = {} for col in team_attributes.columns[4:]: nulls = team_attributes[col].isnull().sum() if nulls > 0: null_dict[col] = nulls if team_attributes[col].dtype == 'int64' or team_attributes[col].dtype == 'float64': team_attributes[col].plot(kind = 'hist') plt.xlabel(col) plt.title(col + 'Histogram') plt.show() elif team_attributes[col].dtype == 'object': team_attributes[col].value_counts().plot(kind ='bar') #Build up play passing class value counts totals to 204, no nulls plt.title(col + 'Bar Chart') plt.show() null_dict """ Explanation: Many of the betting odds have null values. Let's remove the columns that have excessive nulls. End of explanation """ teams = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_teams.csv') print len(teams) print teams.head() teams.head() """ Explanation: From 'null_dict' object, only the attribute 'buildUpPlayDribbling' numeric attribute has null values. 
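The null-counting loop above can also be collapsed into a single pandas expression. A small sketch on a toy frame (hypothetical values, since the real csv is on a local path), mirroring the buildUpPlayDribbling situation:

```python
import numpy as np
import pandas as pd

# Toy frame with one numeric column containing nulls, mirroring the
# buildUpPlayDribbling finding above.
df = pd.DataFrame({'buildUpPlayDribbling': [50.0, np.nan, 48.0],
                   'buildUpPlaySpeed': [60, 55, 70]})
null_counts = df.isnull().sum()
print(null_counts[null_counts > 0])  # only columns that actually have nulls
```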
End of explanation """ player_attributes = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/Player_Attributes.csv') print len(player_attributes) print player_attributes.head() players = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/Players.csv') print len(players) print players.head() """ Explanation: Because players will not be used in this version of the model, we will not explore these attributes. End of explanation """
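If 'buildUpPlayDribbling' were kept for modelling, one common option is a median fill. This is an assumption on my part (the notebook itself never fills the nulls); a sketch on toy values:

```python
import pandas as pd

# Median-fill sketch for the one numeric column with nulls found above.
df = pd.DataFrame({'buildUpPlayDribbling': [50.0, None, 48.0, 52.0]})
median = df['buildUpPlayDribbling'].median()
df['buildUpPlayDribbling'] = df['buildUpPlayDribbling'].fillna(median)
print(df['buildUpPlayDribbling'].isnull().sum())  # 0
```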
mne-tools/mne-tools.github.io
0.23/_downloads/1a105d401683707ed0696f30397d6253/40_artifact_correction_ica.ipynb
bsd-3-clause
import os import mne from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs, corrmap) sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file) raw.crop(tmax=60.) """ Explanation: Repairing artifacts with ICA This tutorial covers the basics of independent components analysis (ICA) and shows how ICA can be used for artifact repair; an extended example illustrates repair of ocular and heartbeat artifacts. For conceptual background on ICA, see this scikit-learn tutorial &lt;sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py&gt;. We begin as always by importing the necessary Python modules and loading some example data &lt;sample-dataset&gt;. Because ICA can be computationally intense, we'll also crop the data to 60 seconds; and to save ourselves from repeatedly typing mne.preprocessing we'll directly import a few functions and classes from that submodule: End of explanation """ # pick some channels that clearly show heartbeats and blinks regexp = r'(MEG [12][45][123]1|EEG 00.)' artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp) raw.plot(order=artifact_picks, n_channels=len(artifact_picks), show_scrollbars=False) """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe the artifacts in your data to make sure you choose the right repair tool. Sometimes the right tool is no tool at all — if the artifacts are small enough you may not even need to repair them to get good analysis results. See `tut-artifact-overview` for guidance on detecting and visualizing various types of artifact.</p></div> What is ICA? Independent components analysis (ICA) is a technique for estimating independent source signals from a set of recordings in which the source signals were mixed together in unknown ratios. 
A common example of this is the problem of blind source separation_: with 3 musical instruments playing in the same room, and 3 microphones recording the performance (each picking up all 3 instruments, but at varying levels), can you somehow "unmix" the signals recorded by the 3 microphones so that you end up with a separate "recording" isolating the sound of each instrument? It is not hard to see how this analogy applies to EEG/MEG analysis: there are many "microphones" (sensor channels) simultaneously recording many "instruments" (blinks, heartbeats, activity in different areas of the brain, muscular activity from jaw clenching or swallowing, etc). As long as these various source signals are statistically independent_ and non-gaussian, it is usually possible to separate the sources using ICA, and then re-construct the sensor signals after excluding the sources that are unwanted. ICA in MNE-Python .. sidebar:: ICA and dimensionality reduction If you want to perform ICA with *no* dimensionality reduction (other than the number of Independent Components (ICs) given in ``n_components``, and any subsequent exclusion of ICs you specify in ``ICA.exclude``), simply pass ``n_components``. However, if you *do* want to reduce dimensionality, consider this example: if you have 300 sensor channels and you set ``n_components=50`` during instantiation and pass ``n_pca_components=None`` to `~mne.preprocessing.ICA.apply`, then the first 50 PCs are sent to the ICA algorithm (yielding 50 ICs), and during reconstruction `~mne.preprocessing.ICA.apply` will use the 50 ICs plus PCs number 51-300 (the full PCA residual). If instead you specify ``n_pca_components=120`` in `~mne.preprocessing.ICA.apply`, it will reconstruct using the 50 ICs plus the first 70 PCs in the PCA residual (numbers 51-120), thus discarding the smallest 180 components.
**If you have previously been using EEGLAB**'s ``runica()`` and are looking for the equivalent of its ``'pca', n`` option to reduce dimensionality, set ``n_components=n`` during initialization and pass ``n_pca_components=n`` to `~mne.preprocessing.ICA.apply`. MNE-Python implements three different ICA algorithms: fastica (the default), picard, and infomax. FastICA and Infomax are both in fairly widespread use; Picard is a newer (2017) algorithm that is expected to converge faster than FastICA and Infomax, and is more robust than other algorithms in cases where the sources are not completely independent, which typically happens with real EEG/MEG data. See :footcite:AblinEtAl2018 for more information. The ICA interface in MNE-Python is similar to the interface in scikit-learn_: some general parameters are specified when creating an ~mne.preprocessing.ICA object, then the ~mne.preprocessing.ICA object is fit to the data using its ~mne.preprocessing.ICA.fit method. The results of the fitting are added to the ~mne.preprocessing.ICA object as attributes that end in an underscore (_), such as ica.mixing_matrix_ and ica.unmixing_matrix_. After fitting, the ICA component(s) that you want to remove must be chosen, and the ICA fit must then be applied to the ~mne.io.Raw or ~mne.Epochs object using the ~mne.preprocessing.ICA object's ~mne.preprocessing.ICA.apply method. As is typically done with ICA, the data are first scaled to unit variance and whitened using principal components analysis (PCA) before performing the ICA decomposition. This is a two-stage process: To deal with different channel types having different units (e.g., Volts for EEG and Tesla for MEG), data must be pre-whitened. If noise_cov=None (default), all data of a given channel type is scaled by the standard deviation across all channels. If noise_cov is a ~mne.Covariance, the channels are pre-whitened using the covariance. The pre-whitened data are then decomposed using PCA. 
From the resulting principal components (PCs), the first n_components are then passed to the ICA algorithm if n_components is an integer number. It can also be a float between 0 and 1, specifying the fraction of explained variance that the PCs should capture; the appropriate number of PCs (i.e., just as many PCs as are required to explain the given fraction of total variance) is then passed to the ICA. After visualizing the Independent Components (ICs) and excluding any that capture artifacts you want to repair, the sensor signal can be reconstructed using the ~mne.preprocessing.ICA object's ~mne.preprocessing.ICA.apply method. By default, signal reconstruction uses all of the ICs (less any ICs listed in ICA.exclude) plus all of the PCs that were not included in the ICA decomposition (i.e., the "PCA residual"). If you want to reduce the number of components used at the reconstruction stage, it is controlled by the n_pca_components parameter (which will in turn reduce the rank of your data; by default n_pca_components=None resulting in no additional dimensionality reduction). The fitting and reconstruction procedures and the parameters that control dimensionality at various stages are summarized in the diagram below: .. raw:: html <a href= "../../_images/graphviz-7483cb1cf41f06e2a4ef451b17f073dbe584ba30.png"> .. graphviz:: ../../_static/diagrams/ica.dot :alt: Diagram of ICA procedure in MNE-Python :align: left .. raw:: html </a> See the Notes section of the ~mne.preprocessing.ICA documentation for further details. Next we'll walk through an extended example that illustrates each of these steps in greater detail. Example: EOG and ECG artifact repair Visualizing the artifacts Let's begin by visualizing the artifacts that we want to repair. 
In this dataset they are big enough to see easily in the raw data: End of explanation """ eog_evoked = create_eog_epochs(raw).average() eog_evoked.apply_baseline(baseline=(None, -0.2)) eog_evoked.plot_joint() """ Explanation: We can get a summary of how the ocular artifact manifests across each channel type using ~mne.preprocessing.create_eog_epochs like we did in the tut-artifact-overview tutorial: End of explanation """ ecg_evoked = create_ecg_epochs(raw).average() ecg_evoked.apply_baseline(baseline=(None, -0.2)) ecg_evoked.plot_joint() """ Explanation: Now we'll do the same for the heartbeat artifacts, using ~mne.preprocessing.create_ecg_epochs: End of explanation """ filt_raw = raw.copy() filt_raw.load_data().filter(l_freq=1., h_freq=None) """ Explanation: Filtering to remove slow drifts Before we run the ICA, an important step is filtering the data to remove low-frequency drifts, which can negatively affect the quality of the ICA fit. The slow drifts are problematic because they reduce the independence of the assumed-to-be-independent sources (e.g., during a slow upward drift, the neural, heartbeat, blink, and other muscular sources will all tend to have higher values), making it harder for the algorithm to find an accurate solution. A high-pass filter with 1 Hz cutoff frequency is recommended. However, because filtering is a linear operation, the ICA solution found from the filtered signal can be applied to the unfiltered signal (see :footcite:WinklerEtAl2015 for more information), so we'll keep a copy of the unfiltered ~mne.io.Raw object around so we can apply the ICA solution to it later. End of explanation """ ica = ICA(n_components=15, max_iter='auto', random_state=97) ica.fit(filt_raw) """ Explanation: Fitting and plotting the ICA solution .. sidebar:: Ignoring the time domain The ICA algorithms implemented in MNE-Python find patterns across channels, but ignore the time domain. 
This means you can compute ICA on discontinuous `~mne.Epochs` or `~mne.Evoked` objects (not just continuous `~mne.io.Raw` objects), or only use every Nth sample by passing the ``decim`` parameter to ``ICA.fit()``. .. note:: `~mne.Epochs` used for fitting ICA should not be baseline-corrected. Because cleaning the data via ICA may introduce DC offsets, we suggest to baseline correct your data **after** cleaning (and not before), should you require baseline correction. Now we're ready to set up and fit the ICA. Since we know (from observing our raw data) that the EOG and ECG artifacts are fairly strong, we would expect those artifacts to be captured in the first few dimensions of the PCA decomposition that happens before the ICA. Therefore, we probably don't need a huge number of components to do a good job of isolating our artifacts (though it is usually preferable to include more components for a more accurate solution). As a first guess, we'll run ICA with n_components=15 (use only the first 15 PCA components to compute the ICA decomposition) — a very small number given that our data has over 300 channels, but with the advantage that it will run quickly and we will able to tell easily whether it worked or not (because we already know what the EOG / ECG artifacts should look like). ICA fitting is not deterministic (e.g., the components may get a sign flip on different runs, or may not always be returned in the same order), so we'll also specify a random seed_ so that we get identical results each time this tutorial is built by our web servers. 
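The role of ``n_components`` can be sketched outside MNE. Below is a deliberately simplified NumPy version of the "keep only the leading PCs" step (MNE's actual pre-whitening is more involved, as described above; the channel and sample counts here are just illustrative):

```python
import numpy as np

# 300 "channels" x 1000 "samples", echoing the 300-channel example above.
rng = np.random.RandomState(97)
data = rng.randn(300, 1000)
data = data - data.mean(axis=1, keepdims=True)

# Eigendecomposition of the channel covariance -> principal components
cov = data.dot(data.T) / data.shape[1]
evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(evals)[::-1]      # re-sort descending

n_components = 15                    # same choice as for the ICA above
pcs = evecs[:, order[:n_components]].T.dot(data)
print(pcs.shape)  # (15, 1000): roughly what the ICA algorithm would receive
```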
End of explanation """ raw.load_data() ica.plot_sources(raw, show_scrollbars=False) """ Explanation: Some optional parameters that we could have passed to the ~mne.preprocessing.ICA.fit method include decim (to use only every Nth sample in computing the ICs, which can yield a considerable speed-up) and reject (for providing a rejection dictionary for maximum acceptable peak-to-peak amplitudes for each channel type, just like we used when creating epoched data in the tut-overview tutorial). Now we can examine the ICs to see what they captured. ~mne.preprocessing.ICA.plot_sources will show the time series of the ICs. Note that in our call to ~mne.preprocessing.ICA.plot_sources we can use the original, unfiltered ~mne.io.Raw object: End of explanation """ ica.plot_components() """ Explanation: Here we can pretty clearly see that the first component (ICA000) captures the EOG signal quite well, and the second component (ICA001) looks a lot like a heartbeat &lt;qrs_&gt; (for more info on visually identifying Independent Components, this EEGLAB tutorial is a good resource). We can also visualize the scalp field distribution of each component using ~mne.preprocessing.ICA.plot_components. These are interpolated based on the values in the ICA mixing matrix: End of explanation """ # blinks ica.plot_overlay(raw, exclude=[0], picks='eeg') # heartbeats ica.plot_overlay(raw, exclude=[1], picks='mag') """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.ICA.plot_components` (which plots the scalp field topographies for each component) has an optional ``inst`` parameter that takes an instance of `~mne.io.Raw` or `~mne.Epochs`. 
Passing ``inst`` makes the scalp topographies interactive: clicking one will bring up a diagnostic `~mne.preprocessing.ICA.plot_properties` window (see below) for that component.</p></div> In the plots above it's fairly obvious which ICs are capturing our EOG and ECG artifacts, but there are additional ways to visualize them anyway just to be sure. First, we can plot an overlay of the original signal against the reconstructed signal with the artifactual ICs excluded, using ~mne.preprocessing.ICA.plot_overlay: End of explanation """ # blinks ica.plot_overlay(raw, exclude=[0], picks='eeg') # heartbeats ica.plot_overlay(raw, exclude=[1], picks='mag') """ Explanation: We can also plot some diagnostics of each IC using ~mne.preprocessing.ICA.plot_properties: End of explanation """ ica.plot_properties(raw, picks=[0, 1]) """ Explanation: In the remaining sections, we'll look at different ways of choosing which ICs to exclude prior to reconstructing the sensor signals. Selecting ICA components manually Once we're certain which components we want to exclude, we can specify that manually by setting the ica.exclude attribute. Similar to marking bad channels, merely setting ica.exclude doesn't do anything immediately (it just adds the excluded ICs to a list that will get used later when it's needed). Once the exclusions have been set, ICA methods like ~mne.preprocessing.ICA.plot_overlay will exclude those component(s) even if no exclude parameter is passed, and the list of excluded components will be preserved when using mne.preprocessing.ICA.save and mne.preprocessing.read_ica.
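What "excluding" a component means for the reconstruction can be sketched in plain NumPy. This is a toy mixing model, not MNE's internal code: zeroing a source before re-mixing removes exactly that component's contribution from the sensor data.

```python
import numpy as np

# Toy model: 4 channels observing 3 independent sources through a mixing matrix.
rng = np.random.RandomState(0)
sources = rng.randn(3, 500)
mixing = rng.randn(4, 3)
data = mixing.dot(sources)

exclude = [0]                     # pretend source 0 is the blink component
kept = sources.copy()
kept[exclude, :] = 0.0
cleaned = mixing.dot(kept)        # analogous to ica.apply() with ica.exclude set

# what was removed is exactly the excluded component's contribution
removed = data - cleaned
print(np.allclose(removed, np.outer(mixing[:, 0], sources[0])))  # True
```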
End of explanation """ # ica.apply() changes the Raw object in-place, so let's make a copy first: reconst_raw = raw.copy() ica.apply(reconst_raw) raw.plot(order=artifact_picks, n_channels=len(artifact_picks), show_scrollbars=False) reconst_raw.plot(order=artifact_picks, n_channels=len(artifact_picks), show_scrollbars=False) del reconst_raw """ Explanation: Now that the exclusions have been set, we can reconstruct the sensor signals with artifacts removed using the ~mne.preprocessing.ICA.apply method (remember, we're applying the ICA solution from the filtered data to the original unfiltered signal). Plotting the original raw data alongside the reconstructed data shows that the heartbeat and blink artifacts are repaired. End of explanation """ ica.exclude = [] # find which ICs match the EOG pattern eog_indices, eog_scores = ica.find_bads_eog(raw) ica.exclude = eog_indices # barplot of ICA component "EOG match" scores ica.plot_scores(eog_scores) # plot diagnostics ica.plot_properties(raw, picks=eog_indices) # plot ICs applied to raw data, with EOG matches highlighted ica.plot_sources(raw, show_scrollbars=False) # plot ICs applied to the averaged EOG epochs, with EOG matches highlighted ica.plot_sources(eog_evoked) """ Explanation: Using an EOG channel to select ICA components It may have seemed easy to review the plots and manually select which ICs to exclude, but when processing dozens or hundreds of subjects this can become a tedious, rate-limiting step in the analysis pipeline. One alternative is to use dedicated EOG or ECG sensors as a "pattern" to check the ICs against, and automatically mark for exclusion any ICs that match the EOG/ECG pattern. Here we'll use ~mne.preprocessing.ICA.find_bads_eog to automatically find the ICs that best match the EOG signal, then use ~mne.preprocessing.ICA.plot_scores along with our other plotting functions to see which ICs it picked. 
We'll start by resetting ica.exclude back to an empty list: End of explanation """ ica.exclude = [] # find which ICs match the ECG pattern ecg_indices, ecg_scores = ica.find_bads_ecg(raw, method='correlation', threshold='auto') ica.exclude = ecg_indices # barplot of ICA component "ECG match" scores ica.plot_scores(ecg_scores) # plot diagnostics ica.plot_properties(raw, picks=ecg_indices) # plot ICs applied to raw data, with ECG matches highlighted ica.plot_sources(raw, show_scrollbars=False) # plot ICs applied to the averaged ECG epochs, with ECG matches highlighted ica.plot_sources(ecg_evoked) """ Explanation: Note that above we used ~mne.preprocessing.ICA.plot_sources on both the original ~mne.io.Raw instance and also on an ~mne.Evoked instance of the extracted EOG artifacts. This can be another way to confirm that ~mne.preprocessing.ICA.find_bads_eog has identified the correct components. Using a simulated channel to select ICA components If you don't have an EOG channel, ~mne.preprocessing.ICA.find_bads_eog has a ch_name parameter that you can use as a proxy for EOG. You can use a single channel, or create a bipolar reference from frontal EEG sensors and use that as virtual EOG channel. This carries a risk however: you must hope that the frontal EEG channels only reflect EOG and not brain dynamics in the prefrontal cortex (or you must not care about those prefrontal signals). For ECG, it is easier: ~mne.preprocessing.ICA.find_bads_ecg can use cross-channel averaging of magnetometer or gradiometer channels to construct a virtual ECG channel, so if you have MEG channels it is usually not necessary to pass a specific channel name. ~mne.preprocessing.ICA.find_bads_ecg also has two options for its method parameter: 'ctps' (cross-trial phase statistics :footcite:DammersEtAl2008) and 'correlation' (Pearson correlation between data and ECG channel). 
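The 'correlation' scoring idea can be sketched in a few lines. This is deliberately simplified (``find_bads_eog`` and ``find_bads_ecg`` do more, e.g. filtering and adaptive z-score thresholds); it only shows the core idea of scoring each IC against a reference artifact channel:

```python
import numpy as np

# Score each IC by Pearson correlation with a reference artifact trace.
rng = np.random.RandomState(1)
ref = np.sin(np.linspace(0, 20, 1000))          # stand-in EOG/ECG trace
ics = np.vstack([ref + 0.1 * rng.randn(1000),   # IC 0: artifact-like
                 rng.randn(1000),               # IC 1: unrelated
                 rng.randn(1000)])              # IC 2: unrelated

scores = np.array([np.corrcoef(ic, ref)[0, 1] for ic in ics])
bad_ics = np.where(np.abs(scores) > 0.9)[0]
print(bad_ics)  # [0]
```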
End of explanation """ # refit the ICA with 30 components this time new_ica = ICA(n_components=30, max_iter='auto', random_state=97) new_ica.fit(filt_raw) # find which ICs match the ECG pattern ecg_indices, ecg_scores = new_ica.find_bads_ecg(raw, method='correlation', threshold='auto') new_ica.exclude = ecg_indices # barplot of ICA component "ECG match" scores new_ica.plot_scores(ecg_scores) # plot diagnostics new_ica.plot_properties(raw, picks=ecg_indices) # plot ICs applied to raw data, with ECG matches highlighted new_ica.plot_sources(raw, show_scrollbars=False) # plot ICs applied to the averaged ECG epochs, with ECG matches highlighted new_ica.plot_sources(ecg_evoked) """ Explanation: The last of these plots is especially useful: it shows us that the heartbeat artifact is coming through on two ICs, and we've only caught one of them. In fact, if we look closely at the output of ~mne.preprocessing.ICA.plot_sources (online, you can right-click → "view image" to zoom in), it looks like ICA014 has a weak periodic component that is in-phase with ICA001. It might be worthwhile to re-run the ICA with more components to see if that second heartbeat artifact resolves out a little better: End of explanation """ # clean up memory before moving on del raw, ica, new_ica """ Explanation: Much better! Now we've captured both ICs that are reflecting the heartbeat artifact (and as a result, we got two diagnostic plots: one for each IC that reflects the heartbeat). This demonstrates the value of checking the results of automated approaches like ~mne.preprocessing.ICA.find_bads_ecg before accepting them. 
End of explanation """ mapping = { 'Fc5.': 'FC5', 'Fc3.': 'FC3', 'Fc1.': 'FC1', 'Fcz.': 'FCz', 'Fc2.': 'FC2', 'Fc4.': 'FC4', 'Fc6.': 'FC6', 'C5..': 'C5', 'C3..': 'C3', 'C1..': 'C1', 'Cz..': 'Cz', 'C2..': 'C2', 'C4..': 'C4', 'C6..': 'C6', 'Cp5.': 'CP5', 'Cp3.': 'CP3', 'Cp1.': 'CP1', 'Cpz.': 'CPz', 'Cp2.': 'CP2', 'Cp4.': 'CP4', 'Cp6.': 'CP6', 'Fp1.': 'Fp1', 'Fpz.': 'Fpz', 'Fp2.': 'Fp2', 'Af7.': 'AF7', 'Af3.': 'AF3', 'Afz.': 'AFz', 'Af4.': 'AF4', 'Af8.': 'AF8', 'F7..': 'F7', 'F5..': 'F5', 'F3..': 'F3', 'F1..': 'F1', 'Fz..': 'Fz', 'F2..': 'F2', 'F4..': 'F4', 'F6..': 'F6', 'F8..': 'F8', 'Ft7.': 'FT7', 'Ft8.': 'FT8', 'T7..': 'T7', 'T8..': 'T8', 'T9..': 'T9', 'T10.': 'T10', 'Tp7.': 'TP7', 'Tp8.': 'TP8', 'P7..': 'P7', 'P5..': 'P5', 'P3..': 'P3', 'P1..': 'P1', 'Pz..': 'Pz', 'P2..': 'P2', 'P4..': 'P4', 'P6..': 'P6', 'P8..': 'P8', 'Po7.': 'PO7', 'Po3.': 'PO3', 'Poz.': 'POz', 'Po4.': 'PO4', 'Po8.': 'PO8', 'O1..': 'O1', 'Oz..': 'Oz', 'O2..': 'O2', 'Iz..': 'Iz' } raws = list() icas = list() for subj in range(4): # EEGBCI subjects are 1-indexed; run 3 is a left/right hand movement task fname = mne.datasets.eegbci.load_data(subj + 1, runs=[3])[0] raw = mne.io.read_raw_edf(fname) # remove trailing `.` from channel names so we can set montage raw.rename_channels(mapping) raw.set_montage('standard_1005') # high-pass filter raw_filt = raw.copy().load_data().filter(l_freq=1., h_freq=None) # fit ICA ica = ICA(n_components=30, max_iter='auto', random_state=97) ica.fit(raw_filt) raws.append(raw) icas.append(ica) """ Explanation: Selecting ICA components using template matching When dealing with multiple subjects, it is also possible to manually select an IC for exclusion on one subject, and then use that component as a template for selecting which ICs to exclude from other subjects' data, using mne.preprocessing.corrmap :footcite:CamposViolaEtAl2009. 
The idea behind ~mne.preprocessing.corrmap is that the artifact patterns are similar enough across subjects that corresponding ICs can be identified by correlating the ICs from each ICA solution with a common template, and picking the ICs with the highest correlation strength. ~mne.preprocessing.corrmap takes a list of ICA solutions, and a template parameter that specifies which ICA object and which component within it to use as a template. Since our sample dataset only contains data from one subject, we'll use a different dataset with multiple subjects: the EEGBCI dataset :footcite:SchalkEtAl2004,GoldbergerEtAl2000. The dataset has 109 subjects, we'll just download one run (a left/right hand movement task) from each of the first 4 subjects: End of explanation """ # use the first subject as template; use Fpz as proxy for EOG raw = raws[0] ica = icas[0] eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name='Fpz') corrmap(icas, template=(0, eog_inds[0])) """ Explanation: Now let's run ~mne.preprocessing.corrmap: End of explanation """ for index, (ica, raw) in enumerate(zip(icas, raws)): fig = ica.plot_sources(raw, show_scrollbars=False) fig.subplots_adjust(top=0.9) # make space for title fig.suptitle('Subject {}'.format(index)) """ Explanation: The first figure shows the template map, while the second figure shows all the maps that were considered a "match" for the template (including the template itself). There were only three matches from the four subjects; notice the output message No maps selected for subject(s) 1, consider a more liberal threshold. By default the threshold is set automatically by trying several values; here it may have chosen a threshold that is too high. Let's take a look at the ICA sources for each subject: End of explanation """ corrmap(icas, template=(0, eog_inds[0]), threshold=0.9) """ Explanation: Notice that subject 1 does seem to have an IC that looks like it reflects blink artifacts (component ICA000). 
Notice also that subject 3 appears to have two components that are reflecting ocular artifacts (ICA000 and ICA002), but only one was caught by ~mne.preprocessing.corrmap. Let's try setting the threshold manually: End of explanation """ corrmap(icas, template=(0, eog_inds[0]), threshold=0.9, label='blink', plot=False) print([ica.labels_ for ica in icas]) """ Explanation: Now we get the message At least 1 IC detected for each subject (which is good). At this point we'll re-run ~mne.preprocessing.corrmap with parameters label='blink', plot=False to label the ICs from each subject that capture the blink artifacts (without plotting them again). End of explanation """ icas[3].plot_components(picks=icas[3].labels_['blink']) icas[3].exclude = icas[3].labels_['blink'] icas[3].plot_sources(raws[3], show_scrollbars=False) """ Explanation: Notice that the first subject has 3 different labels for the IC at index 0: "eog/0/Fpz", "eog", and "blink". The first two were added by ~mne.preprocessing.ICA.find_bads_eog; the "blink" label was added by the last call to ~mne.preprocessing.corrmap. Notice also that each subject has at least one IC index labelled "blink", and subject 3 has two components (0 and 2) labelled "blink" (consistent with the plot of IC sources above). The labels_ attribute of ~mne.preprocessing.ICA objects can also be manually edited to annotate the ICs with custom labels. They also come in handy when plotting: End of explanation """ template_eog_component = icas[0].get_components()[:, eog_inds[0]] corrmap(icas, template=template_eog_component, threshold=0.9) print(template_eog_component) """ Explanation: As a final note, it is possible to extract ICs numerically using the ~mne.preprocessing.ICA.get_components method of ~mne.preprocessing.ICA objects. 
This will return a :class:NumPy array &lt;numpy.ndarray&gt; that can be passed to ~mne.preprocessing.corrmap instead of the :class:tuple of (subject_index, component_index) we passed before, and will yield the same result: End of explanation """ filt_raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True).load_data() filt_raw.filter(1, 30, fir_design='firwin') # peak-to-peak amplitude rejection parameters reject = dict(grad=4000e-13, mag=4e-12) # create longer and more epochs for more artifact exposure events = mne.find_events(filt_raw, stim_channel='STI 014') # don't baseline correct epochs epochs = mne.Epochs(filt_raw, events, event_id=None, tmin=-0.2, tmax=0.5, reject=reject, baseline=None) """ Explanation: An advantage of using this numerical representation of an IC to capture a particular artifact pattern is that it can be saved and used as a template for future template-matching tasks using ~mne.preprocessing.corrmap without having to load or recompute the ICA solution that yielded the template originally. Put another way, when the template is a NumPy array, the ~mne.preprocessing.ICA object containing the template does not need to be in the list of ICAs provided to ~mne.preprocessing.corrmap. .. LINKS https://en.wikipedia.org/wiki/Signal_separation https://en.wikipedia.org/wiki/Independence_(probability_theory) Compute ICA components on Epochs ICA is now fit to epoched MEG data instead of the raw data. We assume that the non-stationary EOG artifacts have already been removed. The sources matching the ECG are automatically found and displayed. <div class="alert alert-info"><h4>Note</h4><p>This example is computationally intensive, so it might take a few minutes to complete.</p></div> Read and preprocess the data. 
Preprocessing consists of: MEG channel selection 1-30 Hz band-pass filter epoching -0.2 to 0.5 seconds with respect to events rejection based on peak-to-peak amplitude Note that we don't baseline correct the epochs here – we'll do this after cleaning with ICA is completed. Baseline correction before ICA is not recommended by the MNE-Python developers, as it doesn't guarantee optimal results. End of explanation """ ica = ICA(n_components=15, method='fastica', max_iter="auto").fit(epochs) ecg_epochs = create_ecg_epochs(filt_raw, tmin=-.5, tmax=.5) ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, threshold='auto') ica.plot_components(ecg_inds) """ Explanation: Fit ICA model using the FastICA algorithm, detect and plot components explaining ECG artifacts. End of explanation """ ica.plot_properties(epochs, picks=ecg_inds) """ Explanation: Plot the properties of the ECG components: End of explanation """ ica.plot_sources(filt_raw, picks=ecg_inds) """ Explanation: Plot the estimated sources of detected ECG related components: End of explanation """
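The core idea behind ~mne.preprocessing.corrmap -- matching candidate ICs against a template by correlation -- can be illustrated with a small self-contained sketch. This is a simplification, not MNE's actual implementation, and the template/component vectors below are made up for illustration:

```python
import math

def pearson(a, b):
    # Pearson correlation between two equal-length sequences
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def match_template(template, components, threshold=0.9):
    # Keep components whose |r| with the template exceeds the threshold;
    # the absolute value matters because ICA components have arbitrary sign.
    return [i for i, comp in enumerate(components)
            if abs(pearson(template, comp)) > threshold]

template = [0.1, 0.9, 0.8, 0.1, -0.5]
components = [
    [0.1, 0.9, 0.8, 0.1, -0.5],     # near-identical: a match
    [-0.1, -0.9, -0.8, -0.1, 0.5],  # sign-flipped: still a match
    [0.5, -0.2, 0.1, 0.9, 0.3],     # unrelated: no match
]
print(match_template(template, components))  # [0, 1]
```

Lowering the threshold, as was done manually for subject 3 above, simply admits components with weaker correlations to the template.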
chemiskyy/simmit
Examples/Continuum_Mechanics/contimech.ipynb
gpl-3.0
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from simmit import smartplus as sim import os """ Explanation: contimech : tools and functions useful in Continuum Mechanics End of explanation """ v = np.random.rand(6) trace = sim.tr(v) print v print trace """ Explanation: tr(vec) Provides the trace of a second order tensor written as a vector v in the SMART+ formalism. Return a double. Example: End of explanation """ v = np.random.rand(6) v_dev = sim.dev(v) print v print v_dev """ Explanation: dev(vec) Provides the deviatoric part of a second order tensor written as a vector v in the SMART+ formalism. Return a vec. Example: End of explanation """ v = np.random.rand(6) Mises_sig = sim.Mises_stress(v) print v print Mises_sig """ Explanation: Mises_stress(vec) Provides the Von Mises stress $\sigma^{Mises}$ of a second order stress tensor written as a vector v in the SMART+ formalism. Return a double. End of explanation """ v = np.random.rand(6) Mises_eps = sim.Mises_strain(v) print v print Mises_eps """ Explanation: Mises_strain(vec) Provides the Von Mises strain $\varepsilon^{Mises}$ of a second order stress tensor written as a vector v in the SMART+ formalism. Return a double. End of explanation """ v = np.random.rand(6) sigma_f = sim.eta_stress(v) print v print sigma_f """ Explanation: eta_stress(vec) Provides the stress flow: $$\eta_{stress}=\frac{3/2\sigma_{dev}}{\sigma_{Mises}}$$ from a second order stress tensor written as a vector v in the SMART+ formalism (i.e. the shear terms are multiplied by 2, providing shear angles). Return a vec. End of explanation """ v = np.random.rand(6) eps_f = sim.eta_strain(v) print v print eps_f """ Explanation: eta_strain(vec) Provides the strain flow $$\eta_{strain}=\frac{2/3\varepsilon_{dev}}{\varepsilon_{Mises}}$$ from a second order strain tensor written as a vector v in the SMART+ formalism (i.e. the shear terms are multiplied by 2, providing shear angles). Return a vec. 
End of explanation """ v = np.random.rand(6) m = sim.v2t_stress(v); print v print m """ Explanation: v2t_stress(vec) Converts a second order stress tensor written as a vector v in the SMART+ formalism into a second order stress tensor written as a matrix m. Return a mat. End of explanation """ m = np.random.rand(3,3) m_symm = (m + m.T)/2 v = sim.t2v_stress(m_symm); print m_symm print v """ Explanation: t2v_stress(mat) Converts a second order stress tensor written as a matrix m in the SMART+ formalism into a second order stress tensor written as a vector v. Return a vec. End of explanation """ v = np.random.rand(6) m = sim.v2t_strain(v); print v print m """ Explanation: v2t_strain(vec) Converts a second order strain tensor written as a vector v in the SMART+ formalism into a second order strain tensor written as a matrix m. Return a mat. End of explanation """ m = np.random.rand(3,3) m_symm = (m + m.T)/2 v = sim.t2v_strain(m_symm); print m_symm print v """ Explanation: t2v_strain(mat) Converts a second order strain tensor written as a matrix m in the SMART+ formalism into a second order strain tensor written as a vector v. Return a vec. End of explanation """ v = np.random.rand(6) J2 = sim.J2_stress(v) print v print J2 """ Explanation: J2_stress(vec) Provides the second invariant of a second order stress tensor written as a vector v in the SMART+ formalism. Return a double. $$ J_2 (\boldsymbol{\sigma}) = \frac{1}{2} \sigma_{ij} \sigma_{ij} $$ End of explanation """ v = np.random.rand(6) J2 = sim.J2_strain(v) print v print J2 """ Explanation: J2_strain(vec) Provides the second invariant of a second order strain tensor written as a vector v in the SMART+ formalism. Return a double.
$$ J_2 (\boldsymbol{\varepsilon}) = \frac{1}{2} \varepsilon_{ij} \varepsilon_{ij} $$ End of explanation """ v = np.random.rand(6) J3 = sim.J3_stress(v) print v print J3 """ Explanation: J3_stress(vec) Provides the third invariant of a second order stress tensor written as a vector v in the SMART+ formalism. Return a double. $$ J_3 (\boldsymbol{\sigma}) = \frac{1}{2} \sigma_{ij} \sigma_{jk} \sigma_{ki} $$ End of explanation """ v = np.random.rand(6) J3 = sim.J3_strain(v) print v print J3 """ Explanation: J3_strain(vec) Provides the third invariant of a second order strain tensor written as a vector v in the SMART+ formalism. Return a double. $$ J_3 (\boldsymbol{\varepsilon}) = \frac{1}{2} \varepsilon_{ij} \varepsilon_{jk} \varepsilon_{ki} $$ End of explanation """
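The invariant formulas documented above are easy to reproduce in plain Python. The sketch below assumes the 6-component ordering [s11, s22, s33, s12, s13, s23] for the vector -- an assumption for illustration only; it ignores any shear scaling that the SMART+ formalism applies to strain vectors:

```python
def v2t_stress(v):
    # Rebuild the symmetric 3x3 tensor from the assumed 6-component ordering
    s11, s22, s33, s12, s13, s23 = v
    return [[s11, s12, s13],
            [s12, s22, s23],
            [s13, s23, s33]]

def tr(v):
    # Trace: sum of the diagonal components
    return v[0] + v[1] + v[2]

def J2_stress(v):
    # J2 = 1/2 * s_ij * s_ij
    m = v2t_stress(v)
    return 0.5 * sum(m[i][j] ** 2 for i in range(3) for j in range(3))

def J3_stress(v):
    # J3 = 1/2 * s_ij * s_jk * s_ki
    m = v2t_stress(v)
    return 0.5 * sum(m[i][j] * m[j][k] * m[k][i]
                     for i in range(3) for j in range(3) for k in range(3))

v = [1.0, 2.0, 3.0, 0.5, 0.0, 0.0]
print(tr(v))         # 6.0
print(J2_stress(v))  # 0.5 * (1 + 4 + 9 + 2 * 0.25) = 7.25
```

For a diagonal (shear-free) tensor, the triple sum in J3 reduces to half the sum of the cubes of the diagonal entries, which is a handy sanity check.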
ZhiangChen/deep_learning
thesis/Data Preprocess.ipynb
mit
from six.moves import cPickle as pickle import matplotlib.pyplot as plt import os from random import sample, shuffle import numpy as np """ Explanation: Data Preprocess Zhiang Chen, March 2017 This notebook builds the training dataset, validation dataset and test dataset. First, it reads the 24 pickle files. These 24 pickle files contain data from three different desk heights: files [1-8] come from height_1, files [9-16] from height_2, and files [17-24] from height_3. Two pickle files are randomly chosen from each group of 8 to compose the validation and test datasets; of those six, three are randomly chosen as the validation dataset and the rest form the test dataset. 1. Import the necessary packages End of explanation """ files = os.listdir('pickle') dataset = dict() for file_name in files: with open('pickle/'+file_name, 'rb') as f: save = pickle.load(f) dataset.setdefault(file_name, save['image']) del save """ Explanation: 2. Read pickle files End of explanation """ v_t = sample(xrange(1,9),2) + sample(xrange(9,17),2) + sample(xrange(17,25),2) shuffle(v_t) valid = v_t[:3] test = v_t[3:] train = list(set(range(1,25)) - set(v_t)) def get_names(ls): return ['p'+str(x) for x in ls] train = get_names(train) valid = get_names(valid) test = get_names(test) print('train',train) print('valid',valid) print('test',test) def add_dic(x,y): return dict(x.items() + y.items()) def get_data(name_list): data = [dataset.get(name,False) for name in name_list] return reduce(add_dic,data) # the dictionary is {name:numpy}; for example, one of the names is '30-8-1-gball-288.png' train_dataset = get_data(train) valid_dataset = get_data(valid) test_dataset = get_data(test) """ Explanation: 3.
Group dataset End of explanation """ non_orientations = ['empty','cup','tball','pball','gball'] image_size = 50 def label_data(data): objects = list() orientations = list() values = list() for name, value in data.iteritems(): obj = name.split('.')[0].split('-')[-2] # object name ori = name.split('.')[0].split('-')[-1] # orientation objects.append(obj) if obj in non_orientations: orientations.append(0) elif obj == 'gstick': if name.split('.')[0].split('-')[2] in ['1','3']: orientations.append(0) else: orientations.append(int(ori)) else: orientations.append(int(ori)) values.append(value.reshape(image_size,image_size,1).astype(np.float32)) return objects, orientations, values train_objects, train_orientations, train_values = label_data(train_dataset) valid_objects, valid_orientations, valid_values = label_data(valid_dataset) test_objects, test_orientations, test_values = label_data(test_dataset) """ Explanation: 4. Label the data End of explanation """ object2value = {'empty':0,'duck':1,'cup':2,'sponge':3,'tball':4,'pball':5,'gball':6,'gstick':7,'nerf':8,'calc':9,'stapler':10} value2object = dict((value,name) for name,value in object2value.items()) orientations = [18*x for x in range(20)] def convert_objects(objects): obj_values = np.asarray([object2value[obj] for obj in objects]) return (np.arange(len(object2value)) == obj_values[:,None]).astype(np.float32) def convert_orientations(orientations): ori_values = np.asarray(orientations)/18%10 return (np.arange(10) == ori_values[:,None]).astype(np.float32) train_objects_ = convert_objects(train_objects) valid_objects_ = convert_objects(valid_objects) test_objects_ = convert_objects(test_objects) train_orientations_ = convert_orientations(train_orientations) valid_orientations_ = convert_orientations(valid_orientations) test_orientations_ = convert_orientations(test_orientations) train_values_ = np.asarray(train_values).astype(np.float32) valid_values_ = np.asarray(valid_values).astype(np.float32) test_values_ = 
np.asarray(test_values).astype(np.float32) """ Explanation: 5. Convert one-hot code End of explanation """ data_file = 'depth_data' with open(data_file,'wb') as f: save={ 'train_orientations':train_orientations_, 'valid_orientations':valid_orientations_, 'test_orientations':test_orientations_, 'train_objects':train_objects_, 'valid_objects':valid_objects_, 'test_objects':test_objects_, 'train_values':train_values_, 'valid_values':valid_values_, 'test_values':test_values_, 'object2value':object2value, 'value2object':value2object } pickle.dump(save,f) f.close() statinfo = os.stat(data_file) file_size = float(statinfo.st_size)/1000 print('Compressed data size: %0.1fkB' % file_size) """ Explanation: 6. Save data End of explanation """ def randomize(dataset, classes, angles): permutation = np.random.permutation(classes.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_classes = classes[permutation] shuffled_angles = angles[permutation] return shuffled_dataset, shuffled_classes, shuffled_angles train_dataset, train_classes, train_angles = randomize(train_values_, train_objects_, train_orientations_) small_data = train_dataset[0:100,:,:,:] with open('small_data','wb') as f: save={ 'small_data':small_data, } pickle.dump(save,f,pickle.HIGHEST_PROTOCOL) f.close() ##test image = train_dataset[11,:,:,:].reshape(-1,image_size, image_size,1) image = np.append(image,small_data,axis=0) print(image.shape) """ Explanation: 7. Pick some data for batch normalization inference End of explanation """
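The one-hot conversion of step 5 boils down to comparing each integer label against the full range of class indices. A dependency-free sketch of the same idea, using a subset of the object2value table above:

```python
def one_hot(labels, num_classes):
    # Mirrors the (np.arange(n) == values[:, None]) broadcast trick used above
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

object2value = {'empty': 0, 'cup': 2, 'gball': 6}  # subset for illustration
labels = [object2value[name] for name in ['cup', 'empty', 'gball']]
encoded = one_hot(labels, 7)
print(encoded[0])  # [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

Each encoded row contains exactly one 1.0, at the index of its class, which is what the cross-entropy loss downstream expects.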
CentroGeo/Analisis-Espacial
taller_regionalizacion/Regionalización (segunda parte).ipynb
gpl-2.0
gdf = GeoDataFrame.from_file('datos/desaparecidos_estatal.shp') """ Explanation: The disappeared in Mexico For this workshop we are going to work with the data on disappeared persons from the Executive Secretariat of the National Public Security System (SESNSP); these data were originally processed by the Data4mx group. The first thing we are going to do is read the data from the desaparecidos.csv file and import it into a structure called a DataFrame from the Pandas library (Pandas is a Python library for handling and analyzing data series; in essence it provides a structure analogous to R's DataFrame, together with tools to manipulate it). First, let's read the data using geopandas: End of explanation """ gdf.head() """ Explanation: Let's take a look at the data we have: End of explanation """ fig = plt.figure() plt.plot(gdf['2006'].values,gdf['2007'].values,'b.') fig = plt.figure() plt.plot(gdf['2007'].values,gdf['2008'].values,'b.') """ Explanation: The gdf.head() instruction gives us the first 10 records; it is useful for getting an idea of how the data are organized. What we have is a shapefile with the totals of disappeared persons for each year between 2006 and 2014, plus the total population (for 2010) and a state identifier. What we want is to build regions according to the temporal evolution of the disappearances. As you may recall from the previous exercise, when we want to use multiple variables we must make sure they are not too strongly correlated. So, let's explore first with some scatterplots: End of explanation """ gdf.corr() """ Explanation: The data do not seem to be too correlated but, to be sure, let's compute their correlation matrix: End of explanation """ corr = gdf.corr() corr[corr<0.7] """ Explanation: As we can see, the data are actually quite correlated, although for each year we only have 32 observations; think about what this means in terms of significance.
For now, let's select as candidates all the years whose correlation values are below 0.7 End of explanation """ variables = gdf[['2006','2007','2008','2010','2011','2012','2013','2014','geometry','nombre','CVEGEO']] variables.head() """ Explanation: As you can see, the only year that is completely eliminated is 2009, so let's use all the remaining years: End of explanation """ variables.to_file('datos/variables.shp') """ Explanation: What we did here was define a new GeoDataFrame with only the variables we are interested in. Now, as you may recall, the library we use for regionalization (clusterpy) needs shapefiles; fortunately, GeoPandas gives us a very easy way to export files: End of explanation """ desaparecidos = clusterpy.importArcData("datos/variables") desaparecidos.cluster('arisel', ['2006','2007','2008','2010','2011','2012','2013','2014'] , 6, wType='rook', inits=10, dissolve=0) desaparecidos.exportArcData('datos/regiones_desaparecidos') """ Explanation: Now we have a shapefile with the variables we want, so let's build a regionalization: End of explanation """
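The threshold screening used above (keeping only year pairs whose correlation stays below 0.7) can be sketched without pandas. This is purely an illustration with made-up series -- pandas' DataFrame.corr already computes the full matrix across all column pairs:

```python
import math

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def low_corr_pairs(series, threshold=0.7):
    # Pairs of variables whose correlation is below the threshold,
    # mirroring the corr[corr < 0.7] mask used above.
    names = sorted(series)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if pearson(series[a], series[b]) < threshold]

years = {
    '2006': [1, 2, 3, 4, 5],
    '2007': [2, 4, 6, 8, 10],  # perfectly correlated with 2006
    '2008': [5, 1, 4, 2, 3],   # weakly correlated with both
}
print(low_corr_pairs(years))  # [('2006', '2008'), ('2007', '2008')]
```

With only 32 observations per year, as noted above, correlations near the threshold should be treated with caution rather than as hard evidence of independence.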
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_tabular_regression.ipynb
apache-2.0
import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install -U google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex client library: Hyperparameter tuning tabular regression model <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_tabular_regression.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_tabular_regression.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to do hyperparameter tuning for a custom tabular regression model. Dataset The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD. Objective In this notebook, you learn how to create a hyperparameter tuning job for a custom tabular regression model from a Python script in a docker container using the Vertex client library. You can alternatively hyperparameter tune models using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create an Vertex hyperparameter turning job for training a custom model. Tune the custom model. Evaluate the study results. 
Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG """ Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation """ if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! 
as shell commands, and it interpolates Python variables prefixed with $ into these commands. End of explanation """ REGION = "us-central1" # @param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. 
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a custom training job using the Vertex client library, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex runs the code from this package. In this tutorial, Vertex also saves the trained model that results from your job in the same bucket. You can then create an Endpoint resource based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! 
gsutil ls -al $BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import time from google.cloud.aiplatform import gapic as aip """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. End of explanation """ # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) """ Explanation: Vertex constants Set up the following constants for Vertex: API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. End of explanation """ if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2-1" if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) """ Explanation: Container (Docker) image Next, we will set the Docker container images for training. Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15.
The following list shows some of the pre-built images available: TensorFlow 1.15 gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest TensorFlow 2.1 gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest TensorFlow 2.2 gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest TensorFlow 2.3 gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest TensorFlow 2.4 gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest XGBoost gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1 Scikit-learn gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest Pytorch gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest For the latest list, see Pre-built containers for training. End of explanation """ if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) """ Explanation: Machine Type Next, set the machine type to use for training. Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for for training. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. 
End of explanation """ # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_job_client(): client = aip.JobServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client clients = {} clients["job"] = create_job_client() clients["model"] = create_model_client() for client in clients.items(): print(client) """ Explanation: Tutorial Now you are ready to start creating your own hyperparameter tuning and training of a custom tabular regression. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Model Service for Model resources. Job Service for hyperparameter tuning. End of explanation """ if TRAIN_GPU: machine_spec = { "machine_type": TRAIN_COMPUTE, "accelerator_type": TRAIN_GPU, "accelerator_count": TRAIN_NGPU, } else: machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0} """ Explanation: Tuning a model - Hello World There are two ways you can hyperparameter tune and train a custom model using a container image: Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for hyperparameter tuning and training a custom model. Use your own custom container image. If you use your own container, the container needs to contain your code for hyperparameter tuning and training a custom model. Prepare your hyperparameter tuning job specification Now that your clients are ready, your first step is to create a Job Specification for your hyperparameter tuning job. 
The job specification will consist of the following: trial_job_spec: The specification for the custom job. worker_pool_spec : The specification of the type of machine(s) you will use for hyperparameter tuning and how many (single or distributed) python_package_spec : The specification of the Python package to be installed with the pre-built container. study_spec: The specification for what to tune. parameters: This is the specification of the hyperparameters that you will tune for the custom training job. It will contain a list of the hyperparameter specifications. metrics: This is the specification on how to evaluate the result of each tuning trial. Prepare your machine specification Now define the machine specification for your custom hyperparameter tuning job. This tells Vertex what type of machine instance to provision for the hyperparameter tuning. - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8. - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU. - accelerator_count: The number of accelerators. End of explanation """ DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard] DISK_SIZE = 200 # GB disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE} """ Explanation: Prepare your disk specification (optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning. boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. boot_disk_size_gb: Size of disk in GB.
End of explanation """ JOB_NAME = "custom_job_" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME) if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 DIRECT = True if DIRECT: CMDARGS = [ "--model-dir=" + MODEL_DIR, "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] else: CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_NAME + "/trainer_boston.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ] """ Explanation: Define the worker pool specification Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following: replica_count: The number of instances to provision of this machine type. machine_spec: The hardware specification. disk_spec : (optional) The disk storage specification. python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module. Let's dive deeper now into the python package specification: -executor_image_uri: This is the docker image which is configured for your custom hyperparameter tuning job. -package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image. -python_module: The Python module (script) to invoke for running the custom hyperparameter tuning job.
In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix. -args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting: - "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the hyperparameter tuning script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or - indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification. - "--epochs=" + EPOCHS: The number of epochs for training. - "--steps=" + STEPS: The number of steps (batches) per epoch. - "--distribute=" + TRAIN_STRATEGY : The hyperparameter tuning distribution strategy to use for single or distributed hyperparameter tuning. - "single": single device. - "mirror": all GPU devices on a single compute instance. - "multi": all GPU devices on all compute instances. End of explanation """ study_spec = { "metrics": [ { "metric_id": "val_accuracy", "goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE, } ], "parameters": [ { "parameter_id": "lr", "discrete_value_spec": {"values": [0.001, 0.01, 0.1]}, "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE, } ], "algorithm": aip.StudySpec.Algorithm.GRID_SEARCH, } """ Explanation: Create a study specification Let's start with a simple study. You will just use a single parameter -- the learning rate. Since it's just one parameter, it doesn't make much sense to do a random search. Instead, we will do a grid search over a range of values.
metrics: metric_id: In this example, the objective metric to report back is 'val_accuracy' goal: In this example, the hyperparameter tuning service will evaluate trials to maximize the value of the objective metric. parameters: The specification for the hyperparameters to tune. parameter_id: The name of the hyperparameter that will be passed to the Python package as a command line argument. scale_type: The scale type determines the resolution the hyperparameter tuning service uses when searching over the search space. UNIT_LINEAR_SCALE: Uses a resolution that is the same everywhere in the search space. UNIT_LOG_SCALE: Values close to the bottom of the search space are further away. UNIT_REVERSE_LOG_SCALE: Values close to the top of the search space are further away. search space: This is where you will specify the search space of values for the hyperparameter to select for tuning. integer_value_spec: Specifies an integer range of values between a min_value and max_value. double_value_spec: Specifies a continuous range of values between a min_value and max_value. discrete_value_spec: Specifies a list of values. algorithm: The search method for selecting hyperparameter values per trial: GRID_SEARCH: Combinatorically search -- which is used in this example. RANDOM_SEARCH: Random search. End of explanation """ hpt_job = { "display_name": JOB_NAME, "trial_job_spec": {"worker_pool_specs": worker_pool_spec}, "study_spec": study_spec, "max_trial_count": 6, "parallel_trial_count": 1, } """ Explanation: Assemble a hyperparameter tuning job specification Now assemble the complete description for the custom hyperparameter tuning specification: display_name: The human readable name you assign to this custom hyperparameter tuning job. trial_job_spec: The specification for the custom hyperparameter tuning job. study_spec: The specification for what to tune. max_trial_count: The maximum number of tuning trials. 
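Since this study uses GRID_SEARCH over discrete values, the number of distinct trials the search can produce is just the product of the discrete value counts. A small sketch (grid_search_size is a hypothetical helper, not part of the Vertex API) makes the arithmetic explicit:

```python
from functools import reduce

def grid_search_size(study_spec):
    """Product of the sizes of each parameter's discrete value list.
    Parameters defined by continuous ranges (double_value_spec) are not
    enumerable this way and are skipped."""
    sizes = [len(p['discrete_value_spec']['values'])
             for p in study_spec['parameters'] if 'discrete_value_spec' in p]
    return reduce(lambda a, b: a * b, sizes, 1)

spec = {'parameters': [{'parameter_id': 'lr',
                        'discrete_value_spec': {'values': [0.001, 0.01, 0.1]}}]}
print(grid_search_size(spec))  # 3
```

With only three learning-rate values, only three of the trials in this study can be unique.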
parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially. End of explanation """ # Make folder for Python hyperparameter tuning script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration hyperparameter tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py """ Explanation: Examine the hyperparameter tuning package Package layout Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout. PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py). Package Assembly In the following cells, you will assemble the training package. 
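A back-of-envelope consequence of these two settings: with parallel_trial_count trials running at once, the study needs roughly ceil(max_trial_count / parallel_trial_count) sequential waves of trials. This is only a rough wall-clock estimate, not a service guarantee:

```python
import math

def trial_batches(max_trial_count, parallel_trial_count):
    """Worst-case number of sequential 'waves' of trials."""
    return math.ceil(max_trial_count / parallel_trial_count)

print(trial_batches(6, 1))  # 6 -- fully sequential, as configured above
print(trial_batches(6, 3))  # 2 -- two waves of three trials each
```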
End of explanation """ %%writefile custom/trainer/task.py # HP Tuning hello world example from __future__ import absolute_import, division, print_function, unicode_literals import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib from hypertune import HyperTune import argparse import os import sys import time tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--epochs', dest='epochs', default=10, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=200, type=int, help='Number of steps per epoch.') parser.add_argument('--model-dir', dest='model_dir', default='/tmp/saved_model', type=str, help='Model dir.') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) print(device_lib.list_local_devices()) # Instantiate the HyperTune reporting object hpt = HyperTune() for epoch in range(1, args.epochs+1): # mimic metric result at the end of an epoch acc = args.lr * epoch # save the metric result to communicate back to the HPT service hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='val_accuracy', metric_value=acc, global_step=epoch) print('epoch: {}, accuracy: {}'.format(epoch, acc)) time.sleep(1) """ Explanation: Task.py contents In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. 
In summary: Passes the hyperparameter values for a trial as a command line argument (parser.add_argument('--lr',...)) Mimics a training loop, where on each loop (epoch) the variable accuracy is set to the loop iteration * the learning rate. Reports back the objective metric accuracy back to the hyperparameter tuning service using report_hyperparameter_tuning_metric(). End of explanation """ ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz """ Explanation: Store hyperparameter tuning script on your Cloud Storage bucket Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket. End of explanation """ def create_hyperparameter_tuning_job(hpt_job): response = clients["job"].create_hyperparameter_tuning_job( parent=PARENT, hyperparameter_tuning_job=hpt_job ) print("name:", response.name) print("display_name:", response.display_name) print("state:", response.state) print("create_time:", response.create_time) print("update_time:", response.update_time) return response response = create_hyperparameter_tuning_job(hpt_job) """ Explanation: Reporting back the result of the trial using hypertune For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric for which you specified as the criteria for evaluating the trial. For this example, you will specify in the study specification that the objective metric will be reported back as val_loss. You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To setup this reporting in your Python package, you will add code for the following three steps: Import the HyperTune module: from hypertune import HyperTune(). 
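Because the mimicked metric is simply acc = lr * epoch, the outcome of the study is predictable before any job is submitted. A local rehearsal (no Vertex calls; mimic_final_metric is a throwaway stand-in for task.py):

```python
def mimic_final_metric(lr, epochs):
    """Reproduce the toy metric from task.py: the last reported value
    of acc = lr * epoch is just lr * epochs."""
    acc = 0.0
    for epoch in range(1, epochs + 1):
        acc = lr * epoch
    return acc

grid = [0.001, 0.01, 0.1]
results = {lr: mimic_final_metric(lr, epochs=20) for lr in grid}
best_lr = max(results, key=results.get)
print(best_lr)  # 0.1 -- the largest learning rate trivially maximizes the toy metric
```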
At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are: hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification. metric_value: The value of the objective metric to report back to the hyperparameter service. global_step: The epoch iteration, starting at 0. Hyperparameter Tune the model Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter: -hpt_job: The specification for the hyperparameter tuning job. The helper function calls the job client service's create_hyperparameter_tuning_job method, with the following parameters: -parent: The Vertex location path to Dataset, Model and Endpoint resources. -hyperparameter_tuning_job: The specification for the hyperparameter tuning job. You will display a handful of the fields returned in the response object; the two of most interest are: response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps. response.state: The current state of the custom hyperparameter tuning job. End of explanation """ # The full unique ID for the hyperparameter tuning job hpt_job_id = response.name # The short numeric ID for the hyperparameter tuning job hpt_job_short_id = hpt_job_id.split("/")[-1] print(hpt_job_id) """ Explanation: Now get the unique identifier for the hyperparameter tuning job you created.
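The identifier in response.name is a fully qualified resource name; the split("/")[-1] idiom used above can be wrapped for clarity. The example name here is hypothetical, but it follows the projects/.../locations/.../hyperparameterTuningJobs/<id> shape of real ones:

```python
def short_job_id(resource_name):
    """Extract the trailing numeric ID from a fully qualified Vertex resource name."""
    return resource_name.split('/')[-1]

name = 'projects/my-project/locations/us-central1/hyperparameterTuningJobs/1234567890'
print(short_job_id(name))  # 1234567890
```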
End of explanation """ def get_hyperparameter_tuning_job(name, silent=False): response = clients["job"].get_hyperparameter_tuning_job(name=name) if silent: return response print("name:", response.name) print("display_name:", response.display_name) print("state:", response.state) print("create_time:", response.create_time) print("update_time:", response.update_time) return response response = get_hyperparameter_tuning_job(hpt_job_id) """ Explanation: Get information on a hyperparameter tuning job Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter: name: The Vertex fully qualified identifier for the hyperparameter tuning job. The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter: name: The Vertex fully qualified identifier for the hyperparameter tuning job. If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id. End of explanation """ while True: job_response = get_hyperparameter_tuning_job(hpt_job_id, True) if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED: print("Study trials have not completed:", job_response.state) if job_response.state == aip.JobState.JOB_STATE_FAILED: break else: if not DIRECT: MODEL_DIR = MODEL_DIR + "/model" print("Study trials have completed") break time.sleep(60) """ Explanation: Wait for tuning to complete Hyperparameter tuning the above model may take upwards of 20 minutes time. Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time. For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/&lt;trial_number&gt;/saved_model.pb'. 
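The polling loop above reduces to a simple pattern: call the getter until the job reaches a terminal state. A sketch with the sleep removed so it can be exercised offline (wait_for_job and the fake state sequence are illustrative, not part of the Vertex SDK):

```python
def wait_for_job(get_state, terminal=('JOB_STATE_SUCCEEDED', 'JOB_STATE_FAILED'),
                 max_polls=100):
    """Poll get_state() until a terminal state is seen, then return it."""
    state = None
    for _ in range(max_polls):
        state = get_state()
        if state in terminal:
            break
    return state

states = iter(['JOB_STATE_PENDING', 'JOB_STATE_RUNNING', 'JOB_STATE_SUCCEEDED'])
print(wait_for_job(lambda: next(states)))  # JOB_STATE_SUCCEEDED
```

In the real loop a time.sleep(60) between polls keeps the API traffic low.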
End of explanation """ best = (None, None, None, 0.0) for trial in job_response.trials: print(trial) # Keep track of the best outcome if float(trial.final_measurement.metrics[0].value) > best[3]: try: best = ( trial.id, float(trial.parameters[0].value), float(trial.parameters[1].value), float(trial.final_measurement.metrics[0].value), ) except: best = ( trial.id, float(trial.parameters[0].value), None, float(trial.final_measurement.metrics[0].value), ) """ Explanation: Review the results of the study Now review the results of trials. End of explanation """ print("ID", best[0]) print("Learning Rate", best[1]) print("Decay", best[2]) print("Validation Accuracy", best[3]) """ Explanation: Best trial Now look at which trial was the best: End of explanation """ BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model" """ Explanation: Get the Best Model If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at: MODEL_DIR/&lt;best_trial_id&gt;/model End of explanation """ study_spec = { "metrics": [ {"metric_id": "val_loss", "goal": aip.StudySpec.MetricSpec.GoalType.MINIMIZE} ], "parameters": [ { "parameter_id": "lr", "discrete_value_spec": {"values": [0.001, 0.01, 0.1]}, "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE, }, { "parameter_id": "decay", "double_value_spec": {"min_value": 1e-6, "max_value": 1e-2}, "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE, }, ], "algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH, } """ Explanation: Tuning a model - Boston Housing Now that you have seen the overall steps for hyperparameter tuning a custom training job using a Python package that mimics training a model, you will do a new hyperparameter tuning job for a custom training job for a Boston Housing model. For this example, you will change two parts: Specify the Boston Housing custom hyperparameter tuning Python package.
Specify a study specification specific to the hyperparameters used in the Boston Housing custom hyperparameter tuning Python package. Create a study specification In this study, you will tune for two hyperparameters using the random search algorithm: learning rate: The search space is a set of discrete values. learning rate decay: The search space is a continuous range between 1e-6 and 1e-2. The objective (goal) is to minimize the validation loss. You will run a maximum of six trials. End of explanation """ hpt_job = { "display_name": JOB_NAME, "trial_job_spec": {"worker_pool_specs": worker_pool_spec}, "study_spec": study_spec, "max_trial_count": 6, "parallel_trial_count": 1, } """ Explanation: Assemble a hyperparameter tuning job specification Now assemble the complete description for the custom hyperparameter tuning specification: display_name: The human readable name you assign to this custom hyperparameter tuning job. trial_job_spec: The specification for the custom hyperparameter tuning job. study_spec: The specification for what to tune. max_trial_count: The maximum number of tuning trials. parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially.
End of explanation """ %%writefile custom/trainer/task.py # Custom Training for Boston Housing import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib from hypertune import HyperTune import numpy as np import argparse import os import sys tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--decay', dest='decay', default=0.98, type=float, help='Decay rate') parser.add_argument('--units', dest='units', default=64, type=int, help='Number of units.') parser.add_argument('--epochs', dest='epochs', default=20, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=200, type=int, help='Number of steps per epoch.') parser.add_argument('--param-file', dest='param_file', default='/tmp/param.txt', type=str, help='Output file for parameters') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) def make_dataset(): # Scaling Boston Housing data features def scale(feature): max = np.max(feature) feature = (feature / max).astype(np.float) return feature, max (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) params = [] for _ in range(13): x_train[_], max = scale(x_train[_]) x_test[_], _ = scale(x_test[_]) params.append(max) # store the normalization (max) value for each feature with tf.io.gfile.GFile(args.param_file, 'w') as f: f.write(str(params)) return (x_train, y_train), (x_test, y_test) # Build the 
Keras model def build_and_compile_dnn_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(args.units, activation='relu', input_shape=(13,)), tf.keras.layers.Dense(args.units, activation='relu'), tf.keras.layers.Dense(1, activation='linear') ]) model.compile( loss='mse', optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr, decay=args.decay)) return model model = build_and_compile_dnn_model() # Instantiate the HyperTune reporting object hpt = HyperTune() # Reporting callback class HPTCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): global hpt hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='val_loss', metric_value=logs['val_loss'], global_step=epoch) # Train the model BATCH_SIZE = 16 (x_train, y_train), (x_test, y_test) = make_dataset() model.fit(x_train, y_train, epochs=args.epochs, batch_size=BATCH_SIZE, validation_split=0.1, callbacks=[HPTCallback()]) model.save(args.model_dir) """ Explanation: Task.py contents In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary: Parse the command line arguments for the hyperparameter settings for the current trial. Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR. Download and preprocess the Boston Housing dataset. Build a DNN model. The number of units per dense layer and learning rate hyperparameter values are used during the build and compile of the model. A definition of a callback HPTCallback which obtains the validation loss at the end of each epoch (on_epoch_end()) and reports it to the hyperparameter tuning service using hpt.report_hyperparameter_tuning_metric(). Train the model with the fit() method and specify a callback which will report the validation loss back to the hyperparameter tuning service. End of explanation """ ! 
rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz """ Explanation: Store hyperparameter tuning script on your Cloud Storage bucket Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket. End of explanation """ def create_hyperparameter_tuning_job(hpt_job): response = clients["job"].create_hyperparameter_tuning_job( parent=PARENT, hyperparameter_tuning_job=hpt_job ) print("name:", response.name) print("display_name:", response.display_name) print("state:", response.state) print("create_time:", response.create_time) print("update_time:", response.update_time) return response response = create_hyperparameter_tuning_job(hpt_job) """ Explanation: Reporting back the result of the trial using hypertune For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric for which you specified as the criteria for evaluating the trial. For this example, you will specify in the study specification that the objective metric will be reported back as val_loss. You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To setup this reporting in your Python package, you will add code for the following three steps: Import the HyperTune module: from hypertune import HyperTune(). At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are: hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification. metric_value: The value of the objective metric to report back to the hyperparameter service. global_step: The epoch iteration, starting at 0. 
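The same packaging can be done without shelling out, using the standard-library tarfile module; this sketch only builds and inspects the archive, leaving the gsutil upload to the cell above:

```python
import os
import tarfile
import tempfile

def package_trainer(src_dir, out_path):
    """Create a gzipped tarball of src_dir, equivalent to
    `tar cvf custom.tar custom && gzip custom.tar`."""
    with tarfile.open(out_path, 'w:gz') as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return out_path

# Build a throwaway package layout and archive it.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'custom')
os.makedirs(os.path.join(pkg, 'trainer'))
open(os.path.join(pkg, 'trainer', 'task.py'), 'w').close()
tarball = package_trainer(pkg, os.path.join(root, 'custom.tar.gz'))
with tarfile.open(tarball) as tar:
    print(sorted(tar.getnames()))  # ['custom', 'custom/trainer', 'custom/trainer/task.py']
```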
Hyperparameter Tune the model Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter: -hpt_job: The specification for the hyperparameter tuning job. The helper function calls the job client service's create_hyperparameter_tuning_job method, with the following parameters: -parent: The Vertex location path to Dataset, Model and Endpoint resources. -hyperparameter_tuning_job: The specification for the hyperparameter tuning job. You will display a handful of the fields returned in the response object; the two of most interest are: response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps. response.state: The current state of the custom hyperparameter tuning job. End of explanation
The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter: name: The Vertex fully qualified identifier for the hyperparameter tuning job. If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id. End of explanation """ while True: job_response = get_hyperparameter_tuning_job(hpt_job_id, True) if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED: print("Study trials have not completed:", job_response.state) if job_response.state == aip.JobState.JOB_STATE_FAILED: break else: if not DIRECT: MODEL_DIR = MODEL_DIR + "/model" print("Study trials have completed") break time.sleep(60) """ Explanation: Wait for tuning to complete Hyperparameter tuning the above model may take upwards of 20 minutes time. Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time. For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/&lt;trial_number&gt;/saved_model.pb'. End of explanation """ best = (None, None, None, 0.0) for trial in job_response.trials: print(trial) # Keep track of the best outcome if float(trial.final_measurement.metrics[0].value) > best[3]: try: best = ( trial.id, float(trial.parameters[0].value), float(trial.parameters[1].value), float(trial.final_measurement.metrics[0].value), ) except: best = ( trial.id, float(trial.parameters[0].value), None, float(trial.final_measurement.metrics[0].value), ) """ Explanation: Review the results of the study Now review the results of trials. 
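Each trial record boils down to an ID, its parameter values, and a final metric. A compact stand-in for the selection loop above, with toy records in place of job_response.trials; note that for this study's val_loss objective the best trial is the minimum, not the maximum:

```python
def best_trial(trials, maximize=True):
    """Pick the trial with the best final metric: maximize=True for a
    metric like val_accuracy, maximize=False for a loss."""
    pick = max if maximize else min
    return pick(trials, key=lambda t: t['final_metric'])

# Toy trial records standing in for job_response.trials:
trials = [
    {'id': '1', 'lr': 0.001, 'final_metric': 4.1},
    {'id': '2', 'lr': 0.01,  'final_metric': 2.7},
    {'id': '3', 'lr': 0.1,   'final_metric': 3.3},
]
print(best_trial(trials, maximize=False)['id'])  # 2 -- lowest validation loss wins
```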
End of explanation """ print("ID", best[0]) print("Learning Rate", best[1]) print("Decay", best[2]) print("Validation Accuracy", best[3]) """ Explanation: Best trial Now look at which trial was the best: End of explanation """ BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model" """ Explanation: Get the Best Model If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at: MODEL_DIR/&lt;best_trial_id&gt;/model End of explanation """ import tensorflow as tf model = tf.keras.models.load_model(MODEL_DIR) """ Explanation: Load the saved model Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction. To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR. End of explanation """ import numpy as np from tensorflow.keras.datasets import boston_housing (_, _), (x_test, y_test) = boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) def scale(feature): max = np.max(feature) feature = (feature / max).astype(np.float32) return feature # Let's save one data item that has not been scaled x_test_notscaled = x_test[0:1].copy() for _ in range(13): x_test[_] = scale(x_test[_]) x_test = x_test.astype(np.float32) print(x_test.shape, x_test.dtype, y_test.shape) print("scaled", x_test[0]) print("unscaled", x_test_notscaled) """ Explanation: Evaluate the model Now let's find out how good the model is. Load evaluation data You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. 
Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home). You don't need the training data, and hence why we loaded it as (_, _). Before you can run the data through evaluation, you need to preprocess it: x_test: 1. Normalize (rescaling) the data in each column by dividing each value by the maximum value of that column. This will replace each single value with a 32-bit floating point number between 0 and 1. End of explanation """ model.evaluate(x_test, y_test) """ Explanation: Perform the model evaluation Now evaluate how well the model in the custom job did. End of explanation """ delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and "dataset_id" in globals(): clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and "pipeline_id" in globals(): clients["pipeline"].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and "model_to_deploy_id" in globals(): clients["model"].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and "endpoint_id" in globals(): clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and "batch_job_id" in globals(): clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the 
custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and "job_id" in globals(): clients["job"].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and "hpt_job_id" in globals(): clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME """ Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation """
desihub/desimodel
doc/nb/ELG_SNR.ipynb
bsd-3-clause
%pylab inline import astropy.table import astropy.cosmology import astropy.io.fits as fits import astropy.units as u """ Explanation: ELG Signal-to-Noise Calculations This notebook provides a standardized calculation of the DESI emission-line galaxy (ELG) signal-to-noise (SNR) figure of merit, for tracking changes to simulation inputs and models. See the accompanying technical note DESI-3977 for details. End of explanation """ import os.path assert 'DESIMODEL' in os.environ assert os.path.exists(os.path.join(os.getenv('DESIMODEL'), 'data', 'spectra', 'spec-sky.dat')) """ Explanation: Parts of this notebook assume that the desimodel package is installed (both its git and svn components) and its data/ directory is accessible via the $DESIMODEL environment variable: End of explanation """ import desimodel import specsim print(f'Using desimodel {desimodel.__version__}, specsim {specsim.__version__}') """ Explanation: Document relevant version numbers: End of explanation """ elg_spec = astropy.table.Table.read( os.path.join(os.environ['DESIMODEL'], 'data', 'spectra', 'spec-elg-o2flux-8e-17-average-line-ratios.dat'), format='ascii') elg_wlen0 = elg_spec['col1'].data elg_flux0 = 1e-17 * elg_spec['col2'].data """ Explanation: ELG Spectrum All peaks are assumed to have the same log-normal rest lineshape specified by a velocity dispersion $\sigma_v$, total flux $F_0$ and central wavelength $\lambda_0$ as: $$ f(\lambda; F_0, \lambda_0) = \frac{F_0}{\sqrt{2\pi}\,\lambda\,\sigma_{\log}}\, \exp\left[ -\frac{1}{2}\left( \frac{\log_{10}\lambda - \log_{10}\lambda_0}{\sigma_{\log}}\right)^2\right]\; , $$ where $$ \sigma_{\log} \equiv \frac{\sigma_v}{c \log 10} \; . 
$$ We use the pretabulated spectrum in $DESIMODEL/data/spectra/spec-elg-o2flux-8e-17-average-line-ratios.dat described in Section 2.3 of DESI-867-v1, which consists of only the following emission lines: - [OII](3727A) and [OII](3730A) - H-beta - [OIII](4960A) and [OIII](5008A) - H-alpha Note that H-alpha is never observable for $z > 0.5$, as is always the case for DESI ELG targets. Continuum is omitted since we are primarily interested in how well the [OII] doublet can be identified and measured. All lines are assumed to have the same velocity dispersion of 70 km/s. End of explanation """ def get_elg_nz(): # Read the nz file from $DESIMODEL. full_name = os.path.join(os.environ['DESIMODEL'], 'data', 'targets', 'nz_elg.dat') table = astropy.table.Table.read(full_name, format='ascii') # Extract the n(z) histogram into numpy arrays. z_lo, z_hi = table['col1'], table['col2'] assert np.all(z_hi[:-1] == z_lo[1:]) z_edge = np.hstack((z_lo, [z_hi[-1]])) nz = table['col3'] # Trim to bins where n(z) > 0. non_zero = np.where(nz > 0)[0] lo, hi = non_zero[0], non_zero[-1] + 1 nz = nz[lo: hi] z_edge = z_edge[lo: hi + 1] return nz, z_edge elg_nz, elg_z_edge = get_elg_nz() """ Explanation: DESI ELG Sample Look up the expected redshift distribution of DESI ELG targets from $DESIMODEL/data/targets/nz_elg.dat. Note that the [OII] doublet falls off the spectrograph around z = 1.63. End of explanation """ def get_nz_weight(z): """Calculate n(z) weights corresponding to input z values. 
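The lineshape formula above translates directly into code. This sketch assumes sigma_v in km/s and sigma_log = sigma_v / (c ln 10), which makes the exponent a unit Gaussian in log10 wavelength; note that with the normalization as written, the integral of f over wavelength evaluates to F0 ln 10:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def line_profile(wlen, total_flux, wlen0, sigma_v=70.0):
    """Log-normal emission-line profile f(wlen; F0, wlen0) from the formula above.
    wlen [Angstrom], total_flux F0 [erg / (s cm2)], sigma_v [km/s]."""
    sigma_log = sigma_v / (C_KMS * np.log(10.0))
    z = (np.log10(wlen) - np.log10(wlen0)) / sigma_log
    return total_flux / (np.sqrt(2 * np.pi) * wlen * sigma_log) * np.exp(-0.5 * z ** 2)

# Example: an [OII] 3727A line with total flux 8e-17 erg / (s cm2).
wlen = np.linspace(3715.0, 3742.0, 2000)
flux = line_profile(wlen, 8e-17, 3727.0)
print(f'peak near {wlen[np.argmax(flux)]:.1f} A')  # peak near 3727.0 A
```

At a 70 km/s dispersion the line is well under 1 Angstrom wide, which is why the wavelength zoom windows used later are only a few tens of Angstroms.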
""" nz = np.zeros_like(z) idx = np.digitize(z, elg_z_edge) sel = (idx > 0) & (idx <= len(elg_nz)) nz[sel] = elg_nz[idx[sel] - 1] return nz """ Explanation: Calculate n(z) weights corresponding to an array of ELG redshifts: End of explanation """ def generate_elg_z(n=100, seed=123): cdf = np.cumsum(elg_nz) cdf = np.hstack(([0], cdf / cdf[-1])) gen = np.random.RandomState(seed) return np.interp(gen.rand(n), cdf, elg_z_edge) z=generate_elg_z(n=20000) plt.hist(z, bins=elg_z_edge, histtype='stepfilled') plt.xlim(elg_z_edge[0], elg_z_edge[-1]) print(f'Mean ELG redshift is {np.mean(z):.3f}') """ Explanation: Sample random redshifts from n(z): End of explanation """ LCDM = astropy.cosmology.Planck15 """ Explanation: Define a background cosmology for the angular-diameter distance used to scale galaxy angular sizes: End of explanation """ def generate_elg_profiles(z, seed=123, verbose=False): """ELG profiles are assumed to be disk (Sersic n=1) only. """ gen = np.random.RandomState(seed) nsrc = len(z) source_fraction = np.zeros((nsrc, 2)) source_half_light_radius = np.zeros((nsrc, 2)) source_minor_major_axis_ratio = np.zeros((nsrc, 2)) source_position_angle = 360. * gen.normal(size=(nsrc, 2)) # Precompute cosmology scale factors. angscale = ( LCDM.angular_diameter_distance(1.0) / LCDM.angular_diameter_distance(z)).to(1).value if verbose: print(f'mean n(z) DA(1.0)/DA(z) = {np.mean(angscale):.3f}') # Disk only with random size and ellipticity. source_fraction[:, 0] = 1. 
source_half_light_radius[:, 0] = 0.427 * np.exp(0.25 * gen.normal(size=nsrc)) * angscale source_minor_major_axis_ratio[:, 0] = np.minimum(0.99, 0.50 * np.exp(0.15 * gen.normal(size=nsrc))) if verbose: print(f'mean HLR = {np.mean(source_half_light_radius[:, 0]):.3f}"') return dict( source_fraction=source_fraction, source_half_light_radius=source_half_light_radius, source_minor_major_axis_ratio=source_minor_major_axis_ratio, source_position_angle=source_position_angle) """ Explanation: Generate random ELG profiles for each target. The mean half-light radius is 0.45" and scales with redshift. End of explanation """ def plot_elg_profiles(save=None): z = generate_elg_z(50000) sources = generate_elg_profiles(z, verbose=True) fig, ax = plt.subplots(2, 2, figsize=(8, 6)) ax = ax.flatten() ax[0].hist(sources['source_minor_major_axis_ratio'][:, 0], range=(0,1), bins=25) ax[0].set_xlabel('ELG minor/major axis ratio') ax[0].set_xlim(0, 1) ax[1].hist(z, bins=np.arange(0.6, 1.8, 0.1)) ax[1].set_xlim(0.6, 1.7) ax[1].set_xlabel('ELG redshift') ax[2].hist(sources['source_half_light_radius'][:, 0], bins=25) ax[2].set_xlabel('ELG half-light radius [arcsec]') ax[2].set_xlim(0.1, 1.1) ax[3].scatter(z, sources['source_half_light_radius'][:, 0], s=0.5, alpha=0.5) ax[3].set_xlabel('ELG redshift') ax[3].set_ylabel('ELG half-light radius [arcsec]') ax[3].set_xlim(0.6, 1.7) ax[3].set_ylim(0.1, 1.1) plt.tight_layout() if save: plt.savefig(save) plot_elg_profiles(save='elg-sample.png') """ Explanation: Diagnostic plot showing the assumed ELG population (Figure 1 of DESI-3977): End of explanation """ def calculate_elg_snr(simulator, save, description, z1=0.6, z2=1.65, dz=0.002, zref=1.20, seed=123, wlen=elg_wlen0, flux=elg_flux0): """Calculate the ELG [OII] SNR as a function of redshift. Parameters ---------- simulator : specsim.simulator.Simulator Instance of an initialized Simulator object to use. Each fiber will be simulated independently to study variations across the focal plane. 
save : str Filename to use for saving FITS results. description : str Short description for the saved file header, also used for plots later. z1 : float Minimum ELG redshift to calculate. z2 : float Maximum ELG redshift to calculate. dz : float Spacing of equally spaced grid to cover [z1, z2]. z2 will be increased by up to dz if necessary. zref : float Reference redshift used to save signal, noise and fiberloss. Must be on the grid specified by (z1, z2, dz). seed : int or None Random seed used to generate fiber positions and galaxy profiles. wlen : array 1D array of N rest wavelengths in Angstroms. flux : array 1D array of N corresponding rest fluxes in erg / (s cm2 Angstrom). """ zooms = (3715., 3742.), (4850., 4875.), (4950., 5020.) gen = np.random.RandomState(seed=seed) # Generate random focal plane (x,y) positions for each fiber in mm units. nfibers = simulator.num_fibers focal_r = np.sqrt(gen.uniform(size=nfibers)) * simulator.instrument.field_radius phi = 2 * np.pi * gen.uniform(size=nfibers) xy = (np.vstack([np.cos(phi), np.sin(phi)]) * focal_r).T # Build the grid of redshifts to simulate. nz = int(np.ceil((z2 - z1) / dz)) + 1 z2 = z1 + (nz - 1) * dz z_grid = np.linspace(z1, z2, nz) iref = np.argmin(np.abs(z_grid - zref)) assert np.abs(zref - z_grid[iref]) < 1e-5, 'zref not in z_grid' snr2 = np.zeros((4, nz, simulator.num_fibers)) # Initialize the results. hdus = fits.HDUList() hdus.append(fits.PrimaryHDU( header=fits.Header({'SEED': seed, 'NFIBERS': nfibers, 'DESCRIBE': description}))) # Zero-pad the input spectrum if necessary. wlo = 0.99 * desi.simulated['wavelength'][0] / (1 + z2) if wlen[0] > wlo: wlen = np.hstack([[wlo], wlen]) flux = np.hstack([[0.], flux]) # Simulate the specified rest-frame flux. simulator.source.update_in( 'ELG [OII] doublet', 'elg', wlen * u.Angstrom, flux * u.erg/(u.s * u.cm**2 * u.Angstrom), z_in=0.) # Simulate each redshift. for i, z in enumerate(z_grid): # Redshift the ELG spectrum. 
simulator.source.update_out(z_out=z) source_flux = np.tile(simulator.source.flux_out, [nfibers, 1]) # Generate source profiles for each target at this redshift. Since the seed is # fixed, only the redshift scaling of the HLR will change. sources = generate_elg_profiles(np.full(nfibers, z), seed=seed) # Simulate each source. simulator.simulate(source_fluxes=source_flux, focal_positions=xy, **sources) # Calculate the quadrature sum of SNR in each camera, by fiber. for output in simulator.camera_output: rest_wlen = output['wavelength'] / (1 + z) # Loop over emission lines. for j, (lo, hi) in enumerate(zooms): sel = (rest_wlen >= lo) & (rest_wlen < hi) if not np.any(sel): continue # Sum SNR2 over pixels. pixel_snr2 = output['num_source_electrons'][sel] ** 2 / output['variance_electrons'][sel] snr2[j, i] += pixel_snr2.sum(axis=0) if i == iref: # Save the fiberloss fraction and total variance tabulated on the simulation grid. table = astropy.table.Table(meta={'ZREF': zref}) sim = simulator.simulated table['WLEN'] = sim['wavelength'].data table['FLUX'] = sim['source_flux'].data table['FIBERLOSS'] = sim['fiberloss'].data table['NSRC'] = sim['num_source_electrons_b'] + sim['num_source_electrons_r'] + sim['num_source_electrons_z'] table['SKYVAR'] = sim['num_sky_electrons_b'] + sim['num_sky_electrons_r'] + sim['num_sky_electrons_z'] table['NOISEVAR'] = ( sim['read_noise_electrons_b'] ** 2 + sim['read_noise_electrons_r'] ** 2 + sim['read_noise_electrons_z'] ** 2 + sim['num_dark_electrons_b'] + sim['num_dark_electrons_r'] + sim['num_dark_electrons_z']) hdus.append(fits.table_to_hdu(table)) hdus[-1].name = 'REF' # Calculate the n(z) weighted mean SNR for [OII], using the median over fibers at each redshift. snr_oii = np.median(np.sqrt(snr2[0]), axis=-1) wgt = get_nz_weight(z_grid) snr_oii_eff = np.sum(snr_oii * wgt) / np.sum(wgt) print(f'n(z)-weighted effective [OII] SNR = {snr_oii_eff:.3f}') # Save the SNR vs redshift arrays for each emission line. 
table = astropy.table.Table(meta={'SNREFF': snr_oii_eff}) table['Z'] = z_grid table['ZWGT'] = wgt table['SNR_OII'] = np.sqrt(snr2[0]) table['SNR_HBETA'] = np.sqrt(snr2[1]) table['SNR_OIII'] = np.sqrt(snr2[2]) hdus.append(fits.table_to_hdu(table)) hdus[-1].name = 'SNR' hdus.writeto(save, overwrite=True) """ Explanation: Simulated SNR Given an initialized simulator object, step through different redshifts and calculate the SNR recorded by all fibers for a fixed ELG spectrum. Save the results to a FITS file that can be used by plot_elg_snr(). End of explanation """ def get_flux_limits(z, snr, nominal_flux=8., nominal_snr=7., ax=None): fluxlim = np.zeros_like(snr) nonzero = snr > 0 fluxlim[nonzero] = nominal_flux * (nominal_snr / snr[nonzero]) bins = np.linspace(0.6, 1.6, 6) nlim = len(bins) - 1 medians = np.empty(nlim) for i in range(nlim): sel = (z >= bins[i]) & (z < bins[i + 1]) medians[i] = np.median(fluxlim[sel]) if ax is not None: zmid = 0.5 * (bins[1:] + bins[:-1]) dz = 0.5 * (bins[1] - bins[0]) ax.errorbar(zmid, medians, xerr=dz, color='b', fmt='o', zorder=10, capsize=3) return fluxlim, medians """ Explanation: Calculate flux limits in bins of redshift, to compare with SRD L3.1.3: End of explanation """ def plot_elg_snr(name, save=True): """Plot a summary of results saved by calculate_elg_snr(). Parameters ---------- name : str Name of the FITS file saved by calculate_elg_snr(). 
""" hdus = fits.open(name) hdr = hdus[0].header nfibers = hdr['NFIBERS'] description = hdr['DESCRIBE'] fig, axes = plt.subplots(2, 1, figsize=(8, 6)) plt.suptitle(description, fontsize=14) snr_table = astropy.table.Table.read(hdus['SNR']) snr_oii_eff = snr_table.meta['SNREFF'] ref_table = astropy.table.Table.read(hdus['REF']) zref = ref_table.meta['ZREF'] ax = axes[0] color = 'rgb' labels = '[OII]', 'H$\\beta$', '[OIII]' z_grid = snr_table['Z'].data for i, tag in enumerate(('SNR_OII', 'SNR_HBETA', 'SNR_OIII')): snr = snr_table[tag].data snr_q = np.percentile(snr, (5, 50, 95), axis=-1) ax.fill_between(z_grid, snr_q[0], snr_q[2], color=color[i], alpha=0.25, lw=0) ax.plot(z_grid, snr_q[1], c=color[i], ls='-', label=labels[i]) ax.plot([], [], 'k:', label='n(z)') ax.legend(ncol=4) ax.set_xlabel('ELG redshift') ax.set_ylabel(f'Total signal-to-noise ratio') ax.axhline(7, c='k', ls='--') rhs = ax.twinx() rhs.plot(z_grid, snr_table['ZWGT'], 'k:') rhs.set_yticks([]) ax.set_xlim(z_grid[0], z_grid[-1]) ax.set_ylim(0, 12) rhs.set_ylim(0, None) ax.text(0.02, 0.03, f'n(z)-wgtd [OII] SNR={snr_oii_eff:.3f}', fontsize=12, transform=ax.transAxes) # Calculate the median [OII] flux limits. _, fluxlim = get_flux_limits(z_grid, np.median(snr_table['SNR_OII'], axis=-1)) # Print latex-format results for DESI-3977 Table 2. 
print(f'&{snr_oii_eff:7.3f}', end='') for m in fluxlim: print(f' &{m:5.1f}', end='') print(' \\\\') ax = axes[1] wlen = ref_table['WLEN'].data dwlen = wlen[1] - wlen[0] sky_q = np.percentile(ref_table['SKYVAR'].data, (5, 50, 95), axis=-1) sky_q[sky_q > 0] = 1 / sky_q[sky_q > 0] ax.fill_between(wlen, sky_q[0], sky_q[2], color='b', alpha=0.5, lw=0) ax.plot([], [], 'b-', label='sky ivar') ax.plot(wlen, sky_q[1], 'b.', ms=0.25, alpha=0.5) noise_q = np.percentile(ref_table['NOISEVAR'].data, (5, 50, 95), axis=-1) noise_q[noise_q > 0] = 1 / noise_q[noise_q > 0] ax.fill_between(wlen, noise_q[0], noise_q[2], color='r', alpha=0.25, lw=0) ax.plot(wlen, noise_q[1], c='r', ls='-', label='noise ivar') floss_q = np.percentile(ref_table['FIBERLOSS'].data, (5, 50, 95), axis=-1) ax.plot([], [], 'k-', label='fiberloss') rhs = ax.twinx() rhs.fill_between(wlen, floss_q[0], floss_q[2], color='k', alpha=0.25, lw=0) rhs.plot(wlen, floss_q[1], 'k-') rhs.set_ylim(0.2, 0.6) rhs.yaxis.set_major_locator(matplotlib.ticker.MultipleLocator(0.1)) rhs.set_ylabel('Fiberloss') ax.set_xlabel('Wavelength [A]') ax.set_ylabel(f'Inverse Variance / {dwlen:.1f}A') ax.set_xlim(wlen[0], wlen[-1]) ax.set_ylim(0, 0.25) ax.legend(ncol=3) plt.subplots_adjust(wspace=0.1, top=0.95, bottom=0.08, left=0.10, right=0.92) if save: base, _ = os.path.splitext(name) plot_name = base + '.png' plt.savefig(plot_name) print(f'Saved {plot_name}') """ Explanation: Plot a summary of the results saved by calculate_elg_snr(). Shaded bands show the 5-95 percentile range, with the median drawn as a solid curve. The fiberloss in the lower plot is calculated at the redshift zref specified in calculate_elg_snr() (since the ELG size distribution is redshift dependent). 
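The [OII] flux limits overplotted here come from get_flux_limits above: since the line SNR scales linearly with line flux, the limiting flux is just the nominal 8e-17 flux rescaled by 7/SNR. A quick numerical sketch with made-up SNR values:

```python
import numpy as np

# SNR is linear in line flux, so the flux that would reach SNR = 7
# is the nominal flux (8, in units of 1e-17 erg/cm2/s) times 7/SNR.
nominal_flux, nominal_snr = 8., 7.
snr = np.array([3.5, 7., 14.])   # made-up measured SNR values
fluxlim = nominal_flux * nominal_snr / snr
print(fluxlim)  # [16.  8.  4.]
```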
End of explanation """ import specsim.simulator desi = specsim.simulator.Simulator('desi', num_fibers=100) """ Explanation: Examples Demonstrate this calculation for the baseline DESI configuration with 100 fibers: End of explanation """ %time calculate_elg_snr(desi, save='desimodel-0.9.6.fits', description='desimodel 0.9.6') """ Explanation: NOTE: the next cell takes about 15 minutes to run. End of explanation """ plot_elg_snr('desimodel-0.9.6.fits') """ Explanation: Plot the results (Figure 2 of DESI-3977): End of explanation """ desi.instrument.fiberloss_method = 'galsim' """ Explanation: Check that the results with GalSim are compatible with those using the (default) fastsim mode of fiberloss calculations: End of explanation """ %time calculate_elg_snr(desi, save='desimodel-0.9.6-galsim.fits', description='desimodel 0.9.6 (galsim)') plot_elg_snr('desimodel-0.9.6-galsim.fits') """ Explanation: NOTE: the next cell takes about 30 minutes to run. End of explanation """ desi867 = astropy.table.Table.read('elg_snr2_desimodel-0-3-1.fits', hdu=1) """ Explanation: This comparison shows that the "fastsim" fiberloss fractions are about 1% (absolute) higher than "galsim", leading to a slight increase in signal and therefore SNR. The reason for this increase is that "fastsim" assumes a fixed minor / major axis ratio of 0.7 while our ELG population has a distribution of ratios with a median of 0.5. The weighted [OII] SNR values are 6.764 (fastsim) and 6.572 (galsim), which agree at the few percent level. We use GalSim fiberloss calculations consistently in Figure 2 and Table 2 of DESI-3977. 
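That few-percent agreement can be checked directly from the two quoted numbers:

```python
# Relative difference between the two quoted n(z)-weighted [OII] SNR values.
snr_fastsim, snr_galsim = 6.764, 6.572
rel_diff = (snr_fastsim - snr_galsim) / snr_galsim
print(f'{100 * rel_diff:.1f}%')  # about 2.9%
```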
CDR Comparison Compare with the CDR forecasts based on desimodel 0.3.1 and documented in DESI-867, using data from this FITS file: End of explanation """ def desi_867_fig1(): z = desi867['Z'] snr_all = np.sqrt(desi867['SNR2']) snr_oii = np.sqrt(desi867['SNR2_OII']) fig = plt.figure(figsize=(6, 5)) plt.plot(z, snr_all, 'k-', lw=1, label='all lines') plt.plot(z, snr_oii, 'r-', lw=1, label='[OII] only') plt.legend(fontsize='large') plt.axhline(7, c='b', ls='--') plt.ylim(0, 22) plt.xlim(z[0], z[-1]) plt.xticks([0.5, 1.0, 1.5]) plt.xlabel('Redshift') plt.ylabel('S/N') desi_867_fig1() def desi_867_fig2(): z = desi867['Z'] snr_all = np.sqrt(desi867['SNR2']) snr_oii = np.sqrt(desi867['SNR2_OII']) flux_limit_all, _ = get_flux_limits(z, snr_all) flux_limit_oii, medians = get_flux_limits(z, snr_oii) fig = plt.figure(figsize=(6, 5)) plt.plot(z, flux_limit_all, 'k-', lw=1, label='all lines') plt.plot(z, flux_limit_oii, 'r-', lw=1, label='[OII] only') plt.legend(loc='upper right', fontsize='large') _, _ = get_flux_limits(z, snr_oii, ax=plt.gca()) plt.ylim(0, 40) plt.xlim(z[0], z[-1]) plt.xticks([0.5, 1.0, 1.5]) plt.xlabel('Redshift') plt.ylabel('[OII] Flux limit ($10^{-17}$ ergs cm$^{-2}$ s$^{-1}$)') desi_867_fig2() """ Explanation: Check that we can reproduce the figures from DESI-867: End of explanation """ def cdr_summary(): z = desi867['Z'] snr_oii = np.sqrt(desi867['SNR2_OII']) wgt = get_nz_weight(z) snreff = np.sum(wgt * snr_oii) / wgt.sum() _, medians = get_flux_limits(z, snr_oii) print(f'0.3.1 (CDR) & {snreff:6.3f}', end='') for m in medians: print(f' &{m:5.1f}', end='') print(' \\\\') cdr_summary() """ Explanation: Print a summary for Table 2 of DESI-3977: End of explanation """
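The effective SNR printed by cdr_summary() above is a plain n(z)-weighted mean of the per-redshift [OII] SNR; a toy check of that reduction (numbers made up):

```python
import numpy as np

# Weighted mean, as in cdr_summary: snreff = sum(w * snr) / sum(w)
snr = np.array([5., 7., 9.])
wgt = np.array([1., 2., 1.])   # toy n(z) weights
snreff = np.sum(wgt * snr) / wgt.sum()
print(snreff)  # 7.0
```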
khyatiparekh/data-512-a1
hcds-a1-data-curation.ipynb
mit
import requests import json import csv import numpy as np import pandas as pd endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}' headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'} params = {'project' : 'en.wikipedia.org', 'access' : 'mobile-web', 'agent' : 'user', 'granularity' : 'monthly', 'start' : '2015070100', 'end' : '2017101000'#use the first day of the following month to ensure a full month of data is collected } api_call = requests.get(endpoint.format(**params)) response_pageviews_mobile_web = api_call.json() with open('pageviews_mobile_web_201507_201709.json', 'w') as outfile: json.dump(response_pageviews_mobile_web, outfile) import requests import json import csv endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}' headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'} params = {'project' : 'en.wikipedia.org', 'access' : 'mobile-app', 'agent' : 'user', 'granularity' : 'monthly', 'start' : '2015070100', 'end' : '2017101000'#use the first day of the following month to ensure a full month of data is collected } api_call = requests.get(endpoint.format(**params)) response_pageviews_mobile_app = api_call.json() with open('pageviews_mobile_app_201507_201709.json', 'w') as outfile: json.dump(response_pageviews_mobile_app, outfile) import requests import json import csv endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}' headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'} params = {'project' : 'en.wikipedia.org', 'access' : 'desktop', 'agent' : 'user', 'granularity' : 'monthly', 'start' : '2015070100', 'end' : '2017101000'#use the first day of the following month to ensure a full month of data 
is collected } api_call = requests.get(endpoint.format(**params)) response_pageviews_desktop = api_call.json() with open('pageviews_desktop_201507_201709.json', 'w') as outfile: json.dump(response_pageviews_desktop, outfile) """ Explanation: English Wikipedia View analysis for 2008-2017 The data is obtained through an API created by the wikimedia foundation enabling us to access the traffic data for mobile-web, mobile-app and desktop respectively Step 1: Data acquisition We will collect monthly data on web traffic to English Wikipedia from two Wikipedia APIs and save the data in JSON files. We will make five API calls and then save the JSON results into five separate JSON data files. We will make the API requests in python code below, but these are the complete URLs for all five API requests: End of explanation """ import requests import json import csv endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}' headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'} params = {'project' : 'de.wikipedia.org', 'access' : 'desktop-site', 'granularity' : 'monthly', 'start' : '2008060100', 'end' : '2016060100'#use the first day of the following month to ensure a full month of data is collected } api_call = requests.get(endpoint.format(**params)) response_pagecounts_desktop = api_call.json() with open('pagecounts_desktop_200806_201606.json', 'w') as outfile: json.dump(response_pagecounts_desktop, outfile) import requests import json import csv endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}' headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'} params = {'project' : 'de.wikipedia.org', 'access' : 'mobile-site', 'granularity' : 'monthly', 'start' : '2008010100', 'end' : '2016060100'#use the first day of the following month to ensure a 
full month of data is collected } api_call = requests.get(endpoint.format(**params)) response_pagecounts_mobile = api_call.json() with open('pagecounts_mobile_200801_201606.json', 'w') as outfile: json.dump(response_pagecounts_mobile, outfile) """ Explanation: API for pagecounts initiated End of explanation """ (pd.DataFrame.from_dict(data=response_pageviews_mobile_web['items'], orient='columns'). to_csv('pageviews_mobile_web_201507_201709.csv', header=True)) (pd.DataFrame.from_dict(data=response_pageviews_desktop['items'], orient='columns'). to_csv('pageviews_desktop_201507_201709.csv', header=True)) (pd.DataFrame.from_dict(data=response_pageviews_mobile_app['items'], orient='columns'). to_csv('pageviews_mobile_app_201507_201709.csv', header=True)) (pd.DataFrame.from_dict(data=response_pagecounts_desktop['items'], orient='columns'). to_csv('pagecounts_desktop_200801_201606.csv', header=True)) (pd.DataFrame.from_dict(data=response_pagecounts_mobile['items'], orient='columns'). to_csv('pagecounts_mobile_200801_201606.csv', header=True)) """ Explanation: Step 2: Data processing In this step, we perform a series of processing steps on the saved JSON data files generated by the API requests. This processing is being done to prepare the data for analysis and combine the relevant features into a single CSV-formatted data file. 
In the following cells, I have converted the JSON files to CSV files. End of explanation """ pvmw = pd.read_csv('pageviews_mobile_web_201507_201709.csv') # To remove the extra column, since read_csv automatically adds the column 'Unnamed: 0' to the dataframe del pvmw['Unnamed: 0'] pvma = pd.read_csv('pageviews_mobile_app_201507_201709.csv') del pvma['Unnamed: 0'] pcd = pd.read_csv('pagecounts_desktop_200801_201606.csv') del pcd['Unnamed: 0'] pcm = pd.read_csv('pagecounts_mobile_200801_201606.csv') del pcm['Unnamed: 0'] pvd = pd.read_csv('pageviews_desktop_201507_201709.csv') del pvd['Unnamed: 0'] """ Explanation: Giving names based on acronyms of the type of data. Naming convention: pvmw = pageview mobile-web pvma = pageview mobile app pvd = pageview desktop pcd = pagecount desktop pcm = pagecount mobile End of explanation """ def change_csv(df): df_time = df['timestamp'].values.tolist() df_time = [str(x) for x in df_time] df_year=[] for i in range(0,len(df_time)): df_year.append(df_time[i][0:4]) df['year'] = df_year df_month = [] for i in range(0,len(df_time)): df_month.append(df_time[i][4:6]) df['month'] = df_month del df['timestamp'] return df """ Explanation: Filling the dataframe with values of page views and counts To separate year and month for each CSV file, I created a helper function to do that. 
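The string slicing inside change_csv relies on the API timestamps being fixed-width YYYYMMDDHH strings, so the year and month sit at fixed offsets; for example:

```python
# Wikimedia API timestamps are fixed-width: YYYYMMDDHH
ts = '2015070100'
year, month = ts[0:4], ts[4:6]
print(year, month)  # 2015 07
```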
End of explanation """ pcm = change_csv(pcm) pcd = change_csv(pcd) pvmw = change_csv(pvmw) pvma = change_csv(pvma) pvd = change_csv(pvd) """ Explanation: Changed all dataframes to dfs with year and month separated End of explanation """ #Merging pagecount mobile and pagecount desktop and replacing NaN values with 0 pc = pcm.merge(pcd, on=['year','month'],how='outer') pc = pc.fillna(0) # To calculate total views pc["pagecount_all_views"] = pc['count_y'] + pc['count_x'] #Merging pageview mobile and pageview desktop and replacing NaN values with 0 mo = pvmw.merge(pvma,on=['year','month'],how='outer') mo["pageview_mobile_views"] = mo["views_x"] + mo["views_y"] pv = mo.merge(pvd, how='outer',on=["year", "month"]) pv = pv.fillna(0) #To get total views (mobile + desktop; the desktop column from pvd keeps the name 'views') pv["pageview_all_views"] = pv["pageview_mobile_views"] + pv["views"] mo.drop(["views_x", "views_y"],axis=1,inplace=True) #Combining both pagecount and pageview dataframes to make the final dataframe final = pc.merge(pv, how='outer',on=["year", "month"]) final = final[["year", "month", "pagecount_all_views","count_y", "count_x","pageview_all_views", "views", "pageview_mobile_views"]] final = final.fillna(0) final.to_csv('en_wikipedia_traffic_200801_201709.csv') """ Explanation: Dropping unnecessary columns to avoid carrying too many columns through the intermediate dataframes while merging Making the final dataframe End of explanation """ import matplotlib.pyplot as plt import pandas as pd import datetime as dt #Creating a date range for the x-axis daterange = pd.date_range('2008-01', '2017-05',freq='M') x = daterange pc1 = final["pagecount_all_views"] pc2 = final["count_y"] pc3 = final["count_x"] pv1 = final["pageview_all_views"] pv2 = final["views"] pv3 = final["pageview_mobile_views"] %matplotlib inline plt.plot(x, pc1, label = "PageCount All", color = "black") plt.plot(x, pc2, label = "PageCount Desktop", color = "blue") plt.plot(x, pc3, label = "PageCount Mobile", color = "yellow") plt.plot(x, pv1, label = "PageView All", color = 
"green") plt.plot(x, pv2, label = "PageView Desktop", color = "red") plt.plot(x, pv3, label = "PageView Mobile", color = "orange") plt.legend(loc=2) plt.xlabel("Time") plt.ylabel("Page Views (Raised to the power 9)") plt.show() fig.savefig("Plot.png") """ Explanation: Analysis In this part, I have plotted the timeseries graph using matplotlib End of explanation """
mwegrzyn/mindReading2017
content/_002_blindTraining.ipynb
gpl-3.0
# modules for reading files import os import fnmatch # list of all brain images located in the blindTraining folder imgList = ['../blindTraining/%s'%x for x in os.listdir('../blindTraining/') if fnmatch.fnmatch(x,'*.nii.gz')] imgList.sort() imgList """ Explanation: Before we analyze the data, we first want to become familiar with them. That is, we look at them. Why should we do that? One reason could be that we want to learn what information is contained in the data. Another reason is that we want to learn how we, as human observers, use this information. Only then should we try to formalize our intuitive approach, i.e. start computing with the data. A further reason is our ultimate goal. Our goal is to decode the blinded third run. We could of course start with that right away, but we could also first practice making predictions. As with the calibration of a polygraph, it could be a good idea to first ask questions to which we already know the answers. We know everything about the data in the first two runs, so they lend themselves well to practice. That means we can do two things: 1. look at the data unblinded and try to find regularities that characterize the blocks 2. look at the data blinded and try to make predictions What is the difference? With the first variant, we might find interesting patterns, but we do not know how good our predictions would be if we used these patterns or features. One can quickly fall into a mindset in which significance is attributed to any random noise. With the second variant (blinded), we also get a measure of how predictive the patterns or features might be. We get feedback on what works and what does not. That is good preparation for the actual prediction of run 3. 
In short: the best practice for making predictions is making predictions. Blind training In the folder "blindTraining" we have brain images of every block, but without the names. The names are stored in the corresponding text files. One possible training would be to look at an image, make a prediction, and then check one's own prediction with the help of the stored text file. End of explanation """ # list of all answer text files located in the blindTraining folder codeList = ['../blindTraining/%s'%x for x in os.listdir('../blindTraining/') if fnmatch.fnmatch(x,'*.txt')] codeList.sort() codeList """ Explanation: List with all answers End of explanation """ from nilearn import plotting import matplotlib.pylab as plt %matplotlib inline for img in imgList[:3]: plotting.plot_stat_map(img,title=img) plt.show() """ Explanation: Example: We have three images of every block: - no smoothing (s000) - 8mm smoothing (s008) - 16mm smoothing (s016) End of explanation """ %%bash fslview ../data/wbet_mpr_2mm.nii.gz ../blindTraining/img001_s000.nii.gz ../blindTraining/img001_s008.nii.gz ../blindTraining/img001_s016.nii.gz """ Explanation: We can look at these images with e.g. fslview or mricroGL; either individually or combined (that way we could also learn something about the effect of smoothing on different regions). End of explanation """ %%bash gedit ../blindTraining/ans001.txt with open('../blindTraining/ans001.txt','r') as f: print(f.read()) """ Explanation: Afterwards we can look at the answer End of explanation """ %%bash fslview ../data/gemittelteHirnbilderSmoo.nii.gz -l Red-Yellow -b 0.2,1.0 ../data/wbet_mpr_2mm.nii.gz -l Grey -b 200,1000 """ Explanation: As always: "anything goes". That is, any kind of visualization, smoothing and threshold is allowed, as are Wikipedia, Neurosynth, atlases, display in the volume or on the brain surface etc. 
A further help could be the averaged brain responses per condition. That is, it could make sense to look at what the 5-minute averages per condition actually look like. As a reminder: these are located in >> ../data/gemittelteHirnbilderSmoo.nii.gz Volume 0: Gesichter (faces) Volume 1: Homewalk Volume 2: Motorik (motor) Volume 3: Ruhe (rest) Volume 4: Sprache (language) End of explanation """ from IPython.display import IFrame IFrame('http://www.neurovault.org/collections/2282/', width=800, height=600) """ Explanation: They are also available online on neurovault.org End of explanation """
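To keep score during this blind training, one can simply tally predictions against the answer files; a minimal sketch (the guesses and answer contents here are made up for illustration — in practice they would be read from the ans*.txt files):

```python
# Toy scoring loop: compare guessed block names against the stored answers.
answers = {'ans001.txt': 'Ruhe', 'ans002.txt': 'Sprache', 'ans003.txt': 'Motorik'}
guesses = {'ans001.txt': 'Ruhe', 'ans002.txt': 'Motorik', 'ans003.txt': 'Motorik'}

n_correct = sum(guesses[k] == answers[k] for k in answers)
accuracy = n_correct / float(len(answers))
print('%d/%d correct (%.0f%%)' % (n_correct, len(answers), 100 * accuracy))  # 2/3 correct (67%)
```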
hightower8083/chimera
doc/space-charge-demo(vs_ocelot).ipynb
gpl-3.0
%matplotlib inline import numpy as np from scipy.constants import e import matplotlib.pyplot as plt import sys import ocelot as oclt from chimera.moduls.species import Specie from chimera.moduls.solvers import Solver from chimera.moduls.chimera_main import ChimeraRun from chimera.moduls.diagnostics import Diagnostics from chimera.moduls import fimera as chimera """ Explanation: Space charge fields of a relativistic electron beam Here is a test of the account for the space-charge fields of a relativistic electron beam. We compare - the particle tracker code OCELOT (https://github.com/iagapov/ocelot/), where the Poisson equation is solved in the beam rest frame via an FFT technique with - CHIMERA, where two modes of space-charge modeling are available: - full PIC modeling with the quasi-cylindrical PSATD solver - a space-charge kicker, where the relativistic Maxwell equations are solved in the static approximation via an FFT+DHT technique End of explanation """ SimLgth = 20. pz0 = 50. Chrg = 30e-12 Size = 3. nmax = Chrg/e/((Size*1.e-4)**3*(2*np.pi)**1.5)/(1.1e21) """ Explanation: As the simplest test, we consider a 20 $\mu$m drift of a 30 pC electron beam with a Gaussian density profile $\sigma_z=\sigma_r=3$ $\mu$m and the longitudinal momentum $p_z=50\, m_e c $. Initially the beam has no energy spread and zero divergence. 
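For orientation, the peak number density implied by these parameters can be checked by hand, assuming a round 3D Gaussian bunch (this sketch leaves out the 1.1e21 divisor that the nmax expression uses as the code's reference density):

```python
import math

# Peak density of a 3D Gaussian bunch: n0 = (Q/e) / ((2*pi)**1.5 * sigma**3)
e_charge = 1.602176634e-19   # elementary charge [C]
Q = 30e-12                   # bunch charge [C]
sigma_cm = 3e-4              # rms size, 3 um in cm

n0 = (Q / e_charge) / ((2 * math.pi) ** 1.5 * sigma_cm ** 3)
print('%.2e cm^-3' % n0)  # ~4.40e17 cm^-3
```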
End of explanation """ Np = int(1e6) BeamCharge = Chrg parts0 = np.zeros((6,Np)) parts0[0] = Size*1e-6*np.random.randn(Np) parts0[2] = Size*1e-6*np.random.randn(Np) parts0[4] = Size*1e-6*np.random.randn(Np) p_array_init = oclt.ParticleArray(n=Np) p_array_init.rparticles[:] = parts0 #.T.flatten() p_array_init.E = (1+pz0**2)**.5*0.511e-3 p_array_init.s = 0.0 p_array_init.q_array = (BeamCharge/Np)*np.ones(Np) D1 = oclt.Drift(l = SimLgth*1e-6 ) D2 = oclt.Drift(l=0) cell = (D1,D2) sc1 = oclt.SpaceCharge() sc1.nmesh_xyz = [63, 63, 63] sc1.low_order_kick = False sc1.step = 1 method = oclt.MethodTM() lat = oclt.MagneticLattice(cell,method=method) print("length of the cell: ", lat.totalLen, "m") navi = oclt.Navigator(lat) navi.add_physics_proc(sc1,lat.sequence[0],lat.sequence[-1]) p_array = oclt.deepcopy(p_array_init) dz = 1e-6 LL = p_array.s SS = [] Sx = [] Sy = [] Sz = [] Ex = [] Ey = [] while LL<lat.totalLen: oclt.tracking_step(lat, p_array, dz,navi) proc_list = navi.get_proc_list() for p in proc_list: p.z0 = navi.z0 p.apply(p_array, dz) LL += dz sys.stdout.write('\r'+str(LL)+'/'+str(lat.totalLen)) sys.stdout.flush() """ Explanation: OCELOT simulation End of explanation """ xmin, xmax = -7.*Size,7.*Size lrg = 30*Size Nx = 300 Nr = 300 dx = (xmax-xmin)/Nx dr = lrg/Nr dt = 1.0 solver_in = { 'Grid':(xmin, xmax,lrg,dx,dr),'TimeStep':dt,'MaxAzimuthMode':1, 'Features':('StaticKick',),'Xchunked':(4,6) } beam_in = { 'Grid':(xmin, xmax,lrg,dx,dr),'TimeStep':dt,'Charge':-1.,'Mass':1., 'Density':nmax, 'FixedCell':(4,8,8),'MomentaMeans':(pz0,0.,0.), 'Xchunked':(4,6) } solver = Solver(solver_in) beam = Specie(beam_in) MovingFrame = {'TimeStep':dt,'Steps':1,'Features':('NoSorting','Staged')} chimera_in = { 'Solvers':(solver,),'Particles':(beam,),'MovingFrames':(MovingFrame,) } fu = lambda x,y,z: np.exp(-0.5*(x**2+y**2+z**2)/Size**2) beam.add_particles(*beam.gen_parts(Domain=(-3.5*Size,3.5*Size, 0.0, 3.5*Size),ProfileFunc=fu)) Chimera = ChimeraRun(chimera_in) Diags = 
Diagnostics(Chimera,(),out_folder=None) for i in range(int(SimLgth/dt)+1): Chimera.make_step(i) sys.stdout.write('\r'+str(i)+'/'+str(int(SimLgth/dt))) sys.stdout.flush() """ Explanation: CHIMERA with solver in "StaticKick" mode End of explanation """ xmin, xmax = -7.*Size,7.*Size lrg = 30*Size Nx = 300 Nr = 300 dx = (xmax-xmin)/Nx dr = lrg/Nr dt = 0.06 solver_in1 = { 'Grid':(xmin, xmax,lrg,dx,dr),'TimeStep':dt,'MaxAzimuthMode':1, 'Features':('SpaceCharge',),'Xchunked':(4,6) } beam_in1 = { 'Grid':(xmin, xmax,lrg,dx,dr),'TimeStep':dt,'Charge':-1.,'Mass':1., 'Density':nmax, 'FixedCell':(4,8,8),'MomentaMeans':(pz0,0.,0.), 'Features':(),'Xchunked':(4,6) } solver1 = Solver(solver_in1) beam1 = Specie(beam_in1) MovingFrame = {'TimeStep':dt,'Steps':1,'Features':('NoSorting','Staged')} chimera_in1 = { 'Solvers':(solver1,),'Particles':(beam1,),'MovingFrames':(MovingFrame,) } fu = lambda x,y,z: np.exp(-0.5*(x**2+y**2+z**2)/Size**2) beam1.add_particles(*beam1.gen_parts(Domain=(-3.5*Size,3.5*Size, 0.0, 3.5*Size),ProfileFunc=fu)) Chimera1 = ChimeraRun(chimera_in1) Diags1 = Diagnostics(Chimera1,(),out_folder=None) for i in range(int(SimLgth/dt)+1): Chimera1.make_step(i) sys.stdout.write('\r'+str(i)+'/'+str(int(SimLgth/dt))) sys.stdout.flush() fig,((ax1,ax2,ax3),(ax4,ax5,ax6)) = plt.subplots(2,3,figsize=(16,12),dpi=200) dpmax = (beam.Data['momenta'][0].max()-pz0)*1e3 range1 = [[SimLgth-10,SimLgth+10],[-2*dpmax,+2*dpmax]] range2=[[-15,15],[-2,2]] weight2pC = beam.Args['weight2pC'] dns_max = 1e-3 ax1.hist2d(beam1.Data['coords'][0],(beam1.Data['momenta'][0]-pz0)*1e3,weights=-beam1.Data['weights'], bins=120,range=range1,vmax=dns_max,cmap=plt.cm.spectral); ax4.hist2d(beam1.Data['coords'][2],beam1.Data['momenta'][2]*1e4,weights=-beam1.Data['weights'], bins=120,range=range2,vmax=dns_max,cmap=plt.cm.spectral); ax2.hist2d(beam.Data['coords'][0],(beam.Data['momenta'][0]-pz0)*1e3,weights=-beam.Data['weights'], bins=120,range=range1,vmax=dns_max,cmap=plt.cm.spectral); 
ax5.hist2d(beam.Data['coords'][2],beam.Data['momenta'][2]*1e4,weights=-beam.Data['weights'], bins=120,range=range2,vmax=5e-4,cmap=plt.cm.spectral); ax3.hist2d((-p_array.tau()+p_array.s)*1e6,(p_array.p())*p_array.E*1e6/0.511,weights=p_array.q_array, bins=120,range=range1,vmax=dns_max*weight2pC*1e-12,cmap=plt.cm.spectral); ax6.hist2d(p_array.x()*1e6,p_array.px()*p_array.E*1e7/0.511,weights=p_array.q_array, bins=120,range=range2,vmax=dns_max*weight2pC*1e-12,cmap=plt.cm.spectral); ax1.set_title('CHIMERA full PIC',fontsize=18); ax2.set_title('CHIMERA kicker',fontsize=18) ax3.set_title('OCELOT',fontsize=18) for ax in (ax4,ax5,ax6): ax.set_xlabel('coordinate ($\mu$m)',fontsize=18) ax1.set_ylabel('$(p_\parallel-p_{z0})\cdot 10^3$',fontsize=18) ax4.set_ylabel('$p_\perp\cdot 10^4$',fontsize=18); fig,((ax1,ax2),(ax3,ax4)) = plt.subplots(2,2,figsize=(16,10),dpi=200) extent = (solver1.Args['Xgrid'].min(),solver1.Args['Xgrid'].max(), -solver1.Args['Rgrid'].max(),solver1.Args['Rgrid'].max()) ee = Diags1.fld_out( {'Features':('Return',)} )[0] nn = Diags1.dns_out( {'Features':{'Return':0,'MaxMode':1}} )[0] ex = np.real(np.hstack((ee[:,::-1,0,0] + ee[:,::-1,1,0],ee[:,1:,0,0] -ee[:,1:,1,0]))) ey = np.real(np.hstack((ee[:,::-1,0,1] - ee[:,::-1,1,1],ee[:,1:,0,1] +ee[:,1:,1,1]))) bz = np.real(np.hstack((ee[:,::-1,0,5] + ee[:,::-1,1,5],ee[:,1:,0,5] -ee[:,1:,1,5]))) ne = np.real(np.hstack((nn[:,::-1,0] + nn[:,::-1,1],nn[:,1:,0] -nn[:,1:,1]))) pl1 = ax1.imshow(ex.T ,aspect='auto', extent=extent,origin='lower',cmap=plt.cm.seismic) pl2 = ax2.imshow((ey+bz).T ,aspect='auto', extent=extent,origin='lower',cmap=plt.cm.seismic) extent = (solver.Args['Xgrid'].min(),solver.Args['Xgrid'].max(), -solver.Args['Rgrid'].max(),solver.Args['Rgrid'].max()) ee = Diags.fld_out( {'Features':('Return',)} )[0] nn = Diags.dns_out( {'Features':{'Return':0,'MaxMode':1}} )[0] ex = np.real(np.hstack((ee[:,::-1,0,0] + ee[:,::-1,1,0],ee[:,1:,0,0] -ee[:,1:,1,0]))) ey = np.real(np.hstack((ee[:,::-1,0,1] - 
ee[:,::-1,1,1],ee[:,1:,0,1] +ee[:,1:,1,1]))) bz = np.real(np.hstack((ee[:,::-1,0,5] + ee[:,::-1,1,5],ee[:,1:,0,5] -ee[:,1:,1,5]))) ne = np.real(np.hstack((nn[:,::-1,0] + nn[:,::-1,1],nn[:,1:,0] -nn[:,1:,1]))) pl3 = ax3.imshow(ex.T ,aspect='auto', extent=extent,origin='lower',cmap=plt.cm.seismic) pl4 = ax4.imshow((ey+bz).T ,aspect='auto', extent=extent,origin='lower',cmap=plt.cm.seismic) #fig.colorbar(pl1,ax=ax1) #fig.colorbar(pl2,ax=ax2) #fig.colorbar(pl3,ax=ax3) #fig.colorbar(pl4,ax=ax4) ax1.set_title('$E_x$',fontsize=18); ax2.set_title('$E_y$',fontsize=18) for ax in (ax3,ax4,): ax.set_xlabel('x-coordinate ($\mu$m)',fontsize=18) ax1.set_ylabel('CHIMERA full PIC \n y-coordinate ($\mu$m)',fontsize=18) ax3.set_ylabel('CHIMERA kicker \n y-coordinate ($\mu$m)',fontsize=18) """ Explanation: CHIMERA with solver in full PIC mode End of explanation """
jegibbs/phys202-2015-work
assignments/assignment08/InterpolationEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import numpy as np from scipy.interpolate import interp1d """ Explanation: Interpolation Exercise 1 End of explanation """ trajectory = np.load('trajectory.npz') x = trajectory['x'] y = trajectory['y'] t = trajectory['t'] assert isinstance(x, np.ndarray) and len(x)==40 assert isinstance(y, np.ndarray) and len(y)==40 assert isinstance(t, np.ndarray) and len(t)==40 """ Explanation: 2D trajectory interpolation The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time: t which has discrete values of time t[i]. x which has values of the x position at those times: x[i] = x(t[i]). y which has values of the y position at those times: y[i] = y(t[i]). Load those arrays into this notebook and save them as variables x, y and t: End of explanation """ newt = np.linspace(min(t),max(t),200) x_interp = interp1d(t, x, kind='cubic') y_interp = interp1d(t, y, kind='cubic') newx = x_interp(newt) newy = y_interp(newt) assert newt[0]==t.min() assert newt[-1]==t.max() assert len(newt)==200 assert len(newx)==200 assert len(newy)==200 """ Explanation: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between $\{t_{min},t_{max}\}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times. End of explanation """ plt.plot(x, y, marker='o', linestyle='', label='original data') plt.plot(newx, newy, marker='.', label='interpolated'); plt.legend(loc=4); plt.xlabel('$x(t)$') plt.ylabel('$y(t)$'); plt.xlim(-0.7, 0.9) plt.ylim(-0.7,0.9) assert True # leave this to grade the trajectory plot """ Explanation: Make a parametric plot of $\{x(t),y(t)\}$ that shows the interpolated values and the original points: For the interpolated points, use a solid line. For the original points, use circles of a different color and no line. Customize your plot to make it effective and beautiful. End of explanation """
afronski/playground-notes
introduction-to-big-data-with-apache-spark/solutions/lab2_apache_log_student.ipynb
mit
import re import datetime from pyspark.sql import Row month_map = {'Jan': 1, 'Feb': 2, 'Mar':3, 'Apr':4, 'May':5, 'Jun':6, 'Jul':7, 'Aug':8, 'Sep': 9, 'Oct':10, 'Nov': 11, 'Dec': 12} def parse_apache_time(s): """ Convert Apache time format into a Python datetime object Args: s (str): date and time in Apache time format Returns: datetime: datetime object (ignore timezone for now) """ return datetime.datetime(int(s[7:11]), month_map[s[3:6]], int(s[0:2]), int(s[12:14]), int(s[15:17]), int(s[18:20])) def parseApacheLogLine(logline): """ Parse a line in the Apache Common Log format Args: logline (str): a line of text in the Apache Common Log format Returns: tuple: either a dictionary containing the parts of the Apache Access Log and 1, or the original invalid log line and 0 """ match = re.search(APACHE_ACCESS_LOG_PATTERN, logline) if match is None: return (logline, 0) size_field = match.group(9) if size_field == '-': size = long(0) else: size = long(match.group(9)) return (Row( host = match.group(1), client_identd = match.group(2), user_id = match.group(3), date_time = parse_apache_time(match.group(4)), method = match.group(5), endpoint = match.group(6), protocol = match.group(7), response_code = int(match.group(8)), content_size = size ), 1) # A regular expression pattern to extract fields from the log line APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)' """ Explanation: version 1.0.1 + Web Server Log Analysis with Apache Spark This lab will demonstrate how easy it is to perform web server log analysis with Apache Spark. Server log analysis is an ideal use case for Spark. It's a very large, common data source and contains a rich set of information. Spark allows you to store your logs in files on disk cheaply, while still providing a quick and simple way to perform data analysis on them. 
This homework will show you how to use Apache Spark on real-world text-based production logs and fully harness the power of that data. Log data comes from many sources, such as web, file, and compute servers, application logs, user-generated content, and can be used for monitoring servers, improving business and customer intelligence, building recommendation systems, fraud detection, and much more. How to complete this assignment This assignment is broken up into sections with bite-sized examples for demonstrating Spark functionality for log processing. For each problem, you should start by thinking about the algorithm that you will use to efficiently process the log in a parallel, distributed manner. This means using the various RDD operations along with lambda functions that are applied at each worker. This assignment consists of 4 parts: Part 1: Apache Web Server Log file format Part 2: Sample Analyses on the Web Server Log File Part 3: Analyzing Web Server Log File Part 4: Exploring 404 Response Codes Part 1: Apache Web Server Log file format The log files that we use for this assignment are in the Apache Common Log Format (CLF). The log file entries produced in CLF will look something like this: 127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839 Each part of this log entry is described below. 127.0.0.1 This is the IP address (or host name, if available) of the client (remote host) which made the request to the server. - The "hyphen" in the output indicates that the requested piece of information (user identity from remote machine) is not available. - The "hyphen" in the output indicates that the requested piece of information (user identity from local logon) is not available. [01/Aug/1995:00:00:01 -0400] The time that the server finished processing the request. 
The format is: [day/month/year:hour:minute:second timezone] * day = 2 digits * month = 3 letters * year = 4 digits * hour = 2 digits * minute = 2 digits * second = 2 digits * zone = (+ | -) 4 digits "GET /images/launch-logo.gif HTTP/1.0" This is the first line of the request string from the client. It consists of three components: the request method (e.g., GET, POST, etc.), the endpoint (a Uniform Resource Identifier), and the client protocol version. 200 This is the status code that the server sends back to the client. This information is very valuable, because it reveals whether the request resulted in a successful response (codes beginning in 2), a redirection (codes beginning in 3), an error caused by the client (codes beginning in 4), or an error in the server (codes beginning in 5). The full list of possible status codes can be found in the HTTP specification (RFC 2616 section 10). 1839 The last entry indicates the size of the object returned to the client, not including the response headers. If no content was returned to the client, this value will be "-" (or sometimes 0). Note that log files contain information supplied directly by the client, without escaping. Therefore, it is possible for malicious clients to insert control-characters in the log files, so care must be taken in dealing with raw logs. NASA-HTTP Web Server Log For this assignment, we will use a data set from the NASA Kennedy Space Center WWW server in Florida. The full data set is freely available (http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html) and contains two months of all HTTP requests. We are using a subset that only contains several days' worth of requests. (1a) Parsing Each Log Line Using the CLF as defined above, we create a regular expression pattern to extract the nine fields of the log line using the Python regular expression search function. The function returns a pair consisting of a Row object and 1.
If the log line fails to match the regular expression, the function returns a pair consisting of the log line string and 0. A '-' value in the content size field is cleaned up by substituting it with 0. The function converts the log line's date string into a Python datetime object using the given parse_apache_time function. End of explanation """ import sys import os from test_helper import Test baseDir = os.path.join('data') inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT') logFile = os.path.join(baseDir, inputPath) def parseLogs(): """ Read and parse log file """ parsed_logs = (sc .textFile(logFile) .map(parseApacheLogLine) .cache()) access_logs = (parsed_logs .filter(lambda s: s[1] == 1) .map(lambda s: s[0]) .cache()) failed_logs = (parsed_logs .filter(lambda s: s[1] == 0) .map(lambda s: s[0])) failed_logs_count = failed_logs.count() if failed_logs_count > 0: print 'Number of invalid logline: %d' % failed_logs.count() for line in failed_logs.take(20): print 'Invalid logline: %s' % line print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (parsed_logs.count(), access_logs.count(), failed_logs.count()) return parsed_logs, access_logs, failed_logs parsed_logs, access_logs, failed_logs = parseLogs() """ Explanation: (1b) Configuration and Initial RDD Creation We are ready to specify the input log file and create an RDD containing the parsed log file data. The log file has already been downloaded for you. To create the primary RDD that we'll use in the rest of this assignment, we first load the text file using sc.textfile(logFile) to convert each line of the file into an element in an RDD. Next, we use map(parseApacheLogLine) to apply the parse function to each element (that is, a line from the log file) in the RDD and turn each line into a pair Row object. Finally, we cache the RDD in memory since we'll use it throughout this notebook. 
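As a quick check on the pattern from (1a), the same regular expression can be run against the sample log line with plain re, outside of Spark (a standalone sketch; it repeats the pattern rather than importing the notebook's state):

```python
import re

# The nine-field CLF pattern from section (1a), as a raw string.
CLF_PATTERN = r'^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)'

sample = '127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839'
match = re.search(CLF_PATTERN, sample)

# Each capture group maps to one CLF field.
host, identd, user_id = match.group(1), match.group(2), match.group(3)
timestamp = match.group(4)      # '01/Aug/1995:00:00:01 -0400'
method, endpoint, protocol = match.group(5), match.group(6), match.group(7)
response_code = int(match.group(8))
content_size = int(match.group(9))
```

The nine groups line up with the fields of the Row built in parseApacheLogLine.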
End of explanation """ APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[\+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)' parsed_logs, access_logs, failed_logs = parseLogs() # TEST Data cleaning (1c) Test.assertEquals(failed_logs.count(), 0, 'incorrect failed_logs.count()') Test.assertEquals(parsed_logs.count(), 1043177 , 'incorrect parsed_logs.count()') Test.assertEquals(access_logs.count(), parsed_logs.count(), 'incorrect access_logs.count()') """ Explanation: (1c) Data Cleaning Notice that there are a large number of log lines that failed to parse. Examine the sample of invalid lines and compare them to the correctly parsed line, an example is included below. Based on your observations, alter the APACHE_ACCESS_LOG_PATTERN regular expression below so that the failed lines will correctly parse, and press Shift-Enter to rerun parseLogs(). 127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839 If you not familar with Python regular expression search function, now would be a good time to check up on the documentation. One tip that might be useful is to use an online tester like http://pythex.org or http://www.pythonregex.com. To use it, copy and paste the regular expression string below (located between the single quotes ') and test it against one of the 'Invalid logline' above. End of explanation """ # Calculate statistics based on the content size. content_sizes = access_logs.map(lambda log: log.content_size).cache() print 'Content Size Avg: %i, Min: %i, Max: %s' % ( content_sizes.reduce(lambda a, b : a + b) / content_sizes.count(), content_sizes.min(), content_sizes.max()) """ Explanation: Part 2: Sample Analyses on the Web Server Log File Now that we have an RDD containing the log file as a set of Row objects, we can perform various analyses. (2a) Example: Content Size Statistics Let's compute some statistics about the sizes of content being returned by the web server. 
In particular, we'd like to know what are the average, minimum, and maximum content sizes. We can compute the statistics by applying a map to the access_logs RDD. The lambda function we want for the map is to extract the content_size field from the RDD. The map produces a new RDD containing only the content_sizes (one element for each Row object in the access_logs RDD). To compute the minimum and maximum statistics, we can use min() and max() functions on the new RDD. We can compute the average statistic by using the reduce function with a lambda function that sums the two inputs, which represent two elements from the new RDD that are being reduced together. The result of the reduce() is the total content size from the log and it is to be divided by the number of requests as determined using the count() function on the new RDD. End of explanation """ # Response Code to Count responseCodeToCount = (access_logs .map(lambda log: (log.response_code, 1)) .reduceByKey(lambda a, b : a + b) .cache()) responseCodeToCountList = responseCodeToCount.take(100) print 'Found %d response codes' % len(responseCodeToCountList) print 'Response Code Counts: %s' % responseCodeToCountList assert len(responseCodeToCountList) == 7 assert sorted(responseCodeToCountList) == [(200, 940847), (302, 16244), (304, 79824), (403, 58), (404, 6185), (500, 2), (501, 17)] """ Explanation: (2b) Example: Response Code Analysis Next, lets look at the response codes that appear in the log. As with the content size analysis, first we create a new RDD by using a lambda function to extract the response_code field from the access_logs RDD. The difference here is that we will use a pair tuple instead of just the field itself. Using a pair tuple consisting of the response code and 1 will let us count how many records have a particular response code. Using the new RDD, we perform a reduceByKey function. 
reduceByKey performs a reduce on a per-key basis by applying the lambda function to each element, pairwise with the same key. We use the simple lambda function of adding the two values. Then, we cache the resulting RDD and create a list by using the take function. End of explanation """ labels = responseCodeToCount.map(lambda (x, y): x).collect() print labels count = access_logs.count() fracs = responseCodeToCount.map(lambda (x, y): (float(y) / count)).collect() print fracs import matplotlib.pyplot as plt def pie_pct_format(value): """ Determine the appropriate format string for the pie chart percentage label Args: value: value of the pie slice Returns: str: formatted string label; if the slice is too small to fit, returns an empty string for label """ return '' if value < 7 else '%.0f%%' % value fig = plt.figure(figsize=(4.5, 4.5), facecolor='white', edgecolor='white') colors = ['yellowgreen', 'lightskyblue', 'gold', 'purple', 'lightcoral', 'yellow', 'black'] explode = (0.05, 0.05, 0.1, 0, 0, 0, 0) patches, texts, autotexts = plt.pie(fracs, labels=labels, colors=colors, explode=explode, autopct=pie_pct_format, shadow=False, startangle=125) for text, autotext in zip(texts, autotexts): if autotext.get_text() == '': text.set_text('') # If the slice is too small to fit, don't show a text label plt.legend(labels, loc=(0.80, -0.1), shadow=True) pass """ Explanation: (2c) Example: Response Code Graphing with matplotlib Now, let's visualize the results from the last example using matplotlib. First we need to extract the labels and fractions for the graph. We do this with two separate map functions with lambda functions. The first map function extracts a list of the response code values, and the second map function extracts a list of the per response code counts divided by the total size of the access logs. Next, we create a figure with the figure() constructor and use the pie() method to create the pie plot.
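Without a Spark context, the two map steps reduce to plain list comprehensions over the response-code counts from (2b) (hard-coded here for illustration):

```python
# Response-code counts taken from the (2b) output above.
response_code_to_count = [(200, 940847), (302, 16244), (304, 79824),
                          (403, 58), (404, 6185), (500, 2), (501, 17)]
total = sum(count for _, count in response_code_to_count)

# Equivalent of the two RDD map() calls: one list of labels, one of fractions.
labels = [code for code, _ in response_code_to_count]
fracs = [float(count) / total for _, count in response_code_to_count]
```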
End of explanation """ # Any hosts that has accessed the server more than 10 times. hostCountPairTuple = access_logs.map(lambda log: (log.host, 1)) hostSum = hostCountPairTuple.reduceByKey(lambda a, b : a + b) hostMoreThan10 = hostSum.filter(lambda s: s[1] > 10) hostsPick20 = (hostMoreThan10 .map(lambda s: s[0]) .take(20)) print 'Any 20 hosts that have accessed more then 10 times: %s' % hostsPick20 # An example: [u'204.120.34.185', u'204.243.249.9', u'slip1-32.acs.ohio-state.edu', u'lapdog-14.baylor.edu', u'199.77.67.3', u'gs1.cs.ttu.edu', u'haskell.limbex.com', u'alfred.uib.no', u'146.129.66.31', u'manaus.bologna.maraut.it', u'dialup98-110.swipnet.se', u'slip-ppp02.feldspar.com', u'ad03-053.compuserve.com', u'srawlin.opsys.nwa.com', u'199.202.200.52', u'ix-den7-23.ix.netcom.com', u'151.99.247.114', u'w20-575-104.mit.edu', u'205.25.227.20', u'ns.rmc.com'] """ Explanation: (2d) Example: Frequent Hosts Let's look at hosts that have accessed the server multiple times (e.g., more than ten times). As with the response code analysis in (2b), first we create a new RDD by using a lambda function to extract the host field from the access_logs RDD using a pair tuple consisting of the host and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then filter the result based on the count of accesses by each host (the second element of each pair) being greater than ten. Next, we extract the host name by performing a map with a lambda function that returns the first element of each pair. Finally, we extract 20 elements from the resulting RDD - note that the choice of which elements are returned is not guaranteed to be deterministic. 
End of explanation """ endpoints = (access_logs .map(lambda log: (log.endpoint, 1)) .reduceByKey(lambda a, b : a + b) .cache()) ends = endpoints.map(lambda (x, y): x).collect() counts = endpoints.map(lambda (x, y): y).collect() fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, len(ends), 0, max(counts)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Endpoints') plt.ylabel('Number of Hits') plt.plot(counts) pass """ Explanation: (2e) Example: Visualizing Endpoints Now, lets visualize the number of hits to endpoints (URIs) in the log. To perform this task, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD using a pair tuple consisting of the endpoint and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then cache the results. Next we visualize the results using matplotlib. We previously imported the matplotlib.pyplot library, so we do not need to import it again. We perform two separate map functions with lambda functions. The first map function extracts a list of endpoint values, and the second map function extracts a list of the visits per endpoint values. Next, we create a figure with figure() constructor, set various features of the plot (axis limits, grid lines, and labels), and use the plot() method to create the line plot. 
End of explanation """ # Top Endpoints endpointCounts = (access_logs .map(lambda log: (log.endpoint, 1)) .reduceByKey(lambda a, b : a + b)) topEndpoints = endpointCounts.takeOrdered(10, lambda s: -1 * s[1]) print 'Top Ten Endpoints: %s' % topEndpoints assert topEndpoints == [(u'/images/NASA-logosmall.gif', 59737), (u'/images/KSC-logosmall.gif', 50452), (u'/images/MOSAIC-logosmall.gif', 43890), (u'/images/USA-logosmall.gif', 43664), (u'/images/WORLD-logosmall.gif', 43277), (u'/images/ksclogo-medium.gif', 41336), (u'/ksc.html', 28582), (u'/history/apollo/images/apollo-logo1.gif', 26778), (u'/images/launch-logo.gif', 24755), (u'/', 20292)], 'incorrect Top Ten Endpoints' """ Explanation: (2f) Example: Top Endpoints For the final example, we'll look at the top endpoints (URIs) in the log. To determine them, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD using a pair tuple consisting of the endpoint and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then extract the top ten endpoints by performing a takeOrdered with a value of 10 and a lambda function that multiplies the count (the second element of each pair) by -1 to create a sorted list with the top endpoints at the bottom. 
End of explanation """ not200 = access_logs.filter(lambda log: log.response_code <> 200) endpointCountPairTuple = not200.map(lambda log: (log.endpoint, 1)) endpointSum = endpointCountPairTuple.reduceByKey(lambda a, b : a + b) topTenErrURLs = endpointSum.takeOrdered(10, lambda s: -1 * s[1]) print 'Top Ten failed URLs: %s' % topTenErrURLs # TEST Top ten error endpoints (3a) Test.assertEquals(endpointSum.count(), 7689, 'incorrect count for endpointSum') Test.assertEquals(topTenErrURLs, [(u'/images/NASA-logosmall.gif', 8761), (u'/images/KSC-logosmall.gif', 7236), (u'/images/MOSAIC-logosmall.gif', 5197), (u'/images/USA-logosmall.gif', 5157), (u'/images/WORLD-logosmall.gif', 5020), (u'/images/ksclogo-medium.gif', 4728), (u'/history/apollo/images/apollo-logo1.gif', 2907), (u'/images/launch-logo.gif', 2811), (u'/', 2199), (u'/images/ksclogosmall.gif', 1622)], 'incorrect Top Ten failed URLs (topTenErrURLs)') """ Explanation: Part 3: Analyzing Web Server Log File Now it is your turn to perform analyses on web server log files. (3a) Exercise: Top Ten Error Endpoints What are the top ten endpoints which did not have return code 200? Create a sorted list containing top ten endpoints and the number of times that they were accessed with non-200 return code. Think about the steps that you need to perform to determine which endpoints did not have a 200 return code, how you will uniquely count those endpoints, and sort the list. You might want to refer back to the previous Lab (Lab 1 Word Count) for insights. End of explanation """ hosts = access_logs.map(lambda log: log.host) uniqueHosts = hosts.countByValue() uniqueHostCount = len(uniqueHosts) print 'Unique hosts: %d' % uniqueHostCount # TEST Number of unique hosts (3b) Test.assertEquals(uniqueHostCount, 54507, 'incorrect uniqueHostCount') """ Explanation: (3b) Exercise: Number of Unique Hosts How many unique hosts are there in the entire log? 
Think about the steps that you need to perform to count the number of different hosts in the log. End of explanation """ def empty(a): return set([ a ]) def add(acc, a): acc.update([ a ]) return acc def merge(acc1, acc2): return acc1.union(acc2) dayToHostPairTuple = access_logs.map(lambda log: (log.date_time.day, log.host)) dayGroupedHosts = dayToHostPairTuple.combineByKey(empty, add, merge) dayHostCount = dayGroupedHosts.map(lambda (day, hosts): (day, len(hosts))) dailyHosts = dayHostCount.sortByKey().cache() dailyHostsList = dailyHosts.take(30) print 'Unique hosts per day: %s' % dailyHostsList # TEST Number of unique daily hosts (3c) Test.assertEquals(dailyHosts.count(), 21, 'incorrect dailyHosts.count()') Test.assertEquals(dailyHostsList, [(1, 2582), (3, 3222), (4, 4190), (5, 2502), (6, 2537), (7, 4106), (8, 4406), (9, 4317), (10, 4523), (11, 4346), (12, 2864), (13, 2650), (14, 4454), (15, 4214), (16, 4340), (17, 4385), (18, 4168), (19, 2550), (20, 2560), (21, 4134), (22, 4456)], 'incorrect dailyHostsList') Test.assertTrue(dailyHosts.is_cached, 'incorrect dailyHosts.is_cached') """ Explanation: (3c) Exercise: Number of Unique Daily Hosts For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month which includes the day of the month and the associated number of unique hosts for that day. Make sure you cache the resulting RDD dailyHosts so that we can reuse it in the next exercise. Think about the steps that you need to perform to count the number of different hosts that make requests each day. Since the log only covers a single month, you can ignore the month. 
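To see what the empty/add/merge combiners build up, the same per-day set logic can be sketched with a dict of sets over made-up (day, host) pairs:

```python
# (day, host) pairs, as produced by dayToHostPairTuple (toy data).
day_host_pairs = [(1, 'h1'), (1, 'h2'), (1, 'h1'), (2, 'h3'), (2, 'h3')]

hosts_by_day = {}
for day, host in day_host_pairs:
    # setdefault + add mirrors the empty/add/merge combiners above:
    # duplicate hosts collapse into the set.
    hosts_by_day.setdefault(day, set()).add(host)

daily_host_counts = sorted((day, len(hosts)) for day, hosts in hosts_by_day.items())
```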
End of explanation """ daysWithHosts = dailyHosts.map(lambda (day, hosts): day).collect() hosts = dailyHosts.map(lambda (day, hosts): hosts).collect() # TEST Visualizing unique daily hosts (3d) test_days = range(1, 23) test_days.remove(2) Test.assertEquals(daysWithHosts, test_days, 'incorrect days') Test.assertEquals(hosts, [2582, 3222, 4190, 2502, 2537, 4106, 4406, 4317, 4523, 4346, 2864, 2650, 4454, 4214, 4340, 4385, 4168, 2550, 2560, 4134, 4456], 'incorrect hosts') fig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white') plt.axis([min(daysWithHosts), max(daysWithHosts), 0, max(hosts)+500]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('Hosts') plt.plot(daysWithHosts, hosts) pass """ Explanation: (3d) Exercise: Visualizing the Number of Unique Daily Hosts Using the results from the previous exercise, use matplotlib to plot a "Line" graph of the unique hosts requests by day. daysWithHosts should be a list of days and hosts should be a list of number of unique hosts for each corresponding day. * How could you convert a RDD into a list? 
See the collect() method* End of explanation """ reqsPerDay = access_logs.map(lambda log: (log.date_time.day, 1)).reduceByKey(lambda a, b: a + b).sortByKey() groupedByDay = reqsPerDay.join(dailyHosts) avgDailyReqPerHost = groupedByDay.map(lambda (day, (r, h)): (day, r / h)).sortByKey().cache() avgDailyReqPerHostList = avgDailyReqPerHost.take(30) print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList # TEST Average number of daily requests per hosts (3e) Test.assertEquals(avgDailyReqPerHostList, [(1, 13), (3, 12), (4, 14), (5, 12), (6, 12), (7, 13), (8, 13), (9, 14), (10, 13), (11, 14), (12, 13), (13, 13), (14, 13), (15, 13), (16, 13), (17, 13), (18, 13), (19, 12), (20, 12), (21, 13), (22, 12)], 'incorrect avgDailyReqPerHostList') Test.assertTrue(avgDailyReqPerHost.is_cached, 'incorrect avgDailyReqPerHost.is_cache') """ Explanation: (3e) Exercise: Average Number of Daily Requests per Hosts Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD avgDailyReqPerHost so that we can reuse it in the next exercise. To compute the average number of requests per host, get the total number of request across all hosts and divide that by the number of unique hosts. Since the log only covers a single month, you can skip checking for the month. 
Also to keep it simple, when calculating the approximate average use the integer value - you do not need to upcast to float End of explanation """ daysWithAvg = avgDailyReqPerHost.map(lambda (day, r): day).collect() avgs = avgDailyReqPerHost.map(lambda (day, r): r).collect() # TEST Average Daily Requests per Unique Host (3f) Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect days') Test.assertEquals(avgs, [13, 12, 14, 12, 12, 13, 13, 14, 13, 14, 13, 13, 13, 13, 13, 13, 13, 12, 12, 13, 12], 'incorrect avgs') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(daysWithAvg), 0, max(avgs)+2]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('Average') plt.plot(daysWithAvg, avgs) pass """ Explanation: (3f) Exercise: Visualizing the Average Daily Requests per Unique Host Using the result avgDailyReqPerHost from the previous exercise, use matplotlib to plot a "Line" graph of the average daily requests per unique host by day. daysWithAvg should be a list of days and avgs should be a list of average daily requests per unique hosts for each corresponding day. End of explanation """ badRecords = (access_logs .filter(lambda log: log.response_code == 404) .cache()) print 'Found %d 404 URLs' % badRecords.count() # TEST Counting 404 (4a) Test.assertEquals(badRecords.count(), 6185, 'incorrect badRecords.count()') Test.assertTrue(badRecords.is_cached, 'incorrect badRecords.is_cached') """ Explanation: Part 4: Exploring 404 Response Codes Let's drill down and explore the error 404 response code records. 404 errors are returned when an endpoint is not found by the server (i.e., a missing page or object). (4a) Exercise: Counting 404 Response Codes Create a RDD containing only log records with a 404 response code. Make sure you cache() the RDD badRecords as we will use it in the rest of this exercise. How many 404 records are in the log? 
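Outside of Spark, the filter step is an ordinary predicate; a minimal sketch over a made-up list of response codes:

```python
# Response codes from a handful of requests (toy data).
response_codes = [200, 404, 304, 404, 200, 500, 404]

# filter(lambda log: log.response_code == 404) keeps only the 404 records.
bad_records = [code for code in response_codes if code == 404]
num_404 = len(bad_records)
```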
End of explanation """ badEndpoints = badRecords.map(lambda log: log.endpoint) badUniqueEndpoints = badEndpoints.distinct() badUniqueEndpointsPick40 = badUniqueEndpoints.take(40) print '404 URLS: %s' % badUniqueEndpointsPick40 # TEST Listing 404 records (4b) badUniqueEndpointsSet40 = set(badUniqueEndpointsPick40) Test.assertEquals(len(badUniqueEndpointsSet40), 40, 'badUniqueEndpointsPick40 not distinct') """ Explanation: (4b) Exercise: Listing 404 Response Code Records Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list up to 40 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list. End of explanation """ badEndpointsCountPairTuple = badRecords.map(lambda log: (log.endpoint, 1)) badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda a, b: a + b) badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda s: -1 * s[1]) print 'Top Twenty 404 URLs: %s' % badEndpointsTop20 # TEST Top twenty 404 URLs (4c) Test.assertEquals(badEndpointsTop20, [(u'/pub/winvn/readme.txt', 633), (u'/pub/winvn/release.txt', 494), (u'/shuttle/missions/STS-69/mission-STS-69.html', 431), (u'/images/nasa-logo.gif', 319), (u'/elv/DELTA/uncons.htm', 178), (u'/shuttle/missions/sts-68/ksc-upclose.gif', 156), (u'/history/apollo/sa-1/sa-1-patch-small.gif', 146), (u'/images/crawlerway-logo.gif', 120), (u'/://spacelink.msfc.nasa.gov', 117), (u'/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif', 100), (u'/history/apollo/a-001/a-001-patch-small.gif', 97), (u'/images/Nasa-logo.gif', 85), (u'/shuttle/resources/orbiters/atlantis.gif', 64), (u'/history/apollo/images/little-joe.jpg', 62), (u'/images/lf-logo.gif', 59), (u'/shuttle/resources/orbiters/discovery.gif', 56), (u'/shuttle/resources/orbiters/challenger.gif', 54), (u'/robots.txt', 53), (u'/elv/new01.gif>', 43), (u'/history/apollo/pad-abort-test-2/pad-abort-test-2-patch-small.gif', 38)], 'incorrect badEndpointsTop20') """ 
Explanation: (4c) Exercise: Listing the Top Twenty 404 Response Code Endpoints Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty endpoints that generate the most 404 errors. Remember, top endpoints should be in sorted order End of explanation """ errHostsCountPairTuple = badRecords.map(lambda log: (log.host, 1)) errHostsSum = errHostsCountPairTuple.reduceByKey(lambda a, b: a + b) errHostsTop25 = errHostsSum.takeOrdered(25, lambda s: -1 * s[1]) print 'Top 25 hosts that generated errors: %s' % errHostsTop25 # TEST Top twenty-five 404 response code hosts (4d) Test.assertEquals(len(errHostsTop25), 25, 'length of errHostsTop25 is not 25') Test.assertEquals(len(set(errHostsTop25) - set([(u'maz3.maz.net', 39), (u'piweba3y.prodigy.com', 39), (u'gate.barr.com', 38), (u'm38-370-9.mit.edu', 37), (u'ts8-1.westwood.ts.ucla.edu', 37), (u'nexus.mlckew.edu.au', 37), (u'204.62.245.32', 33), (u'163.206.104.34', 27), (u'spica.sci.isas.ac.jp', 27), (u'www-d4.proxy.aol.com', 26), (u'www-c4.proxy.aol.com', 25), (u'203.13.168.24', 25), (u'203.13.168.17', 25), (u'internet-gw.watson.ibm.com', 24), (u'scooter.pa-x.dec.com', 23), (u'crl5.crl.com', 23), (u'piweba5y.prodigy.com', 23), (u'onramp2-9.onr.com', 22), (u'slip145-189.ut.nl.ibm.net', 22), (u'198.40.25.102.sap2.artic.edu', 21), (u'gn2.getnet.com', 20), (u'msp1-16.nas.mr.net', 20), (u'isou24.vilspa.esa.es', 19), (u'dial055.mbnet.mb.ca', 19), (u'tigger.nashscene.com', 19)])), 0, 'incorrect errHostsTop25') """ Explanation: (4d) Exercise: Listing the Top Twenty-five 404 Response Code Hosts Instead of looking at the endpoints that generated 404 errors, let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty-five hosts that generate the most 404 errors. 
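The (host, 1) → reduceByKey → takeOrdered pipeline used in these exercises is the distributed analogue of counting locally and taking the most common entries; a sketch with hypothetical host names (not the NASA log data):

```python
from collections import Counter

hosts = ['a.com', 'b.com', 'a.com', 'c.com', 'a.com', 'b.com']
# Counter sums the implicit (host, 1) pairs; most_common sorts by count descending:
top = Counter(hosts).most_common(2)
print(top)  # [('a.com', 3), ('b.com', 2)]
```

On a single machine Counter does in one step what map, reduceByKey, and takeOrdered do across partitions.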
End of explanation """ errDateCountPairTuple = badRecords.map(lambda log: (log.date_time.day, 1)) errDateSum = errDateCountPairTuple.reduceByKey(lambda a,b : a + b) errDateSorted = errDateSum.sortByKey().cache() errByDate = errDateSorted.collect() print '404 Errors by day: %s' % errByDate # TEST 404 response codes per day (4e) Test.assertEquals(errByDate, [(1, 243), (3, 303), (4, 346), (5, 234), (6, 372), (7, 532), (8, 381), (9, 279), (10, 314), (11, 263), (12, 195), (13, 216), (14, 287), (15, 326), (16, 258), (17, 269), (18, 255), (19, 207), (20, 312), (21, 305), (22, 288)], 'incorrect errByDate') Test.assertTrue(errDateSorted.is_cached, 'incorrect errDateSorted.is_cached') """ Explanation: (4e) Exercise: Listing 404 Response Codes per Day Let's explore the 404 records temporally. Break down the 404 requests by day (cache() the RDD errDateSorted) and get the daily counts sorted by day as a list. Since the log only covers a single month, you can ignore the month in your checks. End of explanation """ daysWithErrors404 = errDateSorted.map(lambda (day, num): day).collect() errors404ByDay = errDateSorted.map(lambda (day, num): num).collect() # TEST Visualizing the 404 Response Codes by Day (4f) Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect daysWithErrors404') Test.assertEquals(errors404ByDay, [243, 303, 346, 234, 372, 532, 381, 279, 314, 263, 195, 216, 287, 326, 258, 269, 255, 207, 312, 305, 288], 'incorrect errors404ByDay') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(daysWithErrors404), 0, max(errors404ByDay)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('404 Errors') plt.plot(daysWithErrors404, errors404ByDay) pass """ Explanation: (4f) Exercise: Visualizing the 404 Response Codes by Day Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by day. 
End of explanation """ topErrDate = errDateSorted.takeOrdered(5, lambda s: s[1] * -1) print 'Top Five dates for 404 requests: %s' % topErrDate # TEST Five dates for 404 requests (4g) Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate') """ Explanation: (4g) Exercise: Top Five Days for 404 Response Codes Using the RDD errDateSorted you cached in the part (4e), what are the top five days for 404 response codes and the corresponding counts of 404 response codes? End of explanation """ hourCountPairTuple = badRecords.map(lambda log: (log.date_time.hour, 1)) hourRecordsSum = hourCountPairTuple.reduceByKey(lambda a,b: a + b) hourRecordsSorted = hourRecordsSum.sortByKey().cache() errHourList = hourRecordsSorted.collect() print 'Top hours for 404 requests: %s' % errHourList # TEST Hourly 404 response codes (4h) Test.assertEquals(errHourList, [(0, 175), (1, 171), (2, 422), (3, 272), (4, 102), (5, 95), (6, 93), (7, 122), (8, 199), (9, 185), (10, 329), (11, 263), (12, 438), (13, 397), (14, 318), (15, 347), (16, 373), (17, 330), (18, 268), (19, 269), (20, 270), (21, 241), (22, 234), (23, 272)], 'incorrect errHourList') Test.assertTrue(hourRecordsSorted.is_cached, 'incorrect hourRecordsSorted.is_cached') """ Explanation: (4h) Exercise: Hourly 404 Response Codes Using the RDD badRecords you cached in the part (4a) and by hour of the day and in increasing order, create an RDD containing how many requests had a 404 return code for each hour of the day (midnight starts at 0). Cache the resulting RDD hourRecordsSorted and print that as a list. 
End of explanation """ hoursWithErrors404 = hourRecordsSorted.map(lambda (day, num): day).collect() errors404ByHours = hourRecordsSorted.map(lambda (day, num): num).collect() # TEST Visualizing the 404 Response Codes by Hour (4i) Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'incorrect hoursWithErrors404') Test.assertEquals(errors404ByHours, [175, 171, 422, 272, 102, 95, 93, 122, 199, 185, 329, 263, 438, 397, 318, 347, 373, 330, 268, 269, 270, 241, 234, 272], 'incorrect errors404ByHours') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(hoursWithErrors404), 0, max(errors404ByHours)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Hour') plt.ylabel('404 Errors') plt.plot(hoursWithErrors404, errors404ByHours) pass """ Explanation: (4i) Exercise: Visualizing the 404 Response Codes by Hour Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by hour. End of explanation """
geoneill12/phys202-2015-work
assignments/assignment03/NumpyEx01.ipynb
mit
import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import antipackage import github.ellisonbg.misc.vizarray as va """ Explanation: Numpy Exercise 1 Imports End of explanation """ def checkerboard(size): a = np.zeros((size,size), dtype = np.float) b = 2 if size % 2 != 0: for element in np.nditer(a, op_flags=['readwrite']): if size % 2 != 0: if b % 2 == 0: element[...] = element + 1.0 b += 1 else: b += 1 return a else: c = [1,0] d = [0,1] e = [] f = size / 2 g = list(range(1, size + 1)) for item in g: if item % 2 != 0: e.append(c * f) else: e.append(d * f) h = np.array(e, dtype = np.float) return h print checkerboard(4) """ Explanation: Checkerboard Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0: Your function should work for both odd and even size. The 0,0 element should be 1.0. The dtype should be float. End of explanation """ va.set_block_size(10) va.vizarray(checkerboard(20)) assert True """ Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px. End of explanation """ va.set_block_size(5) va.vizarray(checkerboard(27)) assert True a = checkerboard(4) assert a[0,0]==1.0 assert a.sum()==8.0 assert a.dtype==np.dtype(float) assert np.all(a[0,0:5:2]==1.0) assert np.all(a[1,0:5:2]==0.0) b = checkerboard(5) assert b[0,0]==1.0 assert b.sum()==13.0 assert np.all(b.ravel()[0:26:2]==1.0) assert np.all(b.ravel()[1:25:2]==0.0) """ Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px. End of explanation """
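As an aside, the checkerboard above can also be built without loops. A vectorized sketch (an alternative, not the assignment's required approach): the parity of the index sum decides each cell, and the +1 offset makes the (0,0) element 1.0 for both odd and even sizes.

```python
import numpy as np

def checkerboard_vec(size):
    # i + j is even on one color of the board and odd on the other;
    # the +1 flips parity so the (0, 0) corner comes out as 1.0.
    i, j = np.indices((size, size))
    return ((i + j + 1) % 2).astype(float)

print(checkerboard_vec(4))
```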
mne-tools/mne-tools.github.io
0.12/_downloads/plot_artifacts_correction_filtering.ipynb
bsd-3-clause
import numpy as np import mne from mne.datasets import sample data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' proj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif' tmin, tmax = 0, 20 # use the first 20s of data # Setup for reading the raw data (save memory by cropping the raw data # before loading it) raw = mne.io.read_raw_fif(raw_fname).crop(tmin, tmax).load_data() raw.info['bads'] = ['MEG 2443', 'EEG 053'] # bads + 2 more fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2 # Pick a subset of channels (here for speed reasons) selection = mne.read_selection('Left-temporal') picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False, stim=False, exclude='bads', selection=selection) # Let's first check out all channel types raw.plot_psd(area_mode='range', tmax=10.0, picks=picks) """ Explanation: .. _tut_artifacts_filter: Filtering and Resampling Certain artifacts are restricted to certain frequencies and can therefore be fixed by filtering. An artifact that typically affects only some frequencies is due to the power line. Power-line noise is noise created by the electrical network. It is composed of sharp peaks at 50Hz (or 60Hz depending on your geographical location). Some peaks may also be present at the harmonic frequencies, i.e. the integer multiples of the power-line frequency, e.g. 100Hz, 150Hz, ... (or 120Hz, 180Hz, ...). End of explanation """ raw.notch_filter(np.arange(60, 241, 60), picks=picks) raw.plot_psd(area_mode='range', tmax=10.0, picks=picks) """ Explanation: Removing power-line noise with notch filtering Removing power-line noise can be done with a Notch filter, directly on the Raw object, specifying an array of frequencies to be cut off: End of explanation """ raw.filter(None, 50.) 

# low pass filtering below 50 Hz raw.plot_psd(area_mode='range', tmax=10.0, picks=picks) """ Explanation: Removing power-line noise with low-pass filtering If you're only interested in low frequencies, below the peaks of power-line noise you can simply low pass filter the data. End of explanation """ raw.filter(1., None) # high pass filtering above 1 Hz raw.plot_psd(area_mode='range', tmax=10.0, picks=picks) """ Explanation: High-pass filtering to remove slow drifts To remove slow drifts in the data, you can simply high pass filter it. End of explanation """ raw.filter(1., 50.) # band-pass filtering in the range 1 Hz - 50 Hz """ Explanation: To do the low-pass and high-pass filtering in one step you can do a so-called band-pass filter by running End of explanation """ raw.resample(100, npad="auto") # set sampling frequency to 100Hz raw.plot_psd(area_mode='range', tmax=10.0, picks=picks) """ Explanation: Down-sampling (for performance reasons) When performing experiments where timing is critical, a signal with a high sampling rate is desired. However, having a signal with a much higher sampling rate than necessary needlessly consumes memory and slows down computations operating on the data. To avoid that, you can down-sample your time series. Data resampling can be done with resample methods. End of explanation """
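Outside of MNE, the essence of resample (low-pass filter first, then keep every n-th sample) can be sketched in plain numpy; this is a toy moving-average anti-alias filter, not MNE's actual polyphase implementation:

```python
import numpy as np

n = 600                         # hypothetical number of samples at 600 Hz
t = np.arange(n) / 600.0
x = np.sin(2 * np.pi * 5 * t)   # a 5 Hz tone, well below the new Nyquist frequency
smooth = np.convolve(x, np.ones(6) / 6, mode='same')  # crude anti-alias low-pass
x_ds = smooth[::6]              # decimate: 600 Hz -> 100 Hz
print(len(x), len(x_ds))        # 600 100
```

Skipping the smoothing step would fold any content above the new Nyquist frequency back into the signal, which is why resample filters before decimating.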
yvesdubief/UVM-ME249-CFD
ME249-RANS-help.ipynb
gpl-2.0
import matplotlib.pyplot as plt import numpy as np font = {'size': 16} # font settings for the axis labels r = np.linspace(0.,5.,1000) r0 = 0.8 r1 = 1.0 u = 0.5*(np.power(np.abs(1-r/r0),1./7.))*0.5*((r0-r)+np.abs(r0-r))/(np.abs(r0-r)+1e-6) \ +0.01*0.5*((r-r1)+np.abs(r-r1))/(np.abs(r-r1)+1e-6) plt.plot(r,u,lw = 2) #plt.legend(loc=3, bbox_to_anchor=[0, 1], # ncol=3, shadow=True, fancybox=True) plt.xlabel('$r$', fontdict = font) plt.ylabel('$U$', fontdict = font) plt.xticks(fontsize = 16) plt.yticks(fontsize = 16) plt.show() """ Explanation: Guidelines for final exam <p class='alert alert-success'> This notebook lists the guidelines for a successful final exam. You will also find an example of a velocity profile for a jet and a small Python code to provide turbulent inlet conditions for the $k-\omega$ model. </p> Final exam objectives and presentation The following describes the slides expected during the presentation <h3> 1. Title slide </h3> Name of the project and a picture of your CAD <h3> 2. Objective slide </h3> Describe the purpose of your simulation, which could be: <ul> <li> Computation of drag or lift on an object </li> <li> Computation of pressure drop in a pipe/duct with obstacle</li> <li> Comparison of two different designs (define the metric to be compared) </li> <li> Investigation of the flow topology for an unusual system</li> <li> Assessment of OpenFOAM (the simscale flow solver) to predict known flow properties</li> <li> Investigation of the possible existence of turbulence in a flow outside of the body of knowledge</li> </ul> Describe the number of simulations ideally needed to achieve the objectives and the number of simulations performed for this exam <h3> 3. Flow parameter slide</h3> Self-explanatory: Velocity and length scales, Re, Ma (if necessary), any non-dimensional number that characterizes the flow; do you think the flow is turbulent or laminar? <h3> 4. 

Computational domain slide (may need 2 slides)</h3> Provide a sketch or screen grab of your bounding box and any region of refinement you defined. Define boundary conditions on the faces of your computational domain <h3> 5. Mesh refinement </h3> Show evidence that you performed a mesh refinement study. Two levels of refinement would be fine; a higher level of convergence would be appreciated if the CPU cost is low enough. You need to clearly define and justify the metric (variable) used for assessing the adequacy of the mesh. <h3> 6. Significant results </h3> Show what worked and did not work. Show velocity profiles and contours, same for pressure or anything that you found interesting and characteristic of your flow. <h3> 7. Future work</h3> Discuss what needs to be done to make your CFD work an industry-quality project. Basic Functions $$ u(r) = U_c\frac{(r_0-r)+\vert r_0-r\vert}{2\vert r_0-r\vert+\epsilon}+U_\infty\frac{(r-r_1)+\vert r-r_1\vert}{2\vert r-r_1\vert+\epsilon} $$ End of explanation """ geo = input('Boundary layer (b), channel (c), pipe (p), duct (d), or annulus (a)?') if geo == 'b': D = float(input('boundary layer thickness (m):')) elif geo == 'c': D = float(input('Channel height (m):')) elif geo == 'p': D = float(input('Diameter (m):')) elif geo == 'd': a = float(input('first dimension (m):')) b = float(input('second dimension (m):')) D = 4.*a*b/(2.*(a+b)) elif geo == 'a': Di = float(input('Inner diameter (m):')) Do = float(input('Outer diameter (m):')) D = Do - Di else: print('your geometry is unknown') u_cal = input('for velocity scale: mass flow (m) or bulk velocity (u)?') if u_cal == 'u': U_b = float(input('Velocity scale (m/s):')) elif u_cal == 'm': m = float(input('mass flow rate (kg/s):')) rho = float(input('fluid density (kg/m^3):')) if geo == 'p': area = np.pi*(D**2)/4. 

elif geo == 'd': area = a*b elif geo == 'a': area = np.pi/4.*(Do**2-Di**2) U_b = m/(rho*area) print('bulk velocity: %10.3e m/s' %U_b) nu = float(input('Kinematic viscosity (m^2/s):')) Re = U_b*D/nu print("Reynolds number: %10.3e" %Re) I_rec = 0.16*Re**(-1./8.) print("Recommended turbulent intensity for pipe/duct: %10.3e" %I_rec) I = float(input('Turbulent intensity:')) if geo == 'b': l = 0.04 * D else: l = 0.07 * D TKE = 3./2.*(I*U_b)**2 print("TKE = %10.3e m^2/s^2" %TKE) omega = TKE**0.5/(0.09)**0.25/l print("omega = %10.3e 1/s" %omega) """ Explanation: Inlet conditions for $k-\omega$ RANS model in internal flows Internal flows are defined as cylindrical pipes, ducts and annuli. The flow is characterized by: <ol> <li> Velocity scale $U_b$, typically the bulk or average velocity (m/s).</li> <li> The hydraulic diameter $D$ (m).</li> <li> The fluid kinematic viscosity $\nu$ (m^2/s), or dynamic viscosity $\mu$ (kg/(m.s)) and density $\rho$ (kg/m^3).</li> </ol> These parameters are used in the Reynolds number: $$ Re=\frac{\rho U_bD}{\mu}=\frac{U_bD}{\nu} $$ The following calculator provides estimates of the turbulent kinetic energy $k$ and $\omega$ based on the parameters above and the turbulent intensity $$ I = \frac{u'}{U_b}, $$ where $u'$ is the typical standard deviation of the velocity fluctuations experienced by the flow. For ducts, the following correlation may be used: $$ I=0.16Re^{-1/8} $$ but you may use a larger value if your inlet is located immediately downstream of geometry creating intense turbulence. From the knowledge of $I$, $k$ and $\omega$ can be computed as: $$ k=\frac{3}{2}(IU_b)^2 $$ $$ \omega = \frac{\sqrt{k}}{C_\mu^{1/4}{\cal l}} $$ where $C_\mu\approx 0.09$ and ${\cal l}$ is the turbulent length scale, for ducts: $$ {\cal l}=0.07D $$ End of explanation """
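The calculator above implements these formulas; as a standalone numeric check with hypothetical values ($U_b = 1$ m/s, $D = 0.1$ m, $I = 0.05$, chosen only for illustration):

```python
U_b, D, I = 1.0, 0.1, 0.05      # hypothetical bulk velocity, hydraulic diameter, intensity
l = 0.07 * D                    # turbulent length scale for a duct
k = 1.5 * (I * U_b) ** 2        # k = (3/2) (I U_b)^2
omega = k ** 0.5 / 0.09 ** 0.25 / l
print(k, omega)                 # k = 0.00375 m^2/s^2, omega is roughly 15.97 1/s
```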
Bihaqo/tf_einsum_opt
example.ipynb
mit
def func(a, b, c): res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1 res = tf.einsum('iab,kb->iak', res, c) return res a = tf.random_normal((10, 11, 12)) b = tf.random_normal((11, 13)) c = tf.random_normal((12, 14)) # res = func(a, b, c) orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c) res1 = func(a, b, c) %timeit sess.run(res1) res2 = optimized_func(a, b, c) %timeit sess.run(res2) # Check that the results of the optimized and the original function are the same. np.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5) """ Explanation: Small scale example End of explanation """ def func(a, b, c, d): res = tf.einsum('si,sj,sk,ij->s', a, b, d, c) res += tf.einsum('s,si->s', res, a) return res a = tf.random_normal((100, 101)) b = tf.random_normal((100, 102)) c = tf.random_normal((101, 102)) d = tf.random_normal((100, 30)) orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d) res1 = func(a, b, c, d) %timeit sess.run(res1) res2 = optimized_func(a, b, c, d) %timeit sess.run(res2) """ Explanation: Example with more savings, but slower to optimize End of explanation """ orders """ Explanation: Look at the recommendations: End of explanation """
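The reason reordering helps is that a multi-operand einsum can be contracted pairwise in any order without changing the result; only the computational cost changes. A small numpy sketch of that equivalence, with hypothetical shapes:

```python
import numpy as np

a = np.random.rand(4, 5, 6)
b = np.random.rand(5, 3)
c = np.random.rand(6, 2)

direct = np.einsum('ijk,ja,kb->iab', a, b, c)
# Contract with c first, then with b; same tensor, different cost:
staged = np.einsum('ijb,ja->iab', np.einsum('ijk,kb->ijb', a, c), b)
print(np.allclose(direct, staged))  # True
```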
robertoalotufo/ia898
src/isccsym.ipynb
mit
import numpy as np def isccsym2(F): if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:] if len(F.shape) == 2: F = F[np.newaxis,:,:] n,m,p = F.shape x,y,z = np.indices((n,m,p)) Xnovo = np.mod(-1*x,n) Ynovo = np.mod(-1*y,m) Znovo = np.mod(-1*z,p) aux = np.conjugate(F[Xnovo,Ynovo,Znovo]) return (abs(F-aux)<10E-4).all() def isccsym(F): import ia898.src as ia if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:] if len(F.shape) == 2: F = F[np.newaxis,:,:] n,m,p = F.shape return(abs(F-np.conjugate(ia.ptrans(F[::-1,::-1,::-1],(1,1,1))))<10E-4).all() """ Explanation: Function isccsym Description Check if the input image is symmetric and return a boolean value. Synopsis Check for conjugate symmetry b = isccsym(F) b: Boolean. F: Image. Complex image. End of explanation """ testing = (__name__ == "__main__") if testing: ! jupyter nbconvert --to python isccsym.ipynb import numpy as np import sys,os ia898path = os.path.abspath('../../') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia import matplotlib.image as mpimg """ Explanation: Examples End of explanation """ if testing: F = np.arange(5) print('Is 1d odd dimension vector symmetric?',ia.isccsym(F),'\n') F = np.arange(6) print('Is 1d even dimension vector symmetric?',ia.isccsym(F),'\n') F = np.array( [1j,1j,0,1j,1j] ) print('Is 1d even dimension vector symmetric?',ia.isccsym(F),'\n') """ Explanation: Numeric Example: 1D data End of explanation """ if testing: F = np.array( [ [0,1,1], [2,4,3], [2,3,4]] ) print('Is function F symmetric?',ia.isccsym(F),'\n') """ Explanation: Numeric Example: real symmetric matrix End of explanation """ if testing: F = np.array([ [ 0j,1j,-1j], [ 2j,4j,-3j], [-2j,3j,-4j]] ) print('Is function F symmetric?',ia.isccsym(F),'\n') F = np.array( [ [ 2j,1j,-1j], [ 2j,4j,-3j], [-2j,3j,-4j]] ) print('Is function F symmetric?',ia.isccsym(F),'\n') """ Explanation: Numeric Example: imaginary matrix End of explanation """ if testing: print('Is this function symmetric?') 

print(ia.isccsym(np.fft.fft2(np.random.rand(100,100)))) # dimension variation print(ia.isccsym(np.fft.fft2(np.random.rand(101,100)))) print(ia.isccsym(np.fft.fft2(np.random.rand(101,101)))) """ Explanation: Numeric Example: Fourier transformation of a real image is symmetric End of explanation """ if testing: img = mpimg.imread('../data/cameraman.tif') F = ia.dft(img) imgc = 1 * ia.circle(img.shape, 50, [img.shape[0]/2, img.shape[1]/2]) imgct = ia.ptrans(imgc, np.array(imgc.shape)//2) ia.adshow(ia.normalize(imgct),'circular filter') res = F * imgct ia.adshow(ia.dftview(res)) print('Is this filter symmetric?', ia.isccsym(res)) """ Explanation: Image Example: circular filter End of explanation """ if False: # testing: mquadra = ia.rectangle(img.shape, [50,50], [img.shape[0]/2, img.shape[1]/2]) ia.adshow(mquadra,'RECTANGLE') mquadra = ia.ptrans(mquadra, np.array(mquadra.shape)//2) ia.adshow(ia.normalize(mquadra),'rectangular filter') mfiltrada = F * mquadra print('Is this filter symmetric?', ia.isccsym(mfiltrada)) """ Explanation: Image Example 2: rectangular filter End of explanation """
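The same conjugate-symmetry check can be written in plain numpy, without the ia898 helpers; a sketch that builds F(-k mod N) by flipping both axes and rolling one step so that index 0 stays fixed:

```python
import numpy as np

x = np.random.rand(8, 8)             # real input
F = np.fft.fft2(x)
# For real x, F[k] == conj(F[-k mod N]); flipping reverses the indices and
# the roll by one maps index N-1 back to 0 so the reflection is modulo N:
reflected = np.roll(F[::-1, ::-1], shift=(1, 1), axis=(0, 1))
print(np.allclose(F, np.conj(reflected)))  # True
```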
cesarcontre/Simulacion2017
Modulo2/Clase14_ManejoDatosPandas.ipynb
mit
# Import pandas import pandas as pd """ Explanation: Applying Python to price analysis: handling, organizing and downloading data with pandas <img style="float: left; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/8/86/Microsoft_Excel_2013_logo.svg" width="400px" height="125px" /> <img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/0/0a/Python.svg" width="400px" height="125px" /> In this and the next two classes we will look at a case study applying Monte Carlo simulation to decision making. To get there, we will first see (in this class) how to manipulate data with pandas, both from a local Excel file and remotely from Yahoo Finance. Python Data Analysis Library: pandas is an open-source, easy-to-use library providing high-performance data structures and data analysis tools for the Python programming language. References: - http://pandas.pydata.org/ - http://www.learndatasci.com/python-finance-part-yahoo-finance-api-pandas-matplotlib/ - https://www.datacamp.com/community/tutorials/python-excel-tutorial 1. Importing data from spreadsheets (such as Excel's) 1.1. Why spreadsheets? Surely everyone has worked with Excel spreadsheets, at least for basic things. This tool helps us organize, analyze and store data in tables. The software is widely used across many fields of application all over the world. Whether we like it or not, this also applies to data science (and financial engineering). Many of you will have to work with spreadsheets in your academic and professional future, but you will not always want to work with them directly when you need a somewhat more advanced analysis of the data. That is why Python provides tools to read, write and manipulate this kind of file. 

In this class we will see how to work with Excel and Python in a basic way using the pandas library. 1.2. Basic rules before reading spreadsheets Before starting to read a spreadsheet into Python (or any other program), we should consider adjusting our file so it follows certain principles, such as: The first row of the spreadsheet is reserved for titles, while the first column is used to identify the sampling unit or index of the data (time, date, events...) Avoid names, values or fields with blank spaces. Otherwise, each word is interpreted as a separate variable, resulting in errors related to the number of elements per line. For this, use find-and-replace with underscores, periods, etcetera. Short names are preferred over long names. Avoid symbols such as ?, $,%, ^, &, *, (,),-,#, ?,,,<,>, /, |, \, [ ,] ,{, and }. Delete any comments you have made in your file to avoid extra columns. Make sure any missing value is indicated as NA. If any change was made, be sure to save it. If you are working with Microsoft Excel, you will see that there are many options for saving files besides the default extensions .xls or .xlsx. For this, go to “Save As” and select one of the extensions listed in “Save as Type”. The most common extension is .csv (comma-separated text files). Activity. Download stock prices for Apple (AAPL), Amazon (AMZN), Microsoft (MSFT) and NVIDIA (NVDA) from Yahoo Finance, with a time window from 2011-01-01 to 2016-12-31 and daily frequency. Go to https://finance.yahoo.com/. Search for each of the requested companies. Click on the 'Historical Data' tab. Change the dates in 'Time Period', click 'Apply' and, finally, click 'Download Data'. PLEASE SAVE THESE FILES IN A FOLDER CALLED precios IN THE SAME DIRECTORY WHERE YOU HAVE THIS FILE. 

After this, follow the directions given in 1.2. 1.3. Loading .csv files as pandas DataFrames Now we can start importing our files. One of the most common ways to do data analysis is with pandas. This is because pandas is built on top of NumPy and provides easy-to-use data structures and data analysis tools. End of explanation """ help(pd.read_csv) # Load the spreadsheet into a dataframe # file_apple = '/home/esteban/AnacondaProjects/Simulacion2017/Modulo2/precios/AAPL.csv' file_apple = 'precios/AAPL.csv' df_apple = pd.read_csv(file_apple) df_apple """ Explanation: To read .csv files, we will use the function pd.read_csv... End of explanation """ # Load the spreadsheet into a dataframe file_apple = 'precios/AAPL.csv' df_apple = pd.read_csv(file_apple, index_col='Date', usecols=['Date', 'Adj_Close']) df_apple """ Explanation: There are several things to note here. We would like to index by date. For our application we only care about the adjusted closing prices of the stocks (the Adj_Close column). End of explanation """ import matplotlib.pyplot as plt %matplotlib inline df_apple.plot(figsize=(8,6)); """ Explanation: Now, let's plot... End of explanation """ file_amazon = 'precios/AMZN.csv' file_microsoft = 'precios/MSFT.csv' file_nvidia = 'precios/NVDA.csv' df_amazon = pd.read_csv(file_amazon, index_col='Date', usecols=['Date', 'Adj_Close']) df_microsoft = pd.read_csv(file_microsoft, index_col='Date', usecols=['Date', 'Adj_Close']) df_nvidia = pd.read_csv(file_nvidia, index_col='Date', usecols=['Date', 'Adj_Close']) closes = pd.DataFrame(index=df_amazon.index, columns=['AAPL', 'AMZN', 'MSFT', 'NVDA']) closes.index.name = 'Date' closes['AAPL'] = df_apple closes['AMZN'] = df_amazon closes['MSFT'] = df_microsoft closes['NVDA'] = df_nvidia closes closes.plot(figsize=(8,6)); """ Explanation: Activity. Import all the .csv files just as we did with the Apple one. 

Additionally, create a single DataFrame whose column headers are the respective names (AAPL, AMZN,...) and which contains the closing price data. End of explanation """ from pandas_datareader import data """ Explanation: 2. Downloading the data directly For this we will use the pandas_datareader package. Note: Usually, Python distributions do not come with the pandas_datareader package by default, so it has to be installed separately. The following command installs the package in Anaconda: conda install -c conda-forge pandas-datareader Anaconda prompt https://pandas-datareader.readthedocs.io/en/latest/ End of explanation """ help(data.DataReader) # Define the instruments we are going to download. As before we will have Apple, Amazon, Microsoft and Nvidia. tickers = ['AAPL', 'AMZN', 'MSFT', 'NVDA'] # Define which online source we are going to use (yahoo finance) data_source = 'yahoo' # We want the data from 01/01/2011 to 31/12/2016. start_date = '2011-01-01' end_date = '2016-12-31' # We use the DataReader function. Yes, that easy... panel_data = data.DataReader(tickers, data_source, start_date, end_date) """ Explanation: We will use the function data.DataReader... End of explanation """ panel_data """ Explanation: What does this variable contain? End of explanation """ # Note that the indices are given as Major_axis closes = panel_data.ix['Adj Close'] closes """ Explanation: As before, we are only interested in the adjusted closing prices... End of explanation """ # Generate all the days between the given dates all_weekdays = pd.date_range(start=start_date, end=end_date, freq='B') # Reindex in this order closes = closes.reindex(all_weekdays) closes """ Explanation: We reorder the dates... End of explanation """ # 'Gaps' will show up in the plot closes.plot(figsize=(8,6)); # Fill the gaps with the previous day's closing price closes = closes.fillna(method='ffill') closes # Clean plot... 

closes.plot(figsize=(8,6)); """ Explanation: Dates for which there is no data are marked as NaN End of explanation """ closes.describe() """ Explanation: Once we have the data, we can operate on it. For example, a summary of descriptive statistics could be obtained with End of explanation """
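The forward-fill step used above generalizes to any gappy series; a small self-contained sketch with hypothetical prices:

```python
import numpy as np
import pandas as pd

dates = pd.date_range('2016-01-01', periods=5, freq='B')   # 5 business days
prices = pd.DataFrame({'AAPL': [100.0, np.nan, 102.0, np.nan, 103.0]}, index=dates)
filled = prices.ffill()   # carry the previous close forward into each gap
print(filled['AAPL'].tolist())  # [100.0, 100.0, 102.0, 102.0, 103.0]
```

After filling, filled.describe() gives the count, mean, standard deviation, and quartile summary per column, just as closes.describe() does above.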
google-aai/sc17
cats/step_8_to_9.ipynb
apache-2.0
# Enter your username: YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address # Libraries for this section: import os import datetime import numpy as np import pandas as pd import cv2 import matplotlib.pyplot as plt import matplotlib.image as mpimg import tensorflow as tf from tensorflow.contrib.learn import RunConfig, Experiment from tensorflow.contrib.learn.python.learn import learn_runner # Directory settings: TRAIN_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_images/') # Directory where the training dataset lives. DEBUG_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/debugging_images/') # Directory where the debugging dataset lives. VALID_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/validation_images/') # Directory where the validation dataset lives. TEST_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/test_images/') # Directory where the test dataset lives. OUTPUT_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/output_cnn_big/') # Directory where we store our logging and models. # TensorFlow setup: NUM_CLASSES = 2 # This code can be generalized beyond 2 classes (binary classification). QUEUE_CAP = 5000 # Number of images the TensorFlow queue can store during training. # For debugging, QUEUE_CAP is ignored in favor of using all images available. TRAIN_BATCH_SIZE = 500 # Number of images processed every training iteration. DEBUG_BATCH_SIZE = 100 # Number of images processed every debugging iteration. TRAIN_STEPS = 3000 # Number of batches to use for training. DEBUG_STEPS = 2 # Number of batches to use for debugging. # Example: If dataset is 5 batches ABCDE, train_steps = 2 uses AB, train_steps = 7 uses ABCDEAB. # Monitoring setup: TRAINING_LOG_PERIOD_SECS = 60 # How often we want to log training metrics (from training hook in our model_fn). CHECKPOINT_PERIOD_SECS = 60 # How often we want to save a checkpoint. 

# Hyperparameters we'll tune in the tutorial: DROPOUT = 0.6 # Regularization parameter for neural networks - must be between 0 and 1. # Additional hyperparameters: LEARNING_RATE = 0.001 # Rate at which weights update. CNN_KERNEL_SIZE = 3 # Receptive field will be square window with this many pixels per side. CNN_STRIDES = 2 # Distance between consecutive receptive fields. CNN_FILTERS = 16 # Number of filters (new receptive fields to train, i.e. new channels) in first convolutional layer. FC_HIDDEN_UNITS = 512 # Number of hidden units in the fully connected layer of the network. """ Explanation: Feline Neural Network Author(s): kozyr@google.com, bfoo@google.com Reviewer(s): Let's train a basic convolutional neural network to recognize cats. Setup Download all of your image sets to the VM. Then set aside a couple thousand training images for debugging. mkdir -p ~/data/training_images gsutil -m cp gs://$BUCKET/catimages/training_images/*.png ~/data/training_images/ mkdir -p ~/data/validation_images gsutil -m cp gs://$BUCKET/catimages/validation_images/*.png ~/data/validation_images/ mkdir -p ~/data/test_images gsutil -m cp gs://$BUCKET/catimages/test_images/*.png ~/data/test_images/ mkdir -p ~/data/debugging_images mv ~/data/training_images/000*.png ~/data/debugging_images/ mv ~/data/training_images/001*.png ~/data/debugging_images/ echo "done!" If you've already trained the model below once, SSH into your VM and run the following: rm -r ~/data/output_cnn_big so that you can start over. End of explanation """ def show_inputs(dir, filelist=None, img_rows=1, img_cols=3, figsize=(20, 10)): """Display the first few images. 
Args: dir: directory where the files are stored filelist: list of filenames to pull from, if left as default, all files will be used img_rows: number of rows of images to display img_cols: number of columns of images to display figsize: sizing for inline plots Returns: pixel_dims: pixel dimensions (height and width) of the image """ if filelist is None: filelist = os.listdir(dir) # Grab all the files in the directory filelist = np.array(filelist) plt.close('all') fig = plt.figure(figsize=figsize) print('File names:') for i in range(img_rows * img_cols): print(str(filelist[i])) a=fig.add_subplot(img_rows, img_cols,i + 1) img = mpimg.imread(os.path.join(dir, str(filelist[i]))) plt.imshow(img) plt.show() return np.shape(img) pixel_dim = show_inputs(TRAIN_DIR) print('Images have ' + str(pixel_dim[0]) + 'x' + str(pixel_dim[1]) + ' pixels.') pixels = pixel_dim[0] * pixel_dim[1] """ Explanation: Let's visualize what we're working with and get the pixel count for our images. They should be square for this to work, but luckily we padded them with black pixels where needed previously. End of explanation """ # Input function: def generate_input_fn(dir, batch_size, queue_capacity): """Return _input_fn for use with TF Experiment. Will be called in the Experiment section below (see _experiment_fn). Args: dir: directory we're taking our files from, code is written to collect all files in this dir. batch_size: number of rows ingested in each training iteration. queue_capacity: number of images the TF queue can store. Returns: _input_fn: a function that returns a batch of images and labels. """ file_pattern = os.path.join(dir, '*') # We're pulling in all files in the directory. def _input_fn(): """A function that returns a batch of images and labels. Args: None Returns: image_batch: 4-d tensor collection of images. label_batch: 1-d tensor of corresponding labels. 
""" height, width, channels = [pixel_dim[0], pixel_dim[1], 3] # [height, width, 3] because there are 3 channels per image. filenames_tensor = tf.train.match_filenames_once(file_pattern) # Collect the filenames # Queue that periodically reads in images from disk: # When ready to run iteration, TF will take batch_size number of images out of filename_queue. filename_queue = tf.train.string_input_producer( filenames_tensor, shuffle=False) # Do not shuffle order of the images ingested. # Convert filenames from queue into contents (png images pulled into memory): reader = tf.WholeFileReader() filename, contents = reader.read(filename_queue) # Decodes contents pulled in into 3-d tensor per image: image = tf.image.decode_png(contents, channels=channels) # If dimensions mismatch, pad with zeros (black pixels) or crop to make it fit: image = tf.image.resize_image_with_crop_or_pad(image, height, width) # Parse out label from filename: label = tf.string_to_number(tf.string_split([tf.string_split([filename], '_').values[-1]], '.').values[0]) # All your filenames should be in this format number_number_label.extension where label is 0 or 1. # Execute above in a batch of batch_size to create a 4-d tensor of collection of images: image_batch, label_batch = tf.train.batch( [image, label], batch_size, num_threads=1, # We'll decline the multithreading option so that everything stays in filename order. capacity=queue_capacity) # Normalization for better training: # Change scale from pixel uint8 values between 0 and 255 into normalized float32 values between 0 and 1: image_batch = tf.to_float(image_batch) / 255 # Rescale from (0,1) to (-1,1) so that the "center" of the image range is 0: image_batch = (image_batch * 2) - 1 return image_batch, label_batch return _input_fn """ Explanation: Step 5 - Get tooling for training convolutional neural networks Here is where we enable training convolutional neural networks on data inputs like ours. We'll build it using a TensorFlow estimator. 
TensorFlow (TF) is designed for scale, which means it doesn't pull all our data into memory all at once, but instead it's all about lazy execution. We'll write functions which it will run when it's efficient to do so. TF will pull in batches of our image data and run the functions we wrote. In order to make this work, we need to write code for the following: Input function: generate_input_fn() Neural network architecture: cnn() Model function: generate_model_fn() Estimator: tf.estimator.Estimator() Experiment: generate_experiment_fn() Prediction generator: cat_finder() Input function The input function tells TensorFlow what format of feature and label data to expect. We'll set ours up to pull in all images in a directory we point it at. It expects images with filenames in the following format: number_number_label.extension, so if your file naming scheme is different, please edit the input function. End of explanation """ # CNN architecture: def cnn(features, dropout, reuse, is_training): """Defines the architecture of the neural network. Will be called within generate_model_fn() below. Args: features: feature data as 4-d tensor (of batch_size) pulled in when_input_fn() is executed. dropout: regularization parameter in last layer (between 0 and 1, exclusive). reuse: a scoping safeguard. First time training: set to False, after that, set to True. is_training: if True then fits model and uses dropout, if False then doesn't consider the dropout Returns: 2-d tensor: each images [logit(1-p), logit(p)] where p=Pr(1), i.e. probability that class is 1 (cat in our case). Note: logit(p) = logodds(p) = log(p / (1-p)) """ # Next, we define a scope for reusing our variables, choosing our network architecture and naming our layers. with tf.variable_scope('cnn', reuse=reuse): layer_1 = tf.layers.conv2d( # 2-d convolutional layer; size of output image is (pixels/stride) a side with channels = filters. 
inputs=features, # previous layer (inputs) is features argument to the main function kernel_size=CNN_KERNEL_SIZE, # 3x3(x3 because we have 3 channels) receptive field (only square ones allowed) strides=CNN_STRIDES, # distance between consecutive receptive fields filters=CNN_FILTERS, # number of receptive fields to train; think of this as a CNN_FILTERS-channel image which is input to next layer) padding='SAME', # SAME uses zero padding if not all CNN_KERNEL_SIZE x CNN_KERNEL_SIZE positions are filled, VALID will ignore missing activation=tf.nn.relu) # activation function is ReLU which is f(x) = max(x, 0) # For simplicity, this neural network doubles the number of receptive fields (filters) with each layer. # By using more filters, we are able to preserve the spatial dimensions better by storing more information. # # To determine how much information is preserved by each layer, consider that with each layer, # the output width and height is decimated by the `strides` value. # When strides=2 for example, the input width W and height H is reduced by 2x, resulting in # an "image" (formally, an activation field) for each filter output with dimensions W/2 x H/2. # By doubling the number of filters compared to the input number of filters, the total output # dimension becomes W/2 x H/2 x CNN_FILTERS*2, essentially compressing the input of the layer # (W x H x CNN_FILTERS) to half as many total "pixels" (hidden units) at the output. # # On the other hand, increasing the number of filters will also increase the training time proportionally, # as there are that more weights and biases to train and convolutions to perform. # # As an exercise, you can play around with different numbers of filters, strides, and kernel_sizes. # To avoid very long training time, make sure to keep kernel sizes small (under 5), # strides at least 2 but no larger than kernel sizes (or you will skip pixels), # and bound the number of filters at each level (no more than 512). 
# # When modifying these values, it is VERY important to keep track of the size of your layer outputs, # i.e. the number of hidden units, since the final layer will need to be flattened into a 1D vector with size # equal to the total number of hidden units. For this reason, using strides that are divisible by the width # and height of the input may be the easiest way to avoid miscalculations from rounding. layer_2 = tf.layers.conv2d( inputs=layer_1, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 1), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_3 = tf.layers.conv2d( inputs=layer_2, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 2), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_4 = tf.layers.conv2d( inputs=layer_3, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 3), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_5 = tf.layers.conv2d( inputs=layer_4, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 4), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_5_flat = tf.reshape( # Flattening to 2-d tensor (1-d per image row for feedforward fully-connected layer) layer_5, shape=[-1, # Reshape final layer to 1-d tensor per image. CNN_FILTERS * (2 ** 4) * # Number of filters (depth), times... pixels / (CNN_STRIDES ** 5) / (CNN_STRIDES ** 5)]) # Number of hidden units per filter (input pixels / width decimation / height decimation) dense_layer= tf.layers.dense( # fully connected layer inputs=layer_5_flat, units=FC_HIDDEN_UNITS, # number of hidden units activation=tf.nn.relu) dropout_layer = tf.layers.dropout( # Dropout layer randomly keeps only dropout*100% of the dense layer's hidden units in training and autonormalizes during prediction. 
inputs=dense_layer, rate=dropout, training=is_training) return tf.layers.dense(inputs=dropout_layer, units=NUM_CLASSES) # 2-d tensor: [logit(1-p), logit(p)] for each image in batch. """ Explanation: Neural network architecture This is where we define the architecture of the neural network we're using, such are the number of hidden layers and units. End of explanation """ # Model function: def generate_model_fn(dropout): """Return a function that determines how TF estimator operates. The estimator has 3 modes of operation: * train (fitting and updating the model) * eval (collecting and returning validation metrics) * predict (using the model to label unlabeled images) The returned function _cnn_model_fn below determines what to do depending on the mode of operation, and returns specs telling the estimator what to execute for that mode. Args: dropout: regularization parameter in last layer (between 0 and 1, exclusive) Returns: _cnn_model_fn: a function that returns specs for use with TF estimator """ def _cnn_model_fn(features, labels, mode): """A function that determines specs for the TF estimator based on mode of operation. Args: features: actual data (which goes into scope within estimator function) as 4-d tensor (of batch_size), pulled in via tf executing _input_fn(), which is the output to generate_input_fn() and is in memory labels: 1-d tensor of 0s and 1s mode: TF object indicating whether we're in train, eval, or predict mode. Returns: estim_specs: collections of metrics and tensors that are required for training (e.g. prediction values, loss value, train_op tells model weights how to update) """ # Use the cnn() to compute logits: logits_train = cnn(features, dropout, reuse=False, is_training=True) logits_eval = cnn(features, dropout, reuse=True, is_training=False) # We'll be evaluating these later. # Transform logits into predictions: pred_classes = tf.argmax(logits_eval, axis=1) # Returns 0 or 1, whichever has larger logit. 
pred_prob = tf.nn.softmax(logits=logits_eval)[:, 1] # Applies softmax function to return 2-d probability vector. # Note: we're not outputting pred_prob in this tutorial, that line just shows you # how to get it if you want it. Softmax[i] = exp(logit[i]) / sum(exp((logit[:])) # If we're in prediction mode, early return predicted class (0 or 1): if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode, predictions=pred_classes) # If we're not in prediction mode, define loss function and optimizer. # Loss function: # This is what the algorithm minimizes to learn the weights. # tf.reduce_mean() just takes the mean over a batch, giving back a scalar. # Inside tf.reduce_mean() we'll select any valid binary loss function we want to use. loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits( logits=logits_train, labels=tf.cast(labels, dtype=tf.int32))) # Optimizer: # This is the scheme the algorithm uses to update the weights. # AdamOptimizer is adaptive moving average, feel free to replace with one you prefer. optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE) # The minimize method below doesn't minimize anything, it just takes a step. train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step()) # Performance metric: # Should be whatever we chose as we defined in Step 1. This is what you said you care about! # This output is for reporting only, it is not optimized directly. acc = tf.metrics.accuracy(labels=labels, predictions=pred_classes) # Hooks - pick what to log and show: # Hooks are designed for monitoring; every time TF writes a summary, it'll append these. 
logging_hook = tf.train.LoggingTensorHook({ 'x-entropy loss': loss, 'training accuracy': acc[0], }, every_n_secs=TRAINING_LOG_PERIOD_SECS) # Stitch everything together into the estimator specs, which we'll output here so it can # later be passed to tf.estimator.Estimator() estim_specs = tf.estimator.EstimatorSpec( mode=mode, predictions=pred_classes, loss=loss, train_op=train_op, training_hooks=[logging_hook], eval_metric_ops={ 'accuracy': acc, # This line is Step 7! } ) # TF estim_specs defines a huge dict that stores different metrics and operations for use by the TF Estimator. # This gives you the interaction between your architecture in cnn() and the weights, etc. in the current iteration which # will be used as input in the next iteration. return estim_specs return _cnn_model_fn """ Explanation: Model function The model function tells TensorFlow how to call the model we designed above and what to do when we're in training vs evaluation vs prediction mode. This is where we define the loss function, the optimizer, and the performance metric (which we picked in Step 1). End of explanation """ # TF Estimator: # WARNING: Don't run this block of code more than once without first changing OUTPUT_DIR. estimator = tf.estimator.Estimator( model_fn=generate_model_fn(DROPOUT), # Call our generate_model_fn to create model function model_dir=OUTPUT_DIR, # Where to look for data and also to paste output. config=RunConfig( save_checkpoints_secs=CHECKPOINT_PERIOD_SECS, keep_checkpoint_max=20, save_summary_steps=100, log_step_count_steps=100) ) """ Explanation: TF Estimator This is where it all comes together: TF Estimator takes in as input everything we've created thus far and when executed it will output everything that is necessary for training (fits a model), evaluation (outputs metrics), or prediction (outputs predictions). 
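As a reminder of what the model function computes: Softmax[i] = exp(logit[i]) / sum(exp(logit[:])). A quick NumPy check of how two-class logits become the soft probability and hard label that pred_prob and pred_classes produce (the logit values here are made up):

```python
import numpy as np

# Hypothetical logits for one image: [logit(1-p), logit(p)], as returned by cnn().
logits = np.array([0.2, 1.2])

softmax = np.exp(logits) / np.exp(logits).sum()
p_cat = softmax[1]                 # soft probability of class 1, like pred_prob
pred_class = int(logits.argmax())  # hard 0/1 label, like pred_classes

print(round(p_cat, 3), pred_class)  # ~0.731, 1
```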
End of explanation """ # TF Experiment: def experiment_fn(output_dir): """Create _experiment_fn which returns a TF experiment To be used with learn_runner, which we imported from tf. Args: output_dir: which is where we write our models to. Returns: a TF Experiment """ return Experiment( estimator=estimator, # What is the estimator? train_input_fn=generate_input_fn(TRAIN_DIR, TRAIN_BATCH_SIZE, QUEUE_CAP), # Generate input function designed above. eval_input_fn=generate_input_fn(DEBUG_DIR, DEBUG_BATCH_SIZE, QUEUE_CAP), train_steps=TRAIN_STEPS, # Number of batches to use for training. eval_steps=DEBUG_STEPS, # Number of batches to use for eval. min_eval_frequency=1, # Run eval once every min_eval_frequency number of checkpoints. local_eval_frequency=1 ) """ Explanation: TF Experiment A TF Experiment defines how to run your TF estimator during training and debugging only. TF Experiments are not necessary for prediction once training is complete. TERMINOLOGY WARNING: The word "experiment" here is not used the way it is used by typical scientists and statisticians. End of explanation """ # Enable TF verbose output: tf.logging.set_verbosity(tf.logging.INFO) start_time = datetime.datetime.now() print('It\'s {:%H:%M} in London'.format(start_time) + ' --- Let\'s get started!') # Let the learning commence! Run the TF Experiment here. learn_runner.run(experiment_fn, OUTPUT_DIR) # Output lines using the word "Validation" are giving our metric on the non-training dataset (from DEBUG_DIR). end_time = datetime.datetime.now() print('\nIt was {:%H:%M} in London when we started.'.format(start_time)) print('\nWe\'re finished and it\'s {:%H:%M} in London'.format(end_time)) print('\nCongratulations! Training is complete!')
# Observed labels from filenames: def get_labels(dir): """Get labels from filenames. Filenames must be in the following format: number_number_label.png Args: dir: directory containing image files Returns: labels: 1-d np.array of binary labels """ filelist = os.listdir(dir) # Use all the files in the directory labels = np.array([]) for f in filelist: split_filename = f.split('_') label = int(split_filename[-1].split('.')[0]) labels = np.append(labels, label) return labels # Cat_finder function for getting predictions: def cat_finder(dir, model_version): """Get labels from model. Args: dir: directory containing image files Returns: predictions: 1-d np array of binary labels """ num_predictions = len(os.listdir(dir)) predictions = [] # Initialize array. # Estimator.predict() returns a generator g. Call next(g) to retrieve the next value. prediction_gen = estimator.predict( input_fn=generate_input_fn(dir=dir, batch_size=TRAIN_STEPS, queue_capacity=QUEUE_CAP ), checkpoint_path=model_version ) # Use generator to ensure ordering is preserved and predictions match order of validation_labels: i = 1 for pred in range(0, num_predictions): predictions.append(next(prediction_gen)) # Append the next value of the generator to the prediction array i += 1 if i % 1000 == 0: print('{:d} predictions completed (out of {:d})...'.format(i, len(os.listdir(dir)))) print('{:d} predictions completed (out of {:d})...'.format(len(os.listdir(dir)), len(os.listdir(dir)))) return np.array(predictions) """ Explanation: Step 6 - Train a model! Let's run our lovely creation on our training data. In order to train, we need learn_runner(), which we imported from TensorFlow above. For prediction, we will only need estimator.predict(). End of explanation """
Args: truth: can be bool (False, True), int (0, 1), or float (0, 1) predictions: number between 0 and 1, inclusive threshold: we convert the predictions to 1s if they're above this value roundoff: report accuracy to how many decimal places? Returns: accuracy: number correct divided by total predictions """ truth = np.array(truth) == (1|True) predicted = np.array(predictions) >= threshold matches = sum(predicted == truth) accuracy = float(matches) / len(truth) return round(accuracy, roundoff) """ Explanation: Get training accuracy End of explanation """ files = os.listdir(TRAIN_DIR) model_version = OUTPUT_DIR + 'model.ckpt-' + str(TRAIN_STEPS) observed = get_labels(TRAIN_DIR) predicted = cat_finder(TRAIN_DIR, model_version) print('Training accuracy is ' + str(get_accuracy(observed, predicted))) """ Explanation: Get predictions and performance metrics Create functions for outputting observed labels, predicted labels, and accuracy. Filenames must be in the following format: number_number_label.extension End of explanation """ files = os.listdir(DEBUG_DIR) predicted = cat_finder(DEBUG_DIR, model_version) observed = get_labels(DEBUG_DIR) print('Debugging accuracy is ' + str(get_accuracy(observed, predicted))) df = pd.DataFrame({'files': files, 'predicted': predicted, 'observed': observed}) hit = df.files[df.observed == df.predicted] miss = df.files[df.observed != df.predicted] # Show successful classifications: show_inputs(DEBUG_DIR, hit, 3) # Show unsuccessful classifications: show_inputs(DEBUG_DIR, miss, 3) """ Explanation: Step 7 - Debugging and Tuning Debugging It's worth taking a look to see if there's something special about the images we misclassified. End of explanation """ files = os.listdir(VALID_DIR) predicted = cat_finder(VALID_DIR, model_version) observed = get_labels(VALID_DIR) print('\nValidation accuracy is ' + str(get_accuracy(observed, predicted))) """ Explanation: Step 8 - Validation Apply cat_finder() to the validation dataset. 
Since this is validation, we'll only look at the final performance metric (accuracy) and nothing else. End of explanation """ # Hypothesis test we'll use: from statsmodels.stats.proportion import proportions_ztest # Testing setup: SIGNIFICANCE_LEVEL = 0.05 TARGET_ACCURACY = 0.80 files = os.listdir(TEST_DIR) predicted = cat_finder(TEST_DIR, model_version) observed = get_labels(TEST_DIR) print('\nTest accuracy is ' + str(get_accuracy(observed, predicted, roundoff=4))) # Using standard notation for a one-sided test of one population proportion: n = len(predicted) x = round(get_accuracy(observed, predicted, roundoff=4) * n) p_value = proportions_ztest(count=x, nobs=n, value=TARGET_ACCURACY, alternative='larger')[1] if p_value < SIGNIFICANCE_LEVEL: print('Congratulations! Your model is good enough to build. It passes testing. Awesome!') else: print('Too bad. Better luck next project. To try again, you need a pristine test dataset.') """ Explanation: Step 9 - Statistical Testing Apply cat_finder() to the test dataset ONE TIME ONLY. Since this is testing, we'll only look at the final performance metric (accuracy) and the results of the statistical hypothesis test. End of explanation """
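For intuition, the one-sided test above can be reproduced by hand. With x correct predictions out of n, z = (p̂ − p0) / se, and the p-value is the upper-tail area of the standard normal; statsmodels' proportions_ztest uses the sample proportion in the standard error by default. A sketch with made-up counts (not our actual test result):

```python
import math

n, x, p0 = 1000, 850, 0.80   # hypothetical: 850 correct predictions out of 1000
p_hat = x / n

se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error from the sample proportion
z = (p_hat - p0) / se
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail area, 1 - Phi(z)

print(round(z, 2), p_value < 0.05)  # z ~ 4.43, so we reject H0: accuracy <= 0.80
```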
sns-chops/multiphonon
examples/getdos2-V_Ei120meV-noUI.ipynb
mit
import os, numpy as np import histogram.hdf as hh, histogram as H from matplotlib import pyplot as plt %matplotlib notebook # %matplotlib inline import mantid from multiphonon import getdos from multiphonon.sqe import plot as plot_sqe """ Explanation: Density of States Analysis Example This example demonstrates a routine procedure of calculating phonon density of states from an experimental NeXus data file for a powder vanadium sample measured at ARCS, a direct-geometry neutron chopper spectrometer at the Spallation Neutron Source (SNS), Oak Ridge National Lab. Summary of processing steps Gather experimental information and experimental raw data Reduce raw data to S(Q,E), the experimental dynamical structure factor, and inspect Convert S(Q,E) to phonon DOS Preparation Get python tools ready. This may take a while End of explanation """ projectdir = os.path.abspath('./V_Ei120meV-noUI') !mkdir -p {projectdir} %cd {projectdir} """ Explanation: Create a new working directory and change into it. All inputs, intermediate results and final outputs will be in this new directory. End of explanation """ dest = 'ARCS_V_annulus.nxs' url = "https://mcvine.ornl.gov/multiphonon/ARCS_V_annulus.nxs" cmd = 'wget %r -O %r' % (url, dest) print cmd """ Explanation: Get experimental data For SNS users, experimental data are available in /SNS/"instrument_name"/IPTS-#### folders at the SNS analysis cluster. Here we will download the required data file from the web. 
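(Aside: if wget is not available on your machine, the same download can be done from Python itself. A minimal sketch using only the standard library — note that this notebook targets Python 2, while the snippet below is written for Python 3:)

```python
import os
import urllib.request

url = "https://mcvine.ornl.gov/multiphonon/ARCS_V_annulus.nxs"
dest = "ARCS_V_annulus.nxs"

def fetch(url, dest):
    """Download url to dest, skipping the transfer if dest already exists."""
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest
```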
Build download command End of explanation """ %%time !{cmd} >log.download 2>err.download """ Explanation: Download: this will take a while (can be a few minutes to an hour, depending on internet speed) End of explanation """ ls """ Explanation: The following command should show the downloaded file "ARCS_V_annulus.nxs" End of explanation """ samplenxs = './ARCS_V_annulus.nxs' mtnxs = None Ei = 120 T = 300 """ Explanation: Experimental data and condition To start, we need to set the locations of the data files measured for the sample and empty can (for background correction), as well as the experimental conditions such as incident neutron energy (Ei, in meV) and sample temperature (T, in Kelvin). The example inputs explained: samplenxs: ARCS_V_annulus.nxs we just downloaded mtnxs: None. This means we will skip the empty can background correction for this example. Ei: 120. This is set by Fermi chopper settings during the experiment. An approximate number is fine. The actual Ei will be calculated from the experimental NeXus file. T: 300. This is set by sample environment. For room temperature measurement, use 300 (K). End of explanation """ Qaxis = Qmin, Qmax, dQ = 0, 17, 0.1 Eaxis = Emin, Emax, dE = -115., 115., 1. workdir = 'work' iqe_h5 = 'iqe.h5' from multiphonon import getdos %%time for m in getdos.reduce2iqe(samplenxs, Emin=Emin, Emax=Emax, dE=dE, Qmin=Qmin, Qmax=Qmax, dQ=dQ, iqe_h5=iqe_h5, workdir=workdir): print m ls -tl {workdir}/{iqe_h5} """ Explanation: Obtain S(Q,E) Now we are ready to reduce the experimental data to obtain the dynamical structure factor, S(Q,E). S(Q,E) spectra for both the sample and the empty can are the starting point for getdos processing. The Q and E axes need to be defined: E axis Emin: -115. Usually -Ei Emax: 115. Usually slightly smaller than Ei dE: 1. Usually Ei/100 Q axis Qmin: 0. Usually 0 Qmax: 17. Usually 2 X E2Q(Ei) dQ: 0.1. 
Usually Emax/100 End of explanation """ iqe = hh.load(os.path.join(workdir, iqe_h5)) plt.figure(figsize=(6,4)) plot_sqe(iqe) # plt.xlim(0, 11) plt.clim(0, 3e-3) """ Explanation: Plot sample IQE End of explanation """ iqe2 = iqe.copy() I = iqe2.I; I[I!=I] = 0 # remove NaNs IE = iqe2.sum('Q') # sum over Q plt.figure(figsize=(6,4)) plt.plot(IE.energy, IE.I) """ Explanation: This is a plot of the vanadium S(Q, E) histogram. * The colored region is within the dynamical range of the measurement * Vanadium is incoherent, therefore the intensity is mostly momentum-independent * Make sure the energy and momentum transfer axes are reasonable so that the S(Q,E) spectrum looks reasonable * You can improve the Q,E axis parameters if you like, by re-executing the relevant cells above Now integrate over the Q (momentum transfer) axis to obtain the energy spectrum I(E) End of explanation """ for msg in getdos.getDOS( samplenxs, mt_fraction=0.9, const_bg_fraction=0.004, Emin=Emin, Emax=Emax, dE=dE, Qmin=Qmin, Qmax=Qmax, dQ=dQ, T=300., Ecutoff=40., elastic_E_cutoff=(-15, 7.), M=50.94, C_ms = 0.26, Ei = 120, initdos=None, workdir = workdir, ): print msg """ Explanation: At the center of this plot there is an enormous peak that is due to elastic scattering, which should be excluded from the phonon DOS calculation. Zoom in to see the rough range of the elastic peak and take notes. We need them in the analysis below. Run GetDOS Phonon DOS will be obtained from the SQE histogram by an iterative procedure where multiphonon and multiple scattering corrections are applied to the measured SQE spectrum, assuming the incoherent approximation, and the corrected spectrum is then converted to DOS. Input parameters - Emin, Emax of elastic peak: -15, 7. Make an estimate from the I(E) spectrum - Average atomic mass: 50.94. Atomic mass of vanadium - mt_fraction: 0.9. Depends on the geometrical property of the sample and the empty can. Usually between 0.9 and 1. - Ecutoff: Max phonon energy. 40meV. 
This is also used as the "stitching point" if multiple Ei datasets are combined. - C_ms: 0.26: Ratio of multiple scattering to multiphonon scattering. Depends on sample shape. - const_bg_fraction: 0.004: Background noise level. - initdos: leave it as None for standard DOS analysis. If working with multiple Ei datasets, this should be the DOS histogram obtained from larger Ei. End of explanation """ ls {workdir}/ """ Explanation: Check output Results are saved in "work" directory End of explanation """ dos = hh.load(os.path.join(workdir, 'final-dos.h5')) plt.figure(figsize=(5,3)) plt.plot(dos.E, dos.I) plt.xlabel('Energy (meV)') plt.xlim(0, 50) plt.tight_layout() """ Explanation: Plot the final result for DOS End of explanation """ from multiphonon.backward import plotutils as pu plt.figure(figsize=(5,3)) pu.plot_dos_iteration(workdir) plt.xlim(0, 50) plt.figure(figsize=(6,4)) pu.plot_residual(workdir) plt.figure(figsize=(8, 4)) pu.plot_intermediate_result_se(os.path.join(workdir, 'round-4')) """ Explanation: More plotting utils are available End of explanation """
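As a closing aside, the rule of thumb used earlier for the Q axis (Qmax ≈ 2 × E2Q(Ei)) follows from neutron kinematics: E[meV] ≈ 2.072 k² with k in 1/Å (the constant being ħ²/2mₙ), and the largest elastic momentum transfer is back-scattering, Q = 2kᵢ. A quick numerical check for Ei = 120 meV:

```python
import math

E_TO_K2 = 2.072  # meV*A^2: E = 2.072 * k**2 for neutrons (hbar^2 / 2 m_n)

def k_from_E(E_meV):
    """Neutron wavevector in 1/A at kinetic energy E in meV."""
    return math.sqrt(E_meV / E_TO_K2)

Ei = 120.0                        # incident energy used in this notebook
q_max_elastic = 2 * k_from_E(Ei)  # back-scattering, elastic limit
print(round(q_max_elastic, 1))    # ~15.2 1/A, consistent with Qmax = 17 above
```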
tritemio/multispot_paper
out_notebooks/usALEX-5samples-PR-leakage-dir-ex-all-ph-out-27d.ipynb
mit
ph_sel_name = "None" data_id = "27d" # data_id = "7d" """ Explanation: Executed: Mon Mar 27 11:39:07 2017 Duration: 7 seconds. usALEX-5samples - Template This notebook is executed through 8-spots paper analysis. For a direct execution, uncomment the cell below. End of explanation """ from fretbursts import * init_notebook() from IPython.display import display """ Explanation: Load software and filenames definitions End of explanation """ data_dir = './data/singlespot/' import os data_dir = os.path.abspath(data_dir) + '/' assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir """ Explanation: Data folder: End of explanation """ from glob import glob file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f) ## Selection for POLIMI 2012-11-26 datatset labels = ['17d', '27d', '7d', '12d', '22d'] files_dict = {lab: fname for lab, fname in zip(labels, file_list)} files_dict data_id """ Explanation: List of data files: End of explanation """ d = loader.photon_hdf5(filename=files_dict[data_id]) """ Explanation: Data load Initial loading of the data: End of explanation """ leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv' leakage = np.loadtxt(leakage_coeff_fname) print('Leakage coefficient:', leakage) """ Explanation: Load the leakage coefficient from disk: End of explanation """ dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv' dir_ex_aa = np.loadtxt(dir_ex_coeff_fname) print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa) """ Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk: End of explanation """ d.leakage = leakage d.dir_ex = dir_ex_aa """ Explanation: Update d with the correction coefficients: End of explanation """ d.ph_times_t, d.det_t """ Explanation: Laser alternation selection At this point we have only the timestamps and the detector numbers: End of explanation """ d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), 
A_ON=(900, 2580), offset=0) """ Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitations: End of explanation """ plot_alternation_hist(d) """ Explanation: We should check if everything is OK with an alternation histogram: End of explanation """ loader.alex_apply_period(d) """ Explanation: If the plot looks good we can apply the parameters with: End of explanation """ d """ Explanation: Measurement info All the measurement data is in the d variable. We can print it: End of explanation """ d.time_max """ Explanation: Or check the measurement duration: End of explanation """ d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7) dplot(d, timetrace_bg) d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa """ Explanation: Compute background Compute the background using an automatic threshold: End of explanation """ d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all')) print(d.ph_sel) dplot(d, hist_fret); # if data_id in ['7d', '27d']: # ds = d.select_bursts(select_bursts.size, th1=20) # else: # ds = d.select_bursts(select_bursts.size, th1=30) ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30) n_bursts_all = ds.num_bursts[0] def select_and_plot_ES(fret_sel, do_sel): ds_fret = ds.select_bursts(select_bursts.ES, **fret_sel) ds_do = ds.select_bursts(select_bursts.ES, **do_sel) bpl.plot_ES_selection(ax, **fret_sel) bpl.plot_ES_selection(ax, **do_sel) return ds_fret, ds_do ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1) if data_id == '7d': fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False) do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '12d': fret_sel = dict(E1=0.30, E2=1.2, S1=0.131, S2=0.9, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '17d': fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False) 
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '22d': fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '27d': fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) n_bursts_do = ds_do.num_bursts[0] n_bursts_fret = ds_fret.num_bursts[0] n_bursts_do, n_bursts_fret d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret) print('D-only fraction:', d_only_frac) dplot(ds_fret, hist2d_alex, scatter_alpha=0.1); dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False); """ Explanation: Burst search and selection End of explanation """ bandwidth = 0.03 E_range_do = (-0.1, 0.15) E_ax = np.r_[-0.2:0.401:0.0002] E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size', x_range=E_range_do, x_ax=E_ax, save_fitter=True) mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth]) plt.xlim(-0.3, 0.5) print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100)) """ Explanation: Donor Leakage fit End of explanation """ nt_th1 = 50 dplot(ds_fret, hist_size, which='all', add_naa=False) xlim(-0, 250) plt.axvline(nt_th1) Th_nt = np.arange(35, 120) nt_th = np.zeros(Th_nt.size) for i, th in enumerate(Th_nt): ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th) nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th plt.figure() plot(Th_nt, nt_th) plt.axvline(nt_th1) nt_mean = nt_th[np.where(Th_nt == nt_th1)][0] nt_mean """ Explanation: Burst sizes End of explanation """ E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size') E_fitter = ds_fret.E_fitter E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = 
plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(E_fitter, ax=ax[0]) mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100)) display(E_fitter.params*100) """ Explanation: Fret fit Max position of the Kernel Density Estimation (KDE): End of explanation """ ds_fret.fit_E_m(weights='size') """ Explanation: Weighted mean of $E$ of each burst: End of explanation """ ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None) """ Explanation: Gaussian fit (no weights): End of explanation """ ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size') E_kde_w = E_fitter.kde_max_pos[0] E_gauss_w = E_fitter.params.loc[0, 'center'] E_gauss_w_sig = E_fitter.params.loc[0, 'sigma'] E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0])) E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err """ Explanation: Gaussian fit (using burst size as weights): End of explanation """ S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True) S_fitter = ds_fret.S_fitter S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(S_fitter, ax=ax[0]) mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100)) display(S_fitter.params*100) S_kde = S_fitter.kde_max_pos[0] S_gauss = S_fitter.params.loc[0, 'center'] S_gauss_sig = S_fitter.params.loc[0, 'sigma'] S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0])) S_kde, S_gauss, S_gauss_sig, S_gauss_err """ Explanation: Stoichiometry fit Max position of the Kernel Density Estimation (KDE): End of explanation """ S = ds_fret.S[0] S_ml_fit = (S.mean(), S.std()) S_ml_fit """ Explanation: The Maximum likelihood fit for a Gaussian population is the mean: 
End of explanation """ weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.) S_mean = np.dot(weights, S)/weights.sum() S_std_dev = np.sqrt( np.dot(weights, (S - S_mean)**2)/weights.sum()) S_wmean_fit = [S_mean, S_std_dev] S_wmean_fit """ Explanation: Computing the weighted mean and weighted standard deviation we get: End of explanation """ sample = data_id """ Explanation: Save data to file End of explanation """ variables = ('sample n_bursts_all n_bursts_do n_bursts_fret ' 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err ' 'E_pr_do_kde nt_mean\n') """ Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. End of explanation """ variables_csv = variables.replace(' ', ',') fmt_float = '{%s:.6f}' fmt_int = '{%s:d}' fmt_str = '{%s}' fmt_dict = {**{'sample': fmt_str}, **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}} var_dict = {name: eval(name) for name in variables.split()} var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n' data_str = var_fmt.format(**var_dict) print(variables_csv) print(data_str) # NOTE: The file name should be the notebook name but with .csv extension with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f: f.seek(0, 2) if f.tell() == 0: f.write(variables_csv) f.write(data_str) """ Explanation: This is just a trick to format the different variables: End of explanation """
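The weighted mean and weighted standard deviation computed above can be reproduced with plain NumPy. A minimal sketch with made-up values and weights (not the notebook's burst data):

```python
import numpy as np

def weighted_mean_std(values, weights):
    # Weighted mean and (biased) weighted standard deviation,
    # mirroring the S_wmean_fit computation above.
    mean = np.dot(weights, values) / weights.sum()
    var = np.dot(weights, (values - mean) ** 2) / weights.sum()
    return mean, np.sqrt(var)

S_toy = np.array([0.4, 0.5, 0.6, 0.5])
w_toy = np.array([1.0, 2.0, 2.0, 1.0])
mean, std = weighted_mean_std(S_toy, w_toy)
print(mean, std)
```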
isaacmg/fb_scraper
data/Examining data using Spark.ipynb
apache-2.0
# Do an initial test of Spark to make sure it works.
import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext('local[*]')
# do something to prove it works
rdd = sc.parallelize(range(1000))
rdd.takeSample(False, 5)
sc.stop()
"""
Explanation: Simple data analysis with Apache Spark
In this example we are going to use Apache Spark to perform distributed analysis on a CSV generated by our Python scraping class. The goal of this program is mainly to clean data for further analysis down the line. At the end, however, we will also show some MLlib algorithms as well.
End of explanation
"""
from pyspark.sql import SQLContext
from pyspark.sql.functions import regexp_replace, trim, col, lower
from pyspark.ml.feature import StopWordsRemover
from pyspark.ml.feature import Tokenizer

# Comment and uncomment the following line as necessary
# sc = pyspark.SparkContext('local[*]')
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('160558090672531_facebook_statuses.csv')
df.show()

# Drop the null rows
df = df.na.drop(subset=["status_message"])
# Remove punctuation from status messages
df2 = df.select(regexp_replace("status_message", "\p{Punct}", "").alias("status_message"), "status_id")
df2.show()
messages = df2

# Tokenize and remove stop words
tokenizer = Tokenizer(inputCol="status_message", outputCol="filtered")
filterw = tokenizer.transform(messages)
filterw.show()
remover = StopWordsRemover(inputCol="filtered", outputCol="filtered1")
filtered_final = remover.transform(filterw)
filtered_final.show()
messages = filtered_final.select("filtered1")
messages2 = filtered_final.select("status_id", "status_message").rdd
# We will use this to create our inverted index later in the file
doc_index = messages2.collectAsMap()
# Convert to RDD
message_rdd = messages.rdd
"""
Explanation: Now that we have checked that PySpark is up and running, let's start to do some processing using a CSV file generated previously using fb_post.py (for instructions on using fb_scraper please see the readme file). First we are going to read the CSV file into a data frame and filter out stop words and punctuation.
End of explanation
"""
# WordCount with RDD. Normally this would be used in conjunction with NLP to extract trending topics.
from operator import add
import matplotlib.pyplot as plt
import numpy as np

def plot_histogram(indexes, values):
    indexes2 = np.arange(len(indexes))
    values = np.asarray(values)
    plt.bar(indexes2, values)
    bar_width = 2
    plt.xticks(indexes2 + bar_width * .2, indexes)
    plt.show()

# Prints the WordCount of words that appear more than 25 times and appends them to indexes and values.
def print_word_count(output, indexes, values):
    for (word, count) in output:
        if count > 25 and word != "":
            # append to indexes
            indexes.append(word)
            values.append(count)
            print("%s: %i" % (word, count))

statuses = message_rdd.flatMap(lambda x: x)
words = statuses.flatMap(lambda x: x)
# se = statuses.flatMap(String)
counts = words.flatMap(lambda x: x.split(' ')) \
              .map(lambda x: (x, 1)) \
              .reduceByKey(add)
output1 = counts.sortByKey().collectAsMap()
wordsMap = counts.collectAsMap()
# output1.saveAsTextFile("keys.txt")

# Get just the most popular words
output = counts.takeOrdered(9, key=lambda x: -x[1])
indexes = []
values = []
print("The top words are:")
print_word_count(output, indexes, values)
plot_histogram(indexes, values)
"""
Explanation: Now we are going to do a simple word count with the RDD that we just created and generate a graph of the top words.
End of explanation
"""
def make_document_index(raw_status):
    status = raw_status.flatMap(lambda x: x)
    doc_index = status.zipWithIndex()
    doc_index = doc_index.map(lambda y: (y[1], y[0])).collectAsMap()
    return doc_index

def inverted_index(documents):
    # Invert to (status_id, terms[]) and then flatMapValues in order to get (status_id, term)
    documents = documents.map(lambda x: (x[1], x[0])).flatMapValues(lambda x: x)
    # Undo to (term, status_id) and reduce to get (term, status_ids[])
    documents = documents.map(lambda x: (x[1], [x[0]])).reduceByKey(lambda a, b: a + b)
    print("inverted index successfully created")
    return documents

messages = filtered_final.select("filtered1", "status_id").rdd
status_inverted_index = inverted_index(messages)
inverted_map = status_inverted_index.collectAsMap()

# Load file
def get_word_count(some_dict, aRDD):
    return aRDD.filter(lambda x: x in list(some_dict.keys())).map(lambda key: (key, some_dict[key]))

names = sc.textFile("rivers.txt")
riverNames = names.flatMap(lambda x: x.split("\n"))
r = get_word_count(wordsMap, riverNames)
# r = riverNames.filter(lambda x: x in list(wordsMap.keys())).map(lambda key: (key, wordsMap[key]))
r.collect()
"""
Explanation: Create a document index and an inverted index
Now that we have done some simple processing, let's get into the meat of this notebook. We are going to create an inverted index using MapReduce.
End of explanation
"""
# Let's try to make things a little more interactive
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import display

value_search = widgets.Text()
display(value_search)
value_search2 = widgets.Text()
display(value_search2)

def find_key(sender):
    text = value_search.value
    if text in output1:
        print(text + " occurs " + str(output1[text]) + " times in the file")

# Todo: clean up this sloppy method
def mini_search(sender):
    text = value_search2.value.strip(" ")
    if text in inverted_map:
        for i in inverted_map[text]:
            if i in doc_index:
                print(doc_index[i])

value_search.on_submit(find_key)
value_search2.on_submit(mini_search)
"""
Explanation: We will now be doing some interactive analysis with IPython widgets. (Note this will not necessarily render in GitHub, but if you download the notebook you should be able to see it.)
End of explanation
"""
# Wordcloud example
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

def makeWordCloud(words):
    stopwords = set(STOPWORDS)
    stopwords.add("thank")
    wordcloud = WordCloud(stopwords=stopwords, background_color='black').generate(" ".join(words))
    plt.figure(figsize=(15, 10))
    plt.imshow(wordcloud)
    plt.axis('off')
    plt.show()

wordlist = output1.keys()
makeWordCloud(wordlist)

# Filter to extract known names of rivers and get useful URLs
df2 = df.na.drop(subset=["status_link"])
df3 = df2.select("status_link")
df2.show()
"""
Explanation: Now we are going to generate a WordCloud.
End of explanation
"""
# Now let's try to use these urls to get their respective images or videos
from pyspark.sql import SparkSession
from lxml import html
import requests
from IPython.display import Image

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
df2.createOrReplaceTempView("posts")
sqlDF = spark.sql("SELECT * FROM posts WHERE status_link LIKE '%photo%'")
sqlDF.show()
statusRDD = sqlDF.select('status_link').rdd
urls = statusRDD.flatMap(lambda x: x)
print(urls.take(3))

# Now let's save the images, possibly for a Computer Vision application
urls2 = urls.collect()
# for url in urls2:
#     page = requests.get(url)
#     tree = html.fromstring(page.content)
#     imageUrl = tree.xpath('//img[@class="spotlight"]/@src')
"""
Explanation: Okay, so now we want to get the images and/or videos. Uses for the images might include image search for our database (with the river extracted using NLP) or computer vision applications.
End of explanation
"""
# This section remains buggy and prone to jams; run at your own risk!
from pyspark.mllib.clustering import LDA, LDAModel
from pyspark.mllib.feature import IDF
from pyspark.ml.feature import CountVectorizer
from pyspark.mllib.linalg import Vectors

# vectorize tags array for each user
vectorizer = CountVectorizer(inputCol="filtered1", outputCol="features").fit(filtered_final)
countVectors = vectorizer.transform(filtered_final).select("status_id", "features")
countVectors.show()

# find TF-IDF coefficients for each tag
print("begin")
frequencyVectors = countVectors.rdd.map(lambda vector: vector[1])
frequencyDenseVectors = frequencyVectors.map(lambda vector: Vectors.dense(vector))
idf = IDF().fit(frequencyDenseVectors)
print('fitting complete')
tfidf = idf.transform(frequencyDenseVectors)
print("tf idf complete")

# prepare corpus for LDA
corpus = tfidf.map(lambda x: [1, x]).cache()
print("entering lda phase")

# train LDA
ldaModel = LDA.train(corpus, k=15, maxIterations=100, optimizer="online",
                     docConcentration=2.0, topicConcentration=3.0)
print("lda model complete")
# ldaModel.save(sc, "ldaModel")

from operator import itemgetter
topics = ldaModel.topicsMatrix()
# for topic in range(3):
#     print("Topic " + str(topic) + ":")
#     for word in range(0, ldaModel.vocabSize()):
#         print(" " + str(topics[word][topic]))

# Now let's get the words back
topicIndices = ldaModel.describeTopics(maxTermsPerTopic=5)
vocablist = vectorizer.vocabulary
topicsRDD = sc.parallelize(topicIndices)
termsRDD = topicsRDD.map(lambda topic: (zip(itemgetter(*topic[0])(vocablist), topic[1])))
indexedTermsRDD = termsRDD.zipWithIndex()
termsRDD = indexedTermsRDD.flatMap(lambda term: [(t[0], t[1], term[1]) for t in term[0]])
termDF = termsRDD.toDF(['term', 'probability', 'topicId'])
termDF.orderBy('topicId').show()

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
t2 = termDF
t2.createOrReplaceTempView("topics")
terms = spark.sql("SELECT * FROM topics WHERE probability > .0001")
terms.show()

import pyLDAvis
import pyLDAvis.graphlab
import graphlab as gl
pyLDAvis.prepare(ldaModel.topicsMatrix(), vocablist)
"""
Explanation: Latent Dirichlet allocation
We are now going to construct a topic model using an algorithm called LDA. You can find out more information on LDA <a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation">here.</a> Our real goal is to get a set of topics for each document. These topics can then be used to classify documents into sub-categories or themes. This will then be used in our search engine later on or for other analytics purposes. Unfortunately, at this point the PySpark LDA model is not all that useful for data analysis purposes, but we will keep it around in the code anyway. (Note: if anyone knows an easy way of visualizing a PySpark LDA model with pyLDAvis, let me know!)
End of explanation
"""
from pyspark.ml.feature import Word2Vec
from pyspark.sql import SparkSession

# Learn a mapping from words to Vectors.
word2Vec = Word2Vec(vectorSize=3, minCount=0, inputCol="filtered1", outputCol="result")
model = word2Vec.fit(filtered_final)
result = model.transform(filtered_final)
for feature in result.select("result").take(3):
    print(feature)

topN = 13
synonymsDF = model.findSynonyms('boat', topN).toPandas()
synonymsDF
synonymsDF = model.findSynonyms('whitewater', topN).toPandas()
synonymsDF
"""
Explanation: Word2Vec Example
Here is a basic example of using PySpark's Word2Vec library.
End of explanation
"""
# More Natural Language Processing with NLTK: lemmatization, chunking, and tagging.
import nltk
from nltk.stem.wordnet import WordNetLemmatizer

messages = filtered_final.select("filtered")
message_rdd = messages.rdd
message_rdd = message_rdd.flatMap(lambda x: x)
print(message_rdd.first())
pos_statuses = message_rdd.map(nltk.pos_tag)
print(pos_statuses.take(5))
# Todo: get lemmatization working
"""
Explanation: Further Processing
Another useful thing to do before building a Word2Vec model is stemming and (depending on the model) tagging. We can accomplish this with NLTK.
End of explanation
"""
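The inverted index built with Spark above can be sanity-checked against a plain-Python version. A small sketch on toy statuses (not the Facebook data):

```python
def build_inverted_index(docs):
    # docs: dict mapping status_id -> list of terms.
    # Returns: dict mapping term -> sorted list of status_ids containing it.
    index = {}
    for status_id, terms in docs.items():
        for term in terms:
            index.setdefault(term, set()).add(status_id)
    return {term: sorted(ids) for term, ids in index.items()}

toy_docs = {"s1": ["river", "flood"], "s2": ["river", "trip"]}
toy_index = build_inverted_index(toy_docs)
print(toy_index["river"])  # both statuses mention "river"
```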
CDNoyes/EDL-Py
Ipopt.ipynb
gpl-3.0
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn-whitegrid')

from Utils import ipopt
from EntryGuidance import Mesh
import EntryGuidance.Convex_PS as Convex
# reload(Convex)
OCP = Convex.OCP

solver = ipopt.Solver()
mesh = Mesh.Mesh(t0=0, tf=10, orders=[4]*10)
N = len(mesh.times)
x0 = [3, 0]
A0 = np.array([[0, 1], [-1, 0]])
B0 = np.array([0, 1])
guess = np.array([np.linspace(x0[0], 0, N), np.zeros((N,))]).T
guess_u = np.zeros(N,)
X0 = solver.create_vars(guess)
U0 = solver.create_vars(guess_u)
Ai = [A0]*N
Bi = [B0]*N
X = mesh.chunk(X0)
U = mesh.chunk(U0)
A = mesh.chunk(Ai)
B = mesh.chunk(Bi)

# End point constraints
solver.Equation(X0[0, 0] == x0[0])
solver.Equation(X0[0, 1] == x0[1])
solver.Equation(X0[-1, 0] == 0)
solver.Equation(X0[-1, 1] == 0)

L = []
for a, x, b, u, d, w in zip(A, X, B, U, mesh.diffs, mesh.weights):  # Iterate over mesh segments
    solver.StateSpace(a, x, b, u, d)  # linear dynamic constraints
    # Running cost computation
    lagrange = u**2
    la_var = solver.model.Var(0.)
    solver.Equation(la_var == w.dot(lagrange))
    # The use of these intermediate variables allows the obj to be written as a small sum.
    # This avoids the 15k character limit.
    L.append(la_var)
    # Path constraints:
    [solver.Equation(-2.6 - 1.3*x1 + x2 <= 0) for (x1, x2) in x]
    [solver.Equation(x2 >= -2.) for (x1, x2) in x]
    [solver.Equation(ui**2 <= 1) for ui in u]  # control limited to |u| <= 1

solver.Obj(sum(L))
# solver.model.options.SOLVER = 0  # for optimizer comparison
solver.solve(True)
x_sol = solver.get_values(X0)
u_sol = solver.get_values(U0)
print(dir(solver.model.options))

plt.figure()
plt.plot(mesh.times, u_sol)
plt.figure()
plt.plot(mesh.times, x_sol)
plt.figure()
plt.title('Phase Portrait')
plt.plot(x_sol.T[0], x_sol.T[1], label='Trajectory')
x1 = np.linspace(-2, 0)
x2 = 1.3*x1 + 2.6
plt.plot(x1, x2, 'k--', label='Constraints')
x1 = np.linspace(-2, 3)
plt.plot(x1, -2.*np.ones_like(x1), 'k--')
plt.legend()
"""
Explanation: Uses an interface into IPOPT to solve sub-problems in iterative convex optimization.
End of explanation
"""
solver = ipopt.Solver()
mesh = Mesh.Mesh(t0=0, tf=1, orders=[4]*5)
N = len(mesh.times)
x0 = [1]
A0 = np.array([0.5])
B0 = np.array([1])
guess = np.linspace(x0[0], 0, N)
guess_u = np.zeros(N,)
X0 = solver.create_vars(guess)
U0 = solver.create_vars(guess_u)
Ai = [A0]*N
Bi = [B0]*N
X = mesh.chunk(X0)
U = mesh.chunk(U0)
A = mesh.chunk(Ai)
B = mesh.chunk(Bi)

# End point constraints
solver.Equation(X0[0] == x0[0])
# solver.Equation(X0[-1]==0)

L = []
for a, x, b, u, d, w in zip(A, X, B, U, mesh.diffs, mesh.weights):  # Iterate over mesh segments
    solver.StateSpace(a, x, b, u, d)  # linear dynamic constraints
    # Running cost computation
    lagrange = u**2 + 2*x**2
    la_var = solver.model.Var(0.)
    solver.Equation(la_var == w.dot(lagrange))
    # The use of these intermediate variables allows the obj to be written as a small sum.
    # This avoids the 15k character limit.
    L.append(la_var)

solver.Obj(sum(L))
# solver.model.options.SOLVER = 0  # for optimizer comparison
solver.solve(True)
x_sol = solver.get_values(X0)
u_sol = solver.get_values(U0)
print(dir(solver.model.options))
"""
Explanation: Scalar problem with known optimal solution
End of explanation
"""
t = np.linspace(0, 1)
x_opt = (2*np.exp(3*t) + np.exp(3)) / (np.exp(1.5*t) * (2 + np.exp(3)))
u_opt = (2*np.exp(3*t) - 2*np.exp(3)) / (np.exp(1.5*t) * (2 + np.exp(3)))
plt.figure()
plt.plot(mesh.times, u_sol)
plt.plot(t, u_opt, 'k--')
plt.figure()
plt.plot(mesh.times, x_sol)
plt.plot(t, x_opt, 'k--')
"""
Explanation: Here we confirm the results against the analytically known solution:
End of explanation
"""
from scipy.interpolate import interp1d

class TestClass(OCP):
    """ A very basic van der Pol oscillator for testing the OCP solver """

    def __init__(self, mu, x0, xf, tf):
        self.mu = mu
        self.x0 = x0
        self.xf = xf
        self.tf = tf

    def dyn(self, x):
        # returns f, g evaluated at x (vectorized)
        return np.array([x[1], -x[0] + self.mu*(1 - x[0]**2)*x[1]]), \
               np.vstack((np.zeros_like(x[0]), np.ones_like(x[0]))).squeeze()

    def dynamics(self, x, t, u):
        # integrable function
        f, g = self.dyn(x)
        return f + g*u

    def jac(self, x, *args):
        x1, x2 = x
        shape = [x.shape[0]]
        shape.extend(x.shape)
        A = np.zeros(shape)
        B = np.vstack((np.zeros_like(x1), np.ones_like(x1))).squeeze()
        A[0, 1, :] = np.ones_like(x[0])
        A[1, 0, :] = -np.ones_like(x[0]) - 2*self.mu*x1*x2
        A[1, 1, :] = self.mu*(1 - x[0]**2)
        return np.moveaxis(A, -1, 0), np.moveaxis(B, -1, 0)

    def lagrange(self, t, x, u, xr, ur, *args):
        return (np.array(u) + ur)**2
        # return np.array([self.solver.model.sqrt(ui**2) for ui in (u+ur)])

    def mayer(self, *args, **kwargs):
        return 0

    def constraints(self, t, x, u, x_ref, u_ref):
        """ Implements all constraints, including:
            boundary conditions
            control constraints
            trust regions
        """
        for xi, x0i in zip(x[0], x_ref[0]):
            self.solver.Equation(xi == 0)
        for xi, xfi in zip(x[-1], x_ref[-1]):
            self.solver.Equation(xfi + xi == 0)
        trust_region = 4
        umax = 3
        for ti, (xi, xr) in enumerate(zip(x, x_ref)):
            # self.solver.Equation(sum((xi - xr)**2) < trust_region**2)
            self.solver.Equation((u[ti] + u_ref[ti])**2 <= umax**2)
        return

    def plot(self, T, U, X, J, ti, tcvx):
        for i, xux in enumerate(zip(T, U, X)):
            t, u, xc = xux
            # plt.figure(1)
            # plt.plot(x[0], x[1], label=str(i))
            # plt.title('State Iterations (Integration)')
            plt.figure(5)
            plt.plot(xc[0], xc[1], label=str(i))
            plt.title('State Iterations (Discretization)')
            plt.figure(2)
            plt.plot(t, u, label=str(i))
            plt.title('Control Iterations')
        plt.figure(3)
        xcvx = interp1d(T[-1], X[-1].T, kind='linear', axis=0, assume_sorted=True)(ti).T
        plt.plot(X[-1][0], X[-1][1], '*-', label='Chebyshev Nodes')
        plt.plot(xcvx[0], xcvx[1], 'ko', label='Mesh Points')
        # plt.plot(X[-1][0], X[-1][1], label='Integration')
        plt.title('Optimal Trajectory')
        plt.legend()
        plt.figure(4)
        plt.plot(T[-1], U[-1])
        plt.title('Optimal control')
        plt.figure(7)
        plt.semilogy(J, 'o-')
        plt.ylabel('Objective Function')
        plt.xlabel('Iteration')
        self.mesh.plot(show=False)
        for fig in [2, 3, 5]:
            plt.figure(fig)
            plt.legend()
        plt.show()

vdp = TestClass(mu=0.1, x0=[2, 2], xf=[0, 0], tf=8)
guess = {}
t = np.linspace(0, 8, 20)
u = np.zeros_like(t)
x = vdp.integrate(vdp.x0, 0, t)
# x = np.array([np.linspace(2,0,20)]*2).T
guess['state'] = x
guess['control'] = u
guess['time'] = t
guess['mesh'] = [4]*10
sol = vdp.solve(guess, max_iter=7, linesearch=False, plot=True, solver='ipopt', refine=True, verbose=False)
"""
Explanation: Next step: Rewrite (or extend) the OCP to use an interior-point solver in place of the specific SOCP-structured solver.
End of explanation
"""
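As a sanity check on the scalar problem above (dynamics x' = 0.5*x + u with x(0) = 1, minimizing the integral of u**2 + 2*x**2), the quoted analytic solution can be verified numerically with finite differences. A sketch:

```python
import numpy as np

t = np.linspace(0, 1, 2001)
C = 2 + np.exp(3)
x_opt = (2 * np.exp(3 * t) + np.exp(3)) / (np.exp(1.5 * t) * C)
u_opt = (2 * np.exp(3 * t) - 2 * np.exp(3)) / (np.exp(1.5 * t) * C)

# Check the initial condition x(0) = 1 and the free-endpoint condition u(1) = 0
print(x_opt[0], u_opt[-1])

# The dynamics x' = 0.5*x + u should hold along the trajectory
# (interior points only, where np.gradient is second-order accurate)
xdot = np.gradient(x_opt, t)
residual = np.max(np.abs((xdot - 0.5 * x_opt - u_opt)[1:-1]))
print(residual)
```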
lionfish0/Classification_talk
ipython/Classification.ipynb
mit
from matplotlib import pyplot as plt #plotting library (lets us draw graphs) %matplotlib inline from sklearn import datasets #the datasets from sklearn digits = datasets.load_digits() #load the digits into the variable 'digits' """ Explanation: Classification The Digit Dataset For these classification examples we will be using scikit-learn, a toolkit for python that contains lots of methods for solving machine learning problems. It also contains some datasets we can try out. For this exercise we'll use the 'digit' dataset. This is a set of 1797 pictures of hand-drawn digits (0,1,2,3,4,5,6,7,8,9). The challenge is whether we can get the computer to learn what the digits look like, using a training set of images, and then test the computer on a test-set of images. Run the code below to import the digits dataset. End of explanation """ digits.data.shape """ Explanation: To get an idea of the data we are going to be classifying we'll ask what shape the 'data' matrix is: End of explanation """ digits.data[35,:] """ Explanation: This tells us that it has 1797 rows (which are the samples) and 64 columns (which are the 8x8 pixels in the data, and make up the 64 dimensions of the data set). We can have a look at just one sample. Here I'm using python's matrix 'slicing' notation. It means I want row 35 and all the columns from that row. End of explanation """ #code to reshape the 64 numbers into an 8x8 matrix and then draw it plt.matshow(digits.data[35,:].reshape(8,8),cmap='gray') """ Explanation: Each of these numbers is one of the pixels in the image. It's unclear what digit this image is of still. We can draw the numbers as pixels in an image to see what image this represents: End of explanation """ #Exercise 1: Your code here! """ Explanation: It looks like a five! 
Exercise 1: Copy the code above into the box below, and modify it to find out what digit image 72 has inside End of explanation """ digits.target[35] """ Explanation: The problem is a supervised learning problem, which means we need to provide labels for our data points. The labels are also in the 'digits' object. They can be accessed using 'target': End of explanation """ #Exercise 2: Your code here! """ Explanation: As suspected image 35 is of the digit '5'. Exercise 2: What is image 72 supposed to be? Copy and alter the code above to find out. End of explanation """ #Your code here """ Explanation: Exercise 3: How many digits are of each type in the dataset? You can plot a histogram with the following code: plt.hist(list_of_numbers,bins=range(11)) You'll need to replace "list_of_numbers" with the array you want to draw the histogram of. The bins parameter tells the function where to draw the boundaries of the graph. We want them at each integer. End of explanation """ training_data = digits.data[0:-10,:] #this means all but the last 10 rows should be put in training_data training_target = digits.target[0:-10] #this puts all but the last 10 elements of the labels (targets) into training_target #similarly this takes the last digit and puts that in test_data and test_target test_data = digits.data[-10:,:] test_target = digits.target[-10:] """ Explanation: Training First we need to pick some training data: End of explanation """ from sklearn import neighbors #import the library that we need nn = neighbors.KNeighborsClassifier(n_neighbors=1) #this is our model (with just one nearest neighbour) nn.fit(training_data,training_target); #fit our model to the training data """ Explanation: The training step is quite simple. Here we fit the model to the data. End of explanation """ nn.predict(test_data) """ Explanation: We can then predict the results using the predict method End of explanation """ test_target """ Explanation: How many of these were correct? 
End of explanation """ plt.matshow(test_data[3].reshape(8,8),cmap='gray') """ Explanation: Remarkably the classifier has mostly got them correct. Below is the image that it misclassified End of explanation """ #Exercise 4: Answer here """ Explanation: Exercise 4: What was the image supposed to be, and what did the classifier think it was? End of explanation """ import numpy as np #classification libraries from sklearn import neighbors from sklearn import svm from sklearn import naive_bayes from sklearn import tree from sklearn import ensemble from sklearn.cross_validation import KFold #prepare k-fold cross validation kf = KFold(len(digits.target), n_folds=5) KFold(n=4, n_folds=2, shuffle=False, random_state=None) #variables to count up how many we got right tally_correct = 0 tally_total = 0 for train_index, test_index in kf: #here we split the dataset up into training and test sets, these change each iteration training_data = digits.data[train_index,:] training_target = digits.target[train_index] test_data = digits.data[test_index,:] test_target = digits.target[test_index] #TODO: Uncomment one of these classifiers to see how it does #csf = tree.DecisionTreeClassifier() #csf = ensemble.RandomForestClassifier(n_estimators=50, min_samples_split=1, max_depth=None, max_features=16) #csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=8) csf = neighbors.KNeighborsClassifier(n_neighbors=1) #csf= svm.LinearSVC(C=0.05) #Linear Support Vector Machine classifier #csf = naive_bayes.GaussianNB() csf.fit(training_data,training_target) predictions = csf.predict(test_data) number_correct = np.sum(predictions==test_target) total_number = len(predictions) print("%d of %d correct" % (number_correct,total_number)) tally_correct += number_correct tally_total += total_number print " " print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total) """ Explanation: Cross-validation In the 
lecture I briefly mentioned leave-one-out cross-validation. The same sort of idea is called k-fold cross-validation. In this we split the dataset into k groups, and train on all but one, then test on the remaining one. Then repeat while leaving out a different group. sklearn provides the 'KFold' object to let us organise our cross-validation. In the code below we repeatedly train and test, and report the accuracy in each fold. Exercise 5: In the code below try different classifiers and see which one does best. Try modifying the parameters to see what effect they have (e.g. the number of neighbours). Put the cursor inside the parameter brackets (e.g. on the "n_neighbors=1") and press SHIFT-TAB a couple of times and a box will appear describing what the different parameters do. End of explanation """ bc = datasets.load_breast_cancer() bc.data[4,:] #data from row number four. """ Explanation: The nearest neighbour classifier did particularly well on the digits dataset. Breast Cancer Dataset This is the wisconsin Breast Cancer dataset. It contains measurements of different cells, some of which are cancerous and some of which are not. It's been organised in the same way as before, with bc.data containing a matrix, each row is a cell, each column a feature. 
End of explanation """ #print bc['DESCR'] #uncomment and run to print a description of the dataset """ Explanation: You can find out more by running this code: End of explanation """ bc = datasets.load_breast_cancer() import numpy as np #classification libraries from sklearn import neighbors from sklearn import svm from sklearn import naive_bayes from sklearn import tree from sklearn import ensemble from sklearn.cross_validation import KFold #prepare k-fold cross validation kf = KFold(len(bc.target), n_folds=5) KFold(n=4, n_folds=2, shuffle=False, random_state=None) #variables to count up how many we got right tally_correct = 0 tally_total = 0 for train_index, test_index in kf: #here we split the dataset up into training and test sets, these change each iteration training_data = bc.data[train_index,:] training_target = bc.target[train_index] test_data = bc.data[test_index,:] test_target = bc.target[test_index] #TODO: Uncomment one of these classifiers to see how it does #csf = tree.DecisionTreeClassifier() #csf = ensemble.RandomForestClassifier(n_estimators=10, min_samples_split=1, max_depth=None, max_features=5) #csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=2) csf = neighbors.KNeighborsClassifier(n_neighbors=1) #csf= svm.LinearSVC(C=1) #csf = naive_bayes.GaussianNB() csf.fit(training_data,training_target) predictions = csf.predict(test_data) number_correct = np.sum(predictions==test_target) total_number = len(predictions) print("%d of %d correct" % (number_correct,total_number)) tally_correct += number_correct tally_total += total_number print " " print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total) """ Explanation: Exercise 6: Quickly try out different classifiers for the breast cancer dataset. Do the same ones do as well? If not, why not? 
End of explanation """ zscore=np.array([-1.59,-0.06,-2.11,0.57,1.35,0.03,0.11,-0.37,2.66,-1.24,-0.03,0.03,-0.53,3.06,1.97,1.01,0.51,-1.36,-1.44,1.45,2.55,0.4,1.03,1.72,1.,0.67,1.19,0.59,0.86,-2.16,0.87,-2.27,0.04,1.14,-0.78,1.76,-1.05,-0.7,1.58,0.11,-0.34,-2.89,0.37,0.77,0.61,-0.68,0.,-1.33]) muac=np.array([84.5,86.6,87.2,88.5,91.3,92.4,92.4,92.8,93.3,94.4,95.2,97.4,101.4,101.5,106.1,109.5,110.8,110.9,113.3,113.6,113.6,114.2,114.8,116.,116.8,117.9,119.1,119.8,122.,122.7,123.7,124.5,124.8,125.7,126.3,129.5,130.3,131.,132.5,132.5,136.5,138.,140.,140.4,143.6,146.5,146.7,146.9]) ok=np.array([False,False,False,False,False,False,False,False,True,False,False,False,False,True,True,True,True,False,False,True,True,False,True,True,True,True,True,True,True,False,True,False,True,True,False,True,True,True,True,True,True,False,True,True,True,True,True,True]) #data for later exercise... edema=np.array([True,True,True,True,True,True,True,True,False,True,False,True,True,False,False,False,False,True,True,False,False,True,False,False,False,False,False,False,False,True,False,True,False,False,True,False,True,True,False,False,True,True,False,False,False,True,False,False]) """ Explanation: The Nutrition (simulated) Dataset and Munging Data We often don't have data in quite the tidy format the above examples were in. As a reminder we want to guess whether a child will need treatment or not, based on their MUAC and ZSCORE End of explanation """ #Your code here! """ Explanation: Exercise 7: Plot the zscores against the MUAC values Hint: The command you want to use is: plt.scatter(A,B,marker='x',c='r') where A and B are the two lists you want to plot (so you'll need to replace them). The marker parameter is optional and lets you choose the symbol in your plot. The 'c' parameter lets you choose the colour. End of explanation """ #Your code here """ Explanation: Exercise 8: Can we plot the two classes in different colours? Hint #1: First just try plotting the 'ok' children. 
You can get a smaller array that contains just the ok children by selecting only the items from the lists where ok is True. This can be done with: zscore[ok] Hint #2: To select the not-ok children you can use the negation operator "~": zscore[~ok] Hint #3: To plot both ok and not-ok children, just call the scatter function twice (one after the other)
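These hints rely on numpy's boolean indexing; a tiny self-contained illustration with made-up numbers (not the real data):

```python
import numpy as np

values = np.array([84.5, 91.3, 113.6, 146.9])  # illustrative measurements
mask = np.array([False, False, True, True])    # e.g. whether each child is ok

print(values[mask])    # keeps the entries where mask is True: [113.6 146.9]
print(values[~mask])   # "~" flips the mask:                   [84.5 91.3]
```

Calling `plt.scatter` once with `zscore[ok], muac[ok]` and once with `zscore[~ok], muac[~ok]`, using different `c=` colours, then gives the two-colour plot.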
End of explanation """ data.shape import numpy as np #classification libraries from sklearn import neighbors from sklearn import svm from sklearn import naive_bayes from sklearn import tree from sklearn import ensemble from sklearn.cross_validation import KFold #prepare k-fold cross validation kf = KFold(len(target), n_folds=5) KFold(n=4, n_folds=2, shuffle=False, random_state=None) #variables to count up how many we got right tally_correct = 0 tally_total = 0 for train_index, test_index in kf: #here we split the dataset up into training and test sets, these change each iteration training_data = data[train_index,:] training_target = target[train_index] test_data = data[test_index,:] test_target = target[test_index] #TODO: Uncomment one of these classifiers to see how it does #csf = tree.DecisionTreeClassifier() #csf = ensemble.RandomForestClassifier(n_estimators=10, min_samples_split=1, max_depth=None, max_features=5) #csf = ensemble.ExtraTreesClassifier(n_estimators=100, min_samples_split=1, max_depth=None, max_features=2) #csf = neighbors.KNeighborsClassifier(n_neighbors=1) csf= svm.LinearSVC(C=1) #csf = naive_bayes.GaussianNB() csf.fit(training_data,training_target) predictions = csf.predict(test_data) number_correct = np.sum(predictions==test_target) total_number = len(predictions) print("%d of %d correct" % (number_correct,total_number)) tally_correct += number_correct tally_total += total_number print " " print "Total: %d of %d correct (%0.2f%%)" % (tally_correct, tally_total, 100.0*tally_correct/tally_total) """ Explanation: We can ask for the shape of the data matrix, so we can confirm we've got it in the correct shape: We have 48 children, and each one has two measurements. End of explanation """ edema """ Explanation: We have also been given data about whether the child has edema (fluid build-up). Can we make use of this additional data to improve our predictions? 
The variable is called 'edema': End of explanation """ from sklearn.metrics import confusion_matrix confusion_matrix(ok,edema) """ Explanation: To get a quick idea of if it's useful we can as for the confusion matrix, this counts the number of times both are true, one is true and one is false, vis-versa and when they are both false. End of explanation """ #Modify code above """ Explanation: The top row is the number of children that don't need help who have edema or not (only one child in this category has edema). The bottom row is the number of children who need help who have edema or not. Almost all of them had edema. Clearly this variable will be very useful in our prediction. Exercise 9: Modify the code above to include this extra variable. There is a hint in a comment to help! End of explanation """
gabicfa/RedesSociais
encontro02/5-kruskal.ipynb
gpl-3.0
import sys sys.path.append('..') import socnet as sn """ Explanation: Encontro 02, Parte 5: Algoritmo de Kruskal Este guia foi escrito para ajudar você a atingir os seguintes objetivos: implementar o algoritmo de Kruskal; praticar o uso da biblioteca da disciplina. Primeiramente, vamos importar a biblioteca: End of explanation """ sn.graph_width = 320 sn.graph_height = 180 """ Explanation: A seguir, vamos configurar as propriedades visuais: End of explanation """ g = sn.load_graph('5-kruskal.gml', has_pos=True) for e in g.edges_iter(): g.edge[e[0]][e[1]]['label'] = g.edge[e[0]][e[1]]['c'] sn.show_graph(g, elab=True) """ Explanation: Por fim, vamos carregar e visualizar um grafo: End of explanation """ class Forest(object): def __init__(self, g): self.g = g self.f = set() for n in g.nodes(): self._make_set(n) def _make_set(self, x): g.node[x]['p'] = x g.node[x]['rank'] = 0 def _union(self, x, y): self._link(self._find_set(x), self._find_set(y)) def _link(self, x, y): if g.node[x]['rank'] > g.node[y]['rank']: g.node[y]['p'] = x else: g.node[x]['p'] = y if g.node[x]['rank'] == g.node[y]['rank']: g.node[y]['rank'] = g.node[y]['rank'] + 1 def _find_set(self, x): if x != g.node[x]['p']: g.node[x]['p'] = self._find_set(g.node[x]['p']) return g.node[x]['p'] def adding_does_not_form_circuit(self, n, m): return self._find_set(n) != self._find_set(m) def add(self, n, m): self.f.add((n, m)) self._union(n, m) """ Explanation: Árvores geradoras mínimas Dizemos que: * um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ é um circuito se $\langle n_0, n_1, \ldots, n_{k-2} \rangle$ é um caminho e $n_0 = n_{k-1}$; * um conjunto de arestas $F$ é uma floresta se não existem circuitos no grafo $(N, F)$; * um grafo é conexo se para quaisquer nós $s$ e $t$ existe um caminho de $s$ a $t$; * uma floresta $T$ é uma árvore geradora se o grafo $(N, T)$ é conexo. O custo de uma árvore geradora $T$ é $\sum_{{n, m} \in T} c(n, m)$. 
A spanning tree is minimal (a minimum spanning tree) if no other spanning tree has a lower cost. Note that multiple minimum spanning trees may exist. Kruskal's algorithm We can efficiently obtain a minimum spanning tree using Kruskal's algorithm. The idea of the algorithm is simple: we initialise a forest $F$ as the empty set and examine every edge in non-decreasing order of cost. For each edge, we add it to $F$ if doing so does not create a circuit in the graph $(N, F)$. Let us define a class that represents the forest. It is not necessary to understand all of its details, only that the attribute f is the set of edges and that the last two methods are self-explanatory.
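A compact, self-contained sketch of the algorithm just described, independent of the `socnet` helpers used in this notebook (the example graph and costs are made up):

```python
def kruskal(n_nodes, edges):
    """edges: iterable of (cost, u, v) tuples; returns (mst_edges, total_cost)."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:               # walk up to the representative,
            parent[x] = parent[parent[x]]   # compressing the path as we go
            x = parent[x]
        return x

    mst, total = [], 0
    for cost, u, v in sorted(edges):        # non-decreasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding (u, v) forms no circuit
            parent[ru] = rv
            mst.append((u, v))
            total += cost
    return mst, total

mst, total = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)])
print(mst, total)   # -> [(0, 1), (1, 2), (2, 3)] 6
```

Unlike the `Forest` class below, which implements union by rank, this sketch uses path compression alone; for small graphs the difference does not matter.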
gth158a/learning
Keras - Multi-input and multi-output models.ipynb
apache-2.0
from keras.layers import Input, Embedding, LSTM, Dense, concatenate from keras.models import Model # Headline input: meant to receive sequences of 100 integers, between 1 and 10000. # Note that we can name any layer by passing it a "name" argument. main_input = Input(shape=(100,), dtype='int32', name='main_input') # This embedding layer will encode the input sequence # into a sequence of dense 512-dimensional vectors. x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input) # A LSTM will transform the vector sequence into a single vector, # containing information about the entire sequence lstm_out = LSTM(32)(x) """ Explanation: source: https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models Need to find data for this model Model uses keras.layers.concatenate to join to parts of the model there are two inputs and two outputs Let's consider the following model. We seek to predict how many retweets and likes a news headline will receive on Twitter. + The main input to the model will be the headline itself, as a sequence of words, but to spice things up, + our model will also have an auxiliary input, receiving extra data such as the time of day when the headline was posted, etc. The model will also be supervised via two loss functions. Using the main loss function earlier in a model is a good regularization mechanism for deep models. <img src="https://s3.amazonaws.com/keras.io/img/multi-input-multi-output-graph.png"> The main input will receive the headline, as a sequence of integers (each integer encodes a word). The integers will be between 1 and 10,000 (a vocabulary of 10,000 words) and the sequences will be 100 words long. 
End of explanation """ auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out) auxiliary_input = Input(shape=(5,), name='aux_input') x = concatenate([lstm_out, auxiliary_input]) # We stack a deep densely-connected network on top x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) # And finally we add the main logistic regression layer main_output = Dense(1, activation='sigmoid', name='main_output')(x) """ Explanation: Here we insert the auxiliary loss, allowing the LSTM and Embedding layer to be trained smoothly even though the main loss will be much higher in the model. End of explanation """ model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output]) model.summary() """ Explanation: This defines a model with two inputs and two outputs: End of explanation """ model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1., 0.2]) """ Explanation: We compile the model and assign a weight of 0.2 to the auxiliary loss. To specify different loss_weights or loss for each different output, you can use a list or a dictionary. Here we pass a single loss as the loss argument, so the same loss will be used on all outputs. End of explanation """ model.fit([headline_data, additional_data], [labels, labels], epochs=50, batch_size=32) """ Explanation: We can train the model by passing it lists of input arrays and target arrays: End of explanation """ model.compile(optimizer='rmsprop', loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'}, loss_weights={'main_output': 1., 'aux_output': 0.2}) # And trained it via: model.fit({'main_input': headline_data, 'aux_input': additional_data}, {'main_output': labels, 'aux_output': labels}, epochs=50, batch_size=32) """ Explanation: We could also do: End of explanation """
maxlit/powerindex
README.ipynb
mit
%matplotlib inline import powerindex as px game=px.Game(quota=51,weights=[51,49]) """ Explanation: powerindex A python library to compute power indices Installation: pip install powerindex What is all about The aim of the package is to compute different power indices of the so-called weighted voting systems (games). This package was employed to perform calculations at powdist.com Players have weights and can form coalitions. A coalition that achieves the required threshold wins. To start with a simple example, consider a system with two parties A and B having 51 and 49 seats respectively with a simple majority rule (i.e. the threshold is 51 seats). How much power do they have? It may appear that according to the number of the seats they have 51% and 49% respectively. However, party A can impose any decision without cooperating with party B. It leads to a conclusion that any reasonable rule would assign to party A 100% of the power (since it wins without cooperation) and to the party B 0% of the power and not 51% to 49%. The most popular approaches to measure power are Banzhaf and Shapley-Shubik power indices. How to use it Let's implement an example from the introduction: End of explanation """ game.calc_banzhaf() print(game.banzhaf) game.calc_shapley_shubik() print(game.shapley_shubik) """ Explanation: Now calculate Banzhaf and Shapley-Shubik power indices: End of explanation """ game=px.Game(51,weights=[50,50]) game.calc() print(game.banzhaf) print(game.shapley_shubik) """ Explanation: Function calc() computes all available indices. Thus, in this simple example both indices give 100% to 0% distribution. Now let's changes the seats distribution to the parity and see what happens: End of explanation """ game=px.Game(6,[4, 3, 2, 1]) game.calc_banzhaf() print(game.banzhaf) """ Explanation: As the result, the power distribution is also at parity. 
Now, consider some non-trivial, but still simple, examples from Wikipedia:
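As a cross-check on the library, the normalised Banzhaf index can be computed from first principles by brute-force enumeration of coalitions (fine for small games; the function name is my own):

```python
from itertools import combinations

def banzhaf(quota, weights):
    """Normalised Banzhaf index: each player's share of all swing votes."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):                       # all subsets of the other players
            for coalition in combinations(others, r):
                s = sum(weights[j] for j in coalition)
                if s < quota <= s + weights[i]:  # i turns a losing coalition into a winning one
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

print(banzhaf(51, [51, 49]))      # -> [1.0, 0.0]
print(banzhaf(6, [4, 3, 2, 1]))   # ≈ [0.417, 0.25, 0.25, 0.083]
```

This enumeration is O(2^n), so it is only a sanity check for small committees, not a replacement for the library.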
mbatchkarov/ExpLosion
notebooks/reduced_coverage_experiments.ipynb
bsd-3-clause
%cd ~/NetBeansProjects/ExpLosion/ from copy import deepcopy from notebooks.common_imports import * from gui.output_utils import * from gui.user_code import pretty_names from pprint import pprint sns.timeseries.algo.bootstrap = my_bootstrap sns.categorical.bootstrap = my_bootstrap def plot_matching(exp_with_constraints, labels=None, rotation=0): matching = [] for e in exp_with_constraints: settings = settings_of(e.id) settings['expansions__entries_of_id'] = None matching.append(Experiment.objects.get(**settings)) ids1 = list(exp_with_constraints.values_list('id', flat=True)) ids2 = [x.id for x in matching] print(ids1, '--->', ids2) if not labels: labels = ['%s-%s'%(a.id, b.id) for a,b in zip(exp_with_constraints, matching)] with sns.color_palette("cubehelix", 2): diff_plot_bar([ids1, ids2], ['Limited', 'Unlimited'], labels, rotation=rotation, hue_order=['Unlimited', 'Limited'], legend_title='Coverage') """ Explanation: Reduced coverage experiments Where the set of entries contrained in one thesaurus is constrained to match another, smaller thesaurus. This lets us look at performance differences due to higher coverage of space of all words/phrases (due to different filtering or algorithm specifics), versus better vector quality There are currently (19-5-15) two sets of such experiments, both for NPs only: - w2v vectors reduced to the coverage of count-windows vectors - count-windows reduced to coverage of Baroni vector End of explanation """ experiments = Experiment.objects.filter(expansions__entries_of__isnull=False, expansions__entries_of__composer='Baroni') names = [n.expansions.vectors.composer for n in experiments] plot_matching(experiments, labels=names) plt.axhline(random_vect_baseline(), c='k'); plt.savefig('plot-reduced-coverage1.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1) """ Explanation: count windows vectors (add, mult, ...) 
reduced to Baroni's coverage. We know these vectors are better and have higher coverage, so the question is whether reducing their coverage also reduces their accuracy.
dataframe_from_exp_ids(constrained_exp_ids, names) # df2['kind'] = 'Limited' df = pd.concat([df1, df2], ignore_index=True).convert_objects(convert_numeric=True).fillna(100) # df['Tokens'] = df.percent * 1500000000 / 100 df.M = df.M.astype(int).astype(str) + '%' with sns.color_palette("cubehelix", 4): ax = sns.tsplot(df, time='N', unit='folds', condition='M', value='Accuracy') plt.xlim(0, df.N.max()) # plt.axvline(15 * 1500000000/ 100, color='red', linestyle='dotted'); # plt.axvline(1 * 1500000000/ 100, color='green', linestyle='dotted'); plt.savefig('plot-learning-curve-with-reduction.pdf', format='pdf', dpi=300) """ Explanation: Learning curve with reduced coverage An extended version of the above cell End of explanation """
dsacademybr/PythonFundamentos
Cap03/Notebooks/DSA-Python-Cap03-03-While.ipynb
gpl-3.0
# Python language version
Merinorus/adaisawesome
Homework/01 - Pandas and Data Wrangling/Data Wrangling with Pandas.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_context('notebook') """ Explanation: Table of Contents <p><div class="lev1"><a href="#Data-Wrangling-with-Pandas"><span class="toc-item-num">1&nbsp;&nbsp;</span>Data Wrangling with Pandas</a></div><div class="lev2"><a href="#Date/Time-data-handling"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Date/Time data handling</a></div><div class="lev2"><a href="#Merging-and-joining-DataFrame-objects"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Merging and joining DataFrame objects</a></div><div class="lev2"><a href="#Concatenation"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Concatenation</a></div><div class="lev2"><a href="#Exercise-1"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Exercise 1</a></div><div class="lev2"><a href="#Reshaping-DataFrame-objects"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Reshaping DataFrame objects</a></div><div class="lev2"><a href="#Pivoting"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Pivoting</a></div><div class="lev2"><a href="#Data-transformation"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Data transformation</a></div><div class="lev3"><a href="#Dealing-with-duplicates"><span class="toc-item-num">1.7.1&nbsp;&nbsp;</span>Dealing with duplicates</a></div><div class="lev3"><a href="#Value-replacement"><span class="toc-item-num">1.7.2&nbsp;&nbsp;</span>Value replacement</a></div><div class="lev3"><a href="#Inidcator-variables"><span class="toc-item-num">1.7.3&nbsp;&nbsp;</span>Inidcator variables</a></div><div class="lev2"><a href="#Categorical-Data"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Categorical Data</a></div><div class="lev3"><a href="#Discretization"><span class="toc-item-num">1.8.1&nbsp;&nbsp;</span>Discretization</a></div><div class="lev3"><a href="#Permutation-and-sampling"><span class="toc-item-num">1.8.2&nbsp;&nbsp;</span>Permutation and sampling</a></div><div 
class="lev2"><a href="#Data-aggregation-and-GroupBy-operations"><span class="toc-item-num">1.9&nbsp;&nbsp;</span>Data aggregation and GroupBy operations</a></div><div class="lev3"><a href="#Apply"><span class="toc-item-num">1.9.1&nbsp;&nbsp;</span>Apply</a></div><div class="lev2"><a href="#Exercise-2"><span class="toc-item-num">1.10&nbsp;&nbsp;</span>Exercise 2</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.11&nbsp;&nbsp;</span>References</a></div> # Data Wrangling with Pandas Now that we have been exposed to the basic functionality of Pandas, let's explore some more advanced features that will be useful when addressing more complex data management tasks. As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
End of explanation """ my_age = now - datetime(1970, 1, 1) my_age print(type(my_age)) my_age.days/365 """ Explanation: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times: End of explanation """ segments = pd.read_csv("Data/AIS/transit_segments.csv") segments.head() """ Explanation: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed. The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information. For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc. 
End of explanation """ segments.seg_length.hist(bins=500) """ Explanation: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram: End of explanation """ segments.seg_length.apply(np.log).hist(bins=500) """ Explanation: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful: End of explanation """ segments.st_time.dtype """ Explanation: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime. End of explanation """ datetime.strptime(segments.st_time.ix[0], '%m/%d/%y %H:%M') """ Explanation: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information. End of explanation """ from dateutil.parser import parse parse(segments.st_time.ix[0]) """ Explanation: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically. End of explanation """ segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M')) """ Explanation: We can convert all the dates in a particular column by using the apply method. End of explanation """ pd.to_datetime(segments.st_time[:10]) """ Explanation: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects. End of explanation """ pd.to_datetime([None]) """ Explanation: Pandas also has a custom NA value for missing datetime objects, NaT. End of explanation """ segments = pd.read_csv("Data/AIS/transit_segments.csv", parse_dates=['st_time', 'end_time']) segments.dtypes """ Explanation: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument. 
The read_* functions now have an optional parse_dates argument that tries to convert any columns passed to it into datetime format upon import:
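Before loading the real file, here is a tiny self-contained sketch of both approaches, using made-up CSV contents (the column name mirrors the real data):

```python
from io import StringIO
import pandas as pd

# Hypothetical CSV contents in the same date format as the AIS file
csv = StringIO("st_time,seg_length\n2/28/09 18:00,5.1\n3/1/09 06:30,13.2")

df = pd.read_csv(csv, parse_dates=['st_time'])   # parse on import
print(df.st_time.dtype)                          # datetime64[ns]

# Equivalent after-the-fact conversion, with an explicit format spec
stamps = pd.to_datetime(pd.Series(['2/28/09 18:00', '3/1/09 06:30']),
                        format='%m/%d/%y %H:%M')
```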
Using a trivial example: End of explanation """ pd.merge(df1, df2, how='outer') """ Explanation: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables. Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables. End of explanation """ segments.head(1) vessels.head(1) """ Explanation: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other. Looking at the two datasets that we wish to merge: End of explanation """ segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi') segments_merged.head() """ Explanation: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other. End of explanation """ vessels.merge(segments, left_index=True, right_on='mmsi').head() """ Explanation: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other. Notice that the mmsi field that was an index on the vessels table is no longer an index on the merged table.
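The right and left joins mentioned above can be sketched with the same kind of toy frames (these frames are invented here, not part of the AIS data):

```python
import pandas as pd

left = pd.DataFrame({'id': [0, 1, 2, 3], 'age': [25, 19, 30, 22]})
right = pd.DataFrame({'id': [0, 1, 2], 'score': [0.1, 0.5, 0.9]})

# how='left' keeps every row of the first frame; unmatched keys get NaN.
left_join = pd.merge(left, right, how='left')
# how='right' keeps every row of the second frame.
right_join = pd.merge(left, right, how='right')
print(left_join)
```
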
Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables: End of explanation """ segments['type'] = 'foo' pd.merge(vessels, segments, left_index=True, right_on='mmsi').head() """ Explanation: Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them. End of explanation """ np.concatenate([np.random.random(5), np.random.random(5)]) np.r_[np.random.random(5), np.random.random(5)] np.c_[np.random.random(5), np.random.random(5)] """ Explanation: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right tables, respectively. Concatenation A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_: End of explanation """ mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None) mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None) mb1.shape, mb2.shape mb1.head() """ Explanation: Notice that c_ and r_ are not really functions at all, since they perform some sort of indexing operation, rather than being called. They are actually class instances, but they are here behaving mostly like functions. Don't think about this too hard; just know that they are there. This operation is also called binding or stacking. With Pandas' indexed data structures, there are additional considerations as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index. End of explanation """ mb1.columns = mb2.columns = ['Count'] mb1.index.name = mb2.index.name = 'Taxon' mb1.head() """ Explanation: Let's give the index and columns meaningful labels: End of explanation """ mb1.index[:3] mb1.index.is_unique """ Explanation: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level. End of explanation """ pd.concat([mb1, mb2], axis=0).shape """ Explanation: If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated: End of explanation """ pd.concat([mb1, mb2], axis=0).index.is_unique """ Explanation: However, the index is no longer unique, due to overlap between the two DataFrames. End of explanation """ pd.concat([mb1, mb2], axis=1).shape pd.concat([mb1, mb2], axis=1).head() """ Explanation: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames. End of explanation """ pd.concat([mb1, mb2], axis=1, join='inner').head() """ Explanation: If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument. End of explanation """ mb1.combine_first(mb2).head() """ Explanation: If we wanted to use the second table to fill values absent from the first table, we could use combine_first. End of explanation """ pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head() pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique """ Explanation: We can also create a hierarchical index based on keys identifying the original tables.
End of explanation """ pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head() """ Explanation: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table. End of explanation """ # Loading all the .xls files one by one mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None) mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None) mb3 = pd.read_excel('Data/microbiome/MID3.xls', 'Sheet 1', index_col=0, header=None) mb4 = pd.read_excel('Data/microbiome/MID4.xls', 'Sheet 1', index_col=0, header=None) mb5 = pd.read_excel('Data/microbiome/MID5.xls', 'Sheet 1', index_col=0, header=None) mb6 = pd.read_excel('Data/microbiome/MID6.xls', 'Sheet 1', index_col=0, header=None) mb7 = pd.read_excel('Data/microbiome/MID7.xls', 'Sheet 1', index_col=0, header=None) mb8 = pd.read_excel('Data/microbiome/MID8.xls', 'Sheet 1', index_col=0, header=None) mb9 = pd.read_excel('Data/microbiome/MID9.xls', 'Sheet 1', index_col=0, header=None) # Each of these files contains two columns: the name of the taxon and a count. So we name the second column "Count" to keep the same meaning. mb1.columns = mb2.columns = mb3.columns = mb4.columns = mb5.columns = mb6.columns = mb7.columns = mb8.columns = mb9.columns = ['Count'] # Same here for the index, naming it after the taxon. mb1.index.name = mb2.index.name = mb3.index.name = mb4.index.name = mb5.index.name = mb6.index.name = mb7.index.name = mb8.index.name = mb9.index.name = 'Taxon' # Now we'll add three columns which are defined in the metadata file: the barcode, the group and the sample type of each excel file.
dataframe = pd.concat([mb1, mb2, mb3, mb4, mb5, mb6, mb7, mb8, mb9], axis=0) dataframe['Barcode']=['MID1']*len(mb1) + ['MID2']*len(mb2) + ['MID3']*len(mb3) + ['MID4']*len(mb4)+ ['MID5']*len(mb5)+ ['MID6']*len(mb6)+ ['MID7']*len(mb7)+ ['MID8']*len(mb8)+ ['MID9']*len(mb9) dataframe['Group']=['Extraction Control']*len(mb1) + ['NEC 1']*len(mb2) + ['Control 1']*len(mb3) + ['NEC 2']*len(mb4)+ ['Control 2']*len(mb5)+ ['NEC 1']*len(mb6)+ ['Control 1']*len(mb7)+ ['NEC 2']*len(mb8)+ ['Control 2']*len(mb9) dataframe['Sample']=['NA']*len(mb1) + ['tissue']*len(mb2) + ['tissue']*len(mb3) + ['tissue']*len(mb4)+ ['tissue']*len(mb5)+ ['stool']*len(mb6)+ ['stool']*len(mb7)+ ['stool']*len(mb8)+ ['stool']*len(mb9) dataframe.tail() type(dataframe) """ Explanation: If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument. Exercise 1 In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame. End of explanation """ cdystonia = pd.read_csv("Data/cdystonia.csv", index_col=None) cdystonia.head() """ Explanation: Reshaping DataFrame objects In the context of a single DataFrame, we are often interested in re-arranging the layout of our data. This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37) Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment) TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began End of explanation """ stacked = cdystonia.stack() stacked """ Explanation: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or in multiple columns representing multiple measurements. The stack method rotates the data frame so that columns are represented in rows: End of explanation """ stacked.unstack().head() """ Explanation: To complement this, unstack pivots from rows back to columns. End of explanation """ cdystonia2 = cdystonia.set_index(['patient','obs']) cdystonia2.head() cdystonia2.index.is_unique """ Explanation: For this dataset, it makes sense to create a hierarchical index based on the patient and observation: End of explanation """ twstrs_wide = cdystonia2['twstrs'].unstack('obs') twstrs_wide.head() cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']] .drop_duplicates() .merge(twstrs_wide, right_index=True, left_on='patient', how='inner') .head()) cdystonia_wide """ Explanation: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
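Before unstacking the real data, the stack/unstack round trip described above can be sketched on a small made-up frame (the values here are illustrative only):

```python
import pandas as pd

toy = pd.DataFrame({'twstrs': [40, 32], 'age': [65, 70]}, index=['p1', 'p2'])

stacked = toy.stack()        # Series with a (row, column) MultiIndex
widened = stacked.unstack()  # back to a wide layout (columns may be reordered)
print(widened)
```
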
End of explanation """ (cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'] .unstack('week').head()) """ Explanation: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking: End of explanation """ pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], var_name='obs', value_name='twsters').head() """ Explanation: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments. End of explanation """ cdystonia.pivot(index='patient', columns='obs', values='twstrs').head() """ Explanation: This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected. The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them. Pivoting The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.
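A minimal sketch of those three arguments, using an invented long-format table rather than the dystonia data:

```python
import pandas as pd

long_df = pd.DataFrame({'patient': [1, 1, 2, 2],
                        'obs':     [1, 2, 1, 2],
                        'twstrs':  [40, 38, 32, 30]})

# index -> row labels, columns -> column labels, values -> cell contents
wide_df = long_df.pivot(index='patient', columns='obs', values='twstrs')
print(wide_df)
```
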
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above: End of explanation """ cdystonia.pivot(index='patient', columns='obs') """ Explanation: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table: End of explanation """ cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', aggfunc=max).head(20) """ Explanation: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function. End of explanation """ pd.crosstab(cdystonia.sex, cdystonia.site) """ Explanation: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired. End of explanation """ vessels.duplicated(subset='names') vessels.drop_duplicates(['names']) """ Explanation: Data transformation There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" which include tasks such as removing duplicate values, replacing values, and grouping values. Dealing with duplicates We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name: End of explanation """ cdystonia.treat.value_counts() """ Explanation: Value replacement Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis.
For example, consider the treatment variable in the cervical dystonia dataset: End of explanation """ treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2} cdystonia['treatment'] = cdystonia.treat.map(treatment_map) cdystonia.treatment """ Explanation: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes. End of explanation """ vals = pd.Series([float(i)**10 for i in range(10)]) vals np.log(vals) """ Explanation: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method. An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values: End of explanation """ vals = vals.replace(0, 1e-6) np.log(vals) """ Explanation: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace. End of explanation """ cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2}) """ Explanation: We can also perform the same replacement that we used map for with replace: End of explanation """ top5 = vessels.type.isin(vessels.type.value_counts().index[:5]) top5.head(10) vessels5 = vessels[top5] pd.get_dummies(vessels5.type).head(10) """ Explanation: Indicator variables For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward. Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard.
The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's keep only the 5 most common types of ships: End of explanation """ cdystonia.treat.head() """ Explanation: Categorical Data Pandas provides a convenient dtype for representing categorical (factor) data, called category. For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it consists of strings. End of explanation """ pd.Categorical(cdystonia.treat) cdystonia['treat'] = cdystonia.treat.astype('category') cdystonia.treat.describe() """ Explanation: We can convert this to a category type either by the Categorical constructor, or casting the column using astype: End of explanation """ cdystonia.treat.cat.categories """ Explanation: By default the Categorical type represents an unordered categorical. End of explanation """ cdystonia.treat.cat.categories = ['Placebo', '5000U', '10000U'] cdystonia.treat.cat.as_ordered().head() """ Explanation: However, an ordering can be imposed. The order is lexical by default, but will assume the order of the listed categories to be the desired order. End of explanation """ cdystonia.treat.cat.codes """ Explanation: The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels. End of explanation """ %time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head() segments['name'] = segments.name.astype('category') %time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head() """ Explanation: Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower. There is also a performance benefit.
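The memory saving can be sketched directly with memory_usage; the strings and counts below are made up, and exact byte counts are platform-dependent, but the categorical version should always be much smaller:

```python
import pandas as pd

labels = pd.Series(['Towing', 'Cargo', 'Tanker'] * 10000)

obj_bytes = labels.memory_usage(deep=True)                     # one string object per row
cat_bytes = labels.astype('category').memory_usage(deep=True)  # small integer codes + 3 labels
print(obj_bytes, cat_bytes)
```
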
Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!): End of explanation """ cdystonia.age.describe() """ Explanation: Hence, we get a considerable speedup simply by using the appropriate dtype for our data. Discretization Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly! Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups: End of explanation """ pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30] """ Explanation: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's: End of explanation """ pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30] """ Explanation: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False: End of explanation """ pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30] """ Explanation: Since the data are now ordinal, rather than numeric, we can give them labels: End of explanation """ pd.qcut(cdystonia.age, 4)[:30] """ Explanation: A related function qcut uses empirical quantiles to divide the data.
If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default: End of explanation """ quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1]) quantiles[:30] """ Explanation: Alternatively, one can specify custom quantiles to act as cut points: End of explanation """ pd.get_dummies(quantiles).head(10) """ Explanation: Note that you can easily combine discretization with the generation of indicator variables shown above: End of explanation """ new_order = np.random.permutation(len(segments)) new_order[:30] """ Explanation: Permutation and sampling For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence. End of explanation """ segments.take(new_order).head() """ Explanation: Using this sequence as an argument to the take method results in a reordered DataFrame: End of explanation """ segments.head() """ Explanation: Compare this ordering with the original: End of explanation """ vessels.sample(n=10) vessels.sample(n=10, replace=True) """ Explanation: For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement: End of explanation """ cdystonia_grouped = cdystonia.groupby(cdystonia.patient) """ Explanation: Data aggregation and GroupBy operations One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset.
For example: aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results; slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting); group-wise transformation, such as standardization/normalization End of explanation """ cdystonia_grouped """ Explanation: This grouped dataset is hard to visualize. End of explanation """ for patient, group in cdystonia_grouped: print('patient', patient) print('group', group) """ Explanation: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups: End of explanation """ cdystonia_grouped.agg(np.mean).head() """ Explanation: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table. For example, we may want to aggregate our data with some function. <div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div> We can aggregate in Pandas using the aggregate (or agg, for short) method: End of explanation """ cdystonia_grouped.mean().head() """ Explanation: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables, these columns are simply ignored by the method.
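agg also accepts a list of functions, giving one output column per function; a sketch on a throwaway frame (not the dystonia table):

```python
import pandas as pd

df = pd.DataFrame({'patient': [1, 1, 2, 2], 'twstrs': [40, 38, 32, 30]})

# One column per aggregation function, per group.
summary = df.groupby('patient')['twstrs'].agg(['mean', 'max'])
print(summary)
```
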
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean: End of explanation """ cdystonia_grouped.mean().add_suffix('_mean').head() # The median of the `twstrs` variable cdystonia_grouped['twstrs'].quantile(0.5) """ Explanation: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation: End of explanation """ cdystonia.groupby(['week','site']).mean().head() """ Explanation: If we wish, we can easily aggregate according to multiple keys: End of explanation """ normalize = lambda x: (x - x.mean())/x.std() cdystonia_grouped.transform(normalize).head() """ Explanation: Alternately, we can transform the data, using a function of our choice with the transform method: End of explanation """ cdystonia_grouped['twstrs'].mean().head() # This gives the same result as a DataFrame cdystonia_grouped[['twstrs']].mean().head() """ Explanation: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns: End of explanation """ chunks = dict(list(cdystonia_grouped)) chunks[4] """ Explanation: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed: End of explanation """ grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1) {g:grouped_by_type.get_group(g) for g in grouped_by_type.groups} """ Explanation: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way: End of explanation """ cdystonia2.head(10) cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean() """ Explanation: It's also possible to group by one or more levels of a hierarchical index.
Recall cdystonia2, which we created with a hierarchical index: End of explanation """ def top(df, column, n=5): return df.sort_values(by=column, ascending=False)[:n] """ Explanation: Apply We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame. The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call. End of explanation """ top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']] top3segments.head(15) """ Explanation: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship: End of explanation """ mb1.index[:3] """ Explanation: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument. Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index. End of explanation """ class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3])) mb_class = mb1.copy() mb_class.index = class_index """ Explanation: Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.
End of explanation """ mb_class.head() """ Explanation: However, since there are multiple taxonomic units with the same class, our index is no longer unique: End of explanation """ mb_class.groupby(level=0).sum().head(10) """ Explanation: We can re-establish a unique index by summing all rows with the same class, using groupby: End of explanation """ from IPython.core.display import HTML HTML(filename='Data/titanic.html') #import titanic data file titanic = pd.read_excel("Data/titanic.xls", index_col=None) titanic.head() # turn "sex" attribute into numerical attribute # 0 = male ; 1 = female sex_map = {'male': 0, 'female': 1} titanic['sex'] = titanic.sex.map(sex_map) titanic.head() # clean duplicate values titanic_2 = titanic.drop_duplicates(['name']) # inspect attributes as categorical data (note: these pd.Categorical calls only display the categories; they do not modify titanic_2 in place) pd.Categorical(titanic_2.pclass) pd.Categorical(titanic_2.survived) pd.Categorical(titanic_2.sex) pd.Categorical(titanic_2.age) pd.Categorical(titanic_2.sibsp) pd.Categorical(titanic_2.parch) pd.Categorical(titanic_2.ticket) pd.Categorical(titanic_2.fare) pd.Categorical(titanic_2.cabin) pd.Categorical(titanic_2.embarked) pd.Categorical(titanic_2.boat) pd.Categorical(titanic_2.body) titanic_2 # describe passenger class pclasses = titanic_2.pclass.value_counts() class1 = (pclasses[1]/1307)*100 class2 = (pclasses[2]/1307)*100 class3 = (pclasses[3]/1307)*100 d = {'1st Class' : class1, '2nd Class' : class2, '3rd Class' : class3} pd.Series(d) #24% of passengers travelled 1st class, 21% travelled in 2nd class and 54% travelled in 3rd class # plot classes 1 = 1st 2 = 2nd and 3 = 3rd pclasses.plot.pie() # describe passenger survival survivals = titanic_2.survived.value_counts() survived = (survivals[1]/1307)*100 survived # 38.25% of passengers survived # plot survivals 0 = death & 1 = survival survivals.plot.pie() # describe passenger sex sex = titanic_2.sex.value_counts() sex female_ratio = (sex[1]/1307)*100 female_ratio # results show that about 35% of passengers were female and 65% were male # 
plot gender distribution 0 = male & 1 = female sex.plot.pie() # calculate proportions of port of embarkation S = Southampton & C = Cherbourg & Q = Queenstown port = titanic_2.embarked.value_counts() S = (port['S']/1307)*100 C = (port['C']/1307)*100 Q = (port['Q']/1307)*100 d = {'S' : S, 'C' : C, 'Q' : Q} pd.Series(d) # 20.6% of passengers boarded in C, 9.4% boarded in Q and 69.7% boarded in S. # plot port of embarkation distribution port.plot.pie() # describe passenger age # assumption - dropping all NaN values and including values of estimated ages titanic_2age = titanic_2.age.dropna() titanic_2age.describe() # results show that mean age was 29.86 y.o. # min age was 0.16 y.o. and max was 80 y.o. # 25% of passengers under 21, 50% under 28, 75% under 39 y.o. # show distribution of ages on board titanic_2age.plot.hist(bins=50) # describe passenger fare # assumption - dropping all NaN values titanic_2fare = titanic_2.fare.dropna() titanic_2fare.describe() # results show that mean fare was 33 # min fare was 0 and max was 512 # 25% of passengers paid under 7.9, 50% under 14.5, 75% under 31.27 # show distribution of fares on board titanic_2fare.plot.hist(bins=50) # majority of fares under 100 with few outliers # description of statistics on # of siblings and spouses on board # assumption - dropping all NaN values and include values which are 0 titanic_2sibsp = titanic_2.sibsp.dropna() titanic_2sibsp.describe() # results show that mean # of sibsp was 0.49 siblings or spouses aboard # min number of siblings or spouses was 0 and max was 8 # 75% of passengers had less than 1 sibling or spouse aboard, indicating outliers above 1 # show distribution of # of siblings and spouses on board titanic_2sibsp.plot.hist(bins=50) # description of statistics on # of parents and children on board # assumption - dropping all NaN values and include values which are 0 titanic_2parch = titanic_2.parch.dropna() titanic_2parch.describe() # results show that mean # of parch was 0.38 parents
or children aboard # min number of parents or children was 0 and max was 9 # 75% of passengers had less than or equal to 0 parents or children aboard, indicating many outliers in the data # show distribution of # of parents and children on board titanic_2parch.plot.hist(bins=50) """ Explanation: Exercise 2 Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic. End of explanation """ # Part 2 # Using Groupby to find ratio of survival by sex sex_survival = titanic.groupby(titanic.survived).sex.value_counts() sex_survival # survivors gender profile calculation surv_tot = sex_survival[1].sum() # calculate total number of survivors fem_surv = (sex_survival[1,1]/surv_tot)*100 # calculate proportion of survived females male_surv = (sex_survival[1,0]/surv_tot)*100 # calculate proportion of survived males out2 = {'Male Survivors' : male_surv , 'Female Survivors' : fem_surv,} # display outputs simultaneously pd.Series(out2) # 67.8% of survivors were female and 32.2% were male # Part 3 # Using Groupby to find ratio of survival by sex and class # table outputs raw numbers, but not proportions sex_class = titanic_2.groupby(['survived','sex']).pclass.value_counts() sex_class # survivors gender + class profile calculation data = pd.DataFrame(sex_class) # turn into data set surv_tot = sex_class[1].sum() # calculate total number of survivors data['proportion of survived'] = (data/surv_tot)*100 #add column of proportion of survivors # this column refers to the percentage of people that survived/ did not survive that belong to each category (e.g.
percentage of non-survivors that were females in second class) data.loc[1] # the table below only shows proportions of different categories of people among survivors # Part 4 # Create Age Categories # Assumption: Dropped all NaNs age_group = pd.cut(titanic_2.age, [0,14,20,64,100], labels=['children','adolescents','adult','seniors']) # create age categories titanic_2['age_group'] = age_group #add column of age group to main dataframe sex_class_age = titanic_2.groupby(['survived','sex', 'pclass']).age_group.value_counts() #find counts for different combinations of age group, sex and class sex_class_age # survivors gender + class + age group profile calculation data = pd.DataFrame(sex_class_age) # turn into data set surv_tot = sex_class_age[1].sum() # calculate total number of survivors data['proportion of survivors'] = (data/surv_tot)*100 #add column of proportion # this column refers to the percentage of people that survived/ did not survive that belong to each category (e.g. percentage of survivors that were senior males in first class) data.loc[1] # the table below shows proportions of survivors belonging to different categories """ Explanation: Women and children first? Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings. Use the groupby method to calculate the proportion of passengers that survived by sex. Calculate the same proportion, but by class and sex. Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior (65+), and calculate survival proportions by age category, class and sex. End of explanation """
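The survival proportions above are computed by dividing value counts by a hand-typed total. Since survived is coded 0/1, groupby followed by mean() yields the same proportions directly; a minimal sketch on a toy frame (hypothetical rows, not the real titanic.xls):

```python
import pandas as pd

# Toy stand-in for the Titanic frame (hypothetical rows, not the real titanic.xls)
df = pd.DataFrame({
    "sex": ["female", "female", "male", "male", "male", "female"],
    "survived": [1, 0, 0, 1, 0, 1],
})

# Because `survived` is coded 0/1, the group mean IS the survival proportion,
# so no hand-typed total such as 1307 is needed.
prop = df.groupby("sex")["survived"].mean() * 100
print(prop)
```

The same idiom extends to several keys, e.g. df.groupby(['pclass', 'sex'])['survived'].mean().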
soloman817/udacity-ml
misc/keyboard-shortcuts.ipynb
mit
# mode practice """ Explanation: Keyboard shortcuts In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed. First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and openning the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself. By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape. Exercise: Click on this cell, then press Enter + Shift to get to the next cell. Switch between edit and command mode a few times. End of explanation """ ## Practice here def fibo(n): # Recursive Fibonacci sequence! if n == 0: return 0 elif n == 1: return 1 return fibo(n-1) + fibo(n-2) """ Explanation: Help with commands If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. Creating new cells One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell. Exercise: Create a cell above this cell using the keyboard command. Exercise: Create a cell below this cell using the keyboard command. Switching between Markdown and code With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to cell, press Y. To switch from code to Markdown, press M. 
Exercise: Switch the cell below between Markdown and code cells. End of explanation """ # DELETE ME """ Explanation: Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell. Exercise: Turn line numbers on and off in the above code cell. Deleting cells Deleting cells is done by pressing D twice in a row so D, D. This is to prevent accidental deletions; you have to press the key twice! Exercise: Delete the cell below. End of explanation """ # Move this cell down # below this cell """ Explanation: Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy! The Command Palette You can easily access the command palette by pressing Shift + Control/Command + P. Note: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari. This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands. Exercise: Use the command palette to move the cell below down one position. End of explanation """
aboSamoor/polyglot
notebooks/Transliteration.ipynb
gpl-3.0
from polyglot.transliteration import Transliterator """ Explanation: Transliteration Transliteration is the conversion of a text from one script to another. For instance, a Latin transliteration of the Greek phrase "Ελληνική Δημοκρατία", usually translated as 'Hellenic Republic', is "Ellēnikḗ Dēmokratía". End of explanation """ from polyglot.downloader import downloader print(downloader.supported_languages_table("transliteration2")) """ Explanation: Languages Coverage End of explanation """ %%bash polyglot download embeddings2.en transliteration2.ar """ Explanation: Downloading Necessary Models End of explanation """ from polyglot.text import Text blob = """We will meet at eight o'clock on Thursday morning.""" text = Text(blob) """ Explanation: Example We transliterate each word in the text into the Arabic script. End of explanation """ for x in text.transliterate("ar"): print(x) """ Explanation: We can iterate over the transliterated words End of explanation """ !polyglot --lang en tokenize --input testdata/cricket.txt | polyglot --lang en transliteration --target ar | tail -n 30 """ Explanation: Command Line Interface End of explanation """
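As a library-free illustration of what transliteration does (not polyglot's model-based approach), the Greek example from the introduction can be handled with a hand-written character map; the mapping below is a crude assumption for demonstration only:

```python
import unicodedata

# Crude Greek-to-Latin letter map (an illustrative assumption, not polyglot's model)
GREEK_TO_LATIN = {
    "α": "a", "β": "v", "γ": "g", "δ": "d", "ε": "e", "ζ": "z", "η": "e",
    "θ": "th", "ι": "i", "κ": "k", "λ": "l", "μ": "m", "ν": "n", "ξ": "x",
    "ο": "o", "π": "p", "ρ": "r", "σ": "s", "ς": "s", "τ": "t", "υ": "y",
    "φ": "f", "χ": "ch", "ψ": "ps", "ω": "o",
}

def crude_translit(text):
    out = []
    for ch in text.lower():
        # NFD puts the base letter first, so accents fall away ('ή' -> 'η')
        base = unicodedata.normalize("NFD", ch)[0]
        out.append(GREEK_TO_LATIN.get(base, base))
    return "".join(out)

print(crude_translit("Ελληνική"))  # -> ellenike
```

Polyglot's embeddings-based transliterator handles arbitrary script pairs; this sketch only shows the character-level idea.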
acehanks/projects
Data analysis/Ign_dataset_Analysis.ipynb
mit
# This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) # Any results you write to the current directory are saved as output. """ Explanation: In this dataset, we are going to explore the sales information provided. End of explanation """ import seaborn as sns from matplotlib import pyplot as plt %matplotlib inline from brewer2mpl import qualitative df= pd.read_csv("../input/Video_Games_Sales_as_at_22_Dec_2016.csv") df.shape """ Explanation: Let us start by importing some helper libraries and the dataset as well End of explanation """ df.head() """ Explanation: Now that we know the shape of the dataset, let's have a peek at the data and check whether there are any missing values. End of explanation """ df.info() """ Explanation: The cell below shows the data type of the columns End of explanation """ df.isnull().sum() """ Explanation: The cell below shows each column name and its number of missing rows.
End of explanation """ df.columns.tolist() """ Explanation: A list of all the columns in the dataset End of explanation """ df_na= ( df.isnull().sum() / len(df) ) * 100 df_na= df_na.drop(df_na[df_na == 0].index).sort_values(ascending= False) f, ax= plt.subplots(figsize=(12, 8)) plt.xticks(rotation='90') sns.barplot(x=df_na.index, y=df_na.values) ax.set(title='Missing Values Plot', ylabel='% Missing') """ Explanation: Calculating the % of missing values End of explanation """ df.Platform.unique() #df.Platform.value_counts() ssc = df.Platform.value_counts() f, ax= plt.subplots(figsize=(12, 8)) plt.xticks(rotation='90') sns.barplot(x=ssc.values, y=ssc.index, orient='h') ax.set(title='Consoles by count', ylabel='Count') f.tight_layout() """ Explanation: Unique Gaming platforms End of explanation """ df_clean= df.dropna(axis=0) df_clean.shape ssc = df_clean.Platform.value_counts() f, ax= plt.subplots(figsize=(12, 8)) plt.xticks(rotation='90') sns.barplot(x=ssc.values, y=ssc.index, orient='h') ax.set(title='Consoles by count after dropping NAs', ylabel='Count') f.tight_layout() #df['User_Score']= df['User_Score'].convert_objects(convert_numeric=True) df_clean.User_Score= df_clean.User_Score.astype('float') #df.User_Score.dtype #df['User_Score'] = video['User_Score'].convert_objects(convert_numeric= True) """ Explanation: dropping all NA values End of explanation """ sns.jointplot(x='User_Score', y='Critic_Score', data=df_clean, kind='hex', cmap='coolwarm', size=7) """ Explanation: Plot of the user score v the critic score of games. It appears the users and the critics agree on games with score greater than 8. End of explanation """ sns.jointplot(x='Critic_Score', y='Critic_Count', data=df_clean, kind='hex', cmap='plasma', size=7) """ Explanation: Critic score v critic count. From this plot, we observe that few critics give scores above 80. 
End of explanation """ stats=['Year_of_Release','NA_Sales', 'EU_Sales', 'JP_Sales', 'Other_Sales', 'Global_Sales', 'Critic_Score', 'Critic_Count', 'User_Score', 'User_Count', 'Rating'] corrmat = df_clean[stats].corr() f, ax = plt.subplots(figsize=(10, 7)) plt.xticks(rotation='90') plt.title('correlation between columns') sns.heatmap(corrmat, square=True, linewidths=.5, annot=True) """ Explanation: CORRELATION BETWEEN COLUMNS End of explanation """ play= df_clean[(df_clean['Platform']== 'PS2') | (df_clean['Platform']== 'PS3') | (df_clean['Platform']== 'PS')| (df_clean['Platform']== 'PS4')] play.shape """ Explanation: Taking a look at Playstation End of explanation """ sales_Play= play.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum() sales_Play.unstack().plot(kind='bar',stacked=True, colormap= 'Oranges', grid=False) ax.set(title='Playststation Global over the year', ylabel='Cumulative count') """ Explanation: Playststation Global 1994-2016 End of explanation """ sales_Play= play.groupby(['Genre', 'Platform'])['Global_Sales'].sum() sales_Play.unstack().plot(kind='bar',stacked=True, colormap= 'Oranges', grid=False) """ Explanation: Top selling genre for Playstation End of explanation """ sales_Play= play.groupby(['Rating', 'Platform'])['Global_Sales'].sum() sales_Play.unstack().plot(kind='bar',stacked=True, colormap= 'Oranges', grid=False) """ Explanation: Rating of the games made End of explanation """ xb= df_clean[(df_clean['Platform']== 'X360') | (df_clean['Platform']== 'XOne') | (df_clean['Platform']== 'XB')] xb.shape """ Explanation: Taking a closer look at Xbox End of explanation """ sales_xb= xb.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum() sales_xb.unstack().plot(kind='bar',stacked=True, colormap= 'Vega20', grid=False) """ Explanation: Global sales of the Xbox consoles globally 1994-2016 End of explanation """ sales_xb= xb.groupby(['Genre', 'Platform'])['Global_Sales'].sum() sales_xb.unstack().plot(kind='bar',stacked=True, 
colormap= 'Vega20', grid=False) """ Explanation: Top selling genre per Xbox console. The top selling genre is the shooter, which makes sense because of the Halo franchise. End of explanation """ sales_xb= xb.groupby(['Rating', 'Platform'])['Global_Sales'].sum() sales_xb.unstack().plot(kind='bar',stacked=True, colormap= 'Vega20', grid=False) """ Explanation: Rating and global sales End of explanation """ nintendo= df_clean[(df_clean['Platform']== 'DS') | (df_clean['Platform']== 'Wii') | (df_clean['Platform']== 'GC')| (df_clean['Platform']== 'GBA') |(df_clean['Platform']== '3DS') | (df_clean['Platform']== 'WiiU')] nintendo.shape """ Explanation: Taking a closer look at Nintendo End of explanation """ nintendo_sales= nintendo.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum() nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False) """ Explanation: Platform and total global sales from 1994-2016 End of explanation """ nintendo_sales= nintendo.groupby(['Genre', 'Platform'])['Global_Sales'].sum() nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False) """ Explanation: Genre and total sales on platform. Nintendo looks to be selling a lot of sports-oriented games, especially on the Wii. However, the Wii U is struggling in sales. End of explanation """ nintendo_sales= nintendo.groupby(['Rating', 'Platform'])['Global_Sales'].sum() nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False) current_gen= df_clean[(df_clean['Platform']== 'Wii') | (df_clean['Platform']== 'X360') | (df_clean['Platform']== 'PS3')] current_gen.shape """ Explanation: Rating and total global sales.
Nintendo sold mostly in the E (Everyone) rating category End of explanation """ current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False) current_gen_sales= current_gen.groupby(['Genre', 'Platform'])['Global_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False) current_gen_sales= current_gen.groupby(['Rating', 'Platform'])['Global_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False) """ Explanation: Comparing the top selling platforms, last generation End of explanation """ current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['NA_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False) """ Explanation: Last generation sales from North America End of explanation """ current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['JP_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False) """ Explanation: Last generation sales from Japan End of explanation """ current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['EU_Sales'].sum() current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False) """ Explanation: Last generation sales from the EU End of explanation """
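The recurring pattern in this notebook is groupby([...]).sum() followed by unstack() to get a wide table that .plot(kind='bar', stacked=True) can draw. What unstack() produces can be seen on a toy table (made-up numbers, not the real dataset):

```python
import pandas as pd

# Toy sales table (made-up numbers, not the real video-game dataset)
toy = pd.DataFrame({
    "Genre":        ["Action", "Action", "Sports", "Sports"],
    "Platform":     ["PS2", "PS3", "PS2", "PS3"],
    "Global_Sales": [10.0, 5.0, 7.0, 3.0],
})

sales = toy.groupby(["Genre", "Platform"])["Global_Sales"].sum()
wide = sales.unstack()   # one row per Genre, one column per Platform
print(wide)
```

The inner level of the MultiIndex ("Platform") becomes the columns, which is why each platform gets its own segment in the stacked bars.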
is-cs/druljs
DD_Net_demo_jhmdb_split2.ipynb
mit
import numpy as np import math import random import pandas as pd import os import matplotlib.pyplot as plt import cv2 import glob from tqdm import tqdm import pickle import scipy.ndimage.interpolation as inter from scipy.signal import medfilt from scipy.spatial.distance import cdist from keras.optimizers import * from keras.models import Model from keras.layers import * from keras.layers.core import * from tensorflow.keras.callbacks import * from keras.layers.convolutional import * from keras import backend as K import tensorflow as tf import google.colab.files """ Explanation: <a href="https://colab.research.google.com/github/is-cs/druljs/blob/master/DD_Net_demo_jhmdb_split2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ random.seed(1234) class Config(): def __init__(self): self.frame_l = 32 # the length of frames self.joint_n = 15 # the number of joints self.joint_d = 2 # the dimension of joints self.clc_num = 21 # the number of classes self.feat_d = 105 self.filters = 64 C = Config() """ Explanation: Define configurations End of explanation """ # Temporal resizing function: median-filter each joint track, then resample to target_l frames def zoom(p,target_l=64,joints_num=25,joints_dim=3): l = p.shape[0] p_new = np.empty([target_l,joints_num,joints_dim]) for m in range(joints_num): for n in range(joints_dim): p_new[:,m,n] = inter.zoom(medfilt(p[:,m,n],3),target_l/l)[:target_l] return p_new # Calculate JCD feature def norm_scale(x): return (x-np.mean(x))/np.mean(x) def get_CG(p,C): M = [] iu = np.triu_indices(C.joint_n,1,C.joint_n) for f in range(C.frame_l): d_m = cdist(p[f],p[f],'euclidean') d_m = d_m[iu] M.append(d_m) M = np.stack(M) M = norm_scale(M) return M # Generate dataset def data_generator(T,C,le): X_0 = [] X_1 = [] X_2 = [] Y = [] limb_locs = [(0,2),(4,0),(3,0),(8,4),(7,3),(12,8),(11,7),(1,0),(6,1),(5,1),(10,6),(9,5),(14,10),(13,9)] print('len_t_pose:',len(T['pose'])) for i in
tqdm(range(len(T['pose']))): p = np.copy(T['pose'][i]) q = np.copy(p) q = np.pad(q,((0,0),(0,0),(0,1)), constant_values=0) qc = np.zeros((q.shape[0],q.shape[1]-1,q.shape[2])) normal_vec_pos = 0 for j1,j2 in limb_locs: joint1_section = np.copy(q[:,j1,:]) joint2_section = np.copy(q[:,j2,:]) limb_mat = joint1_section - joint2_section obs_vec_mat = np.copy(joint1_section) #print('obs_vec_mat.size:',obs_vec_mat.size()) #print('opx.size:',self.opx.size()) obs_vec_mat[:,0] = obs_vec_mat[:,0] - 0.023900129681375948 obs_vec_mat[:,1] = obs_vec_mat[:,1] - 0.37827336942214096 obs_vec_mat[:,2] = obs_vec_mat[:,2] - 1 qc[:,normal_vec_pos,:] = np.cross(limb_mat, obs_vec_mat, axis=1) normal_vec_pos += 1 if i == 0: print('p.shape:',p.shape) print('q.shape:',q.shape) p = zoom(p,target_l=C.frame_l,joints_num=C.joint_n,joints_dim=C.joint_d) qcz = zoom(qc,target_l=C.frame_l,joints_num=C.joint_n-1,joints_dim=C.joint_d+1) if i == 0: print('p1.shape:',p.shape) print('q1.shape:',q.shape) print('qc.shape:',qc.shape) print('qcz.shape:',qcz.shape) label = np.zeros(C.clc_num) label[le.transform(T['label'])[i]-1] = 1 M = get_CG(p,C) if i == 0: print('M.shape:',M.shape) X_0.append(M) X_1.append(p) X_2.append(qcz) Y.append(label) X_0 = np.stack(X_0) X_1 = np.stack(X_1) X_2 = np.stack(X_2) Y = np.stack(Y) return X_0,X_1,X_2,Y """ Explanation: Define data processing functions End of explanation """ def poses_diff(x): H, W = x.get_shape()[1],x.get_shape()[2] x = tf.subtract(x[:,1:,...],x[:,:-1,...]) x = tf.compat.v1.image.resize_nearest_neighbor(x,size=[H,W],align_corners=False) # should not alignment here return x def pose_motion(P,frame_l): #print('pose_motion: P.shape:',P.shape) P_diff_slow = Lambda(lambda x: poses_diff(x))(P) #print('pose_motion: P_diff_slow.shape:',P_diff_slow.shape) P_diff_slow = Reshape((frame_l,-1))(P_diff_slow) #print('pose_motion: P_diff_slow1.shape:',P_diff_slow.shape) P_fast = Lambda(lambda x: x[:,::2,...])(P) #print('pose_motion: P_fast.shape:',P_fast.shape) P_diff_fast 
= Lambda(lambda x: poses_diff(x))(P_fast) #print('pose_motion: P_diff_fast.shape:',P_diff_fast.shape) P_diff_fast = Reshape((int(frame_l/2),-1))(P_diff_fast) #print('pose_motion: P_diff_fast1.shape:',P_diff_fast.shape) return P_diff_slow,P_diff_fast def c1D(x,filters,kernel): #print('c1D - x.shape:',x.shape) x = Conv1D(filters, kernel_size=kernel,padding='same',use_bias=False)(x) x = BatchNormalization()(x) x = LeakyReLU(alpha=0.2)(x) return x def block(x,filters): x = c1D(x,filters,3) x = c1D(x,filters,3) return x def d1D(x,filters): x = Dense(filters,use_bias=False)(x) x = BatchNormalization()(x) x = LeakyReLU(alpha=0.2)(x) return x def build_FM(frame_l=32,joint_n=22,joint_d=2,feat_d=231,filters=16): M = Input(shape=(frame_l,feat_d)) P = Input(shape=(frame_l,joint_n,joint_d)) Q = Input(shape=(frame_l,14,3)) diff_slow,diff_fast = pose_motion(P,frame_l) #print('M.s:',M.shape) x = c1D(M,filters*2,1) x = SpatialDropout1D(0.1)(x) x = c1D(x,filters,3) x = SpatialDropout1D(0.1)(x) x = c1D(x,filters,1) x = MaxPooling1D(2)(x) x = SpatialDropout1D(0.1)(x) #print('P.s:',P.shape) #print('diff_slow.s:',diff_slow.shape) x_d_slow = c1D(diff_slow,filters*2,1) x_d_slow = SpatialDropout1D(0.1)(x_d_slow) x_d_slow = c1D(x_d_slow,filters,3) x_d_slow = SpatialDropout1D(0.1)(x_d_slow) x_d_slow = c1D(x_d_slow,filters,1) x_d_slow = MaxPool1D(2)(x_d_slow) x_d_slow = SpatialDropout1D(0.1)(x_d_slow) #print('diff_fast.s:',diff_fast.shape) x_d_fast = c1D(diff_fast,filters*2,1) x_d_fast = SpatialDropout1D(0.1)(x_d_fast) x_d_fast = c1D(x_d_fast,filters,3) x_d_fast = SpatialDropout1D(0.1)(x_d_fast) x_d_fast = c1D(x_d_fast,filters,1) x_d_fast = SpatialDropout1D(0.1)(x_d_fast) Q_reshaped = Reshape((frame_l,-1))(Q) x_cp = c1D(Q_reshaped,filters*2,1) x_cp = SpatialDropout1D(0.1)(x_cp) x_cp = c1D(x_cp,filters,3) x_cp = SpatialDropout1D(0.1)(x_cp) x_cp = c1D(x_cp,filters,1) x_cp = MaxPool1D(2)(x_cp) x_cp = SpatialDropout1D(0.1)(x_cp) x = concatenate([x,x_d_slow,x_d_fast, x_cp]) #x1 = 
concatenate([x,x_d_slow,x_d_fast, x_cp]) #print('mdl: x.shape:',x.shape,',x1.shape:', x1.shape) x = block(x,filters*2) x = MaxPool1D(2)(x) x = SpatialDropout1D(0.1)(x) x = block(x,filters*4) x = MaxPool1D(2)(x) x = SpatialDropout1D(0.1)(x) x = block(x,filters*8) x = SpatialDropout1D(0.1)(x) return Model(inputs=[M,P,Q],outputs=x) def build_DD_Net(C): M = Input(name='M', shape=(C.frame_l,C.feat_d)) P = Input(name='P', shape=(C.frame_l,C.joint_n,C.joint_d)) Q = Input(name='Q', shape=(C.frame_l,14,3)) FM = build_FM(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.filters) x = FM([M,P,Q]) x = GlobalMaxPool1D()(x) x = d1D(x,128) x = Dropout(0.5)(x) x = d1D(x,128) x = Dropout(0.5)(x) x = Dense(C.clc_num, activation='softmax')(x) ######################Self-supervised part model = Model(inputs=[M,P,Q],outputs=x) return model DD_Net = build_DD_Net(C) DD_Net.summary() """ Explanation: Define network End of explanation """ uploaded = google.colab.files.upload() Train = pickle.load(open("GT_train_2.pkl", "rb")) Test = pickle.load(open("GT_test_2.pkl", "rb")) from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit(Train['label']) X_0,X_1,X_2,Y = data_generator(Train,C,le) print('X_0.shape:',X_0.shape,', X_1.shape:',X_1.shape) X_test_0,X_test_1,X_test_2,Y_test = data_generator(Test,C,le) """ Explanation: Load dataset (download GT_train_1.pkl and GT_test_1.pkl from github and then upload them ) End of explanation """ import keras lr = 1e-3 DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy']) lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6) history = DD_Net.fit([X_0,X_1,X_2],Y, batch_size=len(Y), epochs=600, verbose=True, shuffle=True, callbacks=[lrScheduler], validation_data=([X_test_0,X_test_1,X_test_2],Y_test) ) lr = 1e-4 DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy']) lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', 
factor=0.5, patience=5, cooldown=5, min_lr=5e-6) history = DD_Net.fit([X_0,X_1,X_2],Y, batch_size=len(Y), epochs=600, verbose=True, shuffle=True, callbacks=[lrScheduler], validation_data=([X_test_0,X_test_1,X_test_2],Y_test) ) # Plot training & validation accuracy values plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() """ Explanation: Start train on split 1 End of explanation """
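The JCD feature built by get_CG above reduces each frame to the upper triangle of the joint-to-joint distance matrix. A self-contained numpy sketch of that step (toy coordinates, no cdist needed):

```python
import numpy as np

# One toy frame of 3 joints in 2-D; get_CG does this per frame for 15 joints,
# giving 15*14/2 = 105 distances per frame (C.feat_d above).
frame = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
diffs = frame[:, None, :] - frame[None, :, :]      # (J, J, 2) pairwise offsets
d_m = np.sqrt((diffs ** 2).sum(axis=-1))           # full distance matrix
iu = np.triu_indices(len(frame), 1)                # strict upper triangle
jcd = d_m[iu]                                      # [d01, d02, d12]
print(jcd)  # -> [ 5. 10.  5.]
```

Keeping only the upper triangle removes the zero diagonal and the duplicated lower half, which is exactly why feat_d is 105 for 15 joints.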
ClaudiaEsp/inet
Analysis/Sigmoids function to model connection probabilities.ipynb
gpl-2.0
%pylab inline from scipy.optimize import curve_fit import pickle # objective function def sigmoid(x, A, C, r): """ evaluates the following sigmoid function: f(x; A, C, r) = A / (1 + np.exp((x-C)/r)) where x is the independent variable A is the maximal amplitude of the curve C is the half point of the sigmoidal function r sets the steepness (slope) of the transition """ return A / ( 1 + np.exp((x-C)/r)) def fit_sigmoid(xdata, ydata, mytitle): """ Fits data to the sigmoid function, plots the results and returns the fitting parameters. It also saves the parameters to disk. Arguments: ---------- xdata: array independent variable ydata: array results of the function mytitle: string the name of the figure and the pickled object to be saved """ # 1.- fit data pinit = [50, 150, 15] # initial guess for amplitude, half-width and slope lower_bounds = (0, 0, 15) upper_bounds = (100, 300, 100) mybounds = ((lower_bounds, upper_bounds)) param, pcov = curve_fit(f=sigmoid, xdata=xdata, ydata=ydata, p0 = pinit, \ method='trf', bounds=mybounds) # 2.- plot data and fitting curve plt.plot(xdata, ydata, 'ko'); plt.ylim(-10,110), plt.xlim(-5, 305); xfit = np.linspace(-5,305, 100) yfit = sigmoid(xfit, *param) plt.plot(xfit, yfit, color='brown') plt.xlabel('Intersomatic distance ($\mu$m)'); plt.ylabel('Connection probability (%)'); plt.title(mytitle) # 3.- return and save fitting results pickle.dump( param, open( mytitle+ ".p", 'wb' ) ) print('A = {0[0]}, C = {0[1]} and r = {0[2]}'.format(param)) """ Explanation: <H1>Sigmoid functions to model connection probabilities</H1> <P>For modeling distance-dependent connection probabilities between inhibitory neurons</P> End of explanation """ # Plot probabilities of inhibitory chemical synapses between inhibitory neurons ichem_found = np.array([10, 30, 29, 12, 2, 4], dtype=int) ichem_tested = np.array([4, 14, 9, 2, 0, 0], dtype=float) yval = (ichem_tested/ichem_found)*100 xval = arange(25,350,50) # bins xval = array([ 25, 75, 125, 175,275, 325]) fit_sigmoid(xval, yval,
'chem_syn') """ Explanation: <H2>Chemical Synapses</H2> End of explanation """ # Plot probabilities of inhibitory electrical synapses between inhibitory neurons igap_found = np.array([ 5, 15, 15, 6, 4, 1, 2], dtype=int) igap_tested = np.array([ 2, 11, 6, 2, 0, 0, 0], dtype=float) yval = (igap_tested/igap_found)*100 xval = arange(25,350,50) # bins fit_sigmoid(xval, yval, 'elec_syn') """ Explanation: <H2>Electrical Synapses</H2> End of explanation """
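A quick sanity check of the parameterisation being fitted: f(C) is exactly A/2 (C really is the half point), and with r > 0 the connection probability decays with distance. The values below are illustrative only, not the fitted results:

```python
import math

def sigmoid(x, A, C, r):
    # Same functional form as the curve_fit objective above
    return A / (1 + math.exp((x - C) / r))

A, C, r = 60.0, 150.0, 30.0          # illustrative values, not fitted results
half = sigmoid(C, A, C, r)           # value at the half point
near, far = sigmoid(50.0, A, C, r), sigmoid(250.0, A, C, r)
print(half, near > far)  # -> 30.0 True
```

This is also why pinit guesses C near the middle of the 0-300 µm range: starting at the visual half point helps the trust-region fit converge.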
survey-methods/samplics
docs/source/tutorial/ttest.ipynb
mit
import numpy as np import pandas as pd from pprint import pprint from samplics.datasets import Auto from samplics.categorical.comparison import Ttest """ Explanation: T-test The t-test module allows comparing means of continuous variables of interest to known means or across two groups. There are four main types of comparisons. - Comparison of one-sample mean to a known mean - Comparison of two groups from the same sample - Comparison of two means from two different samples - Comparison of two paired means Ttest() is the class that implements all four types of comparisons. To run a comparison, the user calls the method compare() with the appropriate parameters. End of explanation """ # Load Auto sample data auto_cls = Auto() auto_cls.load_data() auto = auto_cls.data mpg = auto["mpg"] one_sample_known_mean = Ttest(samp_type="one-sample") one_sample_known_mean.compare(y=mpg, known_mean=20) print(one_sample_known_mean) """ Explanation: Comparison of one-sample mean to a known mean For this comparison, the mean of a continuous variable, i.e. mpg, is compared to a known mean. In the example below, the user is testing whether the average mpg is equal to 20. Hence, the null hypothesis is H0: mean(mpg) = 20. There are three possible alternatives for this null hypothesis: - Ha: mean(mpg) < 20 (less_than alternative) - Ha: mean(mpg) > 20 (greater_than alternative) - Ha: mean(mpg) != 20 (not_equal alternative) All three alternatives are automatically computed by the method compare(). This behavior is similar across the four types of comparisons. End of explanation """ pprint(one_sample_known_mean.__dict__) """ Explanation: The print below shows the information encapsulated in the object. point_est provides the sample mean. Similarly, stderror, stddev, lower_ci, and upper_ci provide the standard error, standard deviation, lower bound confidence interval (CI), and upper bound CI, respectively.
The class member stats provides the statistics related to the three t-tests (for the three alternative hypotheses). There is additional information encapsulated in the object as shown below. End of explanation """ foreign = auto["foreign"] one_sample_two_groups = Ttest(samp_type="one-sample") one_sample_two_groups.compare(y=mpg, group=foreign) print(one_sample_two_groups) """ Explanation: Comparison of two groups from the same sample This type of comparison is used when the two groups are from the same sample. For example, after running a survey, the user wants to know whether the domestic cars have the same mpg on average compared to the foreign cars. The parameter group indicates the categorical variable. NB: note that, at this point, Ttest() does not take into account potential dependencies between groups. End of explanation """ print(f"\nThese are the group means for mpg: {one_sample_two_groups.point_est}\n") print(f"These are the group standard error for mpg: {one_sample_two_groups.stderror}\n") print(f"These are the group standard deviation for mpg: {one_sample_two_groups.stddev}\n") print("These are the computed statistics:\n") pprint(one_sample_two_groups.stats) """ Explanation: Since there are two groups for this comparison, the sample mean, standard error, standard deviation, lower bound CI, and upper bound CI are provided by group as Python dictionaries. The class member stats provides statistics for the comparison assuming both equal and unequal variances.
End of explanation """ two_samples_unpaired = Ttest(samp_type="two-sample", paired=False) two_samples_unpaired.compare(y=mpg, group=foreign) print(two_samples_unpaired) print(f"\nThese are the group means for mpg: {two_samples_unpaired.point_est}\n") print(f"These are the group standard error for mpg: {two_samples_unpaired.stderror}\n") print(f"These are the group standard deviation for mpg: {two_samples_unpaired.stddev}\n") print("These are the computed statistics:\n") pprint(two_samples_unpaired.stats) """ Explanation: Comparison of two means from two different samples This type of comparison should be used when the two groups come from different samples or different strata. The group are assumed independent. Otherwise, the information is similar to the previous test. Note that, when instantiating the class, we used samp_type="two-sample". End of explanation """ two_samples_paired = Ttest(samp_type="two-sample", paired=True) two_samples_paired.compare(y=auto[["y1", "y2"]], group=foreign) print(two_samples_paired) """ Explanation: Comparison of two paired means When two measures are taken from the same observations, the paired t-test is appropriate for comparing the means. End of explanation """ y1 = auto["y1"] y2 = auto["y2"] two_samples_paired = Ttest(samp_type="two-sample", paired=True) two_samples_paired.compare(y=[y1, y2], varnames= ["group_1", "gourp_2"], group=foreign) print(two_samples_paired) """ Explanation: varnames can be used rename teh variables End of explanation """
SunnyMarkLiu/Kaggle-House-Prices
XGBoost_and_Parameter_Tuning.ipynb
mit
# The error metric: RMSE on the log of the sale prices. from sklearn.metrics import mean_squared_error def rmse(y_true, y_pred): return np.sqrt(mean_squared_error(y_true, y_pred)) def model_cross_validate(xgb_regressor, cv_paramters, dtrain, cv_folds = 5, early_stopping_rounds = 50, perform_progress=False): """ Cross-validate the xgb model to choose the best param from the given cv_paramters. @param cv_paramters:dict of candidate values to choose the best param from. {'param':[1,2,3]} @param dtrain:xgboost.DMatrix, training data formatted for xgb @param early_stopping_rounds: Activates early stopping. Stops when performance does not improve for some rounds """ # get initial parameters xgb_param = xgb_regressor.get_xgb_params() # save best param best_params = {} best_cvresult = None min_mean_rmse = float("inf") for param, values in cv_paramters.items(): print '===========Tuning parameter:',param,'===========' best_param = values[0] for value in values: # set the param's value xgb_param[param] = value # cv to tune param from values cvresult = xgb.cv(xgb_param, dtrain, num_boost_round=xgb_param['n_estimators'], nfold=cv_folds, metrics='rmse', early_stopping_rounds=early_stopping_rounds) # calculate the mean of several final rmses round_count = cvresult.shape[0] mean_rmse = cvresult.loc[round_count-11:round_count-1,'test-rmse-mean'].mean() if perform_progress: std_rmse = cvresult.loc[round_count-11:round_count-1,'test-rmse-std'].mean() if isinstance(value, int): print "%s=%d CV RMSE : Mean = %.7g | Std = %.7g" % (param, value, mean_rmse, std_rmse) else: print "%s=%f CV RMSE : Mean = %.7g | Std = %.7g" % (param, value, mean_rmse, std_rmse) if mean_rmse < min_mean_rmse: best_param = value best_cvresult = cvresult min_mean_rmse = mean_rmse best_params[param] = best_param # set best param value for xgb params, important xgb_param[param] = best_param print "best ", param, " = ", best_params[param] return best_params, min_mean_rmse, best_cvresult def model_fit(xgb_regressor, train_x, train_y, performCV=True,
printFeatureImportance=True, cv_folds=5): # Perform cross-validation if performCV: xgb_param = xgb_regressor.get_xgb_params() cvresult = xgb.cv(xgb_param, dtrain, num_boost_round=xgb_param['n_estimators'], nfold=cv_folds, metrics='rmse', early_stopping_rounds=50) round_count = cvresult.shape[0] mean_rmse = cvresult.loc[round_count-11:round_count-1,'test-rmse-mean'].mean() std_rmse = cvresult.loc[round_count-11:round_count-1,'test-rmse-std'].mean() print "CV RMSE : Mean = %.7g | Std = %.7g" % (mean_rmse, std_rmse) # fir the train data xgb_regressor.fit(train_x, train_y) # Predict training set train_predictions = xgb_regressor.predict(train_x) mse = rmse(train_y, train_predictions) print("Train RMSE: %.7f" % mse) # Print Feature Importance if printFeatureImportance: feature_importances = pd.Series(xgb_regressor.feature_importances_, train_x.columns.values) feature_importances = feature_importances.sort_values(ascending=False) feature_importances= feature_importances.head(40) feature_importances.plot(kind='bar', title='Feature Importances') plt.ylabel('Feature Importance Score') return xgb_regressor, feature_importances """ Explanation: XGBoost & Parameter Tuning Ref: Complete Guide to Parameter Tuning in XGBoost Parameters Tuning Plan The overall parameters can be divided into 3 categories: General Parameters: Guide the overall functioning Booster Parameters: Guide the individual booster (tree/regression) at each step Learning Task Parameters: Guide the optimization performed In XGBRegressor: class xgboost.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, seed=0, missing=None) End of explanation """ xgb_regressor = XGBRegressor(seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) """ Explanation: Baseline 
XGBRegressor End of explanation """ param_test = {'n_estimators':range(300,400,10)} xgb_regressor = XGBRegressor( learning_rate =0.05, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) best_param, min_mean_rmse, best_cvresult = \ model_cross_validate(xgb_regressor, param_test, dtrain, perform_progress=True) print 'cross-validate best params:', best_param print 'cross-validate min_mean_rmse:', min_mean_rmse xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) """ Explanation: 1. Choose a relatively high learning_rate,optimum n_estimators End of explanation """ param_test = {'max_depth':range(1,6,1), 'min_child_weight':common_num_range(1,2,0.1)} xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) best_param, min_mean_rmse, best_cvresult = \ model_cross_validate(xgb_regressor, param_test, dtrain, perform_progress=True) print 'cross-validate best params:', best_param print 'cross-validate min_mean_rmse:', min_mean_rmse xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) """ Explanation: 2.Fix learning rate and number of estimators for tuning tree-based parameters Tune max_depth and min_child_weight End of explanation """ param_test = {'gamma':[0, 0.1, 0.01, 0.001,0.0001, 
0.00001]} xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) best_param, min_mean_rmse, best_cvresult = \ model_cross_validate(xgb_regressor, param_test, dtrain, perform_progress=True) print 'cross-validate best params:', best_param print 'cross-validate min_mean_rmse:', min_mean_rmse xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, gamma=0.01, subsample=0.8, colsample_bytree=0.8, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) """ Explanation: Tune gamma,Minimum loss reduction required to make a further partition on a leaf node of the tree. End of explanation """ param_test = {'subsample':common_num_range(0.6, 0.9, 0.01), 'colsample_bytree':common_num_range(0.6, 0.9, 0.01)} xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, gamma=0.01, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) best_param, min_mean_rmse, best_cvresult = \ model_cross_validate(xgb_regressor, param_test, dtrain, perform_progress=True) print 'cross-validate best params:', best_param print 'cross-validate min_mean_rmse:', min_mean_rmse xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, gamma=0.01, subsample=0.72, colsample_bytree=0.89, reg_lambda = 0.1, reg_alpha = 0.1, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) param_test2 = {'reg_lambda':common_num_range(0.55, 0.65, 0.01), 'reg_alpha':common_num_range(0.45, 0.6, 0.01)} xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, 
min_child_weight=1.1, gamma=0.01, subsample=0.72, colsample_bytree=0.89, scale_pos_weight=1, objective= 'reg:linear', seed=10) best_param, min_mean_rmse, best_cvresult = \ model_cross_validate(xgb_regressor, param_test2, dtrain, perform_progress=True) print 'cross-validate best params:', best_param print 'cross-validate min_mean_rmse:', min_mean_rmse xgb_regressor = XGBRegressor( learning_rate =0.05, n_estimators = 300, max_depth=3, min_child_weight=1.1, gamma=0.01, subsample=0.72, colsample_bytree=0.89, reg_lambda = 0.61, reg_alpha = 0.53, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) xgb_regressor = XGBRegressor( learning_rate =0.01, n_estimators = 4000, max_depth=3, min_child_weight=1.1, gamma=0.01, subsample=0.72, colsample_bytree=0.89, reg_lambda = 0.61, reg_alpha = 0.53, scale_pos_weight=1, objective= 'reg:linear', seed=10) xgb_regressor, feature_importances = model_fit(xgb_regressor,train_X, train_Y) """ Explanation: Tune subsample and colsample_bytree subsample : Subsample ratio of the training instance. colsample_bytree : Subsample ratio of columns when constructing each tree End of explanation """ xgb_predictions = xgb_regressor.predict(test_X) xgb_predictions = np.expm1(xgb_predictions) submission = pd.DataFrame({ "Id": test_Id, "SalePrice": xgb_predictions }) submission.to_csv("result/xgb_param_tune_predictions_2_13.csv", index=False) print "Done." 
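The parameter grids above call a helper `common_num_range` that is defined earlier in the notebook and is not shown in this excerpt. A plausible pure-Python sketch, consistent with how it is called here (a float-friendly range with the endpoint excluded, like numpy.arange) would be:

```python
def common_num_range(start, stop, step):
    """A float-friendly range: start, start+step, ... up to but excluding stop."""
    values = []
    count = 0
    while True:
        value = start + count * step
        if value >= stop - 1e-12:   # small tolerance avoids float drift at the endpoint
            break
        values.append(round(value, 10))
        count += 1
    return values

grid = common_num_range(0.6, 0.9, 0.01)   # 30 candidate subsample ratios
```

This is only an assumption about the helper's semantics; the notebook's actual definition may differ in rounding details.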
""" Explanation: Final paramters: ``` xgb_regressor = XGBRegressor( learning_rate =0.01, n_estimators = 4000, max_depth=3, min_child_weight=1.1, gamma=0.01, subsample=0.72, colsample_bytree=0.89, reg_lambda = 0.61, reg_alpha = 0.53, scale_pos_weight=1, objective= 'reg:linear', seed=10) ``` End of explanation """ from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(train_X, train_Y, test_size=0.4, random_state=0) from sklearn.linear_model import Ridge, ElasticNet, Lasso def simple_model_cross_validate(alphas, Model, model_name): min_rmse = float('inf') best_alpha = None for alpha in alphas: model = Model(alpha, max_iter=50000).fit(X_train, y_train) model_rmse = rmse(model.predict(X_test), y_test) if model_rmse < min_rmse: best_alpha = alpha min_rmse = model_rmse print model_name, 'best_alpha = ', best_alpha, 'min_rmse = ', min_rmse alphas = common_num_range(0.0001, 0.002, 0.0001) simple_model_cross_validate(alphas, Lasso, 'Lasso') simple_model_cross_validate(alphas, ElasticNet, 'ElasticNet') alphas = common_num_range(25, 50, 1) simple_model_cross_validate(alphas, Ridge, 'Ridge') lasso_model = Lasso(alpha=0.0009, max_iter=50000).fit(X_train, y_train) elastic_net_model = ElasticNet(alpha=0.0019, max_iter=50000).fit(X_train, y_train) ridge_model = Ridge(alpha=41, max_iter=50000).fit(X_train, y_train) lasso_predictions = lasso_model.predict(test_X) lasso_predictions = np.expm1(lasso_predictions) ridge_predictions = ridge_model.predict(test_X) ridge_predictions = np.expm1(ridge_predictions) elastic_net_predictions = elastic_net_model.predict(test_X) elastic_net_predictions = np.expm1(elastic_net_predictions) predictions = (lasso_predictions + ridge_predictions + elastic_net_predictions + xgb_predictions) / 4 plt.subplot(221) plt.plot(lasso_predictions, c="blue") # 0.12818 plt.title('lasso 0.12818') plt.subplot(222) plt.plot(elastic_net_predictions, c="yellow") # 0.12908 plt.title('elastic_net 0.12908') plt.subplot(223) 
plt.plot(ridge_predictions, c="pink") # 0.13161 plt.title('ridge 0.13161') plt.subplot(224) plt.plot(xgb_predictions, c="green") # 0.12167 plt.title('xgb 0.12167') plt.subplots_adjust(wspace=0.5, hspace=0.5) plt.show() plt.subplot(111) plt.plot(predictions, c="red") # 0.12419 plt.title('4 model vote 0.12419') # outlier data np.argwhere(lasso_predictions == lasso_predictions[lasso_predictions > 700000]) # convert outlier data to xgb_predictions[1089] lasso_predictions[1089] = xgb_predictions[1089] ridge_predictions[1089] = xgb_predictions[1089] elastic_net_predictions[1089] = xgb_predictions[1089] lasso_score = 1-0.12818 ridge_score = 1-0.13161 elastic_net_score = 1-0.12908 xgb_score = 1-0.12167 total_score = lasso_score + ridge_score + elastic_net_score + xgb_score predictions = (lasso_score / total_score) * lasso_predictions + \ (ridge_score / total_score) * ridge_predictions + \ (elastic_net_score / total_score) * elastic_net_predictions + \ (xgb_score / total_score) * xgb_predictions plt.subplot(221) plt.plot(lasso_predictions, c="blue") # 0.12818 plt.title('lasso 0.12818') plt.subplot(222) plt.plot(elastic_net_predictions, c="yellow") # 0.12908 plt.title('elastic_net 0.12908') plt.subplot(223) plt.plot(ridge_predictions, c="pink") # 0.13161 plt.title('ridge 0.13161') plt.subplot(224) plt.plot(xgb_predictions, c="green") # 0.12167 plt.title('xgb 0.12167') plt.subplots_adjust(wspace=0.5, hspace=0.5) plt.show() plt.subplot(111) plt.plot(predictions, c="red") # 0.12417 plt.title('4 model vote 0.12417') submission = pd.DataFrame({ "Id": test_Id, "SalePrice": lasso_predictions }) submission.to_csv("result/lasso_predictions_2_13.csv", index=False) submission = pd.DataFrame({ "Id": test_Id, "SalePrice": ridge_predictions }) submission.to_csv("result/ridge_predictions_2_13.csv", index=False) submission = pd.DataFrame({ "Id": test_Id, "SalePrice": elastic_net_predictions }) submission.to_csv("result/elastic_net_predictions_2_13.csv", index=False) submission = 
pd.DataFrame({ "Id": test_Id, "SalePrice": xgb_predictions }) submission.to_csv("result/xgb_predictions_2_13.csv", index=False) submission = pd.DataFrame({ "Id": test_Id, "SalePrice": predictions }) submission.to_csv("result/4_model_vote_predictions_2_13.csv", index=False) print "Done." """ Explanation: Model Voting Ridge, ElasticNet, Lasso, XGBRegressor model voting. End of explanation """ from sklearn.linear_model import LassoCV model_lasso = LassoCV(alphas = [1, 0.1, 0.001, 0.0005]).fit(train_X, train_Y) print rmse(model_lasso.predict(train_X), train_Y) lasso_predictions = model_lasso.predict(test_X) lasso_predictions = np.expm1(lasso_predictions) predictions = (lasso_predictions + xgb_predictions) / 2 submission = pd.DataFrame({ "Id": test_Id, "SalePrice": predictions }) submission.to_csv("result/lasso_xgb_vote_predictions_2_13.csv", index=False) print "Done." lasso_score = 1-0.12818 xgb_score = 1-0.12167 total_score = lasso_score + xgb_score predictions = (lasso_score / total_score) * lasso_predictions + \ (xgb_score / total_score) * xgb_predictions submission = pd.DataFrame({ "Id": test_Id, "SalePrice": predictions }) submission.to_csv("result/lasso_xgb_weighted_vote_predictions_2_13.csv", index=False) print "Done." """ Explanation: Best Vote Score End of explanation """
mannyfin/IRAS
Type C calibrations/TypeC calcs.ipynb
bsd-3-clause
# import a few packages
%matplotlib notebook
from thermocouples_reference import thermocouples
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sympy as sp
from scipy import optimize, interpolate, signal

typeC = thermocouples['C']
# make sure you are in the same dir as the file
# read in the file and drop Na cols
df = pd.read_excel('Type C Table 4-2-18.xlsx')
df.dropna(axis=1, inplace=True)
df.head()
# NIST has values calibrated for T > 273.15 K, let's find the Tref based on these points
# I am using Kelvin for all T. The CJC is quoted in deg C.
tempdf = df.query('T>273.15')
tempdf.head()
# Let's find the T_ref by using this function to take the TypeC mV and the T to find the Tref
def find_Tref(mV, T):
    x = np.arange(290, 301, 0.01)
    x = x[::-1]  # let's reverse x
    i = 1
    while typeC.inverse_KmV(mV, Tref=x[i]) - T >= 0:
        i += 1
        # print(x[i])
    return x[i]
# This isn't the fastest way to do things, but since it's just a small number of rows,
# let's iterate over the mV and T to find Tref
Treflist = []
for idx in tempdf.index:
    # print(idx)
    Treflist.append(find_Tref(mV=tempdf['TypeCmV'][idx], T=tempdf['T'][idx]))
print(['%0.2f' % x for x in Treflist])
# now average the Trefs:
avg_Tref = np.mean(Treflist)
print(avg_Tref)
# I will use this Tref for further calcs
Tref_emf = typeC.emf_mVK(avg_Tref)
print(Tref_emf)
# The Tref_emf value is very close to the value in the table at 273.15 K, so we'll use this value to correct the new values
# The value taken at 273.15 K was during the cooling process and is likely to be less accurate than the room temperature value
# across these multiple observations
# The emf correction for 273.15 K is then: calibrated_emf = raw_emf + Tref_emf
# Let's add this to the df we initially imported...
df['TypeC_calib_mV'] = df['TypeCmV'] + Tref_emf
df.head()
# Compared to the NIST table, we appear to be off at most a little less than 1 deg K
# Had we used the CJC temperature as a proxy for room temp, we would've been even more off.
# compare the TypeCmV using Tref = CJC vs using Tref = 294.67:
print(typeC.emf_mVK(291.22, Tref=(25.26+273.15)))
print(typeC.emf_mVK(291.22, Tref=avg_Tref))
# Let's visualize these results
plt.plot(df['T'], df['TypeC_calib_mV'], 'o', ms=0.5)
plt.xlabel('Temperature (K)')
plt.ylabel('Type C calibrated emf (mV)')
# Interesting. I cooled first to LN2 temperatures and then allowed the sample to heat up slowly by evaporating LN2
# The data agrees fairly well (within ~3 K) between the heating and cooling curves. I didn't heat all the way back up.
# Now let's fit the data to a polynomial using least squares
fit_coeffs = np.polyfit(df['T'], df['TypeC_calib_mV'], deg=10, full=True)
# print(fit_coeffs)
fit_poly = np.poly1d(fit_coeffs[0])
print(fit_poly)
fig, ax = plt.subplots()
ax.plot(df['T'], df['TypeC_calib_mV'], 'o', ms='0.5')
ax.plot(df['T'], fit_poly(df['T']), 'o', ms='0.5')
"""
Explanation: The situation
Type C thermocouples are not NIST calibrated below 273.15 K. For my research-specific scenario, I need to cool my sample (molybdenum) to cryogenic temperatures and also anneal to very high temperatures (~2000 K). There is no single thermocouple with these properties.
The solution
We know that Type K thermocouples are accurate down to cryogenic temperatures. So what I've done here is to read the Type K temperature and record the corresponding Type C mV to create a calibration table. Both thermocouples were spot welded to a large mass very close to one another to ensure the temperature readings will be accurate. Then I will use a polynomial fit to get the low-T calibration for the Type C thermocouple.
End of explanation
"""
# These mV values are also close, within ~0.5 K of one another
print(fit_poly(273.15))        # fit
print(typeC.emf_mVK(273.15))   # NIST value
"""
Explanation: The 10th degree polynomial appears to give the best fit overall.
The lower-order polynomials don't fit the curve especially well below 100 K. Also, the polynomial tracks the heating curve (the slightly higher mV points from 80-150 K) a little more closely than the cooling curve (295 to 80 K). Heating occurred much more slowly than cooling, so I expect it to be more accurate anyway.
End of explanation
"""
x = sp.symbols('x')
polynom = sp.Poly(fit_coeffs[0], x)
# print(fit_coeffs[0])
# find the second derivative of the polynomial
second_derivative = polynom.diff(x, x)
print(second_derivative)
sp.solve(second_derivative, x, domain=sp.S.Reals)
print(second_derivative.evalf(subs={x: 77}))
print(second_derivative.evalf(subs={x: 80}))
print('\n')
print(second_derivative.evalf(subs={x: 120}))
print(second_derivative.evalf(subs={x: 125}))
print('\n')
print(second_derivative.evalf(subs={x: 135}))
print(second_derivative.evalf(subs={x: 145}))
print('\n')
print(second_derivative.evalf(subs={x: 283}))
print(second_derivative.evalf(subs={x: 291}))
first_deriv = polynom.diff(x)
print(first_deriv)
sp.solve(first_deriv, x, domain=sp.S.Reals)
print(first_deriv.evalf(subs={x: 80}))
print(first_deriv.evalf(subs={x: 84}))
"""
Explanation: It's also a good idea to check that the polynomial does not have any inflection points, at least in the region where we intend to use it (77 K - 273.15 K). We can use the second derivative test to see if this will be important for our case.
End of explanation
"""
lowT_df = df.query('T<103')
# Now let's fit the data to an exponential
# print(np.min(lowT_df['TypeC_calib_mV']))
def func(x, a, b, c, d):
    return a * np.exp(b * x - c) + d
fit_coeffs = optimize.curve_fit(func, lowT_df['T'], lowT_df['TypeC_calib_mV'], p0=(1, 1, 90, -3))
print(fit_coeffs)
a = fit_coeffs[0][0]
b = fit_coeffs[0][1]
c = fit_coeffs[0][2]
d = fit_coeffs[0][3]
expfunc = func(lowT_df['T'], a, b, c, d)
fig3, ax3 = plt.subplots()
# ax3.plot(lowT_df['T'], a*np.exp(b*lowT_df['TypeC_calib_mV']), 'o', ms='0.5')
ax3.plot(lowT_df['T'], lowT_df['TypeC_calib_mV'], 'o', ms='0.5')
ax3.plot(lowT_df['T'], expfunc, 'o', ms='0.5', color='r')
"""
Explanation: Well, this is not optimal: there exists a local minimum at 83.86 K in our polynomial fit. We can attempt to fit an exponential curve to this very low temperature data and append this to the polynomial function.
End of explanation
"""
# select data from 103 to 120 K just so we can see the point of intersection a little better
checkT_df = df.query('77<=T<=120')
fig4, ax4 = plt.subplots()
ax4.plot(checkT_df['T'], fit_poly(checkT_df['T']), 'o', ms=0.5, label='polyfit', color='g')
ax4.plot(lowT_df['T'], expfunc, 'o', ms=0.5, label='expfunc', color='r')
ax4.plot(df['T'], df['TypeC_calib_mV'], 'o', ms='0.5', label='Data', color='b')
ax4.set_xlim([80, 110])
ax4.set_ylim([-1.88, -1.75])
ax4.legend()
"""
Explanation: This appears to be a better fit than the polynomial in this regime. Now let's concatenate these two functions and interpolate near the points around 100 K to smooth things out if necessary.
Recall that the two functions are fit_poly and expfunc
End of explanation
"""
def switch_fcn(x, switchpoint, smooth):
    s = 0.5 + 0.5*np.tanh((x - switchpoint)/smooth)
    return s
sw = switch_fcn(df['T'], 103, 0.2)
expfunc2 = func(df['T'], a, b, c, d)
len(expfunc2)
fig, ax = plt.subplots()
ax.plot(df['T'], sw, 'o', ms=0.5)
def combined(switch, low_f1, high_f2):
    comb = (1-switch)*low_f1 + switch*high_f2
    return comb
comb_fcn = combined(sw, expfunc2, fit_poly(df['T']))
fig, ax = plt.subplots()
ax.plot(df['T'], comb_fcn, 'o', ms=0.5)
fig5, ax5 = plt.subplots()
ax5.plot(df['T'], comb_fcn, 'o', ms=2, label='combined')
ax5.plot(checkT_df['T'], fit_poly(checkT_df['T']), 'o', ms=0.5, label='polyfit', color='g')
ax5.plot(lowT_df['T'], expfunc, 'o', ms=0.5, label='expfunc2', color='r')
ax5.set_xlim([80, 110])
ax5.set_ylim([-1.88, -1.75])
ax5.legend()
"""
Explanation: The two fitted plots almost match near 103 K, but there is a little 'cusp'-like shape near the point of intersection. Let's smooth it out. Also, notice that the expfunc fit is a little better than the polyfit.
End of explanation
"""
# low temperature array
low_temp = np.arange(77.15, 273.15, 0.1)
# low_temp_calib = fit_poly(low_temp)
low_temp_calib = combined(switch_fcn(low_temp, 103, 3), func(low_temp, a, b, c, d), fit_poly(low_temp))
# high temperature array
high_temp = np.arange(273.15, 2588.15, 0.1)
high_temp_nist = typeC.emf_mVK(high_temp)
# concatenate and put into a dataframe and output to excel
Temperature = np.concatenate([low_temp, high_temp])
TypeC_mV = np.concatenate([low_temp_calib, high_temp_nist])
typeC_calibration = pd.DataFrame(data=TypeC_mV, index=Temperature, dtype='float32', columns=['Type C (mV)'])
typeC_calibration.index.name = 'Temperature (Kelvin)'
print(typeC_calibration.head())
print(typeC_calibration.tail())
# Uncomment these lines and run the cell to output a calibration table
# write to excel
# xlwrite = pd.ExcelWriter('Type C calibration_low_res.xlsx')
# typeC_calibration.to_excel(xlwrite)
# xlwrite.save()
"""
Explanation: Now I will take the polynomial, take the values from 77 K to 273 K for calibration, and append them to the NIST values
End of explanation
"""
low_calib = combined(switch_fcn(Temperature, 103, 3), func(Temperature, a, b, c, d), fit_poly(Temperature))
high_calib = pd.DataFrame(index=high_temp, data=high_temp_nist, columns=['mV'])
dummy_df = pd.DataFrame(index=low_temp, data=np.zeros(len(low_temp)), columns=['mV'])
concat_high_calib = dummy_df.append(high_calib)
print(concat_high_calib.loc[272.9:273.5])
freezept_calib = combined(switch_fcn(Temperature, 273.15, 0.45), low_calib, concat_high_calib['mV'])
freezept_calib.index.name = 'T'
freezept_calib.loc[272.9:273.5]
"""
Explanation: But wait! Suppose we also want to fix that discontinuity at 273.15 K? We can apply the same procedure as before.
1. Apply a tanh(x) function: switch = 0.5 + 0.5*np.tanh((x - switchpoint)/smooth)
2. Combine both functions: comb = (1-switch)*f1 + switch*f2
End of explanation
"""
fig, ax = plt.subplots()
freezept_calib.plot(ax=ax, label='combined')
ax.plot(Temperature, low_calib, label='low calib')
ax.plot(Temperature, concat_high_calib, label='high_calib')
ax.set_ylim([-.04, 0.04])
ax.set_xlim([268, 277])
ax.legend()
print(signal.argrelmin(freezept_calib.values))
# print(signal.argrelextrema(freezept_calib.values, np.less))
# print(signal.argrelextrema(freezept_calib.values, np.greater))
# No local maxima!
# Uncomment these lines and run the cell to output a calibration table
# write to excel
xlwrite = pd.ExcelWriter('Type C calibration_corrected.xlsx')
freezept_calib.to_excel(xlwrite)
xlwrite.save()
"""
Explanation: The prior value at 273.15 K was -0.00867, when the actual value is 0. After the smoothing, the new value is -0.004336, about half of the prior value. Some of the values a little after 273.15 K do not match the NIST table exactly, but it is much better than the jump that we had before.
End of explanation
"""
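The switch_fcn/combined smoothing used above is a generic trick. A standalone sketch (plain math, no pandas, with toy functions standing in for the exponential and polynomial fits) showing that the blend reduces to the first function well below the switchpoint and to the second well above it:

```python
import math

def switch(x, switchpoint, smooth):
    # Smooth step from 0 to 1 centred on switchpoint, width set by smooth.
    return 0.5 + 0.5 * math.tanh((x - switchpoint) / smooth)

def blend(x, f_low, f_high, switchpoint, smooth):
    s = switch(x, switchpoint, smooth)
    return (1.0 - s) * f_low(x) + s * f_high(x)

# Toy functions standing in for expfunc and fit_poly.
low_side  = blend(80.0,  lambda x: -1.0, lambda x: 2.0, 103.0, 0.2)
high_side = blend(130.0, lambda x: -1.0, lambda x: 2.0, 103.0, 0.2)
midpoint  = blend(103.0, lambda x: -1.0, lambda x: 2.0, 103.0, 0.2)
```

At the switchpoint itself the blend is the exact average of the two functions, which is why the cusp disappears.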
IanHawke/ET-NumericalMethods-2016
notebooks/02-horizon-finding.ipynb
mit
import numpy from matplotlib import pyplot %matplotlib notebook def horizon_RHS(H, theta, z_singularities): """ The RHS function for the apparent horizon problem. Parameters ---------- H : array vector [h, dh/dtheta] theta : double angle z_singularities : array Location of the singularities on the z axis; non-negative Returns ------- dHdtheta : array RHS """ assert(numpy.all(numpy.array(z_singularities) >= 0.0)), "Location of singularities cannot be negative" h = H[0] dh = H[1] psi = 1.0 dpsi_dr = 0.0 dpsi_dtheta = 0.0 for z in z_singularities: distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) - z)**2) psi += 0.5/distance dpsi_dr -= 0.5*(h-z*numpy.cos(theta))/distance**3 dpsi_dtheta -= 0.5**h*z*numpy.sin(theta)/distance**3 # Apply reflection symmetry if z > 0.0: distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) + z)**2) psi += 0.5/distance dpsi_dr -= 0.5*(h+z*numpy.cos(theta))/distance**3 dpsi_dtheta += 0.5**h*z*numpy.sin(theta)/distance**3 C2 = 1.0 / (1.0 + (dh / h)**2) # Impose that the term involving cot(theta) vanishes on axis. if (abs(theta) < 1e-16) or (abs(theta - numpy.pi) < 1e-16): cot_theta_dh_C2 = 0.0 else: cot_theta_dh_C2 = dh / (numpy.tan(theta) * C2) dHdtheta = numpy.zeros_like(H) dHdtheta[0] = dh dHdtheta[1] = 2.0*h - cot_theta_dh_C2 + 4.0*h**2/(psi*C2)*(dpsi_dr - dpsi_dtheta*dh/h**2) + 3.0*dh**2/h return dHdtheta """ Explanation: Apparent horizons We're now going to use finite differences to find a black hole apparent horizon. The spacetime we're going to look at is simplified: $3+1$ split (we're looking at one slice, so one instant in "time"); axisymmetric (so we can consider only two dimensions in space, using $r, \theta$); "bitant" or "reflection" symmetric (so we only consider $\theta \in [0, \pi/2]$); all singularities have bare mass $1$; time-symmetric (the extrinsic curvature vanishes). We then compute the expansion of outgoing null geodesics, and look for where this vanishes. 
The surface with radius $h(\theta)$ where this occurs is the apparent horizon. With our assumptions, $h$ obeys the boundary value problem $$ \frac{d^2 h}{d \theta^2} = 2 h + \frac{3}{h} \left( \frac{d h}{d \theta} \right)^2 + f \left( \theta, h, \frac{d h}{d \theta} \right), \qquad \frac{d h}{d \theta} ( \theta = 0 ) = 0 = \frac{d h}{d \theta} ( \theta = \pi/2 ). $$ The function $f$ encodes the spacetime effects due to the singularities. To solve this problem we convert to first order form. Introduce the vector $$ {\bf H} = \begin{pmatrix} h \ \frac{d h}{d \theta} \end{pmatrix}. $$ Then we have the problem $$ \frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad H_2(\theta = 0) = 0 = H_2(\theta = \pi/2). $$ We'll give the entire right-hand-side as code: End of explanation """ def euler_step(Hi, theta_i, dtheta, z_singularity): """ Euler's method - one step """ # Ntheta = 100 z_singularity = [0.0] theta = numpy.linspace(0.0, numpy.pi/2.0, Ntheta) dtheta = theta[1] - theta[0] H = numpy.zeros((2, Ntheta)) H[:, 0] = [0.5, 0.0] for i in range(Ntheta-1): H[:, i+1] = euler_step(H[:, i], theta[i], dtheta, z_singularity) pyplot.figure() pyplot.polar(theta, H[0,:]) pyplot.show() """ Explanation: We now need to solve the boundary value problem. We'll do this using shooting. Shooting Initial Value Problems If we knew the initial radius of the horizon, $h(\theta = 0) = H_1(\theta = 0) = h_0$, we could solve the initial value problem $$ \frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad {\bf H}(\theta = 0) = \begin{pmatrix} h_0 \ 0 \end{pmatrix}. $$ For example, the simple Schwarzschild black hole will have $h_0 = 1/2$, in this slicing. To solve the initial value problem we can re-use our finite differencing algorithms. 
For example, we evaluate the initial value problem equation at $\theta_i$ using forward differencing, to get \begin{align} \left. \frac{d}{d \theta} {\bf H} \right|_{\theta = \theta_i} & \approx \frac{1}{\Delta \theta} \left( {\bf H}^{(i+1)} - {\bf H}^{(i)} \right) \ & = {\bf F}({\bf H}^{(i)}, \theta_i), \end{align} where we have denoted ${\bf H}(\theta_i) \equiv {\bf H}^{(i)}$. We then re-arrange this to get Euler's method $$ {\bf H}^{(i+1)} = {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i). $$ We can use this to solve for the Schwarzschild case: End of explanation """ def rk2_step(Hi, theta_i, dtheta, z_singularity): """ RK2 method - one step """ # H = numpy.zeros((2, Ntheta)) H[:, 0] = [0.5, 0.0] for i in range(Ntheta-1): H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity) pyplot.figure() pyplot.polar(theta, H[0,:]) pyplot.show() """ Explanation: We see that this has worked nicely. However, Euler's method is very inaccurate on more complex problems, as it's only first order convergent. We would like to use a higher order method. Runge-Kutta methods When looking at central differencing earlier we used information from both sides of the point where we took the derivative. This gives higher accuracy, but isn't helpful in the initial value case, where we don't have half the information. Instead, we use many Euler steps combined. Each one gives an approximation to "future" data, which can be used to approximate the derivative at more locations. For example, the Euler step above starts from ${\bf H}^{(i)}$ and computes ${\bf F}^{(i)}$ to approximate ${\bf H}^{(i+1)}$. We can use this approximation to give us ${\bf F}^{(i+1)}$. Now, a more accurate solution would be $$ {\bf H}^{(i+1)} = {\bf H}^{(i)} + \int_{\theta_i}^{\theta_{i+1}} \text{d} \theta \, {\bf F}({\bf H}, \theta). $$ In Euler's method we are effectively representing the value of the integral by the value of the integrand at the start, multiplied by the width $\Delta \theta$. 
We could now approximate it by the average value of the integrand, $({\bf F}^{(i)} + {\bf F}^{(i+1)})/2$, multiplied by the width $\Delta \theta$. This gives the algorithm \begin{align} {\bf H}^{(p)} &= {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i), \ {\bf H}^{(i+1)} &= {\bf H}^{(i)} + \frac{\Delta \theta}{2} \left( {\bf F}({\bf H}^{(i)}, \theta_i) + {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right) \ &= \frac{1}{2} \left( {\bf H}^{(i)} + {\bf H}^{(p)} + \Delta \theta \, {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right). \end{align} The final re-arrangement ensures we do not have to store or re-compute ${\bf F}^{(i)}$. This is one of the Runge-Kutta methods. This version is second order accurate, and a big improvement over Euler's method. End of explanation """ initial_guesses = numpy.linspace(0.4, 0.6, 10) solutions = [] z_singularity = [0.0] for h0 in initial_guesses: # pyplot.figure() for r in solutions: pyplot.polar(theta, r) pyplot.show() """ Explanation: Root finding We still can't find a horizon unless we know the initial radius. However, there is a way around this. Let's see what happens if we compute in the Schwarzschild case, using the wrong initial data: End of explanation """ initial_guesses = numpy.linspace(0.4, 0.6, 100) dhdtheta_end = numpy.zeros_like(initial_guesses) z_singularity = [0.0] for guess, h0 in enumerate(initial_guesses): # pyplot.figure() pyplot.plot(initial_guesses, dhdtheta_end) pyplot.xlabel(r"$h_0$") pyplot.ylabel(r"$dh/d\theta(\pi/2)$") pyplot.show() """ Explanation: We see that the the surfaces that start with the radius too small are curving back in; their derivative is negative. The surfaces with radius too large are diverging; their derivative is positive. We know that the true solution has vanishing derivative. Let's explicitly plot the derivative at the endpoint. End of explanation """ def residual(h0, z_singularities): """ The residual function for the shooting method. 
""" # """ Explanation: We see that the derivative vanishes precisely where the horizon should be, exactly as expected. This also gives us a way of solving for the apparent horizon. We want to solve the equation $$ R(h_0) = 0. $$ The function $R$ is given by $R(h_0) = H_2(\pi/2 ; h_0)$. In other words, we compute the solution ${\bf H}$ given the initial guess $h_0$ for the unknown initial radius $H_1(0)$; from the solution for ${\bf H}$ at $\theta = \pi/2$, set $R(h_0) = H_2$. We can code this residual function as End of explanation """ def secant(R, x0, x1, args, tolerance = 1e-10): """ Secant method """ # """ Explanation: Finally, we need to find the root of this equation. Secant method Problems where we are given an algebraic, nonlinear function ${\bf R}$ and asked to find ${\bf x}$ such that ${\bf R}({\bf x}) = {\bf 0}$ are nonlinear root-finding problems. Many standard solution methods are based on Newton's algorithm: Guess the root to be ${\bf x}^{(0)}$, and set $n=0$; Compute the tangent planes to ${\bf R}$ at ${\bf x}^{(n)}$; Find where these planes intersect zero, and set this to be ${\bf x}^{(n+1)}$; If not converged to root, go to 2. Computing the derivative for the tangent in step 2 is slow; instead we use finite differencing again. In one dimension, Newton's method is $$ x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)})}{R'(x^{(n)})}. $$ Replacing the derivative with a finite difference gives $$ x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)}) \left( x^{(n)} - x^{(n-1)} \right)}{R(x^{(n)}) - R(x^{(n-1)})}. $$ This is the secant method. It's much easier to implement, but requires two initial guesses. End of explanation """ h0 = secant(residual, 0.4, 0.6, z_singularity) print("Computed initial radius is {}".format(h0)) """ Explanation: We apply this to the Schwarzschild case: End of explanation """ z_singularity = [0.75] """ Explanation: And from this we can compute the correct horizon. What happens if we get the guess wildly wrong? 
In this simple case it will nearly always converge to the "right" answer, but in general a poor initial guess means the algorithm - or most root-finding algorithms! - won't converge. Put it together We can now compute the more interesting binary black hole case, where the singularities are at $z = \pm 0.75$. Using the symmetry, we need: End of explanation """ initial_guesses = numpy.linspace(1.2, 1.4, 100) dhdtheta_end = numpy.zeros_like(initial_guesses) z_singularity = [0.75] for guess, h0 in enumerate(initial_guesses): # pyplot.figure() pyplot.plot(initial_guesses, dhdtheta_end) pyplot.xlabel(r"$h_0$") pyplot.ylabel(r"$dh/d\theta(\pi/2)$") pyplot.show() """ Explanation: We can now check what sorts of initial radius $h_0$ will be needed for the horizon: End of explanation """ h0 = secant(residual, 1.26, 1.3, z_singularity) """ Explanation: We see the algorithms are having problems for small radii, but that it suggests that the correct answer is roughly $h_0 \in [1.26, 1.3]$. So we use root-finding: End of explanation """ z_singularity = [0.75] H0 = [h0, 0.0] H = numpy.zeros((2,Ntheta)) H[:, 0] = [h0, 0.0] for i in range(Ntheta-1): H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity) pyplot.figure() pyplot.polar(theta, H[0,:]) pyplot.show() """ Explanation: And finally, we compute and plot the horizon surface. End of explanation """
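The `secant` function above was left as a stub in this extract; a minimal self-contained version consistent with its signature might look like the following. The `max_steps` cap and the convergence test on successive iterates are our own choices for the sketch, not taken from the original:

```python
def secant(R, x0, x1, args, tolerance=1e-10, max_steps=100):
    """Find x such that R(x, args) = 0, starting from two initial guesses."""
    R0, R1 = R(x0, args), R(x1, args)
    for _ in range(max_steps):
        if abs(x1 - x0) < tolerance:
            break
        # Newton's update with the derivative replaced by the
        # finite difference (R1 - R0) / (x1 - x0).
        x2 = x1 - R1 * (x1 - x0) / (R1 - R0)
        x0, R0 = x1, R1
        x1, R1 = x2, R(x2, args)
    return x1

# Quick check on a problem with a known root (args is unused here):
root = secant(lambda x, args: x**2 - 2.0, 1.0, 2.0, None)
print(root)
```

For the horizon problem this would be called exactly as in the text, `secant(residual, 0.4, 0.6, z_singularity)`, with the residual supplying the endpoint derivative.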
danman10000/ics355_demos
ICS355_DES_Demo.ipynb
gpl-3.0
key_size_in_bits=112
"{:,}".format(2**key_size_in_bits)
"""
Explanation: DES Crypt Demo
Required
pip install pycrypto
pip install crcmod
Example 1: Key Combinations
This example shows the number of key combinations based on the number of bits in a readable format
End of explanation
"""
from Crypto.Cipher import DES
from Crypto import Random
import binascii
#Set the Initialization Vector (IV)
iv = Random.new().read(DES.block_size)
#Set the plaintext to be encrypted. Its length must be a multiple of the 8-byte DES block size
plaintext = b'ICS355 DES Test!' #16 bytes = two DES blocks
#Set the key. It must be 8 bytes
skey="01fe01fe01fe01fe" #8 bytes
#The library requires a string byte array
key= str(bytearray.fromhex(skey))
#Create an instance of the cipher
cipher = DES.new(key, DES.MODE_OFB, iv)
#Encrypt the message. You all need to share the IV. You can append this to the message
msg = iv + cipher.encrypt(plaintext)
#msg = cipher.encrypt(plaintext)
#Print the encrypted message. This is what would be sent.
print "Encrypted message:\n", binascii.hexlify(msg)
#Get the IV
iv =msg[:DES.block_size]
#Get the encrypted message
encryptedmsg=msg[DES.block_size:]
#Create a new instance of the encryption library on the remote box
cipher = DES.new(key, DES.MODE_OFB, iv)
#Decrypt the message
mydecrypt= cipher.decrypt(encryptedmsg)
#Decrypted message
print "Decrypted message:\n",mydecrypt
"""
Explanation: Example 2: DES
This example will show the normal usage of DES in a Python script.
Reference: http://www.umich.edu/~x509/ssleay/des-weak.html
End of explanation
"""
from Crypto.Cipher import DES
from Crypto import Random
import binascii
iv = Random.new().read(DES.block_size)
skey="ffffffffffffffff"
key= str(bytearray.fromhex(skey))
cipher = DES.new(key, DES.MODE_OFB, iv)
#msg = iv + cipher.encrypt(plaintext)
msg = cipher.encrypt(plaintext)
print "Encrypted message:\n", binascii.hexlify(msg)
skey2="0000000000000000"
#skey2="0101010101010101" #You can try this one too! 
key2= str(bytearray.fromhex(skey2)) cipher2 = DES.new(key2, DES.MODE_OFB, iv) mydecrypt= cipher2.decrypt(msg) print "Decrypted message:\n",mydecrypt """ Explanation: Example 3: DES Weak Keys This shows an example of how not to use DES. A key of all 0s or all 1s results in a non-unique derived key. Unfortunately we are unable to see the internal keys since the Python library hides it from us. Note the second half of the output was decrypted with a different key. End of explanation """ from Crypto.Cipher import DES from Crypto import Random import binascii #IV not needed for ECB mode iv = Random.new().read(DES.block_size) plaintext = b'ICS355 DES Test!' #Multiple of 16 desmode=DES.MODE_ECB #[DES.MODE_CBC, DES.MODE_CFB, DES.MODE_ECB, DES.MODE_OFB, DES.MODE_OPENPGP]: #DES.MODE_CTR requires a counter SemiWeakKeys=[["01FE 01FE 01FE 01FE", "FE01 FE01 FE01 FE01"], ["1FE0 1FE0 0EF1 0EF1", "E01F E01F F10E F10E"], ["01E0 01E0 01F1 01F1", "E001 E001 F101 F101"], ["1FFE 1FFE 0EFE 0EFE", "FE1F FE1F FE0E FE0E"], ["011F 011F 010E 010E", "1F01 1F01 0E01 0E01"], ["E0FE E0FE F1FE F1FE", "FEE0 FEE0 FEF1 FEF1"]] for key1,key2 in SemiWeakKeys: skey=key1 key= str(bytearray.fromhex(skey)) cipher = DES.new(key, desmode) #msg = iv + cipher.encrypt(plaintext) ctext = cipher.encrypt(plaintext) print "Key1 Ciphertext:\n",binascii.hexlify(ctext) skey=key2 key= str(bytearray.fromhex(skey)) cipher = DES.new(key, desmode) msg= cipher.encrypt(ctext) print "Key2: Plaintext:\n",msg print "\n" """ Explanation: Example 4: DES Semi-Weak Keys There are a handful of keys that can be used interchangeably which is something you never want to happen. For a symmetric algorithm you never want to see this situation: EK1(EK2(x))=x Reference: http://www.umich.edu/~x509/ssleay/des-weak.html and http://crypto.stackexchange.com/questions/12214/can-you-explain-weak-keys-for-des Note: Must use ECB mode for demo End of explanation """
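The key-counting arithmetic from Example 1 extends to a rough brute-force estimate for single DES, whose effective key length is 56 bits (8 of the 64 key bits are parity). The search rate of one billion keys per second below is an assumed figure for illustration, not a benchmark:

```python
key_bits = 56                       # effective key length of single DES
total_keys = 2 ** key_bits
keys_per_second = 10 ** 9           # assumed attacker search rate
seconds_per_year = 3600 * 24 * 365
# On average a brute-force search tries half the key space before hitting the key.
expected_years = (total_keys / 2.0) / keys_per_second / seconds_per_year
print("{:,} keys, about {:.1f} years on average".format(total_keys, expected_years))
```

At this assumed rate single DES falls in roughly a year of searching, which is why the 112-bit figure from Example 1 (two-key triple DES) matters in practice.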
maestrotf/pymepps
examples/example_plot_thredds.ipynb
gpl-3.0
import numpy as np
import matplotlib.pyplot as plt
import pymepps
"""
Explanation: Load a thredds dataset
In the following example we will load a thredds dataset from the Norwegian met.no thredds server.
End of explanation
"""
metno_path = 'http://thredds.met.no/thredds/dodsC/meps25files/' \
    'meps_det_pp_2_5km_latest.nc'
metno_ds = pymepps.open_model_dataset(metno_path, 'nc')
"""
Explanation: The first step is to load the dataset. This is performed with pymepps.open_model_dataset. The NetCDF4 backend also supports OPeNDAP paths, so we can specify nc as the data type.
End of explanation
"""
print(metno_ds)
"""
Explanation: The resulting dataset is a SpatialDataset. The dataset has several methods to load an xr.DataArray from the path. It is also possible to print the content of the dataset. The content contains the dataset type, the number of file handlers within the dataset and all available data variables.
End of explanation
"""
metno_t2m = metno_ds.select('air_temperature_2m')
print(metno_t2m)
metno_t2m.isel(validtime=0).plot()
plt.show()
"""
Explanation: The next step is to select/extract a variable from the Dataset. We will select the air temperature at 2 metre height and print the content of the resulting data.
End of explanation
"""
print(metno_t2m.pp.grid)
"""
Explanation: We can see that the resulting data is a normal xarray.DataArray, so all of the DataArray methods can be used. The DataArray is extended with an accessor, and its coordinates are normalized. We can access the accessor with metno_t2m.pp. The main methods of the accessor provide grid handling, so our next step is to explore the grid of the DataArray.
End of explanation
"""
hh_bounds = [9, 54, 11, 53]
t2m_hh = metno_t2m.pp.sellonlatbox(hh_bounds)
print(t2m_hh.pp.grid)
print(t2m_hh)
"""
Explanation: We can see that the grid has a defined projection. In our next step we will slice out an area around Hamburg. 
We will see that a new DataArray with a new grid is created.
End of explanation
"""
res = 2500 # model resolution in metre
earth_radius = 6371000 # Earth radius in metre
res_deg = np.round(res*360/(earth_radius*2*np.pi), 4)
# rounded model resolution equivalent in degrees if it were on the equator
print(res_deg)
"""
Explanation: We sliced a longitude and latitude box around the given grid, so we sliced the data in a longitude and latitude projection. Our original grid was in another projection with unstructured lat/lon coordinates, so it is not possible to create a structured grid based on this slice, and the grid becomes an unstructured grid. In the next step we will show the remapping capabilities of the pymepps grid structure. As we have seen, the structured grid could not be maintained when slicing the data, so in the next step we will create a structured LonLatGrid from scratch. After the grid building we will remap the raw DataArray based on the new grid. The first step is to calculate the model resolution in degrees.
End of explanation
"""
grid_dict = dict(
    gridtype='lonlat',
    xsize=int((hh_bounds[2]-hh_bounds[0])/res_deg),
    ysize=int((hh_bounds[1]-hh_bounds[3])/res_deg),
    xfirst=hh_bounds[0],
    xinc=res_deg,
    yfirst=hh_bounds[3],
    yinc=res_deg,
)
"""
Explanation: Our next step is to build the grid. The grid implementation is inspired by the climate data operators, so to build the grid we will use the same format.
End of explanation
"""
builder = pymepps.GridBuilder(grid_dict)
hh_grid = builder.build_grid()
print(hh_grid)
"""
Explanation: Now we have created the grid. The next step is to remap the raw DataArray to the new grid. We will use the nearest neighbour approach to remap the data. 
End of explanation """ t2m_hh_remapped.isel(validtime=0).plot() plt.show() """ Explanation: To plot the data in a map, we have to slice the data. We will select the first validtime as plotting parameter. End of explanation """ # sphinx_gallery_thumbnail_number = 3 metno_t2m.pp.remapbil(hh_grid).isel(validtime=0).plot() plt.show() """ Explanation: In the map around Hamburg we could see the north and baltic sea in the top edges. But with the nearest enighbour approach we retain some of the sharp edges at the map. Our last step is a second remap plot, this time with a bilinear approach. End of explanation """
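As a sanity check on the target grid defined above, we can redo the resolution conversion with plain `math` and confirm the cell counts that `grid_dict` implies; this just re-derives numbers already used in the example:

```python
import math

res = 2500                      # model resolution in metres
earth_radius = 6371000          # Earth radius in metres
# Degrees of arc spanned by 2.5 km at the equator, rounded as above.
res_deg = round(res * 360 / (2 * math.pi * earth_radius), 4)
hh_bounds = [9, 54, 11, 53]     # lon_min, lat_max, lon_max, lat_min
xsize = int((hh_bounds[2] - hh_bounds[0]) / res_deg)
ysize = int((hh_bounds[1] - hh_bounds[3]) / res_deg)
print(res_deg, xsize, ysize)
```

So the bounding box around Hamburg becomes an 88 x 44 cell lon-lat grid at roughly the native 2.5 km model resolution.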
mjbommar/cscs-530-w2016
notebooks/basic-random/001-basic_distributions.ipynb
bsd-2-clause
# Imports
import numpy
import scipy.stats
import matplotlib.pyplot as plt
# Setup seaborn for plotting
import seaborn; seaborn.set()
# Import widget methods (ipywidgets replaces the old IPython.html.widgets module)
from ipywidgets import *
"""
Explanation: CSCS530 Winter 2015
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2015)
Course ID: CMPLXSYS 530
Course Title: Computer Modeling of Complex Systems
Term: Winter 2015
Schedule: Wednesdays and Fridays, 1:00-2:30PM ET
Location: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab)
Teachers: Mike Bommarito and Sarah Cherng
View this repository on NBViewer
Basic Distributions
From page 5 of Thinking Complexity:
Deterministic → stochastic: Classical models are usually deterministic, which may reflect underlying philosophical determinism, discussed in Chapter 6; complex models often feature randomness.
In order to incorporate randomness into our models, we need to understand basic distributions and learn how to work with them in Python.
The notebook below covers the basic shape, parameters, and sampling of the following distributions:
* uniform discrete
* uniform continuous
* normal/Gaussian ("bell curve")
* Poisson
End of explanation
"""
numpy.random.uniform(-1, 1, size=3)
"""
Explanation: Continuous Uniform distribution
The continuous uniform distribution is one of the most commonly utilized distributions. As its name implies, it is characterized by a uniform or equal probability of any point being drawn from the distribution. This is clear from the probability density function (PDF) below:
We can sample a continuous uniform distribution using the numpy.random.uniform method below.
Draw a continuous uniform sample
In the example below, we draw size=3 samples from a continuous uniform distribution with range from -1 to +1. 
End of explanation """ %matplotlib inline # Sample random data r = numpy.random.uniform(0, 1, size=100) p = plt.hist(r) """ Explanation: Visualize a continuous uniform sample In the example below, we will visualize the distribution of size=100 continous uniform samples. This particular type of visualization is called a histogram. End of explanation """ def plot_continuous_uniform(range_min=0, range_max=1, samples=100): """ A continuous uniform plotter that takes min/max range and sample count. """ # Check assertions assert (range_min < range_max) assert (samples > 1) # Sample random data r = numpy.random.uniform(range_min, range_max, samples) p = plt.hist(r) # Call the ipython interact() method to allow us to explore the parameters and sampling interact(plot_continuous_uniform, range_min=(0, 10), range_max = (1, 20), samples = (2, 1000)) """ Explanation: Interactive exploration of continuous uniform distribution In the interactive tool below, we will explore how a random sample drawn from the continuous uniform distribution varies with: minimum and maximum of range (range_min, range_max) number of samples drawn (samples) Try varying the number of samples in the single digits, then slowly increase the number to 1000. How does the "smoothness" of the average sample vary? Compare to the probability density function figure above. End of explanation """ numpy.random.randint(0, 10, size=3) """ Explanation: Discrete Uniform distribution The discrete uniform distribution is another commonly utilized distributions. As its name implies, it is characterized by a uniform or equal probability of any point being drawn from the distribution. This is clear from the probability density function (PDF) below: We can sample a discrete uniform distribution using the numpy.random.randint method below. Draw a discrete uniform sample In the example below, we draw size=3 samples from a discrete uniform distribution with range from 0 to 10. 
End of explanation
"""
# Sample random data
r = numpy.random.randint(0, 10, size=100)
p = plt.hist(r)
"""
Explanation: Visualize a discrete uniform sample
In the example below, we will visualize the distribution of size=100 discrete uniform samples.
End of explanation
"""
def plot_discrete_uniform(range_min=0, range_max=10, samples=100):
    """
    A discrete uniform plotter that takes min/max range and sample count.
    """
    # Check assertions
    assert (range_min < range_max)
    assert (samples > 1)
    # Sample random data
    r = numpy.random.randint(range_min, range_max, samples)
    p = plt.hist(r)
# Call the ipython interact() method to allow us to explore the parameters and sampling
interact(plot_discrete_uniform, range_min=(-10, 10), range_max = (-9, 20), samples = (2, 1000))
"""
Explanation: Interactive exploration of discrete uniform distribution
In the interactive tool below, we will explore how a random sample drawn from the discrete uniform distribution varies with:
minimum and maximum of range (range_min, range_max)
number of samples drawn (samples)
Try varying the number of samples in the single digits, then slowly increase the number to 1000. How does the "smoothness" of the average sample vary? Compare to the probability density function figure above.
End of explanation
"""
numpy.random.normal(10, 3, size=3)
"""
Explanation: Normal distribution
The normal distribution, commonly referred to as the "bell curve", is one of the most commonly occurring continuous distributions in nature. It is characterized by its symmetry and its dispersion parameter, referred to as standard deviation. 68% of the distribution's probability mass falls within +/-1 standard deviation, and 95% of the probability mass falls within +/-2 standard deviations. The normal distribution's probability density function (PDF) is below:
We can sample a normal distribution using the numpy.random.normal method below. 
Draw a normal sample In the example below, we draw size=3 samples from a normal distribution with mean=10 and standard deviation sigma=3. End of explanation """ # Sample random data r = numpy.random.normal(10, 3, size=100) p = plt.hist(r) """ Explanation: Visualize a normal sample In the example below, we will visualize the distribution of size=100 normal samples. End of explanation """ def plot_normal(mean=0, standard_deviation=10, samples=100, window_range=100): # Check assertions assert (standard_deviation > 0) assert (samples > 1) # Sample random data and visualization r = numpy.random.normal(mean, standard_deviation, size=samples) p = plt.hist(r, normed=True) # Calculate the kernel density estimate and overplot it on the histogram kernel = scipy.stats.gaussian_kde(r) r_range = numpy.linspace(min(r), max(r)) plt.plot(r_range, kernel(r_range)) # Set the x limits plt.xlim(min(-window_range, min(r)), max(window_range, max(r))) # Create the widget interact(plot_normal, mean=(-25, 25), standard_deviation = (1, 100), samples = (2, 1000), window_range = (1, 100)) """ Explanation: Interactive exploration of normal distribution In the interactive tool below, we will explore how a random sample drawn from the normal distribution varies with: mean standard deviation number of samples drawn (samples) In addition to a histogram, this tool also shows a kernel density estimate (KDE). We can use KDEs to provide us with estimates of probability density functions, either for analysis and comparison or to use in further generative contexts to sample new values. End of explanation """ numpy.random.poisson(5, size=3) """ Explanation: Poisson distribution The Poisson distribution is, in Wikipedia's words: a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event. 
The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume. The Poisson distribution's probability density function (PDF) is below: We can sample a normal distribution using the numpy.random.poisson method below. Draw a Poisson sample In the example below, we draw size=3 samples from a Poisson distribution with rate=5. End of explanation """ # Sample random data r = numpy.random.poisson(5, size=100) p = plt.hist(r) """ Explanation: Visualize a Poisson sample In the example below, we will visualize the distribution of size=100 Poisson samples. End of explanation """ def plot_poisson(rate=5, samples=100, window_range=20): # Check assertions assert (rate > 0) assert (samples > 1) # Sample random data r = numpy.random.poisson(rate, size=samples) f = plt.figure() p = plt.hist(r, normed=True) # Calculate the KDE and overplot kernel = scipy.stats.gaussian_kde(r) r_range = numpy.linspace(min(r), max(r)) plt.plot(r_range, kernel(r_range)) # Set the x limits plt.xlim(-1, max(max(r), window_range)) # Create the ipython widget interact(plot_poisson, rate=(1, 100), samples = (2, 10000), window_range = (1, 100)) """ Explanation: Interactive exploration of Poisson distribution In the interactive tool below, we will explore how a random sample drawn from the Poisson distribution varies with: rate number of samples drawn (samples) In addition to a histogram, this tool again shows a kernel density estimate (KDE). Compare the KDE to the probability density function above. End of explanation """
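The 68%/95% figures quoted for the normal distribution earlier can also be checked empirically. This sketch uses only the standard library's `random.gauss` so it runs without numpy; the seed and sample size are arbitrary choices:

```python
import random

random.seed(1)                      # fixed seed for reproducibility
mean, sd, n = 0.0, 10.0, 100000
sample = [random.gauss(mean, sd) for _ in range(n)]
# Fraction of draws within one and two standard deviations of the mean.
within_1sd = sum(1 for x in sample if abs(x - mean) <= 1 * sd) / float(n)
within_2sd = sum(1 for x in sample if abs(x - mean) <= 2 * sd) / float(n)
print(within_1sd, within_2sd)
```

With 100,000 draws the two fractions land very close to the theoretical 0.6827 and 0.9545, which is a useful check whenever you swap in a new random number source.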
h-mayorquin/camp_india_2016
tutorials/Spatial Coding/Phase Precession in Place Cells.ipynb
mit
######################################### ### Implementing model from Geisler et al 2010 ### Place cell maps: Rate based. ######################################### from numpy import * from scipy import * from pylab import * import matplotlib.cm as cmx import matplotlib.colors as colors from scipy import signal as sg import numpy as np from scipy.fftpack import fft import peakutils.peak as pk ##For colour bar: makecmap.py is in the same folder import makecmap as mc ##For checking freq is in theta band (4-12 Hz): import checktheta as ct ##For generating spiketimes: import spikegen as spg %matplotlib inline """ Explanation: Import necessary modules End of explanation """ #### Firing Rate maps (in time) or Firing Probabilities for individual cells: def SingleNeuronRateMap(t, f0=8.6, tau_n=0.075*1, sigma=0.5/sqrt(2), T_n=1): ''' Sigmoidal rate function modulated by Gaussian envelope. t : Time array f0 (=8.6 Hz) : Intrinsic theta frequency/frequency of modulation of single cell firing tau_n (=0.075) : Theta phase of neuron sigma (=0.5/sqrt(2)) : Half-Width of gaussian envelope T_n (=1) : Centre of place field of cell/Time of maximal firing Gaussian envelope: 1/sqrt(pi*sigma) * exp( -(t-Tn)^2/sigma^2 ) Firing rate is oscillatory(at freq f0) and phase shift is 2*pi*f0*tau_n ''' return ( 1+exp(2j*pi*f0*(t-tau_n)) ) * 1/(sqrt(pi)*sigma)*exp(-(t-T_n)**2/sigma**2) """ Explanation: Assume that the animal is running at constant velocity along a linear track. Then firing rate map will have same envelope as place field, but the firing rate is modulated at theta frequency (f0). 
Experimentally, this frequency can be obtained from the autocorrelation of the neuron's firing rate (f0 = 1/tau_autocorr)
End of explanation
"""
############################
#### Parameters:
############################
TotalL = 5.0 #s # Length of arena (Assuming constant running speed in forward direction)
#### Parameters for ratemap: Homogeneous population
L = 1.5 #s # Place field size in time: L = s*PFS where
# s = running speed, PFS = Place field size in space
f0 = 8.6 #Hz # Oscillation frequency of single neurons
c = 0.075 # Compression factor
sigma = L/(3*sqrt(2)) # Sigma of gaussian envelope of SNRM [single neuron rate map]
#### Distributing place field centres
N = 100 # No. of place cells
Tn = arange(0,TotalL,TotalL/N) # Place field centres : uniformly distributed
## Time lag tau_n is correlated with position T_n of place-field centres.
## Experimental: Sigmoidal. Linear for large range. Model: Related by compression factor.
taun = c*Tn # Theta-scale time lag. Depends on compression factor.
# How separated are two place cells' theta phases given their
# separation in space? Ans: delta tau_n = c * delta T_n
#### Simulation parameters
delt = 0.001 # Delta t: time step
t = arange(0, TotalL, delt) # Time array
"""
Explanation: Define Parameters for setting up rate maps for place cells:
End of explanation
"""
############################
######## Rate maps:
############################
rates = zeros([N, len(t)], dtype=complex) ### To hold rate maps for all cells
#### Create place cell maps:
for i in xrange(0,N):
    rates[i][:] = SingleNeuronRateMap(t,tau_n=taun[i], T_n = Tn[i], sigma=sigma)
#############################
####### Plotting:
#############################
#### Plotting rate maps for example cells
num_of_maps = 4 ### No. 
of example rate maps cells_to_plot = range(0, N, int(N/num_of_maps)) colorbar = mc.MakeColourMap(N) fig1 = figure(figsize=(8,5)) for idx in cells_to_plot: line = abs(rates[idx][:]) colorVal = colorbar.to_rgba(idx) plot(t, line, color=colorVal, linewidth=2.0) #Add label if you want legend. ylabel('Discharge probability for individual place cells') xlabel('(Space or) Time (sec) with constant running speed') title('Rate maps for various place cells') """ Explanation: Setting rate maps for single cells. End of explanation """ #### Population activity: nfactor = 2*(Tn[1] - Tn[0]) ### Normalization factor poprate = np.sum(rates,0)*nfactor ### Population rate fig2=figure(figsize=(2*TotalL,6)) plot(t, abs(poprate), color='b', linewidth=2.0) xlabel('Time (sec)') ylabel('Population Firing Rate') """ Explanation: Population rate is sum of individual neuron rate maps. End of explanation """ ### Power spectrum of rate maps: ns = len(poprate) pop_fft = fft(poprate) # Population rate FFT cell_fft = fft(rates[N/2][:]) # FFT for a single neuron rate map freq = np.arange(0.0,1.0/(2*delt),1.0/(2*delt)*2/ns) # Frequency array (0 to fmax) """ Explanation: Finding frequency of oscillation (in the theta band) for single neuron and population rate End of explanation """ fig3=figure() A = fig3.add_subplot(111) A.plot(freq,2.0/ns*abs(pop_fft[0:ns/2])/N, color ='b' , linewidth = 2.0) B = A.twinx() # Same x-axis, different scales on y-axis B.plot(freq,2.0/ns*abs(cell_fft[0:ns/2]), 'r-' , linewidth =2.0) A.set_xlim([0.05, 15]) # Plot upto freq = 15Hz A.set_ylabel('Population activity: Power', color='b' ) A.set_xlabel('Frequency (Hz)') B.set_ylabel('Individual cell activity: Power', color ='r') """ Explanation: Now let's plot the power spectrum! 
End of explanation """ ### Protocol for finding frequency with peak power: ### Finding local peaks ( and above threshold = 20% Peak power) ### pk.indexes returns the "indices" of the local peaks LFP_freq = pk.indexes(abs(pop_fft[0:ns/2]), thres=0.2) #Indices of local maximas in power spectrum of poprate Intrinsic_freq = pk.indexes(abs(cell_fft[0:ns/2]), thres=0.2) #Indices of local maximas in power spectrum of cell rates ### What is the frequency (in theta band) at which single neuron or population activity is modulated? ### Theta band used: 4-12 Hz ## LFP LFP_Theta = ct.CheckThetaFreq( LFP_freq, freq ) if LFP_Theta>12: print 'No Peak in Theta Band for population activity' else: print 'Population rate is modulated at frequency', LFP_Theta, 'Hz' ## Individual cells Intrinsic_Theta = ct.CheckThetaFreq( Intrinsic_freq, freq ) if Intrinsic_Theta < 12: print 'Individual cell firing rate is modulated at frequency', Intrinsic_Theta, 'Hz' else: print 'No Peak in Theta Band for individual cell activity' """ Explanation: Find the frequency in the theta band with maximum power. 
End of explanation """ example_cells = [int(0.35*N), int(0.65*N)] peak_times = {} threshold = 0.2 ## Minimum normalized peak size to be detected # Detecting peaks in firing rates for ii in example_cells: pks = sg.argrelextrema(abs(rates[ii][:]), np.greater) thresh_pk = threshold * max(abs(rates[ii])) #Minimum peak size idx = where(abs(rates[ii][pks]) >= thresh_pk) peak_times[ii] = pks[0][idx[0]] ### X-axis limits based on plotted spiketimes mintime = t[ peak_times[example_cells[0]][0] ] # First spike maxtime = t[ peak_times[example_cells[-1]][-1] ] # Last spike ### Plotting: fig4 = figure(figsize=(12,6)) # Plot population rate for reference plot(t, abs(poprate), color='k', label='Population rate', linewidth=1.5) #Plot peaks for example cells for idx in example_cells: colorVal=colorbar.to_rgba(idx) ptimes = peak_times[idx] plot(t[ptimes], abs(poprate[ptimes]), 'ro', color=colorVal, markersize=12.0, markeredgecolor='k', markeredgewidth=1.5, label='Spiketimes for Cell {}'.format(idx) ) xlabel('Time (sec)') xlim([mintime-100*delt, maxtime+100*delt]) ylabel('Population rate') legend(loc=3) """ Explanation: Superimpose firing rate peaks of individual cells on population rate End of explanation """ ## Demonstration: If LFP has no theta peak, work with a dummy theta. if LFP_Theta > 12: LFP_Theta = f0*(1-c) ### Define phase wrt LFP theta oscillations # Find first population trough to set as phase 0. skip = int(1.0/delt) #Skip initial 1.0s to avoid edge effects pop_troughs = sg.argrelextrema(poprate[skip:], np.less) ### Now that you have population rate troughs, you can calculate phase in each cycle wrt to ### distance between successive troughs. This is useful when your power spectrum does not show a ### single strong peak in the theta band. ### For this tutorial, we will assume a a constant frequency oscillation. Thus, the first trough ### can be used to set all phases pop_phase0 = pop_troughs[0][0] # because the fn argrel... 
returns tuple of arrays phase = mod(2*pi*LFP_Theta*(t-t[pop_phase0+skip]), 2*pi) # Array with LFP phase """ Explanation: Let's examine the phase precession Finding phase wrt Population Theta Oscillations End of explanation """ xhigh = max(len(peak_times[idx]) for idx in example_cells) ## Plot phase of successive peaks fig5=figure() for idx in example_cells: colorVal = colorbar.to_rgba(idx) ptimes = peak_times[idx] numspikes = len(ptimes) plot(range(1,numspikes+1), phase[ptimes]*180/pi, 'ro', color=colorVal,label='Spike phase for cell{}'.format(idx)) xlabel('Peak number') ylabel('Phase of peak') xlim([0, xhigh+1]) ylim([-10, 370]) legend() """ Explanation: Assignment: Measure phase within each cycle by representing the trough to trough interval in that cycle as phase 0 to 360 degrees, instead of measuring it using the sinusoid. Hint: Use scipy.signal.argrelextrema How does the phase of the peaks in rate map vary? End of explanation """ ### New set of example cells example_cells2=range(0,N,N/15) peak_times2 = {} threshold2 = 0.2 ## Minimum normalized peak size to be detected # Detecting peaks in firing rates for ind in example_cells2: pks = sg.argrelextrema(abs(rates[ind][:]), np.greater) thresh_pk = threshold * max(abs(rates[ind][:])) #Minimum peak size idx = where(abs(rates[ind][pks]) >= thresh_pk) peak_times2[ind] = pks[0][idx[0]] fig6 = figure() for idx in example_cells2: colorVal=colorbar.to_rgba(idx) maxrate = amax(abs(rates[idx][:])) ptimes = peak_times2[idx] plot( phase[ptimes]*180/pi, abs(rates[idx][ptimes])/maxrate, color=colorVal, linewidth=2.0) xlabel('"LFP" Theta Phase (deg)') ylabel('Normalised firing rate') title('Firing rate and phase for various place cells') """ Explanation: Assignment: Measure the slope of phase precession. How does it change with parameters like c, f0, ...? Why? How does the firing rate correlate with spike phase? 
End of explanation """ maxFR = 20 #Hz ()Max instantaneous firing rate threshold = 0.25 trials = 500 spiketimes ={} TimePeriod = 1/LFP_Theta #in sec spiking_cell = N/2 for ii in range(trials): spiketimes[ii] = spg.GenerateSpikeTimes(abs(rates[spiking_cell][:]), t, ns, delt, maxFR, threshold) ## Raster plot ## ??? ## ### Setting x-limits xlow=max(t) xhigh=0 ##To trap for empty trials for ii in range(trials): if len(spiketimes[ii])>0: xlow=min(xlow, t[spiketimes[ii][0]]) xhigh=max(xhigh, t[spiketimes[ii][-1]]) xlow=xlow-TimePeriod xhigh=xhigh+TimePeriod ## Phase of spikes: ## Find Pop troughs and peaks: troughs = arange(-3*TimePeriod+t[pop_phase0], max(t), TimePeriod) peaks = arange(-2.5*TimePeriod+t[pop_phase0], max(t), TimePeriod) colorVal=colorbar.to_rgba(spiking_cell) ### Plotting phases if spikes fig7=figure(figsize=(12,5)) ax=fig7.add_subplot(111) for ii in range(trials): plot(t[spiketimes[ii]], phase[[spiketimes[ii]]]*180/pi , 'ro', color=colorVal) bar(troughs,[400 for jj in troughs], bottom=[-20 for jj in troughs], width=2*delt, color='k', label='Population Troughs') bar(peaks,height=[400 for jj in peaks], bottom=[-20 for jj in peaks], width=2*delt, color='r', edgecolor='r', label='Population Peaks') ax.grid() xlim([xlow,xhigh]) ylim([-20,380]) xlabel('Time (sec)') ylabel('Phase of population activity (degrees) for each spike') legend() """ Explanation: Generating spikes Use the rate map to generate spiketimes for different cells End of explanation """ #### Creating histogram: #### Bins for histogram of spike times for spk_cells numofbins=100 bins = arange(xlow, xhigh, (xhigh-xlow)/numofbins) [spikecount,b] = histogram(t[spiketimes[0]], bins) count = spikecount for ii in xrange(1,trials): [spikecount,b] = histogram(t[spiketimes[ii]], bins) count += spikecount yhigh=max(count)+5 fig8=figure(figsize=(12,5)) ## Histogram of spike times for example cell: bar(bins[:-1], count, width=(bins[1]-bins[0])*0.9, label='Spike count over all trials') ## Theta peaks and 
troughs: #bar(troughs, [yhigh for jj in troughs], width=delt ,color='k', edgecolor='k', label='Population troughs') bar(peaks, [yhigh for jj in peaks], width=2*delt ,color='r', edgecolor='r', label='Population peaks') xlim(xlow,xhigh) ylim([0, yhigh]) xlabel('Time (in s)') ylabel('Spike count over %d trials'%(trials)) legend() """ Explanation: Assignment: Measure phase of first spike and mean phase within each theta cycle. How does it change with time(cycle number)? Do both vary identically? Plot histogram of spiketimes End of explanation """ ## Use same PF centres, but change widths, intrinsic frequency, compression factor, tau_n L = 1/(8.7)+3+rand(N)*5 # Place field size : Between 3.x and 8.x second, x=1/8.7 # (Assuming constant running speed in forward direction) sigma = L/(3*sqrt(2)) # Sigma of gaussian envelope of PF f0 = 8.6 - 1/L #Hz # Oscillation frequency of single neurons: Use diff distributions. # Here, dependent on L. (L and f0 co-vary across EC layers and dorsoventrally) c = 1/multiply(L,f0) # Compression factor: L*c*f0 = 1 taun = multiply(c,Tn) #Theta -scale time lag rates = zeros([N, len(t)], dtype=complex) #### Create place cell maps: for i in xrange(0,N): rates[i][:] = SingleNeuronRateMap(t,f0=f0[i],tau_n=taun[i], T_n = Tn[i], sigma=sigma[i]) ### Get population activity: poprate = np.sum(rates,0)*nfactor fig7a=figure(figsize=(15,6)) subplot(131) plot(range(N), L, 'ro') subplot(132) plot(f0,L,'bo') subplot(133) plot(multiply(L,f0),c,'go') #### Plotting rate maps for example cells num_of_maps = 8 ### No. of example rate maps cells_to_plot = range(0, N, int(N/num_of_maps)) colorbar = mc.MakeColourMap(N) fig1 = figure(figsize=(8,5)) for idx in cells_to_plot: line = abs(rates[idx][:]) colorVal = colorbar.to_rgba(idx) plot(t, line, color=colorVal, linewidth=2.0) #Add label if you want legend. 
ylabel('Discharge probability for individual place cells') xlabel('(Space or) Time (sec) with constant running speed') title('Rate maps for various place cells') ### Plot poprate fig7b=figure() plot(t,abs(poprate)) """ Explanation: Firing rate changes within the place field: Gaussian envelope Spiking is theta-modulated Spiking phase decreases in later part of place field (or here, in successive theta cycles) But... Experimentally, spiking phase is more correlated with location within the place field than with time or the number of theta cycles passed since entering the place field Assignment: Generate spikes for all cells Calculate population activity from spike trains of all cells: Remember the ML tutorial?! Find power in theta band for population activity. Where is the peak? Plot phase of spikes for different cells versus time/cycle number. Analyse firing rate and phase precession for different cells. Heterogeneous population of place cells End of explanation """ ### Spiking cells: ### Generate spikes. Plot phase versus time. maxFR = 20 # Hz (max instantaneous firing rate) threshold = 0.25 trials = 500 spiketimes = {} TimePeriod = 1/LFP_Theta # in sec spiking_cell = N/2 for ii in range(trials): spiketimes[ii] = spg.GenerateSpikeTimes(abs(rates[spiking_cell][:]), t, ns, delt, maxFR, threshold) ## Raster plot ## ???
## ### Setting x-limits xlow=max(t) xhigh=0 ## To trap for empty trials for ii in range(trials): if len(spiketimes[ii])>0: xlow=min(xlow, t[spiketimes[ii][0]]) xhigh=max(xhigh, t[spiketimes[ii][-1]]) xlow=xlow-TimePeriod xhigh=xhigh+TimePeriod ## Phase of spikes: ## Find Pop troughs and peaks: trs = sg.argrelextrema(abs(poprate), np.less) pks = sg.argrelextrema(abs(poprate), np.greater) troughs = t[trs] peaks = t[pks] cell_phase = {} ### Getting phase from population troughs: for ii in range(trials): cell_phase[ii] = [] for jj in range(len(spiketimes[ii])): tr_next = searchsorted(troughs, t[spiketimes[ii][jj]]) tr_prev = tr_next-1 cell_phase[ii].append((t[spiketimes[ii][jj]] - troughs[tr_prev])*360/(troughs[tr_next] - troughs[tr_prev])) ### Plotting phases of spikes colorVal=colorbar.to_rgba(spiking_cell) fig7=figure(figsize=(12,5)) ax=fig7.add_subplot(111) for ii in range(trials): plot(t[spiketimes[ii]], cell_phase[ii], 'ro', color=colorVal) bar(troughs, [400 for jj in troughs], bottom=[-20 for jj in troughs], width=2*delt, color='k', label='Population Troughs') bar(peaks, height=[400 for jj in peaks], bottom=[-20 for jj in peaks], width=2*delt, color='r', edgecolor='r', label='Population Peaks') ax.grid() xlim([xlow,xhigh]) ylim([-20,380]) xlabel('Time (sec)') ylabel('Phase of population activity (degrees) for each spike') legend() ### Power spectrum of population activity ### Power spectrum of rate maps: Population and example cell ns = len(poprate) pop_fft = fft(poprate) # Population rate FFT cell_fft = fft(rates[0][:]) # FFT for a single neuron rate map freq = np.arange(0.0, 1.0/(2*delt), 1.0/(2*delt)*2/ns) # Frequency array (0 to fmax) fig3=figure() A = fig3.add_subplot(111) A.plot(freq, 2.0/ns*abs(pop_fft[0:ns/2])/N, color='b', linewidth=2.0) B = A.twinx() # Same x-axis, different scales on y-axis B.plot(freq, 2.0/ns*abs(cell_fft[0:ns/2]), 'r-', linewidth=2.0) A.set_xlim([0.05, 15]) # Plot up to freq = 15 Hz A.set_ylabel('Population activity: Power', color='b')
A.set_xlabel('Frequency (Hz)') B.set_ylabel('Individual cell activity: Power', color='r') ### Protocol for finding frequency with peak power: ### Finding local peaks (above threshold = 15% of peak power) ### pk.indexes returns the "indices" of the local peaks LFP_freq = pk.indexes(abs(pop_fft[0:ns/2]), thres=0.15) # Indices of local maxima in power spectrum of poprate Intrinsic_freq = pk.indexes(abs(cell_fft[0:ns/2]), thres=0.15) # Indices of local maxima in power spectrum of cell rates ### What is the frequency (in theta band) at which single neuron or population activity is modulated? ### Theta band used: 4-12 Hz ## LFP LFP_Theta = ct.CheckThetaFreq(LFP_freq, freq) if LFP_Theta>12: print 'No peak in theta band for population activity' else: print 'Population rate is modulated at frequency', LFP_Theta, 'Hz' ## Individual cells Intrinsic_Theta = ct.CheckThetaFreq(Intrinsic_freq, freq) if Intrinsic_Theta < 12: print 'Individual cell firing rate is modulated at frequency', Intrinsic_Theta, 'Hz' else: print 'No peak in theta band for individual cell activity' """ Explanation: Generating spikes to observe phase precession End of explanation """
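The trough-to-trough phase assignment used in the cells above (a `searchsorted` between population troughs for every spike) can also be written as a single vectorized helper. The sketch below is self-contained and uses made-up trough and spike times rather than the notebook's variables; it assumes every spike falls strictly between the first and last trough.

```python
import numpy as np

def spike_phases(spike_times, trough_times):
    # Phase in degrees, linearly interpolated between the two population
    # troughs that bracket each spike (0 deg = trough, 360 deg = next trough).
    spike_times = np.asarray(spike_times, dtype=float)
    trough_times = np.asarray(trough_times, dtype=float)
    nxt = np.searchsorted(trough_times, spike_times)  # first trough after each spike
    prv = nxt - 1
    cycle = trough_times[nxt] - trough_times[prv]
    return (spike_times - trough_times[prv]) * 360.0 / cycle

# Hypothetical troughs of a theta rhythm (period 0.125 s, i.e. 8 Hz).
troughs = np.arange(0.0, 1.0, 0.125)
print(spike_phases([0.0625, 0.25625, 0.45], troughs))
```

This computes the same quantity as the nested loop over `spiketimes`, but in one shot over all spikes of a trial.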
tensorflow/hub
examples/colab/tf2_image_retraining.ipynb
apache-2.0
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2021 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import itertools import os import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub print("TF version:", tf.__version__) print("Hub version:", hub.__version__) print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE") """ Explanation: Retraining an Image Classifier <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_image_retraining"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_image_retraining.ipynb"><img 
src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/collections/image/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a> </td> </table> Introduction Image classification models have millions of parameters. Training them from scratch requires a lot of labeled training data and a lot of computing power. Transfer learning is a technique that shortcuts much of this by taking a piece of a model that has already been trained on a related task and reusing it in a new model. This Colab demonstrates how to build a Keras model for classifying five species of flowers by using a pre-trained TF2 SavedModel from TensorFlow Hub for image feature extraction, trained on the much larger and more general ImageNet dataset. Optionally, the feature extractor can be trained ("fine-tuned") alongside the newly added classifier. Looking for a tool instead? This is a TensorFlow coding tutorial. If you want a tool that just builds the TensorFlow or TFLite model for you, take a look at the make_image_classifier command-line tool that gets installed by the PIP package tensorflow-hub[make_image_classifier], or at this TFLite colab.
Setup End of explanation """ model_name = "efficientnetv2-xl-21k" # @param ['efficientnetv2-s', 'efficientnetv2-m', 'efficientnetv2-l', 'efficientnetv2-s-21k', 'efficientnetv2-m-21k', 'efficientnetv2-l-21k', 'efficientnetv2-xl-21k', 'efficientnetv2-b0-21k', 'efficientnetv2-b1-21k', 'efficientnetv2-b2-21k', 'efficientnetv2-b3-21k', 'efficientnetv2-s-21k-ft1k', 'efficientnetv2-m-21k-ft1k', 'efficientnetv2-l-21k-ft1k', 'efficientnetv2-xl-21k-ft1k', 'efficientnetv2-b0-21k-ft1k', 'efficientnetv2-b1-21k-ft1k', 'efficientnetv2-b2-21k-ft1k', 'efficientnetv2-b3-21k-ft1k', 'efficientnetv2-b0', 'efficientnetv2-b1', 'efficientnetv2-b2', 'efficientnetv2-b3', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'bit_s-r50x1', 'inception_v3', 'inception_resnet_v2', 'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v2_50', 'resnet_v2_101', 'resnet_v2_152', 'nasnet_large', 'nasnet_mobile', 'pnasnet_large', 'mobilenet_v2_100_224', 'mobilenet_v2_130_224', 'mobilenet_v2_140_224', 'mobilenet_v3_small_100_224', 'mobilenet_v3_small_075_224', 'mobilenet_v3_large_100_224', 'mobilenet_v3_large_075_224'] model_handle_map = { "efficientnetv2-s": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/feature_vector/2", "efficientnetv2-m": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_m/feature_vector/2", "efficientnetv2-l": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_l/feature_vector/2", "efficientnetv2-s-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_s/feature_vector/2", "efficientnetv2-m-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_m/feature_vector/2", "efficientnetv2-l-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_l/feature_vector/2", "efficientnetv2-xl-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_xl/feature_vector/2", "efficientnetv2-b0-21k": 
"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b0/feature_vector/2", "efficientnetv2-b1-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b1/feature_vector/2", "efficientnetv2-b2-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b2/feature_vector/2", "efficientnetv2-b3-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/feature_vector/2", "efficientnetv2-s-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2", "efficientnetv2-m-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/feature_vector/2", "efficientnetv2-l-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/feature_vector/2", "efficientnetv2-xl-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/feature_vector/2", "efficientnetv2-b0-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/feature_vector/2", "efficientnetv2-b1-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b1/feature_vector/2", "efficientnetv2-b2-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b2/feature_vector/2", "efficientnetv2-b3-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b3/feature_vector/2", "efficientnetv2-b0": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2", "efficientnetv2-b1": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b1/feature_vector/2", "efficientnetv2-b2": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b2/feature_vector/2", "efficientnetv2-b3": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b3/feature_vector/2", "efficientnet_b0": "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1", "efficientnet_b1": "https://tfhub.dev/tensorflow/efficientnet/b1/feature-vector/1", "efficientnet_b2": 
"https://tfhub.dev/tensorflow/efficientnet/b2/feature-vector/1", "efficientnet_b3": "https://tfhub.dev/tensorflow/efficientnet/b3/feature-vector/1", "efficientnet_b4": "https://tfhub.dev/tensorflow/efficientnet/b4/feature-vector/1", "efficientnet_b5": "https://tfhub.dev/tensorflow/efficientnet/b5/feature-vector/1", "efficientnet_b6": "https://tfhub.dev/tensorflow/efficientnet/b6/feature-vector/1", "efficientnet_b7": "https://tfhub.dev/tensorflow/efficientnet/b7/feature-vector/1", "bit_s-r50x1": "https://tfhub.dev/google/bit/s-r50x1/1", "inception_v3": "https://tfhub.dev/google/imagenet/inception_v3/feature-vector/4", "inception_resnet_v2": "https://tfhub.dev/google/imagenet/inception_resnet_v2/feature-vector/4", "resnet_v1_50": "https://tfhub.dev/google/imagenet/resnet_v1_50/feature-vector/4", "resnet_v1_101": "https://tfhub.dev/google/imagenet/resnet_v1_101/feature-vector/4", "resnet_v1_152": "https://tfhub.dev/google/imagenet/resnet_v1_152/feature-vector/4", "resnet_v2_50": "https://tfhub.dev/google/imagenet/resnet_v2_50/feature-vector/4", "resnet_v2_101": "https://tfhub.dev/google/imagenet/resnet_v2_101/feature-vector/4", "resnet_v2_152": "https://tfhub.dev/google/imagenet/resnet_v2_152/feature-vector/4", "nasnet_large": "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/4", "nasnet_mobile": "https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/4", "pnasnet_large": "https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/4", "mobilenet_v2_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4", "mobilenet_v2_130_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/feature_vector/4", "mobilenet_v2_140_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/feature_vector/4", "mobilenet_v3_small_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5", "mobilenet_v3_small_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_075_224/feature_vector/5", 
"mobilenet_v3_large_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5", "mobilenet_v3_large_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_075_224/feature_vector/5", } model_image_size_map = { "efficientnetv2-s": 384, "efficientnetv2-m": 480, "efficientnetv2-l": 480, "efficientnetv2-b0": 224, "efficientnetv2-b1": 240, "efficientnetv2-b2": 260, "efficientnetv2-b3": 300, "efficientnetv2-s-21k": 384, "efficientnetv2-m-21k": 480, "efficientnetv2-l-21k": 480, "efficientnetv2-xl-21k": 512, "efficientnetv2-b0-21k": 224, "efficientnetv2-b1-21k": 240, "efficientnetv2-b2-21k": 260, "efficientnetv2-b3-21k": 300, "efficientnetv2-s-21k-ft1k": 384, "efficientnetv2-m-21k-ft1k": 480, "efficientnetv2-l-21k-ft1k": 480, "efficientnetv2-xl-21k-ft1k": 512, "efficientnetv2-b0-21k-ft1k": 224, "efficientnetv2-b1-21k-ft1k": 240, "efficientnetv2-b2-21k-ft1k": 260, "efficientnetv2-b3-21k-ft1k": 300, "efficientnet_b0": 224, "efficientnet_b1": 240, "efficientnet_b2": 260, "efficientnet_b3": 300, "efficientnet_b4": 380, "efficientnet_b5": 456, "efficientnet_b6": 528, "efficientnet_b7": 600, "inception_v3": 299, "inception_resnet_v2": 299, "nasnet_large": 331, "pnasnet_large": 331, } model_handle = model_handle_map.get(model_name) pixels = model_image_size_map.get(model_name, 224) print(f"Selected model: {model_name} : {model_handle}") IMAGE_SIZE = (pixels, pixels) print(f"Input size {IMAGE_SIZE}") BATCH_SIZE = 16#@param {type:"integer"} """ Explanation: Select the TF2 SavedModel module to use For starters, use https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4. The same URL can be used in code to identify the SavedModel and in your browser to show its documentation. (Note that models in TF1 Hub format won't work here.) You can find more TF2 models that generate image feature vectors here. There are multiple possible models to try. 
All you need to do is select a different one on the cell below and follow up with the notebook. End of explanation """ data_dir = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) def build_dataset(subset): return tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=.20, subset=subset, label_mode="categorical", # Seed needs to provided when using validation_split and shuffle = True. # A fixed seed is used so that the validation set is stable across runs. seed=123, image_size=IMAGE_SIZE, batch_size=1) train_ds = build_dataset("training") class_names = tuple(train_ds.class_names) train_size = train_ds.cardinality().numpy() train_ds = train_ds.unbatch().batch(BATCH_SIZE) train_ds = train_ds.repeat() normalization_layer = tf.keras.layers.Rescaling(1. / 255) preprocessing_model = tf.keras.Sequential([normalization_layer]) do_data_augmentation = False #@param {type:"boolean"} if do_data_augmentation: preprocessing_model.add( tf.keras.layers.RandomRotation(40)) preprocessing_model.add( tf.keras.layers.RandomTranslation(0, 0.2)) preprocessing_model.add( tf.keras.layers.RandomTranslation(0.2, 0)) # Like the old tf.keras.preprocessing.image.ImageDataGenerator(), # image sizes are fixed when reading, and then a random zoom is applied. # If all training inputs are larger than image_size, one could also use # RandomCrop with a batch size of 1 and rebatch later. 
preprocessing_model.add( tf.keras.layers.RandomZoom(0.2, 0.2)) preprocessing_model.add( tf.keras.layers.RandomFlip(mode="horizontal")) train_ds = train_ds.map(lambda images, labels: (preprocessing_model(images), labels)) val_ds = build_dataset("validation") valid_size = val_ds.cardinality().numpy() val_ds = val_ds.unbatch().batch(BATCH_SIZE) val_ds = val_ds.map(lambda images, labels: (normalization_layer(images), labels)) """ Explanation: Set up the Flowers dataset Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, esp. when fine-tuning. End of explanation """ do_fine_tuning = False #@param {type:"boolean"} print("Building model with", model_handle) model = tf.keras.Sequential([ # Explicitly define the input shape so the model can be properly # loaded by the TFLiteConverter tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)), hub.KerasLayer(model_handle, trainable=do_fine_tuning), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Dense(len(class_names), kernel_regularizer=tf.keras.regularizers.l2(0.0001)) ]) model.build((None,)+IMAGE_SIZE+(3,)) model.summary() """ Explanation: Defining the model All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module. For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy. 
End of explanation """ model.compile( optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9), loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1), metrics=['accuracy']) steps_per_epoch = train_size // BATCH_SIZE validation_steps = valid_size // BATCH_SIZE hist = model.fit( train_ds, epochs=5, steps_per_epoch=steps_per_epoch, validation_data=val_ds, validation_steps=validation_steps).history plt.figure() plt.ylabel("Loss (training and validation)") plt.xlabel("Training Steps") plt.ylim([0,2]) plt.plot(hist["loss"]) plt.plot(hist["val_loss"]) plt.figure() plt.ylabel("Accuracy (training and validation)") plt.xlabel("Training Steps") plt.ylim([0,1]) plt.plot(hist["accuracy"]) plt.plot(hist["val_accuracy"]) """ Explanation: Training the model End of explanation """ x, y = next(iter(val_ds)) image = x[0, :, :, :] true_index = np.argmax(y[0]) plt.imshow(image) plt.axis('off') plt.show() # Expand the validation image to (1, 224, 224, 3) before predicting the label prediction_scores = model.predict(np.expand_dims(image, axis=0)) predicted_index = np.argmax(prediction_scores) print("True label: " + class_names[true_index]) print("Predicted label: " + class_names[predicted_index]) """ Explanation: Try out the model on an image from the validation data: End of explanation """ saved_model_path = f"/tmp/saved_flowers_model_{model_name}" tf.saved_model.save(model, saved_model_path) """ Explanation: Finally, the trained model can be saved for deployment to TF Serving or TFLite (on mobile) as follows. End of explanation """ #@title Optimization settings optimize_lite_model = False #@param {type:"boolean"} #@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount. 
num_calibration_examples = 60 #@param {type:"slider", min:0, max:1000, step:1} representative_dataset = None if optimize_lite_model and num_calibration_examples: # Use a bounded number of training examples without labels for calibration. # TFLiteConverter expects a list of input tensors, each with batch size 1. representative_dataset = lambda: itertools.islice( ([image[None, ...]] for batch, _ in train_ds for image in batch), num_calibration_examples) converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path) if optimize_lite_model: converter.optimizations = [tf.lite.Optimize.DEFAULT] if representative_dataset: # This is optional, see above. converter.representative_dataset = representative_dataset lite_model_content = converter.convert() with open(f"/tmp/lite_flowers_model_{model_name}.tflite", "wb") as f: f.write(lite_model_content) print("Wrote %sTFLite model of %d bytes." % ("optimized " if optimize_lite_model else "", len(lite_model_content))) interpreter = tf.lite.Interpreter(model_content=lite_model_content) # This little helper wraps the TFLite Interpreter as a numpy-to-numpy function. def lite_model(images): interpreter.allocate_tensors() interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images) interpreter.invoke() return interpreter.get_tensor(interpreter.get_output_details()[0]['index']) #@markdown For rapid experimentation, start with a moderate number of examples. num_eval_examples = 50 #@param {type:"slider", min:0, max:700} eval_dataset = ((image, label) # TFLite expects batch size 1. 
for batch in train_ds for (image, label) in zip(*batch)) count = 0 count_lite_tf_agree = 0 count_lite_correct = 0 for image, label in eval_dataset: probs_lite = lite_model(image[None, ...])[0] probs_tf = model(image[None, ...]).numpy()[0] y_lite = np.argmax(probs_lite) y_tf = np.argmax(probs_tf) y_true = np.argmax(label) count += 1 if y_lite == y_tf: count_lite_tf_agree += 1 if y_lite == y_true: count_lite_correct += 1 if count >= num_eval_examples: break print("TFLite model agrees with original model on %d of %d examples (%g%%)." % (count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count)) print("TFLite model is accurate on %d of %d examples (%g%%)." % (count_lite_correct, count, 100.0 * count_lite_correct / count)) """ Explanation: Optional: Deployment to TensorFlow Lite TensorFlow Lite lets you deploy TensorFlow models to mobile and IoT devices. The code below shows how to convert the trained model to TFLite and apply post-training tools from the TensorFlow Model Optimization Toolkit. Finally, it runs it in the TFLite Interpreter to examine the resulting quality. Converting without optimization provides the same results as before (up to roundoff error). Converting with optimization without any data quantizes the model weights to 8 bits, but inference still uses floating-point computation for the neural network activations. This reduces model size almost by a factor of 4 and improves CPU latency on mobile devices. On top of that, computation of the neural network activations can be quantized to 8-bit integers as well if a small reference dataset is provided to calibrate the quantization range. On a mobile device, this accelerates inference further and makes it possible to run on accelerators like Edge TPU. End of explanation """
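The 8-bit weight quantization described above can be illustrated without TensorFlow. Below is a minimal numpy sketch of one common affine quantization scheme, w ≈ scale · (q − zero_point); this is only an illustration of the arithmetic idea behind post-training quantization, not the TFLiteConverter internals (which use per-axis refinements of the same principle):

```python
import numpy as np

def quantize_uint8(w):
    # Affine (asymmetric) 8-bit quantization: w ~= scale * (q - zero_point).
    lo = min(float(w.min()), 0.0)   # quantization range must include 0.0
    hi = max(float(w.max()), 0.0)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for a weight tensor
q, scale, zp = quantize_uint8(w)
w_hat = scale * (q.astype(np.float32) - zp)     # dequantized approximation
# Reconstruction error is at most about half a quantization step.
print("max abs error:", float(np.abs(w - w_hat).max()))
```

Storing `q` instead of `w` is what shrinks the model by roughly 4x; the calibration dataset in the cell above serves to pick good ranges for the activations in the same spirit.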
tensorflow/graphics
tensorflow_graphics/notebooks/matting.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 Google LLC. End of explanation """ !pip install tensorflow_graphics """ Explanation: Closed Form Matting Energy <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/matting.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/matting.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Matting is an important task in image editing where a novel background is combined with a given foreground to produce a new composite image. To achieve a plausible result, the foreground needs to be carefully extracted from a given image, i.e. preserving all the thin structures, before being inpainted over the new background. In image matting, the input image $I$ is assumed to be a linear combination of a foreground image $F$ and a background image $B$. For a pixel $j$ of $I$, the color of the pixel can therefore be expressed as $I_j = \alpha_j F_j +(1-\alpha_j)B_j$, where $\alpha_j$ is the foreground opacity for the pixel $j$. The opacity image made of all the $\alpha_j$ pixels is called a matte. 
<div align="center"> <img src="https://github.com/frcs/alternative-matting-laplacian/raw/master/GT04.png" width="283" height="200" /> <img src="https://github.com/frcs/alternative-matting-laplacian/raw/master/alpha0-GT04.png" width="283" height="200" /> </div> Using a trimap (white for foreground, black for background, and gray for unknown pixels) <div align="center"> <img src="https://github.com/frcs/alternative-matting-laplacian/raw/master/trimap-GT04.png" width="283" height="200" /> </div> or a set of scribbles (user strokes), an optimization problem can be formulated to retrieve the unknown pixel opacities. This colab demonstrates how to use the image matting loss implemented in TensorFlow Graphics to precisely segment out objects from images and then paste them on top of new backgrounds. This matting loss is derived from the paper titled "A Closed Form Solution to Natural Image Matting" by Levin et al. The loss was "tensorized", inspired by "Deep-Energy: Unsupervised Training of Deep Neural Networks" by Golts et al. Setup & Imports If TensorFlow Graphics is not installed on your system, the following cell can install the TensorFlow Graphics package for you. End of explanation """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import matplotlib.pyplot as plt import tensorflow as tf from tensorflow_graphics.image import matting from tqdm import tqdm """ Explanation: Now that TensorFlow Graphics is installed, let's import everything needed to run the demos contained in this notebook.
End of explanation """ # Download dataset from alphamatting.com !rm -rf input_training_lowres !rm -rf trimap_training_lowres !rm -rf gt_training_lowres !wget -q http://www.alphamatting.com/datasets/zip/input_training_lowres.zip !wget -q http://www.alphamatting.com/datasets/zip/trimap_training_lowres.zip !wget -q http://www.alphamatting.com/datasets/zip/gt_training_lowres.zip !unzip -q input_training_lowres.zip -d input_training_lowres !unzip -q trimap_training_lowres.zip -d trimap_training_lowres !unzip -q gt_training_lowres.zip -d gt_training_lowres # Read and decode images source = tf.io.read_file('input_training_lowres/GT07.png') source = tf.cast(tf.io.decode_png(source), tf.float64) / 255.0 source = tf.expand_dims(source, axis=0) trimap = tf.io.read_file('trimap_training_lowres/Trimap1/GT07.png') trimap = tf.cast(tf.io.decode_png(trimap), tf.float64) / 255.0 trimap = tf.reduce_mean(trimap, axis=-1, keepdims=True) trimap = tf.expand_dims(trimap, axis=0) gt_matte = tf.io.read_file('gt_training_lowres/GT07.png') gt_matte = tf.cast(tf.io.decode_png(gt_matte), tf.float64) / 255.0 gt_matte = tf.reduce_mean(gt_matte, axis=-1, keepdims=True) gt_matte = tf.expand_dims(gt_matte, axis=0) # Resize images to improve performance source = tf.image.resize( source, tf.shape(source)[1:3] // 2, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) trimap = tf.image.resize( trimap, tf.shape(trimap)[1:3] // 2, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) gt_matte = tf.image.resize( gt_matte, tf.shape(gt_matte)[1:3] // 2, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) # Show images figure = plt.figure(figsize=(22, 18)) axes = figure.add_subplot(1, 3, 1) axes.grid(False) axes.set_title('Input image', fontsize=14) _= plt.imshow(source[0, ...].numpy()) axes = figure.add_subplot(1, 3, 2) axes.grid(False) axes.set_title('Input trimap', fontsize=14) _= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) axes = figure.add_subplot(1, 3, 3) axes.grid(False) axes.set_title('GT 
matte', fontsize=14) _= plt.imshow(gt_matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) """ Explanation: Import the image and trimap Download the image and trimap from alphamatting.com. End of explanation """ # Extract the foreground and background constraints from the trimap image foreground = tf.cast(tf.equal(trimap, 1.0), tf.float64) background = tf.cast(tf.equal(trimap, 0.0), tf.float64) # Show foreground and background constraints figure = plt.figure(figsize=(22, 18)) axes = figure.add_subplot(1, 2, 1) axes.grid(False) axes.set_title('Foreground constraints', fontsize=14) _= plt.imshow(foreground[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) axes = figure.add_subplot(1, 2, 2) axes.grid(False) axes.set_title('Background constraints', fontsize=14) _= plt.imshow(background[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) """ Explanation: Extract the foreground and background constraints from the trimap image End of explanation """ # Initialize the matte with random values matte_shape = tf.concat((tf.shape(source)[:-1], (1,)), axis=-1) matte = tf.Variable( tf.random.uniform( shape=matte_shape, minval=0.0, maxval=1.0, dtype=tf.float64)) # Create the closed form matting Laplacian laplacian, _ = matting.build_matrices(source) # Function computing the loss and applying the gradient @tf.function def optimize(optimizer): with tf.GradientTape() as tape: tape.watch(matte) # Compute a loss enforcing the trimap constraints constraints = tf.reduce_mean((foreground + background) * tf.math.squared_difference(matte, foreground)) # Compute the matting loss smoothness = matting.loss(matte, laplacian) # Sum up the constraint and matting losses total_loss = 100 * constraints + smoothness # Compute and apply the gradient to the matte gradient = tape.gradient(total_loss, [matte]) optimizer.apply_gradients(zip(gradient, (matte,))) # Run the Adam optimizer for 400 iterations optimizer = tf.optimizers.Adam(learning_rate=1.0) nb_iterations = 400 for it in 
tqdm(range(nb_iterations)): optimize(optimizer) # Clip the matte value between 0 and 1 matte = tf.clip_by_value(matte, 0.0, 1.0) # Display the results figure = plt.figure(figsize=(22, 18)) axes = figure.add_subplot(1, 3, 1) axes.grid(False) axes.set_title('Input image', fontsize=14) plt.imshow(source[0, ...].numpy()) axes = figure.add_subplot(1, 3, 2) axes.grid(False) axes.set_title('Input trimap', fontsize=14) _= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) axes = figure.add_subplot(1, 3, 3) axes.grid(False) axes.set_title('Matte', fontsize=14) _= plt.imshow(matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1) """ Explanation: Setup & run the optimization Setup the matting loss function using TensorFlow Graphics and run the Adam optimizer for 400 iterations. End of explanation """ !wget -q https://p2.piqsels.com/preview/861/934/460/concrete-texture-background-backdrop.jpg background = tf.io.read_file('concrete-texture-background-backdrop.jpg') background = tf.cast(tf.io.decode_jpeg(background), tf.float64) / 255.0 background = tf.expand_dims(background, axis=0) # Resize images to improve performance background = tf.image.resize( background, tf.shape(source)[1:3], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) # Inpaint the foreground over a new background inpainted_black = matte * source inpainted_concrete = matte * source + (1.0 - matte) * background # Display the results figure = plt.figure(figsize=(22, 18)) axes = figure.add_subplot(1, 2, 1) axes.grid(False) axes.set_title('Inpainted black', fontsize=14) _= plt.imshow(inpainted_black[0, ...].numpy()) axes = figure.add_subplot(1, 2, 2) axes.grid(False) axes.set_title('Inpainted concrete', fontsize=14) _= plt.imshow(inpainted_concrete[0, ...].numpy()) """ Explanation: Compositing Let's now composite our extracted object on top of a new background! End of explanation """
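The compositing step above is exactly the matting equation $I_j = \alpha_j F_j + (1-\alpha_j)B_j$ applied per pixel, and it broadcasts the same way in plain numpy. A tiny self-contained sketch with made-up pixel values:

```python
import numpy as np

def composite(matte, foreground, background):
    # Per-pixel compositing: I = alpha * F + (1 - alpha) * B.
    # The (H, W, 1) matte broadcasts across the (H, W, 3) colour channels.
    return matte * foreground + (1.0 - matte) * background

# Made-up 1x2 image: left pixel is pure foreground, right pixel a 50/50 blend.
alpha = np.array([[[1.0], [0.5]]])   # shape (1, 2, 1)
fg = np.ones((1, 2, 3))              # white foreground
bg = np.zeros((1, 2, 3))             # black background
print(composite(alpha, fg, bg))      # left pixel -> (1, 1, 1), right -> (0.5, 0.5, 0.5)
```

This is the same expression as `matte * source + (1.0 - matte) * background` in the cell above, just detached from the notebook's tensors.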
hildensia/bayesian_changepoint_detection
Multivariate_Example.ipynb
mit
from __future__ import division import matplotlib.pyplot as plt import bayesian_changepoint_detection.generate_data as gd import seaborn %matplotlib inline %load_ext autoreload %autoreload 2 partition, data = gd.generate_xuan_motivating_example(200,500) """ Explanation: Bayesian Changepoint Detection with multivariate data in Python This code computes the probability of changepoints (including changes in correlation) in a time series. In this notebook I show how you can use it. This example is modified from Xiang Xuan's thesis Section 3.2. First let's generate some data and load some modules: End of explanation """ import numpy as np changes = np.cumsum(partition) fig, ax = plt.subplots(figsize=[16, 4]) for p in changes: ax.plot([p,p],[np.min(data),np.max(data)],'r') for d in range(2): ax.plot(data[:,d]) """ Explanation: Let's plot this data: End of explanation """ from bayesian_changepoint_detection.priors import const_prior from bayesian_changepoint_detection.offline_likelihoods import IndepentFeaturesLikelihood from bayesian_changepoint_detection.bayesian_models import offline_changepoint_detection from functools import partial Q_ifm, P_ifm, Pcp_ifm = offline_changepoint_detection( data, partial(const_prior, p=1/(len(data) + 1)), IndepentFeaturesLikelihood(), truncate=-20 ) fig, ax = plt.subplots(2, figsize=[18, 8]) for p in changes: ax[0].plot([p,p],[np.min(data),np.max(data)],'r') for d in range(2): ax[0].plot(data[:,d]) plt.legend(['Raw data with Original Changepoints']) ax[1].plot(np.exp(Pcp_ifm).sum(0)) plt.legend(['Independent Factor Model']) plt.show() """ Explanation: Let's try to detect the changes with independent features: End of explanation """ from bayesian_changepoint_detection.offline_likelihoods import FullCovarianceLikelihood Q_full, P_full, Pcp_full = offline_changepoint_detection( data, partial(const_prior, p=1/(len(data)+1)), FullCovarianceLikelihood(), truncate=-20 ) fig, ax = plt.subplots(2, figsize=[18, 8]) for p in changes: 
ax[0].plot([p,p],[np.min(data),np.max(data)],'r') for d in range(2): ax[0].plot(data[:,d]) plt.legend(['Raw data with Original Changepoints']) ax[1].plot(np.exp(Pcp_full).sum(0)) plt.legend(['Full Covariance Model']) plt.show() """ Explanation: Unfortunately, not very good... Now let's try the full covariance model (warning, it'll take a while): End of explanation """ %timeit Q_ifm, P_ifm, Pcp_ifm = offline_changepoint_detection(data, partial(const_prior, p=1/(len(data)+1)), IndepentFeaturesLikelihood(), truncate=-20) """ Explanation: Ahh, much better now! End of explanation """
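The `partial(const_prior, p=1/(len(data)+1))` calls above assign every candidate changepoint location the same prior mass. A self-contained sketch of that idea in log space (this is my own minimal version for illustration, not necessarily the library's exact implementation):

```python
import math

def const_log_prior(t, p):
    # Uniform prior over changepoint locations: every index t gets mass p,
    # so the log-prior is a constant independent of t.
    return math.log(p)

n = 700                # e.g. the 200 + 500 samples generated above
p = 1.0 / (n + 1)
lp = const_log_prior(0, p)
print(lp)              # same value for every candidate location t
```

Because the prior carries no information about where a change is likely, all of the structure in `Pcp` comes from the likelihood terms.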
awhite40/pymks
notebooks/checker_board.ipynb
mit
%matplotlib inline %load_ext autoreload %autoreload 2 import numpy as np import matplotlib.pyplot as plt """ Explanation: Checkerboard Microstructure Introduction - What are 2-Point Spatial Correlations (also called 2-Point Statistics)? The purpose of this example is to introduce 2-point spatial correlations and how they are computed, using PyMKS. The example starts with some introductory information about spatial correlations. PyMKS is used to compute both the periodic and non-periodic 2-point spatial correlations (also referred to as 2-point statistics or autocorrelations and crosscorrelations) for a checkerboard microstructure. This is a relatively simple example that allows an easy discussion of how the spatial correlations capture the main features seen in the original microstructure. n-Point Spatial Correlations 1-Point Spatial Correlations (or 1-point statistics) n-point spatial correlations provide a way to rigorously quantify material structure, using statistics. As an introduction to n-point spatial correlations, let's first discuss 1-point statistics. 1-point statistics are the probabilities that a specified local state will be found in any randomly selected spatial bin in a microstructure [1][2][3]. In the checkerboard example discussed here, there are two possible local states: one is colored white and the other is colored black. 1-point statistics compute the volume fractions of the local states in the microstructure. 1-point statistics are computed as $$ f[l] = \frac{1}{S} \sum_s m[s,l] $$ In this equation, $f[l]$ is the probability of finding the local state $l$ in any randomly selected spatial bin in the microstructure, $m[s, l]$ is the microstructure function (the digital representation of the microstructure), $S$ is the total number of spatial bins in the microstructure and $s$ refers to a specific spatial bin. 
While 1-point statistics provide information on the relative amounts of the different local states, they do not provide any information about how those local states are spatially arranged in the microstructure. Therefore, 1-point statistics are a limited set of metrics to describe the structure of materials. 2-Point Spatial Correlations 2-point spatial correlations (also known as 2-point statistics) contain information about the fractions of local states as well as first order information on how the different local states are distributed in the microstructure. 2-point statistics can be thought of as the probability of having a vector placed randomly in the microstructure with one end of the vector on one specified local state and the other end on another specified local state. This vector could have any length or orientation that the discrete microstructure allows. The equation for 2-point statistics can be found below. $$ f[r \vert l, l'] = \frac{1}{S} \sum_s m[s, l] m[s + r, l'] $$ In this equation $ f[r \vert l, l']$ is the conditional probability of finding the local states $l$ and $l'$ at a distance and orientation away from each other defined by the vector $r$. All other variables are the same as those in the 1-point statistics equation. In the case that we have an eigen microstructure function (it only contains values of 0 or 1) and we are using an indicator basis, the $r=0$ vector will recover the 1-point statistics. When the 2 local states are the same, $l = l'$, it is referred to as an autocorrelation. If the 2 local states are not the same, it is referred to as a crosscorrelation. Higher Order Spatial Statistics Higher order spatial statistics are similar to 2-point statistics, in that they can be thought of in terms of conditional probabilities of finding specified local states separated by a prescribed set of vectors. 
3-point statistics are the probabilities of finding three specified local states at the ends of a triangle (defined by 2 vectors) placed randomly in the material structure. 4-point statistics describe the probabilities of finding 4 local states at 4 locations (defined using 3 vectors) and so on. While higher order statistics are a better metric to quantify the material structure, the 2-point statistics can be computed much faster than higher order spatial statistics, and still provide information about how the local states are distributed. For this reason, only 2-point statistics are implemented in PyMKS. Let us look at an example of computing the 2-point statistics for a checkerboard microstructure. End of explanation """ from pymks.datasets import make_checkerboard_microstructure X = make_checkerboard_microstructure(square_size=21, n_squares=8) """ Explanation: 2-Point Statistics for Checkerboard Microstructure Let's first start with making a microstructure that looks like an 8 x 8 checkerboard. Although this type of microstructure may not resemble a physical system, it provides solutions that give some intuitive understanding of 2-point statistics. We can create a checkerboard microstructure using the make_checkerboard_microstructure function from pymks.datasets. End of explanation """ from pymks.tools import draw_microstructures draw_microstructures(X) print X.shape """ Explanation: Now let's take a look at how the microstructure looks. End of explanation """ from pymks.stats import autocorrelate from pymks import PrimitiveBasis prim_basis = PrimitiveBasis(n_states=2) X_ = prim_basis.discretize(X) X_auto = autocorrelate(X_, periodic_axes=(0, 1)) """ Explanation: Compute Periodic 2-Point Statistics Now that we have created a microstructure to work with, we can start computing the 2-point statistics. Let's start by looking at the periodic autocorrelations of the microstructure and then compute the periodic crosscorrelation. 
This can be done using the autocorrelate and crosscorrelate functions from pymks.stats, and using the keyword argument periodic_axes to specify the axes that are periodic. In order to compute 2-point statistics, we need to select a basis to generate the microstructure function X_ from the microstructure X. Because we only have values of 0 or 1 in our microstructure, we will use the PrimitiveBasis with n_states equal to 2. End of explanation """ from pymks.tools import draw_autocorrelations correlations = [('black', 'black'), ('white', 'white')] draw_autocorrelations(X_auto[0], autocorrelations=correlations) """ Explanation: We have now computed the autocorrelations. Let's take a look at them using draw_autocorrelations from pymks.tools. End of explanation """ center = (X_auto.shape[1] + 1) / 2 print 'Volume fraction of black phase', X_auto[0, center, center, 0] print 'Volume fraction of white phase', X_auto[0, center, center, 1] """ Explanation: Notice that for this checkerboard microstructure, the autocorrelations for these 2 local states are exactly the same. We have just computed the periodic autocorrelations for a perfectly periodic microstructure with equal volume fractions. In general this is not the case and the autocorrelations will be different, as we will see later in this example. As mentioned in the introduction, because we are using an indicator basis and we have eigen microstructure functions (values are either 0 or 1), the (0, 0) vector equals the volume fraction. Let's double check that both of the phases have a volume fraction of 0.5. 
End of explanation """ from pymks.stats import crosscorrelate X_cross = crosscorrelate(X_, periodic_axes=(0, 1)) """ Explanation: We can compute the cross-correlation of the microstructure function using the crosscorrelate function from pymks.stats. End of explanation """ from pymks.tools import draw_crosscorrelations correlations = [('black', 'white')] draw_crosscorrelations(X_cross[0], crosscorrelations=correlations) """ Explanation: Let's take a look at the cross-correlation using draw_crosscorrelations from pymks.tools. End of explanation """ print 'Center value', X_cross[0, center, center, 0] """ Explanation: Notice that the crosscorrelation is the exact opposite of the 2 autocorrelations. The (0, 0) vector has a value of 0. This statistic reflects the probability of 2 phases having the same location. In our microstructure, this probability is zero, as we have not allowed the two phases (colored black and white) to co-exist in the same spatial voxel. Let's check that it is zero. End of explanation """ from pymks.stats import correlate X_corr = correlate(X_) """ Explanation: Compute Non-Periodic 2-Point Statistics We will now compute the non-periodic 2-point statistics for our microstructure. This time, rather than using the autocorrelate and crosscorrelate functions, we will use the correlate function from pymks.stats. The correlate function computes all of the autocorrelations and crosscorrelations at the same time. We will compute the non-periodic statistics by omitting the keyword argument periodic_axes. End of explanation """ from pymks.tools import draw_correlations correlations = [('black', 'black'), ('white', 'white'), ('black', 'white')] draw_correlations(X_corr[0].real, correlations=correlations) """ Explanation: All or some of the correlations can be viewed using the draw_correlations function from pymks.tools. In this example we will look at all of them. 
End of explanation """ print 'Volume fraction of black phase', X_corr[0, center, center, 0] print 'Volume fraction of white phase', X_corr[0, center, center, 1] """ Explanation: Notice that the maximum values for the autocorrelations are higher than 0.5. We can still show that the centers, or the (0, 0) vectors, are equal to the volume fractions. End of explanation """
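The statistics computed above can also be reproduced by brute force. A small pure-NumPy sketch of the periodic autocorrelation sum, f[r|l,l] = (1/S) * sum_s m[s,l] m[s+r,l], on a tiny checkerboard — independent of PyMKS, just to show that the zero-shift entry recovers the volume fraction:

```python
import numpy as np

def periodic_autocorrelation(m):
    # Brute-force periodic 2-point autocorrelation of a 0/1 indicator array m.
    # Entry [i, j] is (1/S) * sum_s m[s] * m[s + r] for the shift r = (i, j),
    # with the periodic wrap-around handled by np.roll.
    S = m.size
    out = np.zeros(m.shape)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            shifted = np.roll(np.roll(m, -i, axis=0), -j, axis=1)
            out[i, j] = (m * shifted).sum() / float(S)
    return out

# 4 x 4 checkerboard with 1-pixel squares (indicator of one phase)
checker = np.indices((4, 4)).sum(axis=0) % 2
stats = periodic_autocorrelation(checker)
print(stats[0, 0])  # zero shift -> volume fraction of the phase, 0.5 here
```

Shifting by one pixel in either direction lands every cell of the phase on a cell of the other phase, so those entries are exactly zero — the same anti-correlated pattern the PyMKS plots show.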
manahl/PyBloqs
docs/source/examples.ipynb
lgpl-2.1
%%capture import numpy as np import pandas as pd import pandas.util.testing as pt from datetime import datetime import pybloqs as p df = pd.DataFrame((np.random.rand(200, 4)-0.5)/10, columns=list("ABCD"), index=pd.date_range(datetime(2000,1,1), periods=200)) df_cr = (df + 1).cumprod() a = df_cr.A b = df_cr.B c = df_cr.C c.name = "C" """ Explanation: Composable Blocks & Interactive Charts This notebook is a simple illustration of the python API for Blocks and Highcharts interactive charts. What are Blocks? Blocks are composable layout elements for easily building HTML, PDF, PNG and JPG based reports. At the same time, all block constructs can be rendered in-line in IPython Notebooks (as will be shown later). This has practical benefits, like a single backtest function that can be used for quick analysis during research work in a notebook, but also directly injected into more formal reports without having to fumble around with intermediate formats. Practically all the functionality is based on HTML rendering. HTML is a declarative, tree based language that is easy to work with and fungible. Blocks are also declarative, composable into a tree and are meant to be dead simple. The match was thus quite natural. The blocks do not try to match the power and precision of latex. Such an undertaking would be not only out of the scope of a simple library, but would mean the reinvention of latex with all the gnarliness that comes with it. This notebook aims to showcase the functionality and offers some examples to help people get started. Imports & Data End of explanation """ p.Block("Hello World!") """ Explanation: Using Blocks Obligatory "Hello World!" 
End of explanation """ p.Block("Hello World!", h_align="left") """ Explanation: Play around with alignment End of explanation """ p.Block("Hello World!", title="Announcement", h_align="left") """ Explanation: Adding a title End of explanation """ p.Block(df.head()) """ Explanation: Writing out dataframes End of explanation """ p.Block(df.A.plot()) """ Explanation: Writing out matplotlib plots End of explanation """ p.Block("<b>this text is bold</b>") """ Explanation: Raw HTML output End of explanation """ p.VStack([p.Block("Hello World!", title="Announcement"), p.Block("<b>this text is bold</b>")]) """ Explanation: Composing blocks End of explanation """ p.Block(["Block %s" % i for i in range(8)]) """ Explanation: In most cases, one does not need to explicitly wrap elements in blocks End of explanation """ p.Block(["Block %s" % i for i in range(8)], cols=4) """ Explanation: Splitting composite blocks into columns End of explanation """ p.Block(["Block %s" % i for i in range(8)], cols=4, text_align="right") """ Explanation: Layout styling is cascading - styles will cascade from parent blocks to child blocks by default. This behavior can be disabled by setting inherit_cfg to false on the child blocks, or simply specifying the desired settings explicitly. End of explanation """ p.Block([p.Paragraph("First paragraph."), p.Paragraph("Second paragraph."), p.Paragraph("Third paragraph.")], text_align="right") """ Explanation: Using specific block types is simple as well. As an example - the Paragraph block: End of explanation """ p.Pre(""" some: example: yaml: [1,2,3] data: "text" """) """ Explanation: The Pre block preserves whitespace formatting and is rendered using a fixed width font. Useful for rendering code-like text. 
End of explanation """ class Capitalize(p.Raw): def __init__(self, contents, **kwargs): # Stringify and capitalize contents = str(contents).upper() super(Capitalize, self).__init__(contents, **kwargs) Capitalize("this here text should look like shouting!") # Emails a block (or a report consisting of many blocks). The emailing is independent of previous reports being saved (e.g. there is no need to call save # before emailing). from smtplib import SMTPServerDisconnected try: p.Block('').email() except SMTPServerDisconnected: print("Please create ~/.pybloqs.cfg with entry for 'smtp_server'. See README.md and pybloqs/config.py for details.") """ Explanation: Creating custom blocks is trivial. For the majority of the cases, one can just inherit from the Container block, which has most of the plumbing already in place: End of explanation """ blocks = [p.Block("First page", styles={"page-break-after": "always"}), p.Block("Second page")] r = p.VStack(blocks) r.save("two_page_report.pdf") """ Explanation: Page break End of explanation """
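The composability demonstrated above comes down to two ideas: every block knows how to render itself to HTML, and container blocks simply render their children in sequence. A toy sketch of that design (purely illustrative — this is not PyBloqs' actual internals):

```python
class ToyBlock(object):
    # A leaf block: wraps its contents in a div when rendered.
    def __init__(self, contents):
        self.contents = contents

    def render(self):
        return "<div>%s</div>" % self.contents


class ToyVStack(ToyBlock):
    # A container block: rendering recurses into the children in order.
    def __init__(self, children):
        self.children = children

    def render(self):
        return "<div>%s</div>" % "".join(c.render() for c in self.children)


report = ToyVStack([ToyBlock("Hello World!"), ToyBlock("<b>this text is bold</b>")])
html = report.render()
print(html)
```

Because containers hold other blocks (including other containers), arbitrarily nested report layouts fall out of a single recursive `render` call — the same tree-shaped composition the PyBloqs examples rely on.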
seg/2016-ml-contest
SHandPR/FaciesTrial.ipynb
apache-2.0
%matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data """ Explanation: Facies classification using Machine Learning - Logistic regression Contest entry by Priyanka Raghavan and Steve Hall This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a logistic regression classifier to classify facies types, using scikit-learn. First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data. Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets. We will then be ready to build the classifier. Finally, once we have built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels.
We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data. Exploring the dataset First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data. End of explanation """ training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() training_data.describe() """ Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. 
The seven predictor variables are: * Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE. * Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS) The nine discrete facies (classes of rocks) are: 1. Nonmarine sandstone 2. Nonmarine coarse siltstone 3. Nonmarine fine siltstone 4. Marine siltstone and shale 5. Mudstone (limestone) 6. Wackestone (limestone) 7. Dolomite 8. Packstone-grainstone (limestone) 9. Phylloid-algal bafflestone (limestone) These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors. Facies |Label| Adjacent Facies :---: | :---: |:--: 1 |SS| 2 2 |CSiS| 1,3 3 |FSiS| 2 4 |SiSh| 5 5 |MS| 4,6 6 |WS| 5,7 7 |D| 6,8 8 |PS| 6,7,9 9 |BS| 7,8 Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type. End of explanation """ blind = training_data[training_data['Well Name'] == 'SHANKLE'] training_data = training_data[training_data['Well Name'] != 'SHANKLE'] """ Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set. Remove a single well to use as a blind test later. 
End of explanation """ # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite # 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) """ Explanation: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. 
End of explanation """ def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) """ Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. 
The plots are based on those described in Alessandro Amato del Monte's excellent tutorial. End of explanation """ make_facies_log_plot( training_data[training_data['Well Name'] == 'SHRIMPLIN'], facies_colors) """ Explanation: Placing the log plotting code in a function will make it easy to plot the logs from multiple wells, and it can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters. We then show log plots for well SHRIMPLIN. End of explanation """ #count the number of unique entries for each facies, sort them by #facies number (instead of by number of entries) facies_counts = training_data['Facies'].value_counts().sort_index() #use facies labels to index each count facies_counts.index = facies_labels facies_counts.plot(kind='bar',color=facies_colors, title='Distribution of Training Data by Facies') facies_counts """ Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies. Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. 
This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies. End of explanation """ correct_facies_labels = training_data['Facies'].values feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1) feature_vectors.describe() """ Explanation: Conditioning the data set Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector. End of explanation """ from sklearn import preprocessing scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) feature_vectors """ Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie: Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any subsequent data. 
End of explanation """ from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( scaled_features, correct_facies_labels, test_size=0.1, random_state=42) """ Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set. End of explanation """ from sklearn import linear_model clf = linear_model.LogisticRegression(class_weight='balanced',multi_class='ovr',solver='sag',max_iter=1000,random_state=40,C=1e5) """ Explanation: Training the classifier using Logistic Regression Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as logistic regression. For our test we used the one-vs-rest scheme with the sag solver, since the data has already been scaled. End of explanation """ clf.fit(X_train,y_train) """ Explanation: Now we can train the classifier using the training set we created above. End of explanation """ predicted_labels = clf.predict(X_test) """ Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. End of explanation """ from sklearn.metrics import confusion_matrix from classification_utilities import display_cm, display_adj_cm conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, hide_zeros=True) """ Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. 
Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels. The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i. To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function. End of explanation """ def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc """ Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly identified as SS, 5 were classified as CSiS and 1 was classified as FSiS. The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications. End of explanation """ adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. 
for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) print('Facies classification accuracy = %f' % accuracy(conf)) print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies)) """ Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels. End of explanation """ blind """ Explanation: Applying the classification model to the blind data We held a well back from the training, and stored it in a dataframe called blind: End of explanation """ y_blind = blind['Facies'].values """ Explanation: The label vector is just the Facies column: End of explanation """ well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1) """ Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe: End of explanation """ X_blind = scaler.transform(well_features) """ Explanation: Now we can transform this with the scaler we made before: End of explanation """ y_pred = clf.predict(X_blind) blind['Prediction'] = y_pred """ Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe: End of explanation """ cv_conf = confusion_matrix(y_blind, y_pred) print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf)) print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies)) """ Explanation: Let's see how we did with the confusion matrix: End of explanation """ display_cm(cv_conf, facies_labels, display_metrics=True, hide_zeros=True) """ Explanation: The results are 0.59 accuracy on facies classification of blind 
data and 0.92 adjacent facies classification. End of explanation """ display_adj_cm(cv_conf, facies_labels, adjacent_facies, display_metrics=True, hide_zeros=True) def compare_facies_plot(logs, compadre, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[6]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im2, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-2): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[6].set_xlabel(compadre) ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) 
ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) ax[6].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) compare_facies_plot(blind, 'Prediction', facies_colors) """ Explanation: Applying the classification model to new data Now that we have a trained facies classification model, we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input. This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data. End of explanation """ well_data = pd.read_csv('../validation_data_nofacies.csv') well_data['Well Name'] = well_data['Well Name'].astype('category') well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1) """ Explanation: The data needs to be scaled using the same constants we used for the training data. End of explanation """ X_unknown = scaler.transform(well_features) """ Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe. End of explanation """ #predict facies of unclassified data y_unknown = clf.predict(X_unknown) well_data['Facies'] = y_unknown well_data well_data['Well Name'].unique() """ Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation """ well_data.to_csv('SHPR_FirstAttempt_LogisticsRegression_facies.csv') """ Explanation: Finally we can write out a csv file with the well data along with the facies classification results. End of explanation """
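To make the two metrics used above concrete, here is a tiny worked example on a hypothetical 3x3 confusion matrix (the numbers and the adjacency lists are made up for illustration; the logic mirrors the accuracy() and accuracy_adjacent() functions defined earlier):

```python
# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted.
conf = [
    [8, 2, 0],   # true class 0
    [1, 7, 2],   # true class 1
    [0, 3, 7],   # true class 2
]
# For illustration, class 0 is "adjacent" to 1, class 1 to 0 and 2, class 2 to 1.
adjacent = [[1], [0, 2], [1]]

total = sum(sum(row) for row in conf)      # 30 predictions in total
exact = sum(conf[i][i] for i in range(3))  # diagonal: 8 + 7 + 7 = 22
near = exact + sum(conf[i][j] for i in range(3) for j in adjacent[i])

print(exact / total)   # exact accuracy: 22/30
print(near / total)    # adjacent accuracy: (22 + 2 + 1 + 2 + 3)/30 = 30/30
```

Every misclassification in this toy matrix lands in an adjacent class, so the adjacent accuracy reaches 1.0 even though the exact accuracy is only about 0.73 -- the same effect seen in the blind-well results above.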
Danghor/Formal-Languages
Python/DFA-2-RegExp.ipynb
gpl-2.0
def arb(S): for x in S: return x """ Explanation: Converting a Deterministic <span style="font-variant:small-caps;">Fsm</span> into a Regular Expression Given a set S, the function arb(S) returns an arbitrary member of S. End of explanation """ def regexp_sum(S): n = len(S) if n == 0: return 0 elif n == 1: return arb(S) else: r = arb(S) return ('+', r, regexp_sum(S - { r })) """ Explanation: The function regexp_sum takes a set $S = \{ r_1, \cdots, r_n \}$ of regular expressions as its argument. It returns the regular expression $$ r_1 + \cdots + r_n. $$ End of explanation """ def rpq(p1, p2, Σ, 𝛿, Allowed): if Allowed == set(): AllChars = { c for c in Σ if 𝛿.get((p1, c)) == p2 } r = regexp_sum(AllChars) if p1 == p2: if AllChars == set(): return '' else: return ('+', '', r) else: return r else: q = arb(Allowed) RestAllowed = Allowed - { q } rp1p2 = rpq(p1, p2, Σ, 𝛿, RestAllowed) rp1q = rpq(p1, q, Σ, 𝛿, RestAllowed) rqq = rpq( q, q, Σ, 𝛿, RestAllowed) rqp2 = rpq( q, p2, Σ, 𝛿, RestAllowed) return ('+', rp1p2, ('&', ('&', rp1q, ('*', rqq)), rqp2)) """ Explanation: The function rpq assumes that an <span style="font-variant:small-caps;">Fsm</span> $$ F = \langle \texttt{States}, \Sigma, \delta, \texttt{q0}, \texttt{Accepting} \rangle $$ is given and takes five arguments: - p1 and p2 are states of the <span style="font-variant:small-caps;">Fsm</span> $F$, - $\Sigma$ is the alphabet of the <span style="font-variant:small-caps;">Fsm</span>, - $\delta$ is the transition function of the <span style="font-variant:small-caps;">Fsm</span> $F$, and - Allowed is a subset of the set States. The function rpq computes a regular expression that describes those strings that take the <span style="font-variant:small-caps;">Fsm</span> $F$ from the state p1 to state p2. When $F$ switches states from p1 to p2 only states in the set Allowed may be visited in-between the states p1 and p2. The function is defined by recursion on the set Allowed.
There are two cases: - $\texttt{Allowed} = \{\}$. Define AllChars as the set of all characters that, when read by $F$ in the state p1, cause $F$ to enter the state p2: $$ \texttt{AllChars} = \{ c \in \Sigma \mid \delta(p_1, c) = p_2 \} $$ Then we need a further case distinction: - $p_1 = p_2$: In this case we have: $$ \texttt{rpq}(p_1, p_2, \{\}) := \sum\limits_{c\in\texttt{AllChars}} c \;+\; \varepsilon$$ If $\texttt{AllChars} = \{\}$ the sum $\sum\limits_{c\in\texttt{AllChars}} c$ is to be interpreted as the regular expression $\emptyset$ that denotes the empty language. Otherwise, if $\texttt{AllChars} = \{c_1,\cdots,c_n\}$ we have $\sum\limits_{c\in\texttt{AllChars}} c = c_1 + \cdots + c_n$. - $p_1 \not= p_2$: In this case we have: $$ \texttt{rpq}(p_1, p_2, \{\}) := \sum\limits_{c\in\texttt{AllChars}} c $$ - $\texttt{Allowed} = \{ q \} + \texttt{RestAllowed}$. In this case we recursively define the following variables: $\texttt{rp1p2} := \texttt{rpq}(p_1, p_2, \Sigma, \delta, \texttt{RestAllowed})$, $\texttt{rp1q} := \texttt{rpq}(p_1, q, \Sigma, \delta, \texttt{RestAllowed})$, $\texttt{rqq} := \texttt{rpq}(q, q, \Sigma, \delta, \texttt{RestAllowed})$, $\texttt{rqp2} := \texttt{rpq}(q, p_2, \Sigma, \delta, \texttt{RestAllowed})$. Then we can define: $$ \texttt{rpq}(p_1, p_2, \texttt{Allowed}) := \texttt{rp1p2} + \texttt{rp1q} \cdot \texttt{rqq}^* \cdot \texttt{rqp2} $$ This formula can be understood as follows: If a string $w$ is read in state $p_1$ and reading this string takes the <span style="font-variant:small-caps;">Fsm</span> $F$ from the state $p_1$ to the state $p_2$ while visiting only states from the set Allowed in-between, then there are two cases: - Reading $w$ does not visit the state $q$ in-between. Hence the string $w$ can be described by the regular expression rp1p2.
- The string $w$ can be written as $w = t u_1 \cdots u_n v$ where: - reading $t$ in the state $p_1$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ into the state $q$, - for all $i \in \{1,\cdots,n\}$ reading $u_i$ in the state $q$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ from $q$ to $q$, and - reading $v$ in the state $q$ takes the <span style="font-variant:small-caps;">Fsm</span> $F$ into the state $p_2$. End of explanation """ def dfa_2_regexp(F): States, Σ, 𝛿, q0, Accepting = F r = regexp_sum({ rpq(q0, p, Σ, 𝛿, States) for p in Accepting }) return r """ Explanation: The function dfa_2_regexp takes a deterministic <span style="font-variant:small-caps;">Fsm</span> $F$ and computes a regular expression $r$ that describes the same language as $F$, i.e. we have $$ L(F) = L(r). $$ Furthermore, it tries to simplify the regular expression $r$ using some algebraic rules. End of explanation """
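As a sketch of how these functions fit together, the following hypothetical example (restating the notebook's definitions with ASCII names so it runs standalone) converts a two-state DFA that accepts exactly the strings over {a, b} ending in b, and then checks the resulting expression against the DFA's language with a small brute-force interpreter for the regexp trees:

```python
def arb(S):
    for x in S:
        return x

def regexp_sum(S):
    if len(S) == 0:
        return 0
    if len(S) == 1:
        return arb(S)
    r = arb(S)
    return ('+', r, regexp_sum(S - {r}))

def rpq(p1, p2, Sigma, delta, Allowed):
    if Allowed == set():
        AllChars = {c for c in Sigma if delta.get((p1, c)) == p2}
        r = regexp_sum(AllChars)
        if p1 == p2:
            return '' if AllChars == set() else ('+', '', r)
        return r
    q = arb(Allowed)
    Rest = Allowed - {q}
    return ('+', rpq(p1, p2, Sigma, delta, Rest),
                 ('&', ('&', rpq(p1, q, Sigma, delta, Rest),
                              ('*', rpq(q, q, Sigma, delta, Rest))),
                       rpq(q, p2, Sigma, delta, Rest)))

def dfa_2_regexp(F):
    States, Sigma, delta, q0, Accepting = F
    return regexp_sum({rpq(q0, p, Sigma, delta, States) for p in Accepting})

def lang(r, n):
    """All strings of length <= n in the language of the regexp tree r."""
    if r == 0:                 # empty language
        return set()
    if r == '':                # epsilon
        return {''}
    if isinstance(r, str):     # single character
        return {r}
    if r[0] == '+':
        return lang(r[1], n) | lang(r[2], n)
    if r[0] == '&':
        return {u + v for u in lang(r[1], n) for v in lang(r[2], n)
                if len(u + v) <= n}
    # r[0] == '*': iterate concatenation with the base language up to length n
    result, base = {''}, lang(r[1], n)
    while True:
        new = {u + v for u in result for v in base if len(u + v) <= n}
        if new <= result:
            return result
        result |= new

# DFA over {a, b} accepting exactly the strings that end in 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
F = ({0, 1}, {'a', 'b'}, delta, 0, {1})
r = dfa_2_regexp(F)

words = ['', 'a', 'b', 'ab', 'ba', 'bb', 'aab', 'aba', 'bab']
print(all((w in lang(r, 3)) == w.endswith('b') for w in words))  # True
```

The resulting expression is large and unsimplified (every recursive case adds a `+` and two `&` nodes), which is why simplification by algebraic rules pays off in practice.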
ES-DOC/esdoc-jupyterhub
notebooks/ncar/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: NCAR Source ID: SANDBOX-1 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:22 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. 
Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. 
Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. 
Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. 
Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. 
General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. 
Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. 
Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --&gt; Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --&gt; Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. 
Run Configuration
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --&gt; Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. 
Calculation Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. 
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. 
Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
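Every property cell in this notebook follows the same fill-in pattern: point DOC at a property id, then record a value. As a minimal sketch of that pattern — the value below is an arbitrary pick from that cell's "Valid Choices" list for illustration, not a recommendation for any particular model:

```python
# Illustrative only: document one ENUM property by selecting one of the
# valid choices listed in its cell, then DOC records it for this model.
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
DOC.set_value("stratospheric aerosols optical thickness")
```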
repo_name: turbomanage/training-data-analyst
path: courses/machine_learning/deepdive/05_review/labs/3_tensorflow_wide_deep.ipynb
license: apache-2.0
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = "cloud-training-bucket"  # Replace with your BUCKET
REGION = "us-central1"  # Choose an available region for Cloud MLE
TFVERSION = "1.14"  # TF version for CMLE to use

import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION

%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
    gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: Create TensorFlow Wide and Deep Model
Learning Objective
- Create a Wide and Deep model using the high-level Estimator API
- Determine which features to use as wide columns and which to use as deep columns
Introduction
In this notebook, we'll explore modeling our data using a Wide & Deep Neural Network. As before, we can do this using the high-level Estimator API in TensorFlow. Have a look at the various other models available through the Estimator API in the documentation here. In particular, have a look at the implementation for Wide & Deep models.
Start by setting the environment variables related to your project.
End of explanation
"""
%%bash
ls *.csv
"""
Explanation: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval.
End of explanation
"""
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Create TensorFlow model using TensorFlow's Estimator API
We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps.
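To see the shape of what the decode step will do before writing the TensorFlow version, here is a plain-Python sketch. The column names below are made-up stand-ins for illustration only, not the lab's actual columns:

```python
# Plain-Python analogue of the decode step: split one CSV line, zip the
# values with the column names, and pop out the label. In the lab,
# tf.decode_csv does the splitting (with type conversion) inside the graph.
TOY_COLUMNS = ["label_col", "feature_a", "feature_b"]  # hypothetical names
TOY_LABEL = "label_col"

def toy_decode_csv(line):
    values = line.split(",")
    features = dict(zip(TOY_COLUMNS, values))
    label = features.pop(TOY_LABEL)
    return features, label

print(toy_decode_csv("7.25,foo,42"))
# ({'feature_a': 'foo', 'feature_b': '42'}, '7.25')
```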
End of explanation
"""
# Determine CSV, label, and key columns
CSV_COLUMNS = # TODO: Your code goes here
LABEL_COLUMN = # TODO: Your code goes here

# Set default values for each CSV column
DEFAULTS = # TODO: Your code goes here
TRAIN_STEPS = 1000
"""
Explanation: Exercise 1
To begin creating our TensorFlow model, we need to set up variables that determine the csv column values, the label column and the key column. Fill in the TODOs below to set these variables. Note: CSV_COLUMNS should be a list and LABEL_COLUMN should be a string. It is important to get the column names in the correct order as they appear in the csv train/eval/test sets. If necessary, look back at the previous notebooks to see how these csv files were created and ensure you have the correct ordering.
We also need to set DEFAULTS for each of the CSV column values we prescribe. This will also be a list, with entries that vary depending on the data type of the corresponding csv column. Have a look back at the previous examples to ensure you have the proper formatting.
End of explanation
"""
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename_pattern, mode, batch_size = 512):
    def _input_fn():
        def decode_csv(line_of_text):
            columns = # TODO: Your code goes here
            features = # TODO: Your code goes here
            label = # TODO: Your code goes here
            return features, label

        # Create list of files that match pattern
        file_list = # TODO: Your code goes here

        # Create dataset from file list
        dataset = # TODO: Your code goes here

        # In training mode, shuffle the dataset and repeat indefinitely
        # TODO: Your code goes here

        # This will now return batches of features, label
        dataset = # TODO: Your code goes here
        return dataset
    return _input_fn
"""
Explanation: Create the input function
Now we are ready to create an input function using the Dataset API.
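As a reminder of what the shuffle, repeat and batch stages each contribute, here is a plain-Python analogue of the pipeline. It is only a sketch of the semantics, not the tf.data code the exercise asks for:

```python
import random

def toy_pipeline(examples, batch_size, training, num_epochs=1, seed=0):
    """Plain-Python sketch of shuffle -> repeat -> batch semantics."""
    data = list(examples)
    if training:
        random.Random(seed).shuffle(data)  # shuffle only when training
        data = data * num_epochs           # "repeat" for several epochs
    # "batch": emit fixed-size groups; the last group may be smaller
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

print(toy_pipeline([1, 2, 3, 4, 5], batch_size=2, training=False))
# [[1, 2], [3, 4], [5]]
```

tf.data does all of this lazily over files rather than in-memory lists, but the grouping into batches of features and labels is the same idea.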
Exercise 2 In the code below you are asked to complete the TODOs to create the input function for our model. Look back at the previous examples we have completed if you need a hint as to how to complete the missing fields below. In the first block of TODOs, your decode_csv file should return a dictionary called features and a value label. In the next TODO, use tf.gfile.Glob to create a list of files that match the given filename_pattern. Have a look at the documentation for tf.gfile.Glob if you get stuck. In the next TODO, use tf.data.TextLineDataset to read text file and apply the decode_csv function you created above to parse each row example. In the next TODO you are asked to set up the dataset depending on whether you are in TRAIN mode or not. (Hint: Use tf.estimator.ModeKeys.TRAIN). When in TRAIN mode, set the appropriate number of epochs and shuffle the data accordingly. When not in TRAIN mode, you will use a different number of epochs and there is no need to shuffle the data. Finally, in the last TODO, collect the operations you set up above to produce the final dataset we'll use to feed data into our model. Have a look at the examples we did in the previous notebooks if you need inspiration. End of explanation """ def get_wide_deep(): # Define column types fc_is_male,fc_plurality,fc_mother_age,fc_gestation_weeks = [# TODO: Your code goes here] # Bucketized columns fc_age_buckets = # TODO: Your code goes here fc_gestation_buckets = # TODO: Your code goes here # Sparse columns are wide, have a linear relationship with the output wide = [# TODO: Your code goes here] # Feature cross all the wide columns and embed into a lower dimension fc_crossed = # TODO: Your code goes here fc_embed = # TODO: Your code goes here # Continuous columns are deep, have a complex relationship with the output deep = [# TODO: Your code goes here] return wide, deep """ Explanation: Create the feature columns Next, define the feature columns. 
For a wide and deep model, we need to determine which features we will use as wide features and which to pass as deep features. The function get_wide_deep below will return a tuple containing the wide feature columns and deep feature columns. Have a look at this blog post on wide and deep models to remind yourself how best to describe the features. Exercise 3 There are different ways to set up the feature columns for our Wide & Deep model. In the cell below, we create a function called get_wide_deep. It has no arguments but returns a tuple containing two things: a list of our wide feature columns, and a list of our deep feature columns. In the first block of TODOs below, you are asked to create a list and assing the necessary feature columns for the features is_male, plurality, mother_age and gestation_weeks. Think about the nature of these features and make sure you use the appropriate tf.feature_column. In the next TODO, you will create the bucketized features for mother_age and gestation_weeks. Think about a values to set for the boundaries argument for these feature columns. Hint: use np.arange([start],[stop],[step]).tolist() to easily create boundaries. Next, create a list of the appropriate feature columns you created above to define the wide columns of our model. In the next two TODOs, create a crossed feature column that uses all of the wide columns you created above. You'll want to use a large enough hash_bucket_size to ensure there aren't collisions. Then, use that crossed feature column to create a feature column that embeds fc_crossed into a lower dimension. 
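To build intuition for the bucketized columns you are about to create, here is a plain-Python stand-in for what tf.feature_column.bucketized_column does with a boundaries list. It is a conceptual sketch only (the boundary range here is an assumption), not the TensorFlow implementation:

```python
# A boundaries list such as np.arange(15, 45, 1).tolist() is just:
boundaries = list(range(15, 45))

def bucketize(value, boundaries):
    # Bucket index = number of boundaries the value has reached or passed,
    # so values below every boundary fall into bucket 0.
    return sum(value >= b for b in boundaries)

print(bucketize(14, boundaries))  # 0  (below all boundaries)
print(bucketize(25, boundaries))  # 11 (passed boundaries 15..25)
```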
Finally, collect the deep feature columns you created into a single list called deep.
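To see why the crossed column needs a large hash_bucket_size, here is a conceptual plain-Python sketch of feature crossing with hashing. The bucket size and the value strings are assumptions, and TensorFlow's actual hash function differs:

```python
def cross_and_hash(values, hash_bucket_size=20000):
    # Join the categorical values into one composite key, then hash the key
    # into a fixed number of buckets; distinct crosses may collide by design.
    key = "_X_".join(str(v) for v in values)
    return hash(key) % hash_bucket_size

bucket = cross_and_hash(["False", "Single(1)", "bucket_12", "bucket_24"])
assert 0 <= bucket < 20000
```

The embedding column then maps each bucket id to a short dense vector, which is what lets the deep part of the network consume the cross.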
Exercise 5 In the cell below, complete the TODOs to create our model for training. - First you must create your estimator using tf.estimator.DNNLinearCombinedRegressor. - Next, complete the code to set up your tf.estimator.TrainSpec, selecting the appropriate input function and dataset to use to read data to your function during training. - Next, set up your exporter and tf.estimator.EvalSpec. - Finally, pass the variables you created above to call tf.estimator.train_and_evaluate Be sure to check the documentation for these Tensorflow operations to make sure you set things up correctly. End of explanation """ # Run the model shutil.rmtree(path = "babyweight_trained_wd", ignore_errors = True) # start fresh each time train_and_evaluate("babyweight_trained_wd") """ Explanation: Finally, we train the model! End of explanation """
ziky5/F4500_Python_pro_fyziky
lekce_04/Kapr_v_medu.ipynb
mit
import pandas as pd data = pd.read_csv('data.csv') data """ Explanation: Kapr v medu moto: Spadne kapr do medu a říká: "Hustý, to je hustý..." Z.Janák, písemka z TM Osnova Úvod Alias vs hodnota String Mutanti a nemutanti Práce se souborem Elegance pythonu Závěrečné cvičení Úvod V této lekci se ponoříme (zabředneme) do úplných základů Pythonu. Daná tématika se může zdát možná až příliš abstraktní a v praxi nepoužitelná, nicméně opak je pravdou! Uvidíme, že díky solidním základům bude například načítání dat ze souboru jednoduchou a přímočarou záležitostí. Na začátku zavzpomínáme na (dnes již klasický) Fišerův problém. Data jsme tehdy měli uložená v csv souboru data.csv. Vše jsme načetli jednoduše pomocí pandy: End of explanation """ data['T'] """ Explanation: Pokud vzpomínáte, tak ke sloupci T jsme přistupovali takto: End of explanation """ data[T] """ Explanation: Proč jsme nemohli jednoduše vykonat následující? End of explanation """ T = 'T' #alias v hodnota """ Explanation: Alias v hodnota End of explanation """ 3 = 'T' """ Explanation: Alias je (zhruba řečeno) pojmenované místo v paměti počítače. Tedy pomocí T označujeme při běhu programu místo v paměti, kam jsme si již předtím něco uložili (v tomto případě "písmeno" T). Ne vše může být aliasem (např. žádná proměnná nemůže začínat číslem): End of explanation """ 300 """ Explanation: Naopak následující příkaz: End of explanation """ x = 1 # zde vznikne místo v paměti, do které se uloží číslo 1 a proměnná x ukazuje na to místo x = x + 1 # zde se vezme obsah proměnné x (číslo 1) a provede se operace součtu, # výsledek se uloží se do x x # výsledek je pochopitelně 2 """ Explanation: právě vytvořil nové "místo" v paměti počítače, kam uložil hodnotu (v binární podobě), která reprezentuje přirozené číslo 300. Že jsme si místo (jeho adresu) nezapamatovali, je náš problém. Nyní už neexistuje žádný způsob, jak zjistit, kde vlastně je ono místo a tudíž jej nemůžeme použít dále při běhu programu. 
Jupyter even makes it quite clear that we did not store that location in a variable (see Out[6]: 300). This brings us to what the symbol = actually means in source code. It is the so-called assignment operator: it gives the alias a place in memory that the alias then "points" to. Let us now break down in detail what the following means:
End of explanation """ 'Hroch' != 'Zikán' """ Explanation: Stringy lze opět porovnávat, v tomto případě použijeme operátor nerovnosti: End of explanation """ 'hroch' == 'zvíře' """ Explanation: Naopak následující může být trochu překvapivé (Python pochopitelně porovnává jednotlivé znaky, sémantice nerozumí): End of explanation """ s = 'Panda' print(s[0]) print(s[1:5]) """ Explanation: Jako dobrá představa stringu je seznam (list) znaků. Vskutku, se stringem lze zacházet obdobně jako se seznamem: End of explanation """ 'Hroch' + ' a ' + 'Panda' """ Explanation: Dva či více stringů lze spojit pomocí operátoru +: End of explanation """ s[0] = 'F' """ Explanation: Podobnost s listem však pokulhává v jedné zásadní věci: End of explanation """ l1 = ['a', 'a', 'n', 'd'] l1.append('a') """ Explanation: Mutanti Souhrnným označením mutanti se označují datové typy, které se mohou měnit (mutable data type). Jeden takový už známe. Vzpomenete si který? End of explanation """ l1[0] = 'P' l1 """ Explanation: Jak už víte, prvky seznamu (narozdíl od stringu) můžeme měnit: End of explanation """ l2 = l1 l2.append('!') l1 """ Explanation: Dokážete však vysvětlit následující? End of explanation """ l2 # l2 se změnil také """ Explanation: Jak je možné, že se změnil list l1, když jsme modifikovali l2? Pro jistotu: End of explanation """ l1 is l2 """ Explanation: Odpověď je poměrně jednoduchá - l1 i l2 ukazují na stejné místo v paměti (jedná se o stejné "objekty"): End of explanation """ l3 = l1.copy() l3 l3.append('!') l1 l3 """ Explanation: Pokud bychom chtěli vytvořit opravdovou kopii, tak aby se nám změny neprojevily na l1: End of explanation """ l3 is l1 """ Explanation: Pro jistotu se ještě přesvědčíme, že se opravdu nejedná o stejné objekty: End of explanation """ s = 'Panda' s[1:] 'F' + s[1:] """ Explanation: Nemutanti Jak jsme viděli před chvílí, stringy nelze modifikovat. 
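It is worth checking that such a string really stores the line break as a newline character; a small illustrative example:

```python
# A string written across two lines contains a real newline character.
s = '''Hroch
a Panda'''
assert '\n' in s
assert s.splitlines() == ['Hroch', 'a Panda']
```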
If we wanted to change the string 'Panda' to 'Fanda', we would have no choice but to create a new string:
We have no choice but to transform the data ourselves. There are many ways to do this; the one we show now is not the most elegant, but it is very instructive. First, we split the data into individual lines:
This is what the float function is for:
Our make_dataframe function can therefore be adjusted, for example, as follows:
aneeshsathe/DataAndImageAnalysisForBiologists
Notebooks/2-Playing with Images.ipynb
mit
import os import glob root_root = '/home/aneesh/Images/Source/' dir_of_root = os.listdir(root_root) file_paths = [glob.glob(os.path.join(root_root,dor, '*.tif')) for dor in dir_of_root] print(file_paths[0][0]) """ Explanation: Make me an Image Analyst already! In the last lesson you learnt the basics of Python. You learnt what variables are, how loops function and how to get a list of our filenames. In this lesson we will learn how to read and display images, followed by how to identify objects of interest in the image. Since the information we generated in the other notebook can't be shared with this notebook, let's regenerate our file list of images. We didn't mention in the last lesson that it is customary to put all the import statements at the top of a program, although this isn't a requirement it helps to keep the code organized. End of explanation """ from skimage.io import imread, imshow %matplotlib inline in_img = imread(file_paths[0][0]) imshow(in_img) """ Explanation: Reading Images In the last cell we printed the name of the first image file in the Type1 directory. We will use this as our test file for now. To read images anaconda comes with the sci-kit image package, which has many useful functions for image analysis. Among them are the functions imread() and imshow() these readn and display the image respectively. As with os and glob the scikit image package has to be imported, but in this case we won't be using many functions from the package so we will only import the imread() and imshow() functions. you will notice a strange looking line: %matplotlib inline this is actually meant for the jupyter notebook. This tells the notebook to display the images within the page. End of explanation """ import matplotlib.cm as cm imshow(in_img, cmap=cm.gray) """ Explanation: If you open the corresponding image in Fiji you will see that its displayed in grayscale, while here it looks blue-green. 
This is because the imshow() function applies something called a lookup table (LUT) to help our eyes distinguish the high and low intensities better. If you would still like to view it as a gray image, we can change the display parameters as below:
GIVE ME ALLLLL THE BRIGHT THINGS How will you figure out what you count as bright? One way is to open the image in imageJ and see wha the approximate intensities of images are. We will try a little dumber method here, but learn a few things about arrays along the way Like lists arrays are a kind of object in python. They are a lot more powerful than lists when it comes to manipulation but come with some drawbacks. Usually we end up using a list, an array, or something else as the situation demands. Arrays aren't a default part of Python, but are introduced by the Numpy package. The next line is one that you will have at the top of nearly every program you write: End of explanation """ myarray = np.array([[1,2,3], [1,2,3], [1,2,3]]) myarray """ Explanation: We have just imported numpy but decided to refer to it as np. You can replace 'np' with anything you like but np is the standard used everywhere, so I suggest you stick to it. This helps in things like copy-pasting code from the internet ;) let's declare a small numpy array and play with it a little bit. End of explanation """ imshow(myarray, cmap=cm.gray) """ Explanation: You might notice that the object passed to the np.array() function is a list. The np.array() function takes the list and converts it to an array. You know it's an array because it says 'array' when its printed. This is the same thing that was printed when we called our image variable. So, we can safely say that our image is also an array. Anything that we do with this small array here can also be done to our bigger image array. In fact we can even treat our little array as an image and see how it looks: End of explanation """ myarray>1 """ Explanation: With the image we would like to find all the pixels that are bright. In the case of our small array, we can decide that everything bigger than 1 is a bright pixel. Using this as our criteria, we can as the computer to identify the pixels for us. How you ask? 
Remember the booleans from the last lesson? End of explanation """ from skimage.filters import threshold_otsu my_array_thresh = threshold_otsu(myarray) print(my_array_thresh) print(myarray>my_array_thresh) imshow(myarray>my_array_thresh) img_thresh = threshold_otsu(in_img) print(img_thresh) bw_img = in_img>img_thresh imshow(bw_img) """ Explanation: We get a boolean array which is full of true and false values corresponding to our question: is this number bigger than 1 Boolean arrays have some interesting properties, one of which is that you can multiply them with regular arrays. The True values are treated as 1 and the False as 0. However, we will not go into that right now. Right now we'd like to know of a good method to identify our nuclei. Otsu: the man on the threshold. In the last section when we asked the computer which values are greater than 1 what we did was set the threshold as '1'. But this isn't always straighforward especially with images where the intensities can vary a lot. This means that if you painfully find the threshold for one image there is no gurantee that it will work for the next image. What we need to do is to use the characteristics of the image to determine what the appropriate threshold is. Otsu's method is a very effective way of doing this. I won't go into the details of the method but I highly reccomend reading about it: https://en.wikipedia.org/wiki/Otsu%27s_method The nice people behind the Scikit Image package have already made a function to determine the otsu threshold as shown below. This can alse be applied to our image. End of explanation """ from skimage.measure import label label_im = label(bw_img) imshow(label_im) """ Explanation: Otsu's method identifies '560' as being a good threhold for our image to identify the brighter objects and it seems to do a good job! The next step is to get the area of the first nucleus.... but which is the first nucleus?? 
Tag 'em and bag 'em How do we decide which the first nucleus to be analysed should be? There is no inherent numbering and if you look at the bw_img its just true/false values. One option is to number all the objects based on if they are connected to other pixels of the same value. This is called labeling. scikit image has a function for this. The label() function takes in a black and white image and returns an image where each object has been given a number, which is interpreted as an intensity by the imshow function. End of explanation """ from skimage.measure import regionprops r_props = regionprops(label_im, in_img) #print length of number r_props to determine number of objects measured. len(r_props) """ Explanation: Now we have a way to determine the order in which objects are measured. The function to measure objects is called regionprops(). This function takes in the labelled image and the intensity image as an input and returns an object which has measurements of each of the objects. End of explanation """ # Area for the first object print(r_props[0].area) # Area for the first object print(r_props[1].area) # Area for the first object print(r_props[2].area) """ Explanation: Unlike the other data types which we have seen so far regionprops gives us a list of named properties these can be accessed as below: End of explanation """ help(regionprops) """ Explanation: Note: Regionprops returns many properties a full list of the properties is available in the help file. The help for any imported function can be accessed by running the help(regionprops) command. End of explanation """ all_areas = [rp.area for rp in r_props] print(all_areas) """ Explanation: We can also use list comprehension to collect all the measured areas into a single list. End of explanation """
michalkurka/h2o-3
h2o-py/demos/H2O_tutorial_medium_NOPASS.ipynb
apache-2.0
import pandas as pd import numpy from numpy.random import choice from sklearn.datasets import load_boston from h2o.estimators.random_forest import H2ORandomForestEstimator import h2o h2o.init() # transfer the boston data from pandas to H2O boston_data = load_boston() X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names) X["Median_value"] = boston_data.target X = h2o.H2OFrame.from_python(X.to_dict("list")) # select 10% for valdation r = X.runif(seed=123456789) train = X[r < 0.9,:] valid = X[r >= 0.9,:] h2o.export_file(train, "Boston_housing_train.csv", force=True) h2o.export_file(valid, "Boston_housing_test.csv", force=True) """ Explanation: H2O Tutorial Author: Spencer Aiello Contact: spencer@h2oai.com This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms. Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai. Setting up your system for this demo The following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory End of explanation """ %matplotlib inline import matplotlib.pyplot as plt """ Explanation: Enable inline plotting in the Jupyter Notebook End of explanation """ fr = h2o.import_file("Boston_housing_train.csv") """ Explanation: Intro to H2O Data Munging Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store. End of explanation """ fr.head() """ Explanation: View the top of the H2O frame. 
End of explanation """ fr.tail() """ Explanation: View the bottom of the H2O Frame End of explanation """ fr["CRIM"].head() # Tab completes """ Explanation: Select a column fr["VAR_NAME"] End of explanation """ columns = ["CRIM", "RM", "RAD"] fr[columns].head() """ Explanation: Select a few columns End of explanation """ fr[2:7,:] # explicitly select all columns with : """ Explanation: Select a subset of rows Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection. End of explanation """ # The columns attribute is exactly like Pandas print("Columns:", fr.columns, "\n") print("Columns:", fr.names, "\n") print("Columns:", fr.col_names, "\n") # There are a number of attributes to get at the shape print("length:", str( len(fr) ), "\n") print("shape:", fr.shape, "\n") print("dim:", fr.dim, "\n") print("nrow:", fr.nrow, "\n") print("ncol:", fr.ncol, "\n") # Use the "types" attribute to list the column types print("types:", fr.types, "\n") """ Explanation: Key attributes: * columns, names, col_names * len, shape, dim, nrow, ncol * types Note: Since the data is not in local python memory there is no "values" attribute. If you want to pull all of the data into the local python memory then do so explicitly with h2o.export_file and reading the data into python memory from disk. End of explanation """ fr.shape """ Explanation: Select rows based on value End of explanation """ mask = fr["CRIM"]>1 fr[mask,:].shape """ Explanation: Boolean masks can be used to subselect rows based on a criteria. End of explanation """ fr.describe() """ Explanation: Get summary statistics of the data and additional data distribution information. 
End of explanation """ x = fr.names[:] y="Median_value" x.remove(y) """ Explanation: Set up the predictor and response column names Using H2O algorithms, it's easier to reference predictor and response columns by name in a single frame (i.e., don't split up X and y) End of explanation """ # Define and fit first 400 points model = H2ORandomForestEstimator(seed=42) model.train(x=x, y=y, training_frame=fr[:400,:]) model.predict(fr[400:fr.nrow,:]) # Predict the rest """ Explanation: Machine Learning With H2O H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented. Unlike Scikit-learn, H2O allows for categorical and missing data. The basic work flow is as follows: * Fit the training data with a machine learning algorithm * Predict on the testing data Simple model End of explanation """ perf = model.model_performance(fr[400:fr.nrow,:]) perf.r2() # get the r2 on the holdout data perf.mse() # get the mse on the holdout data perf # display the performance object """ Explanation: The performance of the model can be checked using the holdout dataset End of explanation """ r = fr.runif(seed=12345) # build random uniform column over [0,1] train= fr[r<0.75,:] # perform a 75-25 split test = fr[r>=0.75,:] model = H2ORandomForestEstimator(seed=42) model.train(x=x, y=y, training_frame=train, validation_frame=test) perf = model.model_performance(test) perf.r2() """ Explanation: Train-Test Split Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data. 
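The runif-based split used above can be sketched in plain Python: draw one uniform number per row and route each row by comparing it against the split threshold. This is only a sketch of the idea — H2O computes the uniform column lazily on the cluster:

```python
import random

rng = random.Random(12345)            # fixed seed, like runif(seed=12345)
rows = list(range(1000))              # stand-in row indices
r = [rng.random() for _ in rows]      # one uniform draw per row

train = [row for row, u in zip(rows, r) if u < 0.75]
test = [row for row, u in zip(rows, r) if u >= 0.75]
```

Each row lands in exactly one split, and the train fraction is close to 0.75 in expectation.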
End of explanation
"""
model = H2ORandomForestEstimator(nfolds=10)  # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals])  # iterate over the xval models using the xvals attribute
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
"""
Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Cross validation
H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to one of:
* AUTO: Perform random assignment
* Random: Each row has an equal (1/nfolds) chance of being in any fold.
* Modulo: Observations are assigned to folds by taking the row index modulo nfolds
End of explanation
"""
from sklearn.model_selection import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
"""
Explanation: However, you can still make use of the cross_val_score from Scikit-Learn
Cross validation: H2O and Scikit-Learn
End of explanation
"""
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score)  # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42)  # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
"""
Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold.
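The fold_assignment strategies can be sketched in plain Python. Modulo is deterministic and perfectly balanced, while Random is only balanced in expectation (a sketch of the idea, not H2O's implementation):

```python
import random

nfolds, nrows = 10, 1000

# Modulo: row i goes to fold i % nfolds -- deterministic and balanced.
modulo_folds = [i % nfolds for i in range(nrows)]

# Random: each row has an equal 1/nfolds chance of landing in any fold.
rng = random.Random(0)
random_folds = [rng.randrange(nfolds) for _ in range(nrows)]

modulo_counts = [modulo_folds.count(f) for f in range(nfolds)]
```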
Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method. End of explanation """ h2o.__PROGRESS_BAR__=False h2o.no_progress() """ Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage. Since the progress bar print out gets annoying let's disable that End of explanation """ from sklearn import __version__ sklearn_version = __version__ print(sklearn_version) """ Explanation: Grid Search Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties) Randomized grid search: H2O and Scikit-Learn End of explanation """ %%time from sklearn.model_selection import RandomizedSearchCV # Import grid search from scipy.stats import randint, uniform model = H2ORandomForestEstimator(seed=42) # Define model params = {"ntrees": randint(20,30), "max_depth": randint(1,10), "min_rows": randint(1,10), # scikit's min_samples_leaf "mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer custom_cv = H2OKFold(fr, n_folds=5, seed=42) # make a cv random_search = RandomizedSearchCV(model, params, n_iter=10, scoring=scorer, cv=custom_cv, random_state=42, n_jobs=1) # Define grid search object random_search.fit(fr[x], fr[y]) print("Best R^2:", random_search.best_score_, "\n") print("Best params:", random_search.best_params_) """ Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions). The steps to perform a randomized grid search: 1. Import model and RandomizedSearchCV 2. Define model 3. Specify parameters to test 4. 
Define grid search object 5. Fit data to grid search object 6. Collect scores All the steps will be repeated from above. Because 0.16.1 is installed, we use scipy to define specific distributions ADVANCED TIP: Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1). We'll turn it back on again in the aftermath of a Parallel job. If you don't want to run jobs in parallel, don't turn off the reference counting. Pattern is: >>> h2o.turn_off_ref_cnts() >>> .... parallel job .... >>> h2o.turn_on_ref_cnts() End of explanation """ def report_grid_score_detail(random_search, charts=True): """Input fit grid search estimator. Returns df of scores with details""" df_list = [] for line in random_search.grid_scores_: results_dict = dict(line.parameters) results_dict["score"] = line.mean_validation_score results_dict["std"] = line.cv_validation_scores.std()*1.96 df_list.append(results_dict) result_df = pd.DataFrame(df_list) result_df = result_df.sort("score", ascending=False) if charts: for col in get_numeric(result_df): if col not in ["score", "std"]: plt.scatter(result_df[col], result_df.score) plt.title(col) plt.show() for col in list(result_df.columns[result_df.dtypes == "object"]): cat_plot = result_df.score.groupby(result_df[col]).mean()[0] cat_plot.sort() cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2)) plt.show() return result_df def get_numeric(X): """Return list of numeric dtypes variables""" return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist() report_grid_score_detail(random_search).head() """ Explanation: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report. 
End of explanation """ %%time params = {"ntrees": randint(30,35), "max_depth": randint(5,8), "mtries": randint(4,6),} custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big # impact on the std of the resulting scores. More random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher n_iter=5, # variation per sample scoring=scorer, cv=custom_cv, random_state=43, n_jobs=1) random_search.fit(fr[x], fr[y]) print("Best R^2:", random_search.best_score_, "\n") print("Best params:", random_search.best_params_) report_grid_score_detail(random_search) """ Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs: End of explanation """ from h2o.transforms.preprocessing import H2OScaler from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA """ Explanation: Transformations Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful. At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames. Basic steps: Remove the response variable from transformations. Import transformer Define transformer Fit train data to transformer Transform test and train data Re-attach the response variable. First let's normalize the data using the means and standard deviations of the training data. Then let's perform a principal component analysis on the training data and select the top 5 components. Using these components, let's use them to reduce the train and test design matrices. 
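The fit-on-train / transform-both discipline can be sketched without H2O: compute the mean and standard deviation from the training column only, then reuse those statistics to transform both splits (the numbers below are illustrative):

```python
def fit_scaler(col):
    # Statistics come from this column only.
    mean = sum(col) / len(col)
    var = sum((v - mean) ** 2 for v in col) / len(col)
    return mean, var ** 0.5

def transform(col, mean, std):
    return [(v - mean) / std for v in col]

train_col = [2.0, 4.0, 6.0, 8.0]
test_col = [3.0, 5.0]

mean, std = fit_scaler(train_col)          # fit on TRAIN only
train_z = transform(train_col, mean, std)
test_z = transform(test_col, mean, std)    # test reuses train statistics
```

The transformed training column is centered at zero, while the test column is whatever the train statistics make it — the test data never informs the fit.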
End of explanation
"""
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")

norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print(X_test_norm.shape)
X_test_norm
"""
Explanation: Normalize Data: Use the means and standard deviations from the training data.
End of explanation
"""
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)

# prop of variance explained by top 5 components?
print(X_test_norm_pca.shape)
X_test_norm_pca[:5]

model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
"""
Explanation: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
End of explanation
"""
from h2o.transforms.preprocessing import H2OScaler
from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA
from sklearn.pipeline import Pipeline  # Import Pipeline <other imports not shown>

model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()),  # Define pipeline as a series of steps
                 ("pca", H2OPCA(k=5)),
                 ("rf", model)])  # Notice the last step is an estimator

pipe.fit(train, y_train)  # Fit training data
y_hat = pipe.predict(test)  # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat)  # Notice the final score is identical to before
"""
Explanation: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Pipelines
"Transformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps:
Import Pipeline, transformers, and model
Define pipeline.
The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest). Fit the training data to pipeline Either transform or predict the testing data End of explanation """ pipe = Pipeline([("standardize", H2OScaler()), ("pca", H2OPCA()), ("rf", H2ORandomForestEstimator(seed=42))]) params = {"standardize__center": [True, False], # Parameters to test "standardize__scale": [True, False], "pca__k": randint(2, 6), "rf__ntrees": randint(10,20), "rf__max_depth": randint(4,10), "rf__min_rows": randint(5,10), } # "rf__mtries": randint(1,4),} # gridding over mtries is # problematic with pca grid over # k above from sklearn.model_selection import RandomizedSearchCV from h2o.cross_validation import H2OKFold from h2o.model.regression import h2o_r2_score from sklearn.metrics.scorer import make_scorer custom_cv = H2OKFold(fr, n_folds=5, seed=42) random_search = RandomizedSearchCV(pipe, params, n_iter=5, scoring=make_scorer(h2o_r2_score), cv=custom_cv, random_state=42, n_jobs=1) random_search.fit(fr[x],fr[y]) results = report_grid_score_detail(random_search) results.head() """ Explanation: This is so much easier!!! But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score. Combining randomized grid search and pipelines "Yo dawg, I heard you like models, so I put models in your models to model models." Steps: Import Pipeline, grid search, transformers, and estimators <Not shown below> Define pipeline Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words. 
Define grid search Fit to grid search End of explanation """ best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline save_path = h2o.save_model(h2o_model, path=".", force=True) print(save_path) # assumes new session my_model = h2o.load_model(path=save_path) my_model.predict(X_test_norm_pca) """ Explanation: Currently Under Development (drop-in scikit-learn pieces): * Richer set of transforms (only PCA and Scale are implemented) * Richer set of estimators (only RandomForest is available) * Full H2O Grid Search Other Tips: Model Save/Load It is useful to save constructed models to disk and reload them between H2O sessions. Here's how: End of explanation """
tensorflow/docs-l10n
site/ko/tutorials/load_data/images.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds

print(tf.__version__)
"""
Explanation: Load and preprocess images
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/images"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial shows two ways to load and preprocess an image dataset. First, you will use high-level Keras preprocessing utilities and layers. Next, you will write your own input pipeline from scratch using tf.data.
Setup
End of explanation
"""
import pathlib

dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url, fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)
"""
Explanation: Download the flowers dataset
This tutorial uses a dataset of several thousand photos of flowers. The flowers dataset contains 5 sub-directories, one per class:
flowers_photos/
daisy/
dandelion/
roses/
sunflowers/
tulips/
Note: All images are licensed CC-BY, and the creators are listed in the LICENSE.txt file.
End of explanation
"""
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
"""
Explanation: After downloading (218MB), you should now have a copy of the flower photos available. There are 3,670 total images.
End of explanation
"""
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))

roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[1]))
"""
Explanation: Each directory contains images of that type of flower. Here are some roses:
End of explanation
"""
batch_size = 32
img_height = 180
img_width = 180
"""
Explanation: Load using tf.keras.preprocessing
Let's load these images off disk using tf.keras.preprocessing.image_dataset_from_directory.
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change.
Create a dataset
Define some parameters for the loader:
End of explanation
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
"""
Explanation: It's good practice to use a validation split when developing your model. You will use 80% of the images for training and 20% for validation.
End of explanation
"""
class_names = train_ds.class_names
print(class_names)
"""
Explanation: You can find the class names in the class_names attribute on these datasets.
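Both calls above derive their subsets from the same shuffled ordering, which is why the same seed=123 must be passed to the training and validation calls. The idea can be sketched with a plain list of (hypothetical) file paths:

```python
import random

# Hypothetical file names standing in for the flower photos.
paths = [f"flower_photos/roses/img_{i}.jpg" for i in range(100)]

def split(paths, validation_split, seed):
    shuffled = list(paths)
    random.Random(seed).shuffle(shuffled)      # same seed -> same ordering
    n_val = int(len(shuffled) * validation_split)
    return shuffled[n_val:], shuffled[:n_val]  # (training, validation)

train_a, val_a = split(paths, 0.2, seed=123)
train_b, val_b = split(paths, 0.2, seed=123)
```

With the same seed, the two calls produce identical, non-overlapping subsets; with different seeds, an image could leak from one subset into the other.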
End of explanation
"""
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
"""
Explanation: Visualize the data
Here are the first 9 images from the training dataset.
End of explanation
"""
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
"""
Explanation: You can train a model using these datasets by passing them to model.fit (shown later in this tutorial). If you like, you can also manually iterate over the dataset and retrieve batches of images.
End of explanation
"""
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
"""
Explanation: The image_batch is a tensor of the shape (32, 180, 180, 3): a batch of 32 images of shape 180x180x3 (the last dimension refers to the RGB color channels). The label_batch is a tensor of the shape (32,); these are the labels corresponding to the 32 images.
Note: You can call .numpy() on either of these tensors to convert them to a numpy.ndarray.
Standardize the data
The RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general, you should seek to make your input values small. Here, you will standardize values to be in the [0, 1] range by using the tf.keras.layers.experimental.preprocessing.Rescaling layer.
End of explanation
"""
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
"""
Explanation: There are two ways to use this layer. You can apply it to the dataset by calling map:
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Or, you can include the layer inside your model definition to simplify deployment. You will use the second approach here.
Note: If you would like to scale pixel values to [-1,1], you can instead write Rescaling(1./127.5, offset=-1).
Note: You previously resized images using the image_size argument of tf.keras.preprocessing.image_dataset_from_directory. If you want to include the resizing logic in your model, you can use the tf.keras.layers.experimental.preprocessing.Resizing layer instead.
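Both rescalings mentioned above are simple affine maps on pixel values of the form y = x * scale + offset, and their endpoints can be checked by hand:

```python
def rescale(x, scale, offset=0.0):
    # Mirrors Rescaling(scale, offset): y = x * scale + offset
    return x * scale + offset

# [0, 255] -> [0, 1]
lo_01, hi_01 = rescale(0, 1 / 255), rescale(255, 1 / 255)

# [0, 255] -> [-1, 1]
lo_11, hi_11 = rescale(0, 1 / 127.5, offset=-1), rescale(255, 1 / 127.5, offset=-1)
```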
Configure the dataset for performance
Let's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:
.cache() keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
.prefetch() overlaps data preprocessing and model execution while training.
Interested readers can learn more about both methods, as well as how to cache data to disk, in the data performance guide.
End of explanation
"""
num_classes = 5

model = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_classes)
])

model.compile(
    optimizer='adam',
    loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
"""
Explanation: Train a model
For completeness, we show how to train a simple model using the datasets you have just prepared. This model has not been tuned in any way; the goal is to show you the mechanics using the datasets you just created. To learn more about image classification, see this tutorial.
End of explanation
"""
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=3
)
"""
Explanation: Note: You will only train for a few epochs, so this tutorial runs quickly.
End of explanation
"""
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)

for f in list_ds.take(5):
    print(f.numpy())
"""
Explanation: Note: You can also write a custom training loop instead of using model.fit. To learn more, see this tutorial.
You may notice the validation accuracy is low compared to the training accuracy, indicating that the model is overfitting. You can learn more about overfitting and how to reduce it in this tutorial.
Using tf.data for finer control
The tf.keras.preprocessing utilities above are a convenient way to create a tf.data.Dataset from a directory of images. For finer-grain control, you can write your own input pipeline using tf.data. This section shows how to do just that, beginning with the file paths from the TGZ file you downloaded earlier.
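The behavior of .cache() described earlier in this section — read each element from the expensive source once, then replay it from memory on later epochs — can be sketched with a small iterator wrapper (an illustration of the idea, not tf.data's implementation):

```python
class CachedDataset:
    def __init__(self, load_fn, keys):
        self._load_fn = load_fn
        self._keys = keys
        self._cache = {}

    def __iter__(self):
        for k in self._keys:
            if k not in self._cache:         # first epoch: hit the source
                self._cache[k] = self._load_fn(k)
            yield self._cache[k]             # later epochs: memory only

loads = []
def slow_load(k):
    loads.append(k)   # record every trip to the "disk"
    return k * 2

ds = CachedDataset(slow_load, [1, 2, 3])
epoch1 = list(ds)
epoch2 = list(ds)
```

After two passes, the expensive loader has been called exactly once per element, which is why caching keeps the dataset from becoming a bottleneck.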
End of explanation
"""
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
"""
Explanation: The tree structure of the files can be used to compile a class_names list.
End of explanation
"""
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
"""
Explanation: Split the dataset into training and validation sets.
End of explanation
"""
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
"""
Explanation: You can see the length of each dataset as follows:
End of explanation
"""
def get_label(file_path):
    # convert the path to a list of path components
    parts = tf.strings.split(file_path, os.path.sep)
    # The second to last is the class-directory
    one_hot = parts[-2] == class_names
    # Integer encode the label
    return tf.argmax(one_hot)

def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.io.decode_jpeg(img, channels=3)
    # resize the image to the desired size
    return tf.image.resize(img, [img_height, img_width])

def process_path(file_path):
    label = get_label(file_path)
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return img, label
"""
Explanation: Write a short function that converts a file path to an (img, label) pair:
End of explanation
"""
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)

for image, label in train_ds.take(1):
    print("Image shape: ", image.numpy().shape)
    print("Label: ", label.numpy())
"""
Explanation: Use Dataset.map to create a dataset of image, label pairs:
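The get_label function above compares the second-to-last path component against class_names and takes the argmax of the resulting one-hot vector. The same logic can be written in plain Python:

```python
class_names = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]

def get_label(file_path, sep="/"):
    parts = file_path.split(sep)
    class_dir = parts[-2]                      # second-to-last path component
    one_hot = [name == class_dir for name in class_names]
    return one_hot.index(True)                 # argmax of a one-hot vector

label = get_label("flower_photos/roses/12345.jpg")
```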
End of explanation
"""
def configure_for_performance(ds):
    ds = ds.cache()
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(batch_size)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
"""
Explanation: Configure the dataset for performance
To train a model with this dataset you will want the data:
To be well shuffled.
To be batched.
Batches to be available as soon as possible.
These features can be added using the tf.data API. For more details, see the input pipeline performance guide.
End of explanation
"""
image_batch, label_batch = next(iter(train_ds))

plt.figure(figsize=(10, 10))
for i in range(9):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image_batch[i].numpy().astype("uint8"))
    label = label_batch[i]
    plt.title(class_names[label])
    plt.axis("off")
"""
Explanation: Visualize the data
You can visualize this dataset similarly to the one you created previously.
End of explanation
"""
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=3
)
"""
Explanation: Continue training the model
You have now manually built a tf.data.Dataset similar to the one created by keras.preprocessing above. You can continue training the model with it. As before, you will train for just a few epochs to keep the running time short.
End of explanation
"""
(train_ds, val_ds, test_ds), metadata = tfds.load(
    'tf_flowers',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)
"""
Explanation: Using TensorFlow Datasets
So far, this tutorial has focused on loading data off disk. You can also find a dataset to use by exploring the large catalog of easy-to-download datasets at TensorFlow Datasets. As you have previously loaded the Flowers dataset off disk, let's see how to import it with TensorFlow Datasets.
Download the flowers dataset using TensorFlow Datasets:
End of explanation
"""
num_classes = metadata.features['label'].num_classes
print(num_classes)
"""
Explanation: The flowers dataset has five classes.
End of explanation
"""
get_label_name = metadata.features['label'].int2str

image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
"""
Explanation: Retrieve an image from the dataset:
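The split strings 'train[:80%]', 'train[80%:90%]' and 'train[90%:]' passed to tfds.load above carve one underlying split into contiguous slices. The resulting index ranges can be sketched with a small hypothetical helper (not the actual TFDS parser):

```python
def percent_slice(n, start_pct, end_pct):
    # Map a percent range onto row indices of an n-example split.
    return range(n * start_pct // 100, n * end_pct // 100)

n = 3670  # total examples in tf_flowers
train_idx = percent_slice(n, 0, 80)     # train[:80%]
val_idx = percent_slice(n, 80, 90)      # train[80%:90%]
test_idx = percent_slice(n, 90, 100)    # train[90%:]

sizes = (len(train_idx), len(val_idx), len(test_idx))
```

The three slices are contiguous and together cover every example exactly once.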
End of explanation
"""
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
"""
Explanation: As before, remember to batch, shuffle, and configure each dataset for performance.
End of explanation
"""
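Batching, as applied by configure_for_performance, is just chunking a sequence into fixed-size groups, with a possibly smaller final batch. A plain-Python sketch of the idea:

```python
def batch(items, batch_size):
    # Split items into consecutive chunks of at most batch_size elements.
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

batches = batch(list(range(100)), 32)
batch_sizes = [len(b) for b in batches]
```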
probml/pyprobml
notebooks/book2/25/IPM_divergences.ipynb
mit
import jax import random import numpy as np import jax.numpy as jnp import seaborn as sns import matplotlib.pyplot as plt import scipy !pip install -qq dm-haiku !pip install -qq optax try: import haiku as hk except ModuleNotFoundError: %pip install -qq haiku import haiku as hk try: import optax except ModuleNotFoundError: %pip install -qq optax import optax sns.set(rc={"lines.linewidth": 2.8}, font_scale=2) sns.set_style("whitegrid") """ Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/IPM_divergences.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Critics in IPMs variational bounds on $f$-divergences Author: Mihaela Rosca This colab uses a simple example (two 1-d distributions) to show how the critics of various IPMs (Wasserstein distance and MMD) look like. We also look at how smooth estimators (neural nets) can estimte density ratios which are not smooth, and how that can be useful in providing a good learning signal for a model. End of explanation """ import scipy.stats from scipy.stats import truncnorm from scipy.stats import beta # We allow a displacement from 0 of the beta distribution. 
class TranslatedBeta: def __init__(self, a, b, expand_dims=False, displacement=0): self._a = a self._b = b self.expand_dims = expand_dims self.displacement = displacement def rvs(self, size): val = beta.rvs(self._a, self._b, size=size) + self.displacement return np.expand_dims(val, axis=1) if self.expand_dims else val def pdf(self, x): return beta.pdf(x - self.displacement, self._a, self._b) p_param1 = 3 p_param2 = 5 q_param1 = 2 q_param2 = 3 start_p = 0 start_r = 1 start_q = 2 p_dist = TranslatedBeta(p_param1, p_param2, displacement=start_p) q_dist = TranslatedBeta(q_param1, q_param2, displacement=start_q) r_dist = TranslatedBeta(q_param1, q_param2, displacement=start_r) plt.figure(figsize=(14, 10)) p_x_samples = p_dist.rvs(size=15) q_x_samples = q_dist.rvs(size=15) p_linspace_x = np.linspace(start_p, start_p + 1, 100) p_x_pdfs = p_dist.pdf(p_linspace_x) q_linspace_x = np.linspace(start_q, start_q + 1, 100) q_x_pdfs = q_dist.pdf(q_linspace_x) plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p_1(x)$") plt.plot(p_x_samples, [0] * len(p_x_samples), "bo", ms=10) plt.plot(q_linspace_x, q_x_pdfs, "r", label=r"$p_2(x)$") plt.plot(q_x_samples, [0] * len(q_x_samples), "rd", ms=10) plt.ylim(-0.5, 2.7) plt.xlim(-0.2, 3.5) plt.axis("off") plt.legend() plt.xticks([]) plt.yticks([]) plt.figure(figsize=(14, 8)) local_start_p = 0 local_start_r = 1.2 local_start_q = 2.4 local_p_dist = TranslatedBeta(p_param1, p_param2, displacement=local_start_p) local_q_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_q) local_r_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_r) p_linspace_x = np.linspace(local_start_p, local_start_p + 1, 100) q_linspace_x = np.linspace(local_start_q, local_start_q + 1, 100) r_linspace_x = np.linspace(local_start_r, local_start_r + 1, 100) p_x_pdfs = local_p_dist.pdf(p_linspace_x) q_x_pdfs = local_q_dist.pdf(q_linspace_x) r_x_pdfs = local_r_dist.pdf(r_linspace_x) plt.plot(p_linspace_x, p_x_pdfs, "b") plt.plot(q_linspace_x, 
q_x_pdfs, "r")
plt.plot(r_linspace_x, r_x_pdfs, "g")

num_samples = 15
plt.plot(local_p_dist.rvs(size=num_samples), [0] * num_samples, "bo", ms=10, label=r"$p^*$")
plt.plot(local_q_dist.rvs(size=num_samples), [0] * num_samples, "rd", ms=10, label=r"$q(\theta_1)$")
plt.plot(local_r_dist.rvs(size=num_samples), [0] * num_samples, "gd", ms=10, label=r"$q(\theta_2)$")

plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: KL and non-overlapping distributions
When the two distributions do not overlap (as visualized above), the density ratio $p^*(x)/q(x)$ is infinite wherever $q$ places no mass, so the KL divergence — an integral of a function of this ratio — is also infinite. Moving the distributions closer together does not change this as long as their supports stay disjoint, so the KL provides no learning signal for the model.
End of explanation
"""
model_transform = hk.without_apply_rng(
    hk.transform(
        lambda *args, **kwargs: hk.Sequential(
            [hk.Linear(10), jax.nn.relu, hk.Linear(10), jax.nn.tanh, hk.Linear(40), hk.Linear(1)]
        )(*args, **kwargs)
    )
)

BATCH_SIZE = 100
NUM_UPDATES = 1000
dist1 = TranslatedBeta(p_param1, p_param2, expand_dims=True, displacement=start_p)
dist2 = TranslatedBeta(q_param1, q_param2, expand_dims=True, displacement=start_q)

@jax.jit
def estimate_kl(params, dist1_batch, dist2_batch):
    dist1_logits = model_transform.apply(params, dist1_batch)
    dist2_logits = model_transform.apply(params, dist2_batch)
    return jnp.mean(dist1_logits - jnp.exp(dist2_logits - 1))

def update(params, opt_state, dist1_batch, dist2_batch):
    model_loss = lambda *args: -estimate_kl(*args)
    loss, grads = jax.value_and_grad(model_loss, has_aux=False)(params, dist1_batch, dist2_batch)
    params_update, new_opt_state = optim.update(grads, opt_state, params)
    new_params = optax.apply_updates(params, params_update)
    return loss, new_params, new_opt_state

NUM_UPDATES = 200
rng = jax.random.PRNGKey(1)
init_model_params = model_transform.init(rng, dist1.rvs(BATCH_SIZE))
params = init_model_params
optim = optax.adam(learning_rate=0.0005, b1=0.9, b2=0.999)
opt_state = optim.init(init_model_params)

for i in range(NUM_UPDATES): # Get a new batch of data x = 
dist1.rvs(BATCH_SIZE) y = dist2.rvs(BATCH_SIZE) loss, params, opt_state = update(params, opt_state, x, y) if i % 50 == 0: print("Loss at {}".format(i)) print(loss) plotting_x = np.expand_dims(np.linspace(-1.0, 3.5, 100), axis=1) # TODO: how do you get the ratio values form the estimate - need to check the fgan paper ratio_values = model_transform.apply(params, plotting_x) # ratio_values = 1 + np.log(model_transform.apply(params, plotting_x)) plt.figure(figsize=(14, 8)) p_linspace_x = np.linspace(start_p, start_p + 1, 100) q_linspace_x = np.linspace(start_q, start_q + 1, 100) plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$") plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18) plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$") plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18) x = np.linspace(-1, 3.5, 200) ratio = p_dist.pdf(x) / q_dist.pdf(x) plt.hlines(6.1, -0.6, start_q, linestyles="--", color="r") plt.hlines(6.1, start_q + 1, 3.5, linestyles="--", color="r") plt.text(3.4, 5.6, r"$\infty$") plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4) plt.plot( plotting_x, ratio_values[:, 0].T, color="darkgray", label=r"MLP approx to $\frac{p^*}{q(\theta)}$", linewidth=4 ) plt.ylim(-2.5, 8) plt.xlim(-0.2, 3.5) plt.axis("off") plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0) plt.xticks([]) plt.yticks([]) """ Explanation: Approximation of the ratio using the f-gan approach End of explanation """ plt.figure(figsize=(14, 8)) grad_fn = jax.grad(lambda x: model_transform.apply(params, x)[0]) grad_values = jax.vmap(grad_fn)(plotting_x) plt.figure(figsize=(14, 8)) p_linspace_x = np.linspace(start_p, start_p + 1, 100) q_linspace_x = np.linspace(start_q, start_q + 1, 100) plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$") plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18) 
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$") plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18) x = np.linspace(-1, 3.5, 200) ratio = p_dist.pdf(x) / q_dist.pdf(x) plt.hlines(5.8, -0.6, start_q, linestyles="--", color="r") plt.hlines(5.8, start_q + 1, 3.5, linestyles="--", color="r") plt.text(3.4, 5.4, r"$\infty$") plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4) plt.plot( plotting_x, ratio_values[:, 0].T, color="darkgray", label=r"$f_{\phi}$ approximating $\frac{p^*}{q(\theta)}$", linewidth=4, ) plt.plot(plotting_x, grad_values[:, 0].T, color="orange", label=r"$\nabla_{x} f_{\phi}(x)$", linewidth=4, ls="-.") plt.ylim(-2.5, 8) plt.xlim(-0.2, 3.5) plt.axis("off") plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0) plt.xticks([]) plt.yticks([]) """ Explanation: Gradients In order to see why the learned density ratio has useful properties for learning, we can plot the gradients of the learned density ratio across the input space End of explanation """ from scipy.optimize import linprog def get_W_witness_spectrum(p_samples, q_samples): n = len(p_samples) m = len(q_samples) X = np.concatenate([p_samples, q_samples], axis=0) ## AG: repeat [-1/n] n times c = np.array(n * [-1 / n] + m * [1 / m]) A_ub, b_ub = [], [] for i in range(n + m): for j in range(n + m): if i == j: continue z = np.zeros(n + m) z[i] = 1 z[j] = -1 A_ub.append(z) b_ub.append(np.abs(X[i] - X[j])) ## AG: Minimize: c^T * x ## Subject to: A_ub * x <= b_ub res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, method="simplex", options={"tol": 1e-5}) a = res["x"] ## AG: second argument xs to be passed into the internal ## function. 
def witness_spectrum(x): diff = np.abs(x - X[:, np.newaxis]) one = np.min(a[:, np.newaxis] + diff, axis=0) two = np.max(a[:, np.newaxis] - diff, axis=0) return one, two return witness_spectrum x = np.linspace(-1, 3.5, 100) wass_estimate = get_W_witness_spectrum(p_x_samples + start_p, q_x_samples + start_q)(x) wa, wb = wass_estimate w = (wa + wb) / 2 w -= w.mean() plt.figure(figsize=(14, 6)) display_offset = 0.8 plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$") plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18) plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$") plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18) x = np.linspace(-1, 3.5, 100) plt.plot(x, w + display_offset, "r", label=r"$f^{\star}$", linewidth=4) plt.ylim(-2.5, 8) plt.xlim(-0.2, 3.5) plt.axis("off") plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0) plt.xticks([]) plt.yticks([]) """ Explanation: Wasserstein distance for the same two distributions Computing the Wasserstein critic in 1 dimension. Reminder that the Wasserstein distance is defined as: $$ W(p, q) = \sup_{\|\|f\|\|_{Lip} \le 1} E_p(x) f(x) - E_q(x) f(x) $$ The below code finds the values of f evaluated at the samples of the two distributions. This vector is computed to maximise the empirical (Monte Carlo) estimate of the IPM: $$ \frac{1}{n}\sum_{i=1}^n f(x_i) - \frac{1}{m}\sum_{i=1}^m f(y_j) $$ where $x_i$ are samples from the first distribution, while $y_j$ are samples from the second distribution. Since we want the function $f$ to be 1-Lipschitz, an inequality constraint is added to ensure that for all two choices of samples in the two distributions, $\forall x \in {x_1, ... x_n, y_1, ... y_m}, \forall y \in {x_1, ... x_n, y_1, ... 
y_m}$ $$ f(x) - f(y) \le |x - y|, \qquad f(y) - f(x) \le |x - y| $$ This maximisation needs to occur under the constraint that the function $f$ is 1-Lipschitz, which is ensured using the inequality constraints of the linear program. Note: This approach does not scale to large datasets. Thank you to Arthur Gretton and Dougal J Sutherland for this version of the code. End of explanation """ def covariance(kernel_fn, X, Y): num_rows = len(X) num_cols = len(Y) K = np.zeros((num_rows, num_cols)) for i in range(num_rows): for j in range(num_cols): K[i, j] = kernel_fn(X[i], Y[j]) return K def gaussian_kernel(x1, x2, gauss_var=0.1, height=2.2): return height * np.exp(-np.linalg.norm(x1 - x2) ** 2 / gauss_var) def evaluate_mmd_critic(p_samples, q_samples): n = p_samples.shape[0] m = q_samples.shape[0] p_cov = covariance(gaussian_kernel, p_samples, p_samples) p_samples_norm = np.sum(p_cov) - np.sum(p_cov[np.diag_indices(n)]) p_samples_norm /= n * (n - 1) q_cov = covariance(gaussian_kernel, q_samples, q_samples) q_samples_norm = np.sum(q_cov) - np.sum(q_cov[np.diag_indices(m)]) q_samples_norm /= m * (m - 1) p_q_cov = covariance(gaussian_kernel, p_samples, q_samples) p_q_norm = np.sum(p_q_cov) p_q_norm /= n * m # the three terms estimate the squared RKHS norm ||mu_p - mu_q||^2, # so take the square root to normalise the witness by ||mu_p - mu_q|| norm = np.sqrt(p_samples_norm + q_samples_norm - 2 * p_q_norm) def critic(x): p_val = np.mean([gaussian_kernel(x, y) for y in p_samples]) q_val = np.mean([gaussian_kernel(x, y) for y in q_samples]) return (p_val - q_val) / norm return critic critic_fn = evaluate_mmd_critic(p_x_samples, q_x_samples) plt.figure(figsize=(14, 6)) display_offset = 0 plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$") plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18) plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$") plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18) x = np.linspace(-1, 3.5, 100) plt.plot( start_p + x, 
np.array([critic_fn(x_val) for x_val in x]) + display_offset, "r", label=r"$f^{\star}$", linewidth=4 ) plt.ylim(-2.5, 8) plt.xlim(-0.2, 3.5) plt.axis("off") plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0) plt.xticks([]) plt.yticks([]) """ Explanation: MMD computation The MMD is an IPM defined as: $$ MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \mathbb{E}_{p(x)} f(x) - \mathbb{E}_{q(x)} f(x) $$ where $\mathcal{H}$ is an RKHS. Using the mean embedding operators in an RKHS, we can write: $$ \mathbb{E}_{p(x)} f(x) = \langle f, \mu_p \rangle, \qquad \mathbb{E}_{q(x)} f(x) = \langle f, \mu_q \rangle $$ Replacing in the MMD: $$ MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \langle f, \mu_p - \mu_q \rangle $$ which means that $$ f = \frac{\mu_p - \mu_q}{\|\mu_p - \mu_q\|_{\mathcal{H}}} $$ To obtain an estimate of $f$ evaluated at $x$ we use that: $$ f(x) = \frac{\mathbb{E}_{p(y)} k(x, y) - \mathbb{E}_{q(y)} k(x, y)}{\|\mu_p - \mu_q\|_{\mathcal{H}}} $$ To estimate $\|\mu_p - \mu_q\|_{\mathcal{H}}$ we start from the squared norm: $$ \|\mu_p - \mu_q\|_{\mathcal{H}}^2 = \langle \mu_p - \mu_q, \mu_p - \mu_q \rangle = \langle \mu_p, \mu_p \rangle + \langle \mu_q, \mu_q \rangle - 2 \langle \mu_p, \mu_q \rangle $$ To estimate the dot products, we use: $$ \langle \mu_p, \mu_p \rangle = \mathbb{E}_{p(x)} \mu_p(x) = \mathbb{E}_{p(x)} \langle \mu_p, k(x, \cdot) \rangle = \mathbb{E}_{p(x)} \mathbb{E}_{p(x')} k(x, x') $$ For more details see the slides here: http://www.gatsby.ucl.ac.uk/~gretton/coursefiles/lecture5_distribEmbed_1.pdf End of explanation """
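The witness formula above is easy to sanity-check in isolation. Here is a minimal NumPy sketch of the unnormalised witness $f(x) \propto \mathbb{E}_{p} k(x, \cdot) - \mathbb{E}_{q} k(x, \cdot)$; the kernel width and the two sample distributions are illustrative choices of ours, not the notebook's data. The witness should be positive where $p$ has more mass and negative where $q$ does.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, y, var=0.5):
    # Gaussian (RBF) kernel; broadcasts over an array of y values
    return np.exp(-((x - y) ** 2) / var)

def mmd_witness(p_samples, q_samples, var=0.5):
    """Unnormalised MMD witness: f(x) = mean_p k(x, .) - mean_q k(x, .)."""
    def f(x):
        return np.mean(rbf(x, p_samples, var)) - np.mean(rbf(x, q_samples, var))
    return f

p = rng.normal(0.0, 0.5, size=200)  # samples from p
q = rng.normal(2.0, 0.5, size=200)  # samples from q, shifted to the right
f = mmd_witness(p, q)
print(f(0.0) > 0, f(2.0) < 0)
```

Dividing these values by the estimated RKHS norm of $\mu_p - \mu_q$, as in `evaluate_mmd_critic` above, only rescales the curve; the sign pattern is what identifies where each distribution has excess mass.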
Source: tensorflow/docs, site/en/guide/migrate/mirrored_strategy.ipynb (Apache-2.0 license)
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ import tensorflow as tf import tensorflow.compat.v1 as tf1 features = [[1., 1.5], [2., 2.5], [3., 3.5]] labels = [[0.3], [0.5], [0.7]] eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]] eval_labels = [[0.8], [0.9], [1.]] """ Explanation: Migrate single-worker multiple-GPU training <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/migrate/mirrored_strategy"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/mirrored_strategy.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/mirrored_strategy.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/mirrored_strategy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This guide demonstrates how to migrate the single-worker multiple-GPU workflows from TensorFlow 1 to TensorFlow 2. 
To perform synchronous training across multiple GPUs on one machine: In TensorFlow 1, you use the tf.estimator.Estimator APIs with tf.distribute.MirroredStrategy. In TensorFlow 2, you can use Keras Model.fit or a custom training loop with tf.distribute.MirroredStrategy. Learn more in the Distributed training with TensorFlow guide. Setup Start with imports and a simple dataset for demonstration purposes: End of explanation """ def _input_fn(): return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1) def _eval_input_fn(): return tf1.data.Dataset.from_tensor_slices( (eval_features, eval_labels)).batch(1) def _model_fn(features, labels, mode): logits = tf1.layers.Dense(1)(features) loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits) optimizer = tf1.train.AdagradOptimizer(0.05) train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step()) return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op) strategy = tf1.distribute.MirroredStrategy() config = tf1.estimator.RunConfig( train_distribute=strategy, eval_distribute=strategy) estimator = tf1.estimator.Estimator(model_fn=_model_fn, config=config) train_spec = tf1.estimator.TrainSpec(input_fn=_input_fn) eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn) tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec) """ Explanation: TensorFlow 1: Single-worker distributed training with tf.estimator.Estimator This example demonstrates the TensorFlow 1 canonical workflow of single-worker multiple-GPU training. 
You need to set the distribution strategy (tf.distribute.MirroredStrategy) through the config parameter of the tf.estimator.Estimator: End of explanation """ dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1) eval_dataset = tf.data.Dataset.from_tensor_slices( (eval_features, eval_labels)).batch(1) strategy = tf.distribute.MirroredStrategy() with strategy.scope(): model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)]) optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05) model.compile(optimizer=optimizer, loss='mse') model.fit(dataset) model.evaluate(eval_dataset, return_dict=True) """ Explanation: TensorFlow 2: Single-worker training with Keras When migrating to TensorFlow 2, you can use the Keras APIs with tf.distribute.MirroredStrategy. If you use the tf.keras APIs for model building and Keras Model.fit for training, the main difference is instantiating the Keras model, an optimizer, and metrics in the context of Strategy.scope, instead of defining a config for tf.estimator.Estimator. If you need to use a custom training loop, check out the Using tf.distribute.Strategy with custom training loops guide. End of explanation """
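Under the hood, MirroredStrategy performs synchronous data parallelism: each replica receives a shard of the batch, computes gradients on its shard, the per-replica gradients are all-reduced (averaged), and every replica then applies the identical update. The following framework-free sketch illustrates that idea on a toy least-squares model; the function names and the toy problem are our own illustration, not TensorFlow API.

```python
import numpy as np

def per_replica_grad(w, X, y):
    # MSE gradient of a linear model y_hat = X @ w on one replica's shard
    return 2.0 * X.T @ (X @ w - y) / len(y)

def mirrored_step(w, X, y, num_replicas=2, lr=0.1):
    """One synchronous step: shard the batch, compute per-replica gradients,
    all-reduce (average) them, and apply the same update everywhere."""
    X_shards = np.array_split(X, num_replicas)
    y_shards = np.array_split(y, num_replicas)
    grads = [per_replica_grad(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    avg_grad = np.mean(grads, axis=0)  # stands in for the all-reduce
    return w - lr * avg_grad           # identical update on every replica

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
true_w = np.array([0.3, -0.7])
y = X @ true_w
w = np.zeros(2)
for _ in range(2000):
    w = mirrored_step(w, X, y)
print(np.allclose(w, true_w, atol=1e-3))
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, which is why synchronous mirrored training matches single-device training on the whole batch.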
Source: btq/statlearning-notebooks, src/chapter5.ipynb (MIT license)
from __future__ import division import pandas as pd import numpy as np import scipy as sp import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.cross_validation import LeaveOneOut from sklearn.cross_validation import KFold from sklearn.cross_validation import Bootstrap from sklearn.metrics import mean_squared_error %matplotlib inline auto_df = pd.read_csv("../data/Auto.csv", na_values="?") auto_df.dropna(inplace=True) auto_df.head() ax = auto_df.plot(x="horsepower", y="mpg", style="o") ax.set_ylabel("mpg") """ Explanation: Chapter 5: Cross-Validation and Bootstrap End of explanation """ clf = LinearRegression() loo = LeaveOneOut(len(auto_df)) X = auto_df[["horsepower"]].values y = auto_df["mpg"].values n = np.shape(X)[0] mses = [] for train, test in loo: Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test] clf.fit(Xtrain, ytrain) ypred = clf.predict(Xtest) mses.append(mean_squared_error(ytest, ypred)) np.mean(mses) def loo_shortcut(X, y): """ implement one-pass LOOCV calculation for linear models from ISLR Page 180 (Eqn 5.2) """ clf = LinearRegression() clf.fit(X, y) ypred = clf.predict(X) xbar = np.mean(X, axis=0) xsum = np.sum(np.power(X - xbar, 2)) nrows = np.shape(X)[0] mses = [] for row in range(0, nrows): hi = (1 / nrows) + (np.sum(X[row] - xbar) ** 2 / xsum) mse = (y[row] - ypred[row]) ** 2 / (1 - hi) mses.append(mse) return np.mean(mses) loo_shortcut(auto_df[["horsepower"]].values, auto_df["mpg"].values) # LOOCV against models of different degrees auto_df["horsepower^2"] = auto_df["horsepower"] * auto_df["horsepower"] auto_df["horsepower^3"] = auto_df["horsepower^2"] * auto_df["horsepower"] auto_df["horsepower^4"] = auto_df["horsepower^3"] * auto_df["horsepower"] auto_df["horsepower^5"] = auto_df["horsepower^4"] * auto_df["horsepower"] auto_df["unit"] = 1 colnames = ["unit", "horsepower", "horsepower^2", "horsepower^3", "horsepower^4", "horsepower^5"] cv_errors = [] for ncols in range(2, 6): X = 
auto_df[colnames[0:ncols]] y = auto_df["mpg"] clf = LinearRegression() clf.fit(X, y) cv_errors.append(loo_shortcut(X.values, y.values)) plt.plot(range(1,5), cv_errors) plt.xlabel("degree") plt.ylabel("cv.error") """ Explanation: Leave One Out Cross Validation (LOOCV) Instead of R's glm, we use Scikit-Learn's LinearRegression to arrive at very similar results. End of explanation """ cv_errors = [] for ncols in range(2, 6): # each ncol corresponds to a polynomial model X = auto_df[colnames[0:ncols]].values y = auto_df["mpg"].values kfold = KFold(len(auto_df), n_folds=10) mses = [] for train, test in kfold: # each model is cross validated 10 times Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test] clf = LinearRegression() clf.fit(Xtrain, ytrain) # fit on the training folds only, not the full data ypred = clf.predict(Xtest) mses.append(mean_squared_error(ypred, ytest)) cv_errors.append(np.mean(mses)) plt.plot(range(1,5), cv_errors) plt.xlabel("degree") plt.ylabel("cv.error") """ Explanation: K-Fold Cross Validation End of explanation """ cv_errors = [] for ncols in range(2, 6): # each ncol corresponds to a polynomial model X = auto_df[colnames[0:ncols]].values y = auto_df["mpg"].values n = len(auto_df) bs = Bootstrap(n, train_size=int(0.9*n), test_size=int(0.1*n), n_iter=10, random_state=0) mses = [] for train, test in bs: # each model is resampled 10 times Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test] clf = LinearRegression() clf.fit(Xtrain, ytrain) # fit on the bootstrap training split only ypred = clf.predict(Xtest) mses.append(mean_squared_error(ypred, ytest)) cv_errors.append(np.mean(mses)) plt.plot(range(1,5), cv_errors) plt.xlabel("degree") plt.ylabel("cv.error") def alpha(x, y): """ allocate fraction alpha of your assets to x and (1 - alpha) to y for minimum variance """ vx = np.var(x) vy = np.var(y) cxy = np.cov(x, y)[0, 1] return (vy - cxy) / (vx + vy - 2 * cxy) # From ISLR package, retrieved with write.csv(Portfolio, "portfolio.csv", row.names=FALSE) portfolio_df = pd.read_csv("../data/Portfolio.csv") portfolio_df.head() 
alpha(portfolio_df["X"].values, portfolio_df["Y"].values) # Find the variance of alpha - shows that bootstrapping results in a near-normal distribution X = portfolio_df["X"].values Y = portfolio_df["Y"].values bs = Bootstrap(len(portfolio_df), n_iter=1000, train_size=99, random_state=0) alphas = [] for train, test in bs: xtrain, ytrain = X[train], Y[train] alphas.append(alpha(xtrain, ytrain)) plt.hist(alphas) """ Explanation: Bootstrap End of explanation """
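The resampling loop above can also be written out by hand: draw indices with replacement, recompute alpha on each resample, and take the standard deviation of the resampled statistics as the bootstrap estimate of its standard error. Below is a self-contained sketch on synthetic correlated returns; the covariance matrix, sample size, and number of resamples are illustrative assumptions, not the Portfolio data.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(x, y):
    # minimum-variance allocation between two assets
    # (np.var uses ddof=0 while np.cov defaults to ddof=1, as in the notebook)
    vx, vy = np.var(x), np.var(y)
    cxy = np.cov(x, y)[0, 1]
    return (vy - cxy) / (vx + vy - 2 * cxy)

# synthetic stand-in for the Portfolio data: correlated asset returns
cov = np.array([[1.0, 0.5], [0.5, 1.25]])
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=100).T

n = len(X)
boot_alphas = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)  # resample indices with replacement
    boot_alphas.append(alpha(X[idx], Y[idx]))

se_alpha = np.std(boot_alphas)
print(np.mean(boot_alphas), se_alpha)
```

For this covariance matrix the population value is $(1.25 - 0.5)/(1.0 + 1.25 - 2 \cdot 0.5) = 0.6$, so the bootstrap mean should land near it, and the standard error quantifies the sampling variability behind the near-normal histogram above.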
Source: Diyago/Machine-Learning-scripts, DEEP LEARNING/NLP/LSTM RNN/Sentiment pytorch/Sentiment_classif.ipynb (Apache-2.0 license)
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" (this lesson) Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network (video only - nothing in notebook) PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset<a id='lesson_1'></a> End of explanation """ len(reviews) reviews[0] labels[0] """ Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. 
End of explanation """ print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) """ Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a> End of explanation """ from collections import Counter import numpy as np """ Explanation: Project 1: Quick Theory Validation<a id='project_1'></a> There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook. You'll find the Counter class to be useful in this exercise, as well as the numpy library. End of explanation """ # Create three Counter objects to store positive, negative and total counts positive_counts = Counter() negative_counts = Counter() total_counts = Counter() """ Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words. End of explanation """ # Loop over all the words in all the reviews and increment the counts in the appropriate counter objects for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 """ Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter. Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. 
If you use split() instead, you'll get slightly different results than what the videos and solutions show. End of explanation """ # Examine the counts of the most common words in positive reviews positive_counts.most_common() # Examine the counts of the most common words in negative reviews negative_counts.most_common() """ Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. End of explanation """ pos_neg_ratios = Counter() # Calculate the ratios of positive and negative uses of the most common words # Consider words to be "common" if they've been used at least 100 times for term,cnt in list(total_counts.most_common()): if(cnt > 100): pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1) pos_neg_ratios[term] = pos_neg_ratio """ Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews. TODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios. Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews. 
End of explanation """ print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) """ Explanation: Examine the ratios you've calculated for a few words: End of explanation """ # Convert ratios to logs for word,ratio in pos_neg_ratios.most_common(): pos_neg_ratios[word] = np.log(ratio) """ Explanation: Looking closely at the values you just calculated, we see the following: Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be. Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be. Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway. Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. 
Those values aren't easy to compare for a couple of reasons: Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so that a word's distance from neutral indicates how much sentiment (positive or negative) that word conveys. When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms. TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio)) In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs. End of explanation """ print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) """ Explanation: NOTE: In the video, Andrew uses the following formulas for the previous cell: For any positive words, convert the ratio using np.log(ratio) For any negative words, convert the ratio using -np.log(1/(ratio + 0.01)) These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the log of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. 
The result is that extremely positive and extremely negative words have positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use np.log(ratio). Examine the new ratios you've calculated for the same words from before: End of explanation """ print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) """ Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments. Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.) The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).) 
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. End of explanation """ from IPython.display import Image review = "This was a horrible, terrible movie." Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png') """ Explanation: End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers<a id='lesson_3'></a> End of explanation """ vocab = set(total_counts.keys()) """ Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a> TODO: Create a set named vocab that contains every word in the vocabulary. End of explanation """ vocab_size = len(vocab) print(vocab_size) """ Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 End of explanation """ from IPython.display import Image Image(filename='sentiment_network_2.png') """ Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. End of explanation """ layer_0 = np.zeros((1,vocab_size)) """ Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. End of explanation """ layer_0.shape from IPython.display import Image Image(filename='sentiment_network.png') """ Explanation: Run the following cell. 
It should display (1, 74074) End of explanation """ # Create a dictionary of words in the vocabulary mapped to index positions # (to be used in layer_0) word2index = {} for i,word in enumerate(vocab): word2index[word] = i # display the map of words to indices word2index """ Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. End of explanation """ def update_input_layer(review): """ Modify the global layer_0 to represent the vector form of review. The element at a given index of layer_0 should represent how many times the given word occurs in the review. Args: review(string) - the string of the review Returns: None """ global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 # count how many times each word is used in the given review and store the results in layer_0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 """ Explanation: TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0. End of explanation """ update_input_layer(reviews[0]) layer_0 """ Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. End of explanation """ def get_target_for_label(label): """Convert a label to `0` or `1`. Args: label(string) - Either "POSITIVE" or "NEGATIVE". Returns: `0` or `1`. """ if(label == 'POSITIVE'): return 1 else: return 0 """ Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively. 
End of explanation """ labels[0] get_target_for_label(labels[0]) """ Explanation: Run the following two cells. They should print out'POSITIVE' and 1, respectively. End of explanation """ labels[1] get_target_for_label(labels[1]) """ Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. End of explanation """ import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): """Create a SentimenNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training """ # Assign a seed to our random number generator to ensure we get # reproducable results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training self.pre_process_data(reviews, labels) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node. self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): # populate review_vocab with all of the words in the given reviews review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) # populate label_vocab with all of the words in the given labels. 
label_vocab = set() for label in labels: label_vocab.add(label) # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # These are the weights between the input layer and the hidden layer. self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) # These are the weights between the hidden layer and the output layer. ## NOTE: the difference in the standard deviation of the normal weights ## This has been changed from `self.output_nodes**-0.5` to `self.hidden_nodes**-0.5` self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) # The input layer, a two-dimensional matrix with shape 1 x input_nodes self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # clear out previous state, reset the layer to be all 0s self.layer_0 *= 0 for word in review.split(" "): # NOTE: This if-check was not in the version of this method created in Project 2, # and it appears in Andrew's Project 3 solution without explanation. 
# It simply ensures the word is actually a key in word2index before # accessing it, which is important because accessing an invalid key # will raise an exception in Python. This allows us to ignore unknown # words encountered in new reviews. if(word in self.word2index.keys()): self.layer_0[0][self.word2index[word]] += 1 def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) def train(self, training_reviews, training_labels): # make sure we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # Get the next review and its correct label review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### # Input Layer self.update_input_layer(review) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # Update the weights self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step # Keep track of correct predictions. if(layer_2 >= 0.5 and label == 'POSITIVE'): correct_so_far += 1 elif(layer_2 < 0.5 and label == 'NEGATIVE'): correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # Run a forward pass through the network, like in the "train" function. # Input Layer self.update_input_layer(review.lower()) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) # Return POSITIVE for values greater than or equal to 0.5 in the output layer; # return NEGATIVE for other values if(layer_2[0] >= 0.5): return "POSITIVE" else: return "NEGATIVE" """ Explanation: End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network<a id='project_3'></a> TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following: - Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code) - Implement the pre_process_data function to create the vocabulary for our training data generating functions - Ensure train trains over the entire corpus Where to Get Help if You Need it Re-watch previous week's Udacity Lectures Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code) End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) """ Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. End of explanation """ mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. 
End of explanation """ from IPython.display import Image Image(filename='sentiment_network.png') def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 review_counter = Counter() for word in reviews[0].split(" "): review_counter[word] += 1 review_counter.most_common() """ Explanation: Different Initialization, Different Results With slightly better initialization code (using hidden_nodes**-0.5 rather than output_nodes**-0.5), we actually see an improvement with a learning rate = 0.01! This solution isn't perfect, but it certainly shows potential and we will improve upon it later in this lesson. Why is this a better initialization strategy? We'll get into the details of weight initialization later in this program, but suffice it to say that the best weights are a function of $1/\sqrt{n}$, where n is the number of nodes in that layer. In this case, the weights between the hidden and output layer should be a function of the hidden layer size. So, using hidden_nodes**-0.5 rather than output_nodes**-0.5 is a better initialization strategy! End of Project 3. Watch the next video to continue with Andrew's next lesson.
Understanding Neural Noise<a id='lesson_4'></a> End of explanation """ import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): """Create a SentimentNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training """ # Assign a seed to our random number generator to ensure we get # reproducible results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training self.pre_process_data(reviews, labels) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node. self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): # populate review_vocab with all of the words in the given reviews review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) # populate label_vocab with all of the words in the given labels. label_vocab = set() for label in labels: label_vocab.add(label) # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # These are the weights between the input layer and the hidden layer. self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) # These are the weights between the hidden layer and the output layer. self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) # The input layer, a two-dimensional matrix with shape 1 x input_nodes self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # clear out previous state, reset the layer to be all 0s self.layer_0 *= 0 for word in review.split(" "): # NOTE: This if-check was not in the version of this method created in Project 2, # and it appears in Andrew's Project 3 solution without explanation. # It simply ensures the word is actually a key in word2index before # accessing it, which is important because accessing an invalid key # will raise an exception in Python. This allows us to ignore unknown # words encountered in new reviews.
if(word in self.word2index.keys()): ## New for Project 4: changed to set to 1 instead of add 1 self.layer_0[0][self.word2index[word]] = 1 def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) def train(self, training_reviews, training_labels): # make sure we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # Get the next review and its correct label review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### # Input Layer self.update_input_layer(review) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # Update the weights self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step # Keep track of correct predictions. if(layer_2 >= 0.5 and label == 'POSITIVE'): correct_so_far += 1 elif(layer_2 < 0.5 and label == 'NEGATIVE'): correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # Run a forward pass through the network, like in the "train" function. # Input Layer self.update_input_layer(review.lower()) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) # Return POSITIVE for values greater than or equal to 0.5 in the output layer; # return NEGATIVE for other values if(layer_2[0] >= 0.5): return "POSITIVE" else: return "NEGATIVE" """ Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a> TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following: * Copy the SentimentNetwork class you created earlier into the following cell. * Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with "New for Project 4" End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation """ Image(filename='sentiment_network_sparse.png') layer_0 = np.zeros(10) layer_0 layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5) layer_0.dot(weights_0_1) indices = [4,9] layer_1 = np.zeros(5) for index in indices: layer_1 += (1 * weights_0_1[index]) layer_1 Image(filename='sentiment_network_sparse_2.png') layer_1 = np.zeros(5) for index in indices: layer_1 += (weights_0_1[index]) layer_1 """ Explanation: End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network<a id='lesson_5'></a> End of explanation """ import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): """Create a SentimentNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training """ # Assign a seed to our random number generator to ensure we get # reproducible results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training self.pre_process_data(reviews, labels) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): # populate review_vocab with all of the words in the given reviews review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) # populate label_vocab with all of the words in the given labels. label_vocab = set() for label in labels: label_vocab.add(label) # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # These are the weights between the input layer and the hidden layer. self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) # These are the weights between the hidden layer and the output layer. 
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) ## New for Project 5: Removed self.layer_0; added self.layer_1 # The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes self.layer_1 = np.zeros((1,hidden_nodes)) ## New for Project 5: Removed update_input_layer function def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) ## New for Project 5: changed name of first parameter from 'training_reviews' # to 'training_reviews_raw' def train(self, training_reviews_raw, training_labels): ## New for Project 5: pre-process training reviews so we can deal # directly with the indices of non-zero inputs training_reviews = list() for review in training_reviews_raw: indices = set() for word in review.split(" "): if(word in self.word2index.keys()): indices.add(self.word2index[word]) training_reviews.append(list(indices)) # make sure we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # Get the next review and its correct label review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### ## New for Project 5: Removed call to 'update_input_layer' function # because 'layer_0' is no longer used # Hidden layer ## New for Project 5: Add in only the weights for non-zero items self.layer_1 *= 0 for index in review: self.layer_1 += self.weights_0_1[index] # Output layer ## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output. layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # Update the weights ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1' self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step ## New for Project 5: Only update the weights that were used in the forward pass for index in review: self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step # Keep track of correct predictions. if(layer_2 >= 0.5 and label == 'POSITIVE'): correct_so_far += 1 elif(layer_2 < 0.5 and label == 'NEGATIVE'): correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. 
""" # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # Run a forward pass through the network, like in the "train" function. ## New for Project 5: Removed call to update_input_layer function # because layer_0 is no longer used # Hidden layer ## New for Project 5: Identify the indices used in the review and then add # just those weights to layer_1 self.layer_1 *= 0 unique_indices = set() for word in review.lower().split(" "): if word in self.word2index.keys(): unique_indices.add(self.word2index[word]) for index in unique_indices: self.layer_1 += self.weights_0_1[index] # Output layer ## New for Project 5: changed to use self.layer_1 instead of local layer_1 layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) # Return POSITIVE for values greater than or equal to 0.5 in the output layer; # return NEGATIVE for other values if(layer_2[0] >= 0.5): return "POSITIVE" else: return "NEGATIVE" """ Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a> TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward
propagation. To do that, you can do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Remove the update_input_layer function - you will not need it in this version. * Modify init_network: * You no longer need a separate input layer, so remove any mention of self.layer_0. * You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero. * Modify train: * Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step. * At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. * Remove the call to update_input_layer. * Use self's layer_1 instead of a local layer_1 object. * In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review. * When updating weights_0_1, only update the individual weights that were used in the forward pass. * Modify run: * Remove the call to update_input_layer. * Use self's layer_1 instead of a local layer_1 object. * Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
The following code is the same as the previous project, with project-specific changes marked with "New for Project 5" End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to recreate the network and train it once again. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. End of explanation """ Image(filename='sentiment_network_sparse_2.png') # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] from bokeh.models import ColumnDataSource, LabelSet from bokeh.plotting import figure, show, output_file from bokeh.io import output_notebook output_notebook() hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="Word Positive/Negative Affinity Distribution") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) frequency_frequency = Counter() for word, cnt in total_counts.most_common(): frequency_frequency[cnt] += 1 hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="The frequency distribution of the words in our corpus") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) """ Explanation: End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. 
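Before testing, it may help to see the arithmetic behind the Project 5 change in isolation. The sketch below is a standalone illustration with made-up sizes and word indices (not the notebook's actual data): for a binary input vector, summing only the weight rows for the words that are present reproduces the full matrix product.

```python
import numpy as np

# Standalone sketch of the sparse trick used in Project 5.
# All sizes and indices here are invented for illustration.
np.random.seed(1)
vocab_size, hidden_nodes = 10000, 10
weights_0_1 = np.random.randn(vocab_size, hidden_nodes)

indices = [4, 9, 4200]                    # hypothetical word indices in one review
layer_0 = np.zeros((1, vocab_size))
layer_0[0][indices] = 1                   # binary input vector: mostly zeros

dense_layer_1 = layer_0.dot(weights_0_1)  # multiplies all 10,000 rows, mostly by zero
sparse_layer_1 = np.zeros((1, hidden_nodes))
for index in indices:                     # touches only the 3 relevant rows
    sparse_layer_1 += weights_0_1[index]

print(np.allclose(dense_layer_1, sparse_layer_1))  # True: identical hidden layer
```

The two results match, but the sparse version performs a handful of row additions instead of a full matrix multiply, which is where the training speedup comes from.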
Further Noise Reduction<a id='lesson_6'></a> End of explanation """ import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: ## New for Project 6: added min_count and polarity_cutoff parameters def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1): """Create a SentimenNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews min_count(int) - Words should only be added to the vocabulary if they occur more than this many times polarity_cutoff(float) - The absolute value of a word's positive-to-negative ratio must be at least this big to be considered. hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training """ # Assign a seed to our random number generator to ensure we get # reproducable results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training ## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call self.pre_process_data(reviews, labels, polarity_cutoff, min_count) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node. 
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) ## New for Project 6: added min_count and polarity_cutoff parameters def pre_process_data(self, reviews, labels, polarity_cutoff, min_count): ## ---------------------------------------- ## New for Project 6: Calculate positive-to-negative ratios for words before # building vocabulary # positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 pos_neg_ratios = Counter() for term,cnt in list(total_counts.most_common()): if(cnt >= 50): pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1) pos_neg_ratios[term] = pos_neg_ratio for word,ratio in pos_neg_ratios.most_common(): if(ratio > 1): pos_neg_ratios[word] = np.log(ratio) else: pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01))) # ## end New for Project 6 ## ---------------------------------------- # populate review_vocab with all of the words in the given reviews review_vocab = set() for review in reviews: for word in review.split(" "): ## New for Project 6: only add words that occur at least min_count times # and for words with pos/neg ratios, only add words # that meet the polarity_cutoff if(total_counts[word] > min_count): if(word in pos_neg_ratios.keys()): if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)): review_vocab.add(word) else: review_vocab.add(word) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) # populate label_vocab with all of the words in the given labels. 
label_vocab = set() for label in labels: label_vocab.add(label) # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # These are the weights between the input layer and the hidden layer. self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) # These are the weights between the hidden layer and the output layer. 
self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) ## New for Project 5: Removed self.layer_0; added self.layer_1 # The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes self.layer_1 = np.zeros((1,hidden_nodes)) ## New for Project 5: Removed update_input_layer function def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) ## New for Project 5: changed name of first parameter from 'training_reviews' # to 'training_reviews_raw' def train(self, training_reviews_raw, training_labels): ## New for Project 5: pre-process training reviews so we can deal # directly with the indices of non-zero inputs training_reviews = list() for review in training_reviews_raw: indices = set() for word in review.split(" "): if(word in self.word2index.keys()): indices.add(self.word2index[word]) training_reviews.append(list(indices)) # make sure we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # Get the next review and its correct label review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### ## New for Project 5: Removed call to 'update_input_layer' function # because 'layer_0' is no longer used # Hidden layer ## New for Project 5: Add in only the weights for non-zero items self.layer_1 *= 0 for index in review: self.layer_1 += self.weights_0_1[index] # Output layer ## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output. layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # Update the weights ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1' self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step ## New for Project 5: Only update the weights that were used in the forward pass for index in review: self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step # Keep track of correct predictions. if(layer_2 >= 0.5 and label == 'POSITIVE'): correct_so_far += 1 elif(layer_2 < 0.5 and label == 'NEGATIVE'): correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. 
""" # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # Run a forward pass through the network, like in the "train" function. ## New for Project 5: Removed call to update_input_layer function # because layer_0 is no longer used # Hidden layer ## New for Project 5: Identify the indices used in the review and then add # just those weights to layer_1 self.layer_1 *= 0 unique_indices = set() for word in review.lower().split(" "): if word in self.word2index.keys(): unique_indices.add(self.word2index[word]) for index in unique_indices: self.layer_1 += self.weights_0_1[index] # Output layer ## New for Project 5: changed to use self.layer_1 instead of local layer_1 layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer; # return NEGATIVE for other values if(layer_2[0] >= 0.5): return "POSITIVE" else: return "NEGATIVE" """ Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a> TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. 
Specifically, do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Modify pre_process_data: Add two additional parameters: min_count and polarity_cutoff Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.) Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. Change so words are only added to the vocabulary if they occur more than min_count times. Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff Modify __init__: Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data The following code is the same as the previous project, with project-specific changes marked with "New for Project 6" End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to train your network with a small polarity cutoff. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: And run the following cell to test its performance. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: And run the following cell to test its performance.
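The vocabulary-filtering rule described in the bullet points above can be sketched in isolation; the counts and ratio values below are made up for illustration:

```python
from collections import Counter

# Hypothetical statistics standing in for the ones computed in pre_process_data
total_counts = Counter({"great": 120, "movie": 300, "obscure": 3, "boring": 80})
pos_neg_ratios = {"great": 2.1, "movie": -0.01, "boring": -1.9}

def build_vocab(total_counts, pos_neg_ratios, min_count=10, polarity_cutoff=0.1):
    vocab = set()
    for word, count in total_counts.items():
        if count <= min_count:
            continue  # too rare to trust
        # words with a known ratio must clear the polarity cutoff in absolute value;
        # words without one are kept as-is, mirroring the class code above
        if word in pos_neg_ratios and abs(pos_neg_ratios[word]) < polarity_cutoff:
            continue  # too neutral to carry sentiment
        vocab.add(word)
    return vocab

print(sorted(build_vocab(total_counts, pos_neg_ratios)))  # ['boring', 'great']
```

"obscure" is dropped by min_count and "movie" by the polarity cutoff, leaving only the words that are both frequent and polarized.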
End of explanation """ mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01) mlp_full.train(reviews[:-1000],labels[:-1000]) Image(filename='sentiment_network_sparse.png') def get_most_similar_words(focus = "horrible"): most_similar = Counter() for word in mlp_full.word2index.keys(): most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]]) return most_similar.most_common() get_most_similar_words("excellent") get_most_similar_words("terrible") import matplotlib.colors as colors words_to_visualize = list() for word, ratio in pos_neg_ratios.most_common(500): if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]: if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) pos = 0 neg = 0 colors_list = list() vectors_list = list() for word in words_to_visualize: if word in pos_neg_ratios.keys(): vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]]) if(pos_neg_ratios[word] > 0): pos+=1 colors_list.append("#00ff00") else: neg+=1 colors_list.append("#000000") from sklearn.manifold import TSNE tsne = TSNE(n_components=2, random_state=0) words_top_ted_tsne = tsne.fit_transform(vectors_list) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="vector T-SNE for most polarized words") source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0], x2=words_top_ted_tsne[:,1], names=words_to_visualize, color=colors_list)) p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color") word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6, text_font_size="8pt", text_color="#555555", source=source, text_align='center') p.add_layout(word_labels) show(p) # green indicates positive words, black indicates negative words """ Explanation: End of Project 6 solution. 
Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?<a id='lesson_7'></a> End of explanation """
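The get_most_similar_words helper above ranks words by the dot product of their rows of weights_0_1. The same mechanic in a stripped-down, self-contained form — the vocabulary and weight values here are invented, not the trained network's:

```python
import numpy as np

# Hypothetical 2-d "embeddings": each row plays the role of a weights_0_1 row
vocab = ["good", "great", "bad", "terrible"]
word2index = {w: i for i, w in enumerate(vocab)}
weights_0_1 = np.array([
    [ 1.0,  0.9],   # good
    [ 0.9,  1.0],   # great
    [-1.0, -0.8],   # bad
    [-0.9, -1.0],   # terrible
])

def most_similar(focus):
    # rank every word by the dot product of its weight row with the focus row
    f = weights_0_1[word2index[focus]]
    scores = {w: float(weights_0_1[word2index[w]].dot(f)) for w in vocab}
    return sorted(vocab, key=lambda w: scores[w], reverse=True)

print(most_similar("good"))  # ['good', 'great', 'bad', 'terrible']
```

Words whose weight rows point in the same direction score high together, which is why the trained network clusters sentiment-alike words — the effect the t-SNE plot above makes visible.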
amueller/advanced_training
04.1 Pipelines.ipynb
bsd-2-clause
from sklearn.svm import SVC from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # load and split the data cancer = load_breast_cancer() X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=0) # compute minimum and maximum on the training data scaler = MinMaxScaler().fit(X_train) # rescale training data X_train_scaled = scaler.transform(X_train) svm = SVC() # learn an SVM on the scaled training data svm.fit(X_train_scaled, y_train) # scale test data and score the scaled data X_test_scaled = scaler.transform(X_test) svm.score(X_test_scaled, y_test) """ Explanation: Algorithm Chains and Pipelines End of explanation """ from sklearn.model_selection import GridSearchCV # illustration purposes only, don't use this code param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(SVC(), param_grid=param_grid, cv=5) grid.fit(X_train_scaled, y_train) print("best cross-validation accuracy:", grid.best_score_) print("test set score: ", grid.score(X_test_scaled, y_test)) print("best parameters: ", grid.best_params_) mglearn.plots.plot_improper_processing() """ Explanation: Parameter Selection with Preprocessing End of explanation """ from sklearn.pipeline import Pipeline pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())]) pipe.fit(X_train, y_train) pipe.score(X_test, y_test) """ Explanation: Building Pipelines End of explanation """ param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100], 'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(pipe, param_grid=param_grid, cv=5) grid.fit(X_train, y_train) print("best cross-validation accuracy:", grid.best_score_) print("test set score: ", grid.score(X_test, y_test)) print("best parameters: ", grid.best_params_) mglearn.plots.plot_proper_processing() rnd = np.random.RandomState(seed=0) X = rnd.normal(size=(100, 10000)) y = 
rnd.normal(size=(100,)) from sklearn.feature_selection import SelectPercentile, f_regression select = SelectPercentile(score_func=f_regression, percentile=5).fit(X, y) X_selected = select.transform(X) print(X_selected.shape) from sklearn.model_selection import cross_val_score from sklearn.linear_model import Ridge np.mean(cross_val_score(Ridge(), X_selected, y, cv=5)) pipe = Pipeline([("select", SelectPercentile(score_func=f_regression, percentile=5)), ("ridge", Ridge())]) np.mean(cross_val_score(pipe, X, y, cv=5)) """ Explanation: Using Pipelines in Grid-searches End of explanation """ def fit(self, X, y): X_transformed = X for step in self.steps[:-1]: # iterate over all but the final step # fit and transform the data X_transformed = step[1].fit_transform(X_transformed, y) # fit the last step self.steps[-1][1].fit(X_transformed, y) return self def predict(self, X): X_transformed = X for step in self.steps[:-1]: # iterate over all but the final step # transform the data X_transformed = step[1].transform(X_transformed) # fit the last step return self.steps[-1][1].predict(X_transformed) ![pipeline_illustration](figures/pipeline.svg) """ Explanation: The General Pipeline Interface End of explanation """ from sklearn.pipeline import make_pipeline # standard syntax pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))]) # abbreviated syntax pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100)) pipe_short.steps from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA pipe = make_pipeline(StandardScaler(), PCA(n_components=2), StandardScaler()) pipe.steps """ Explanation: Convenient Pipeline creation with make_pipeline End of explanation """ # fit the pipeline defined above to the cancer dataset pipe.fit(cancer.data) # extract the first two principal components from the "pca" step components = pipe.named_steps["pca"].components_ print(components.shape) """ Explanation: Accessing step attributes End of explanation """ from 
sklearn.linear_model import LogisticRegression pipe = make_pipeline(StandardScaler(), LogisticRegression()) param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]} X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=4) grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) print(grid.best_estimator_) print(grid.best_estimator_.named_steps["logisticregression"]) print(grid.best_estimator_.named_steps["logisticregression"].coef_) """ Explanation: Accessing attributes in grid-searched pipeline. End of explanation """ from sklearn.datasets import load_boston boston = load_boston() X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=0) from sklearn.preprocessing import PolynomialFeatures pipe = make_pipeline( StandardScaler(), PolynomialFeatures(), Ridge()) param_grid = {'polynomialfeatures__degree': [1, 2, 3], 'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]} grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1) grid.fit(X_train, y_train) plt.matshow(np.array([s.mean_validation_score for s in grid.grid_scores_]).reshape(3, -1), vmin=0, cmap="viridis") plt.xlabel("ridge__alpha") plt.ylabel("polynomialfeatures__degree") plt.xticks(range(len(param_grid['ridge__alpha'])), param_grid['ridge__alpha']) plt.yticks(range(len(param_grid['polynomialfeatures__degree'])), param_grid['polynomialfeatures__degree']) plt.colorbar() print(grid.best_params_) grid.score(X_test, y_test) param_grid = {'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]} pipe = make_pipeline(StandardScaler(), Ridge()) grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) grid.score(X_test, y_test) """ Explanation: Grid-searching preprocessing steps and model parameters End of explanation """
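The grid keys used throughout this section follow the 'stepname__parameter' convention. A small sketch of where those step names come from and how the double-underscore syntax routes values to a step (standard scikit-learn API):

```python
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), Ridge())

# make_pipeline names each step after the lowercased class name
print([name for name, step in pipe.steps])  # ['standardscaler', 'ridge']

# '<step>__<param>' routes a parameter to that step — the same syntax
# GridSearchCV uses for keys like 'ridge__alpha'
pipe.set_params(ridge__alpha=0.5)
print(pipe.named_steps["ridge"].alpha)  # 0.5
```

This is why renaming a step (or switching from Pipeline to make_pipeline) changes the keys you must use in param_grid.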
liam2/larray
doc/source/tutorial/getting_started.ipynb
gpl-3.0
%xmode Minimal from larray import * """ Explanation: Getting Started The purpose of the present Getting Started section is to give a quick overview of the main objects and features of the LArray library. To get a more detailed presentation of all capabilities of LArray, read the next sections of the tutorial. The API Reference section of the documentation gives you the list of all objects, methods and functions with their individual documentation and examples. To use the LArray library, the first thing to do is to import it: End of explanation """ from larray import __version__ __version__ """ Explanation: To know the version of the LArray library installed on your machine, type: End of explanation """ s = 1 + 2 # In the interactive mode, there is no need to use the print() function # to display the content of the variable 's'. # Simply typing 's' is enough s # In the interactive mode, there is no need to use the print() function # to display the result of an expression 1 + 2 """ Explanation: <div class="alert alert-warning"> **Note:** The tutorial is generated from Jupyter notebooks which work in the "interactive" mode (like in the LArray Editor console). In the interactive mode, there is no need to use the print() function to display the content of a variable. Simply writing its name is enough. The same remark applies to the returned value of an expression.<br><br> In a Python script (file with .py extension), you always need to use the print() function to display the content of a variable or the value returned by a function or an expression. </div> End of explanation """ # create an axis using one string age = Axis('age=0-9,10-17,18-66,67+') # labels generated using the special syntax start..end time = Axis('time=2015..2017') # labels given as a list gender = Axis(['female', 'male'], 'gender') age, gender, time """ Explanation: Create an array Working with the LArray library mainly consists of manipulating Array data structures.
They represent N-dimensional labelled arrays and are composed of raw data (NumPy ndarray), axes and optionally some metadata. An Axis object represents a dimension of an array. It contains a list of labels and has a name. There are several ways to create an axis: End of explanation """ # define some data. This is the Belgian population (in thousands). Source: eurostat. data = [[[633, 635, 634], [663, 665, 664]], [[484, 486, 491], [505, 511, 516]], [[3572, 3581, 3583], [3600, 3618, 3616]], [[1023, 1038, 1053], [756, 775, 793]]] # create an Array object population = Array(data, axes=[age, gender, time]) population """ Explanation: <div class="alert alert-warning"> **Warning:** When using the string syntax `"axis_name=list,of,labels"` or `"axis_name=start..end"`, LArray will automatically infer the type of labels.<br> For instance, the command line `age = Axis("age=0..100")` will create an age axis with labels of type `int`.<br><br> Mixing special characters like `+` with numbers will create an axis with labels of type `str` instead of `int`.<br> As a consequence, the command line `age = Axis("age=0..98,99+")` will create an age axis with labels of type `str` instead of `int`! </div> The labels allow you to select subsets and to manipulate the data without working with the positions of array elements directly.
To create an array from scratch, you need to supply data and axes: End of explanation """ # attach some metadata to the population array population.meta.title = 'population by age, gender and year' population.meta.source = 'Eurostat' # display metadata population.meta """ Explanation: You can optionally attach some metadata to an array: End of explanation """ # Array summary: metadata + dimensions + description of axes population.info """ Explanation: To get a short summary of an array, type: End of explanation """ population.axes """ Explanation: To get the axes of an array, type: End of explanation """ # extract the 'time' axis belonging to the 'population' array time = population.time time """ Explanation: It is also possible to extract one axis belonging to an array using its name: End of explanation """ zeros([age, gender]) ones([age, gender]) full([age, gender], fill_value=10.0) # With initial=1.0 and inc=0.5, we generate the sequence 1.0, 1.5, 2.0, 2.5, 3.0, ... sequence(age, initial=1.0, inc=0.5) ndtest([age, gender]) """ Explanation: Create an array filled with predefined values Arrays filled with predefined values can be generated through dedicated functions: zeros : creates an array filled with 0 ones : creates an array filled with 1 full : creates an array filled with a given value sequence : creates an array by sequentially applying modifications to the array along an axis. ndtest : creates a test array with increasing numbers as data End of explanation """ # save our population array to a CSV file population.to_csv('population_belgium.csv') """ Explanation: Save/Load an array The LArray library offers many I/O functions to read and write arrays in various formats (CSV, Excel, HDF5).
For example, to save an array in a CSV file, call the method to_csv: End of explanation """ population = read_csv('population_belgium.csv') population """ Explanation: The content of the CSV file is then: age,gender\time,2015,2016,2017 0-9,female,633,635,634 0-9,male,663,665,664 10-17,female,484,486,491 10-17,male,505,511,516 18-66,female,3572,3581,3583 18-66,male,3600,3618,3616 67+,female,1023,1038,1053 67+,male,756,775,793 <div class="alert alert-info"> Note: In CSV or Excel files, the last dimension is horizontal and the names of the last two dimensions are separated by a backslash \. </div> To load a saved array, call the function read_csv: End of explanation """ population['67+', 'female', 2017] """ Explanation: Other input/output functions are described in the Input/Output section of the API documentation. Selecting a subset To select an element or a subset of an array, use brackets [ ]. In Python we usually use the term indexing for this operation. Let us start by selecting a single element: End of explanation """ population[2017, 'female', '67+'] """ Explanation: Labels can be given in arbitrary order: End of explanation """ population['female'] """ Explanation: When selecting a larger subset the result is an array: End of explanation """ population['female', ['0-9', '10-17']] """ Explanation: When selecting several labels for the same axis, they must be given as a list (enclosed by [ ]) End of explanation """ # in this case '10-17':'67+' is equivalent to ['10-17', '18-66', '67+'] population['female', '10-17':'67+'] # :'18-66' selects all labels between the first one and '18-66' # 2017: selects all labels between 2017 and the last one population[:'18-66', 2017:] """ Explanation: You can also select slices, which are all labels between two bounds (we usually call them the start and stop bounds). 
Specifying the start and stop bounds of a slice is optional: when not given, start is the first label of the corresponding axis, stop the last one: End of explanation """ country = Axis(['Belgium', 'Netherlands', 'Germany'], 'country') citizenship = Axis(['Belgium', 'Netherlands', 'Germany'], 'citizenship') immigration = ndtest((country, citizenship, time)) immigration """ Explanation: <div class="alert alert-info"> Note: Contrary to slices on normal Python lists, the stop bound is included in the selection. </div> <div class="alert alert-warning"> Selecting by labels as above only works as long as there is no ambiguity. When several axes have some labels in common and you do not specify explicitly on which axis to work, it fails with an error ending with something like ValueError: <somelabel> is ambiguous (valid in <axis1>, <axis2>). </div> For example, imagine you need to work with an 'immigration' array containing two axes sharing some common labels: End of explanation """ immigration[country['Netherlands'], citizenship['Belgium'], 2017] """ Explanation: If we try to get the number of Belgians living in the Netherlands for the year 2017, we might try something like: python immigration['Netherlands', 'Belgium', 2017] ... but we receive back a volley of insults: [some long error message ending with the line below] [...] ValueError: Netherlands is ambiguous (valid in country, citizenship) In that case, we have to specify explicitly which axes the 'Netherlands' and 'Belgium' labels we want to select belong to: End of explanation """ for year in time: print(year) """ Explanation: Iterating over an axis To iterate over an axis, use the following syntax: End of explanation """ population """ Explanation: Aggregation The LArray library includes many aggregations methods: sum, mean, min, max, std, var, ... 
For example, assuming we still have an array in the population variable: End of explanation """ population.sum(gender) """ Explanation: We can sum along the 'gender' axis using: End of explanation """ population.sum(age, gender) """ Explanation: Or sum along both 'age' and 'gender': End of explanation """ population.sum_by(time) """ Explanation: It is sometimes more convenient to aggregate along all axes except some. In that case, use the aggregation methods ending with _by. For example: End of explanation """ children = age['0-9', '10-17'] children """ Explanation: Groups A Group object represents a subset of labels or positions of an axis: End of explanation """ working = age['18-66'] >> 'working' working nonworking = age['0-9', '10-17', '67+'] >> 'nonworking' nonworking """ Explanation: It is often useful to attach them an explicit name using the &gt;&gt; operator: End of explanation """ population """ Explanation: Still using the same population array: End of explanation """ population[working] population[nonworking] """ Explanation: Groups can be used in selections: End of explanation """ population.sum(nonworking) """ Explanation: or aggregations: End of explanation """ population.sum((children, working, nonworking)) """ Explanation: When aggregating several groups, the names we set above using &gt;&gt; determines the label on the aggregated axis. 
Since we did not give a name to the children group, the resulting label is generated automatically: End of explanation """ age_100 = Axis('age=0..100') # mixing slices and individual labels leads to the creation of several groups (a tuple of groups) age_100[0:10, 20, 30, 40] # the union() method lets you mix slices and individual labels to create a single group age_100[0:10].union(age_100[20, 30, 40]) """ Explanation: <div class="alert alert-warning"> **Warning:** Mixing slices and individual labels inside the `[ ]` will generate **several groups** (a tuple of groups) instead of a single group.<br> If you want to create a single group using both slices and individual labels, you need to use the `.union()` method (see below). </div> End of explanation """ population = zeros([age, gender, time]) births = zeros([age, gender, time]) deaths = zeros([age, gender, time]) # create a session containing the arrays of the model demography_session = Session(population=population, births=births, deaths=deaths) # get an array (option 1) demography_session['population'] # get an array (option 2) demography_session.births # modify an array demography_session.deaths['male'] = 1 # add an array demography_session.foreigners = zeros([age, gender, time]) # displays names of arrays contained in the session # (in alphabetical order) demography_session.names """ Explanation: Grouping arrays in a Session Variables (arrays) may be grouped in Session objects. A session is an ordered dict-like container with special I/O methods: End of explanation """ # dump all arrays contained in demography_session in one HDF5 file demography_session.save('demography.h5') # load all arrays saved in the HDF5 file 'demography.h5' and store them in the 'demography_session' variable demography_session = Session('demography.h5') """ Explanation: One of the main advantages of using sessions is to save and load many arrays at once: End of explanation """
jdsanch1/SimRC
02. Parte 2/15. Clase 15/.ipynb_checkpoints/03Class NB-checkpoint.ipynb
mit
# import the packages to be used import pandas as pd import pandas_datareader.data as web import numpy as np import datetime from datetime import datetime import scipy.stats as stats import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline # some display options for Python pd.set_option('display.notebook_repr_html', True) pd.set_option('display.max_columns', 6) pd.set_option('display.max_rows', 10) pd.set_option('display.width', 78) pd.set_option('precision', 3) """ Explanation: Class 3: Working with options Juan Diego Sánchez Torres, Professor, MAF ITESO Department of Mathematics and Physics dsanchez@iteso.mx Tel. 3669-34-34 Ext. 3069 Office: Cubicle 4, Building J, 2nd floor 1. Using Pandas to download financial data First, in order to download prices and option information from Yahoo, some Python packages must be loaded. In this case, the main package will be Pandas. Scipy and Numpy will also be used for the required mathematics, and Matplotlib and Seaborn for plotting the data series. End of explanation """
The following command installs the package in Anaconda: *conda install -c conda-forge pandas-datareader*
End of explanation
"""
aapl = web.Options('AAPL', 'yahoo') appl_opt = aapl.get_all_data().reset_index() appl_opt appl_opt['Expiry'] appl_opt['Type'] appl_opt.loc[1080] call01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='call')] call01
"""
Explanation: Note: To download data from the Mexican stock exchange (BMV), the ticker must carry the MX extension. For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
End of explanation
"""
ax = call01.set_index('Strike')[['IV']].plot(figsize=(8,6)) ax.axvline(call01.Underlying_Price.iloc[0], color='g'); put01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='put')] put01 ax = put01.set_index('Strike')[['IV']].plot(figsize=(8,6)) ax.axvline(put01.Underlying_Price.iloc[0], color='g');
"""
Explanation: 2. Implied volatility
End of explanation
"""
def call_payoff(ST, K): return max(0, ST-K) call_payoff(25, 30) def call_payoffs(STmin, STmax, K, step=1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(call_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs}, index=maturities) df.index.name = 'Precio de maduración' return df call_payoffs(10,25,15) def plot_call_payoffs(STmin, STmax, K, step=1): payoffs = call_payoffs(STmin, STmax, K, step) plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10) plt.ylabel("Payoff") plt.xlabel("Precio de maduración") plt.title('Payoff call, Precio strike={0}'.format(K)) plt.xlim(STmin, STmax) plt.plot(payoffs.index, payoffs.Payoff.values); plot_call_payoffs(10, 25, 15) def put_payoff(ST, K): return max(0, K-ST) put_payoff(25, 30) def put_payoffs(STmin, STmax, K, step=1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(put_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs}, index=maturities) df.index.name = 'Precio de maduración' return df put_payoffs(10,25,15) def
plot_put_payoffs(STmin, STmax, K, step=1): payoffs = put_payoffs(STmin, STmax, K, step) plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10) plt.ylabel("Payoff") plt.xlabel("Precio de maduración") plt.title('Payoff put, Precio strike={0}'.format(K)) plt.xlim(STmin, STmax) plt.plot(payoffs.index, payoffs.Payoff.values); plot_put_payoffs(10, 25, 15) def call_pnl_buyer(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(call_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': payoffs-ct}, index=maturities) df.index.name = 'Precio de maduración' return df call_pnl_buyer(12, 15, 10, 35) def call_pnl_seller(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(call_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': ct-payoffs}, index=maturities) df.index.name = 'Precio de maduración' return df call_pnl_seller(12, 15, 10, 35) def call_pnl_combined(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(call_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnLcomprador': payoffs-ct, 'PnLvendedor': ct-payoffs}, index=maturities) df.index.name = 'Precio de maduración' return df call_pnl_combined(12, 15, 10, 35) def put_pnl_buyer(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(put_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': payoffs-ct}, index=maturities) df.index.name = 'Precio de maduración' return df put_pnl_buyer(2, 15, 10, 30) def put_pnl_seller(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(put_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnL': ct-payoffs}, index=maturities) df.index.name = 
'Precio de maduración' return df put_pnl_seller(2, 15, 10, 30) def put_pnl_combined(ct, K, STmin, STmax, step = 1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(put_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs, 'Prima': ct, 'PnLcomprador': payoffs-ct, 'PnLvendedor': ct-payoffs}, index=maturities) df.index.name = 'Precio de maduración' return df put_pnl_combined(2, 15, 10, 30) def plot_pnl(pnl_df, okind, who): plt.ylim(pnl_df.Payoff.min() - 10, pnl_df.Payoff.max() + 10) plt.ylabel("Ganancia/pérdida") plt.xlabel("Precio de maduración") plt.title('Ganancia y pérdida de una opción {0} para el {1}, Prima={2}, Strike={3}'.format(okind, who, pnl_df.Prima.iloc[0], pnl_df.Strike.iloc[0])) plt.ylim(pnl_df.PnL.min()-3, pnl_df.PnL.max() + 3) plt.xlim(pnl_df.index[0], pnl_df.index[len(pnl_df.index)-1]) plt.plot(pnl_df.index, pnl_df.PnL) plt.axhline(0, color='g'); plot_pnl(call_pnl_buyer(12, 15, 10, 35), "call", "comprador") plot_pnl(call_pnl_seller(12, 15, 10, 35), "call", "vendedor") plot_pnl(put_pnl_buyer(2, 15, 10, 30), "put", "comprador")
"""
Explanation: 3. Payoff plots
End of explanation
"""
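Section 2 of the notebook above reads implied volatilities ready-made from the downloaded option chain (the IV column). As a complement, an implied volatility can also be backed out numerically from an observed option price. The sketch below is not part of the original class material; it assumes the standard Black-Scholes formula for a European call and plain bisection, and the names `bs_call_price` and `implied_vol_call` are illustrative.

```python
import math
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

def implied_vol_call(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert bs_call_price by bisection; this works because the
    call price is strictly increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: price an at-the-money call at sigma = 0.25,
# then recover the volatility from that price.
p = bs_call_price(100.0, 100.0, 1.0, 0.05, 0.25)
print(round(implied_vol_call(p, 100.0, 100.0, 1.0, 0.05), 4))  # 0.25
```

Because the call price increases monotonically with sigma, bisection always brackets the unique root; Newton's method would converge faster but needs a reasonable starting point.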
ManuSetty/wishbone
notebooks/Wishbone_for_mass_cytometry.ipynb
gpl-2.0
import wishbone # Plotting and miscellaneous imports import os import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import seaborn as sns import random %matplotlib inline
"""
Explanation: <h3>Wishbone for mass cytometry</h3> <h4>Table of contents</h4> <br/> <a href='#intro'>Introduction</a> <br/> <a href='#data'>Loading data</a> <br/> <a href='#pre'>Data preprocessing</a> <br/> <a href='#wishbone'>Running Wishbone</a> <br/> <a href='#wishbone2'>Visualizing Wishbone results</a> <br/> <a href='#pheno'>Clustering of cells</a> <br/> <a href='#gating'>Gating for start cells</a> <a id="intro"></a><h3> Introduction </h3> Wishbone is an algorithm to identify bifurcating developmental trajectories from single cell data. Wishbone can be applied to both single cell RNA-seq and mass cytometry datasets. This notebook details the usage of Wishbone for mass cytometry data. <a id="data"></a><h3> Loading data </h3> Wishbone can read mass cytometry data from an fcs file. The following code snippet imports the wishbone package along with other plotting-related imports.
End of explanation
"""
# Load sample data scdata = wishbone.wb.SCData.from_fcs(os.path.expanduser('~/.wishbone/data/sample_masscyt.fcs'), cofactor=None)
"""
Explanation: A sample mass cytometry fcs file is installed at ~/.wishbone/data/sample_masscyt.fcs. This is a sample of cells from a replicate of the mouse thymus data generated for the Wishbone manuscript. Thus the data is already cleaned and arcsinh transformed. The data can be loaded using the wishbone.wb.SCData.from_fcs function.
End of explanation
"""
scdata
"""
Explanation: This will create an object of the type wishbone.wb.SCData, which is the base class for the analysis. The cofactor parameter is used for the arcsinh transformation. Since the data is already transformed, cofactor is set to None. Typically, this parameter will be set to the default (5) for untransformed data.
A summary of the scdata object is shown below.
End of explanation
"""
scdata
"""
Explanation: This shows that the data matrix contains 25000 cells and 13 markers, along with the different properties of the wishbone.wb.SCData class. <a id='pre'></a><h3>Data preprocessing</h3> <h4> tSNE visualization </h4> Wishbone uses tSNE for visualization, and tSNE can be run using the run_tsne function. Unlike single cell RNA-seq data, tSNE is run directly on the transformed data for mass cytometry.
End of explanation
"""
scdata.run_tsne()
"""
Explanation: tSNE results can be visualized by the plot_tsne function.
End of explanation
"""
fig, ax = scdata.plot_tsne()
"""
Explanation: Gene expression can be visualized on tSNE maps using the plot_gene_expression function. The genes parameter is a string iterable of genes, which must be a subset of the expression column names. The below function plots the expression of the DN genes CD44 and CD25 and the SP genes CD4 and CD8.
End of explanation
"""
fig, ax = scdata.plot_gene_expression(['CD44', 'CD25', 'CD4', 'CD8'])
"""
Explanation: The below example shows how the user can compare expression of the desired markers. The plot comparing expression of CD8 and CD4 is shown below.
End of explanation
"""
fig, ax = wishbone.wb.get_fig() plt.scatter(scdata.data['CD8'], scdata.data['CD4'], s=10, edgecolors='none') plt.xlim([0, 6]) plt.ylim([0, 6]) plt.xlabel('CD8') plt.ylabel('CD4')
"""
Explanation: <h4> Diffusion maps </h4> Wishbone uses diffusion maps, a non-linear dimensionality reduction technique, to denoise the data and capture the major axes of variation. Diffusion maps can be determined by using the run_diffusion_map function, and the diffusion components visualized on tSNE maps using plot_diffusion_components.
End of explanation
"""
# Run diffusion maps scdata.run_diffusion_map() fig, ax = scdata.plot_diffusion_components()
"""
Explanation: Note that component 0 is the trivial component with eigen value 1 and does not encode any information about the data. From the above plots, the first three non-trivial components (Components 1, 2, 3) capture the differences between cell types. This difference in the information encoded can be clearly seen in the eigen values associated with these components. Similar to PCA, the eigen values encode the amount of variance explained in the data and can be visualized using the plot_diffusion_eigen_vectors function.
End of explanation
"""
scdata.plot_diffusion_eigen_vectors()
"""
Explanation: A clear drop-off in eigen value can be seen after component 3. Thus Wishbone will be run using components 1, 2 and 3. <h4> Saving SCData object </h4> The SCData object can be saved to a pickle file and loaded using the save and load functions. <code> scdata.save('mouse_thymus_scdata.p') scdata = wishbone.wb.SCData.load('mouse_thymus_scdata.p') </code> <a id="wishbone"></a><h3> Running Wishbone </h3> wishbone.wb.Wishbone is the class for running Wishbone. After initialization, Wishbone can be run by specifying the start cell, the components to use and the number of waypoints to be used. The start cell for this dataset is one of the DN cells, chosen based on expression of CD4 and CD8. A sample list of waypoints is also provided as part of the data. The num_waypoints parameter can also be an integer representing the number of waypoints to be sampled from the data.
End of explanation
"""
# Start cell start_cell = scdata.data.index[(scdata.data['CD4'] < 1) & (scdata.data['CD8'] < 1)][0] # Waypoints waypoints = pd.DataFrame.from_csv(os.path.expanduser('~/.wishbone/data/masscyt_waypoints.csv')).iloc[:, 0] waypoints = list(waypoints) # Wishbone class wb = wishbone.wb.Wishbone(scdata) wb.run_wishbone(start_cell, components_list=[1, 2, 3], num_waypoints=waypoints)
"""
Explanation: Wishbone objects contain the SCData object along with the identified trajectory, branch associations and waypoints.
End of explanation
"""
wb
"""
Explanation: <a id="wishbone2"></a><h3> Visualizing Wishbone results </h3> Wishbone trajectory and branch results can be visualized on tSNE maps using the plot_wishbone_on_tsne function.
End of explanation
"""
fig, ax = wb.plot_wishbone_on_tsne()
"""
Explanation: Gene expression trends along the Wishbone trajectory can be visualized using the plot_marker_trajectory function. This function also returns the smoothed trends along with the matplotlib fig, ax handle objects.
End of explanation
"""
vals, fig, ax = wb.plot_marker_trajectory(['CD4', 'CD8', 'CD44'], smoothing_factor=1.5, show_variance=True);
"""
Explanation: The marker trends can be visualized as heatmaps in a given trajectory range using the following functions.
End of explanation
"""
fig, ax = wb.plot_marker_heatmap(vals) fig, ax = wb.plot_marker_heatmap(vals, trajectory_range=[0.1, 0.6])
"""
Explanation: The change in marker trends along the trajectory, or derivatives, can be visualized using these functions.
End of explanation
"""
fig, ax = wb.plot_derivatives(vals) fig, ax = wb.plot_derivatives(vals, trajectory_range=[0.3, 0.6])
"""
Explanation: Similar to SCData objects, Wishbone objects can also be saved and loaded using the save and load functions. <a id='pheno'></a><h3>Clustering of cells </h3> Clustering of cells to identify phenotypically distinct populations can be performed using the run_phenograph function. This function uses <a href="">Phenograph</a> to cluster the cells.
Phenograph was used in the Wishbone manuscript to filter out stromal cells and regulatory T cells from the mouse thymus.
End of explanation
"""
scdata.run_phenograph()
"""
Explanation: The results of Phenograph can be visualized on the tSNE maps using the function plot_phenograph_clusters, and the mean expression of all markers in each cluster can be visualized as a heatmap using the function summarize_phenograph_clusters.
End of explanation
"""
fig, ax = scdata.plot_phenograph_clusters() fig, ax = scdata.summarize_phenograph_clusters()
"""
Explanation: Finally, a new SCData object with cells belonging to a subset of clusters can be generated using the select_clusters function.
End of explanation
"""
new_scd = scdata.select_clusters([0, 1, 2, 3]) new_scd
"""
Explanation: <a id='gating'></a><h3>Gating for start cells </h3> The Python implementation of Wishbone does not yet support the specification of a gate for start cells. This section provides information about how to choose a cell from a desired section of the tSNE plot. As a first step, change the plotting defaults to show the axes on tSNE maps and use alternative matplotlib settings to gain the ability to zoom in and out.
End of explanation
"""
%matplotlib notebook sns.set_style('whitegrid')
"""
Explanation: Plot tSNE results on a scatter plot.
End of explanation
"""
plt.figure(figsize=[5, 5]) plt.scatter(scdata.tsne['x'], scdata.tsne['y'], s=10, edgecolors='none')
"""
Explanation: The gate can be set using the matplotlib.path functions. Make sure to choose a closed path by specifying the x-y coordinates of the desired gate.
End of explanation """ gated_cells = scdata.tsne.index[gate.contains_points(scdata.tsne)] # Plot the gated cells %matplotlib inline plt.figure(figsize=[5, 5]) plt.scatter(scdata.tsne['x'], scdata.tsne['y'], s=10, edgecolors='none', color='lightgrey') plt.scatter(scdata.tsne.ix[gated_cells, 'x'], scdata.tsne.ix[gated_cells, 'y'], s=10, edgecolors='none') """ Explanation: The cells within the gate can be identified and plotted using the following code snippet End of explanation """ start_cell = random.sample(list(gated_cells), 1)[0] start_cell """ Explanation: Randomly select one of the cells from the gated cells as the start cell End of explanation """
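The gating workflow above relies only on matplotlib.path.Path.contains_points, so the selection logic can be exercised without any cytometry data. Below is a minimal self-contained sketch (an illustration, not part of the original notebook) using the same rectangular gate with synthetic coordinates standing in for the tSNE table.

```python
import numpy as np
from matplotlib.path import Path

# Same rectangular gate as above: corners (30,-20) and (40,-10),
# closed by repeating the first vertex
gate = Path(np.reshape([30, -20, 40, -20, 40, -10, 30, -10, 30, -20], [5, 2]))

# Synthetic "tSNE" coordinates instead of real cells
points = np.array([[35.0, -15.0],   # inside the gate
                   [0.0, 0.0],      # outside
                   [39.0, -11.0]])  # inside
mask = gate.contains_points(points)
print(mask)  # [ True False  True]
```

The boolean mask can then index a coordinate table exactly as done with scdata.tsne above.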
phoebe-project/phoebe2-docs
2.2/tutorials/general_concepts.ipynb
gpl-3.0
!pip install -I "phoebe>=2.2,<2.3" """ Explanation: General Concepts HOW TO RUN THIS FILE: if you're running this in a Jupyter notebook or Google Colab session, you can click on a cell and then shift+Enter to run the cell and automatically select the next cell. Alt+Enter will run a cell and create a new cell below it. Ctrl+Enter will run a cell but keep it selected. To restart from scratch, restart the kernel/runtime. This tutorial introduces all the general concepts of dealing with Parameters, ParameterSets, and the Bundle. This tutorial aims to be quite complete - covering almost everything you can do with Parameters, so on first read you may just want to try to get familiar, and then return here as a reference for any details later. All of these tutorials assume basic comfort with Python in general - particularly with the concepts of lists, dictionaries, and objects as well as basic comfort with using the numpy and matplotlib packages. Setup Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation """ import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt """ Explanation: Let's get started with some basic imports End of explanation """ logger = phoebe.logger(clevel='INFO', flevel='DEBUG', filename='tutorial.log') """ Explanation: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore. PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants. Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent. 
Logger Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel). A convenience function is provided at the top-level via phoebe.logger to initialize the logger with any desired level. The levels from most to least information are: DEBUG INFO WARNING ERROR CRITICAL End of explanation """ param = phoebe.parameters.StringParameter(qualifier='myparameter', description='mydescription', value='myvalue') """ Explanation: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel. So with this logger, anything with INFO, WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory. Note: the logger messages are not included in the outputs shown below. Parameters Parameters hold a single value, but need to be aware about their own types, limits, and connection with other Parameters (more on this later when we discuss ParameterSets). Note that generally you won't ever have to "create" or "define" your own Parameters, those will be created for you by helper functions, but we have to start somewhere... so let's create our first Parameter. We'll start with creating a StringParameter since it is the most generic, and then discuss and specific differences for each type of Parameter. 
End of explanation """ print(type(param)) """ Explanation: If you ever need to know the type of a Parameter, you can always use python's built-in type functionality: End of explanation """ print(param) """ Explanation: If we print the parameter object we can see a summary of information End of explanation """ print(param.tags) """ Explanation: You can see here that we've defined three a few things about parameter: the qualifier, description, and value (others do exist, they just don't show up in the summary). These "things" can be split into two groups: tags and attributes (although in a pythonic sense, both can be accessed as attributes). Don't worry too much about this distinction - it isn't really important except for the fact that tags are shared across all Parameters whereas attributes are dependent on the type of the Parameter. The tags of a Parameter define the Parameter and how it connects to other Parameters (again, more on this when we get to ParameterSets). For now, just know that you can access a list of all the tags as follows: End of explanation """ print(param['qualifier'], param.qualifier) """ Explanation: and that each of these is available through both a dictionary key and an object attribute. For example: End of explanation """ param.attributes """ Explanation: The 'qualifier' attribute is essentially an abbreviated name for the Parameter. These tags will be shared across all Parameters, regardless of their type. Attributes, on the other hand, can be dependent on the type of the Parameter and tell the Parameter its rules and how to interpret its value. You can access a list of available attributes as follows: End of explanation """ print(param['description'], param.description) """ Explanation: and again, each of these are available through both a dictionary key and as an object attribute. 
For example, all parameters have a 'description' attribute which gives additional information about what the Parameter means: End of explanation """ print(param.get_value(), param['value'], param.value) """ Explanation: For the special case of the 'value' attribute, there is also a get_value method (will become handy later when we want to be able to request the value in a specific unit). End of explanation """ param.set_value('newvalue') print(param.get_value()) """ Explanation: The value attribute is also the only attribute that you'll likely want to change, so it also has a set_value method: End of explanation """ param = phoebe.parameters.ChoiceParameter(qualifier='mychoiceparameter', description='mydescription', choices=['choice1', 'choice2'], value='choice1') print(param) print(param.attributes) print(param['choices'], param.choices) print(param.get_value()) #param.set_value('not_a_choice') # would raise a ValueError param.set_value('choice2') print(param.get_value()) """ Explanation: The 'visible_if' attribute only comes into play when the Parameter is a member of a ParameterSet, so we'll discuss it at the end of this tutorial when we get to ParameterSets. The 'copy_for' attribute is only used when the Parameter is in a particular type of ParameterSet called a Bundle (explained at the very end of this tutorial). We'll see the 'copy_for' capability in action later in the Datasets Tutorial, but for now, just know that you can view this property only and cannot change it... and most of the time it will just be an empty string. StringParameters We'll just mention StringParameters again for completeness, but we've already seen about all they can do - the value must cast to a valid string but no limits or checks are performed at all on the value. ChoiceParameters ChoiceParameters are essentially StringParameters with one very important exception: the value must match one of the prescribed choices. 
Therefore, they have a 'choice' attribute and an error will be raised if trying to set the value to any string not in that list. End of explanation """ param = phoebe.parameters.SelectParameter(qualifier='myselectparameter', description='mydescription', choices=['choice1', 'choice2'], value=['choice1']) print(param) print(param.attributes) print(param['choices'], param.choices) print(param.get_value()) """ Explanation: SelectParameters NEW IN PHOEBE 2.1 SelectParameters are very similar to ChoiceParameters except that the value is a list, where each item must match one of the prescribed choices. End of explanation """ param.set_value(["choice*"]) print(param.get_value()) print(param.expand_value()) """ Explanation: However, SelectParameters also allow you to use * as a wildcard and will expand to any of the choices that match that wildcard. For example, End of explanation """ param = phoebe.parameters.FloatParameter(qualifier='myfloatparameter', description='mydescription', default_unit=u.m, limits=[None,20], value=5) print(param) """ Explanation: FloatParameters FloatParameters are probably the most common Parameter used in PHOEBE and hold both a float and a unit, with the ability to retrieve the value in any other convertible unit. End of explanation """ print(param.attributes) """ Explanation: You'll notice here a few new mentions in the summary... "Constrained by", "Constrains", and "Related to" are all referring to constraints which will be discussed in a future tutorial. End of explanation """ print(param['limits'], param.limits) #param.set_value(30) # would raise a ValueError param.set_value(2) print(param.get_value()) """ Explanation: FloatParameters have an attribute which hold the "limits" - whenever a value is set it will be checked to make sure it falls within the limits. If either the lower or upper limit is None, then there is no limit check for that extreme. 
End of explanation """ print(param['default_unit'], param.default_unit) """ Explanation: FloatParameters have an attribute which holds the "default_unit" - this is the unit in which the value is stored and the unit that will be provided if not otherwise overriden. End of explanation """ print(param.get_value()) """ Explanation: Calling get_value will then return a float in these units End of explanation """ print(param.get_value(unit=u.km), param.get_value(unit='km')) """ Explanation: But we can also request the value in a different unit, by passing an astropy Unit object or its string representation. End of explanation """ print(param.get_quantity(), param.get_quantity(unit=u.km)) """ Explanation: FloatParameters also have their own method to access an astropy Quantity object that includes both the value and the unit End of explanation """ param.set_value(10) print(param.get_quantity()) param.set_value(0.001*u.km) print(param.get_quantity()) param.set_value(10, unit='cm') print(param.get_quantity()) """ Explanation: The set_value method also accepts a unit - this doesn't change the default_unit internally, but instead converts the provided value before storing. End of explanation """ param.set_default_unit(u.km) print(param.get_quantity()) """ Explanation: If for some reason you want to change the default_unit, you can do so as well: End of explanation """ print(param.limits) """ Explanation: But note that the limits are still stored as a quantity object in the originally defined default_units End of explanation """ param = phoebe.parameters.IntParameter(qualifier='myintparameter', description='mydescription', limits=[0,None], value=1) print(param) print(param.attributes) """ Explanation: IntParameters IntParameters are essentially the same as FloatParameters except they always cast to an Integer and they have no units. 
End of explanation """ print(param['limits'], param.limits) """ Explanation: Like FloatParameters above, IntParameters still have limits End of explanation """ param.set_value(1.9) print(param.get_value()) """ Explanation: Note that if you try to set the value to a float it will not raise an error, but will cast that value to an integer (following python rules of truncation, not rounding) End of explanation """ param = phoebe.parameters.BoolParameter(qualifier='myboolparameter', description='mydescription', value=True) print(param) print(param.attributes) """ Explanation: Bool Parameters BoolParameters are even simpler - they accept True or False. End of explanation """ param.set_value(0) print(param.get_value()) param.set_value(None) print(param.get_value()) """ Explanation: Note that, like IntParameters, BoolParameters will attempt to cast anything you give it into True or False. End of explanation """ param.set_value('') print(param.get_value()) param.set_value('some_string') print(param.get_value()) """ Explanation: As with Python, an empty string will cast to False and a non-empty string will cast to True End of explanation """ param.set_value('False') print(param.get_value()) param.set_value('false') print(param.get_value()) """ Explanation: The only exception to this is that (unlike Python), 'true' or 'True' will cast to True and 'false' or 'False' will cast to False. End of explanation """ param = phoebe.parameters.FloatArrayParameter(qualifier='myfloatarrayparameters', description='mydescription', default_unit=u.m, value=np.array([0,1,2,3])) print(param) print(param.attributes) print(param.get_value(unit=u.km)) """ Explanation: FloatArrayParameters FloatArrayParameters are essentially the same as FloatParameters (in that they have the same unit treatment, although obviously no limits) but hold numpy arrays rather than a single value. By convention in Phoebe, these will (almost) always have a pluralized qualifier. 
End of explanation """ param1 = phoebe.parameters.FloatParameter(qualifier='param1', description='param1 description', default_unit=u.m, limits=[None,20], value=5, context='context1', kind='kind1') param2 = phoebe.parameters.FloatParameter(qualifier='param2', description='param2 description', default_unit=u.deg, limits=[0,2*np.pi], value=0, context='context2', kind='kind2') param3 = phoebe.parameters.FloatParameter(qualifier='param3', description='param3 description', default_unit=u.kg, limits=[0,2*np.pi], value=0, context='context1', kind='kind2') ps = phoebe.parameters.ParameterSet([param1, param2, param3]) print(ps.to_list()) """ Explanation: FloatArrayParameters also allow for built-in interpolation... but this requires them to be a member of a Bundle, so we'll discuss this in just a bit. ParametersSets ParameterSets are a collection of Parameters that can be filtered by their tags to return another ParameterSet. For illustration, let's create 3 random FloatParameters and combine them to make a ParameterSet. End of explanation """ print(ps) """ Explanation: If we print a ParameterSet, we'll see a listing of all the Parameters and their values. End of explanation """ print(ps.tags) """ Explanation: Similarly to Parameters, we can access the tags of a ParameterSet End of explanation """ print(ps.get('param1@kind1')) """ Explanation: Twigs The string notation used for the Parameters is called a 'twig' - its simply a combination of all the tags joined with the '@' symbol and gives a very convenient way to access any Parameter. The order of the tags doesn't matter, and you only need to provide enough tags to produce a unique match. Since there is only one parameter with kind='kind1', we do not need to provide the extraneous context='context1' in the twig to get a match. 
End of explanation """ print(ps.get('param1@kind1').description) """ Explanation: Note that this returned the ParameterObject itself, so you can now use any of the Parameter methods or attributes we saw earlier. For example: End of explanation """ ps.set_value('param1@kind1', 10) print(ps.get_value('param1@kind1')) """ Explanation: But we can also use set and get_value methods from the ParameterSet itself: End of explanation """ print(ps.meta.keys()) """ Explanation: Tags Each Parameter has a number of tags, and the ParameterSet has the same tags - where the value of any given tag is None if not shared by all Parameters in that ParameterSet. So let's just print the names of the tags again and then describe what each one means. End of explanation """ print(ps.context) """ Explanation: Most of these "metatags" act as labels - for example, you can give a component tag to each of the components for easier referencing. But a few of these tags are fixed and not editable: qualifier: literally the name of the parameter. kind: tells what kind a parameter is (ie whether a component is a star or an orbit). context: tells what context this parameter belongs to twig: a shortcut to the parameter in a single string. uniquetwig: the minimal twig needed to reach this parameter. uniqueid: an internal representation used to reach this parameter These contexts are (you'll notice that most are represented in the tags): setting history system component feature dataset constraint compute model fitting [not yet supported] feedback [not yet supported] plugin [not yet supported] One way to distinguish between context and kind is with the following question and answer: "What kind of [context] is this? It's a [kind] tagged [context]=[tag-with-same-name-as-context]." In different cases, this will then become: "What kind of component is this? It's a star tagged component=starA." (context='component', kind='star', component='starA') "What kind of feature is this? 
It's a spot tagged feature=spot01." (context='feature', kind='spot', feature='spot01') "What kind of dataset is this? It's a LC (light curve) tagged dataset=lc01." (context='dataset', kind='LC', dataset='lc01') "What kind of compute (options) are these? They're phoebe (compute options) tagged compute=preview." (context='compute', kind='phoebe', compute='preview') As we saw before, these tags can be accessed at the Parameter level as either a dictionary key or as an object attribute. For ParameterSets, the tags are only accessible through object attributes. End of explanation """ print(ps.contexts) """ Explanation: This returns None since not all objects in this ParameterSet share a single context. But you can see all the options for a given tag by providing the plural version of that tag name: End of explanation """ print(ps.filter(context='context1')) """ Explanation: Filtering Any of the tags can also be used to filter the ParameterSet: End of explanation """ print(ps.filter(context='context1', kind='kind1')) """ Explanation: Here we were returned a ParameterSet of all Parameters that matched the filter criteria. Since we're returned another ParameterSet, we can chain additional filter calls together. End of explanation """ print(ps.filter(context='context1', kind='kind1')) """ Explanation: Now we see that we have drilled down to a single Parameter. Note that a ParameterSet is still returned - filter will always return a ParameterSet. We could have accomplished the exact same thing with a single call to filter: End of explanation """ print(ps.filter(context='context1', kind='kind1').get()) print(ps.get(context='context1', kind='kind1')) """ Explanation: If you want to access the actual Parameter, you must use get instead of (or in addition to) filter. All of the following lines do the exact same thing: End of explanation """ print(ps['context1@kind1']) print(ps['context1']['kind1']) """ Explanation: Or we can use those twigs. 
Remember that twigs are just a combination of these tags separated by the @ symbol. You can use these for dictionary access in a ParameterSet - without needing to provide the name of the tag, and without having to worry about order. And whenever this returns a ParameterSet, these are also chainable, so the following two lines will do the same thing: End of explanation """ print(ps['context1']) print(ps['context1@kind1']) """ Explanation: You may notice that the final result was a Parameter, not a ParameterSet. Twig dictionary access tries to be smart - if exactly 1 Parameter is found, it will return that Parameter instead of a ParameterSet. Notice the difference between the two following lines: End of explanation """ print(ps['context1@kind1']['description']) """ Explanation: Of course, once you get the Parameter you can then use dictionary keys to access any attributes of that Parameter. End of explanation """ print(ps['description@context1@kind1']) """ Explanation: So we decided we might as well allow access to those attributes directly from the twig as well End of explanation """ b = phoebe.Bundle() print(b) """ Explanation: The Bundle The Bundle is nothing more than a glorified ParameterSet with some extra methods to compute models, add new components and datasets, etc. You can initialize an empty Bundle as follows: End of explanation """ print(b.filter(context='system')) """ Explanation: and filter just as you would for a ParameterSet End of explanation """ param1 = phoebe.parameters.ChoiceParameter(qualifier='what_is_this', choices=['matter', 'aether'], value='matter', context='context1') param2 = phoebe.parameters.FloatParameter(qualifier='mass', default_unit=u.kg, value=5, visible_if='what_is_this:matter', context='context1') b = phoebe.Bundle([param1, param2]) print(b) """ Explanation: Visible If As promised earlier, the 'visible_if' attribute of a Parameter controls whether its visible to a ParameterSet... 
but it only does anything if the Parameter belongs to a Bundle. Let's make a new ParameterSet in which the visibility of one parameter is dependent on the value of another. End of explanation """ b.set_value('what_is_this', 'aether') print(b) """ Explanation: It doesn't make much sense to need to define a mass if this thing isn't baryonic. So if we change the value of 'what_is_this' to 'aether' then the 'mass' Parameter will temporarily hide itself. End of explanation """ xparam = phoebe.parameters.FloatArrayParameter(qualifier='xs', default_unit=u.d, value=np.linspace(0,1,10), context='context1') yparam = phoebe.parameters.FloatArrayParameter(qualifier='ys', default_unit=u.m, value=np.linspace(0,1,10)**2, context='context1') b = phoebe.Bundle([xparam, yparam]) print(b.filter('ys').get().twig) print(b['ys'].get_value()) """ Explanation: FloatArrayParameters: interpolation As mentioned earlier, when a part of a Bundle, FloatArrayParameters can handle simple linear interpolation with respect to another FloatArrayParameter in the same Bundle. End of explanation """ print(b['ys'].interp_value(xs=0)) print(b['ys'].interp_value(xs=0.2)) """ Explanation: Now we can interpolate the 'ys' param for any given value of 'xs' End of explanation """
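Outside of PHOEBE, the simple linear interpolation that interp_value performs can be sketched with plain numpy. This is a standalone illustration of the behaviour described above, not PHOEBE's actual implementation; it assumes interp_value reduces to plain linear interpolation between neighbouring samples for in-range values.

```python
import numpy as np

# Mirror the two FloatArrayParameters defined above:
# 'xs' spans 0..1 in 10 steps and 'ys' holds xs**2 on the same grid.
xs = np.linspace(0, 1, 10)
ys = np.linspace(0, 1, 10) ** 2

def interp_value(x, xs, ys):
    # Plain linear interpolation between the neighbouring samples.
    return np.interp(x, xs, ys)

print(interp_value(0.0, xs, ys))  # exactly on a sample, so exactly 0.0
print(interp_value(0.2, xs, ys))  # between samples: a linear estimate of 0.2**2
```

Because the stored curve is quadratic while the interpolation is linear, the value returned at xs=0.2 lies slightly above the true 0.04.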
hadim/public_notebooks
Theory/Microfluidic_Flow_Rate/notebook.ipynb
mit
%matplotlib qt

import numpy as np
import matplotlib.pyplot as plt

def calculcate_section_circle(diameter):
    return np.pi * ((diameter / 2) ** 2)

def calculcate_section_rectangle(height, width):
    return height * width

def calculate_characteristic_length_circle(diameter):
    return diameter

def calculate_characteristic_length_rectangle(height, width):
    return (2 * height * width) / (height + width)

def calculate_kinematic_viscosity(viscosity, density):
    return viscosity / density

def calculate_flow_rate_circle(diameter, viscosity, length, delta_pressure):
    return (np.pi * (diameter / 2)**4 * np.abs(delta_pressure)) / (8 * viscosity * length)

def calculate_flow_rate_rectangle(height, width, viscosity, length, delta_pressure):
    return (1 - 0.630 * (height / width)) * (((height ** 3) * width * delta_pressure) / (12 * viscosity * length))

def calculate_pressure_circle(diameter, viscosity, length, flow_rate):
    return (8 * viscosity * length * flow_rate) / (np.pi * (diameter / 2)**4)

def calculate_pressure_rectangle(height, width, viscosity, length, flow_rate):
    return (12 * viscosity * length * flow_rate) / ((1 - 0.630 * (height / width)) * ((height ** 3) * width))

def calculate_flow_velocity(flow_rate, section):
    return (flow_rate / section)

def calculate_reynolds_number(kinematic_viscosity, characteristic_length, velocity):
    return (velocity * characteristic_length) / kinematic_viscosity
"""
Explanation: Explore microfluidic flow rate and pressure according to a defined geometry
Date : 20/11/2016
Note : Feel free to report any errors/mistakes.
The purpose of this notebook is not to go deep into the fluid mechanics of microfluidic devices, but rather to develop an intuition for the different physical constants and their orders of magnitude at this scale.
<img src="drawing.png" alt="" width="600px"/>
For easier calculations, we assume very simple geometries : rectangle or circle.
Theory
Flow rate
The flow rate depends on the pressure difference, the properties of the fluid and the geometry used. We can apply Ohm's law to a flow of particles other than electrons.
$ \Delta P = RQ $
Where :
$\Delta P$ is the pressure difference (analogous to the voltage $U$).
$R$ is the internal resistance of the system (analogous to the ohmic resistance).
$Q$ is the flow rate, analogous to the current $I$.
For each type of channel used in the microfluidic system, the resistance needs to be calculated.
Cylindrical channel
For a Poiseuille flow, we define the resistance ($R$) of a cylindrical channel with :
$ R = \cfrac{8 \eta L}{\pi r⁴} $
Where :
$R$ is the resistance of the channel (Pa.s/m³)
$r$ is the internal radius of the cylinder (m)
$\eta$ is the dynamic fluid viscosity (Pa.s or N.s/m²)
$L$ is the length of the channel (m)
Then we can define the flow rate $Q$ according to a given pressure $P$.
$ Q = \cfrac{\Delta P}{R} = \cfrac{\pi r⁴}{8 \eta L} |\Delta P| $
Where :
$\Delta P$ is the difference of pressure (Pa)
$Q$ is the flow rate (m³/s)
Rectangular channel
For a Poiseuille flow, we define the resistance ($R$) of a rectangular channel with :
$ R \approx \cfrac{12 \eta L}{\Bigl[1 - 0.630 \cfrac{h}{w}\Bigr] h³w} $
Where :
$R$ is the resistance of the channel (Pa.s/m³)
$h$ is the height of the rectangular channel (m)
$w$ is the width of the rectangular channel (m)
$\eta$ is the dynamic fluid viscosity (Pa.s or N.s/m²)
$L$ is the length of the channel (m)
Then we can define the flow rate $Q$ according to a given pressure $P$.
$ Q = \cfrac{\Delta P}{R} \approx \cfrac{\Bigl[1 - 0.630 \cfrac{h}{w}\Bigr] h³w}{12 \eta L} \Delta P $ Where : $\Delta P$ is the difference of pressure (Pa) $Q$ is the flow rate (m³/s) Flow velocity The velocity $V$ is calculated from the flow rate ($Q$) and the section of the channel ($S$): $V = \cfrac{Q}{S}$ Where : $V$ is the fluid velocity (m/s) $Q$ is the flow rate (m³/s) $S$ is the surface section (m²) Reynolds number We define the Reynolds number : $Re = \cfrac{inertial\; forces}{viscous\; forces} = \cfrac{ρVL}{μ}$ Where : $ρ$ is the density of the fluid (kg/m³) $V$ is the velocity of the object relative to the fluid (m/s) $L$ is a characteristic length (m) $μ$ is the dynamic viscosity of the fluid (Pa.s or N.s/m²) We can define the kinematic viscosity of the fluid (m²/s) : $ν = \cfrac{μ}{ρ}$. So : $Re = \cfrac{ρVL}{μ} = \cfrac{VL}{ν}$ Calculations Below some functions to help make the calculation of the different equations above. End of explanation """ from ipywidgets import interactive """ Explanation: Define some initial conditions (feel free to test others conditions) : Interactive widgets End of explanation """ # Initial conditions ## Geometry height = 100e-6 # m (not used for circle) width = 100e-6 # m (not used for circle) diameter = 100e-6 # m (not used for rectangle) shape = "rectangle" # Choose in ["rectangle", "circle"] length = 20e-3 # m ## Water at 20°C viscosity = 1.307e-3 # Pa.s or N.s/m² density = 1e3 # kg/m³ ## Flow rate wanted flow_rate = 20 # ul/min ## Convert flow rate to IU flow_rate_iu = (flow_rate / 60) * 1e-9 assert height <= width """ Explanation: Calculate pressure needed for a given flow rate End of explanation """ if shape == "circle": section = calculcate_section_circle(diameter) characteristic_length = calculate_characteristic_length_circle(diameter) pressure = calculate_pressure_circle(diameter, viscosity, length, flow_rate_iu) elif shape == "rectangle": section = calculcate_section_rectangle(height, width) 
characteristic_length = calculate_characteristic_length_rectangle(height, width) pressure = calculate_pressure_rectangle(height, width, viscosity, length, flow_rate_iu) kinematic_viscosity = calculate_kinematic_viscosity(viscosity, density) flow_velocity = calculate_flow_velocity(flow_rate_iu, section) reynolds_number = calculate_reynolds_number(kinematic_viscosity, characteristic_length, flow_velocity) print("Shape used is '{}'.".format(shape)) print("Flow rate wanted is {} ul/min.".format(flow_rate)) print() print("Section is {0:.3f} mm².".format(section*1e6)) print("Characteristic length is {0:.0f} μm.".format(characteristic_length*1e6)) print() print("Kinematic viscosity is {} m²/s.".format(kinematic_viscosity)) print() print("Pressure {0:.3f} mbar.".format(pressure / 100)) print("Flow velocity is {0:.3f} m/s.".format(flow_velocity)) print() print("Reynolds number is {0:.6f}.".format(reynolds_number)) if reynolds_number < 2300: print("Flow regime is laminar.") elif reynolds_number >= 2300 and reynolds_number < 4000: print("Flow regime is transitional.") elif reynolds_number >= 4000: print("Flow regime is turbulent.") """ Explanation: Do the math ! 
End of explanation """ n = 1000 ## Geometry height = np.linspace(50e-6, 0.001, n) # m width = np.linspace(50e-6, 0.001, n) # m diameter = np.linspace(50e-6, 0.001, n) # m length = 20e-3 # m ## Water at 20°C viscosity = 1.307e-3 # Pa.s or N.s/m² density = 1e3 # kg/m³ ## Flow rate wanted flow_rate = 20 # ul/min ## Convert flow rate to IU flow_rate_iu = (flow_rate / 60) * 1e-9 pressure_circle = calculate_pressure_circle(diameter, viscosity, length, flow_rate_iu) pressure_rectangle = calculate_pressure_rectangle(height, width, viscosity, length, flow_rate_iu) fig, ax = plt.subplots() ax.plot(height * 1e6, pressure_rectangle / 100, label="Square Channel", lw=2, alpha=0.8) ax.plot(diameter * 1e6, pressure_circle / 100, label="Cylindrical Channel", lw=2, alpha=0.8) ax.set_xlabel("Shape Size (um)") ax.set_ylabel("Pressure Needed (mbar)") #ax.set_xscale("log") ax.set_yscale("log") ax.legend() ax.set_title("Pressure needed versus the shape size\nfor a flow rate of {} ul/min".format(flow_rate)) """ Explanation: Important : Calculation is only valid for laminar flow ! 
Plot : Shape size against pressure for a given flow rate End of explanation """ n = 1000 ## Geometry height = 100e-6 # m width = 100e-6 # m length = 20e-3 # m ## Water at 20°C viscosity = 1.307e-3 # Pa.s or N.s/m² density = 1e3 # kg/m³ ## Flow rate wanted flow_rate = np.linspace(1, 500, n) # ul/min ## Convert flow rate to IU flow_rate_iu = (flow_rate / 60) * 1e-9 pressure_rectangle = calculate_pressure_rectangle(height, width, viscosity, length, flow_rate_iu) fig, ax = plt.subplots() ax.plot(flow_rate, pressure_rectangle / 100, label="Square Channel", lw=2, alpha=1.0, color="gray") ax.set_xlabel("Flow rate wanted (ul/min)") ax.set_ylabel("Pressure Needed (mbar)") #ax.set_xscale("log") ax.set_yscale("log") ax.set_title("Pressure needed versus the flow rate wanted\nfor a rectangular channel of wxhxL = {}x{}x{} m ".format(height, width, length)) """ Explanation: Plot : Flow rate against pressure for a given shape size End of explanation """
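As a standalone sanity check of the equations above, the example conditions used earlier (water at 20°C, a 100 µm x 100 µm x 20 mm rectangular channel, a 20 µl/min flow rate) can be recomputed in a few lines. This sketch re-implements the notebook's own formulas; the numeric results (roughly 28 mbar, Reynolds number near 2.5, comfortably laminar) are simply what those formulas yield for these inputs.

```python
# Example conditions from the cells above (SI units).
height = 100e-6       # m
width = 100e-6        # m
length = 20e-3        # m
viscosity = 1.307e-3  # Pa.s (water at 20 degrees C)
density = 1e3         # kg/m^3

flow_rate_iu = (20 / 60) * 1e-9  # 20 ul/min converted to m^3/s

# Rectangular-channel resistance, then Delta P = R * Q.
resistance = (12 * viscosity * length) / ((1 - 0.630 * height / width) * height**3 * width)
pressure_mbar = resistance * flow_rate_iu / 100  # Pa -> mbar

# Reynolds number to confirm the laminar-flow assumption.
velocity = flow_rate_iu / (height * width)
characteristic_length = (2 * height * width) / (height + width)
reynolds = velocity * characteristic_length * density / viscosity

print("Pressure needed: %.1f mbar" % pressure_mbar)
print("Reynolds number: %.2f" % reynolds)
```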
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/mpi-esm-1-2-lr/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: MPI-M Source ID: MPI-ESM-1-2-LR Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:17 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. 
Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4.
Land Atmosphere Flux Exchanges
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8.
Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. 
Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. 
Albedo
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --&gt; Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --&gt; Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. 
Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage
TODO
13.1. Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --&gt; Heat Treatment
TODO
14.1. Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3.
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. 
Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. 
Processes
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --&gt; Snow Albedo
TODO
16.1. Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. 
Vegetation Types
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. 
Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. 
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. 
Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20.
Carbon Cycle --&gt; Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis
TODO
21.1. Method
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration
TODO
22.1.
Maintainance Respiration
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation
TODO
23.1. Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. 
Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1.
Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. 
Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
turbomanage/training-data-analyst
courses/machine_learning/deepdive/04_features/a_features.ipynb
apache-2.0
import math import shutil import numpy as np import pandas as pd import tensorflow as tf print(tf.__version__) tf.logging.set_verbosity(tf.logging.INFO) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format """ Explanation: Trying out features Learning Objectives: * Improve the accuracy of a model by adding new features with the appropriate representation The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Set Up In this first cell, we'll load the necessary libraries. End of explanation """ df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",") """ Explanation: Next, we'll load our data set. End of explanation """ df.head() df.describe() """ Explanation: Examine and split the data It's a good idea to get to know your data a little bit before you work with it. We'll print out a quick summary of a few useful statistics on each column. This will include things like mean, standard deviation, max, min, and various quantiles. End of explanation """ np.random.seed(seed=1) #makes result reproducible msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] """ Explanation: Now, split the data into two parts -- training and evaluation. 
End of explanation """ def add_more_features(df): df['avg_rooms_per_house'] = df['total_rooms'] / df['households'] #expect positive correlation df['avg_persons_per_room'] = df['population'] / df['total_rooms'] #expect negative correlation return df # Create pandas input function def make_input_fn(df, num_epochs): return tf.estimator.inputs.pandas_input_fn( x = add_more_features(df), y = df['median_house_value'] / 100000, # will talk about why later in the course batch_size = 128, num_epochs = num_epochs, shuffle = True, queue_capacity = 1000, num_threads = 1 ) # Define your feature columns def create_feature_cols(): return [ tf.feature_column.numeric_column('housing_median_age'), tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), boundaries = np.arange(32.0, 42, 1).tolist()), tf.feature_column.numeric_column('avg_rooms_per_house'), tf.feature_column.numeric_column('avg_persons_per_room'), tf.feature_column.numeric_column('median_income') ] # Create estimator train and evaluate function def train_and_evaluate(output_dir, num_train_steps): estimator = tf.estimator.LinearRegressor(model_dir = output_dir, feature_columns = create_feature_cols()) train_spec = tf.estimator.TrainSpec(input_fn = make_input_fn(traindf, None), max_steps = num_train_steps) eval_spec = tf.estimator.EvalSpec(input_fn = make_input_fn(evaldf, 1), steps = None, start_delay_secs = 1, # start evaluating after N seconds, throttle_secs = 5) # evaluate every N seconds tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) # Launch tensorboard OUTDIR = './trained_model' # Run the model shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file train_and_evaluate(OUTDIR, 2000) """ Explanation: Training and Evaluation In this exercise, we'll be trying to predict median_house_value It will be our label (sometimes also called a target). 
We'll modify the feature_cols and input function to represent the features you want to use. We divide total_rooms by households to get avg_rooms_per_house which we expect to positively correlate with median_house_value. We also divide population by total_rooms to get avg_persons_per_room which we expect to negatively correlate with median_house_value. End of explanation """
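A quick way to sanity-check the derived ratio features above is to apply the same transformation to a tiny hand-made DataFrame. This is only a sketch — the numbers below are made up, but the column names follow the census dataset used in this notebook:

```python
import pandas as pd

def add_more_features(df):
    # Same derived ratios as in the notebook above.
    df['avg_rooms_per_house'] = df['total_rooms'] / df['households']
    df['avg_persons_per_room'] = df['population'] / df['total_rooms']
    return df

toy = pd.DataFrame({
    'total_rooms': [1000.0, 2000.0],
    'households':  [250.0, 400.0],
    'population':  [600.0, 900.0],
})
toy = add_more_features(toy)
print(toy[['avg_rooms_per_house', 'avg_persons_per_room']])
# avg_rooms_per_house: [4.0, 5.0]; avg_persons_per_room: [0.6, 0.45]
```

Spot-checking the derived columns like this before training helps catch swapped numerators or division-by-zero issues early.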
gon1213/SDC
traffic_sign/tensorflow/CarND-LeNet-Lab/LeNet-Lab.ipynb
gpl-3.0
from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", reshape=False) X_train, y_train = mnist.train.images, mnist.train.labels X_validation, y_validation = mnist.validation.images, mnist.validation.labels X_test, y_test = mnist.test.images, mnist.test.labels assert(len(X_train) == len(y_train)) assert(len(X_validation) == len(y_validation)) assert(len(X_test) == len(y_test)) print() print("Image Shape: {}".format(X_train[0].shape)) print() print("Training Set: {} samples".format(len(X_train))) print("Validation Set: {} samples".format(len(X_validation))) print("Test Set: {} samples".format(len(X_test))) """ Explanation: LeNet Lab Source: Yan LeCun Load Data Load the MNIST data, which comes pre-loaded with TensorFlow. You do not need to modify this section. End of explanation """ import numpy as np # Pad images with 0s X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant') X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant') print("Updated Image Shape: {}".format(X_train[0].shape)) """ Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images. However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels. In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32). You do not need to modify this section. End of explanation """ import random import numpy as np import matplotlib.pyplot as plt %matplotlib inline index = random.randint(0, len(X_train)) image = X_train[index].squeeze() plt.figure(figsize=(1,1)) plt.imshow(image, cmap="gray") print(y_train[index]) """ Explanation: Visualize Data View a sample from the dataset. You do not need to modify this section. 
End of explanation """ from sklearn.utils import shuffle X_train, y_train = shuffle(X_train, y_train) """ Explanation: Preprocess Data Shuffle the training data. You do not need to modify this section. End of explanation """ import tensorflow as tf EPOCHS = 10 BATCH_SIZE = 128 """ Explanation: Setup TensorFlow The EPOCH and BATCH_SIZE values affect the training speed and model accuracy. You do not need to modify this section. End of explanation """ from tensorflow.contrib.layers import flatten def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0 sigma = 0.1 # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6. conv1_w = tf.Variable(tf.truncated_normal(shape=(5,5,1,6), mean = mu, stddev = sigma)) conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_w, strides=[1,1,1,1], padding= "VALID")+conv1_b # TODO: Activation. conv1 = tf.nn.relu(conv1) # TODO: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID" ) # TODO: Layer 2: Convolutional. Output = 10x10x16. conv2_w = tf.Variable(tf.truncated_normal(shape=(5,5,6,16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_w, strides=[1,1,1,1],padding= "VALID")+conv2_b # TODO: Activation. conv2 = tf.nn.relu(conv2) # TODO: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1,2,2,1], strides=[1,2,2,1], padding = "VALID") # TODO: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # TODO: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_w = tf.Variable(tf.truncated_normal(shape=(400,120), mean = mu, stddev= sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0,fc1_w)+fc1_b # TODO: Activation. fc1 = tf.nn.relu(fc1) # TODO: Layer 4: Fully Connected. Input = 120. Output = 84. 
fc2_w = tf.Variable(tf.truncated_normal(shape=(120,84), mean = mu, stddev= sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1,fc2_w)+fc2_b # TODO: Activation. fc2 = tf.nn.relu(fc2) # TODO: Layer 5: Fully Connected. Input = 84. Output = 10. fc3_w = tf.Variable(tf.truncated_normal(shape=(84,10), mean = mu, stddev= sigma)) fc3_b = tf.Variable(tf.zeros(10)) logits = tf.matmul(fc2,fc3_w)+fc3_b return logits """ Explanation: TODO: Implement LeNet-5 Implement the LeNet-5 neural network architecture. This is the only cell you need to edit. Input The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case. Architecture Layer 1: Convolutional. The output shape should be 28x28x6. Activation. Your choice of activation function. Pooling. The output shape should be 14x14x6. Layer 2: Convolutional. The output shape should be 10x10x16. Activation. Your choice of activation function. Pooling. The output shape should be 5x5x16. Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you. Layer 3: Fully Connected. This should have 120 outputs. Activation. Your choice of activation function. Layer 4: Fully Connected. This should have 84 outputs. Activation. Your choice of activation function. Layer 5: Fully Connected (Logits). This should have 10 outputs. Output Return the result of the 2nd fully connected layer. End of explanation """ x = tf.placeholder(tf.float32, (None, 32, 32, 1)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 10) """ Explanation: Features and Labels Train LeNet to classify MNIST data. x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels. You do not need to modify this section.
End of explanation """ rate = 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) """ Explanation: Training Pipeline Create a training pipeline that uses the model to classify MNIST data. You do not need to modify this section. End of explanation """ correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples """ Explanation: Model Evaluation Evaluate how well the loss and accuracy of the model for a given dataset. You do not need to modify this section. End of explanation """ with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(X_validation, y_validation) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './lenet') print("Model saved") """ Explanation: Train the Model Run the training data through the training pipeline to train the model. Before each epoch, shuffle the training set. 
After each epoch, measure the loss and accuracy of the validation set. Save the model after training. You do not need to modify this section. End of explanation """ with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) """ Explanation: Evaluate the Model Once you are completely satisfied with your model, evaluate the performance of the model on the test set. Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data. You do not need to modify this section. End of explanation """
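The layer shapes quoted in the LeNet architecture (32 → 28 → 14 → 10 → 5) all follow from the "VALID" padding rule, out = (in − filter) / stride + 1. A quick arithmetic check:

```python
def valid_out(size, field, stride):
    # Spatial output size of a conv/pool layer with 'VALID' padding.
    return (size - field) // stride + 1

s = 32
s = valid_out(s, 5, 1)   # conv1 5x5, stride 1: 32 -> 28
s = valid_out(s, 2, 2)   # pool1 2x2, stride 2: 28 -> 14
s = valid_out(s, 5, 1)   # conv2 5x5, stride 1: 14 -> 10
s = valid_out(s, 2, 2)   # pool2 2x2, stride 2: 10 -> 5
print(s, s * s * 16)     # -> 5 400, matching the flattened input size of fc1
```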
antoniomezzacapo/qiskit-tutorial
qiskit/basics/the_ibmq_provider.ipynb
apache-2.0
from qiskit import IBMQ IBMQ.backends() """ Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left"> The IBM Q provider In Qiskit we have an interface for backends and jobs that will be useful for running circuits and extending to third-party backends. In this tutorial, we will review the core components of Qiskit’s base backend framework, using the IBM Q provider as an example. The interface has three parts: the provider, the backend, and the job: provider: accesses backends and provides backend objects backend: runs the quantum circuit job: keeps track of the submitted job The Provider The IBMQ Provider is an entity that provides access to a group of different backends (for example, backends available through IBM Q Experience or IBM Q Network). The IBMQ provider inherits from BaseProvider and implements the methods: backends(): returns all backend objects known to the provider. get_backend(name): returns the named backend. The IBM Q provider has some extra functions for handling administrative tasks. The credentials can be saved to disk or used in a session and never saved. enable_account(token, url): enable an account in the current session disable_accounts(**kwargs): disable one or more accounts from current session save_account(token, url): save an account to disk delete_accounts(**kwargs): delete the account or accounts from disk load_accounts(**kwargs): load previously-saved account or accounts into session active_accounts(): list all accounts active in this session stored_accounts(): list all accounts saved to disk End of explanation """ IBMQ.delete_accounts() """ Explanation: Here we see that there are no backends. This is because no accounts have been loaded. Let's start fresh and delete any accounts on disk. 
If no accounts are on disk this will error End of explanation """ IBMQ.stored_accounts() """ Explanation: verify that there are no accounts stored now End of explanation """ import Qconfig_IBMQ_network import Qconfig_IBMQ_experience """ Explanation: To demonstrate that we can load multiple accounts using the IBMQ provider, here we use two files Qconfig_IBMQ_experience.py and Qconfig_IBMQ_network.py, which are just containers of the APItoken and URL. APItoken = 'MY_API_TOKEN' URL = 'THE_URL' For the IBM Q experience the URL is not needed and is loaded by default in enable_account and save_account. For the IBM Q Network the url is found on your q-console account page. We don't recommend saving and using files like this. We recommend just inputting the APItoken and URL directly into the methods enable_account and save_account. End of explanation """ IBMQ.enable_account(Qconfig_IBMQ_experience.APItoken) """ Explanation: To enable an account (useful for one-off use, or if you don't want to save to disk) End of explanation """ # uncomment to print to screen (it will show your token and url) # IBMQ.active_accounts() """ Explanation: To see that accounts which are enabled for use End of explanation """ IBMQ.backends() """ Explanation: and backends which are available End of explanation """ IBMQ.disable_accounts(token=Qconfig_IBMQ_experience.APItoken) """ Explanation: Disable that account (so we go back to no accounts active) End of explanation """ IBMQ.backends() """ Explanation: Now no backends are available End of explanation """ IBMQ.save_account(Qconfig_IBMQ_experience.APItoken) IBMQ.save_account(Qconfig_IBMQ_network.APItoken, Qconfig_IBMQ_network.url) """ Explanation: Save two accounts: a public (IBM Q experience) and a premium (IBM Q network) End of explanation """ # uncomment to print to screen (it will show your token and url) # IBMQ.stored_accounts() """ Explanation: Now they should show up as present on disk End of explanation """ IBMQ.active_accounts() """ 
Explanation: but no account active in current session yet End of explanation """ IBMQ.backends() """ Explanation: so IBMQ can't see any backends yet End of explanation """ IBMQ.load_accounts() """ Explanation: now load up every account stored to disk End of explanation """ IBMQ.backends() """ Explanation: backends from two different accounts available for use End of explanation """ IBMQ.backends(hub='ibm-q-internal') """ Explanation: now if you want to work with backends of a single account, you can do so via account filtering End of explanation """ IBMQ.disable_accounts(hub='ibm-q-internal') """ Explanation: but you can also just disable account in the current session End of explanation """ # uncomment to print to screen (it will show your token and url) # IBMQ.active_accounts() """ Explanation: so now only one account is active End of explanation """ IBMQ.backends() """ Explanation: and only that account's backends are available End of explanation """ IBMQ.disable_accounts() IBMQ.load_accounts(hub=None) IBMQ.backends() """ Explanation: or from the start use the filtering to just load up that account you're interested in End of explanation """ IBMQ.backends(operational=True, simulator=False) """ Explanation: Filtering the backends You may also optionally filter the set of returned backends, by passing arguments that query the backend's configuration or status or properties. The filters are passed by conditions and for more general filters you can make advanced functions using the lambda function. 
As a first example: only return currently operational devices
End of explanation
"""

IBMQ.backends(filters=lambda x: x.configuration()['n_qubits'] <= 5 and
              not x.configuration()['simulator'] and
              x.status()['operational']==True)

"""
Explanation: only return backends that are real devices, have 5 or fewer qubits, and are operational
End of explanation
"""

from qiskit.backends.ibmq import least_busy

small_devices = IBMQ.backends(filters=lambda x: x.configuration()['n_qubits'] <= 5 and
                              not x.configuration()['simulator'])
least_busy(small_devices)

"""
Explanation: Filter: show the least busy device (in terms of pending jobs in the queue)
End of explanation
"""

IBMQ.get_backend('ibmq_16_melbourne')

"""
Explanation: The above filters can be combined as desired.
If you just want to get an instance of a particular backend, you can use the get_backend() method.
End of explanation
"""

backend = least_busy(small_devices)

"""
Explanation: The backend
Backends represent either a simulator or a real quantum computer, and are responsible for running quantum circuits and returning results. They have a run method which takes in a qobj (a quantum object, the result of the compilation process) as input and returns a BaseJob object. This object allows asynchronous running of jobs for retrieving results from a backend when the job is completed.
At a minimum, backends use the following methods, inherited from BaseBackend:
provider - returns the provider of the backend
name() - gets the name of the backend.
status() - gets the status of the backend.
configuration() - gets the configuration of the backend.
properties() - gets the properties of the backend.
run() - runs a qobj on the backend.
For remote backends they must support the additional methods:
jobs() - returns a list of previous jobs executed by this user on this backend.
retrieve_job() - returns a job by a job_id.
In future updates, the following methods will be introduced:
defaults() - gives a data structure of typical default parameters.
schema() - gets a schema for the backend
There are some IBMQ-only functions:
hub - returns the IBMQ hub for this backend.
group - returns the IBMQ group for this backend.
project - returns the IBMQ project for this backend.
End of explanation
"""

backend.provider

"""
Explanation: Let's start with the backend.provider, which returns a provider object
End of explanation
"""

backend.name()

"""
Explanation: Next is the name(), which returns the name of the backend
End of explanation
"""

backend.status()

"""
Explanation: Next let's look at the status():
operational lets you know that the backend is taking jobs
pending_jobs lets you know how many jobs are in the queue
End of explanation
"""

backend.configuration()

"""
Explanation: The next is configuration()
End of explanation
"""

backend.properties()

"""
Explanation: Next is the properties() method
End of explanation
"""

backend.hub
backend.group
backend.project

"""
Explanation: The next is hub, group, and project. For the IBM Q experience these will return None
End of explanation
"""

for ran_job in backend.jobs(limit=5):
    print(str(ran_job.job_id()) + " " + str(ran_job.status()))

"""
Explanation: To see your last 5 jobs run on the backend use the jobs() method of that backend
End of explanation
"""

job = backend.retrieve_job(ran_job.job_id())

"""
Explanation: Then the job can be retrieved using the retrieve_job(job_id()) method
End of explanation
"""

job.status()

"""
Explanation: The Job object
Job instances can be thought of as the “ticket” for a submitted job. They find out the execution's state at a given point in time (for example, if the job is queued, running, or has failed) and also allow control over the job. They have the following methods:
status() - returns the status of the job.
backend() - returns the backend the job was run on.
job_id() - gets the job_id.
cancel() - cancels the job.
result() - gets the results from the circuit run.
IBMQ only functions
creation_date() - gives the date at which the job was created.
queue_position() - gives the position of the job in the queue.
error_message() - gives the error message of failed jobs.
Let's start with the status(). This returns the job status and a message
End of explanation
"""

backend_temp = job.backend()
backend_temp

"""
Explanation: To get a backend object from the job use the backend() method
End of explanation
"""

job.job_id()

"""
Explanation: To get the job_id use the job_id() method
End of explanation
"""

result = job.result()
counts = result.get_counts()
print(counts)

"""
Explanation: To get the result from the job use the result() method
End of explanation
"""

job.creation_date()

"""
Explanation: If you want to check the creation date use creation_date()
End of explanation
"""

from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import compile

qr = QuantumRegister(3)
cr = ClassicalRegister(3)
circuit = QuantumCircuit(qr, cr)
circuit.x(qr[0])
circuit.x(qr[1])
circuit.ccx(qr[0], qr[1], qr[2])
circuit.cx(qr[0], qr[1])
circuit.measure(qr, cr)

"""
Explanation: Let's work through a concrete example
End of explanation
"""

qobj = compile(circuit, backend=backend, shots=1024)
job = backend.run(qobj)

"""
Explanation: To compile this circuit for the backend use the compile function. It will make a qobj (quantum object) that can be run on the backend using the run(qobj) method.
End of explanation
"""

job.status()

"""
Explanation: The status of this job can be checked with the status() method
End of explanation
"""

import time
#time.sleep(10)

job.cancel()

"""
Explanation: If you made a mistake and need to cancel the job use the cancel() method.
End of explanation
"""

job.status()

"""
Explanation: The status() will show that the job was cancelled.
End of explanation """ job = backend.run(qobj) lapse = 0 interval = 60 while job.status().name != 'DONE': print('Status @ {} seconds'.format(interval * lapse)) print(job.status()) print(job.queue_position()) time.sleep(interval) lapse += 1 print(job.status()) result = job.result() counts = result.get_counts() print(counts) """ Explanation: To rerun the job and set up a loop to check the status and queue position you can use the queue_position() method. End of explanation """
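The status-polling loop above is a general pattern, not specific to Qiskit. A minimal sketch with a mock job object (`MockJob` and `wait_for_job` are hypothetical names for illustration, not part of the Qiskit API) shows the same wait-until-done loop, here with a timeout guard added:

```python
import time

class MockJob:
    """Hypothetical stand-in for a backend job: finishes after a few polls."""
    def __init__(self, polls_until_done=3):
        self._polls_left = polls_until_done

    def status(self):
        # Each poll brings the mock job one step closer to completion.
        self._polls_left -= 1
        return 'DONE' if self._polls_left <= 0 else 'QUEUED'

def wait_for_job(job, interval=0.01, timeout=1.0):
    """Poll job.status() until it reports DONE, or raise after `timeout` seconds."""
    waited = 0.0
    while job.status() != 'DONE':
        if waited >= timeout:
            raise TimeoutError('job did not finish in time')
        time.sleep(interval)
        waited += interval
    return 'DONE'

print(wait_for_job(MockJob()))  # DONE
```

With a real backend you would pass the object returned by `backend.run(qobj)` in place of the mock; the timeout keeps a stuck queue from blocking the notebook forever.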
tokuda109/tensorflow-docker-skeleton
notebooks/playground_ja/tensorflow/00_tensorflow_activation_functions.ipynb
mit
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

sess = tf.Session()

"""
Explanation: Activation functions in TensorFlow
An activation function (also called a transfer function) transforms the weighted sum of a unit's inputs into its output signal.
Step functions were used in the perceptron era and sigmoid functions in the backpropagation era, but these days the ReLU function is the most commonly used.
Here we give an overview of commonly used activation functions and show sample code for using each of them in TensorFlow.
Setup
First, import the packages used in the sample code.
End of explanation
"""

def step_function(x):
    return np.array(x > 0, dtype=np.int)

X = np.arange(-8., 8., 0.02)
Y = step_function(X)

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-0.5, 1.5)
plt.grid()
plt.show()

"""
Explanation: Activation functions
Step function
$$
f(x) = \begin{cases} 1 & x > \theta \\ 0 & x \le \theta \end{cases}
$$
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.sigmoid(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-0.5, 1.5)
plt.grid()
plt.show()

"""
Explanation: Logistic sigmoid function
The logistic sigmoid function, also called the logistic function, takes values in the range 0 to 1.
$$
f(x) = \frac{1}{1 + e^{-x}}
$$
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.relu(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-1., 8.)
plt.grid()
plt.show()

"""
Explanation: ReLU (Rectified Linear Unit) function
The sigmoid and hyperbolic tangent functions have the problem that the gradient vanishes as the absolute value of the input grows, which stalls learning.
$$
f(x) = \max(0, x)
$$
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.relu6(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-1., 8.)
plt.grid()
plt.show()

"""
Explanation: ReLU6 function
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.elu(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-2., 8.)
plt.grid()
plt.show()

"""
Explanation: ELU (Exponential Linear Units) function
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.softplus(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-1., 8.)
plt.grid()
plt.show()

"""
Explanation: Softplus function
$$
f(x) = \log{(1 + e^x)}
$$
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.softsign(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-1.5, 1.5)
plt.grid()
plt.show()

"""
Explanation: Softsign function
$$
f(x) = \frac{x}{1 + |x|}
$$
End of explanation
"""

X = np.arange(-8., 8., 0.1)
Y = sess.run(tf.nn.tanh(X))

plt.plot(X, Y)
plt.xlim(-8, 8)
plt.ylim(-1.5, 1.5)
plt.grid()
plt.show()

"""
Explanation: Hyperbolic tangent function
The hyperbolic tangent function takes values in the range -1 to 1. As the form
$$
\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}
$$
shows, it has properties similar to those of the logistic function.
End of explanation
"""
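The vanishing-gradient problem mentioned above for the sigmoid and tanh can be checked numerically. This is a plain-NumPy sketch, independent of the TensorFlow session used above: the sigmoid's derivative peaks at 0.25 near zero and all but disappears for large inputs, while the ReLU gradient stays at 1 for any positive input:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # d/dx max(0, x) = 1 for x > 0, 0 otherwise
    return (x > 0).astype(float)

print(sigmoid_grad(0.0))              # 0.25, the maximum of the sigmoid derivative
print(sigmoid_grad(8.0))              # ~3.4e-4: the gradient has almost vanished
print(relu_grad(np.array([8.0]))[0])  # 1.0: ReLU keeps a constant gradient
```

This is why deep networks trained with sigmoid or tanh activations can stall: the product of many near-zero derivatives shrinks the backpropagated signal, whereas ReLU passes it through unchanged on the positive side.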
mne-tools/mne-tools.github.io
0.14/_downloads/plot_stats_cluster_spatio_temporal_repeated_measures_anova.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#          Eric Larson <larson.eric.d@gmail.com>
#          Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)

import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt

import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
                 grade_to_tris)
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
                       f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample

print(__doc__)

"""
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions for arbitrary, self-defined contrasts beyond standard t-tests. In this case we will test if the differences in evoked responses between stimulation modality (visual VS auditory) depend on the stimulus location (left vs right) for a group of subjects (simulated here using one subject's data). For this purpose we will compute an interaction effect using a repeated measures ANOVA. The multiple comparisons problem is addressed with a cluster-level permutation test across space and time.
End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' subjects_dir = data_path + '/subjects' tmin = -0.2 tmax = 0.3 # Use a lower tmax to reduce multiple comparisons # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) """ Explanation: Set parameters End of explanation """ raw.info['bads'] += ['MEG 2443'] picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads') # we'll load all four conditions that make up the 'two ways' of our ANOVA event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4) reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=True) # Equalize trial counts to eliminate bias (which would otherwise be # introduced by the abs() performed below) epochs.equalize_event_counts(event_id) """ Explanation: Read epochs for all channels, removing a bad one End of explanation """ fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' snr = 3.0 lambda2 = 1.0 / snr ** 2 method = "dSPM" # use dSPM method (could also be MNE or sLORETA) inverse_operator = read_inverse_operator(fname_inv) # we'll only use one hemisphere to speed up this example # instead of a second vertex array we'll pass an empty array sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)] # Let's average and compute inverse, then resample to speed things up conditions = [] for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important evoked = epochs[cond].average() evoked.resample(50, npad='auto') condition = apply_inverse(evoked, inverse_operator, lambda2, method) # Let's only deal with t > 0, cropping to reduce multiple comparisons condition.crop(0, None) conditions.append(condition) tmin = conditions[0].tmin tstep = conditions[0].tstep 
""" Explanation: Transform to source space End of explanation """ n_vertices_sample, n_times = conditions[0].lh_data.shape n_subjects = 7 print('Simulating data for %d subjects.' % n_subjects) # Let's make sure our results replicate, so set the seed. np.random.seed(0) X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10 for ii, condition in enumerate(conditions): X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis] """ Explanation: Transform to common cortical space Normally you would read in estimates across several subjects and morph them to the same cortical space (e.g. fsaverage). For example purposes, we will simulate this by just having each "subject" have the same response (just noisy in source space) here. We'll only consider the left hemisphere in this tutorial. End of explanation """ fsave_vertices = [np.arange(10242), np.array([], int)] # right hemi is empty morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices, fsave_vertices, 20, subjects_dir) n_vertices_fsave = morph_mat.shape[0] # We have to change the shape for the dot() to work properly X = X.reshape(n_vertices_sample, n_times * n_subjects * 4) print('Morphing data.') X = morph_mat.dot(X) # morph_mat is a sparse matrix X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4) """ Explanation: It's a good idea to spatially smooth the data, and for visualization purposes, let's morph these to fsaverage, which is a grade 5 source space with vertices 0:10242 for each hemisphere. Usually you'd have to morph each subject's data separately (and you might want to use morph_data instead), but here since all estimates are on 'sample' we can use one morph matrix for all the heavy lifting. End of explanation """ X = np.transpose(X, [2, 1, 0, 3]) # X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)] """ Explanation: Now we need to prepare the group matrix for the ANOVA statistic. 
To make the clustering function work correctly with the ANOVA function X needs to be a list of multi-dimensional arrays (one per condition) of shape: samples (subjects) x time x space. First we permute dimensions, then split the array into a list of conditions and discard the empty dimension resulting from the split using numpy squeeze. End of explanation """ factor_levels = [2, 2] """ Explanation: Prepare function for arbitrary contrast As our ANOVA function is a multi-purpose tool we need to apply a few modifications to integrate it with the clustering function. This includes reshaping data, setting default arguments and processing the return values. For this reason we'll write a tiny dummy function. We will tell the ANOVA how to interpret the data matrix in terms of factors. This is done via the factor levels argument which is a list of the number factor levels for each factor. End of explanation """ effects = 'A:B' # Tell the ANOVA not to compute p-values which we don't need for clustering return_pvals = False # a few more convenient bindings n_times = X[0].shape[1] n_conditions = 4 """ Explanation: Finally we will pick the interaction effect by passing 'A:B'. (this notation is borrowed from the R formula language). Without this also the main effects will be returned. End of explanation """ def stat_fun(*args): return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels, effects=effects, return_pvals=return_pvals)[0] # get f-values only. """ Explanation: A stat_fun must deal with a variable number of input arguments. Inside the clustering function each condition will be passed as flattened array, necessitated by the clustering procedure. The ANOVA however expects an input array of dimensions: subjects X conditions X observations (optional). The following function catches the list input and swaps the first and the second dimension, and finally calls ANOVA. Note. 
For further details on this ANOVA function consider the corresponding time-frequency tutorial <tut_stats_cluster_sensor_rANOVA_tfr>.
End of explanation
"""

def stat_fun(*args):
    return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
                     effects=effects, return_pvals=return_pvals)[0]
    # get f-values only.

"""
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we just pass the spatial connectivity matrix (instead of spatio-temporal).
End of explanation """ print('Visualizing clusters.') # Now let's build a convenient representation of each cluster, where each # cluster becomes a "time point" in the SourceEstimate stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep, vertices=fsave_vertices, subject='fsaverage') # Let's actually plot the first "time point" in the SourceEstimate, which # shows all the clusters, weighted by duration subjects_dir = op.join(data_path, 'subjects') # The brighter the color, the stronger the interaction between # stimulus modality and stimulus location brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, colormap='mne', views='lateral', time_label='Duration significant (ms)') brain.save_image('cluster-lh.png') brain.show_view('medial') """ Explanation: Visualize the clusters End of explanation """ inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in enumerate(good_cluster_inds)][0] # first cluster times = np.arange(X[0].shape[1]) * tstep * 1e3 plt.figure() colors = ['y', 'b', 'g', 'purple'] event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis'] for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)): # extract time course at cluster vertices condition = condition[:, :, inds_v] # normally we would normalize values across subjects but # here we use data from the same subject so we're good to just # create average time series across subjects and vertices. 
mean_tc = condition.mean(axis=2).mean(axis=0) std_tc = condition.std(axis=2).std(axis=0) plt.plot(times, mean_tc.T, color=color, label=eve_id) plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray', alpha=0.5, label='') ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5 plt.xlabel('Time (ms)') plt.ylabel('Activation (F-values)') plt.xlim(times[[0, -1]]) plt.ylim(ymin, ymax) plt.fill_betweenx((ymin, ymax), times[inds_t[0]], times[inds_t[-1]], color='orange', alpha=0.3) plt.legend() plt.title('Interaction between stimulus-modality and location.') plt.show() """ Explanation: Finally, let's investigate interaction effect by reconstructing the time courses End of explanation """
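For the 2×2 within-subject design used above, the interaction F-statistic computed by f_mway_rm can also be obtained by hand from the per-subject interaction contrast. This toy NumPy sketch uses made-up data (not the source estimates above) to illustrate the idea: with one numerator degree of freedom, the interaction F equals the squared one-sample t of the contrast:

```python
import numpy as np

# Toy data: 7 subjects x 4 conditions ordered (l_aud, r_aud, l_vis, r_vis),
# i.e. factor A = modality (aud/vis) and factor B = location (left/right).
data = np.array([
    [1., 0., 0., 0.],
    [2., 0., 0., 0.],
    [3., 0., 0., 0.],
    [1., 0., 0., 0.],
    [2., 0., 0., 0.],
    [3., 0., 0., 0.],
    [2., 0., 0., 0.],
])

# Per-subject interaction contrast for a 2x2 design:
# (A1B1 - A1B2) - (A2B1 - A2B2)
d = (data[:, 0] - data[:, 1]) - (data[:, 2] - data[:, 3])

# With one numerator df, the interaction F is the squared one-sample t of the
# contrast against zero, with (n_subjects - 1) denominator df.
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
F = t ** 2
print(F)  # 42.0 (up to floating-point error) for this toy data
```

The real example does the same computation per vertex and per time point, which is why the group matrix had to be reshaped to subjects x time x space before clustering.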
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/2_hyperparameter_tuning.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst

PROJECT = "<YOUR PROJECT>"
BUCKET = "<YOUR BUCKET>"
REGION = "<YOUR REGION>"
TFVERSION = "2.3.0"  # TF version for AI Platform to use

import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION

%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION

"""
Explanation: Hyperparameter tuning
Learning Objectives
1. Learn how to use cloudml-hypertune to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the .yaml file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways of finding the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
1. Manual
Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point, then they observe the result and use that information to try a new set of hyperparameters to try to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
2. Grid Search
On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
3. Random Search
Alternatively define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
4. Bayesian Optimization
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works here.
Pros
- Picks values intelligently based on results from past trials
- Less expensive because it requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
AI Platform HyperTune
AI Platform HyperTune, powered by Google Vizier, uses Bayesian Optimization by default, but also supports Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
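The difference between Grid Search and Random Search described above can be sketched in a few lines over a toy objective (the objective function and value ranges here are invented for illustration; HyperTune does the same kind of search at scale):

```python
import itertools
import random

def objective(lr, batch_size):
    """Toy stand-in for validation RMSE: best near lr=0.01, batch_size=32."""
    return (lr - 0.01) ** 2 + ((batch_size - 32) / 100.0) ** 2

# Grid search: try every combination of a discrete set of values.
lrs = [0.0001, 0.001, 0.01, 0.1]
batch_sizes = [16, 32, 64]
grid_best = min(itertools.product(lrs, batch_sizes),
                key=lambda p: objective(*p))
print(grid_best)  # (0.01, 32)

# Random search: sample each hyperparameter from its range for each trial.
random.seed(0)
trials = [(random.uniform(0.0001, 0.1), random.choice(batch_sizes))
          for _ in range(20)]
rand_best = min(trials, key=lambda p: objective(*p))
print(rand_best)
```

Grid search evaluates all 12 combinations and is guaranteed to hit the best grid point; random search covers the continuous learning-rate range more finely with the same budget, which is why it often wins when only a few hyperparameters actually matter.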
To do this run the notebook 0_export_data_from_bq_to_gcs.ipynb, which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command: End of explanation """ !ls -la taxifare/trainer """ Explanation: Move code into python package In the previous lab, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the taxifare/trainer directory: - __init__.py - model.py - task.py End of explanation """ %%writefile ./taxifare/trainer/model.py import datetime import hypertune import logging import os import shutil import numpy as np import tensorflow as tf from tensorflow.keras import activations from tensorflow.keras import callbacks from tensorflow.keras import layers from tensorflow.keras import models from tensorflow import feature_column as fc logging.info(tf.version.VERSION) CSV_COLUMNS = [ 'fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key', ] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']] DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'] def features_and_labels(row_data): for unwanted_col in ['key']: row_data.pop(unwanted_col) label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size, num_repeat): dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS, num_epochs=num_repeat, ) return dataset.map(features_and_labels) def create_train_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=None) return dataset.prefetch(1) def create_eval_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=1) return dataset.prefetch(1) def parse_datetime(s): 
if type(s) is not str: s = s.numpy().decode('utf-8') return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z") def euclidean(params): lon1, lat1, lon2, lat2 = params londiff = lon2 - lon1 latdiff = lat2 - lat1 return tf.sqrt(londiff*londiff + latdiff*latdiff) def get_dayofweek(s): ts = parse_datetime(s) return DAYS[ts.weekday()] @tf.function def dayofweek(ts_in): return tf.map_fn( lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in ) def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets): # Pass-through columns transformed = inputs.copy() del transformed['pickup_datetime'] feature_columns = { colname: fc.numeric_column(colname) for colname in NUMERIC_COLS } # Scaling longitude from range [-70, -78] to [0, 1] for lon_col in ['pickup_longitude', 'dropoff_longitude']: transformed[lon_col] = layers.Lambda( lambda x: (x + 78)/8.0, name='scale_{}'.format(lon_col) )(inputs[lon_col]) # Scaling latitude from range [37, 45] to [0, 1] for lat_col in ['pickup_latitude', 'dropoff_latitude']: transformed[lat_col] = layers.Lambda( lambda x: (x - 37)/8.0, name='scale_{}'.format(lat_col) )(inputs[lat_col]) # Adding Euclidean dist (no need to be accurate: NN will calibrate it) transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([ inputs['pickup_longitude'], inputs['pickup_latitude'], inputs['dropoff_longitude'], inputs['dropoff_latitude'] ]) feature_columns['euclidean'] = fc.numeric_column('euclidean') # hour of day from timestamp of form '2010-02-08 09:17:00+00:00' transformed['hourofday'] = layers.Lambda( lambda x: tf.strings.to_number( tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32), name='hourofday' )(inputs['pickup_datetime']) feature_columns['hourofday'] = fc.indicator_column( fc.categorical_column_with_identity( 'hourofday', num_buckets=24)) latbuckets = np.linspace(0, 1, nbuckets).tolist() lonbuckets = np.linspace(0, 1, nbuckets).tolist() b_plat = fc.bucketized_column( feature_columns['pickup_latitude'], latbuckets) b_dlat 
= fc.bucketized_column( feature_columns['dropoff_latitude'], latbuckets) b_plon = fc.bucketized_column( feature_columns['pickup_longitude'], lonbuckets) b_dlon = fc.bucketized_column( feature_columns['dropoff_longitude'], lonbuckets) ploc = fc.crossed_column( [b_plat, b_plon], nbuckets * nbuckets) dloc = fc.crossed_column( [b_dlat, b_dlon], nbuckets * nbuckets) pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4) feature_columns['pickup_and_dropoff'] = fc.embedding_column( pd_pair, 100) return transformed, feature_columns def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_dnn_model(nbuckets, nnsize, lr): # input layer is all float except for pickup_datetime which is a string STRING_COLS = ['pickup_datetime'] NUMERIC_COLS = ( set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS) ) inputs = { colname: layers.Input(name=colname, shape=(), dtype='float32') for colname in NUMERIC_COLS } inputs.update({ colname: layers.Input(name=colname, shape=(), dtype='string') for colname in STRING_COLS }) # transforms transformed, feature_columns = transform( inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets) dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed) x = dnn_inputs for layer, nodes in enumerate(nnsize): x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x) output = layers.Dense(1, name='fare')(x) model = models.Model(inputs, output) lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr) model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse']) return model def train_and_evaluate(hparams): batch_size = hparams['batch_size'] eval_data_path = hparams['eval_data_path'] nnsize = hparams['nnsize'] nbuckets = hparams['nbuckets'] lr = hparams['lr'] num_evals = hparams['num_evals'] num_examples_to_train_on = hparams['num_examples_to_train_on'] output_dir = hparams['output_dir'] train_data_path = hparams['train_data_path'] if tf.io.gfile.exists(output_dir): 
tf.io.gfile.rmtree(output_dir) timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S') savedmodel_dir = os.path.join(output_dir, 'savedmodel') model_export_path = os.path.join(savedmodel_dir, timestamp) checkpoint_path = os.path.join(output_dir, 'checkpoints') tensorboard_path = os.path.join(output_dir, 'tensorboard') dnn_model = build_dnn_model(nbuckets, nnsize, lr) logging.info(dnn_model.summary()) trainds = create_train_dataset(train_data_path, batch_size) evalds = create_eval_dataset(eval_data_path, batch_size) steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals) checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True, verbose=1) tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1) history = dnn_model.fit( trainds, validation_data=evalds, epochs=num_evals, steps_per_epoch=max(1, steps_per_epoch), verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch callbacks=[checkpoint_cb, tensorboard_cb] ) # Exporting the model with default serving function. tf.saved_model.save(dnn_model, model_export_path) # TODO 1 hp_metric = history.history['val_rmse'][num_evals-1] # TODO 1 hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='rmse', metric_value=hp_metric, global_step=num_evals ) return history """ Explanation: To use hyperparameter tuning in your training job you must perform the following steps: Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object. Include the following code in your training application: Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial. Add your hyperparameter metric to the summary for your graph. To submit a hyperparameter tuning job, we must modify model.py and task.py to expose any variables we want to tune as command line arguments. 
Modify model.py End of explanation """ %%writefile taxifare/trainer/task.py import argparse import json import os from trainer import model if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( "--batch_size", help = "Batch size for training steps", type = int, default = 32 ) parser.add_argument( "--eval_data_path", help = "GCS location pattern of eval files", required = True ) parser.add_argument( "--nnsize", help = "Hidden layer sizes (provide space-separated sizes)", nargs = "+", type = int, default=[32, 8] ) parser.add_argument( "--nbuckets", help = "Number of buckets to divide lat and lon with", type = int, default = 10 ) parser.add_argument( "--lr", help = "learning rate for optimizer", type = float, default = 0.001 ) parser.add_argument( "--num_evals", help = "Number of times to evaluate model on eval data training.", type = int, default = 5 ) parser.add_argument( "--num_examples_to_train_on", help = "Number of examples to train on.", type = int, default = 100 ) parser.add_argument( "--output_dir", help = "GCS location to write checkpoints and export models", required = True ) parser.add_argument( "--train_data_path", help = "GCS location pattern of train files containing eval URLs", required = True ) parser.add_argument( "--job-dir", help = "this model ignores this field, but it is required by gcloud", default = "junk" ) args, _ = parser.parse_known_args() hparams = args.__dict__ hparams["output_dir"] = os.path.join( hparams["output_dir"], json.loads( os.environ.get("TF_CONFIG", "{}") ).get("task", {}).get("trial", "") ) print("output_dir", hparams["output_dir"]) model.train_and_evaluate(hparams) """ Explanation: Modify task.py End of explanation """ %%writefile hptuning_config.yaml trainingInput: scaleTier: BASIC hyperparameters: goal: MINIMIZE maxTrials: 10 maxParallelTrials: 2 hyperparameterMetricTag: rmse # TODO enableTrialEarlyStopping: True params: - parameterName: lr # TODO type: DOUBLE minValue: 0.0001 maxValue: 0.1 
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets # TODO
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size # TODO
type: DISCRETE
discreteValues:
- 15
- 30
- 50
"""
Explanation: Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named rmse:
End of explanation
"""

!pip install --user cloudml-hypertune

%%bash

EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model

rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare

python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8

ls taxifare-model/tensorboard

%%bash

PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
TFVERSION="2.1"

# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}

# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"

# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*

# TODO
gcloud \
ai-platform jobs submit training $JOBID \ --module-name=trainer.task \ --package-path=taxifare/trainer \ --staging-bucket=gs://${BUCKET} \ --config=hptuning_config.yaml \ --python-version=3.7 \ --runtime-version=${TFVERSION} \ --region=${REGION} \ -- \ --eval_data_path $EVAL_DATA_PATH \ --output_dir $OUTDIR \ --train_data_path $TRAIN_DATA_PATH \ --batch_size $BATCH_SIZE \ --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \ --num_evals $NUM_EVALS \ --nbuckets $NBUCKETS \ --lr $LR \ --nnsize $NNSIZE """ Explanation: Report your hyperparameter metric to AI Platform Training The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training. We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping. TensorFlow with a runtime version If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions. You may need to install cloudml-hypertune on your machine to run this code locally. End of explanation """
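The hypertune reporting step used in train_and_evaluate above can be exercised without the cloudml-hypertune package installed. In the sketch below, HyperTuneStub is a hypothetical stand-in for hypertune.HyperTune that just records the call; only the metric-selection logic mirrors the trainer code.

```python
# Runnable sketch of the metric-reporting step. HyperTuneStub is a
# hypothetical stand-in for hypertune.HyperTune so this runs anywhere.
class HyperTuneStub:
    def __init__(self):
        self.reported = []

    def report_hyperparameter_tuning_metric(self, hyperparameter_metric_tag,
                                            metric_value, global_step):
        # The real library serializes this for the AI Platform service;
        # here we just record the call for inspection.
        self.reported.append(
            (hyperparameter_metric_tag, metric_value, global_step))


def report_final_rmse(history, num_evals, hpt):
    # Same selection logic as train_and_evaluate: take the validation RMSE
    # from the last evaluation and report it under the tag 'rmse'.
    hp_metric = history["val_rmse"][num_evals - 1]
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag="rmse",
        metric_value=hp_metric,
        global_step=num_evals,
    )
    return hp_metric


history = {"val_rmse": [9.1, 7.4, 6.8]}  # stand-in for history.history
hpt = HyperTuneStub()
print(report_final_rmse(history, num_evals=3, hpt=hpt))  # 6.8
```

The hyperparameterMetricTag in the config must match the tag passed here, or the tuning service will not see the metric.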
tensorflow/docs-l10n
site/ko/tutorials/images/classification.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Authors. End of explanation """ !pip install tf-nightly """ Explanation: 이미지 분류 <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/images/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td> </table> 이 튜토리얼은 꽃 이미지를 분류하는 방법을 보여줍니다. keras.Sequential 모델을 사용하여 이미지 분류자를 만들고 preprocessing.image_dataset_from_directory를 사용하여 데이터를 로드합니다. 이를 통해 다음 개념을 실제로 경험해 볼 수 있습니다. 디스크에서 데이터세트를 효율적으로 로드합니다. 데이터 증강 및 드롭아웃을 포함하여 과대적합을 식별하고 이를 완화하는 기술을 적용합니다. 이 튜토리얼은 기본적인 머신러닝 워크플로를 따릅니다. 
데이터 검사 및 이해하기 입력 파이프라인 빌드하기 모델 빌드하기 모델 훈련하기 모델 테스트하기 모델을 개선하고 프로세스 반복하기 End of explanation """ import matplotlib.pyplot as plt import numpy as np import os import PIL import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential """ Explanation: TensorFlow 및 기타 라이브러리 가져오기 End of explanation """ import pathlib dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True) data_dir = pathlib.Path(data_dir) """ Explanation: 데이터세트 다운로드 및 탐색하기 이 튜토리얼에서는 약 3,700장의 꽃 사진 데이터세트를 사용합니다. 데이터세트에는 클래스당 하나씩 5개의 하위 디렉토리가 있습니다. flower_photo/ daisy/ dandelion/ roses/ sunflowers/ tulips/ End of explanation """ image_count = len(list(data_dir.glob('*/*.jpg'))) print(image_count) """ Explanation: 다운로드 후, 데이터세트 사본을 사용할 수 있습니다. 총 3,670개의 이미지가 있습니다. End of explanation """ roses = list(data_dir.glob('roses/*')) PIL.Image.open(str(roses[0])) PIL.Image.open(str(roses[1])) """ Explanation: 다음은 몇 가지 장미입니다. End of explanation """ tulips = list(data_dir.glob('tulips/*')) PIL.Image.open(str(tulips[0])) PIL.Image.open(str(tulips[1])) """ Explanation: 그리고 일부 튤립이 있습니다. End of explanation """ batch_size = 32 img_height = 180 img_width = 180 """ Explanation: keras.preprocessing을 사용하여 로드하기 유용한 image_dataset_from_directory 유틸리티를 사용하여 이러한 이미지를 디스크에서 로드하겠습니다. 몇 줄의 코드만으로 디스크의 이미지 디렉토리가 tf.data.Dataset로 가져와 집니다. 원하는 경우, 이미지 로드 튜토리얼을 방문하여 데이터 로드 코드를 처음부터 직접 작성할 수도 있습니다. 데이터세트 만들기 로더에 대한 몇 가지 매개변수를 정의합니다. 
End of explanation """ train_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=0.2, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size) val_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=0.2, subset="validation", seed=123, image_size=(img_height, img_width), batch_size=batch_size) """ Explanation: 모델을 개발할 때 검증 분할을 사용하는 것이 좋습니다. 훈련에 이미지의 80%를 사용하고 검증에 20%를 사용합니다. End of explanation """ class_names = train_ds.class_names print(class_names) """ Explanation: 이러한 데이터세트의 class_names 속성에서 클래스 이름을 찾을 수 있습니다. 이들 클래스 이름은 알파벳 순서의 디렉토리 이름에 해당합니다. End of explanation """ import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") """ Explanation: 데이터 시각화하기 훈련 데이터세트의 처음 9개 이미지는 다음과 같습니다. End of explanation """ for image_batch, labels_batch in train_ds: print(image_batch.shape) print(labels_batch.shape) break """ Explanation: 잠시 후에 이들 데이터세트를 model.fit에 전달하여 모델을 훈련하겠습니다. 원하는 경우, 데이터세트를 수동으로 반복하고 이미지 배치를 검색할 수도 있습니다. End of explanation """ AUTOTUNE = tf.data.experimental.AUTOTUNE train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) """ Explanation: image_batch는 (32, 180, 180, 3) 형상의 텐서이며, 180x180x3 형상의 32개 이미지 묶음으로 되어 있습니다(마지막 차원은 색상 채널 RGB를 나타냄). label_batch는 형상 (32,)의 텐서이며 32개 이미지에 해당하는 레이블입니다. image_batch 및 labels_batch 텐서에서 .numpy()를 호출하여 이를 numpy.ndarray로 변환할 수 있습니다. 성능을 높이도록 데이터세트 구성하기 버퍼링된 프리페치를 사용하여 I/O를 차단하지 않고 디스크에서 데이터를 생성할 수 있도록 하겠습니다. 데이터를 로드할 때 다음 두 가지 중요한 메서드를 사용해야 합니다. Dataset.cache()는 첫 epoch 동안 디스크에서 이미지를 로드한 후 이미지를 메모리에 유지합니다. 이렇게 하면 모델을 훈련하는 동안 데이터세트가 병목 상태가 되지 않습니다. 데이터세트가 너무 커서 메모리에 맞지 않는 경우, 이 메서드를 사용하여 성능이 높은 온디스크 캐시를 생성할 수도 있습니다. Dataset.prefetch()는 훈련 중에 데이터 전처리 및 모델 실행과 겹칩니다. 
관심 있는 독자는 데이터 성능 가이드에서 두 가지 방법과 디스크에 데이터를 캐싱하는 방법에 대해 자세히 알아볼 수 있습니다. End of explanation """ normalization_layer = layers.experimental.preprocessing.Rescaling(1./255) """ Explanation: 데이터 표준화하기 RGB 채널 값은 [0, 255] 범위에 있습니다. 신경망에는 이상적이지 않습니다. 일반적으로 입력 값을 작게 만들어야 합니다. 여기서는 Rescaling 레이어를 사용하여 값이 [0, 1]에 있도록 표준화합니다. End of explanation """ normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) image_batch, labels_batch = next(iter(normalized_ds)) first_image = image_batch[0] # Notice the pixels values are now in `[0,1]`. print(np.min(first_image), np.max(first_image)) """ Explanation: 참고: 이 섹션에서 소개하는 Keras Preprocesing 유틸리티와 레이어는 현재 실험적 단계이며 변경될 수 있습니다. 이 레이어를 사용하는 방법에는 두 가지가 있습니다. map을 호출하여 데이터세트에 레이어를 적용할 수 있습니다. End of explanation """ num_classes = 5 model = Sequential([ layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ]) """ Explanation: 또는 모델 정의 내에 레이어를 포함하여 배포를 단순화할 수 있습니다. 여기서는 두 번째 접근법을 사용할 것입니다. 참고: 이전에 image_dataset_from_directory의 image_size 인수를 사용하여 이미지 크기를 조정했습니다. 모델에 크기 조정 논리를 포함하려면 크기 조정 레이어를 사용할 수 있습니다. 모델 만들기 모델은 각각에 최대 풀 레이어가 있는 3개의 컨볼루션 블록으로 구성됩니다. 그 위에 relu 활성화 함수에 의해 활성화되는 128개의 단위가 있는 완전히 연결된 레이어가 있습니다. 이 모델은 높은 정확성을 고려해 조정되지 않았습니다. 이 튜토리얼의 목표는 표준적인 접근법을 보여주는 것입니다. End of explanation """ model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) """ Explanation: 모델 컴파일하기 이 튜토리얼에서는 optimizers.Adam 옵티마이저 및 losses.SparseCategoricalCrossentropy 손실 함수를 선택합니다. 각 훈련 epoch에 대한 훈련 및 검증 정확성을 보려면 metrics 인수를 전달합니다. End of explanation """ model.summary() """ Explanation: 모델 요약 모델의 summary 메서드를 사용하여 네트워크의 모든 레이어를 봅니다. 
End of explanation """ epochs=10 history = model.fit( train_ds, validation_data=val_ds, epochs=epochs ) """ Explanation: 모델 훈련하기 End of explanation """ acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() """ Explanation: 훈련 결과 시각화하기 훈련 및 검증 세트에 대한 손실과 정확성 플롯을 생성합니다. End of explanation """ data_augmentation = keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=(img_height, img_width, 3)), layers.experimental.preprocessing.RandomRotation(0.1), layers.experimental.preprocessing.RandomZoom(0.1), ] ) """ Explanation: 플롯에서 볼 수 있듯이 훈련 정확성과 검증 정확성은 큰 차이가 있으며, 모델은 검증 세트에서 약 60%의 정확성까지만 도달합니다. 무엇이 잘못되었는지 살펴보고 모델의 전반적인 성능을 향상해 보겠습니다. 과대적합 위의 플롯에서 훈련 정확성은 시간이 지남에 따라 선형적으로 증가하는 반면, 검증 정확성은 훈련 과정에서 약 60%를 벗어나지 못합니다. 또한 훈련 정확성과 검증 정확성 간의 정확성 차이가 상당한데, 이는 과대적합의 징후입니다. 훈련 예제가 적을 때 모델은 새로운 예제에서 모델의 성능에 부정적인 영향을 미치는 정도까지 훈련 예제의 노이즈나 원치 않는 세부까지 학습합니다. 이 현상을 과대적합이라고 합니다. 이는 모델이 새 데이터세트에서 일반화하는 데 어려움이 있음을 의미합니다. 훈련 과정에서 과대적합을 막는 여러 가지 방법들이 있습니다. 이 튜토리얼에서는 데이터 증강을 사용하고 모델에 드롭아웃을 추가합니다. 데이터 증강 과대적합은 일반적으로 훈련 예제가 적을 때 발생합니다. 데이터 증강은 증강한 다음 믿을 수 있는 이미지를 생성하는 임의 변환을 사용하는 방법으로 기존 예제에서 추가 훈련 데이터를 생성하는 접근법을 취합니다. 그러면 모델이 데이터의 더 많은 측면을 파악하게 되므로 일반화가 더 쉬워집니다. 여기서는 실험적인 Keras 전처리 레이어를 사용하여 데이터 증강을 구현합니다. 이들 레이어는 다른 레이어와 마찬가지로 모델 내에 포함될 수 있으며, GPU에서 실행됩니다. 
End of explanation """ plt.figure(figsize=(10, 10)) for images, _ in train_ds.take(1): for i in range(9): augmented_images = data_augmentation(images) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_images[0].numpy().astype("uint8")) plt.axis("off") """ Explanation: 동일한 이미지에 데이터 증강을 여러 번 적용하여 몇 가지 증강된 예제가 어떻게 보이는지 시각화하겠습니다. End of explanation """ model = Sequential([ data_augmentation, layers.experimental.preprocessing.Rescaling(1./255), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Dropout(0.2), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ]) """ Explanation: 잠시 후에 데이터 증강을 사용하여 모델을 훈련하겠습니다. 드롭아웃 과대적합을 줄이는 또 다른 기술은 정규화의 한 형태인 드롭아웃을 네트워크에 도입하는 것입니다. 드롭아웃을 레이어에 적용하면, 훈련 프로세스 중에 레이어에서 여러 출력 단위가 무작위로 드롭아웃됩니다(활성화를 0으로 설정). 드롭아웃은 0.1, 0.2, 0.4 등의 형식으로 소수를 입력 값으로 사용합니다. 이는 적용된 레이어에서 출력 단위의 10%, 20% 또는 40%를 임의로 제거하는 것을 의미합니다. layers.Dropout을 사용하여 새로운 신경망을 생성한 다음, 증강 이미지를 사용하여 훈련해 보겠습니다. 
End of explanation """ model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() epochs = 15 history = model.fit( train_ds, validation_data=val_ds, epochs=epochs ) """ Explanation: 모델 컴파일 및 훈련하기 End of explanation """ acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() """ Explanation: 훈련 결과 시각화하기 데이터 증강 및 드롭아웃을 적용한 후, 이전보다 과대적합이 줄어들고 훈련 및 검증 정확성이 더 가깝게 조정됩니다. End of explanation """ sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg" sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url) img = keras.preprocessing.image.load_img( sunflower_path, target_size=(img_height, img_width) ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create a batch predictions = model.predict(img_array) score = tf.nn.softmax(predictions[0]) print( "This image most likely belongs to {} with a {:.2f} percent confidence." .format(class_names[np.argmax(score)], 100 * np.max(score)) ) """ Explanation: 새로운 데이터로 예측하기 마지막으로, 모델을 사용하여 훈련 또는 검증 세트에 포함되지 않은 이미지를 분류해 보겠습니다. 참고: 데이터 증강 및 드롭아웃 레이어는 추론 시 비활성화됩니다. End of explanation """
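The closing note above says that data augmentation and dropout layers are inactive at inference time. A minimal pure-Python sketch of inverted dropout (an illustration of the semantics, not the Keras implementation) makes the two modes concrete:

```python
import random

def dropout(values, rate, training):
    # Inverted dropout: during training, each unit is zeroed with
    # probability `rate` and survivors are scaled by 1/(1-rate) so the
    # expected activation is unchanged; at inference it is the identity.
    if not training:
        return list(values)
    keep = 1.0 - rate
    return [0.0 if random.random() < rate else v / keep for v in values]

random.seed(0)
x = [1.0, 2.0, 3.0, 4.0]
print(dropout(x, rate=0.5, training=True))   # some entries zeroed, rest scaled
print(dropout(x, rate=0.5, training=False))  # unchanged at inference
```

This is why `model.predict` and the softmax scores above are deterministic even though the model contains Dropout and augmentation layers.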
sonium0/pymatgen
examples/Plotting and Analyzing a Phase Diagram using the Materials API.ipynb
mit
#This initializes the REST adaptor. You may need to put your own API key in as an arg. a = MPRester() #Entries are the basic unit for thermodynamic and other analyses in pymatgen. #This gets all entries belonging to the Ca-C-O system. entries = a.get_entries_in_chemsys(['Ca', 'C', 'O']) #With entries, you can do many sophisticated analyses, like creating phase diagrams. pd = PhaseDiagram(entries) """ Explanation: Generating the phase diagram To generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen. End of explanation """ from pymatgen.phasediagram.plotter import PDPlotter #Let's show all phases, including unstable ones plotter = PDPlotter(pd, show_unstable=True) plotter.show() """ Explanation: Plotting the phase diagram To plot a phase diagram, we send our phase diagram object into the PDPlotter class. End of explanation """ from pymatgen.phasediagram.pdanalyzer import PDAnalyzer a = PDAnalyzer(pd) import collections data = collections.defaultdict(list) for e in entries: decomp, ehull = a.get_decomp_and_e_above_hull(e) data["Materials ID"].append(e.entry_id) data["Composition"].append(e.composition.reduced_formula) data["Ehull"].append(ehull) data["Decomposition"].append(" + ".join(["%.2f %s" % (v, k.composition.formula) for k, v in decomp.items()])) from pandas import DataFrame df = DataFrame(data, columns=["Materials ID", "Composition", "Ehull", "Decomposition"]) print(df) """ Explanation: Calculating energy above hull and other phase equilibria properties To perform more sophisticated analyses, use the PDAnalyzer object. End of explanation """
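The e_above_hull value returned by get_decomp_and_e_above_hull has a simple geometric meaning: it is the distance of a phase's formation energy above the lower convex hull of formation energy versus composition. A toy sketch for a binary system illustrates the idea (made-up numbers; this is not pymatgen's algorithm):

```python
def hull_energy(x, hull_points):
    # hull_points: (composition, formation_energy) pairs on the lower hull;
    # linear interpolation between adjacent hull points gives the hull
    # energy at composition x.
    pts = sorted(hull_points)
    for (x1, e1), (x2, e2) in zip(pts, pts[1:]):
        if x1 <= x <= x2:
            t = (x - x1) / (x2 - x1)
            return e1 + t * (e2 - e1)
    raise ValueError("composition outside hull range")

def e_above_hull(x, energy, hull_points):
    # Energy of a phase relative to the hull; 0 means stable.
    return energy - hull_energy(x, hull_points)

# Toy binary A-B system: end members at 0 eV, one stable compound at x=0.5.
hull = [(0.0, 0.0), (0.5, -1.0), (1.0, 0.0)]
print(e_above_hull(0.5, -1.0, hull))   # 0.0 -> on the hull, stable
print(e_above_hull(0.25, -0.2, hull))  # positive -> metastable
```

Phases with e_above_hull of exactly zero are the stable entries plotted on the phase diagram; everything else decomposes into a combination of hull phases.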
banyh/ShareIPythonNotebook
NLP_With_Python/Ch5.ipynb
gpl-3.0
import nltk text = nltk.word_tokenize("And now for something completely different") nltk.pos_tag(text) """ Explanation: Ch5 Categorizing and Tagging Words 本章的目標是回答這些問題: 什麼是lexical categories? 它們如何應用在NLP中? 要儲存單字和分類的資料結構是什麼? 如何自動為每個單字分類? 本章會提到一些基本的NLP方法,例如sequence labeling、n-gram models、backoff、evaluation。 辨識單字的part-of-speech(詞性)並標記的過程稱為tagging,或稱part-of-speech tagging、POS tagging。在一般的NLP流程中,tagging是接在tokenization後面。part-of-speech又稱為word class或lexical category。而可供選擇的tag集合稱為tagset。 Using a Tagger End of explanation """ nltk.tag.str2tuple('fly/NN') # tagged_words() 是一個已經表示成tuple形態的資料 nltk.corpus.brown.tagged_words() # 用參數 tagset='universal' 可以換成簡單的tag nltk.corpus.brown.tagged_words(tagset='universal') # 利用 FreqDist 統計詞性的數量 tag_fd = nltk.FreqDist(tag for (word, tag) in nltk.corpus.brown.tagged_words(tagset='universal')) tag_fd.most_common() %matplotlib inline tag_fd.plot() tag_cd = nltk.ConditionalFreqDist(nltk.corpus.brown.tagged_words(tagset='universal')) # 查詢某單字的常用POS tag_cd['yield'] """ Explanation: 上面的範例中,CC是對等連接詞、RB是副詞、IN是介系詞、NN是名詞、JJ則是形容詞。如果想知道詳細的tag定義,可以用nltk.help.upenn_tagset('RB')來查詢。 Tagged Corpora 在NLTK的習慣上,tagged token會表示成由單字和tag結合的tuple,但是儲存在corpus中的資料,則是一個字串包含單字和tag,中間以'/'分隔,例如'fly/NN'。利用str2tuple可以從corpus中的字串轉成tuple。 End of explanation """ nltk.corpus.brown.tagged_sents(tagset='universal')[0] """ Explanation: corpus中也有tagged sentences: End of explanation """ pos = {} # 在python中定義dictionary最簡單的方法 pos['hello'] = 'world' pos['right'] = 'here' pos [w for w in pos] # 用for的時候會找出key pos.keys() pos.items() pos.values() pos = dict(hello = 'world', right = 'here') # 另一種定義方式 pos """ Explanation: Mapping Words to Properties 要儲存mapping資料,最自然的方法就是用dictionary,又稱為associative array或hash array。一般的陣列是用整數作為index,但dictionary中則是用任何hasable的資料作為index,例如字串、tuple等等。 mapping資料的應用有: 書的index: 將單字mapping到頁數 thesaurus: 將字義mapping到一組同義字 字典: 將單字mapping到單字的解釋 比較字集: 將單字mapping到多國語言的單字 End of explanation """ f = nltk.defaultdict(int) f['color'] = 4 f f['dream'] # dream不存在,但查詢時會自動新增 f # 
查詢dream後,就直接新增了一個dream f = nltk.defaultdict(lambda: 'xxx') f['hello'] = 'world' f f['here'] = f['here'] + 'comment' f """ Explanation: Default Dictionary 如果試著存取不存在的key,會造成錯誤。defaultdict可以使存取不存在的key時,自動新增一個預設值。 End of explanation """ old = dict(nltk.corpus.brown.tagged_words()[:100]) new = dict((value, key) for (key, value) in old.items()) new['JJ'] # 雖然成功的反相,但只能查出最後輸入的字 new2 = nltk.defaultdict(list) # 當key不存在時,都視為empty list for (key, value) in old.items(): new2[value].append(key) new2['JJ'] """ Explanation: Inverting a Dictionary dict的設計是用key來查詢value,如果你希望用value來查詢key,速度會很慢。一種簡單的解法是用(value,key)重新產生一個dict,就可以用value查詢key。 End of explanation """ new3 = nltk.Index((value, key) for (key, value) in old.items()) new3['JJ'] """ Explanation: 更簡單的方法: 利用nltk內建的函式。 End of explanation """ from nltk.corpus import brown brown_tagged_sents = brown.tagged_sents(categories='news') brown_sents = brown.sents(categories='news') """ Explanation: Summary Dictionary Methods d = {}: 建立空的dict d[key] = value: 為key指定新的value d.keys(): 傳回list of keys list(d): 傳回list of keys d.values(): 傳回list of values sorted(d): 傳回sorted list of keys key in d: 如果d有包含key則傳回True for key in d: 依序傳回每一個Key d1.update(d2): 將d2的每個item複製到d1 defaultdict(int): 預設value為0的dict defaultdict(list): 預設value為[]的dict Automatic Tagging End of explanation """ tags = [tag for (word, tag) in brown.tagged_words(categories='news')] nltk.FreqDist(tags).max() default_tagger = nltk.DefaultTagger('NN') # 因為NN頻率最高,所以未知詞性的情況一律當成NN default_tagger.tag(nltk.word_tokenize('i like my mother and dog')) # 當然預測的準確率很差,因為只有13%是真的NN default_tagger.evaluate(brown_tagged_sents) """ Explanation: Default Tagger 第一步,先準備一個預設的tagger,統計所有part-of-speech中,那一種詞性出現頻率最高,就視為預設值。 End of explanation """ patterns = [ (r'.*ing$', 'VBG'), (r'.*ed$', 'VBD'), (r'.*es$', 'VBZ'), (r'.*ould$', 'MD'), (r'.*\'s$', 'NN$'), (r'.*s$', 'NNS'), (r'^-?[0-9]+(.[0-9]+)?$', 'CD'), (r'.*', 'NN') ] regexp_tagger = nltk.RegexpTagger(patterns) regexp_tagger.tag(nltk.word_tokenize('i could 
be sleeping in 9 AM')) regexp_tagger.evaluate(brown_tagged_sents) """ Explanation: Regular Expression Tagger 用經驗判斷那些字尾可能是什麼詞性,然後用regular expression寫成條件。 End of explanation """ unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500]) unigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM')) """ Explanation: Unigram Tagger End of explanation """ unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:500], backoff = regexp_tagger) unigram_tagger.evaluate(brown_tagged_sents[500:]) unigram_tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news')[:4000], backoff = regexp_tagger) unigram_tagger.evaluate(brown_tagged_sents[4000:]) """ Explanation: unigram tagger是統計每個字最常出現的詞性,因此訓練資料越大,就會越準確。但遇到沒看過的字,就會傳回None。因此需要設backoff,當unigram tagger無法判斷時,用另一個tagger來輔助。 End of explanation """ bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000]) bigram_tagger.tag(nltk.word_tokenize('i could be sleeping in 9 AM')) bigram_tagger = nltk.BigramTagger(brown.tagged_sents(categories='news')[:4000], backoff=unigram_tagger) bigram_tagger.evaluate(brown_tagged_sents[4000:]) """ Explanation: 兩個重點: 隨著training data增加,準確率也有提升,利用unigram最高可以到90%左右 記得將training/testing data分開,否則準確率是不準的 Bigram Tagger 統計兩個單字組成的bigram來作tagger。precision較高,但recall很低,一旦遇到不認識的字就馬上出現None。 End of explanation """ from cPickle import dump output = open('t2.pkl', 'wb') dump(bigram_tagger, output, -1) output.close() from cPickle import load input = open('t2.pkl', 'rb') tagger = load(input) input.close() tagger.evaluate(brown_tagged_sents[4000:]) """ Explanation: Storing Taggers 因為training tagger很花時間,所以儲存結果是必要的。利用cPickle.dump來寫出binary格式的物件 End of explanation """ brown_sents = brown.sents() brown_tagged_sents = brown.tagged_sents(tagset = 'universal') default_tagger = nltk.DefaultTagger('NOUN') unigram_tagger = nltk.UnigramTagger(brown_tagged_sents[:4000], backoff=default_tagger) bigram_tagger = nltk.BigramTagger(brown_tagged_sents[:4000], 
backoff=unigram_tagger) unigram_tagger.tag(nltk.word_tokenize('I like your mother')) test = [tag for sent in brown_sents[4000:] for (word, tag) in bigram_tagger.tag(sent)] gold = [tag for sent in brown_tagged_sents[4000:] for (word, tag) in sent] print nltk.ConfusionMatrix(gold, test) """ Explanation: Confusion Matrix End of explanation """
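What evaluate() and nltk.ConfusionMatrix compute above can be reproduced by hand. A small Python 3 sketch with made-up tag sequences (not real Brown corpus data):

```python
from collections import Counter

def tag_accuracy(gold, test):
    # Fraction of positions where the predicted tag matches the gold tag --
    # the same score tagger.evaluate() reports.
    matches = sum(1 for g, t in zip(gold, test) if g == t)
    return matches / len(gold)

def confusion_counts(gold, test):
    # (gold_tag, predicted_tag) -> count: the raw data behind a
    # confusion matrix.
    return Counter(zip(gold, test))

gold = ['NOUN', 'VERB', 'NOUN', 'ADJ', 'NOUN']
test = ['NOUN', 'NOUN', 'NOUN', 'ADJ', 'VERB']
print(tag_accuracy(gold, test))                        # 0.6
print(confusion_counts(gold, test)[('VERB', 'NOUN')])  # 1
```

Off-diagonal entries such as (VERB, NOUN) show which gold tags the tagger systematically confuses, which is what the printed ConfusionMatrix makes easy to scan.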
astrograzl/SymPyTut
notebooks/Fundamentals-of-mathematics.ipynb
bsd-3-clause
3 # an int 3.0 # a float """ Explanation: Fundamentals of mathematics Let's begin by learning about the basic SymPy objects and the operations we can carry out on them. We'll learn the SymPy equivalents of many math verbs like &ldquo;to solve&rdquo; (an equation), &ldquo;to expand&rdquo; (an expression), &ldquo;to factor&rdquo; (a polynomial). Numbers In Python, there are two types of number objects: ints and floats. End of explanation """ 1/7 # int/int gives int """ Explanation: Integer objects in Python are a faithful representation of the set of integers $\mathbb{Z}={\ldots,-2,-1,0,1,2,\ldots}$. Floating point numbers are approximate representations of the reals $\mathbb{R}$. Regardless of its absolute size, a floating point number is only accurate to 16 decimals. Special care is required when specifying rational numbers, because integer division might not produce the answer you want. In other words, Python will not automatically convert the answer to a floating point number, but instead round the answer to the closest integer: End of explanation """ 1.0/7 # float/int gives float """ Explanation: To avoid this problem, you can force float division by using the number 1.0 instead of 1: End of explanation """ S('1/7') # = Rational(1,7) """ Explanation: This result is better, but it's still only an approximation of the exact number $\frac{1}{7} \in \mathbb{Q}$, since a float has 16 decimals while the decimal expansion of $\frac{1}{7}$ is infinitely long. To obtain an exact representation of $\frac{1}{7}$ you need to create a SymPy expression. You can sympify any expression using the shortcut function S(): End of explanation """ 2**10 # same as S('2^10') """ Explanation: Note the input to S() is specified as a text string delimited by quotes. We could have achieved the same result using S('1')/7 since a SymPy object divided by an int is a SymPy object. 
Except for the tricky Python division operator, other math operators like addition +, subtraction -, and multiplication * work as you would expect. The syntax ** is used in Python to denote exponentiation: End of explanation """ pi pi.evalf() """ Explanation: When solving math problems, it's best to work with SymPy objects, and wait to compute the numeric answer in the end. To obtain a numeric approximation of a SymPy object as a float, call its .evalf() method: End of explanation """ from __future__ import division from sympy import * x, y, z, t = symbols('x y z t') k, m, n = symbols('k m n', integer=True) f, g, h = symbols('f g h', cls=Function) """ Explanation: The method .n() is equivalent to .evalf(). The global SymPy function N() can also be used to to compute numerical values. You can easily change the number of digits of precision of the approximation. Enter pi.n(400) to obtain an approximation of $\pi$ to 400 decimals. Symbols Python is a civilized language so there's no need to define variables before assigning values to them. When you write a = 3, you define a new name a and set it to the value 3. You can now use the name a in subsequent calculations. Most interesting SymPy calculations require us to define symbols, which are the SymPy objects for representing variables and unknowns. For your convenience, when live.sympy.org starts, it runs the following commands automatically: End of explanation """ x + 2 # an Add expression p + 2 """ Explanation: The first statement instructs python to convert 1/7 to 1.0/7 when dividing, potentially saving you from any int division confusion. The second statement imports all the SymPy functions. The remaining statements define some generic symbols x, y, z, and t, and several other symbols with special properties. 
Note the difference between the following two statements: End of explanation """ p = Symbol('p') # the same as p = symbols('p') p + 2 # = Add(Symbol('p'), Integer(2)) """ Explanation: The name x is defined as a symbol, so SymPy knows that x + 2 is an expression; but the variable p is not defined, so SymPy doesn't know what to make of p + 2. To use p in expressions, you must first define it as a symbol: End of explanation """ a0, a1, a2, a3 = symbols('a0:4') """ Explanation: You can define a sequence of variables using the following notation: End of explanation """ 3+3 _*2 """ Explanation: You can use any name you want for a variable, but it's best if you avoid the letters Q,C,O,S,I,N and E because they have special uses in SymPy: I is the unit imaginary number $i \equiv \sqrt(-1)$, E is the base of the natural logarithm, S() is the sympify function, N() is used to obtain numeric approximations, and O is used for big-O notation. The underscore symbol _ is a special variable that contains the result of the last printed value. The variable _ is analogous to the ans button on certain calculators, and is useful in multi-step calculations: End of explanation """ expr = 2*x + 3*x - sin(x) - 3*x + 42 simplify(expr) """ Explanation: Expresions You define SymPy expressions by combining symbols with basic math operations and other functions: End of explanation """ factor( x**2-2*x-8 ) expand( (x-4)*(x+2) ) a, b = symbols('a b') collect(x**2 + x*b + a*x + a*b, x) # collect terms for diff. pows of x """ Explanation: The function simplify can be used on any expression to simplify it. The examples below illustrate other useful SymPy functions that correspond to common mathematical operations on expressions: End of explanation """ expr = sin(x) + cos(y) expr expr.subs({x:1, y:2}) expr.subs({x:1, y:2}).n() """ Explanation: To substitute a given value into an expression, call the .subs() method, passing in a python dictionary object { key:val, ... 
} with the symbol&ndash;value substitutions you want to make: End of explanation """ solve( x**2 + 2*x - 8, x) """ Explanation: Note how we used .n() to obtain the expression's numeric value. Solving equations The function solve is the main workhorse in SymPy. This incredibly powerful function knows how to solve all kinds of equations. In fact solve can solve pretty much any equation! When high school students learn about this function, they get really angry&mdash;why did they spend five years of their life learning to solve various equations by hand, when all along there was this solve thing that could do all the math for them? Don't worry, learning math is never a waste of time. The function solve takes two arguments. Use solve(expr,var) to solve the equation expr==0 for the variable var. You can rewrite any equation in the form expr==0 by moving all the terms to one side of the equation; the solutions to $A(x) = B(x)$ are the same as the solutions to $A(x) - B(x) = 0$. For example, to solve the quadratic equation $x^2 + 2x - 8 = 0$, use End of explanation """ a, b, c = symbols('a b c') solve( a*x**2 + b*x + c, x) """ Explanation: In this case the equation has two solutions so solve returns a list. Check that $x = 2$ and $x = -4$ satisfy the equation $x^2 + 2x - 8 = 0$. The best part about solve and SymPy is that you can obtain symbolic answers when solving equations. Instead of solving one specific quadratic equation, we can solve all possible equations of the form $ax^2 + bx + c = 0$ using the following steps: End of explanation """ gen_sol = solve( a*x**2 + b*x + c, x) [ gen_sol[0].subs({'a':1,'b':2,'c':-8}), gen_sol[1].subs({'a':1,'b':2,'c':-8}) ] """ Explanation: In this case solve calculated the solution in terms of the symbols a, b, and c. You should be able to recognize the expressions in the solution&mdash;it's the quadratic formula $x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. 
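As a quick numeric cross-check of the quadratic formula, here is a plain-Python transcription of the general solution (using cmath so a negative discriminant yields complex roots instead of an error):

```python
import cmath

def quadratic_roots(a, b, c):
    # Direct transcription of the general solution solve() returned:
    # x = (-b +/- sqrt(b^2 - 4ac)) / (2a)
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, 2, -8))  # roots 2 and -4, matching solve() above
```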
To solve a specific equation like $x^2 + 2x - 8 = 0$, we can substitute the coefficients $a = 1$, $b = 2$, and $c = -8$ into the general solution to obtain the same result: End of explanation """ solve([x + y - 3, 3*x - 2*y], [x, y]) """ Explanation: To solve a system of equations, you can feed solve with the list of equations as the first argument, and specify the list of unknowns you want to solve for as the second argument. For example, to solve for $x$ and $y$ in the system of equations $x + y = 3$ and $3x - 2y = 0$, use End of explanation """ h, k = symbols('h k') solve( (x-h)**2 + k - (x**2-4*x+7), [h,k] ) ((x-2)**2+3).expand() # so h = 2 and k = 3, verify... """ Explanation: The function solve is like a Swiss Army knife you can use to solve all kind of problems. Suppose you want to complete the square in the expression $x^2 - 4x + 7$, that is, you want to find constants $h$ and $k$ such that $x^2 -4x + 7 = (x-h)^2 + k$. There is no special &ldquo;complete the square&rdquo; function in SymPy, but you can call solve on the equation $(x - h)^2 + k - (x^2 - 4x + 7) = 0$ to find the unknowns $h$ and $k$: End of explanation """ a, b, c, d = symbols('a b c d') a/b + c/d together(a/b + c/d) """ Explanation: Learn the basic SymPy commands and you'll never need to suffer another tedious arithmetic calculation painstakingly performed by hand again! Rational functions By default, SymPy will not combine or split rational expressions. 
You need to use together to symbolically calculate the addition of fractions:
End of explanation
"""
apart( (x**2+x+4)/(x+2) )
"""
Explanation: Alternately, if you have a rational expression and want to divide the numerator by the denominator, use the apart function:
End of explanation
"""
log(E**3)  # same as ln(E**3)
"""
Explanation: Exponentials and logarithms
Euler's constant $e = 2.71828\dots$ is defined in one of several ways,
$$ e \equiv \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n \equiv \lim_{\epsilon\to 0}(1+\epsilon)^{1/\epsilon} \equiv \sum_{n=0}^{\infty}\frac{1}{n!}, $$
and is denoted E in SymPy. Using exp(x) is equivalent to E**x. The functions log and ln both compute the logarithm base $e$:
End of explanation
"""
x, y = symbols('x y')
log(x*y).expand()
a, b = symbols('a b', positive=True)
log(a*b).expand()
"""
Explanation: By default, SymPy assumes the inputs to functions like exp and log are complex numbers, so it will not expand certain logarithmic expressions. However, indicating to SymPy that the inputs are positive real numbers will make the expansions work:
End of explanation
"""
P = (x-1)*(x-2)*(x-3)
P
"""
Explanation: Polynomials
Let's define a polynomial $P$ with roots at $x = 1$, $x = 2$, and $x = 3$:
End of explanation
"""
P.expand()
"""
Explanation: To see the expanded version of the polynomial, call its expand method:
End of explanation
"""
P.factor()
P.simplify()
"""
Explanation: When the polynomial is expressed in its expanded form $P(x) = x^3 - 6x^2 + 11x - 6$, we can't immediately identify its roots. This is why the factored form $P(x) = (x - 1)(x - 2)(x - 3)$ is preferable. To factor a polynomial, call its factor method or simplify it:
End of explanation
"""
roots = solve(P,x)
roots
# let's check if P equals (x-1)(x-2)(x-3)
simplify( P - (x-roots[0])*(x-roots[1])*(x-roots[2]) )
"""
Explanation: Recall that the roots of the polynomial $P(x)$ are defined as the solutions to the equation $P(x) = 0$.
We can use the solve function to find the roots of the polynomial: End of explanation """ p = (x-5)*(x+5) q = x**2 - 25 p == q # fail p - q == 0 # fail simplify(p - q) == 0 sin(x)**2 + cos(x)**2 == 1 # fail simplify( sin(x)**2 + cos(x)**2 - 1) == 0 """ Explanation: Equality checking In the last example, we used the simplify function to check whether two expressions were equal. This way of checking equality works because $P = Q$ if and only if $P - Q = 0$. This is the best way to check if two expressions are equal in SymPy because it attempts all possible simplifications when comparing the expressions. Below is a list of other ways to check whether two quantities are equal with example cases where they fail: End of explanation """ sin(pi/6) cos(pi/6) """ Explanation: Trigonometry The trigonometric functions sin and cos take inputs in radians: End of explanation """ sin(30*pi/180) # 30 deg = pi/6 rads """ Explanation: For angles in degrees, you need a conversion factor of $\frac{\pi}{180}$[rad/$^\circ$]: End of explanation """ asin(1/2) acos(sqrt(3)/2) """ Explanation: The inverse trigonometric functions $\sin^{-1}(x) \equiv \arcsin(x)$ and $\cos^{-1}(x) \equiv \arccos(x)$ are used as follows: End of explanation """ tan(pi/6) atan( 1/sqrt(3) ) """ Explanation: Recall that $\tan(x) \equiv \frac{\sin(x)}{\cos(x)}$. The inverse function of $\tan(x)$ is $\tan^{-1}(x) \equiv \arctan(x) \equiv$ atan(x) End of explanation """ sin(x) == cos(x - pi/2) simplify( sin(x)*cos(y)+cos(x)*sin(y) ) e = 2*sin(x)**2 + 2*cos(x)**2 trigsimp(e) trigsimp(log(e)) trigsimp(log(e), deep=True) simplify(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4) """ Explanation: The function acos returns angles in the range $[0, \pi]$, while asin and atan return angles in the range $[-\frac{\pi}{2},\frac{\pi}{2}]$. 
Here are some trigonometric identities that SymPy knows:
End of explanation
"""
expand(sin(2*x))       # = (sin(2*x)).expand()
expand_trig(sin(2*x))  # = (sin(2*x)).expand(trig=True)
"""
Explanation: The function trigsimp does essentially the same job as simplify.
If instead of simplifying you want to expand a trig expression, you should use expand_trig, because the default expand won't touch trig functions:
End of explanation
"""
simplify( (exp(x)+exp(-x))/2 )
simplify( (exp(x)-exp(-x))/2 )
"""
Explanation: Hyperbolic trigonometric functions
The hyperbolic sine and cosine in SymPy are denoted sinh and cosh respectively, and SymPy is smart enough to recognize them when simplifying expressions:
End of explanation
"""
simplify( cosh(x)**2 - sinh(x)**2 )
"""
Explanation: Recall that $x = \cosh(\mu)$ and $y = \sinh(\mu)$ are defined as the $x$ and $y$ coordinates of a point on the hyperbola with equation $x^2 - y^2 = 1$ and therefore satisfy the identity $\cosh^2 x - \sinh^2 x = 1$:
End of explanation
"""
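The hyperbolic identity above can also be spot-checked numerically with only the standard library, independently of SymPy's symbolic simplification; this is just a sanity check, not part of the tutorial's API:

```python
import math

# Numeric spot-check of cosh^2(mu) - sinh^2(mu) = 1 for a few values of mu
for mu in (0.0, 0.5, 1.0, 2.0):
    assert abs(math.cosh(mu)**2 - math.sinh(mu)**2 - 1.0) < 1e-9
print("identity holds")
```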
alexlib/openpiv-python
openpiv/docs/src/windef.ipynb
gpl-3.0
# import packages
from openpiv import windef  # <---- see windef.py for details
from openpiv import tools, scaling, validation, filters, preprocess
import openpiv.pyprocess as process
from openpiv import pyprocess
import numpy as np
import os
from time import time
import warnings
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Multi-grid window deformation algorithm tutorial
End of explanation
"""
settings = windef.Settings()

'Data related settings'
# Folder with the images to process
settings.filepath_images = '../../examples/test1/'
# Folder for the outputs
settings.save_path = '../../../examples/test1/'
# Root name of the output Folder for Result Files
settings.save_folder_suffix = 'Test_1'
# Format and Image Sequence
settings.frame_pattern_a = 'exp1_001_a.bmp'
settings.frame_pattern_b = 'exp1_001_b.bmp'

'Region of interest'
# (50,300,50,300) #Region of interest: (xmin,xmax,ymin,ymax) or 'full' for full image
settings.ROI = 'full'

'Image preprocessing'
# 'None' for no masking, 'edges' for edges masking, 'intensity' for intensity masking
# WARNING: This part is under development so better not to use MASKS
settings.dynamic_masking_method = 'None'
settings.dynamic_masking_threshold = 0.005
settings.dynamic_masking_filter_size = 7
settings.deformation_method = 'symmetric'

'Processing Parameters'
settings.correlation_method = 'circular'  # 'circular' or 'linear'
settings.normalized_correlation = False
settings.num_iterations = 2  # select the number of PIV passes
# add the interrogation window size for each pass.
# For the moment, it should be a power of 2
settings.windowsizes = (64, 32, 16)  # if longer than n iterations the rest is ignored
# The overlap of the interrogation window for each pass.
settings.overlap = (32, 16, 8)  # This is 50% overlap
# Has to be a value with base two. In general window size/2 is a good choice.
# method used for subpixel interpolation: 'gaussian','centroid','parabolic'
settings.subpixel_method = 'gaussian'
# order of the image interpolation for the window deformation
settings.interpolation_order = 3
settings.scaling_factor = 1  # scaling factor pixel/meter
settings.dt = 1  # time between two frames (in seconds)

'Signal to noise ratio options (only for the last pass)'
# It is possible to decide if the S/N should be computed (for the last pass) or not
# settings.extract_sig2noise = True  # 'True' or 'False' (only for the last pass)
# method used to calculate the signal to noise ratio 'peak2peak' or 'peak2mean'
settings.sig2noise_method = 'peak2peak'
# select the width of the mask used to mask out pixels next to the main peak
settings.sig2noise_mask = 2
# If extract_sig2noise==False the values in the signal to noise ratio
# output column are set to NaN

'vector validation options'
# choose if you want to do validation of the first pass: True or False
settings.validation_first_pass = True
# only affecting the first pass of the interrogation; the following passes
# in the multipass will be validated

'Validation Parameters'
# The validation is done at each iteration based on three filters.
# The first filter is based on the min/max ranges. Observe that these values are defined in
# terms of minimum and maximum displacement in pixel/frames.
settings.MinMax_U_disp = (-30, 30)
settings.MinMax_V_disp = (-30, 30)
# The second filter is based on the global STD threshold
settings.std_threshold = 7  # threshold of the std validation
# The third filter is the median test (not normalized at the moment)
settings.median_threshold = 3  # threshold of the median validation
# On the last iteration, an additional validation can be done based on the S/N.
settings.median_size = 1  # defines the size of the local median

'Validation based on the signal to noise ratio'
# Note: only available when extract_sig2noise==True and only for the last
# pass of the interrogation
# Enable the signal to noise ratio validation. Options: True or False
# settings.do_sig2noise_validation = False  # This is time consuming
# minimum signal to noise ratio that is needed for a valid vector
settings.sig2noise_threshold = 1.2

'Outlier replacement or Smoothing options'
# Replacement options for vectors which are masked as invalid by the validation
settings.replace_vectors = True  # Enable the replacement. Choose: True or False
settings.smoothn = True  # Enables smoothing of the displacement field
settings.smoothn_p = 0.5  # This is a smoothing parameter
# select a method to replace the outliers: 'localmean', 'disk', 'distance'
settings.filter_method = 'localmean'
# maximum iterations performed to replace the outliers
settings.max_filter_iteration = 4
settings.filter_kernel_size = 2  # kernel size for the localmean method

'Output options'
# Select if you want to save the plotted vectorfield: True or False
settings.save_plot = False
# Choose whether you want to see the vectorfield or not: True or False
settings.show_plot = True
settings.scale_plot = 200  # select a value to scale the quiver plot of the vectorfield
# run the script with the given settings
"""
Explanation: Set up all the settings:
where the images are
where to save the results
names of the image files
what is the region of interest
do you apply dynamic masking or a masking image
what kind of correlation to apply: circular vs linear
interrogation window sizes, overlap sizes, number of iterations
time interval, interpolation options, etc.
Read the tutorial by Theo Kaufer with all the details.
See windef.py for more code details End of explanation """ windef.piv(settings) """ Explanation: Run the windef.py function, called piv with these settings End of explanation """ # we can run it from any folder path = settings.filepath_images frame_a = tools.imread( os.path.join(path,settings.frame_pattern_a)) frame_b = tools.imread( os.path.join(path,settings.frame_pattern_b)) frame_a = (frame_a).astype(np.int32) frame_b = (frame_b).astype(np.int32) u, v, sig2noise = process.extended_search_area_piv( frame_a, frame_b, \ window_size=32, overlap=16, dt=1, search_area_size=64, sig2noise_method='peak2peak' ) x, y = process.get_coordinates( image_size=frame_a.shape, search_area_size=64, overlap=16 ) u, v, mask = validation.sig2noise_val( u, v, sig2noise, threshold = 1.3 ) u, v, mask = validation.global_val( u, v, (-1000, 2000), (-1000, 1000) ) u, v = filters.replace_outliers( u, v, method='localmean', max_iter=10, kernel_size=2) x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor = 1) x, y, u, v = tools.transform_coordinates(x, y, u, v) tools.save(x, y, u, v, mask, 'test1.vec' ) tools.display_vector_field('test1.vec', scale=75, width=0.0035); """ Explanation: Run the extended search area PIV for comparison End of explanation """
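The core idea behind each PIV pass, finding the displacement that maximizes the cross-correlation between two interrogation windows, can be sketched in one dimension with plain Python. This is an illustrative toy (integer shifts only, no subpixel fit, no FFT), not OpenPIV's actual implementation:

```python
def best_shift(a, b, max_shift):
    """Return the integer shift s (|s| <= max_shift) that maximizes the
    cross-correlation between 1-D signals a and b (out-of-range samples
    are skipped). This mirrors, in one dimension, what a PIV correlation
    pass does for each interrogation window."""
    n = len(a)
    best_s, best_c = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        c = sum(a[i] * b[i + s] for i in range(n) if 0 <= i + s < n)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

# A pattern displaced by 3 samples is recovered as shift 3
a = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(best_shift(a, b, 5))  # -> 3
```

Real PIV codes do this in 2-D with FFT-based (circular or linear) correlation and then refine the peak location to subpixel accuracy, e.g. with the Gaussian fit selected by `settings.subpixel_method` above.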
tnzmnjm/Seaborn-visualisation
CO2 Emission.ipynb
agpl-3.0
# Importing Iran`s dataset
IRAN_SOURCE_FILE = 'iran_emission_dataset.csv'
iran_csv = pd.read_csv(IRAN_SOURCE_FILE)
iran_csv.head(5)

# Importing Turkey`s dataset
TURKEY_SOURCE_FILE = 'turkey_emission_dataset.csv'
turkey_csv = pd.read_csv(TURKEY_SOURCE_FILE)
turkey_csv.head(5)
"""
Explanation: Importing the CSV files
This CSV file is available on [Iran's dataset page on the World Bank website](https://www.worldbank.org/en/country/iran); Turkey's dataset is available on the same website. I cleaned this data using LibreOffice and kept only the important rows to keep the code simple.
End of explanation
"""
iran_csv = iran_csv.transpose()
iran_csv = iran_csv.fillna(0)
iran_csv.columns = iran_csv.ix[0,:]
iran_csv = iran_csv.ix[1:,:]
iran_csv.astype(np.float64)
iran_csv.head(5)

turkey_csv = turkey_csv.transpose()
turkey_csv = turkey_csv.fillna(0)
turkey_csv.columns = turkey_csv.ix[0,:]
turkey_csv = turkey_csv.ix[1:,:]
turkey_csv.astype(np.float64)
turkey_csv.head(5)
"""
Explanation: As I wanted the emission types to be my columns and the years to be the rows, I used the transpose() function. Some data was missing for the last three years, which I substituted with zero.
End of explanation
"""
#Iran (Blue)
sns.distplot(iran_csv.ix[:,1])
#Turkey (Green)
sns.distplot(turkey_csv.ix[:,1])
"""
Explanation: Distribution plot for CO2 emissions from liquid fuel consumption
End of explanation
"""
SOLID_FUEL_COLUMN_INDEX = 2
a = sns.jointplot(iran_csv.ix[:,SOLID_FUEL_COLUMN_INDEX],
                  turkey_csv.ix[:,SOLID_FUEL_COLUMN_INDEX]).set_axis_labels(
    "IRAN: " + iran_csv.columns[SOLID_FUEL_COLUMN_INDEX],
    "TURKEY: " + turkey_csv.columns[SOLID_FUEL_COLUMN_INDEX])
a.savefig("output.png")
"""
Explanation: # Joint plot for CO2 emissions from solid fuel consumption in Iran and Turkey
As there were no values for the gaseous column before 1978, I preferred to present my data using solid fuel emission values instead.
End of explanation
"""
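The `DataFrame.transpose()` step used above has a simple plain-Python analogue for rectangular list-of-lists tables; the sketch below is illustrative only and assumes every row has the same length:

```python
def transpose(rows):
    """Transpose a rectangular list-of-lists table, the plain-Python
    analogue of DataFrame.transpose() used above."""
    return [list(col) for col in zip(*rows)]

table = [["year", 1990, 1991],
         ["co2_liquid", 10.5, 11.2]]
print(transpose(table))  # -> [['year', 'co2_liquid'], [1990, 10.5], [1991, 11.2]]
```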
SHDShim/pytheos
examples/6_p_scale_test_Yokoo_Au.ipynb
apache-2.0
%config InlineBackend.figure_format = 'retina'
"""
Explanation: For high dpi displays.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
"""
Explanation: 0. General note
This example compares the pressure calculated by pytheos with the original publication for the gold scale by Yokoo 2009.
1. Global setup
End of explanation
"""
eta = np.linspace(1., 0.60, 21)
print(eta)
yokoo_au = eos.gold.Yokoo2009()
yokoo_au.print_equations()
yokoo_au.print_equations()
yokoo_au.print_parameters()
v0 = 67.84742110765599
yokoo_au.three_r
v = v0 * (eta)
temp = 3000.
p = yokoo_au.cal_p(v, temp * np.ones_like(v))
"""
Explanation: 3. Compare
End of explanation
"""
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
    print("{0: .3f} {1: .2f}".format(eta_i, p_i))
v = yokoo_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1.-(v/v0))
"""
Explanation: <img src='./tables/Yokoo_Au.png'>
End of explanation
"""
yokoo_au = eos.gold.Yokoo2009(reproduce_table=True)
p = yokoo_au.cal_p(v, temp * np.ones_like(v))
"""
Explanation: I cannot quite reproduce the table values. The mismatch is about 3 GPa at 3000 K and 380 GPa. This means his parameters may have been rounded. Therefore, I readjusted the EOS parameters from Yokoo to match their table values better. Users can choose whether to use the table values or the parameter values. If reproduce_table is set to True, the difference reduces to 0.1 GPa.
End of explanation
"""
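For context, pressure scales like Yokoo's are built from an isothermal equation of state plus thermal corrections. Below is a plain-Python sketch of only the third-order Birch-Murnaghan static term; the parameter values (167 GPa, 5.9) are illustrative numbers for gold, not necessarily the exact values pytheos uses, and the full Yokoo scale adds thermal pressure terms on top:

```python
def birch_murnaghan3(v_over_v0, k0, k0p):
    """Third-order Birch-Murnaghan static pressure (GPa) at compression
    V/V0, given bulk modulus k0 (GPa) and its pressure derivative k0p.
    Isothermal sketch only -- a full thermal pressure scale adds more terms."""
    x = v_over_v0 ** (-1.0 / 3.0)          # = (V0/V)^(1/3)
    f = 1.5 * k0 * (x**7 - x**5)
    return f * (1.0 + 0.75 * (k0p - 4.0) * (x**2 - 1.0))

# At V = V0 the static pressure is zero by construction
print(birch_murnaghan3(1.0, 167.0, 5.9))  # -> 0.0
```

Compressing the volume (V/V0 < 1) yields a positive pressure, which is the qualitative behavior the eta sweep above probes.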
cliburn/sta-663-2017
exams/Midterm Exams.ipynb
mit
heart = sm.datasets.heart.load_pandas().data
heart.head(n=6)
"""
Explanation: Q1 (10 points)
The heart dataframe contains the survival time after receiving a heart transplant, the age of the patient and whether or not the survival time was censored
Number of Observations - 69
Number of Variables - 3
Variable name definitions::
survival - Days after surgery until death
censors - indicates if an observation is censored. 1 is uncensored
age - age at the time of surgery
Answer the following questions with respect to the heart data set:
Sort the data frame by age in descending order (oldest at top) without making a copy
How many patients were censored?
What is the average age for uncensored patients under the age of 45?
Find the mean and standard deviation of age and survival time for each value of the censoring variable.
Plot the linear regression of survival (y-axis) against age (x-axis) conditioned on censoring (i.e. either have two separate plots or a single plot using color to distinguish censored and uncensored patients).
End of explanation
"""
def sgd(b, x, y, max_iter, alpha):
    n = x.shape[0]
    for i in range(max_iter):
        for j in range(n):
            b[0] -= alpha * (2*(b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
            b[1] -= alpha * (2*x[j] * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
            b[2] -= alpha * (2*x[j]**2 * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
    return b

np.random.seed(12345)
n = 10000
x = np.linspace(0, 10, n)
y = 2*x**2 + 6*x + 3 + np.random.normal(0, 5, n)
k = 100
alpha = 0.00001
b0 = np.random.random(3)
b = sgd(b0, x, y, k, alpha)
yhat = b[0] + b[1]*x + b[2]*x**2
idx = sorted(np.random.choice(n, 100))
plt.scatter(x[idx], y[idx])
plt.plot(x[idx], yhat[idx], c='red')
pass
"""
Explanation: Q2 (10 points)
Write a flatmap function that works like map except that the function given takes a list and returns a list of lists that is then flattened (4 points). In other words, flatmap takes two arguments, a function and a list (or other iterable), just like map.
However, the function given as the first argument takes a single argument and returns a list (or other iterable). In order to get a simple list back, we need to unravel the resulting list of lists, hence the flatten part.
For example,
python
flatmap(lambda x: x.split(), ["hello world", "the quick dog"])
should return
python
["hello", "world", "the", "quick", "dog"]
Q3 (10 points)
An affine transformation of a vector $x$ is the operation $Ax + b$, where $A$ is a matrix and $b$ is a vector.
Write a function to perform an affine transformation.
Write a function to reverse the affine transformation.
Perform an affine transformation of a random 3 by 3 matrix A, and random 3-vectors $x$ and $b$ drawn from the standard uniform distribution with random seed = 1234, and save the result as $y$.
Perform the reverse affine transform on $y$ to recover the original vector $x$.
Q4 (10 points)
You are given the following DNA sequence in FASTA format.
python
dna = '''> A simulated DNA sequence.
TTAGGCAGTAACCCCGCGATAGGTAGAGCACGCAATCGTCAAGGCGTGCGGTAGGGCTTCCGTGTCTTACCCAAAGAAAC
GACGTAACGTTCCCCGGGCGGTTAAACCAAATCCACTTCACCAACGGCATAACGCGAAGCCCAAACTAAATCGCGCTCGA
GCGGACGCACATTCGCTAGGCTGTGTAGGGGCAGTCTCCGTTAAGGACGATTACCACGTGATGGTAGTTCGCAACATTGG
ACTGTCGGGAATTCCCGAAGGCACTTAAGCGGAGTCTTAGCGTACAGTAACGCAGTCCCGCGTGAACGACTGACAGATGA
'''
Remove the comment line and combine the 4 lines of nucleotide symbols into a single string
Count the frequency of all 16 two-letter combinations in the string.
Q5 (10 points)
The code given below performs a stochastic gradient descent to fit a quadratic polynomial to $n$ data points. Make the code run faster by:
Using numba JIT
Using Cython
Some test code is provided. Please run this for your optimized versions to confirm that they give the same results as the original code.
End of explanation
"""
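A sketch answer to the flatmap question, assuming the intended behavior is one level of flattening, can be built from itertools.chain:

```python
from itertools import chain

def flatmap(func, iterable):
    """Apply func (which returns an iterable) to every element of
    iterable and flatten the result by one level."""
    return list(chain.from_iterable(map(func, iterable)))

print(flatmap(lambda x: x.split(), ["hello world", "the quick dog"]))
# -> ['hello', 'world', 'the', 'quick', 'dog']
```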
paulcon/active_subspaces
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
mit
import active_subspaces as ac import numpy as np %matplotlib inline # The otlcircuit_functions.py file contains two functions: the circuit function (circuit(xx)) # and its gradient (circuit_grad(xx)). Each takes an Mx6 matrix (M is the number of data # points) with rows being normalized inputs; circuit returns a column vector of function # values at each row of the input and circuit_grad returns a matrix whose ith row is the # gradient of circuit at the ith row of xx with respect to the normalized inputs from otlcircuit_functions import * """ Explanation: Active Subspaces Example Function: Circuit Voltage Ryan Howard, CO School of Mines, &#114;&#121;&#104;&#111;&#119;&#97;&#114;&#100;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; Paul Constantine, CO School of Mines, &#112;&#99;&#111;&#110;&#115;&#116;&#97;&#110;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; <br> In this tutorial, we'll be applying active subspaces to the function $$ V_m = \frac{(V_{b1}+0.74)\beta(R_{c2}+9)}{\beta(R_{c2}+9)+R_f}+\frac{11.35R_f}{\beta(R_{c2}+9)+R_f}+\frac{0.74R_f\beta(R_{c2}+9)}{R_{c1}(\beta(R_{c2}+9)+R_f)}, $$ where $V_{b1} = 12R_{b2}/(R_{b1}+R_{b2})$, as seen on http://www.sfu.ca/~ssurjano/otlcircuit.html. This function models the midpoint voltage of a transformerless push-pull circuit, and its inputs and their distributions are described in the table below. 
Variable|Symbol|Distribution (U(min, max)) :-----|:-----:|:----- resistance b1|$R_{b1}$|U(50, 150) resistance b2|$R_{b2}$|U(25, 70) resistance f|$R_f$|U(.5, 3) resistance c1|$R_{c1}$|U(1.2, 2.5) resistance c2|$R_{c2}$|U(.25, 1.2) current gain|$\beta$|U(50, 300) End of explanation """ M = 1000 #This is the number of data points to use #Sample the input space according to the distributions in the table above Rb1 = np.random.uniform(50, 150, (M, 1)) Rb2 = np.random.uniform(25, 70, (M, 1)) Rf = np.random.uniform(.5, 3, (M, 1)) Rc1 = np.random.uniform(1.2, 2.5, (M, 1)) Rc2 = np.random.uniform(.25, 1.2, (M, 1)) beta = np.random.uniform(50, 300, (M, 1)) #the input matrix x = np.hstack((Rb1, Rb2, Rf, Rc1, Rc2, beta)) """ Explanation: First we draw M samples randomly from the input space. End of explanation """ #Upper and lower limits for inputs xl = np.array([50, 25, .5, 1.2, .25, 50]) xu = np.array([150, 70, 3, 2.5, 1.2, 300]) #XX = normalized input matrix XX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x) """ Explanation: Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$. End of explanation """ #output values (f) and gradients (df) f = circuit(XX) df = circuit_grad(XX) """ Explanation: Compute gradients to approximate the matrix on which the active subspace is based. End of explanation """ #Set up our subspace using the gradient samples ss = ac.subspaces.Subspaces() ss.compute(df=df, nboot=500) """ Explanation: Now we use our data to compute the active subspace. 
End of explanation """ #Component labels in_labels = ['Rb1', 'Rb2', 'Rf', 'Rc1', 'Rc2', 'beta'] #plot eigenvalues, subspace errors ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br) ac.utils.plotters.subspace_errors(ss.sub_br) #manually make the subspace 2D for the eigenvector and 2D summary plots ss.partition(2) #Compute the active variable values y = XX.dot(ss.W1) #Plot eigenvectors, sufficient summaries ac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels) ac.utils.plotters.sufficient_summary(y, f) """ Explanation: We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values). End of explanation """
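Under the hood, the active subspace is defined by the eigendecomposition of the average outer product of gradient samples, $C = \frac{1}{M}\sum_i \nabla f_i \nabla f_i^T$. Forming that matrix can be sketched without any dependencies (illustrative only; the library also performs the eigendecomposition and the bootstrap error estimates):

```python
def active_subspace_matrix(grads):
    """Average outer product C = (1/M) * sum(g g^T) of gradient samples,
    each g a list of length m -- the matrix whose dominant eigenvectors
    span the active subspace."""
    m = len(grads[0])
    c = [[0.0] * m for _ in range(m)]
    for g in grads:
        for i in range(m):
            for j in range(m):
                c[i][j] += g[i] * g[j] / len(grads)
    return c

# Gradients that always point along (1, 0): the active subspace is the x-axis,
# so only the (0, 0) entry of C is nonzero
grads = [[2.0, 0.0], [-1.0, 0.0], [3.0, 0.0]]
print(active_subspace_matrix(grads))
```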
maxis42/ML-DA-Coursera-Yandex-MIPT
2 Supervised learning/Lectures notebooks/12 bonus video/imdb.ipynb
mit
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: IMDB reviews
End of explanation
"""
imdb = pd.read_csv('labeledTrainData.tsv', delimiter='\t')
imdb.shape
imdb.head()
"""
Explanation: There are 25000 imdb user reviews with binary labels computed from the ratings: 0 for a rating < 5 and 1 for a rating >= 7. Full data: https://www.kaggle.com/c/word2vec-nlp-tutorial/data
Let's load the sample:
End of explanation
"""
imdb.sentiment.value_counts()
"""
Explanation: The classes are balanced:
End of explanation
"""
from sklearn.cross_validation import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(imdb.review.values, imdb.sentiment.values)
"""
Explanation: Let's split the sample into a training set and a test set:
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(sublinear_tf=True, use_idf=True)
X_train = vect.fit_transform(texts_train)
X_test = vect.transform(texts_test)
"""
Explanation: Let's vectorize the review texts:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
clf = LogisticRegression()
clf.fit(X_train, y_train)
print metrics.accuracy_score(y_test, clf.predict(X_test))
print metrics.roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
"""
Explanation: Logistic regression
Let's fit a logistic regression on the vectorized data and compute the AUC:
End of explanation
"""
X_train.shape
"""
Explanation: We ended up with a very large number of features:
End of explanation
"""
clf = LogisticRegression(C=0.15, penalty='l1')
clf.fit(X_train, y_train)
print np.sum(np.abs(clf.coef_) > 1e-4)
print metrics.accuracy_score(y_test, clf.predict(X_test))
print metrics.roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
"""
Explanation: Let's try selecting features with the lasso:
End of explanation
"""
from sklearn.linear_model import RandomizedLogisticRegression
rlg = RandomizedLogisticRegression(C=0.13)
rlg.fit(X_train, y_train)
"""
Explanation: Another way to select features is randomized logistic regression:
End of explanation
"""
np.sum(rlg.scores_ > 0)
"""
Explanation: Let's see how many features get selected:
End of explanation
"""
X_train_lasso = X_train[:, rlg.scores_ > 0]
X_test_lasso = X_test[:, rlg.scores_ > 0]
clf = LogisticRegression(C=1)
clf.fit(X_train_lasso, y_train)
print metrics.accuracy_score(y_test, clf.predict(X_test_lasso))
print metrics.roc_auc_score(y_test, clf.predict_proba(X_test_lasso)[:, 1])
"""
Explanation: Let's fit a logistic regression on the selected features:
End of explanation
"""
from sklearn.decomposition import TruncatedSVD
tsvd = TruncatedSVD(n_components=100)
X_train_pca = tsvd.fit_transform(X_train)
X_test_pca = tsvd.transform(X_test)
"""
Explanation: Principal component analysis
Let's build 100 synthetic features using the principal component method:
End of explanation
"""
clf = LogisticRegression()
clf.fit(X_train_pca, y_train)
print metrics.accuracy_score(y_test, clf.predict(X_test_pca))
print metrics.roc_auc_score(y_test, clf.predict_proba(X_test_pca)[:, 1])
"""
Explanation: Let's train a logistic regression on them:
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train_pca, y_train)
print metrics.accuracy_score(y_test, clf.predict(X_test_pca))
print metrics.roc_auc_score(y_test, clf.predict_proba(X_test_pca)[:, 1])
"""
Explanation: With the 100 features obtained this way, the quality is not much worse than with all 66702! Let's try training a random forest on them:
End of explanation
"""
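The TF-IDF weighting used above can be sketched without scikit-learn. This toy uses raw term counts times a smoothed idf; the real TfidfVectorizer additionally normalizes each row and, with sublinear_tf=True, replaces the raw count with 1 + log(tf):

```python
import math

def tfidf(docs):
    """Tiny TF-IDF sketch: raw term counts times smoothed idf,
    idf(w) = ln((1 + n_docs) / (1 + df(w))) + 1.
    Illustrative only -- not a drop-in for TfidfVectorizer."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in vocab}
    return [[d.split().count(w) * idf[w] for w in vocab] for d in docs], vocab

vectors, vocab = tfidf(["good movie", "bad movie"])
print(vocab)  # -> ['bad', 'good', 'movie']
```

Words appearing in every document ("movie") get the minimum idf of 1, while rarer words are up-weighted.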
GoogleCloudPlatform/bigquery-oreilly-book
09_bqml/image_embeddings.ipynb
apache-2.0
BUCKET='ai-analytics-solutions-kfpdemo' # CHANGE to a bucket you own """ Explanation: Image embeddings in BigQuery for image similarity and clustering tasks This notebook shows how to do use a pre-trained embedding as a vector representation of an image in Google Cloud Storage. Given this embedding, we can load it as a BQ-ML model and then carry out document similarity or clustering. This notebook accompanies the following Medium blog post: End of explanation """ import tensorflow as tf import tensorflow_hub as tfhub import os model = tf.keras.Sequential() model.add(tf.keras.Input(shape=[None,None,3])) model.add(tfhub.KerasLayer("https://tfhub.dev/google/efficientnet/b4/feature-vector/1", name='image_embeddings')) model.summary() """ Explanation: Embedding model for images We're going to use the EfficientNets model trained on ImageNet. It is compact and trained on a large variety of real-world images. End of explanation """ @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)]) def serve(filename): img = tf.io.read_file(filename[0]) img = tf.io.decode_image(img, channels=3) img = tf.cast(img, tf.float32) / 255.0 #img = tf.image.resize(img, [380, 380]) return model(img) path='gs://{}/effnet_image_embedding'.format(BUCKET) tf.saved_model.save(model, path, signatures={'serving_default': serve}) !saved_model_cli show --all --dir gs://$BUCKET/effnet_image_embedding """ Explanation: The model on TensorFlow Hub expects images of a certain size, and provided as normalized arrays. So, we'll define a serving function that carries out the necessary reading and preprocessing of the images. 
End of explanation """ %%bigquery CREATE OR REPLACE MODEL advdata.effnet_image_embed OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/effnet_image_embedding/*') """ Explanation: Loading model into BigQuery Since we saved the model in SavedModel format into GCS it is straightforward to load it into BigQuery Let's load the model into a BigQuery dataset named advdata (create it if necessary) End of explanation """ %%bigquery SELECT output_0 FROM ML.PREDICT(MODEL advdata.effnet_image_embed,( SELECT 'gs://gcs-public-data--met/634108/0.jpg' AS filename)) """ Explanation: From the BigQuery web console, click on "schema" tab for the newly loaded model. You will see that the input is a string called filename and the output is called output_0. The model is computationally expensive. End of explanation """
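Once the embeddings are extracted, image similarity typically reduces to cosine similarity between embedding vectors. In BigQuery this would be expressed in SQL over the output_0 arrays; the pure-Python version below is just to show the formula:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors:
    dot(u, v) / (||u|| * ||v||). Assumes equal-length, nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine_similarity([3.0, 4.0], [3.0, 4.0]))  # identical vectors -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors -> 0.0
```

Nearest-neighbor image search then amounts to ranking candidate images by this score against a query embedding; clustering (e.g. k-means over the embedding vectors) uses the same representation.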
tclaudioe/Scientific-Computing
SC2/U1_EigenWorld.ipynb
bsd-3-clause
import numpy as np
from scipy import linalg
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: <center> <h1> ILI286 - Computación Científica II </h1> <h2> Eigenvalues and Eigenvectors </h2> <h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2> <h2> Version: 1.14</h2> </center>
Table of Contents
Introduction
Theoretical Background
Algorithms and Implementations
Power Iteration
Inverse Power Iteration
Rayleigh Quotient Iteration
SciPy Eigenvalue
Application Problem
Acknowledgements
<div id='intro' />
Introduction
Determining the eigenvalues and eigenvectors of a matrix provides a great deal of information about its characteristics and properties, and it also has many practical applications, such as: convergence analysis of dynamical systems, PCA (Principal Component Analysis), spectral analysis, Eigenfaces, etc. However, determining the eigenvalues and eigenvectors is not a simple problem. As you must have studied in previous courses, there is a direct method based on computing the roots of the characteristic polynomial $p(x)$. But this problem turns out to be ill-conditioned, that is, small variations in the original matrix $A$ produce large variations in the computed eigenvalues and eigenvectors (see Wilkinson's polynomial, course textbook). In this notebook we will study an iterative method known as Power Iteration (and its extensions), which, similarly to a fixed-point iteration, allows us to obtain the eigen(values/vectors) numerically.
<div id='teo' />
Theoretical Background
The motivation behind PI (Power Iteration) is that matrix multiplication tends to "steer" vectors toward the dominant eigenvector (the one whose eigenvalue has the largest magnitude).
The algorithm in question is as follows:
python
x = 'Initial guess'
for i in range n_iter:
    u = x / ||x||            #normalization step
    x = dot(A,u)             #power iteration step
lamb = dot(u, dot(A, u))     #Rayleigh quotient
return x / ||x||
where a normalization step is added to keep the magnitude of the vector from growing without bound, and the associated eigenvalue is obtained through the Rayleigh quotient:
$$ \lambda = \frac{\mathbf{x}^*\,A\,\mathbf{x}}{\mathbf{x}^*\,\mathbf{x}} $$
To understand why this convergence happens, consider a matrix $A \in \mathbb{R}^{m \times m}$ with real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ such that $|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_m|$, and such that the eigenvectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m \}$ form a basis of $\mathbb{R}^m$. Let $x_0$ be the initial guess; it can be expressed as a linear combination of the eigenvectors $\mathbf{v}_k$:
\begin{align} A x_0 &= c_1 \,A\,\mathbf{v}_1 + \cdots + c_m \,A\,\mathbf{v}_m = c_1 \lambda_1 \mathbf{v}_1 + \cdots + c_m \lambda_m \mathbf{v}_m \\ A^2 x_0 & = c_1 \lambda_1 \,A\,\mathbf{v}_1 + \cdots + c_m \lambda_m \,A\,\mathbf{v}_m = c_1 \lambda_1^2 \mathbf{v}_1 + \cdots + c_m \lambda_m^2 \mathbf{v}_m \\ \vdots &= \vdots \\ A^k x_0 &= c_1 \lambda_1^k \mathbf{v}_1 + \cdots + c_m \lambda_m^k \mathbf{v}_m \end{align}
Factoring $\lambda_1^k$ out of the last result gives:
$$ \frac{A^k x_0}{\lambda_1^k} = c_1 \mathbf{v}_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k \mathbf{v}_2 + \cdots + c_m \left(\frac{\lambda_m}{\lambda_1}\right)^k \mathbf{v}_m$$
Since $|\lambda_1|>|\lambda_i| \ \ \forall i \neq 1$, as $k \rightarrow \infty$ every term except the first tends to zero, with convergence rate $S \leq |\lambda_2/\lambda_1|$. The result is a vector that is a multiple of the dominant eigenvector.
Note: For more details see: Numerical Analysis, Timothy Sauer, Chapter 12: Eigenvalues and Singular Values <div id='alg' /> Algorithms and Implementations Libraries used during the class End of explanation """ A = np.array([[1, 0.5],[0.5, 1]]) x = np.array([1.,0.]) A = np.array([[1., 0.5,-0.1],[0.5, 1.,10.0],[2.,3.,5.]]) x = np.array([1.,0.,0.]) print("A =\n",A) print("x =",x) """ Explanation: Test matrix and vector End of explanation """ def power_iteration(A, x, k, verbose=False): """ Program 12.1 Power iteration Computes dominant eigenvector of square matrix Input: matrix A, initial (nonzero) vector x, number of steps k Output: dominant eigenvalue lam, eigenvector u """ if verbose: print("Power Iteration Method\n%s"%('='*80)) for j in range(k): u = x/np.linalg.norm(x) x = np.dot(A, u) lam = np.dot(u, x) # not really necessary to compute it at each iteration if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T))) u = x/np.linalg.norm(x) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T))) return (lam, u) # Testing algorithm lam, u = power_iteration(A, x, 20, verbose=True) print("lambda = {0}".format(lam)) print("u (dominant eigenvector) = {0}".format(u)) """ Explanation: <div id='pi' /> Power Iteration The code for the classical Power Iteration algorithm is given below. Try changing the matrices and the algorithm's parameters. End of explanation """ def inverse_power_iteration(A, x, s, k, verbose=False): """ Program 12.2 Inverse Power iteration Computes eigenvector of square matrix nearest to input s Input: matrix A, initial (nonzero) vector x, shift s, number of steps k Output: dominant eigenvalue lam, eigenvector of inv(A-sI) """ if verbose: print("Inverse Power Iteration Method\n%s"%('='*80)) As = A - s*np.eye(*A.shape) for j in range(k): u = x/np.linalg.norm(x) x = np.linalg.solve(As, u) # Critical line! 
lam = np.dot(u.T, x) if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,1./lam+s,str(u.T))) u = x/np.linalg.norm(x) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,1./lam+s,str(u.T))) return (1./lam+s, u) # Testing algorithm lam, u = inverse_power_iteration(A, x, s=1./4, k=10, verbose=True) print("lambda = {0}".format(lam)) print("v = {0}".format(u)) """ Explanation: <div id='invpi' /> Inverse Power Iteration One drawback of the previous algorithm is that it only finds the dominant eigenvalue and eigenvector. So how do we find the rest? To answer this question we need to examine two important properties: The eigenvalues of the inverse matrix $A^{-1}$ are the reciprocals of the eigenvalues of $A$, that is: $\lambda_1^{-1}, \lambda_2^{-1}, \ldots , \lambda_m^{-1}$. The eigenvectors remain unchanged. The eigenvalues of the shifted matrix $A - sI$ are: $\lambda_1-s, \lambda_2-s, \ldots, \lambda_m-s$. Likewise, the eigenvectors remain unchanged. Exercise: Prove these properties! The idea, then, is to apply a shift $\widetilde{s}$ close to some eigenvalue $\lambda_k$, and run PI on $(A - \widetilde{s}I)^{-1}$. Then: $$ |\lambda_k - \widetilde{s}| < |\lambda_i - \widetilde{s}| \leftrightarrow \bigg| \frac{1}{\lambda_k - \widetilde{s}} \bigg| > \bigg| \frac{1}{\lambda_i - \widetilde{s}} \bigg| \ \ \forall i \neq k \ $$ so $\frac{1}{\lambda_k - \widetilde{s}}$ is the dominant eigenvalue of $(A - \widetilde{s}\,I)^{-1}$, with eigenvector $\mathbf{v}_k$. Note that, by the properties stated above, the eigenvectors remain unaltered. 
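Both spectral properties can be verified numerically with a quick check (added here, not in the original notebook; the random symmetric matrix and the shift value are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                      # symmetric -> real eigenvalues
s = 0.3                          # an arbitrary shift for the check

eigs = np.sort(np.linalg.eigvalsh(A))

# Property 1: eigenvalues of inv(A) are the reciprocals of those of A
eigs_inv = np.sort(np.linalg.eigvalsh(np.linalg.inv(A)))
assert np.allclose(eigs_inv, np.sort(1.0 / eigs))

# Property 2: eigenvalues of A - s*I are the eigenvalues of A shifted by s
eigs_shift = np.sort(np.linalg.eigvalsh(A - s * np.eye(4)))
assert np.allclose(eigs_shift, eigs - s)

print("both properties hold numerically")
```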
The idea above is reflected in the algorithm implemented below: End of explanation """ def rqi(A, x, k, verbose=False): """ Program 12.3 Rayleigh Quotient Iteration Input: matrix A, initial (nonzero) vector x, number of steps k Output: eigenvalue lam, eigenvector of inv(A-sI) """ if verbose: print("Rayleigh Quotient Iteration\n%s"%('='*80)) for j in range(k): u = x/np.linalg.norm(x) lam = np.dot(u.T, np.dot(A, u)) try: x = np.linalg.solve(A - lam*np.eye(*A.shape), u) except np.linalg.LinAlgError: break if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T))) u = x/np.linalg.norm(x) lam = float(np.dot(u.T, np.dot(A, u))) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T))) return (lam, u) """ Explanation: <div id='rq' /> Rayleigh Quotient Iteration As analyzed above, PI and Inverse PI converge linearly with convergence rate $S \approx \frac{\lambda_2}{\lambda_1}$. We also know that Inverse PI converges towards the eigenvalue closest to the shift, and that the closer the shift is to that eigenvalue, the faster the convergence. The idea of RQI is then the following: if at each iteration we have an approximation of the eigenvalue we are looking for, we can use that approximation as the shift $s$; since the shift is closer to the eigenvalue, convergence is accelerated. That approximate value is obtained with the Rayleigh quotient, so the shift is updated with this quotient at every iteration. This produces the following trade-off: convergence becomes quadratic (in general) and cubic for symmetric matrices, but at the cost of having to solve a different linear system at every iteration. An implementation of RQI is presented below: End of explanation """ # Testing algorithm lam, v = rqi(A, x, k=2) print("lambda = {0}".format(lam)) print("v = {0}".format(v)) """ Explanation: Questions: 1. 
Why are the try and except in lines 11 and 13 necessary? What does it mean for the system to be unsolvable? 2. As you can observe, RQI does not take a shift as a parameter. Which eigenvalue/eigenvector will it converge to? How can we force/guide it towards a different eigenvalue/eigenvector? End of explanation """ # Full matrices from scipy import linalg as LA N = 3 Aux = np.random.rand(N,N) A = Aux + Aux.T # symmetric, so we'll deal with real eigs. print(LA.eigvals(A)) # Only the eigenvalues, A not necessarily symmetric print("*"*80) print(LA.eigvalsh(A)) # Only the eigenvalues, A symmetric print("*"*80) print(LA.eig(A)) # All the eigenvalues and eigenvectors, A not necessarily symmetric print("*"*80) print(LA.eigh(A)) # All the eigenvalues and eigenvectors, A symmetric (faster) print("*"*80) lambdas, V = LA.eigh(A) # All the eigenvalues and eigenvectors, A symmetric (faster) l1 = lambdas[0] v1 = V[:,0] print(l1) print(v1) print(np.dot(A, v1)) print(l1*v1) """ Explanation: <div id='sp' /> $\texttt{SciPy}$ Eigenvalue The scipy library implements algorithms to compute eigenvalues and eigenvectors. The available options are: in scipy.linalg: eigvals/eigvalsh/eigvals_banded, eig/eigh/eig_banded; in scipy.sparse.linalg: eigen, eigs, eigsh. In general it is preferable to use the functions from scipy rather than from numpy. The numpy library does an excellent job of providing numerical array types, but it contains only some numerical algorithms, and not necessarily the fastest ones. Below we show how to use some of these functions. End of explanation """
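The sparse routines mentioned above are not demonstrated in the notebook; here is a minimal sketch (added, with the test matrix and the choice of k=3 being illustrative assumptions) of scipy.sparse.linalg.eigsh on a sparse symmetric matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse symmetric tridiagonal matrix (second-difference operator)
n = 100
L = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')

# k largest-magnitude eigenpairs, computed without densifying the matrix
vals, vecs = eigsh(L, k=3, which='LM')

# Residual check: ||L v - lambda v|| should be tiny for each pair
for lam, v in zip(vals, vecs.T):
    print(lam, np.linalg.norm(L @ v - lam * v))
```

Internally eigsh uses ARPACK's Lanczos iteration, which is itself a refinement of the power-iteration idea developed in this notebook.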
liyangbit/liyangbit.github.io
ipynb/zhilian.ipynb
mit
import pymongo import pandas as pd import matplotlib.pyplot as plt import numpy as np % matplotlib inline plt.style.use('ggplot') # 解决matplotlib显示中文问题 plt.rcParams['font.sans-serif'] = ['SimHei'] # 指定默认字体 plt.rcParams['axes.unicode_minus'] = False # 解决保存图像是负号'-'显示为方块的问题 """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#读取数据" data-toc-modified-id="读取数据-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>读取数据</a></div><div class="lev1 toc-item"><a href="#数据整理" data-toc-modified-id="数据整理-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>数据整理</a></div><div class="lev2 toc-item"><a href="#将str格式的日期变为-datatime" data-toc-modified-id="将str格式的日期变为-datatime-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>将str格式的日期变为 datatime</a></div><div class="lev2 toc-item"><a href="#筛选月薪格式为“XXXX-XXXX”的信息" data-toc-modified-id="筛选月薪格式为“XXXX-XXXX”的信息-22"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>筛选月薪格式为“XXXX-XXXX”的信息</a></div><div class="lev2 toc-item"><a href="#分割月薪字段,分别获取月薪的下限值和上限值" data-toc-modified-id="分割月薪字段,分别获取月薪的下限值和上限值-23"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>分割月薪字段,分别获取月薪的下限值和上限值</a></div><div class="lev1 toc-item"><a href="#对全国范围内的职位进行分析" data-toc-modified-id="对全国范围内的职位进行分析-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>对全国范围内的职位进行分析</a></div><div class="lev2 toc-item"><a href="#主要城市的招聘职位数量分布情况" data-toc-modified-id="主要城市的招聘职位数量分布情况-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>主要城市的招聘职位数量分布情况</a></div><div class="lev1 toc-item"><a href="#筛选北京和长沙的职位" data-toc-modified-id="筛选北京和长沙的职位-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>筛选北京和长沙的职位</a></div><div class="lev2 toc-item"><a href="#月薪分布情况(全国)" data-toc-modified-id="月薪分布情况(全国)-41"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>月薪分布情况(全国)</a></div><div class="lev2 toc-item"><a href="#相关技能要求" data-toc-modified-id="相关技能要求-42"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>相关技能要求</a></div><div class="lev1 toc-item"><a href="#北京" data-toc-modified-id="北京-5"><span 
class="toc-item-num">5&nbsp;&nbsp;</span>北京</a></div><div class="lev2 toc-item"><a href="#月薪分布情况" data-toc-modified-id="月薪分布情况-51"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>月薪分布情况</a></div><div class="lev2 toc-item"><a href="#相关技能要求" data-toc-modified-id="相关技能要求-52"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>相关技能要求</a></div><div class="lev1 toc-item"><a href="#长沙" data-toc-modified-id="长沙-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>长沙</a></div><div class="lev2 toc-item"><a href="#相关技能要求" data-toc-modified-id="相关技能要求-61"><span class="toc-item-num">6.1&nbsp;&nbsp;</span>相关技能要求</a></div> End of explanation """ client = pymongo.MongoClient('localhost') db = client['zhilian'] keyword = 'python' table = db[keyword] columns = ['zwmc', 'gsmc', 'zwyx', 'gbsj', 'gzdd', 'fkl', 'brief', 'zw_link', '_id', 'save_date'] # url_set = set([records['zw_link'] for records in table.find()]) # print(url_set) df = pd.DataFrame([records for records in table.find()], columns=columns) # columns_update = ['职位名称', # '公司名称', # '职位月薪', # '公布时间', # '工作地点', # '反馈率', # '招聘简介', # '网页链接', # '_id', # '信息保存日期'] # df.columns = columns_update print('总行数为:{}行'.format(df.shape[0])) df.head(2) """ Explanation: 读取数据 End of explanation """ df['save_date'] = pd.to_datetime(df['save_date']) print(df['save_date'].dtype) # df['save_date'] """ Explanation: 数据整理 将str格式的日期变为 datatime End of explanation """ df_clean = df[['zwmc', 'gsmc', 'zwyx', 'gbsj', 'gzdd', 'fkl', 'brief', 'zw_link', 'save_date']] # 对月薪的数据进行筛选,选取格式为“XXXX-XXXX”的信息,方面后续分析 df_clean = df_clean[df_clean['zwyx'].str.contains('\d+-\d+', regex=True)] print('总行数为:{}行'.format(df_clean.shape[0])) # df_clean.head() """ Explanation: 筛选月薪格式为“XXXX-XXXX”的信息 End of explanation """ # http://stackoverflow.com/questions/14745022/pandas-dataframe-how-do-i-split-a-column-into-two # http://stackoverflow.com/questions/20602947/append-column-to-pandas-dataframe # df_temp.loc[: ,'zwyx_min'],df_temp.loc[: , 'zwyx_max'] = df_temp.loc[: , 
'zwyx'].str.split('-',1).str #会有警告 s_min, s_max = df_clean.loc[: , 'zwyx'].str.split('-',1).str df_min = pd.DataFrame(s_min) df_min.columns = ['zwyx_min'] df_max = pd.DataFrame(s_max) df_max.columns = ['zwyx_max'] df_clean_concat = pd.concat([df_clean, df_min, df_max], axis=1) # df_clean['zwyx_min'].astype(int) df_clean_concat['zwyx_min'] = pd.to_numeric(df_clean_concat['zwyx_min']) df_clean_concat['zwyx_max'] = pd.to_numeric(df_clean_concat['zwyx_max']) # print(df_clean['zwyx_min'].dtype) print(df_clean_concat.dtypes) df_clean_concat.head(2) """ Explanation: 分割月薪字段,分别获取月薪的下限值和上限值 End of explanation """ df_clean_concat.sort_values('zwyx_min',inplace=True) # df_clean_concat.tail() """ Explanation: 将数据信息按职位月薪进行排序 End of explanation """ # 判断爬取的数据是否有重复值 print(df_clean_concat[df_clean_concat.duplicated('zw_link')==True]) # df_clean_concat.to_csv('zhilian_python_update.csv') """ Explanation: 判断爬取的数据是否有重复值 End of explanation """ # from IPython.core.display import display, HTML ADDRESS = [ '北京', '上海', '广州', '深圳', '天津', '武汉', '西安', '成都', '大连', '长春', '沈阳', '南京', '济南', '青岛', '杭州', '苏州', '无锡', '宁波', '重庆', '郑州', '长沙', '福州', '厦门', '哈尔滨', '石家庄', '合肥', '惠州', '太原', '昆明', '烟台', '佛山', '南昌', '贵阳', '南宁'] df_city = df_clean_concat.copy() # 由于工作地点的写上,比如北京,包含许多地址为北京-朝阳区等 # 可以用替换的方式进行整理,这里用pandas的replace()方法 for city in ADDRESS: df_city['gzdd'] = df_city['gzdd'].replace([(city+'.*')],[city],regex=True) # 针对全国主要城市进行分析 df_city_main = df_city[df_city['gzdd'].isin(ADDRESS)] df_city_main_count = df_city_main.groupby('gzdd')['zwmc','gsmc'].count() df_city_main_count['gsmc'] = df_city_main_count['gsmc']/(df_city_main_count['gsmc'].sum()) df_city_main_count.columns = ['number', 'percentage'] # 按职位数量进行排序 df_city_main_count.sort_values(by='number', ascending=False, inplace=True) # 添加辅助列,标注城市和百分比,方面在后续绘图时使用 df_city_main_count['label']=df_city_main_count.index+ ' '+ ((df_city_main_count['percentage']*100).round()).astype('int').astype('str')+'%' print(type(df_city_main_count)) # 职位数量最多的Top10城市的列表 
print(df_city_main_count.head(10)) """ Explanation: 从上述结果可看出,数据是没有重复的。 对全国范围内的职位进行分析 主要城市的招聘职位数量分布情况 End of explanation """ address_bj_cs = ['北京', '长沙'] df_city_bj_cs = df_city[df_city['gzdd'].isin(address_bj_cs)] # df_city_bj_cs.head() df_city_bj_cs.to_csv('zhilian_'+keyword+'_bj_cs.csv') from matplotlib import cm label = df_city_main_count['label'] sizes = df_city_main_count['number'] # 设置绘图区域大小 fig, axes = plt.subplots(figsize=(10,6),ncols=2) ax1, ax2 = axes.ravel() colors = cm.PiYG(np.arange(len(sizes))/len(sizes)) # colormaps: Paired, autumn, rainbow, gray,spring,Darks # 由于城市数量太多,饼图中不显示labels和百分比 patches, texts = ax1.pie(sizes,labels=None, shadow=False, startangle=0, colors=colors) ax1.axis('equal') ax1.set_title('职位数量分布', loc='center') # ax2 只显示图例(legend) ax2.axis('off') ax2.legend(patches, label, loc='center left', fontsize=9) plt.savefig('job_distribute.jpg') plt.show() """ Explanation: 筛选北京和长沙的职位 End of explanation """ from matplotlib.ticker import FormatStrFormatter fig, (ax1, ax2) = plt.subplots(figsize=(10,8), nrows=2) x_pos = list(range(df_clean_concat.shape[0])) y1 = df_clean_concat['zwyx_min'] ax1.plot(x_pos, y1) ax1.set_title('Trend of min monthly salary in China', size=14) ax1.set_xticklabels('') ax1.set_ylabel('min monthly salary(RMB)') bins = [3000,6000, 9000, 12000, 15000, 18000, 21000, 24000, 100000] counts, bins, patches = ax2.hist(y1, bins, normed=1, histtype='bar', facecolor='g', rwidth=0.8) ax2.set_title('Hist of min monthly salary in China', size=14) ax2.set_yticklabels('') # ax2.set_xlabel('min monthly salary(RMB)') # http://stackoverflow.com/questions/6352740/matplotlib-label-each-bin ax2.set_xticks(bins) #将bins设置为xticks ax2.set_xticklabels(bins, rotation=-90) # 设置为xticklabels的方向 # Label the raw counts and the percentages below the x-axis... 
bin_centers = 0.5 * np.diff(bins) + bins[:-1] for count, x in zip(counts, bin_centers): # # Label the raw counts # ax2.annotate(str(count), xy=(x, 0), xycoords=('data', 'axes fraction'), # xytext=(0, -70), textcoords='offset points', va='top', ha='center', rotation=-90) # Label the percentages percent = '%0.0f%%' % (100 * float(count) / counts.sum()) ax2.annotate(percent, xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -40), textcoords='offset points', va='top', ha='center', rotation=-90, color='b', size=14) fig.savefig('salary_quanguo_min.jpg') """ Explanation: 月薪分布情况(全国) End of explanation """ df_zwyx_adjust = df_clean_concat[df_clean_concat['zwyx_min']<=20000] fig, (ax1, ax2) = plt.subplots(figsize=(10,8), nrows=2) x_pos = list(range(df_zwyx_adjust.shape[0])) y1 = df_zwyx_adjust['zwyx_min'] ax1.plot(x_pos, y1) ax1.set_title('Trend of min monthly salary in China (adjust)', size=14) ax1.set_xticklabels('') ax1.set_ylabel('min monthly salary(RMB)') bins = [3000,6000, 9000, 12000, 15000, 18000, 21000] counts, bins, patches = ax2.hist(y1, bins, normed=1, histtype='bar', facecolor='g', rwidth=0.8) ax2.set_title('Hist of min monthly salary in China (adjust)', size=14) ax2.set_yticklabels('') # ax2.set_xlabel('min monthly salary(RMB)') # http://stackoverflow.com/questions/6352740/matplotlib-label-each-bin ax2.set_xticks(bins) #将bins设置为xticks ax2.set_xticklabels(bins, rotation=-90) # 设置为xticklabels的方向 # Label the raw counts and the percentages below the x-axis... 
bin_centers = 0.5 * np.diff(bins) + bins[:-1] for count, x in zip(counts, bin_centers): # # Label the raw counts # ax2.annotate(str(count), xy=(x, 0), xycoords=('data', 'axes fraction'), # xytext=(0, -70), textcoords='offset points', va='top', ha='center', rotation=-90) # Label the percentages percent = '%0.0f%%' % (100 * float(count) / counts.sum()) ax2.annotate(percent, xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -40), textcoords='offset points', va='top', ha='center', rotation=-90, color='b', size=14) fig.savefig('salary_quanguo_min_adjust.jpg') """ Explanation: 不考虑部分极值后,分析月薪分布情况 End of explanation """ brief_list = list(df_clean_concat['brief']) brief_str = ''.join(brief_list) print(type(brief_str)) # print(brief_str) with open('brief_quanguo.txt', 'w', encoding='utf-8') as f: f.write(brief_str) """ Explanation: 相关技能要求 End of explanation """ df_beijing = df_clean_concat[df_clean_concat['gzdd'].str.contains('北京.*', regex=True)] # df_beijing.to_excel('zhilian_kw_python_bj.xlsx') print('总行数为:{}行'.format(df_beijing.shape[0])) # df_beijing.head() from matplotlib.ticker import FormatStrFormatter fig, (ax1, ax2) = plt.subplots(figsize=(10,8), nrows=2) x_pos = list(range(df_beijing.shape[0])) y1 = df_beijing['zwyx_min'] y2 = df_beijing['zwyx_max'] ax1.plot(x_pos, y1) ax1.set_title('Trend of min monthly salary in Beijing', size=14) ax1.set_xticklabels('') ax1.set_ylabel('min monthly salary(RMB)') bins = [3000,6000, 9000, 12000, 15000, 18000, 21000, 24000, 100000] counts, bins, patches = ax2.hist(y1, bins, normed=1, histtype='bar', facecolor='g', rwidth=0.8) ax2.set_title('Hist of min monthly salary in Beijing', size=14) ax2.set_yticklabels('') # ax2.set_xlabel('min monthly salary(RMB)') # http://stackoverflow.com/questions/6352740/matplotlib-label-each-bin ax2.set_xticks(bins) #将bins设置为xticks ax2.set_xticklabels(bins, rotation=-90) # 设置为xticklabels的方向 # Label the raw counts and the percentages below the x-axis... 
bin_centers = 0.5 * np.diff(bins) + bins[:-1] for count, x in zip(counts, bin_centers): # # Label the raw counts # ax2.annotate(str(count), xy=(x, 0), xycoords=('data', 'axes fraction'), # xytext=(0, -70), textcoords='offset points', va='top', ha='center', rotation=-90) # Label the percentages percent = '%0.0f%%' % (100 * float(count) / counts.sum()) ax2.annotate(percent, xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -40), textcoords='offset points', va='top', ha='center', rotation=-90, color='blue', size=14) fig.savefig('salary_beijing_min.jpg') """ Explanation: 北京 月薪分布情况 End of explanation """ brief_list_bj = list(df_beijing['brief']) brief_str_bj = ''.join(brief_list_bj) print(type(brief_str_bj)) # print(brief_str_bj) # with open('brief_beijing.txt', 'w', encoding='utf-8') as f: # f.write(brief_str_bj) """ Explanation: 相关技能要求 End of explanation """ df_changsha = df_clean_concat[df_clean_concat['gzdd'].str.contains('长沙.*', regex=True)] # df_changsha = pd.DataFrame(df_changsha, ignore_index=True) # df_changsha.to_excel('zhilian_kw_python_cs.xlsx') print('总行数为:{}行'.format(df_changsha.shape[0])) # df_changsha.tail() from matplotlib.ticker import FormatStrFormatter fig, (ax1, ax2) = plt.subplots(figsize=(10,8), nrows=2) x_pos = list(range(df_changsha.shape[0])) y1 = df_changsha['zwyx_min'] ax1.plot(x_pos, y1) ax1.set_title('Trend of min monthly salary in Changsha', size=14) ax1.set_xticklabels('') ax1.set_ylabel('min monthly salary(RMB)') bins = [3000,6000, 9000, 12000, 15000, 18000, 21000, 24000, 50000] counts, bins, patches = ax2.hist(y1, bins, normed=1, histtype='bar', facecolor='g', rwidth=0.8) ax2.set_title('Hist of min monthly salary in Changsha', size=14) ax2.set_yticklabels('') # ax2.set_xlabel('min monthly salary(RMB)') # http://stackoverflow.com/questions/6352740/matplotlib-label-each-bin ax2.set_xticks(bins) #将bins设置为xticks ax2.set_xticklabels(bins, rotation=-90) # 设置为xticklabels的方向 # Label the raw counts and the percentages below the x-axis... 
bin_centers = 0.5 * np.diff(bins) + bins[:-1] for count, x in zip(counts, bin_centers): # # Label the raw counts # ax2.annotate(str(count), xy=(x, 0), xycoords=('data', 'axes fraction'), # xytext=(0, -70), textcoords='offset points', va='top', ha='center', rotation=-90) # Label the percentages percent = '%0.0f%%' % (100 * float(count) / counts.sum()) ax2.annotate(percent, xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -40), textcoords='offset points', va='top', ha='center', rotation=-90, color='blue', size=14) fig.savefig('salary_changsha_min.jpg') """ Explanation: 长沙 End of explanation """ brief_list_cs = list(df_changsha['brief']) brief_str_cs = ''.join(brief_list_cs) print(type(brief_str_cs)) # print(brief_str_cs) # with open('brief_changsha.txt', 'w', encoding='utf-8') as f: # f.write(brief_str_cs) """ Explanation: 相关技能要求 End of explanation """
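Beyond dumping the brief text to a file, a simple way to surface the skill keywords from these job descriptions is to count their occurrences directly (a sketch added here, not in the original notebook — the keyword list, the sample string standing in for brief_str_cs, and the counting scheme are illustrative assumptions):

```python
import re
from collections import Counter

def count_skills(text, keywords):
    """Count case-insensitive whole-word occurrences of each keyword."""
    counts = Counter()
    for kw in keywords:
        counts[kw] = len(re.findall(r'(?<![A-Za-z])' + re.escape(kw) + r'(?![A-Za-z])',
                                    text, flags=re.IGNORECASE))
    return counts.most_common()

# Illustrative keyword list and sample text (stand-ins for brief_str_cs)
keywords = ['Python', 'Linux', 'MySQL', 'Django', 'Redis', 'Shell']
sample = '熟悉Python开发,了解django框架;熟悉Linux环境和mysql数据库;会python爬虫'
print(count_skills(sample, keywords))
```

The lookarounds keep e.g. 'Shell' from matching inside longer ASCII words while still matching keywords embedded directly in Chinese text.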
mne-tools/mne-tools.github.io
0.18/_downloads/11f39f61bd7f4cfd5791b0d10da462f2/plot_eeg_erp.ipynb
bsd-3-clause
import mne from mne.datasets import sample """ Explanation: EEG processing and Event Related Potentials (ERPs) For a generic introduction to the computation of ERP and ERF see tut_epoching_and_averaging. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # these data already have an EEG average reference raw = mne.io.read_raw_fif(raw_fname, preload=True) """ Explanation: Setup for reading the raw data End of explanation """ raw.pick_types(meg=False, eeg=True, eog=True) """ Explanation: Let's restrict the data to the EEG channels End of explanation """ print(raw.info) """ Explanation: By looking at the measurement info you will see that we now have 59 EEG channels and 1 EOG channel End of explanation """ raw.set_channel_types(mapping={'EOG 061': 'eeg'}) print(raw.info) """ Explanation: In practice it's quite common to have some EEG channels that are actually EOG channels. To change a channel type you can use the :func:mne.io.Raw.set_channel_types method. For example, to treat an EOG channel as EEG you can change its type using End of explanation """ raw.rename_channels(mapping={'EOG 061': 'EOG'}) """ Explanation: And to change the name of the EOG channel End of explanation """ raw.set_channel_types(mapping={'EOG': 'eog'}) """ Explanation: Let's reset the EOG channel back to EOG type. End of explanation """ print(raw.info['chs'][0]['loc']) """ Explanation: The EEG channels in the sample dataset already have locations. These locations are available in the 'loc' of each channel description. For the first channel we get End of explanation """ raw.plot_sensors() raw.plot_sensors('3d') # in 3D """ Explanation: And it's actually possible to plot the channel locations using :func:mne.io.Raw.plot_sensors. 
End of explanation """ montage = mne.channels.read_montage('standard_1020') print(montage) """ Explanation: To apply a montage on your data use the set_montage method. Here we don't actually call this function as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting EEG reference Let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization. End of explanation """ raw_no_ref, _ = mne.set_eeg_reference(raw, []) """ Explanation: We next define Epochs and compute an ERP for the left auditory condition. End of explanation """ reject = dict(eeg=180e-6, eog=150e-6) event_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5 events = mne.read_events(event_fname) epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax, reject=reject) evoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average() del raw_no_ref # save memory title = 'EEG Original reference' evoked_no_ref.plot(titles=dict(eeg=title), time_unit='s') evoked_no_ref.plot_topomap(times=[0.1], size=3., title=title, time_unit='s') """ Explanation: Average reference: This is normally added by default, but can also be added explicitly. 
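What the average reference actually does can be illustrated with plain numpy, independent of MNE (a sketch added here; the random array is a stand-in for a channels-by-samples EEG data matrix): re-referencing to the average subtracts the instantaneous mean across channels from every channel.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.standard_normal((5, 1000))  # 5 hypothetical EEG channels x 1000 samples

# Average reference: subtract the mean across channels at each time point
data_car = data - data.mean(axis=0, keepdims=True)

# After re-referencing, the across-channel mean is (numerically) zero everywhere
print(np.abs(data_car.mean(axis=0)).max())
```

MNE implements the same operation (as a projection when projection=True), which is why the re-referenced channels always sum to zero at every sample.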
End of explanation """ raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002']) evoked_custom = mne.Epochs(raw_custom, **epochs_params).average() del raw_custom # save memory title = 'EEG Custom reference' evoked_custom.plot(titles=dict(eeg=title), time_unit='s') evoked_custom.plot_topomap(times=[0.1], size=3., title=title, time_unit='s') """ Explanation: Custom reference: Use the mean of channels EEG 001 and EEG 002 as a reference End of explanation """ event_id = {'left/auditory': 1, 'right/auditory': 2, 'left/visual': 3, 'right/visual': 4} epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax, reject=reject) epochs = mne.Epochs(raw, **epochs_params) print(epochs) """ Explanation: Evoked arithmetic (e.g. differences) Trial subsets from Epochs can be selected using 'tags' separated by '/'. Evoked objects support basic arithmetic. First, we create an Epochs object containing 4 conditions. End of explanation """ left, right = epochs["left"].average(), epochs["right"].average() # create and plot difference ERP joint_kwargs = dict(ts_args=dict(time_unit='s'), topomap_args=dict(time_unit='s')) mne.combine_evoked([left, -right], weights='equal').plot_joint(**joint_kwargs) """ Explanation: Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs. End of explanation """ aud_l = epochs["auditory", "left"].average() aud_r = epochs["auditory", "right"].average() vis_l = epochs["visual", "left"].average() vis_r = epochs["visual", "right"].average() all_evokeds = [aud_l, aud_r, vis_l, vis_r] print(all_evokeds) """ Explanation: This is an equal-weighting difference. If you have imbalanced trial numbers, you could also consider either equalizing the number of events per condition (using :meth:epochs.equalize_event_counts &lt;mne.Epochs.equalize_event_counts&gt;). As an example, first, we create individual ERPs for each condition. 
End of explanation """ all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())] print(all_evokeds) # Then, we construct and plot an unweighted average of left vs. right trials # this way, too: mne.combine_evoked( [aud_l, -aud_r, vis_l, -vis_r], weights='equal').plot_joint(**joint_kwargs) """ Explanation: This can be simplified with a Python list comprehension: End of explanation """ # If they are stored in a list, they can be easily averaged, for example, # for a grand average across subjects (or conditions). grand_average = mne.grand_average(all_evokeds) mne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds) # If Evokeds objects are stored in a dictionary, they can be retrieved by name. all_evokeds = dict((cond, epochs[cond].average()) for cond in event_id) print(all_evokeds['left/auditory']) # Besides explicit access, this can be used, for example, to set titles. for cond in all_evokeds: all_evokeds[cond].plot_joint(title=cond, **joint_kwargs) """ Explanation: Often, it makes sense to store Evoked objects in a dictionary or a list - either different conditions, or different subjects. End of explanation """