fluffy-hamster/A-Beginners-Guide-to-Python
A Beginners Guide to Python/05. Variables & Assignment.ipynb
mit
the_number_four = 4 print(the_number_four) """ Explanation: Variables & Assignment What is assignment? Well, the short (and simple) answer is to say assignment is the process whereby we give a value a unique name. Once something has a name, we can use it later on. Please note that this is a bit of an oversimplification; I should have said a ‘unique’ name within the ‘scope’ of the variable, but for that sentence to make sense you would need to understand the concept of 'scope' in Python. So for now, let's ignore the intricacies and keep it simple: a variable is a unique name that has some value attached to it. The Syntax: {Unique Variable Name} = {Some Value} Okay, let's give it a go! End of explanation """ x = 4 x = 5 print(x) """ Explanation: QUESTION: Do you have any guesses as to why a name needs to be unique? Well, if we have two variables called "x" and they have different values, how is Python supposed to know which "x" you actually want? Remember the computer is stupid; it cannot figure out what you 'obviously' meant. End of explanation """ the_number_four = 4 print("the_number_four") """ Explanation: So what is going on here? Well, Python first defines ‘x’ as equal to 4. In the very next line Python says "oh, x is equal to 5 now", and it "forgets" the old value of 4. In short, there were never two versions of ‘x’ here. Rather, there is one ‘x’, and it used to equal 4 and now it equals 5. So basically, if you reuse names you will lose your data. And that's not a weird bug or quirk; Python was designed to be like this. QUESTION: What happens if we use quotation marks around "the_number_four"? End of explanation """ print(the_number_five) the_number_five = 5 """ Explanation: So...what happened? Well, when we use quotation marks Python interprets that as a string. But without quotes, Python looks at the_number_four and says: "This looks like a variable name to me, let's go see if it has been defined somewhere above". “Above” is an important word in that sentence.
Why is that? Well, let’s see what happens if we define the thing we want to print below the print statement, End of explanation """ the_number_five = 5 print(the_number_five) """ Explanation: You may recall in the very first lecture we got a name error when we tried to call Print() when we actually meant print(). We got a name error in this case too, but the cause is a bit different. What happened here is that Python executes code sequentially, line by line: line 1 is executed before line 2, and so on. So let’s say on line 10 of some piece of code we have a variable called “Y”. Python will then check lines 1, 2, 3, .., 8, 9, 10 to see if “Y” is defined. If it isn’t, Python throws a 'NameError'. How do we fix this error? Well, in this case we just need to define ‘the number 5’ before we try to print it, like so: End of explanation """ a = 4 a = b # throws a NameError, b is not defined! # So what should we do if we want b to equal a? Well, we would have to write: a = 4 b = a # and now both a and b should equal 4 print(a) print(b) # and lastly, a and b are both numbers, so a + b is the same as 4 + 4 (more on numbers later). print(4 + 4) print(a + b) """ Explanation: Don't confuse assignment with 'Equals' In Python, assignment works left-to-right; unlike the equals symbol in mathematics, assignment doesn't work both ways. In other words: a = b is not the same as b = a Basically, the left-hand side of the "=" symbol is the variable name; the right-hand side is the value. Let me show you quickly... End of explanation """ a = 5 a = a + 5 print(a) """ Explanation: Incrementing Variables Before we move on to the homework I wanted to make a quick little note about "updating" variables. End of explanation """ a = 5 b = 5 a = a + b # 5 + 5 = 10 ('a' therefore now equals 10, not 5) a = a + b # 10 + 5 = 15 print(a) """ Explanation: In the example above we set 'a' to the value 5. In the next line we assign 'a' the value ('a' + 5).
And since 'a' is 5, Python interprets this as 5 + 5, which is of course 10. Here is another example: End of explanation """ long_variable_name = 10 long_variable_name = long_variable_name + 10 # is the same as: long_variable_name += 10 """ Explanation: The final point I'd like to make on this is that since this concept is super-useful, the designers of Python thought it would be a good idea to be able to do this sort of calculation using just two characters: End of explanation """ a = 5 a2 = 5 a += 5 a2 = a2 + 5 print(a, a2) # <--- They are both the same, see!? # "updating" works for strings too: s = "hello" s += " " s += "world" print(s) """ Explanation: 'a += b' functions EXACTLY the same as 'a = a + b'; the former is simply a shortcut that means you can do less typing, which can be useful if something has a long_variable_name. End of explanation """ # my_name code goes here... # kitchen_utensil code goes here... # copy & paste the above print statement """ Explanation: Homework Create a variable called "my_name". Its value should be your name (a string). Create a variable called "kitchen_utensil". Its value should be your favorite kitchen tool (also a string). Personally, I like a good spoon (wink wink). Now type into Python: print("Hello, my name is " + my_name + " and one time, at band-camp, I used a " + kitchen_utensil + ".") End of explanation """
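A minimal sketch of the homework above; the name and utensil values are placeholders, so substitute your own:

```python
my_name = "Ada"             # hypothetical value -- use your own name
kitchen_utensil = "spoon"   # hypothetical value -- use your favorite tool

# The print statement from the homework, built up as one string:
message = ("Hello, my name is " + my_name +
           " and one time, at band-camp, I used a " + kitchen_utensil + ".")
print(message)
```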
Hexiang-Hu/mmds
final/.ipynb_checkpoints/Final-basic-checkpoint.ipynb
mit
## Q2 Solution. import math def hash(x): return math.fmod(3 * x + 2, 11) for i in xrange(1,12): print hash(i) """ Explanation: Q1. Solution 3-shingles for "hello world": hel, ell, llo, lo_, o_w, _wo, wor, orl, rld => 9 in total Q2. Solution End of explanation """ ## Q3 Solution. prob = 1.0 / 10 a = (1 - prob)**4 print a b = (1 - ( 1 - (1 - prob)**2) )**2 print b c = (1 - (1.0 /10 * 1.0 / 9)) print c """ Explanation: Q3. This question involves three different Bloom-filter-like scenarios. Each scenario involves setting to 1 certain bits of a 10-bit array, each bit of which is initially 0. Scenario A: we use one hash function that randomly, and with equal probability, selects one of the ten bits of the array. We apply this hash function to four different inputs and set to 1 each of the selected bits. Scenario B: We use two hash functions, each of which randomly, with equal probability, and independently of the other hash function selects one of the 10 bits of the array. We apply both hash functions to each of two inputs and set to 1 each of the selected bits. Scenario C: We use one hash function that randomly and with equal probability selects two different bits among the ten in the array. We apply this hash function to two inputs and set to 1 each of the selected bits. Let a, b, and c be the expected number of bits set to 1 under scenarios A, B, and C, respectively. Which of the following correctly describes the relationships among a, b, and c? End of explanation """ ## Q5 Solution. import numpy as np vec1 = np.array([2, 1, 1]) vec2 = np.array([10, -7, 1]) print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) """ Explanation: Q4. In this market-basket problem, there are 99 items, numbered 2 to 100. There is a basket for each prime number between 2 and 100. The basket for p contains all and only the items whose numbers are a multiple of p. For example, the basket for 17 contains the following items: {17, 34, 51, 68, 85}. What is the support of the pair of items {12, 30}?
Q4 Solution. support = 2 => {2,4,6,8, ...} & {3, 6, 9,...} Q5. To two decimal places, what is the cosine of the angle between the vectors [2,1,1] and [10,-7,1]? End of explanation """ ## Q6 Solution. # p1 is the probability that they agree at one particular band; # the candidate-pair probability is the complement of no band agreeing p1 = 0.6**2 print 1 - (1 - p1)**3 """ Explanation: Q6. In this question we use six minhash functions, organized as three bands of two rows each, to identify sets of high Jaccard similarity. If two sets have Jaccard similarity 0.6, what is the probability (to two decimal places) that this pair will become a candidate pair? End of explanation """ ## Q7 Solution. p1 = 1 - (1 - .9)**3 p2 = 1 - (1 - .1)**3 print "new LSH is (.4, .6, {}, {})-sensitive family".format(p1, p2) """ Explanation: Q7. Suppose we have a (.4, .6, .9, .1)-sensitive family of functions. If we apply a 3-way OR construction to this family, we get a new family of functions whose sensitivity is: End of explanation """ ## Q9 Solution. M = np.array([[0, 0, 0, .25], [1, 0, 0, .25], [0, 1, 0, .25], [0, 0, 1, .25]]) r = np.array([.25, .25, .25, .25]) for i in xrange(30): r = M.dot(r) print r """ Explanation: Q8. Suppose we have a database of (Class, Student, Grade) facts, each giving the grade the student got in the class. We want to estimate the fraction of students who have gotten A's in at least 10 classes, but we do not want to examine the entire relation, just a sample of 10% of the tuples. We shall hash tuples to 10 buckets, and take only those tuples in the first bucket. But to get a valid estimate of the fraction of students with at least 10 A's, we need to pick our hash key judiciously. To which Attribute(s) of the relation should we apply the hash function? Q8 Solution.
We will need to hash with regard to the Class and Student attributes. Q9. Suppose the Web consists of four pages A, B, C, and D, that form a chain A-->B-->C-->D. We wish to compute the PageRank of each of these pages, but since D is a "dead end," we will "teleport" from D with probability 1 to one of the four pages, each with equal probability. We do not teleport from pages A, B, or C. Assuming the sum of the PageRanks of the four pages is 1, what is the PageRank of page B, correct to two decimal places? End of explanation """ ## Q10 Solution. print 1 - (1 - .3)*(1 - .4) """ Explanation: Q10. Suppose in the AGM model we have four individuals {A,B,C,D} and two communities. Community 1 consists of {A,B,C} and Community 2 consists of {B,C,D}. For Community 1 there is a 30% chance it will cause an edge between any two of its members. For Community 2 there is a 40% chance it will cause an edge between any two of its members. To the nearest two decimal places, what is the probability that there is an edge between B and C? End of explanation """ ##Q12 L = np.array([[-.25, -.5, -.76, -.29, -.03, -.07, -.01], [-.05, -.1, -.15, .20, .26, .51, .77 ]]).T print L V = np.array([[6.74, 0],[0, 5.44]]) print V R = np.array([[-.57, -.11, -.57, -.11, -.57], [-.09, 0.70, -.09, .7, -.09]]) print R print L.dot(V).dot(R) """ Explanation: Q11. X is a dataset of n columns for which we train a supervised Machine Learning algorithm. e is the error of the model measured against a validation dataset. Unfortunately, e is too high because the model has overfitted on the training data X and it doesn't generalize well. We now decide to reduce the model variance by reducing the dimensionality of X, using a Singular Value Decomposition, and using the resulting dataset to train our model. If i is the number of singular values used in the SVD reduction, how does e change as a function of i, for i ∈ {1, 2,...,n}? Q11 Solution.
A Convex Function starts low End of explanation """ X = 0.8 * np.array([[1.0/3, 0, 0], [1.0/3, 0, 0], [1.0/3, 1, 0]]) X += 0.2 * np.array([[.5, .5, .5], [.5, .5, .5], [ 0, 0, 0]]) print X """ Explanation: Q13. Recall that the power iteration does r=X·r until converging, where X is a nxn matrix and n is the number of nodes in the graph. Using the power iteration notation above, what is matrix X value when solving topic sensitive Pagerank with teleport set {0,1} for the following graph? Use beta=0.8. (Recall that the teleport set contains the destination nodes used when teleporting). End of explanation """
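The Q9 power iteration above is written in Python 2 (`print r`, `xrange`); a self-contained Python 3 sketch of the same computation, which converges to PageRank(B) = 0.20:

```python
import numpy as np

# Column-stochastic transition matrix for the chain A->B->C->D,
# where the dead end D teleports to all four pages with probability 1/4 each.
M = np.array([[0, 0, 0, .25],
              [1, 0, 0, .25],
              [0, 1, 0, .25],
              [0, 0, 1, .25]])

r = np.full(4, 0.25)   # start from the uniform distribution
for _ in range(30):    # 30 iterations is plenty for this tiny chain
    r = M.dot(r)

print(r)  # r[1] is the PageRank of page B, approx 0.20
```

The total PageRank mass stays 1 at every step because each column of `M` sums to 1.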
khalido/algorithims
breadth first search.ipynb
gpl-3.0
%matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt from geopy.distance import great_circle from collections import deque """ Explanation: First, we need a graph. A graph is just a bunch of objects, typically called nodes, which are connected to each other. The connections are called edges, and can have a direction, like Tom owes money to Andy, or can be non-directional, like a road connecting two cities. A graph is a collection of nodes and edges. The BFS algo can tell us if two nodes are connected, and finds the shortest path (fewest hops) b/w them. End of explanation """ cols = ["Airport ID", "Name", "City", "Country", "IATA", "ICAO", "Latitude", "Longitude", "Altitude", "Timezone", "DST", "Tz database", "Type", "Source"] airports = pd.read_csv("data/airports.dat", header=None, names=cols, index_col=0) print(f"There are {airports.shape[0]} airports") airports.head() """ Explanation: Data to make a graph with To make the graph interesting, I am using the airlines and route information database from openflights.org. So we have two datasets, one about airports, and one about routes b/w airports. For the purpose of the graph, the airports are the nodes, and the routes are the edges b/w them. The routes have a direction, and the cost or weight of the route is the distance. Nodes aka Airports End of explanation """ # dropping the columns we don't need keep_cols = ["City", "Country", "Latitude", "Longitude"] airports = airports[keep_cols] print(f"There are {airports.shape[0]} airports") airports.head() # getting rid of null values airports = airports.dropna(axis=0) airports.shape """ Explanation: Step one is to get rid of all the info we don't need for our graph.
So for Airports, I am just keeping: - Airport ID, Unique OpenFlights identifier - City - Country - Latitude/Longitude End of explanation """ cols = ["Airline", "Airline ID", "Source airport", "Source Airport ID", "Destination airport", "Dest Airport ID", "Codeshare", "Stops", "Equipment"] routes = pd.read_csv("data/routes.dat", header=None, names=cols) print(f"There are {routes.shape[0]} routes") routes.head() """ Explanation: Edges, aka routes flown The second dataset is the routes flown by airlines. As of January 2012, the OpenFlights/Airline Route Mapper Route Database contains 59036 routes between 3209 airports on 531 airlines spanning the globe, as shown in the map above. End of explanation """ # first drop all rows where stops aren't 0, as we only want direct connections routes = routes[routes["Stops"] == 0] keep_cols = ["Source Airport ID", "Dest Airport ID"] routes = routes[keep_cols] print(f"There are {routes.shape[0]} routes") routes.head(10) """ Explanation: We just need: Source Airport ID: Unique OpenFlights identifier for the source airport Dest Airport ID: Unique OpenFlights identifier for the destination airport End of explanation """ def make_int_or_null(x): """returns int or np.nan if can't return int""" try: return int(x) except (ValueError, TypeError): return np.nan routes = routes.applymap(make_int_or_null) print(f"There are {routes.shape[0]} routes before dropping null values") routes = routes.dropna(axis=0) print(f"there are {routes.shape[0]} after dropping null values") routes.head() """ Explanation: Now there are some values which aren't numbers, so we need to clean them up, as you can see in row 7 above.
End of explanation """ def route_distance(edge): """takes a route as a pandas row, and returns distance b/w the two airports""" # airports is indexed by Airport ID (index_col=0), so look up by label with .loc, not by position with .iloc src_airport = airports.loc[int(edge["Source Airport ID"])] dest_airport = airports.loc[int(edge["Dest Airport ID"])] src_lat = src_airport["Latitude"] src_long = src_airport["Longitude"] dest_lat = dest_airport["Latitude"] dest_long = dest_airport["Longitude"] src_loc = (float(src_lat), float(src_long)) dest_loc = (float(dest_lat), float(dest_long)) return great_circle(src_loc, dest_loc).km # checking algo print(route_distance(routes.iloc[100])) """ Explanation: So the data seems ready to go. We now have two pandas dataframes, airports and routes, and a function which returns the distance b/w airports. The actual graph There are many ways to make a graph. One way is to build out all the nodes and connections. So here I'll initiate the graph as a dictionary of nodes, with the data being the edges. A blank graph where the key is each airport: End of explanation """ graph = {} for i, row in airports.iterrows(): graph[i] = [] """ Explanation: Now going through each route and adding (dest_airport, distance) to each src_airport in the graph: End of explanation """ from tqdm import tqdm for i, row in tqdm(routes.iterrows(), total=len(routes), mininterval=0.5): try: src_airport = airports.loc[int(row["Source Airport ID"])] dest_airport = airports.loc[int(row["Dest Airport ID"])] except KeyError: continue # skip routes that reference airports we dropped dist = great_circle((src_airport["Latitude"], src_airport["Longitude"]), (dest_airport["Latitude"],dest_airport["Longitude"]) ).km n = int(row["Source Airport ID"]) d = int(row["Dest Airport ID"]) if n in graph.keys(): graph[n].append((d, int(dist))) """ Explanation: Now say I want to find out the graph for Karachi.
Karachi has three airports, so only looking at the first one for now... End of explanation """ for i, t in enumerate(graph[2206]): if t[0] in graph.keys(): print(i, airports.loc[t[0]]["City"], t[1]) else: continue airports.shape """ Explanation: So I can see all the cities it's connected to, though since some cities have multiple airports they show up twice. So how do I deal with that? I can ignore multiple airports as the distance is roughly the same, though a more complex graph would take into account the different dollar cost of each route. End of explanation """
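The notebook imports deque but never shows the search itself; a minimal BFS sketch over a dict-of-edges graph in the same (neighbour, distance) format as the one built above, returning the fewest-hops path (BFS ignores the distance weights):

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return the fewest-hop path from start to goal, or None if unreachable.
    graph maps node -> list of (neighbour, distance) tuples."""
    if start == goal:
        return [start]
    visited = {start}
    queue = deque([[start]])        # queue of partial paths
    while queue:
        path = queue.popleft()
        for neighbour, _dist in graph.get(path[-1], []):
            if neighbour in visited:
                continue
            if neighbour == goal:
                return path + [neighbour]
            visited.add(neighbour)
            queue.append(path + [neighbour])
    return None

# Toy graph (hypothetical airport IDs) in the same format as `graph` above:
toy = {1: [(2, 10), (3, 5)], 2: [(4, 1)], 3: [(4, 2)], 4: []}
print(bfs_shortest_path(toy, 1, 4))  # -> [1, 2, 4]
```

For shortest *distance* rather than fewest hops, the same queue would be replaced by a priority queue keyed on cumulative distance (Dijkstra's algorithm).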
michaelgat/Udacity_DL
intro-to-tflearn/TFLearn_Digit_Recognition-MG.ipynb
mit
# Import Numpy, TensorFlow, TFLearn, and MNIST data import numpy as np import tensorflow as tf import tflearn import tflearn.datasets.mnist as mnist """ Explanation: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications, including recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9. We'll be using TFLearn, a high-level library built on top of TensorFlow, to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network. End of explanation """ # Retrieve the training and test data trainX, trainY, testX, testY = mnist.load_data(one_hot=True) """ Explanation: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened data For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. End of explanation """ # Visualizing the data import matplotlib.pyplot as plt %matplotlib inline # Function for displaying a training image by its index in the MNIST set def show_digit(index): label = trainY[index].argmax(axis=0) # Reshape 784 array into 28x28 image image = trainX[index].reshape([28,28]) plt.title('Training data, index: %d, Label: %d' % (index, label)) plt.imshow(image, cmap='gray_r') plt.show() # Display the first (index 0) training image show_digit(0) """ Explanation: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation """ # Define the neural network def build_model(): with tf.device("/gpu:0"): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model # This model assumes that your network is named "net" # Create input layer, sized to the shape of the 28x28 image net = tflearn.input_data([None, trainX.shape[1]]) # Create intermediate layers # First layer of 150 seems to make sense in context of 784 pixels per image ~1:5 ratio net = tflearn.fully_connected(net, 150, activation='ReLU') # Second hidden layer of 30 units, again roughly a 1:5 reduction from the first layer net = tflearn.fully_connected(net, 30, activation='ReLU') # Create output layer net = tflearn.fully_connected(net, 10, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy') model = tflearn.DNN(net) return model # Build the model model = build_model() """ Explanation: Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the output layer, and The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). Then, to set how you train the network, use: net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with tflearn.DNN(net). Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer. End of explanation """
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! End of explanation """ # Compare the labels that our model predicts with the actual labels # Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample. predictions = np.array(model.predict(testX)).argmax(axis=1) # Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels actual = testY.argmax(axis=1) test_accuracy = np.mean(predictions == actual, axis=0) # Print out the result print("Test accuracy: ", test_accuracy) """ Explanation: Testing After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! End of explanation """
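The one-hot encoding and the argmax comparison used in the test cell above can be sketched in plain NumPy, without TFLearn (the three labels here are arbitrary example digits):

```python
import numpy as np

labels = np.array([0, 4, 9])    # example digit labels
one_hot = np.eye(10)[labels]    # one row of ten 0/1 values per label
print(one_hot[1])               # label 4 -> 1 in position 4, 0 elsewhere

# Recovering labels and scoring accuracy, mirroring the test cell above:
recovered = one_hot.argmax(axis=1)
accuracy = np.mean(recovered == labels)
print(accuracy)
```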
erdc-cm/air-water-vv
3d/bathyduck/Read Lidar.ipynb
mit
%ls lidar """ Explanation: The lidar subdir has these files: - line.data.mat files have "raw" data but include some filtering and water-level extraction - line.science.mat files have processed data - Some jpg files are also provided End of explanation """ import tables lineData = tables.openFile(r"lidar/20150927-0000-01.FRFNProp.line.data.mat","r") #science = tables.openFile(r"lidar/20150927-0000-01.FRFNProp.line.science.mat","r") """ Explanation: Recent MATLAB files are just HDF5, which we can read with PyTables End of explanation """ for f in lineData.root: for g in f: print g z = lineData.root.lineGriddedFilteredData.waterGridFiltered[:] x = lineData.root.lineCoredat.downLineX[:] %matplotlib notebook from matplotlib import pyplot as plt fig = plt.figure() plt.plot(x,z[:,5000]) """ Explanation: Grab the filtered water levels on the grid in FRF coordinates End of explanation """ import numpy as np xfinite = x.copy() # x[:] would be a view and overwrite x too xfinite[np.isnan(x)] = -1000.0 i105 = np.where(xfinite > 105.0)[0][0] print i105 fig = plt.figure() plt.plot(lineData.root.lineCoredat.tGPSVector[:500],z[i105,:500]) """ Explanation: The data file has NaNs where it is unreliable; find the index for about 105 m offshore End of explanation """
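The NaN-masking trick used to find i105 can be sketched on synthetic data (the values here are made up; only the 105.0 threshold mirrors the cell above):

```python
import numpy as np

x = np.array([np.nan, 50.0, np.nan, 104.0, 106.0, 110.0])

# Replace unreliable (NaN) samples with a sentinel that can never pass the test...
xfinite = x.copy()   # copy, so the original x keeps its NaNs
xfinite[np.isnan(xfinite)] = -1000.0

# ...then take the first index past the 105 m mark.
i105 = np.where(xfinite > 105.0)[0][0]
print(i105)  # -> 4
```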
ShivakumarSwamy/MovieAnalysis
plotDataset2.ipynb
apache-2.0
import random import pandas as pd from plotly.graph_objs import * """ Explanation: <h1> <center> Season Movie Analysis </center> </h1> End of explanation """ from plotly.offline import init_notebook_mode, iplot, plot init_notebook_mode(connected=True) """ Explanation: Using plotly offline mode End of explanation """ dataset = pd.read_csv('finalDataset.csv') dataset.head(3) """ Explanation: Reading the final dataset End of explanation """ monthList = ['Mar|Apr|May', 'Jun|Jul|Aug', 'Sep|Oct|Nov', 'Dec|Jan|Feb'] """ Explanation: List of Seasons [Spring, Summer, Fall(Autumn), Winter] with corresponding months End of explanation """ finalList = [['Spring', 'Summer', 'Fall(Autumn)', 'Winter'] ,\ [int(dataset[dataset.RELEASED.str.contains(seasonMonths, na = False)]['ADJ. BOX OFFICE'].mean()) \ for seasonMonths in monthList] , \ [len(dataset[dataset.RELEASED.str.contains(seasonMonths, na = False)]) \ for seasonMonths in monthList] , \ [round(dataset[dataset.RELEASED.str.contains(seasonMonths, na = False)]['RATING'].mean(), 2) \ for seasonMonths in monthList]] """ Explanation: List with * First Index - Season Names * Second Index - Mean of Adj.
Box Office of each season * Third Index - Movie Count for each season * Fourth Index - Mean rating for each season End of explanation """ def data_(boxoffice, rating, movieCount, name): return { 'x' : [boxoffice], 'y' : [rating], 'name' : name, 'mode' : 'markers', 'text' : 'Season: ' + name + '<br>Mean Rating: ' + str(rating) + '<br>Movie Count: ' + str(movieCount) + '<br>Mean BoxOffice: ' + str(round(boxoffice/1000000, 2)) + 'M', 'hoverinfo' : 'text', 'marker' : { 'size' : [movieCount], 'sizemode' : 'area', 'line' : { 'width' : 1 } } } """ Explanation: Function that returns the dictionary of data needed to plot each data point End of explanation """ figure = { 'data': [data_(finalList[1][i], finalList[3][i], finalList[2][i], finalList[0][i]) for i in range(4)], 'layout' : {} } figure['layout']['xaxis'] = {'title' : 'Mean Box Office', 'titlefont': { 'size' : 16, 'family' : 'Droid Sans' }, 'showline' : True, 'ticks' : 'outside', 'tickwidth' : 2, 'gridcolor' : '#FFFFFF'} figure['layout']['yaxis'] = {'title' : 'Mean Rating', 'titlefont' : { 'size' : 16 , 'family' : 'Droid Sans' }, 'showline' : True, 'ticks' : 'outside', 'tickwidth' : 2, 'gridcolor' : '#FFFFFF'} figure['layout']['title'] = 'Season Movie Analysis' figure['layout']['titlefont'] = {'size' : 20, 'family' : 'Times New Roman'} figure['layout']['plot_bgcolor'] = 'rgb(223, 232, 243)' """ Explanation: Adding Data to figure and setting x-axis, y-axis, title, background of layout End of explanation """ # iplot(figure) plot(figure) """ Explanation: Plotting the figure End of explanation """
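The per-season aggregation above runs one str.contains scan over the whole dataset per season; an equivalent approach maps each month to its season once and uses groupby. A sketch on a toy frame, since the real RELEASED column's exact format isn't shown here (the month and rating values below are made up):

```python
import pandas as pd

# Toy frame: one month abbreviation per row plus a metric to average.
df = pd.DataFrame({"month": ["Mar", "Jun", "Jul", "Dec", "Apr"],
                   "rating": [7.0, 6.0, 8.0, 5.0, 9.0]})

# Invert the season -> months mapping into month -> season.
season_of = {m: s
             for s, months in {"Spring": ["Mar", "Apr", "May"],
                               "Summer": ["Jun", "Jul", "Aug"],
                               "Fall(Autumn)": ["Sep", "Oct", "Nov"],
                               "Winter": ["Dec", "Jan", "Feb"]}.items()
             for m in months}

# One pass over the data computes every season's mean at once.
means = df.groupby(df["month"].map(season_of))["rating"].mean()
print(means)
```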
nhuntwalker/gatspy
examples/FastLombScargle.ipynb
bsd-2-clause
%matplotlib inline import numpy as np import matplotlib.pyplot as plt # use seaborn's default plotting styles for matplotlib import seaborn; seaborn.set() """ Explanation: Fast Lomb-Scargle Periodograms in Python The Lomb-Scargle Periodogram is a well-known method of finding periodicity in irregularly-sampled time-series data. The common implementation of the periodogram is relatively slow: for $N$ data points, a frequency grid of $\sim N$ frequencies is required and the computation scales as $O[N^2]$. In a 1989 paper, Press and Rybicki presented a faster technique which makes use of fast Fourier transforms to reduce this cost to $O[N\log N]$ on a regular frequency grid. The gatspy package implements this in the LombScargleFast object, which we'll explore below. But first, we'll motivate why this algorithm is needed at all. We'll start this notebook with some standard imports: End of explanation """ def create_data(N, period=2.5, err=0.1, rseed=0): rng = np.random.RandomState(rseed) t = np.arange(N, dtype=float) + 0.3 * rng.randn(N) y = np.sin(2 * np.pi * t / period) + err * rng.randn(N) return t, y, err t, y, dy = create_data(100, period=20) plt.errorbar(t, y, dy, fmt='o'); """ Explanation: To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data: End of explanation """ def freq_grid(t, oversampling=5, nyquist_factor=3): T = t.max() - t.min() N = len(t) df = 1. / (oversampling * T) fmax = 0.5 * nyquist_factor * N / T N = int(fmax // df) return df + df * np.arange(N) """ Explanation: From this, our algorithm should be able to identify any periodicity that is present. Choosing the Frequency Grid The Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose? It turns out that this question is very important.
If you choose the frequency spacing poorly, it may lead you to miss a strong periodic signal in the data! Frequency spacing First, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \cdot f$ complete cycles. If our error in frequency is $\delta f$, then $T\cdot\delta f$ is the error in number of cycles between the endpoints of the data. If this error is a significant fraction of a cycle, this will cause problems. This gives us the criterion $$ T\cdot\delta f \ll 1 $$ Commonly, we'll choose some oversampling factor around 5 and use $\delta f = (5T)^{-1}$ as our frequency grid spacing. Frequency limits Next, we need to choose the limits of the frequency grid. On the low end, $f=0$ is suitable, but causes some problems – we'll go one step away and use $\delta f$ as our minimum frequency. But on the high end, we need to make a choice: what's the highest frequency we'd trust our data to be sensitive to? At this point, many people are tempted to mis-apply the Nyquist-Shannon sampling theorem, and choose some version of the Nyquist limit for the data. But this is entirely wrong! The Nyquist frequency applies for regularly-sampled data, but irregularly-sampled data can be sensitive to much, much higher frequencies, and the upper limit should be determined based on what kind of signals you are looking for. Still, a common (if dubious) rule-of-thumb is that the high frequency is some multiple of what Press & Rybicki call the "average" Nyquist frequency, $$ \hat{f}_{Ny} = \frac{N}{2T} $$ With this in mind, we'll use the following function to determine a suitable frequency grid: End of explanation """ t, y, dy = create_data(100, period=2.5) freq = freq_grid(t) print(len(freq)) from gatspy.periodic import LombScargle model = LombScargle().fit(t, y, dy) period = 1.
/ freq power = model.periodogram(period) plt.plot(period, power) plt.xlim(0, 5); """ Explanation: Now let's use the gatspy tools to plot the periodogram: End of explanation """ t, y, dy = create_data(100, period=0.3) period = 1. / freq_grid(t, nyquist_factor=10) model = LombScargle().fit(t, y, dy) power = model.periodogram(period) plt.plot(period, power) plt.xlim(0, 1); """ Explanation: The algorithm finds a strong signal at a period of 2.5. To demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it: End of explanation """ from gatspy.periodic import LombScargleFast help(LombScargleFast.periodogram_auto) from gatspy.periodic import LombScargleFast t, y, dy = create_data(100) model = LombScargleFast().fit(t, y, dy) period, power = model.periodogram_auto() plt.plot(period, power) plt.xlim(0, 5); """ Explanation: With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data! Nevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency. Scaling with $N$ With these rules in mind, we see that the size of the frequency grid is approximately $$ N_f = \frac{f_{max}}{\delta f} \propto \frac{N/(2T)}{1/T} \propto N $$ So for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space. This is the source of the $N^2$ scaling of the typical periodogram: finding periods in $N$ datapoints requires a grid of $\sim 10N$ frequencies, and $O[N^2]$ operations. When $N$ gets very, very large, this becomes a problem. Fast Periodograms with LombScargleFast Finally we get to the meat of this discussion. 
In a 1989 paper, Press and Rybicki proposed a clever method whereby a Fast Fourier Transform is used on a grid extirpolated from the original data, such that this problem can be solved in $O[N\log N]$ time. The gatspy package contains a pure-Python implementation of this algorithm, and we'll explore it here. If you're interested in seeing how the algorithm works in Python, check out the code in the gatspy source. It's far more readable and understandable than the Fortran source presented in Press et al. For convenience, the implementation has a periodogram_auto method which automatically selects a frequency/period range based on an oversampling factor and a Nyquist factor: End of explanation """ from time import time from gatspy.periodic import LombScargleAstroML, LombScargleFast def get_time(N, Model): t, y, dy = create_data(N) model = Model().fit(t, y, dy) t0 = time() model.periodogram_auto() t1 = time() result = t1 - t0 # for fast operations, we should do several and take the median if result < 0.1: N = min(50, 0.5 / result) times = [] for i in range(5): t0 = time() model.periodogram_auto() t1 = time() times.append(t1 - t0) result = np.median(times) return result N_obs = list(map(int, 10 ** np.linspace(1, 4, 5))) times1 = [get_time(N, LombScargleAstroML) for N in N_obs] times2 = [get_time(N, LombScargleFast) for N in N_obs] plt.loglog(N_obs, times1, label='Naive Implementation') plt.loglog(N_obs, times2, label='FFT Implementation') plt.xlabel('N observations') plt.ylabel('t (sec)') plt.legend(loc='upper left'); """ Explanation: Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using LombScargleAstroML (a fast implementation of the $O[N^2]$ algorithm) and LombScargleFast, which is the fast FFT-based implementation: End of explanation """
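The linear growth of the frequency grid discussed above ("finding periods in $N$ datapoints requires a grid of $\sim 10N$ frequencies") can be sanity-checked with a tiny pure-Python sketch. This is not part of gatspy; it simply mirrors the default grid rules of the freq_grid function defined earlier, under the assumption of unit average sampling so that the baseline $T \sim N$:

```python
def n_frequencies(N, oversampling=5, nyquist_factor=3):
    # Assume unit average spacing, so the data baseline T scales like N.
    T = float(N)
    df = 1.0 / (oversampling * T)        # grid spacing delta-f = 1/(oversampling * T)
    fmax = 0.5 * nyquist_factor * N / T  # upper limit: multiple of the "average" Nyquist N/(2T)
    return int(fmax // df)               # roughly 7.5 * N for these defaults

sizes = [n_frequencies(N) for N in (100, 1000, 10000)]
```

Each tenfold increase in the number of observations multiplies the required grid size by ten as well, which is exactly why the naive $O[N^2]$ evaluation becomes impractical for large surveys.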
tensorflow/docs-l10n
site/en-snapshot/tutorials/images/transfer_learning.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. """ Explanation: Copyright 2019 The TensorFlow Authors. 
End of explanation """ import matplotlib.pyplot as plt import numpy as np import os import tensorflow as tf """ Explanation: Transfer learning and fine-tuning <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb?force_kitty_mode=1&force_corgi_mode=1"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network. A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You either use the pretrained model as is or use transfer learning to customize this model to a given task. The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset. 
In this notebook, you will try two ways to customize a pretrained model: Feature Extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that you can repurpose the feature maps learned previously for the dataset. You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pretrained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained. Fine-Tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly-added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task. You will follow the general machine learning workflow. 
Examine and understand the data Build an input pipeline, in this case using the tf.keras.utils.image_dataset_from_directory utility Compose the model Load in the pretrained base model (and pretrained weights) Stack the classification layers on top Train the model Evaluate model End of explanation """ _URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip' path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True) PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered') train_dir = os.path.join(PATH, 'train') validation_dir = os.path.join(PATH, 'validation') BATCH_SIZE = 32 IMG_SIZE = (160, 160) train_dataset = tf.keras.utils.image_dataset_from_directory(train_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE) validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE) """ Explanation: Data preprocessing Data download In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.utils.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
End of explanation """ class_names = train_dataset.class_names plt.figure(figsize=(10, 10)) for images, labels in train_dataset.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") """ Explanation: Show the first nine images and labels from the training set: End of explanation """ val_batches = tf.data.experimental.cardinality(validation_dataset) test_dataset = validation_dataset.take(val_batches // 5) validation_dataset = validation_dataset.skip(val_batches // 5) print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset)) print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset)) """ Explanation: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set. End of explanation """ AUTOTUNE = tf.data.AUTOTUNE train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE) validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE) test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE) """ Explanation: Configure the dataset for performance Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the data performance guide. End of explanation """ data_augmentation = tf.keras.Sequential([ tf.keras.layers.RandomFlip('horizontal'), tf.keras.layers.RandomRotation(0.2), ]) """ Explanation: Use data augmentation When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. 
You can learn more about data augmentation in this tutorial. End of explanation """ for image, _ in train_dataset.take(1): plt.figure(figsize=(10, 10)) first_image = image[0] for i in range(9): ax = plt.subplot(3, 3, i + 1) augmented_image = data_augmentation(tf.expand_dims(first_image, 0)) plt.imshow(augmented_image[0] / 255) plt.axis('off') """ Explanation: Note: These layers are active only during training, when you call Model.fit. They are inactive when the model is used in inference mode in Model.evaluate or Model.predict. Let's repeatedly apply these layers to the same image and see the result. End of explanation """ preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input """ Explanation: Rescale pixel values In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model. End of explanation """ rescale = tf.keras.layers.Rescaling(1./127.5, offset=-1) """ Explanation: Note: Alternatively, you could rescale pixel values from [0, 255] to [-1, 1] using tf.keras.layers.Rescaling. End of explanation """ # Create the base model from the pre-trained model MobileNet V2 IMG_SHAPE = IMG_SIZE + (3,) base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') """ Explanation: Note: If using other tf.keras.applications, be sure to check the API doc to determine if they expect pixels in [-1, 1] or [0, 1], or use the included preprocess_input function. Create the base model from the pre-trained convnets You will create the base model from the MobileNet V2 model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like jackfruit and syringe.
This base of knowledge will help us classify cats and dogs from our specific dataset. First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice to depend on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality as compared to the final/top layer. First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction. End of explanation """ image_batch, label_batch = next(iter(train_dataset)) feature_batch = base_model(image_batch) print(feature_batch.shape) """ Explanation: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images: End of explanation """ base_model.trainable = False """ Explanation: Feature extraction In this step, you will freeze the convolutional base created from the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier. Freeze the convolutional base It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them. End of explanation """ # Let's take a look at the base model architecture base_model.summary() """ Explanation: Important note about BatchNormalization layers Many models contain tf.keras.layers.BatchNormalization layers.
This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial. When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics. When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned. For more details, see the Transfer learning guide. End of explanation """ global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) """ Explanation: Add a classification head To generate predictions from the block of features, average over the 5x5 spatial locations, using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
As previously mentioned, use training=False as our model contains a BatchNormalization layer. End of explanation """ base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() """ Explanation: Compile the model Compile the model before training it. Since there are two classes, use the tf.keras.losses.BinaryCrossentropy loss with from_logits=True since the model provides a linear output. End of explanation """ len(model.trainable_variables) """ Explanation: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and biases. End of explanation """ initial_epochs = 10 loss0, accuracy0 = model.evaluate(validation_dataset) print("initial loss: {:.2f}".format(loss0)) print("initial accuracy: {:.2f}".format(accuracy0)) history = model.fit(train_dataset, epochs=initial_epochs, validation_data=validation_dataset) """ Explanation: Train the model After training for 10 epochs, you should see ~94% accuracy on the validation set. 
End of explanation """ acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.ylabel('Cross Entropy') plt.ylim([0,1.0]) plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() """ Explanation: Learning curves Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor. End of explanation """ base_model.trainable = True # Let's take a look to see how many layers are in the base model print("Number of layers in the base model: ", len(base_model.layers)) # Fine-tune from this layer onwards fine_tune_at = 100 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model.layers[:fine_tune_at]: layer.trainable = False """ Explanation: Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is because layers like tf.keras.layers.BatchNormalization and tf.keras.layers.Dropout affect accuracy during training. They are turned off when calculating validation loss. To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer. Fine tuning In the feature extraction experiment, you were only training a few layers on top of an MobileNetV2 base model. The weights of the pre-trained network were not updated during training. 
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset. Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned. Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning. Un-freeze the top layers of the model All you need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training. End of explanation """ model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer = tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10), metrics=['accuracy']) model.summary() len(model.trainable_variables) """ Explanation: Compile the model As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly. 
End of explanation """ fine_tune_epochs = 10 total_epochs = initial_epochs + fine_tune_epochs history_fine = model.fit(train_dataset, epochs=total_epochs, initial_epoch=history.epoch[-1], validation_data=validation_dataset) """ Explanation: Continue training the model If you trained to convergence earlier, this step will improve your accuracy by a few percentage points. End of explanation """ acc += history_fine.history['accuracy'] val_acc += history_fine.history['val_accuracy'] loss += history_fine.history['loss'] val_loss += history_fine.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.ylim([0.8, 1]) plt.plot([initial_epochs-1,initial_epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.ylim([0, 1.0]) plt.plot([initial_epochs-1,initial_epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() """ Explanation: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting. You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets. After fine tuning the model nearly reaches 98% accuracy on the validation set. End of explanation """ loss, accuracy = model.evaluate(test_dataset) print('Test accuracy :', accuracy) """ Explanation: Evaluation and prediction Finally you can verify the performance of the model on new data using test set. 
End of explanation """ # Retrieve a batch of images from the test set image_batch, label_batch = test_dataset.as_numpy_iterator().next() predictions = model.predict_on_batch(image_batch).flatten() # Apply a sigmoid since our model returns logits predictions = tf.nn.sigmoid(predictions) predictions = tf.where(predictions < 0.5, 0, 1) print('Predictions:\n', predictions.numpy()) print('Labels:\n', label_batch) plt.figure(figsize=(10, 10)) for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(image_batch[i].astype("uint8")) plt.title(class_names[predictions[i]]) plt.axis("off") """ Explanation: And now you are all set to use this model to predict if your pet is a cat or dog. End of explanation """
gsorianob/fiuba-python
.ipynb_checkpoints/Clase 04 - Excepciones, funciones lambda, búsquedas y ordenamientos-checkpoint.ipynb
apache-2.0
lista_de_numeros = [1, 6, 3, 9, 5, 2] lista_ordenada = sorted(lista_de_numeros) print lista_ordenada """ Explanation: 27/10 Sorting and searching. Anonymous functions. Exceptions. Sorting lists Lists can easily be sorted using the sorted function: End of explanation """ lista_de_numeros = [1, 6, 3, 9, 5, 2] print sorted(lista_de_numeros, reverse=True) """ Explanation: But how do we sort it from highest to lowest? <br> Simple: we interrogate the function a little: ```Python print sorted.__doc__ sorted(iterable, cmp=None, key=None, reverse=False) --> new sorted list ``` So, just passing the *reverse* parameter as `True` should be enough: End of explanation """ Explanation: And what if what I want to sort is a list of records? <br> We can pass it a function that knows how to compare those records, or one that knows how to return the information it needs to compare. End of explanation """ import random def crear_alumnos(cantidad_de_alumnos=5): nombres = ['Javier', 'Pablo', 'Ramiro', 'Lucas', 'Carlos'] apellidos = ['Saviola', 'Aimar', 'Funes Mori', 'Alario', 'Sanchez'] alumnos = [] for i in range(cantidad_de_alumnos): a = { 'nombre': '{}, {}'.format(random.choice(apellidos), random.choice(nombres)), 'padron': random.randint(90000, 100000), 'nota': random.randint(4, 10) } alumnos.append(a) return alumnos def imprimir_curso(lista): for idx, x in enumerate(lista, 1): print ' {pos:2}. {padron} - {nombre}: {nota}'.format(pos=idx, **x) def obtener_padron(alumno): return alumno['padron'] def ordenar_por_padron(alumno1, alumno2): if alumno1['padron'] < alumno2['padron']: return -1 elif alumno2['padron'] < alumno1['padron']: return 1 else: return 0 curso = crear_alumnos() print 'La lista tiene los alumnos:' imprimir_curso(curso) lista_ordenada = sorted(curso, key=obtener_padron) print 'Y la lista ordenada por padrón:' imprimir_curso(lista_ordenada) otra_lista_ordenada = sorted(curso, cmp=ordenar_por_padron) print 'Y la lista ordenada por padrón:' imprimir_curso(otra_lista_ordenada) """ Explanation: Searching in lists To know whether an element is in a list, it is enough to use the in operator: End of explanation """ lista = [11, 4, 6, 1, 3, 5, 7] if 3 in lista: print '3 esta en la lista' else: print '3 no esta en la lista' if 15 in lista: print '15 esta en la lista' else: print '15 no esta en la lista' """ Explanation: It is also very easy to know whether an element is not in the list: End of explanation """ lista = [11, 4, 6, 1, 3, 5, 7] if 3 not in lista: print '3 NO esta en la lista' else: print '3 SI esta en la lista' """ Explanation: On the other hand, if what we want is to know where the number 3 is located in the list: End of explanation """ lista = [11, 4, 6, 1, 3, 5, 7] pos = lista.index(3) print 'El 3 se encuentra en la posición', pos pos = lista.index(15) print 'El 15 se encuentra en la posición', pos """ Explanation: Anonymous functions So far, we have given every function we created a name at the moment we created it, but when we need to create functions that are only one line long and are not used in many different places, we can use lambda functions: End of explanation """ help("lambda") mi_funcion = lambda x, y: x+y resultado = mi_funcion(1,2) print resultado """ Explanation: Although they are not functions you use every day, they are commonly used when a function receives another function as a parameter (functions are a data type, so they can be assigned to variables and can therefore also be parameters).
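A tiny sketch of what "functions are values" means (the names doble and aplicar are illustrative, not part of the course material): a function can be stored in another variable and received as a parameter, exactly like any other value.

```python
def doble(x):
    return 2 * x

# A function is a value: it can be assigned to another name...
otro_nombre = doble

# ...and it can be received as a parameter by another function.
aplicar = lambda f, valor: f(valor)

resultado = aplicar(otro_nombre, 21)
```

Here resultado ends up holding 42, because aplicar simply calls whatever function it is given.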
Por ejemplo, para ordenar los alumnos por padrón podríamos usar: Python sorted(curso, key=lambda x: x['padron']) Ahora, si quiero ordenar la lista anterior por nota decreciente y, en caso de igualdad, por padrón podríamos usar: End of explanation """ es_mayor = lambda n1, n2: n1 > n2 es_menor = lambda n1, n2: n1 < n2 def binaria(cmp, lista, clave): """Binaria es una función que busca en una lista la clave pasada. Es un requisito de la búsqueda binaria que la lista se encuentre ordenada, pero no si el orden es ascendente o descendente. Por este motivo es que también recibe una función que le indique en que sentido ir. Si la lista está ordenada en forma ascendente la función que se le pasa tiene que ser verdadera cuando el primer valor es mayor que la segundo; y falso en caso contrario. Si la lista está ordenada en forma descendente la función que se le pasa tiene que ser verdadera cuando el primer valor es menor que la segundo; y falso en caso contrario. """ min = 0 max = len(lista) - 1 centro = (min + max) / 2 while (lista[centro] != clave) and (min < max): if cmp(lista[centro], clave): max = centro - 1 else: min = centro + 1 centro = (min + max) / 2 if lista[centro] == clave: return centro else: return -1 print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8) print binaria(es_menor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8) print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 123) print binaria(es_menor, [9, 8, 7, 6, 5, 4, 3, 2, 1], 6) """ Explanation: Otro ejemplo podría ser implementar una búsqueda binaria que permita buscar tanto en listas crecientes como decrecientes: End of explanation """ print 1/0 """ Explanation: Excepciones Una excepción es la forma que tiene el intérprete de que indicarle al programador y/o usuario que ha ocurrido un error. Si la excepción no es controlada por el desarrollador ésta llega hasta el usuario y termina abruptamente la ejecución del sistema. 
<br> Por ejemplo: End of explanation """ dividendo = 1 divisor = 0 print 'Intentare hacer la división de %d/%d' % (dividendo, divisor) try: resultado = dividendo / divisor print resultado except ZeroDivisionError: print 'No se puede hacer la división ya que el divisor es 0.' """ Explanation: Pero no hay que tenerle miedo a las excepciones, sólo hay que tenerlas en cuenta y controlarlas en el caso de que ocurran: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) # Si de 28 alumnos, aprobaron 15, el porcentaje de aprobados es de... porcentaje_de_aprobados = regla_de_tres(28, 15, 100) print 'Porcentaje de aprobados: %0.2f %%' % porcentaje_de_aprobados """ Explanation: Pero supongamos que implementamos la regla de tres de la siguiente forma: End of explanation """ resultado = regla_de_tres(0, 13, 100) print 'Porcentaje de aprobados: %0.2f %%' % resultado """ Explanation: En cambio, si le pasamos 0 en el lugar de x: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): resultado = 0 try: resultado = dividir(z*y, x) except ZeroDivisionError: print 'No se puede calcular la regla de tres porque el divisor es 0' return resultado print regla_de_tres(0, 1, 2) """ Explanation: Acá podemos ver todo el traceback o stacktrace, que son el cómo se fueron llamando las distintas funciones entre sí hasta que llegamos al error. <br> Pero no es bueno que este tipo de excepciones las vea directamente el usuario, por lo que podemos controlarlas en distintos momentos. Se pueden controlar inmediatamente donde ocurre el error, como mostramos antes, o en cualquier parte de este stacktrace. 
<br> In the case of regla_de_tres it is not convenient to put the try/except around the x/y line, since at that point we do not have all the information we need to report the problem to the user properly, so we can put it in: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) try: print regla_de_tres(0, 1, 2) except ZeroDivisionError: print 'The rule of three cannot be computed because the divisor is 0' """ Explanation: But in this case it still prints 0, so if we want, we can place the try/except even higher up the stacktrace: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'The result is: %s' % resultado except ZeroDivisionError: print 'ERROR: A division by zero error occurred' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: Every case is different and there is no single ideal place to catch the exception; it is up to the developer to decide where it is best placed for each problem.
<br> Moreover, a single line can raise different exceptions, so catching one particular exception type does not guarantee that the program cannot raise an error on that supposedly safe line: Catching multiple exceptions In some cases we anticipate that the code can raise an exception such as ZeroDivisionError, but that may not be enough: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'The result is: %s' % resultado except TypeError: print 'ERROR: An error occurred because of mixed data types' except ZeroDivisionError: print 'ERROR: A division by zero error occurred' except Exception: print 'ERROR: An unexpected error occurred' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: In those cases we can catch more than one exception as follows: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'The result is: %s' % resultado except (ZeroDivisionError, TypeError): print 'ERROR: The division cannot be computed' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: Moreover, if we want both errors to show the same message, we can catch both exceptions together: End of explanation """ try: print 1/0 except ZeroDivisionError: print 'A division by zero error occurred' """ Explanation: Exception hierarchy There is an <a href="https://docs.python.org/2/library/exceptions.html">exception hierarchy</a>, so if you know some kind of error may occur but you do not know exactly which exception it will be, you can always catch an exception higher up in the hierarchy: <img src="excepciones.png"/> So the division by zero error can be avoided like this: End of explanation """ try: print 1/0 except Exception: print 'An unexpected error occurred' """ Explanation: And also like this: End of explanation """ def dividir_numeros(x, y): try: resultado
= x/y print 'The result is {}'.format(resultado) except ZeroDivisionError: print 'Error: Division by zero' else: print 'This message will be shown only if no error occurs' finally: print 'This block of code is always executed' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) """ Explanation: Although you can always write Exception instead of the specific exception type you expect, it is not good programming practice, because it can hide unwanted errors, for example a syntax error. Besides, when an exception is raised in the try block, the interpreter starts searching through all the except clauses for one that matches the error that occurred, or that is higher up in the hierarchy. Therefore, it is advisable to always put the most specific exceptions first and the most general ones last: ```Python def dividir_numeros(x, y): try: resultado = x/y print 'The result is: %s' % resultado except TypeError: print 'ERROR: An error occurred because of mixed data types' except ZeroDivisionError: print 'ERROR: A division by zero error occurred' except Exception: print 'ERROR: An unexpected error occurred' ``` If the error is not caught by any clause, it propagates just as if nothing had been written. Other clauses for exception handling Besides the try and except clauses there are other exception-related clauses that let us better manage the program flow: * else: used to define a block of code that will run only if no error occurred. * finally: used to define a block of code that will always run, whether an exception was raised or not.
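The specific-first ordering recommended above follows directly from the exception hierarchy; the subclass relationships can be checked with issubclass (a standalone Python illustration, not one of this notebook's cells):

```python
# ZeroDivisionError sits below ArithmeticError, which sits below Exception,
# so an `except Exception` clause placed first would shadow the more specific handlers.
print(issubclass(ZeroDivisionError, ArithmeticError))  # True
print(issubclass(ArithmeticError, Exception))          # True
print(issubclass(TypeError, Exception))                # True
```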
End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'The result is {}'.format(resultado) except ZeroDivisionError: print 'Error: Division by zero' else: print 'Now I make an exception occur' print 1/0 finally: print 'This block of code is always executed' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) """ Explanation: But then, why not put that code inside the try-except? Because perhaps we do not want the except clauses to catch whatever runs in that block of code: End of explanation """ def dividir_numeros(x, y): if y == 0: raise Exception('Division by zero error') resultado = x/y print 'The result is {0}'.format(resultado) try: dividir_numeros(1, 0) except ZeroDivisionError as e: print 'ERROR: Division by zero' except Exception as e: print 'ERROR: an error of type Exception occurred' print '----------' dividir_numeros(1, 0) """ Explanation: Raising exceptions So far we have seen how to catch an error and work with it without the program terminating abruptly, but in some cases we ourselves will want to raise an exception. For that, we use the reserved word raise: End of explanation """ class ExcepcionDeDivisionPor2(Exception): def __str__(self): return 'ERROR: Cannot divide by two' def dividir_numeros(x, y): if y == 2: raise ExcepcionDeDivisionPor2() resultado = x/y try: dividir_numeros(1, 2) except ExcepcionDeDivisionPor2: print 'Cannot divide by 2' dividir_numeros(1, 2) """ Explanation: Creating exceptions Just as we can use the standard exceptions, we can also create our own: ```Python class MiPropiaExcepcion(Exception): def __str__(self): return 'Error message' ``` For example: End of explanation """
cdt15/lingam
examples/MultiGroupDirectLiNGAM.ipynb
mit
import numpy as np import pandas as pd import graphviz import lingam from lingam.utils import print_causal_directions, print_dagc, make_dot print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__]) np.set_printoptions(precision=3, suppress=True) np.random.seed(0) """ Explanation: MultiGroupDirectLiNGAM Import and settings In this example, we need to import numpy, pandas, and graphviz in addition to lingam. End of explanation """ x3 = np.random.uniform(size=1000) x0 = 3.0*x3 + np.random.uniform(size=1000) x2 = 6.0*x3 + np.random.uniform(size=1000) x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000) x5 = 4.0*x0 + np.random.uniform(size=1000) x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000) X1 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X1.head() m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0], [3.0, 0.0, 2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 6.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [8.0, 0.0,-1.0, 0.0, 0.0, 0.0], [4.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) make_dot(m) x3 = np.random.uniform(size=1000) x0 = 3.5*x3 + np.random.uniform(size=1000) x2 = 6.5*x3 + np.random.uniform(size=1000) x1 = 3.5*x0 + 2.5*x2 + np.random.uniform(size=1000) x5 = 4.5*x0 + np.random.uniform(size=1000) x4 = 8.5*x0 - 1.5*x2 + np.random.uniform(size=1000) X2 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X2.head() m = np.array([[0.0, 0.0, 0.0, 3.5, 0.0, 0.0], [3.5, 0.0, 2.5, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 6.5, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [8.5, 0.0,-1.5, 0.0, 0.0, 0.0], [4.5, 0.0, 0.0, 0.0, 0.0, 0.0]]) make_dot(m) """ Explanation: Test data We generate two datasets consisting of 6 variables. End of explanation """ X_list = [X1, X2] """ Explanation: We create a list variable that contains two datasets. 
End of explanation """ model = lingam.MultiGroupDirectLiNGAM() model.fit(X_list) """ Explanation: Causal Discovery To run causal discovery for multiple datasets, we create a MultiGroupDirectLiNGAM object and call the fit method. End of explanation """ model.causal_order_ """ Explanation: Using the causal_order_ property, we can see the causal ordering that results from the causal discovery. End of explanation """ print(model.adjacency_matrices_[0]) make_dot(model.adjacency_matrices_[0]) print(model.adjacency_matrices_[1]) make_dot(model.adjacency_matrices_[1]) """ Explanation: Also, using the adjacency_matrices_ property, we can see the adjacency matrix that results from the causal discovery. As you can see from the following, the DAG for each dataset is correctly estimated. End of explanation """ X_all = pd.concat([X1, X2]) print(X_all.shape) model_all = lingam.DirectLiNGAM() model_all.fit(X_all) model_all.causal_order_ """ Explanation: For comparison, we run DirectLiNGAM on a single dataset created by concatenating the two datasets. End of explanation """ make_dot(model_all.adjacency_matrix_) """ Explanation: You can see that the causal structure cannot be estimated correctly from the single concatenated dataset. End of explanation """ p_values = model.get_error_independence_p_values(X_list) print(p_values[0]) print(p_values[1]) """ Explanation: Independence between error variables To check whether the LiNGAM assumption is broken, we can get the p-values of independence tests between the error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence test between the error variables $e_i$ and $e_j$. End of explanation """ results = model.bootstrap(X_list, n_sampling=100) """ Explanation: Bootstrapping In MultiGroupDirectLiNGAM, bootstrapping can be executed in the same way as in normal DirectLiNGAM.
End of explanation """ cdc = results[0].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100) cdc = results[1].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100) """ Explanation: Causal Directions The bootstrap method returns a list of BootstrapResult objects, one per dataset, so we specify an index to access the result for each dataset. We can get the ranking of the extracted causal directions with get_causal_direction_counts(). End of explanation """ dagc = results[0].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100) dagc = results[1].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100) """ Explanation: Directed Acyclic Graphs Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the extracted DAGs. In the following sample code, the n_dags option limits the output to the top 3 ranked DAGs, and the min_causal_effect option limits it to causal directions with a coefficient of 0.01 or more.
Also, we have replaced the variable index with a label below. End of explanation """ df.sort_values('effect', ascending=False).head() """ Explanation: We can easily perform sorting operations with pandas.DataFrame. End of explanation """ df[df['to']=='x1'].head() """ Explanation: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1. End of explanation """ import matplotlib.pyplot as plt import seaborn as sns sns.set() %matplotlib inline from_index = 3 to_index = 0 plt.hist(results[0].total_effects_[:, to_index, from_index]) """ Explanation: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below. End of explanation """ from_index = 3 # index of x3 to_index = 1 # index of x0 pd.DataFrame(results[0].get_paths(from_index, to_index)) """ Explanation: Bootstrap Probability of Path Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [3, 0, 1] shows the path from variable X3 through variable X0 to variable X1. End of explanation """
GoogleCloudPlatform/training-data-analyst
self-paced-labs/vertex-ai/vertex-distributed-tensorflow/Qwiklab_Running_Distributed_TensorFlow_using_Vertex_AI.ipynb
apache-2.0
import os ! pip3 install --user --upgrade google-cloud-aiplatform """ Explanation: Running Distributed TensorFlow on Vertex AI Overview This tutorial demonstrates how to Train a model using distribution strategies on Vertex AI using the SDK for Python Deploy a custom image classification model for online prediction using Vertex AI Dataset The dataset used for this tutorial is the MNIST dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which of the ten digit classes (0-9) an image belongs to. Objective In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then make a prediction against the deployed model by sending it data. Alternatively, you can create custom-trained models using the gcloud command-line tool, or online using the Cloud Console. The steps performed include: Create a Vertex AI custom job for training a model in a distributed fashion. Train the model using TensorFlow's MirroredStrategy. Deploy the Model resource to a serving Endpoint resource. Make a prediction. Undeploy the Model resource. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest (preview) version of the Vertex SDK for Python. End of explanation """ ! pip3 install --user --upgrade google-cloud-storage """ Explanation: Install the latest GA version of the google-cloud-storage library as well. End of explanation """ ! pip3 install --user --upgrade pillow """ Explanation: Install the pillow library for loading images. End of explanation """ ! pip3 install --user --upgrade numpy """ Explanation: Install the numpy library for manipulation of image data.
End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: You can safely ignore errors during the numpy installation. Restart the kernel Once you've installed everything, you need to restart the notebook kernel so it can find the packages. End of explanation """ import os PROJECT_ID = "" if not os.getenv("IS_TESTING"): # Get your Google Cloud project ID from gcloud shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) """ Explanation: Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. End of explanation """ BUCKET_NAME = "gs://[your-bucket-name]" REGION = "us-central1" # @param {type:"string"} PROJECT_ID if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. 
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. End of explanation """ BUCKET_NAME ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cells to create your Cloud Storage bucket. End of explanation """ import os import sys from google.cloud import aiplatform from google.cloud.aiplatform import gapic as aip aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME) """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import Vertex SDK for Python Import the Vertex SDK for Python into your Python environment and initialize it. End of explanation """ TRAIN_GPU, TRAIN_NGPU = (None, None) DEPLOY_GPU, DEPLOY_NGPU = (None, None) """ Explanation: Set hardware accelerators Here, to run the container image on CPUs only, we set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to (None, None), since this notebook is meant to be run in a Qwiklab environment where GPUs cannot be provisioned. Note: If you happen to be running this notebook from your personal GCP account, set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) See the locations where accelerators are available.
End of explanation """ TRAIN_VERSION = "tf-cpu.2-6" DEPLOY_VERSION = "tf2-cpu.2-6" TRAIN_IMAGE = "us-docker.pkg.dev/vertex-ai/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(DEPLOY_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) """ Explanation: Set pre-built containers Vertex AI provides pre-built containers to run training and prediction. For the latest list, see Pre-built containers for training and Pre-built containers for prediction. End of explanation """ MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) """ Explanation: Set machine types Next, set the machine types to use for training and prediction. Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction. machine type n1-standard: 3.75GB of memory per vCPU n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation """ JOB_NAME = "custom_job_" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME) if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] TRAIN_STRATEGY """ Explanation: Distributed training and deployment Now you are ready to start creating your own custom-trained model with MNIST and deploying it as an online prediction service.
Train a model There are two ways you can train a custom model using a container image: Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model. Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model. Define the command args for the training script Prepare the command-line arguments to pass to your training script. - args: The command line arguments to pass to the corresponding Python module. In this example, they will be: - "--epochs=" + EPOCHS: The number of epochs for training. - "--steps=" + STEPS: The number of steps (batches) per epoch. - "--distribute=" + TRAIN_STRATEGY: The training distribution strategy to use for single or distributed training. - "single": single device. - "mirror": all GPU devices on a single compute instance. - "multi": all GPU devices on all compute instances.
End of explanation """ %%writefile task.py # Single, Mirror and Multi-Machine Distributed Training for MNIST import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib import argparse import os import sys tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--lr', dest='lr', default=0.01, type=float, help='Learning rate.') parser.add_argument('--epochs', dest='epochs', default=10, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=200, type=int, help='Number of steps per epoch.') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) print('DEVICES', device_lib.list_local_devices()) # Single Machine, single compute device if args.distribute == 'single': if tf.test.is_gpu_available(): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") else: strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0") # Single Machine, multiple compute device elif args.distribute == 'mirror': strategy = tf.distribute.MirroredStrategy() # Multiple Machine, multiple compute device elif args.distribute == 'multi': strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # Multi-worker configuration print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync)) # Preparing dataset BUFFER_SIZE = 10000 BATCH_SIZE = 64 def make_datasets_unbatched(): # Scaling MNIST data from (0, 255] to (0., 1.] 
def scale(image, label): image = tf.cast(image, tf.float32) image /= 255.0 return image, label datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat() # Build the Keras model def build_and_compile_cnn_model(): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile( loss=tf.keras.losses.sparse_categorical_crossentropy, optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr), metrics=['accuracy']) return model # Train the model NUM_WORKERS = strategy.num_replicas_in_sync # Here the batch size scales up by number of workers since # `tf.data.Dataset.batch` expects the global batch size. GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS MODEL_DIR = os.getenv("AIP_MODEL_DIR") train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE) with strategy.scope(): # Creation of dataset, and model building/compiling need to be within # `strategy.scope()`. model = build_and_compile_cnn_model() model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps) model.save(MODEL_DIR) """ Explanation: Training script In the next cell, you will write the contents of the training script, task.py. In summary: Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service. Loads MNIST dataset from TF Datasets (tfds). Builds a model using TF.Keras model API. Compiles the model (compile()). Sets a training distribution strategy according to the argument args.distribute. Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps Saves the trained model (save(MODEL_DIR)) to the specified model directory. 
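One line in the script worth spelling out: under a distribution strategy, tf.data.Dataset.batch expects the global batch size, so the per-replica batch size is multiplied by the number of replicas in sync. The arithmetic, with an assumed replica count of 4:

```python
BATCH_SIZE = 64       # per-replica batch size, as set in task.py
num_replicas = 4      # hypothetical value of strategy.num_replicas_in_sync
GLOBAL_BATCH_SIZE = BATCH_SIZE * num_replicas
print(GLOBAL_BATCH_SIZE)  # 256
```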
End of explanation """ job = aiplatform.CustomTrainingJob( display_name=JOB_NAME, script_path="task.py", container_uri=TRAIN_IMAGE, requirements=["tensorflow_datasets==1.3.0"], model_serving_container_image_uri=DEPLOY_IMAGE, ) MODEL_DISPLAY_NAME = "mnist-" + TIMESTAMP # Start the training if TRAIN_GPU: model = job.run( model_display_name=MODEL_DISPLAY_NAME, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_type=TRAIN_GPU.name, accelerator_count=TRAIN_NGPU, ) else: model = job.run( model_display_name=MODEL_DISPLAY_NAME, args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_count=0, ) """ Explanation: Train the model Define your custom training job on Vertex AI. Use the CustomTrainingJob class to define the job, which takes the following parameters: display_name: The user-defined name of this training pipeline. script_path: The local path to the training script. container_uri: The URI of the training container image. requirements: The list of Python package dependencies of the script. model_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container. Use the run function to start training, which takes the following parameters: args: The command line arguments to be passed to the Python script. replica_count: The number of worker replicas. model_display_name: The display name of the Model if the script produces a managed Model. machine_type: The type of machine to use for training. accelerator_type: The hardware accelerator type. accelerator_count: The number of accelerators to attach to a worker replica. The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object. 
You can read more about the CustomTrainingJob.run API here End of explanation """ DEPLOYED_NAME = "mnist_deployed-" + TIMESTAMP TRAFFIC_SPLIT = {"0": 100} MIN_NODES = 1 MAX_NODES = 1 if DEPLOY_GPU: endpoint = model.deploy( deployed_model_display_name=DEPLOYED_NAME, traffic_split=TRAFFIC_SPLIT, machine_type=DEPLOY_COMPUTE, accelerator_type=DEPLOY_GPU.name, accelerator_count=DEPLOY_NGPU, min_replica_count=MIN_NODES, max_replica_count=MAX_NODES, ) else: endpoint = model.deploy( deployed_model_display_name=DEPLOYED_NAME, traffic_split=TRAFFIC_SPLIT, machine_type=DEPLOY_COMPUTE, accelerator_type=None, accelerator_count=0, min_replica_count=MIN_NODES, max_replica_count=MAX_NODES, ) """ Explanation: To view the training pipeline status, you have to navigate to Vertex AI ➞ Training <img src="https://i.ibb.co/Xs3myZQ/vertex-ai-training.png" width="200"> You can see the status of the current training pipeline as shown below. Once the model has been successfully trained, you can see a custom-trained model if you head to Vertex AI ➞ Models <img src="https://i.ibb.co/gDYLvM6/screenshot-2021-08-19-at-7-41-25-pm.png" width="200"> Deploy the model Before you use your model to make predictions, you need to deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource. This will do two things: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. The function takes the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ...
}, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100. machine_type: The type of machine to use for serving. accelerator_type: The hardware accelerator type. accelerator_count: The number of accelerators to attach to a worker replica. starting_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned. Traffic split The traffic_split parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set the percentage of traffic that goes to each instance. You can use a traffic split to introduce a new model gradually into production. For example, if you had one existing model in production with 100% of the traffic, you could deploy a new model to the same endpoint, direct 10% of traffic to it, and reduce the original model's traffic to 90%. This allows you to monitor the new model's performance while minimizing the disruption to the majority of users. Compute instance scaling You can specify a single instance (or node) to serve your online prediction requests. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1. If you want to use multiple nodes to serve your online prediction requests, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes. Endpoint The method will block until the model is deployed and eventually return an Endpoint object. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
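As a concrete illustration of the gradual rollout described above, a traffic_split directing 10% of traffic to the model being deployed and 90% to an existing one might look like this (the deployed model id "1234567890" is a made-up placeholder):

```python
# "0" always refers to the model currently being deployed; any other key is
# the id of a model already deployed to the endpoint.
traffic_split = {"0": 10, "1234567890": 90}
assert sum(traffic_split.values()) == 100  # the percents must add up to 100
```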
End of explanation """ import tensorflow_datasets as tfds import numpy as np datasets, info = tfds.load(name='mnist', with_info=True, batch_size=-1, as_supervised=True) test_dataset = datasets['test'] """ Explanation: In order to view your deployed endpoint, you can head over to Vertex AI ➞ Endpoints <img src="https://i.ibb.co/qBmmYVB/vertex-ai-endpoints.png" width="200"> You can check if your endpoint is in the list of the currently deployed/deploying endpoints. To view the details of the endpoint that is currently deploying, you can simply click on the endpoint name. Once deployment is successful, you should be able to see a green tick next to the endpoint name in the above screenshot. Make an online prediction request Send an online prediction request to your deployed model. Testing Get the test dataset and load the images/labels. Set the batch size to -1 to load the entire dataset.
Use the Endpoint object's predict function, which takes the following parameters: instances: A list of image instances. According to your custom model, each image instance should be a 3-dimensional matrix of floats. This was prepared in the previous step. The predict function returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction: Confidence level for the prediction (predictions), between 0 and 1, for each of the ten classes. You can then run a quick evaluation on the prediction results: 1. np.argmax: Convert each list of confidence levels to a label 2. Compare the predicted labels to the actual labels 3. Calculate accuracy as correct/total End of explanation """ deployed_model_id = endpoint.list_models()[0].id endpoint.undeploy(deployed_model_id=deployed_model_id) """ Explanation: Undeploy the model To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter: deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed. You can retrieve the deployed models using the endpoint's deployed_models property. Since this is the only deployed model on the Endpoint resource, you can omit traffic_split. End of explanation """ delete_training_job = True delete_model = True delete_endpoint = True # Warning: Setting this to true will delete everything in your bucket delete_bucket = True # Delete the training job job.delete() # Delete the model model.delete() # Delete the endpoint endpoint.delete() if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil -m rm -r $BUCKET_NAME """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. 
Otherwise, you can delete the individual resources you created in this tutorial: Training Job Model Endpoint Cloud Storage Bucket End of explanation """
tylere/jupyterlab-ee
ipynb/ee-test.ipynb
apache-2.0
import ipywidgets ipywidgets.IntSlider() """ Explanation: Test ipywidgets End of explanation """ import ipyleaflet ipyleaflet.Map(zoom=2) """ Explanation: Test ipyleaflet End of explanation """ import ee from IPython.display import Image ee.Initialize() url = ee.Image("CGIAR/SRTM90_V4").getThumbUrl({'min':0, 'max':3000}) Image(url=url) """ Explanation: Test Earth Engine (static maps) Note that you will need to authenticate the Jupyter server to Earth Engine first, before running the ee.Initialize() method. To authenticate, run the command 'earthengine authenticate' in a terminal window and follow the instructions. End of explanation """ import ipyleafletee map1 = ipyleafletee.Map(zoom=2) map1 landsat8 = ( ee.ImageCollection('LANDSAT/LC08/C01/T1_RT_TOA') .filterDate('2017-06-01', '2017-06-09') ) map_id = landsat8.median().visualize( min=0, max=0.3, bands= ['B4', 'B3', 'B2'] ).getMapId() tile_url = "https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}".format(**map_id) map2 = ipyleafletee.Map(zoom=2) map2.add_layer(ipyleafletee.TileLayer(url=tile_url)) display(map2) """ Explanation: Test Earth Engine (interactive maps) End of explanation """
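One detail worth isolating from the tile-URL cell above: str.format with dictionary unpacking fills single-brace fields immediately, while doubled braces survive as literal {z}/{x}/{y} placeholders for the map client to fill in later. A standalone sketch with made-up mapid and token values:

```python
# Made-up credentials, purely to show the two levels of brace substitution.
map_id = {'mapid': 'abc123', 'token': 'tok456'}

template = "https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}"
tile_url = template.format(**map_id)

# {mapid}/{token} were substituted; {{z}}/{{x}}/{{y}} became literal {z}/{x}/{y}.
print(tile_url)
# → https://earthengine.googleapis.com/map/abc123/{z}/{x}/{y}?token=tok456
```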
tensorflow/agents
docs/tutorials/9_c51_tutorial.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TF-Agents Authors. End of explanation """ !sudo apt-get update !sudo apt-get install -y xvfb ffmpeg freeglut3-dev !pip install 'imageio==2.4.0' !pip install pyvirtualdisplay !pip install tf-agents !pip install pyglet from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.categorical_dqn import categorical_dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import categorical_q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common # Set up a virtual display for rendering OpenAI gym environments. 
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() """ Explanation: DQN C51/Rainbow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Introduction This example shows how to train a Categorical DQN (C51) agent on the Cartpole environment using the TF-Agents library. Make sure you take a look through the DQN tutorial as a prerequisite. This tutorial will assume familiarity with the DQN tutorial; it will mainly focus on the differences between DQN and C51. 
Setup If you haven't installed tf-agents yet, run: End of explanation """ env_name = "CartPole-v1" # @param {type:"string"} num_iterations = 15000 # @param {type:"integer"} initial_collect_steps = 1000 # @param {type:"integer"} collect_steps_per_iteration = 1 # @param {type:"integer"} replay_buffer_capacity = 100000 # @param {type:"integer"} fc_layer_params = (100,) batch_size = 64 # @param {type:"integer"} learning_rate = 1e-3 # @param {type:"number"} gamma = 0.99 log_interval = 200 # @param {type:"integer"} num_atoms = 51 # @param {type:"integer"} min_q_value = -20 # @param {type:"integer"} max_q_value = 20 # @param {type:"integer"} n_step_update = 2 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 1000 # @param {type:"integer"} """ Explanation: Hyperparameters End of explanation """ train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) """ Explanation: Environment Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200. End of explanation """ categorical_q_net = categorical_q_network.CategoricalQNetwork( train_env.observation_spec(), train_env.action_spec(), num_atoms=num_atoms, fc_layer_params=fc_layer_params) """ Explanation: Agent C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space. The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value: By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, leading to improved final performance. 
This is particularly true in situations with bimodal or even multimodal value distributions, where a single average does not provide an accurate picture. In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations in order to calculate its loss function. But don't worry, all of this is taken care of for you in TF-Agents! To create a C51 Agent, we first need to create a CategoricalQNetwork. The API of the CategoricalQNetwork is the same as that of the QNetwork, except that there is an additional argument num_atoms. This represents the number of support points in our probability distribution estimates. (The above image includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51. End of explanation """ optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.Variable(0) agent = categorical_dqn_agent.CategoricalDqnAgent( train_env.time_step_spec(), train_env.action_spec(), categorical_q_network=categorical_q_net, optimizer=optimizer, min_q_value=min_q_value, max_q_value=max_q_value, n_step_update=n_step_update, td_errors_loss_fn=common.element_wise_squared_loss, gamma=gamma, train_step_counter=train_step_counter) agent.initialize() """ Explanation: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated. Note that one other significant difference from vanilla DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20. 
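The relationship between num_atoms, min_q_value and max_q_value can be made concrete without TF-Agents: the categorical network predicts a probability for each of the fixed support points, and a scalar Q-value is recovered as the expectation over that support. A plain NumPy sketch (the probabilities below are random stand-ins for network outputs):

```python
import numpy as np

num_atoms = 51
min_q_value, max_q_value = -20.0, 20.0

# Fixed support: num_atoms equally spaced atom locations between the extremes.
support = np.linspace(min_q_value, max_q_value, num_atoms)

# Stand-in for the network's per-atom probabilities (softmax over random logits).
rng = np.random.default_rng(0)
logits = rng.normal(size=num_atoms)
probs = np.exp(logits) / np.exp(logits).sum()

# A scalar Q-value is the expectation of the categorical distribution.
q_value = float(np.sum(probs * support))
assert min_q_value <= q_value <= max_q_value
```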
End of explanation """ #@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) compute_avg_return(eval_env, random_policy, num_eval_episodes) # Please also see the metrics module for standard implementations of different # metrics. """ Explanation: One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as: $G_t = R_{t + 1} + \gamma V(s_{t + 1})$ where we define $V(s) = \max_a{Q(s, a)}$. N-step updates involve expanding the standard single-step return function $n$ times: $G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$ N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning. Although C51 and n-step updates are often combined with prioritized replay to form the core of the Rainbow agent, we saw no measurable improvement from implementing prioritized replay. Moreover, we find that when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested. Metrics and Evaluation The most common metric used to evaluate a policy is the average return. 
The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows. End of explanation """ #@test {"skip": true} replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity) def collect_step(environment, policy): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer replay_buffer.add_batch(traj) for _ in range(initial_collect_steps): collect_step(train_env, random_policy) # This loop is so common in RL, that we provide standard implementations of # these. For more details see the drivers module. # Dataset generates trajectories with shape [BxTx...] where # T = n_step_update + 1. dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=n_step_update + 1).prefetch(3) iterator = iter(dataset) """ Explanation: Data Collection As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy. End of explanation """ #@test {"skip": true} try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. agent.train = common.function(agent.train) # Reset the train step agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few steps using collect_policy and save to the replay buffer. for _ in range(collect_steps_per_iteration): collect_step(train_env, agent.collect_policy) # Sample a batch of data from the buffer and update the agent's network. 
experience, unused_info = next(iterator) train_loss = agent.train(experience) step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return)) returns.append(avg_return) """ Explanation: Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing. The following will take ~7 minutes to run. End of explanation """ #@test {"skip": true} steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=550) """ Explanation: Visualization Plots We can plot return vs global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500. End of explanation """ def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) """ Explanation: Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab. 
End of explanation """ num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename) """ Explanation: The following code visualizes the agent's policy for a few episodes: End of explanation """
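As a closing sanity check on the n-step return used earlier in this tutorial, $G_t^n = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^n V(s_{t+n})$, the sketch below evaluates the formula directly and via backward recursion for made-up rewards; both routes must agree:

```python
import numpy as np

gamma = 0.99
rewards = [1.0, 1.0, 1.0]    # made-up rewards R_{t+1} .. R_{t+n}
bootstrap_value = 10.0       # made-up V(s_{t+n})
n = len(rewards)

# Direct evaluation of G_t^n.
g_direct = sum(gamma**k * r for k, r in enumerate(rewards)) + gamma**n * bootstrap_value

# Backward recursion G = R + gamma * G_next yields the same number.
g_backward = bootstrap_value
for r in reversed(rewards):
    g_backward = r + gamma * g_backward

assert np.isclose(g_direct, g_backward)
```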
sailuh/perceive
Notebooks/Full_Disclosure/full_disclosure_corpus_statistics.ipynb
gpl-2.0
#Input year for which the word count histogram is being plotted year='2012' #Directory with the files path = 'data/input/bodymessage_corpus/'+year file_name_list=[] file_wc_list=[] file_uq_wc_list=[] file_wc_df = pd.DataFrame(columns = ["file_name","word_count","unique_word_count"]) #function that returns the word count for all the documents in a list def doc_wordcnt(path_file_name, file_name): #use open() for opening file. with open(path_file_name) as f: #create a list of all words fetched from the file using a list comprehension words = [word for line in f for word in line.split()] unique_words = set(words) #append the file name and the word count to a list file_name_list.append(file_name) file_wc_list.append(len(words)) file_uq_wc_list.append(len(unique_words)) #loop within the directory to get all the file names for filename in os.listdir(path): #get the word count for every file doc_wordcnt(path+'/'+filename, filename) #populate the column of the dataframe using the values populated in the list file_wc_df["file_name"] = file_name_list file_wc_df["word_count"] = file_wc_list file_wc_df["unique_word_count"] = file_uq_wc_list file_wc_df.head() #Sort in descending order of word count to split and plot multiple histograms file_wc_asc_df = file_wc_df.sort_values("word_count", ascending=False); file_uq_wc_asc_df = file_wc_df.sort_values("unique_word_count", ascending=False); """ Explanation: 1. Introduction We are interested in observing the word frequency across e-mail replies of the Full Disclosure (FD) mailing list, which is a "public, vendor-neutral forum for detailed discussion of vulnerabilities and exploitation techniques, as well as tools, papers, news, and events of interest to the community." The analysis and the plots shared in this notebook involve the corpus of the Full Disclosure mailing list, which has been extracted using crawlers, and aim to derive insights from this data. In this notebook, 2012 will be used for single-year analysis. 2.
Total Word Count per Document Table We create a dataframe from the input file where every row is a document from the corpus, the total word count and the number of unique words for every document. A snippet of the dataframe is shared below for better understanding. End of explanation """ #Create a list that consists of plots to be plotted in a grid plot_lst=[] #Function to plot multiple histograms def plot_main_hist(plot_index, plot_df, plot_year): #plot the histogram based on values in the dataframe range_max=plot_df["word_count"].iloc[0] range_min=plot_df["word_count"].iloc[-1] p = Histogram(plot_df,bins=20, values='word_count',title="Plot "+str(plot_index)+". "+plot_year+" Histogram; Range: "+str(range_min)+" to "+str(range_max)) p.xaxis.axis_label = 'Word Count per Document' p.xaxis.axis_label_text_font_size = '10pt' p.xaxis.major_label_text_font_size = '10pt' p.yaxis.axis_label = 'Frequency' p.yaxis.axis_label_text_font_size = '10pt' p.yaxis.major_label_text_font_size = '10pt' show(p) #Function to plot multiple histograms def plot_hist(plot_index, plot_df, plot_year): #plot the histogram based on values in the dataframe range_max=plot_df["word_count"].iloc[0] range_min=plot_df["word_count"].iloc[-1] p = Histogram(plot_df,bins=20, values='word_count',title="Plot "+str(plot_index)+". 
"+plot_year+" Histogram; Range: "+str(range_min)+" to "+str(range_max)) p.xaxis.axis_label = 'Word Count per Document' p.xaxis.axis_label_text_font_size = '10pt' p.xaxis.major_label_text_font_size = '10pt' p.yaxis.axis_label = 'Frequency' p.yaxis.axis_label_text_font_size = '10pt' p.yaxis.major_label_text_font_size = '10pt' plot_lst.append(p) #Create dataframes and split the sorted dataframe into 5 dataframes file_wc_asc_df1 = pd.DataFrame(data=None, columns=file_wc_asc_df.columns) file_wc_asc_df2 = pd.DataFrame(data=None, columns=file_wc_asc_df.columns) file_wc_asc_df3 = pd.DataFrame(data=None, columns=file_wc_asc_df.columns) file_wc_asc_df4 = pd.DataFrame(data=None, columns=file_wc_asc_df.columns) file_wc_asc_df5 = pd.DataFrame(data=None, columns=file_wc_asc_df.columns) file_wc_asc_df1=file_wc_asc_df.loc[(file_wc_asc_df['word_count'] >= 0) & (file_wc_asc_df['word_count'] <= 2000)] file_wc_asc_df2=file_wc_asc_df.loc[(file_wc_asc_df['word_count'] >= 2001) & (file_wc_asc_df['word_count'] <= 4000)] file_wc_asc_df3=file_wc_asc_df.loc[(file_wc_asc_df['word_count'] >= 4001) & (file_wc_asc_df['word_count'] <= 6000)] file_wc_asc_df4=file_wc_asc_df.loc[(file_wc_asc_df['word_count'] >= 6001) & (file_wc_asc_df['word_count'] <= 8000)] file_wc_asc_df5=file_wc_asc_df.loc[(file_wc_asc_df['word_count'] >= 8001)] #Get the year from the filename for the first record year=file_wc_asc_df.file_name[0][:4] #call the histogram function for each dataframe plot_main_hist(1, file_wc_asc_df, year) year=file_wc_asc_df1.file_name[0][:4] plot_hist(2.1, file_wc_asc_df2, year) year=file_wc_asc_df1.file_name[0][:4] plot_hist(2.2, file_wc_asc_df3, year) year=file_wc_asc_df1.file_name[0][:4] plot_hist(2.2, file_wc_asc_df4, year) year=file_wc_asc_df1.file_name[0][:4] plot_hist(2.3, file_wc_asc_df5, year) #Make a grid grid = gridplot(plot_lst, ncols=2, plot_width=450, plot_height=350) #Show the results show(grid) """ Explanation: 3. 
Word Count per Document Histogram The grid displayed below consists of histograms that show the number of words per document for '2012'. To achieve this, the dataframe is split into five subsets and a histogram is plotted for each subset to get a better picture of the number of words. End of explanation """ fileRange_df= pd.DataFrame(columns=['Range','File Names']) range_lst=[] file_names_lst=[] def append_dataframe(dframe): range_min=dframe["word_count"].iloc[-1] range_max=dframe["word_count"].iloc[0] range_str=str(range_min)+" to "+str(range_max) range_lst.append(range_str) file_names_lst.append(', '.join(dframe["file_name"])) append_dataframe(file_wc_asc_df2); append_dataframe(file_wc_asc_df3); append_dataframe(file_wc_asc_df4); append_dataframe(file_wc_asc_df5); fileRange_df["Range"]=range_lst fileRange_df["File Names"]=file_names_lst fileRange_df.head() """ Explanation: 3.1 Large e-mail discussions All the large emails in the corpus for 2012 have been listed in the table below with the respective word count range. End of explanation """ #Create a list that consists of plots to be plotted in a grid plot_lst=[] #Function to plot histograms def plot_hist(plot_index, plot_df, plot_year): #plot the histogram based on values in the dataframe range_max=plot_df["unique_word_count"].iloc[0] range_min=plot_df["unique_word_count"].iloc[-1] p = Histogram(plot_df,bins=20, values='unique_word_count',title="Plot "+str(plot_index)+". 
"+plot_year+" Histogram; Range: "+str(range_min)+" to "+str(range_max)) p.xaxis.axis_label = 'Unique Word Count per Document' p.xaxis.axis_label_text_font_size = '10pt' p.xaxis.major_label_text_font_size = '10pt' p.yaxis.axis_label = 'Frequency' p.yaxis.axis_label_text_font_size = '10pt' p.yaxis.major_label_text_font_size = '10pt' show(p) #Create dataframes file_uq_wc_asc_df2 = pd.DataFrame(data=None, columns=file_uq_wc_asc_df.columns) file_uq_wc_asc_df2=file_uq_wc_asc_df.loc[(file_uq_wc_asc_df['unique_word_count'] >= 700) & (file_uq_wc_asc_df['unique_word_count'] <= 4000)] #Get the year from the filename for the first record year=file_uq_wc_asc_df.file_name[0][:4] #call the histogram function for each dataframe plot_hist(1, file_uq_wc_asc_df, year) year=file_uq_wc_asc_df.file_name[0][:4] plot_hist(2.1, file_uq_wc_asc_df2, year) """ Explanation: Vocabulary per Document Histogram The grid displayed below consists of histograms that show the number of unique words per Document for '2012'. As shown below, there is only a single document in the entire dataset that has a count of unique words greater than 2000. Majority of the documents in the dataset have a unique word count between 100 and 200. End of explanation """
mne-tools/mne-tools.github.io
0.13/_downloads/plot_run_ica.ipynb
bsd-3-clause
# Authors: Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import mne from mne.preprocessing import ICA, create_ecg_epochs from mne.datasets import sample print(__doc__) """ Explanation: Compute ICA components on epochs ICA is fit to MEG raw data. We assume that the non-stationary EOG artifacts have already been removed. The sources matching the ECG are automatically found and displayed. Note that this example does quite a bit of processing, so even on a fast machine it can take about a minute to complete. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True) raw.filter(1, 30, method='iir') # longer + more epochs for more artifact exposure events = mne.find_events(raw, stim_channel='STI 014') epochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5) """ Explanation: Read and preprocess the data. Preprocessing consists of: meg channel selection 1 - 30 Hz band-pass IIR filter epoching -0.2 to 0.5 seconds with respect to events End of explanation """ ica = ICA(n_components=0.95, method='fastica').fit(epochs) ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5) ecg_inds, scores = ica.find_bads_ecg(ecg_epochs) ica.plot_components(ecg_inds) """ Explanation: Fit ICA model using the FastICA algorithm, detect and plot components explaining ECG artifacts. End of explanation """ ica.plot_properties(epochs, picks=ecg_inds) """ Explanation: Plot properties of ECG components: End of explanation """
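The automatic detection performed by find_bads_ecg above amounts to scoring each ICA source by its similarity to the ECG activity. The toy NumPy sketch below illustrates that correlation-scoring idea on synthetic signals; it is a conceptual stand-in for explanation only, not MNE's actual implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
t = np.linspace(0, 10, 2000)

# Three synthetic "unmixed sources": noise, a pseudo-ECG spike train, noise.
ecg_like = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)
sources = np.vstack([rng.randn(t.size), ecg_like, rng.randn(t.size)])

# Reference ECG channel: the pseudo-ECG plus measurement noise.
ecg_ref = ecg_like + 0.1 * rng.randn(t.size)

# Score each source by absolute Pearson correlation with the reference;
# the best-scoring source is flagged as the ECG component.
scores = [abs(np.corrcoef(src, ecg_ref)[0, 1]) for src in sources]
best = int(np.argmax(scores))
print("ECG-like source index:", best)  # expected: 1
```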
fweik/espresso
doc/tutorials/lennard_jones/lennard_jones.ipynb
gpl-3.0
import espressomd required_features = ["LENNARD_JONES"] espressomd.assert_features(required_features) from espressomd import observables, accumulators, analyze # Importing other relevant python modules import numpy as np import matplotlib.pyplot as plt from scipy import optimize np.random.seed(42) plt.rcParams.update({'font.size': 22}) # System parameters N_PART = 200 DENSITY = 0.75 BOX_L = np.power(N_PART / DENSITY, 1.0 / 3.0) * np.ones(3) """ Explanation: Introductory Tutorial: Lennard-Jones Liquid Table of Contents Introduction Background The Lennard-Jones Potential Units First steps Overview of a simulation script System setup Placing and accessing particles Setting up non-bonded interactions Energy minimization Choosing the thermodynamic ensemble, thermostat Integrating equations of motion and taking manual measurements Automated data collection Further Exercises Binary Lennard-Jones Liquid References Introduction Welcome to the basic ESPResSo tutorial! In this tutorial, you will learn, how to use the ESPResSo package for your research. We will cover the basics of ESPResSo, i.e., how to set up and modify a physical system, how to run a simulation, and how to load, save and analyze the produced simulation data. More advanced features and algorithms available in the ESPResSo package are described in additional tutorials. Background Today's research on Soft Condensed Matter has brought the needs for having a flexible, extensible, reliable, and efficient (parallel) molecular simulation package. For this reason ESPResSo (Extensible Simulation Package for Research on Soft Matter Systems) <a href='#[1]'>[1]</a> has been developed at the Max Planck Institute for Polymer Research, Mainz, and at the Institute for Computational Physics at the University of Stuttgart in the group of Prof. Dr. Christian Holm <a href='#[2]'>[2,3]</a>. The ESPResSo package is probably the most flexible and extensible simulation package in the market. 
It is specifically developed for coarse-grained molecular dynamics (MD) simulation of polyelectrolytes but is not necessarily limited to this. For example, it could also be used to simulate granular media. ESPResSo has been nominated for the Heinz-Billing-Preis for Scientific Computing in 2003 <a href='#[4]'>[4]</a>. The Lennard-Jones Potential A pair of neutral atoms or molecules is subject to two distinct forces in the limit of large separation and small separation: an attractive force at long ranges (van der Waals force, or dispersion force) and a repulsive force at short ranges (the result of overlapping electron orbitals, referred to as Pauli repulsion from the Pauli exclusion principle). The Lennard-Jones potential (also referred to as the L-J potential, 6-12 potential or, less commonly, 12-6 potential) is a simple mathematical model that represents this behavior. It was proposed in 1924 by John Lennard-Jones. The L-J potential is of the form \begin{equation} V(r) = 4\epsilon \left[ \left( \dfrac{\sigma}{r} \right)^{12} - \left( \dfrac{\sigma}{r} \right)^{6} \right] \end{equation} where $\epsilon$ is the depth of the potential well, $\sigma$ is the (finite) distance at which the inter-particle potential is zero, and $r$ is the distance between the particles. The $\left(\frac{1}{r}\right)^{12}$ term describes repulsion and the $\left(\frac{1}{r}\right)^{6}$ term describes attraction. The Lennard-Jones potential is an approximation. The form of the repulsion term has no theoretical justification; the repulsion force should depend exponentially on the distance, but the repulsion term of the L-J formula is more convenient due to the ease and efficiency of computing $r^{12}$ as the square of $r^6$. In practice, the L-J potential is typically cut off beyond a specified distance $r_{c}$ and the potential at the cutoff distance is zero.
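The formula above translates directly into a few lines of NumPy. The sketch below (independent of ESPResSo) evaluates $V(r)$ and checks two textbook properties: $V(\sigma) = 0$, and a minimum of depth $-\epsilon$ at $r = 2^{1/6}\sigma$:

```python
import numpy as np

def lj_potential(r, eps=1.0, sig=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sig/r)**12 - (sig/r)**6)."""
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

assert np.isclose(lj_potential(1.0), 0.0)                  # V(sigma) = 0
assert np.isclose(lj_potential(2.0 ** (1.0 / 6.0)), -1.0)  # well depth -eps
assert lj_potential(2.5) < 0.0                             # attractive tail
```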
<figure> <img src='figures/lennard-jones-potential.png' alt='missing' style='width: 600px;'/> <center> <figcaption>Figure 1: Lennard-Jones potential</figcaption> </center> </figure> Units Novice users must understand that ESPResSo has no fixed unit system. The unit system is set by the user. Conventionally, reduced units are employed, in other words, LJ units. First steps What is ESPResSo? It is an extensible, efficient Molecular Dynamics package especially powerful for simulating charged systems. In-depth information about the package can be found in the relevant sources <a href='#[1]'>[1,4,2,3]</a>. ESPResSo consists of two components. The simulation engine is written in C++ for the sake of computational efficiency. The steering or control level is interfaced to the kernel via an interpreter of the Python scripting language. The kernel performs all computationally demanding tasks. Above all, the integration of Newton's equations of motion, including the calculation of energies and forces. It also takes care of the internal organization of data, such as storing particle data and handling communication between different processors or cells of the cell-system. The scripting interface (Python) is used to set up the system (particles, boundary conditions, interactions etc.), control the simulation, run analysis, and store and load results. The user has at hand the full reliability and functionality of the scripting language. For instance, it is possible to use the SciPy package for analysis and PyPlot for plotting. With a certain overhead in efficiency, it can also be used to reject/accept new configurations in combined MD/MC schemes. In principle, any parameter which is accessible from the scripting level can be changed at any moment of runtime. In this way methods like thermodynamic integration become readily accessible. Note: This tutorial assumes that you already have a working ESPResSo installation on your system.
If this is not the case, please consult the first chapters of the user's guide for installation instructions. Overview of a simulation script Typically, a simulation script consists of the following parts: System setup (box geometry, thermodynamic ensemble, integrator parameters) Placing the particles Setup of interactions between particles Warm up (bringing the system into a state suitable for measurements) Integration loop (propagate the system in time and record measurements) System setup The functionality of ESPResSo for python is provided via a python module called espressomd. At the beginning of the simulation script, it has to be imported. End of explanation """ # Test solution of Exercise 1 assert isinstance(system, espressomd.System) """ Explanation: The next step would be to create an instance of the System class. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist. Exercise: Create an instance of an espresso system and store it in a variable called <tt>system</tt>; use <tt>BOX_L</tt> as box length. See ESPResSo documentation and module documentation. python system = espressomd.System(box_l=BOX_L) End of explanation """ SKIN = 0.4 TIME_STEP = 0.01 system.time_step = TIME_STEP system.cell_system.skin = SKIN """ Explanation: It can be used to store and manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>). End of explanation """ # Test that now we have indeed N_PART particles in the system assert len(system.part) == N_PART """ Explanation: Placing and accessing particles Particles in the simulation can be added and accessed via the <tt>part</tt> property of the System class. Individual particles are referred to by an integer id, e.g., <tt>system.part[0]</tt>. If <tt>id</tt> is unspecified, an unused particle id is automatically assigned. 
It is also possible to use common python iterators and slicing operations to add or access several particles at once. Particles can be grouped into several types, so that, e.g., a binary fluid can be simulated. Particle types are identified by integer ids, which are set via the particles' <tt>type</tt> attribute. If it is not specified, zero is implied. <!-- **Exercise:** --> Create <tt>N_PART</tt> particles at random positions. Use system.part.add(). Either write a loop or use an (<tt>N_PART</tt> x 3) array for positions. Use <tt>np.random.random()</tt> to generate random numbers. python system.part.add(type=[0] * N_PART, pos=np.random.random((N_PART, 3)) * system.box_l) End of explanation """ # Access position of a single particle print("position of particle with id 0:", system.part[0].pos) # Iterate over the first five particles for the purpose of demonstration. # For accessing all particles, use a slice: system.part[:] for i in range(5): print("id", i, "position:", system.part[i].pos) print("id", i, "velocity:", system.part[i].v) # Obtain all particle positions cur_pos = system.part[:].pos """ Explanation: The particle properties can be accessed using standard numpy slicing syntax: End of explanation """ print(system.part[0]) """ Explanation: Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function: End of explanation """ # use LJ units: EPS=SIG=1 LJ_EPS = 1.0 LJ_SIG = 1.0 LJ_CUT = 2.5 * LJ_SIG """ Explanation: Setting up non-bonded interactions Non-bonded interactions act between all particles of a given combination of particle types. In this tutorial, we use the Lennard-Jones non-bonded interaction. First we define the LJ parameters End of explanation """ assert (BOX_L - 2 * SKIN > LJ_CUT).all() """ Explanation: In a periodic system it is in general not straight forward to calculate all non-bonded interactions. 
Due to the periodicity, and to speed up calculations, a cut-off $r_\mathrm{c}$ is usually applied to infinite-range potentials like Lennard-Jones, such that $V(r>r_\mathrm{c}) = 0$. The potential can be shifted to zero at the cutoff value to ensure continuity using the <tt>shift='auto'</tt> option of espressomd.interactions.LennardJonesInteraction. To allow for comparison with the fundamental work on MD simulations of LJ systems <a href='#[6]'>[6]</a> we don't shift the potential to zero at the cutoff and instead correct for the long-range error $V_\mathrm{lr}$ later. To avoid spurious self-interactions of particles with their periodic images, one usually requires that the shortest box length is at least twice the cutoff distance:
End of explanation
"""

F_TOL = 1e-2
DAMPING = 30
MAX_STEPS = 10000
MAX_DISPLACEMENT = 0.01 * LJ_SIG
EM_STEP = 10
"""
Explanation: Exercise: Set up a Lennard-Jones interaction with $\epsilon=$<tt>LJ_EPS</tt> and $\sigma=$<tt>LJ_SIG</tt> that is cut at $r_\mathrm{c}=$<tt>LJ_CUT</tt>$\times\sigma$ and not shifted. Hint: * Have a look at the docs python system.non_bonded_inter[0, 0].lennard_jones.set_params( epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=LJ_CUT, shift=0) Energy minimization In many cases, including this tutorial, particles are initially placed randomly in the simulation box. It is therefore possible that particles overlap, resulting in a huge repulsive force between them. In this case, integrating the equations of motion would not be numerically stable. Hence, it is necessary to remove this overlap. This is typically done by performing a steepest descent minimization of the potential energy until a maximal force criterion is reached. Note: Making sure a system is well equilibrated highly depends on the system's details. In most cases a relative convergence criterion on the forces and/or energies works well, but you might have to make sure that the total force is smaller than a threshold value <tt>f_max</tt> at the end of the minimization.
Depending on the simulated system, other strategies, like simulations with a small time step or capped forces, might be necessary.
End of explanation
"""

# check that after the exercise the total energy is negative
assert system.analysis.energy()['total'] < 0
# reset clock
system.time = 0.
"""
Explanation: Exercise: Use <tt>espressomd.integrate.set_steepest_descent</tt> to relax the initial configuration. Use a maximal displacement of <tt>MAX_DISPLACEMENT</tt>. A damping constant <tt>gamma = DAMPING</tt> is usually a good choice. Use the relative change of the system's maximal force as a convergence criterion. See the documentation of the <tt>espressomd.particle_data</tt> module on how to obtain the forces. The steepest descent has converged if the relative force change < <tt>F_TOL</tt>. Break the minimization loop after a maximal number of <tt>MAX_STEPS</tt> steps or if convergence is achieved. Check for convergence every <tt>EM_STEP</tt> steps. Hint: To obtain the initial forces one has to initialize the integrator using <tt>integ_steps=0</tt>, i.e. call <tt>system.integrator.run(0)</tt> before the force array can be accessed. ```python Set up steepest descent integration system.integrator.set_steepest_descent(f_max=0, # use a relative convergence criterion only gamma=DAMPING, max_displacement=MAX_DISPLACEMENT) Initialize integrator to obtain initial forces system.integrator.run(0) old_force = np.max(np.linalg.norm(system.part[:].f, axis=1)) while system.time / system.time_step < MAX_STEPS: system.integrator.run(EM_STEP) force = np.max(np.linalg.norm(system.part[:].f, axis=1)) rel_force = np.abs((force - old_force) / old_force) print(f'rel. force change:{rel_force:.2e}') if rel_force < F_TOL: break old_force = force ``` End of explanation
"""

# Parameters for the Langevin thermostat
# reduced temperature T* = k_B T / LJ_EPS
TEMPERATURE = 0.827  # value from Tab.
1 in [6] GAMMA = 1.0 """ Explanation: Choosing the thermodynamic ensemble, thermostat Simulations can be carried out in different thermodynamic ensembles such as NVE (particle __N__umber, __V__olume, __E__nergy), NVT (particle __N__umber, __V__olume, __T__emperature) or NpT-isotropic (particle __N__umber, __p__ressure, __T__emperature). In this tutorial, we use the Langevin thermostat. End of explanation """ # Integration parameters STEPS_PER_SAMPLE = 20 N_SAMPLES = 1000 times = np.zeros(N_SAMPLES) e_total = np.zeros_like(times) e_kin = np.zeros_like(times) T_inst = np.zeros_like(times) """ Explanation: Exercise: Use <tt>system.integrator.set_vv()</tt> to use a Velocity Verlet integration scheme and <tt>system.thermostat.set_langevin()</tt> to turn on the Langevin thermostat. Set the temperature to TEMPERATURE and damping coefficient to GAMMA. For details see the online documentation. python system.integrator.set_vv() system.thermostat.set_langevin(kT=TEMPERATURE, gamma=GAMMA, seed=42) Integrating equations of motion and taking manual measurements Now, we integrate the equations of motion and take measurements of relevant quantities. End of explanation """ plt.figure(figsize=(10, 6)) plt.plot(times, T_inst, label='$T_{\\mathrm{inst}}$') plt.plot(times, [TEMPERATURE] * len(times), label='$T$ set by thermostat') plt.legend() plt.xlabel('t') plt.ylabel('T') plt.show() """ Explanation: Exercise: Integrate the system and measure the total and kinetic energy. Take N_SAMPLES measurements every STEPS_PER_SAMPLE integration steps. Calculate the total and kinetic energies using the analysis method <tt>system.analysis.energy()</tt>. Use the containers times, e_total and e_kin from the cell above to store the time series. From the simulation results, calculate the instantaneous temperature $T_{\mathrm{inst}} = 2/3 \times E_\mathrm{kin}$/<tt>N_PART</tt>. 
python for i in range(N_SAMPLES): times[i] = system.time energy = system.analysis.energy() e_total[i] = energy['total'] e_kin[i] = energy['kinetic'] system.integrator.run(STEPS_PER_SAMPLE) T_inst = 2. / 3. * e_kin / N_PART End of explanation """ # Use only the data after the equilibration period in the beginning warmup_time = 15 e_total = e_total[times > warmup_time] e_kin = e_kin[times > warmup_time] times = times[times > warmup_time] times -= times[0] def autocor(x): x = np.asarray(x) mean = x.mean() var = np.var(x) xp = x - mean corr = analyze.autocorrelation(xp) / var return corr def fit_correlation_time(data, ts): data = np.asarray(data) data /= data[0] def fitfn(t, t_corr): return np.exp(-t / t_corr) popt, pcov = optimize.curve_fit(fitfn, ts, data) return popt[0] """ Explanation: Since the ensemble average $\langle E_\text{kin}\rangle=3/2 N k_B T$ is related to the temperature, we may compute the actual temperature of the system via $k_B T= 2/(3N) \langle E_\text{kin}\rangle$. The temperature is fixed and does not fluctuate in the NVT ensemble! The instantaneous temperature is calculated via $2/(3N) E_\text{kin}$ (without ensemble averaging), but it is not the temperature of the system. In the first simulation run we picked STEPS_PER_SAMPLE arbitrary. To ensure proper statistics at short total run time, we will now calculate the steps_per_uncorrelated_sample which we will use for the rest of the tutorial. End of explanation """ print(steps_per_uncorrelated_sample) """ Explanation: Exercise * Calculate the autocorrelation of the total energy (store in e_total_autocor). 
Calculate the correlation time (corr_time) and estimate a number of steps_per_uncorrelated_sample for uncorrelated sampling Hint * we consider samples to be uncorrelated if the time between them is larger than 3 times the correlation time python e_total_autocor = autocor(e_total) corr_time = fit_correlation_time(e_total_autocor[:100], times[:100]) steps_per_uncorrelated_sample = int(np.ceil(3 * corr_time / system.time_step)) End of explanation """ plt.figure(figsize=(10, 6)) plt.plot(times, e_total_autocor, label='data') plt.plot(times, np.exp(-times / corr_time), label='exponential fit') plt.plot(2 * [steps_per_uncorrelated_sample * system.time_step], [min(e_total_autocor), 1], label='sampling interval') plt.xlim(left=-2, right=50) plt.ylim(top=1.2, bottom=-0.15) plt.legend() plt.xlabel('t') plt.ylabel('total energy autocorrelation') plt.show() """ Explanation: We plot the autocorrelation function and the fit to visually confirm a roughly exponential decay End of explanation """ print(f'mean potential energy = {mean_pot_energy:.2f} +- {SEM_pot_energy:.2f}') """ Explanation: For statistical analysis, we only want uncorrelated samples. Exercise: * Calculate the mean and standard error of the mean potential energy per particle from uncorrelated samples (define mean_pot_energy and SEM_pot_energy). Hint * you know how many steps are between samples in e_total and how many steps are between uncorrelated samples. So you have to figure out how many samples to skip python uncorrelated_sample_step = int(np.ceil(steps_per_uncorrelated_sample / STEPS_PER_SAMPLE)) pot_energies = (e_total - e_kin)[::uncorrelated_sample_step] / N_PART mean_pot_energy = np.mean(pot_energies) SEM_pot_energy = np.std(pot_energies) / np.sqrt(len(pot_energies)) End of explanation """ tail_energy_per_particle = 8. / 3. * np.pi * DENSITY * LJ_EPS * \ LJ_SIG**3 * (1. / 3. 
* (LJ_SIG / LJ_CUT)**9 - (LJ_SIG / LJ_CUT)**3)
mean_pot_energy_corrected = mean_pot_energy + tail_energy_per_particle
print(f'corrected mean potential energy = {mean_pot_energy_corrected:.2f}')
"""
Explanation: For comparison to literature values we need to account for the error made by the LJ truncation. For an isotropic system one can assume that the density is homogeneous beyond the cutoff, which allows one to calculate the so-called long-range corrections to the energy and pressure, $$V_\mathrm{lr} = 1/2 \rho \int_{r_\mathrm{c}}^\infty 4 \pi r^2 g(r) V(r) \,\mathrm{d}r.$$ Using that the radial distribution function $g(r)=1$ for $r>r_\mathrm{c}$, one obtains $$V_\mathrm{lr} = \frac{8}{3}\pi \rho \varepsilon \sigma^3 \left[\frac{1}{3} (\sigma/r_\mathrm{c})^9 - (\sigma/r_\mathrm{c})^3 \right],$$ which is negative for the cutoff used here. Similarly, a long-range contribution to the pressure can be derived <a href='#[5]'>[5]</a>.
End of explanation
"""

# Parameters for the radial distribution function
N_BINS = 100
R_MIN = 0.0
R_MAX = system.box_l[0] / 2.0
"""
Explanation: This value differs quite strongly from the uncorrected one but agrees well with the literature value $U^i = -5.38$ given in Table 1 of Ref. <a href='#[6]'>[6]</a>. Automated data collection As we have seen, it is easy to manually extract information from an ESPResSo simulation, but it can get quite tedious. Therefore, ESPResSo provides a number of data collection tools to make life easier (and less error-prone). We will now demonstrate those with the calculation of the radial distribution function. Observables extract properties from the particles and calculate some quantity with them, e.g. the center of mass, the total energy or a histogram. Accumulators allow the calculation of observables while running the system and then doing further analysis. Examples are a simple time series or more advanced methods like correlators. For our purposes we need an accumulator that calculates the average of the RDF samples.
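Conceptually, such a mean-variance accumulator just performs an online (Welford-style) update on each new sample. The class below is a plain-Python sketch of that idea; it is not ESPResSo's MeanVarianceCalculator, and its name and API are purely illustrative:

```python
class MeanVarianceSketch:
    """Illustrative online mean/variance accumulator (Welford's algorithm).
    A sketch of the idea, NOT the ESPResSo implementation."""

    def __init__(self):
        self.n = 0        # number of samples seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def update(self, sample):
        # standard Welford update: numerically stable, single pass
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sample - self.mean)

    def variance(self):
        # unbiased sample variance
        return self.m2 / (self.n - 1)

acc = MeanVarianceSketch()
for sample in [1.0, 2.0, 3.0, 4.0]:
    acc.update(sample)

print(acc.mean, acc.variance())  # mean = 2.5, variance = 5/3
```

ESPResSo's accumulator performs the analogous update on the whole RDF histogram at once, one update per sampling interval.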
End of explanation """ system.integrator.run(N_SAMPLES * steps_per_uncorrelated_sample) """ Explanation: Exercise * Instantiate a RDF observable * Instantiate a MeanVarianceCalculator accumulator to track the RDF over time. Samples should be taken every steps_per_uncorrelated_sample steps. * Add the accumulator to the auto_update_accumulators of the system for automatic updates python rdf_obs = observables.RDF(ids1=system.part[:].id, min_r=R_MIN, max_r=R_MAX, n_r_bins=N_BINS) rdf_acc = accumulators.MeanVarianceCalculator(obs=rdf_obs, delta_N=steps_per_uncorrelated_sample) system.auto_update_accumulators.add(rdf_acc) Now we don't need an elaborate integration loop anymore, instead the RDFs are calculated and accumulated automatically End of explanation """ fig, ax = plt.subplots(figsize=(10, 7)) ax.plot(rs, rdf, label='simulated') plt.legend() plt.xlabel('r') plt.ylabel('RDF') """ Explanation: Exercise * Get the mean RDF (define rdf) from the accmulator * Get the histogram bin centers (define rs) from the observable python rdf = rdf_acc.mean() rs = rdf_obs.bin_centers() End of explanation """ # comparison to literature def calc_literature_rdf(rs, temperature, density, LJ_eps, LJ_sig): T_star = temperature / LJ_eps rho_star = density * LJ_sig**3 # expression of the factors Pi from Equations 2-8 with coefficients qi from Table 1 # expression for a,g def P(q1, q2, q3, q4, q5, q6, q7, q8, q9): return \ q1 + q2 * np.exp(-q3 * T_star) + q4 * np.exp(-q5 * T_star) + q6 / rho_star + q7 / rho_star**2 \ + q8 * np.exp(-q3 * T_star) / rho_star**3 + q9 * \ np.exp(-q5 * T_star) / rho_star**4 a = P(9.24792, -2.64281, 0.133386, -1.35932, 1.25338, 0.45602, -0.326422, 0.045708, -0.0287681) g = P(0.663161, -0.243089, 1.24749, -2.059, 0.04261, 1.65041, -0.343652, -0.037698, 0.008899) # expression for c,k def P(q1, q2, q3, q4, q5, q6, q7, q8): return \ q1 + q2 * np.exp(-q3 * T_star) + q4 * rho_star + q5 * rho_star**2 + q6 * rho_star**3 \ + q7 * rho_star**4 + q8 * rho_star**5 c = 
P(-0.0677912, -1.39505, 0.512625, 36.9323, - 36.8061, 21.7353, -7.76671, 1.36342) k = P(16.4821, -0.300612, 0.0937844, -61.744, 145.285, -168.087, 98.2181, -23.0583) # expression for b,h def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * rho_star) b = P(-8.33289, 2.1714, 1.00063) h = P(0.0325039, -1.28792, 2.5487) # expression for d,l def P(q1, q2, q3, q4): return q1 + q2 * \ np.exp(-q3 * rho_star) + q4 * rho_star d = P(-26.1615, 27.4846, 1.68124, 6.74296) l = P(-6.7293, -59.5002, 10.2466, -0.43596) # expression for s def P(q1, q2, q3, q4, q5, q6, q7, q8): return \ (q1 + q2 * rho_star + q3 / T_star + q4 / T_star**2 + q5 / T_star**3) \ / (q6 + q7 * rho_star + q8 * rho_star**2) s = P(1.25225, -1.0179, 0.358564, -0.18533, 0.0482119, 1.27592, -1.78785, 0.634741) # expression for m def P(q1, q2, q3, q4, q5, q6): return \ q1 + q2 * np.exp(-q3 * T_star) + q4 / T_star + \ q5 * rho_star + q6 * rho_star**2 m = P(-5.668, -3.62671, 0.680654, 0.294481, 0.186395, -0.286954) # expression for n def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * T_star) n = P(6.01325, 3.84098, 0.60793) # fitted expression (=theoretical curve) # slightly more than 1 to smooth out the discontinuity in the range [1.0, 1.02] theo_rdf_cutoff = 1.02 theo_rdf = 1 + 1 / rs**2 * (np.exp(-(a * rs + b)) * np.sin(c * rs + d) + np.exp(-(g * rs + h)) * np.cos(k * rs + l)) theo_rdf[np.nonzero(rs <= theo_rdf_cutoff)] = \ s * np.exp(-(m * rs + n)**4)[np.nonzero(rs <= theo_rdf_cutoff)] return theo_rdf theo_rdf = calc_literature_rdf(rs, TEMPERATURE, DENSITY, LJ_EPS, LJ_SIG) ax.plot(rs, theo_rdf, label='literature') ax.legend() fig """ Explanation: We now plot the experimental radial distribution. Empirical radial distribution functions have been determined for pure fluids <a href='#[7]'>[7]</a>, mixtures <a href='#[8]'>[8]</a> and confined fluids <a href='#[9]'>[9]</a>. We will compare our distribution $g(r)$ to the theoretical distribution $g(r^, \rho^, T^*)$ of a pure fluid <a href='#[7]'>[7]</a>. End of explanation """
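As a quick numerical sanity check of the tail correction used earlier, the formula can be evaluated in plain numpy. The state-point values below (density and cutoff) are illustrative assumptions, not the tutorial's actual DENSITY:

```python
import numpy as np

def lj_tail_energy_per_particle(density, eps, sig, r_cut):
    # U_tail = (8/3) pi rho eps sig^3 [ (1/3)(sig/r_cut)^9 - (sig/r_cut)^3 ]
    sr3 = (sig / r_cut)**3
    return 8.0 / 3.0 * np.pi * density * eps * sig**3 * (sr3**3 / 3.0 - sr3)

# illustrative reduced-unit state point (assumed values)
u_tail = lj_tail_energy_per_particle(density=0.8442, eps=1.0, sig=1.0, r_cut=2.5)
print(u_tail)  # negative: the neglected tail of the LJ potential is attractive
```

The correction is negative, so adding it lowers the mean potential energy, consistent with the corrected value approaching the literature result.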
sfomel/ipython
Swan.ipynb
gpl-2.0
%%file data.scons Flow('trace',None,'spike n1=2001 d1=0.001 k1=1001 | ricker1 frequency=30') Flow('gather','trace','spray axis=2 n=49 d=25 o=0 label=Offset unit=m | nmostretch inv=y half=n v0=2000') Result('gather','window f1=888 n1=392 | grey title=Gather') from m8r import view view('gather') """ Explanation: Make data End of explanation """ import m8r gather = m8r.File('gather.rsf') %matplotlib inline import matplotlib.pylab as plt import numpy as np plt.imshow(np.transpose(gather[:,888:1280]),aspect='auto') """ Explanation: Display the same with Python (matplotlib) End of explanation """ %%file nmo.scons Flow('nmo','gather','nmostretch half=n v0=1800') Result('nmo','window f1=888 n1=200 | grey title=NMO') view('nmo') """ Explanation: Apply moveout with incorrect velocity End of explanation """ %%file slope.scons Flow('slope','nmo','dip rect1=100 rect2=5 order=2') Result('slope','grey color=linearlfb mean=y scalebar=y title=Slope') view('slope') """ Explanation: Slope estimation End of explanation """ %%file flat.scons Flow('paint','slope','pwpaint order=2') Result('paint','window f1=888 n1=200 | contour title=Painting') Flow('flat','nmo paint','iwarp warp=${SOURCES[1]}') Result('flat','window f1=888 n1=200 | grey title=Flattening') view('paint') view('flat') """ Explanation: Non-physical flattening by predictive painting End of explanation """ %%file twarp.scons Flow('twarp','paint','math output=x1 | iwarp warp=$SOURCE') Result('twarp','window j1=20 | transp | graph yreverse=y min2=0.888 max2=1.088 pad=n title="Time Warping" ') view('twarp') """ Explanation: Velocity estimation by time warping Predictive painting produces $t_0(t,x)$. Time warping converts it into $t(t_0,x)$. 
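The iwarp step performs this inversion trace by trace; for a monotone map it is conceptually just 1-D interpolation of the inverse function. A minimal numpy sketch of the idea (illustrative only, not Madagascar's actual implementation):

```python
import numpy as np

def invert_warp(t, t0_of_t, t0_grid):
    """Given samples of a monotone map t -> t0(t), return t(t0) on t0_grid
    by linearly interpolating the inverse relation (a sketch of what iwarp does)."""
    return np.interp(t0_grid, t0_of_t, t)

# a smooth, monotone synthetic warp for illustration
t = np.linspace(1.0, 2.0, 201)
t0_of_t = t + 0.1 * (t - 1.0)**2

t0_grid = np.linspace(t0_of_t[0], t0_of_t[-1], 201)
t_of_t0 = invert_warp(t, t0_of_t, t0_grid)

# warping forward again should recover the t0 grid (round trip)
roundtrip = np.interp(t_of_t0, t, t0_of_t)
```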
End of explanation """ %%file lsfit.scons Flow('num','twarp','math output="(input*input-x1*x1)*x2^2" | stack norm=n') Flow('den','twarp','math output="x2^4" | stack norm=n') Flow('vel','num den','div ${SOURCES[1]} | math output="1800/sqrt(1800*1800*input+1)" ') Result('vel', ''' window f1=888 n1=200 | graph yreverse=y transp=y title="Estimated Velocity" label2=Velocity unit2=m/s grid2=y pad=n min2=1950 max2=2050 ''') view('vel') """ Explanation: We now want to fit $t^2(t_0,x)-t_0^2 \approx \Delta S\,x^2$, where $\Delta S = \frac{1}{v^2} - \frac{1}{v_0^2}$. The least-squares fit is $\Delta S = \displaystyle \frac{\int x^2\left[t^2(t_0,x)-t_0^2\right]\,dx}{\int x^4\,dx}$. The velocity estimate is $v = \displaystyle \frac{v_0}{\sqrt{\Delta S\,v_0^2 + 1}}$. End of explanation """ %%file nmo2.scons Flow('nmo2','gather vel','nmo half=n velocity=${SOURCES[1]}') Result('nmo2','window f1=888 n1=200 | grey title="Physical Flattening" ') view('nmo2') """ Explanation: Last step - physical flattening End of explanation """
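The least-squares fit above is easy to check on synthetic data: fabricate warped traveltimes from a known velocity, apply the $\Delta S$ estimator, and see that the velocity comes back. The numbers below are illustrative, and the sketch verifies the estimator's algebra rather than the full Madagascar workflow:

```python
import numpy as np

v_true, v0 = 2000.0, 1800.0                 # true and trial velocities, m/s
t0 = np.linspace(0.9, 1.1, 50)              # zero-offset times, s
x = np.linspace(0.0, 1200.0, 49)            # offsets, m
T0, X = np.meshgrid(t0, x, indexing='ij')

# warped times consistent with the model t^2 - t0^2 = dS * x^2,
# where dS = 1/v^2 - 1/v0^2
dS_true = 1.0 / v_true**2 - 1.0 / v0**2
T = np.sqrt(T0**2 + dS_true * X**2)

# least-squares estimate: dS = sum(x^2 (t^2 - t0^2)) / sum(x^4), per t0 sample
dS_est = np.sum(X**2 * (T**2 - T0**2), axis=1) / np.sum(X**4, axis=1)

# velocity estimate v = v0 / sqrt(dS v0^2 + 1)
v_est = v0 / np.sqrt(dS_est * v0**2 + 1.0)
```

Algebraically, $\Delta S\,v_0^2 + 1 = v_0^2/v^2$, so the estimate collapses back to $v$ exactly when the data follow the model.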
apozas/BIST-Python-Bootcamp
3_Types_Functions_FlowControl.ipynb
gpl-3.0
x_int = 3 x_float = 3. x_string = 'three' x_list = [3, 'three'] type(x_float) type(x_string) type(x_list) """ Explanation: 3 Types, Functions and Flow Control Data types End of explanation """ abs(-1) import math math.floor(4.5) math.exp(1) math.log(1) math.log10(10) math.sqrt(9) round(4.54,1) """ Explanation: Numbers End of explanation """ round? """ Explanation: If this should not make sense, you can print some documentation: End of explanation """ string = 'Hello World!' string2 = "This is also allowed, helps if you want 'this' in a string and vice versa" len(string) """ Explanation: Strings End of explanation """ print(string) print(string[0]) print(string[2:5]) print(string[2:]) print(string[:5]) print(string * 2) print(string + 'TEST') print(string[-1]) """ Explanation: Slicing End of explanation """ print(string/2) print(string - 'TEST') print(string**2) """ Explanation: String Operations End of explanation """ x = 'test' x.capitalize() x.find('e') x = 'TEST' x.lower() """ Explanation: capitalizing strings: End of explanation """ print('Pi is {:06.2f}'.format(3.14159)) print('Space can be filled using {:_>10}'.format(x)) """ Explanation: Enviromnents like Jupyter and Spyder allow you to explore the methods (like .capitalize() or .upper() using x. and pressing tab. Formating You can also format strings, e.g. 
to display rounded numbers End of explanation """ print(f'{x} 1 2 3') """ Explanation: With python 3.6 this became even more readable End of explanation """ x_list x_list[0] x_list.append('III') x_list x_list.append('III') x_list del x_list[-1] x_list y_list = ['john', '2.', '1'] y_list + x_list x_list*2 z_list=[4,78,3] max(z_list) min(z_list) sum(z_list) z_list.count(4) z_list.append(4) z_list.count(4) z_list.sort() z_list z_list.reverse() z_list """ Explanation: Lists End of explanation """ y_tuple = ('john', '2.', '1') type(y_tuple) y_list y_list[0] = 'Erik' y_list y_tuple[0] = 'Erik' """ Explanation: Tuples Tuples are immutable and can be thought of as read-only lists. End of explanation """ tinydict = {'name': 'john', 'code':6734, 'dept': 'sales'} type(tinydict) print(tinydict) print(tinydict.keys()) print(tinydict.values()) tinydict['code'] tinydict['surname'] tinydict['dept'] = 'R&D' # update existing entry tinydict['surname'] = 'Sloan' # Add new entry tinydict['surname'] del tinydict['code'] # remove entry with key 'code' tinydict['code'] tinydict.clear() del tinydict """ Explanation: Dictionaries Dictonaries are lists with named entries. There is also named tuples, which are immutable dictonaries. Use OrderedDict from collections if you need to preserve the order. End of explanation """ dic = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'} dic """ Explanation: When duplicate keys encountered during assignment, the last assignment wins End of explanation """ len(dic) """ Explanation: Finding the total number of items in the dictionary: End of explanation """ str(dic) """ Explanation: Produces a printable string representation of a dictionary: End of explanation """ def mean(mylist): """Calculate the mean of the elements in mylist""" number_of_items = len(mylist) sum_of_items = sum(mylist) return sum_of_items / number_of_items type(mean) z_list mean(z_list) help(mean) mean? 
""" Explanation: Functions End of explanation """ count = 0 while (count < 9): print('The count is: ' + str(count)) count += 1 print('Good bye!') """ Explanation: Flow Control In general, statements are executed sequentially: The first statement in a function is executed first, followed by the second, and so on. There may be a situation when you need to execute a block of code several number of times. In Python a block is delimitated by intendation, i.e. all lines starting at the same space are one block. Programming languages provide various control structures that allow for more complicated execution paths. A loop statement allows us to execute a statement or group of statements multiple times. while Loop Repeats a statement or group of statements while a given condition is TRUE. It tests the condition before executing the loop body. End of explanation """ fruits = ['banana', 'apple', 'mango'] for fruit in fruits: # Second Example print('Current fruit :', fruit) """ Explanation: A loop becomes infinite loop if a condition never becomes FALSE. You must use caution when using while loops because of the possibility that this condition never resolves to a FALSE value. This results in a loop that never ends. Such a loop is called an infinite loop. An infinite loop might be useful in client/server programming where the server needs to run continuously so that client programs can communicate with it as and when required. for Loop Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable. End of explanation """ for index, fruit in enumerate(fruits): print('Current fruit :', fruits[index]) """ Explanation: Sometimes one also needs the index of the element, e.g. to plot a subset of data on different subplots. 
Then enumerate provides an elegant ("pythonic") way:
End of explanation
"""

for index in range(len(fruits)):
    print('Current fruit:', fruits[index])
"""
Explanation: In principle one could also iterate over an index going from 0 to the number of elements:
End of explanation
"""

fruits_with_b = [fruit for fruit in fruits if fruit.startswith('b')]
fruits_with_b
"""
Explanation: for loops can be elegantly integrated for creating lists
End of explanation
"""

fruits_with_b = []
for fruit in fruits:
    if fruit.startswith('b'):
        fruits_with_b.append(fruit)
fruits_with_b
"""
Explanation: This is equivalent to the following loop:
End of explanation
"""

for x in range(1, 3):
    for y in range(1, 4):
        print(f'{x} * {y} = {x*y}')
"""
Explanation: Nested Loops The Python programming language allows you to use one loop inside another loop. A final note on loop nesting is that you can put any type of loop inside of any other type of loop. For example a for loop can be inside a while loop or vice versa.
End of explanation
"""
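The nested loop above can also be written as a nested list comprehension when you only want to collect the products; a small sketch comparing the two forms:

```python
# multiplication table values via nested loops
table_loops = []
for x in range(1, 3):
    for y in range(1, 4):
        table_loops.append(x * y)

# the same values via a nested list comprehension
table_comp = [x * y for x in range(1, 3) for y in range(1, 4)]

print(table_loops)  # [1, 2, 3, 2, 4, 6]
```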
End of explanation """ x = 'Tom' if x in ['Mark', 'Jack', 'Mary']: print('present in list A!') elif x in ['Tom', 'Dick', 'Harry']: print('present in list B!') else: print('absent!') """ Explanation: We can also use one or more elif statements to check multiple expressions for TRUE and execute a block of code as soon as one of the conditions evaluates to TRUE End of explanation """ var = 10 while var > 0: print('Current variable value: ' + str(var)) var = var -1 if var == 5: break print('Good bye!') """ Explanation: else statements can be also use with while and for loops (the code will be executed in the end) You can also use nested if statements. break It terminates the current loop and resumes execution at the next statement. The break statement can be used in both while and for loops. If you are using nested loops, the break statement stops the execution of the innermost loop and start executing the next line of code after the block. End of explanation """ for letter in 'Python': if letter == 'h': continue print('Current Letter: ' + letter) """ Explanation: continue The continue statement rejects all the remaining statements in the current iteration of the loop and moves the control back to the top of the loop (like a "skip"). The continue statement can be used in both while and for loops. End of explanation """ for letter in 'Python': if letter == 'h': pass print('This is pass block') print('Current Letter: ' + letter) print('Good bye!') """ Explanation: pass The pass statement is a null operation; nothing happens when it executes. The pass is also useful in places where your code will eventually go, but has not been written yet End of explanation """ for letter in 'Python': if letter == 'h': continue print('This is pass block') print('Current Letter: ' + letter) print('Good bye!') """ Explanation: pass and continuecould seem similar but they are not. The printed message "This is pass block", wouldn't have been printed if continue had been used instead. 
pass does nothing, continue goes to the next iteration. End of explanation """
merryjman/astronomy
Word_frequency.ipynb
gpl-3.0
import re import pandas as pd import urllib.request frequency = {} document_text = urllib.request.urlopen \ ('http://www.textfiles.com/etext/FICTION/bronte-jane-178.txt') \ .read().decode('utf-8') text_string = document_text.lower() match_pattern = re.findall(r'\b[a-z]{3,15}\b', text_string) for word in match_pattern: count = frequency.get(word,0) frequency[word] = count + 1 frequency_list = frequency.keys() d = [] for word in frequency_list: var = word + "," + str(frequency[word]) + "\r" d.append({'word':word, 'Frequency': frequency[word]}) df = pd.DataFrame(d) """ Explanation: Word Frequency in Literary Text Click on the play icon above to "run" each box of code. This program generates a table of how often words appear in a file and sorts them to show the ones the author used most frequently. This example uses Jane Eyre, but there are tons of books to choose from here with lots of books in .txt format. End of explanation """ df1 = df.sort_values(by="Frequency", ascending=False) # the next line displays the first number of rows you select df1.head(10) """ Explanation: Word frequency list End of explanation """ df2 = df1.query('word not in \ ("the","and","it","was","for","but","that") \ ') df2.head(10) """ Explanation: Filtering the results This next part removes some of the less interesting words from the list. End of explanation """
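The counting loop above is exactly what collections.Counter implements; this self-contained sketch runs the same regular expression on a small in-memory string, so it needs no download (the sentence is just an example):

```python
import re
from collections import Counter

text = "The cat and the dog and the bird"

# same pattern as above: lowercase words of 3 to 15 letters
words = re.findall(r'\b[a-z]{3,15}\b', text.lower())
frequency = Counter(words)

print(frequency.most_common(2))  # [('the', 3), ('and', 2)]
```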
crowd-course/datascience
5-classification/5.2 - Logistic Regression in Classifying Breast Cancer .ipynb
mit
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm # this is for colormaps for 2D surfaces
from mpl_toolkits.mplot3d import Axes3D # this is for 3D plots

plt.rcParams['figure.figsize'] = (10, 10)
"""
Explanation: Classifying Malignant/Benign Breast Tumors with Logistic Regression Welcome to the practical section of module 5.2. Here we'll be using Logistic Regression with the Wisconsin Breast Cancer Database to predict whether a patient's tumor is benign or malignant based on tumor cell characteristics. This is just one of many examples where machine learning and classification could offer great insight and aid. By the end of the module, we'll have a logistic regression model trained on a subset of the features presented in the dataset that is very accurate at diagnosing the condition of the tumor based on these features. We'll also see how we can make interesting inferences from the model that could be helpful to physicians in diagnosing cancer.
End of explanation
"""

dataset = pd.read_csv('../datasets/breast-cancer-wisconson.csv')
dataset[:10]
"""
Explanation: Visualizing the Data We'll start off by exploring and visualizing our dataset to see what attributes we have, how the class of the tumor is represented and how the data correlates.
End of explanation
"""

focus_dataset = dataset[["CT", "UCS", "UCSh", "Class"]]  # take only the attributes we'll focus on
focus_dataset.is_copy = False  # this is just to hide a nasty warning!
focus_dataset.loc[:, ("Class")] = [0 if tclass == 2 else 1 for tclass in dataset["Class"]]  # convert Class to 0/1
focus_dataset[:10]
"""
Explanation: To understand the meaning of the abbreviations we can consult the dataset's website to find a description of each attribute in order.
Here we'll be focusing on three attributes: CT, UCS and UCSh, which stand for Clump Thickness, Uniformity of Cell Size and Uniformity of Cell Shape respectively. If you noticed the Class attribute at the end (which gives the class of the tumor), you'll find that it takes either 2 or 4, where 2 represents a benign tumor while 4 represents a malignant tumor. We'll change that to more expressive values and make benign tumors represented by 0s and malignant ones by 1s.
End of explanation
"""

benign_selector = focus_dataset["Class"] == 0
benign_datapoints = focus_dataset[benign_selector]
malignant_datapoints = focus_dataset[~benign_selector]

plot3D = plt.figure()
plot3D = plot3D.gca(projection='3d')

# here we create our scatter plot given our data
plot3D.scatter(benign_datapoints["CT"], benign_datapoints["UCS"], benign_datapoints["UCSh"], c='green')
plot3D.scatter(malignant_datapoints["CT"], malignant_datapoints["UCS"], malignant_datapoints["UCSh"], c='red')

# now we'll put some labels on the axes
plot3D.set_xlabel("CT")
plot3D.set_ylabel("UCS")
plot3D.set_zlabel("UCSh")

# finally we show our plot
plt.show()
"""
Explanation: We can visually present our data using scatter plots in a 3D space to get more understanding of how the data points group and how the attributes of Clump Thickness, Uniformity of Cell Size and Uniformity of Cell Shape correlate with the class of the tumor.
End of explanation
"""
This suggests that Logistic Regression would be a good estimator for the dataset. Training the Logistic Regression Model Scikit-learn's linear models bundle offers the LogisticRegression algorithm that trains a data set to fit a hypothesis function in the form: $$h_w(X) = g(w^TX) \hspace{0.5em} \text{where } g \text{ is a logistic function}$$ Just like we saw in the videos. However, LogisticRegression doesn't use gradient descent to train the model; it uses other iterative optimization methods (such as Newton's method). This is very suitable for small and medium datasets like we have here, but in the case of a large dataset, gradient descent would be the better choice. To train a logistic regression model using gradient descent with scikit-learn, use SGDClassifier. Here we'll work with LogisticRegression. Before training the model, we'll first split our data into two parts: one for training and one for testing and evaluation. Remember that the general rule is to divide the dataset into three parts: training, validation, and testing; but because we'll not be doing any parameter tuning here we won't need the validation set, hence we'll divide the dataset into two parts only. End of explanation """ model = LogisticRegression() model.fit(X_training, y_training) coefs = model.coef_[0] # the fitted weights of the trained model w0 = model.intercept_[0] w1 = coefs[0] w2 = coefs[1] w3 = coefs[2] print "Trained Model: h(x) = g(%0.2f + %0.2fx₁ + %0.2fx₂ + %0.2fx₃)" % (w0, w1, w2, w3) """ Explanation: We can use LogisticRegression in the same way that we used the SGDRegressor when we worked with Linear Regression, and this is an amazing characteristic of scikit-learn: it has a standard API across very different models. So to train our model, we'll create an instance of LogisticRegression and just call the fit method on that instance, passing our features (X) and labels (y). End of explanation """ benign_selector = focus_dataset["Class"] == 0 benign_datapoints = focus_dataset[benign_selector] malignant_datapoints = focus_dataset[~benign_selector] plot3D = plt.figure() plot3D = plot3D.gca(projection='3d') # here we create our scatter plot given our data plot3D.scatter(benign_datapoints["CT"], benign_datapoints["UCS"], benign_datapoints["UCSh"], c='green') plot3D.scatter(malignant_datapoints["CT"], malignant_datapoints["UCS"], malignant_datapoints["UCSh"], c='red') X_plane, Y_plane = np.meshgrid(focus_dataset["CT"], focus_dataset["UCS"]) Z_plane = (-w0/w3) + (-w1/w3) * X_plane + (-w2/w3) * Y_plane plot3D.plot_surface(X_plane,Y_plane, Z_plane, cmap=cm.coolwarm) # now we'll put some labels on the axes plot3D.set_xlabel("CT") plot3D.set_ylabel("UCS") plot3D.set_zlabel("UCSh") # finally we show our plot plt.show() """ Explanation: Where $x_1,x_2,x_3$ represent Clump Thickness, Uniformity of Cell Size, and Uniformity of Cell Shape respectively. Visualizing the Decision Boundary As we learned from the videos, a classifier presented with new data basically makes a decision: whether this new data belongs to class A or class B. We can visualize how the classifier makes a decision by trying to visualize its decision boundary: the boundary below which data belongs to one class, and above which data belongs to the other class. You can imagine the decision boundary as a line (or, as in our case here, a plane) that separates the two classes. The equation of the boundary is $w^TX = 0$, but we cannot plot it in this form. We'll need to convert it to a form we can plot. In our case here: $w^TX = w_0 + w_1x_1 + w_2x_2 + w_3x_3$ So the equation of the boundary plane is: $w_0 + w_1x_1 + w_2x_2 + w_3x_3 = 0$ To be able to plot this, we need a formula that gives the value $x_3$ in terms of $x_1$ and $x_2$. With very simple algebraic manipulation we can find that: $$x_3 = \frac{-w_0}{w_3} + \frac{-w_1}{w_3}x_1 + \frac{-w_2}{w_3}x_2$$ This is the equation we're going to use to plot our decision boundary plane. End of explanation """ accuracy = model.score(X_test, y_test) "Model's Mean Accuracy: %0.3f" % (accuracy) """ Explanation: Evaluation and Inferences As we know, machine learning models are mainly used for two purposes: predicting information for new data, and making inferences about the data we have. In the following we'll explore how good our trained Logistic Regression model is at prediction and what we can infer from the formulation of the model. Evaluating Prediction Accuracy There are many ways we can evaluate the accuracy of a classifier. One of these ways is provided by scikit-learn through the score method of the model. This function computes the mean accuracy of the model, which is expressed mathematically as: $$\frac{1}{n}\sum_{i = 1}^{n} 1(y_i = \widehat{y_i}) \hspace{0.5em} \text{where } \widehat{y_i} \text{ is the predicted value}$$ $$\text{and } 1(P) = \left\{ \begin{matrix} 1 \hspace{1.5em} \text{if } P \text{ is true}\\ 0 \hspace{1.5em} \text{otherwise} \end{matrix} \right.$$ The closer this value is to one, the more accurate our model is. End of explanation """ y_predicted = model.predict(X_test) cmatrix = confusion_matrix(y_test, y_predicted) print "Number of True-Negatives: %d" % (cmatrix[0,0]) print "Number of True-Positives: %d" % (cmatrix[1, 1]) print "Number of False-Negatives: %d" % (cmatrix[1, 0]) print "Number of False-Positives: %d" % (cmatrix[0, 1]) """ Explanation: Such a high value tells us that the model is very good at prediction, but it's not perfect; apparently there are some misclassifications. When we deal with a sensitive topic like diagnosing cancer, we need to know exactly how the data are misclassified.
Obviously, misclassifying a benign tumor as malignant (a false-positive) is not as bad as misclassifying a malignant tumor as benign (a false-negative). A way to look closely at false-positives and false-negatives is called the confusion matrix. A confusion matrix is a matrix $C$ where the rows represent the true classes of the data, and the columns represent the predicted classes of the data. Each element $C_{ij}$ is the number of data points whose true class is $i$ and whose predicted class is $j$. To compute the confusion matrix we'll need the true classes of the data (we have those in y_test) and the predicted classes of the data (we can get those with the model's predict method). Once we have those, we can use scikit-learn's confusion_matrix function from the metrics package by passing to it the true and predicted classes of our data. End of explanation """ plt.matshow(cmatrix) # this plots the matrix plt.colorbar() # puts a color bar to show what each color represents plt.ylabel('True Class') plt.xlabel('Predicted Class') plt.show() """ Explanation: We found that a big part of the error comes from false-positives, which boosts our confidence in the model's accuracy. We can plot the confusion matrix in a heatmap fashion using matplotlib for better visualization of the accuracy. End of explanation """
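The false-negative/false-positive trade-off discussed above can be summarized with sensitivity and specificity, computed directly from the four confusion-matrix counts. A minimal sketch (the counts here are illustrative, not this model's actual numbers):

```python
def sensitivity_specificity(tn, fp, fn, tp):
    """Sensitivity = TP/(TP+FN): fraction of malignant tumors caught.
    Specificity = TN/(TN+FP): fraction of benign tumors correctly cleared."""
    sensitivity = tp / float(tp + fn)
    specificity = tn / float(tn + fp)
    return sensitivity, specificity

# Illustrative counts only
sens, spec = sensitivity_specificity(tn=90, fp=5, fn=2, tp=60)
print("sensitivity: %0.3f, specificity: %0.3f" % (sens, spec))
```

For a cancer screen, high sensitivity (few false-negatives) is usually the priority, which is exactly the concern raised above.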
nre-aachen/GeMpy
Tutorial/ch1.ipynb
mit
# These two lines are necessary only if gempy is not installed import sys, os sys.path.append("../") # Importing gempy import gempy as gp # Embedding matplotlib figures into the notebooks %matplotlib inline # Aux imports import numpy as np """ Explanation: Chapter 1: GemPy Basic In this first example, we will show how to construct a first basic model and the main objects and functions. First we import gempy: End of explanation """ geo_data = gp.read_pickle('NoFault.pickle') geo_data.n_faults = 0 print(geo_data) """ Explanation: All data gets stored in a Python object, InputData. Therefore we can use Python serialization to save the input of the models. In the next chapter we will see different ways to create data, but for this example we will use a stored one. End of explanation """ print(gp.get_grid(geo_data)) """ Explanation: This geo_data object contains essential information that we can access through the corresponding getters, such as the coordinates of the grid. End of explanation """ gp.get_raw_data(geo_data, 'interfaces').head() """ Explanation: The main input to the potential field method is the coordinates of interface points as well as the orientations. These pandas dataframes can be accessed by the following methods: Interfaces Dataframe End of explanation """ gp.get_raw_data(geo_data, 'foliations').head() """ Explanation: Foliations Dataframe End of explanation """ gp.plot_data(geo_data, direction='y') """ Explanation: It is important to notice the columns of each data frame. These contain not only the geometrical properties of the data but also the formation and series to which they belong. This division is fundamental in order to preserve the depositional ages of the setting to model. A projection of the aforementioned data can be visualized into 2D by the following function.
It is possible to choose the direction of visualization as well as the series: End of explanation """ gp.visualize(geo_data) """ Explanation: GemPy supports visualization in 3D as well, through vtk. End of explanation """ interp_data = gp.InterpolatorInput(geo_data, u_grade = [3]) print(interp_data) """ Explanation: The ins and outs of Input data objects As we have seen, objects of DataManagement.InputData (usually called geo_data in the tutorials) aim to have all the original geological properties, measurements and geological relations stored. Once we have the data ready to generate a model, we will need to create the next object type towards the final geological model: End of explanation """ gp.get_kriging_parameters(interp_data) """ Explanation: By default (there is a flag in case you do not need it), when we create an interp_data object we also compile the theano function that computes the model. That is the reason it takes so long. gempy.DataManagement.InterpolatorInput (usually called interp_data in the tutorials) prepares the original data for the interpolation algorithm by scaling the coordinates and adding all the mathematical parametrization needed. End of explanation """ sol = gp.compute_model(interp_data) """ Explanation: These latter parameters have default values computed from the original data, or they can be changed by the user (be careful of changing any of these if you do not fully understand their meaning). At this point, we have all we need to compute our model. By default, every time we compute a model we obtain 3 results: Lithology block model The potential field Faults network block model End of explanation """ gp.plot_section(geo_data, sol[0], 25) """ Explanation: This solution can be plotted with the corresponding plotting function. Blocks: End of explanation """ gp.plot_potential_field(geo_data, sol[1], 25) """ Explanation: Potential field: End of explanation """
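As noted at the start of this chapter, the input objects are stored with Python serialization, which is what read_pickle relies on. A generic round-trip sketch (plain pickle on a stand-in dict; gempy wraps this with its own helpers and richer objects):

```python
import pickle

# Stand-in for an InputData-like object; gempy stores richer structures.
model_input = {"interfaces": [(0.0, 1.0, 2.0)], "n_faults": 0}

# Serialize to bytes and restore -- a save/load helper does essentially
# this, writing the bytes to a .pickle file in between.
restored = pickle.loads(pickle.dumps(model_input))
print(restored == model_input)
```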
gaufung/PythonStandardLibrary
DataStructures/array.ipynb
mit
import array import binascii s = b'this is an array' a = array.array('b', s) print('As byte string', s) print('As array ', a) print('As hex', binascii.hexlify(a)) """ Explanation: Type code for array

code | type | minimum size
:---: | :---: | :---:
b | int | 1
B | int | 1
h | signed int | 2
H | unsigned int | 2
i | signed int | 2
I | unsigned int | 2
l | signed long | 4
L | unsigned long | 4
q | signed long long | 8
Q | unsigned long long | 8
f | float | 4
d | double float | 8

Initialization End of explanation """ import array import pprint a = array.array('i', range(3)) print('initialize\n', a) a.extend(range(3)) print('Extend\n',a) print('Slice \n', a[2:5]) print('Iterator\n') print(list(enumerate(a))) """ Explanation: Manipulating Arrays End of explanation """ import array import binascii import tempfile a = array.array('i', range(5)) print('A1:', a) # Write the array of numbers to a temporary file output = tempfile.NamedTemporaryFile() a.tofile(output.file) # must pass an *actual* file output.flush() # Read the raw data with open(output.name, 'rb') as input: raw_data = input.read() print('Raw Contents:', binascii.hexlify(raw_data)) # Read the data into an array input.seek(0) a2 = array.array('i') a2.fromfile(input, len(a)) print('A2:', a2) """ Explanation: Arrays and Files End of explanation """ import array import binascii def to_hex(a): chars_per_item = a.itemsize * 2 # 2 hex digits hex_version = binascii.hexlify(a) num_chunks = len(hex_version) // chars_per_item for i in range(num_chunks): start = i * chars_per_item end = start + chars_per_item yield hex_version[start:end] start = int('0x12345678', 16) end = start + 5 a1 = array.array('i', range(start, end)) a2 = array.array('i', range(start, end)) a2.byteswap() fmt = '{:>12} {:>12} {:>12} {:>12}' print(fmt.format('A1 hex', 'A1', 'A2 hex', 'A2')) print(fmt.format('-' * 12, '-' * 12, '-' * 12, '-' * 12)) fmt = '{!r:>12} {:12} {!r:>12} {:12}' for values in zip(to_hex(a1), a1, to_hex(a2), a2):
print(fmt.format(*values)) """ Explanation: Alternative Byte Ordering End of explanation """
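The "minimum size" column in the type-code table above can be checked at runtime with itemsize; actual sizes are platform-dependent, which is why the table only lists minimums. A quick sketch:

```python
import array

# Type code -> minimum size in bytes, per the table above
for code, min_size in [('b', 1), ('h', 2), ('i', 2), ('l', 4), ('q', 8), ('f', 4), ('d', 8)]:
    a = array.array(code)
    print(code, a.itemsize)
    assert a.itemsize >= min_size  # platforms may use more, never less
```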
UWashington-Astro300/Astro300-W17
09_Python_LaTeX.ipynb
mit
%matplotlib inline import sympy as sp import numpy as np import matplotlib.pyplot as plt """ Explanation: Python, SymPy, and $\LaTeX$ End of explanation """ sp.init_printing() # Turns on pretty printing np.sqrt(8) sp.sqrt(8) """ Explanation: # Symbolic Mathematics (SymPy) End of explanation """ x, y, z = sp.symbols('x y z') my_equation = 2 * x + y my_equation my_equation + 3 my_equation - x my_equation / x """ Explanation: You have to explicitly tell SymPy what symbols you want to use End of explanation """ sp.simplify(my_equation / x) another_equation = (x + 2) * (x - 3) another_equation sp.expand(another_equation) yet_another_equation = 2 * x**2 + 5 * x + 3 sp.factor(yet_another_equation) sp.solve(yet_another_equation,x) long_equation = 2*y*x**3 + 12*x**2 - x + 3 - 8*x**2 + 4*x + x**3 + 5 + 2*y*x**2 + x*y long_equation sp.collect(long_equation,x) sp.collect(long_equation,y) """ Explanation: SymPy has all sorts of ways to manipulates symbolic equations End of explanation """ yet_another_equation sp.diff(yet_another_equation,x) sp.diff(yet_another_equation,x,2) sp.integrate(yet_another_equation,x) sp.integrate(yet_another_equation,(x,0,5)) # limits x = 0 to 5 """ Explanation: SymPy can do your calculus homework. 
End of explanation """ yet_another_equation sp.diff(yet_another_equation,x) sp.diff(yet_another_equation,x,2) sp.integrate(yet_another_equation,x) sp.integrate(yet_another_equation,(x,0,5)) # limits x = 0 to 5 """ Explanation: System of 3 equations example $$ \begin{array}{c} x + 3y + 5z = 10 \\ 2x + 5y + z = 8 \\ 2x + 3y + 8z = 3 \\ \end{array} \hspace{3cm} \left[ \begin{array}{ccc} 1 & 3 & 5 \\ 2 & 5 & 1 \\ 2 & 3 & 8 \end{array} \right] \left[ \begin{array}{c} x\\ y\\ z \end{array} \right] = \left[ \begin{array}{c} 10\\ 8\\ 3 \end{array} \right] $$ End of explanation """ AA = sp.Matrix([[1,3,5],[2,5,1],[2,3,8]]) bb = sp.Matrix([[10],[8],[3]]) print(AA**-1) print(AA**-1 * bb) """ Explanation: SymPy can do so much more. It really is magic. Complete documentation can be found here Python uses the $\LaTeX$ language to typeset equations. Use a single set of $ to make your $\LaTeX$ inline and a double set $$ to center. This code will produce the output: $$ \int \cos(x)\ dx = \sin(x) $$ You can use $\LaTeX$ in plots: End of explanation """ plt.style.use('ggplot') x = np.linspace(0,2*np.pi,100) y = np.sin(5*x) * np.exp(-x) plt.plot(x,y) plt.title("The function $y\ =\ \sin(5x)\ e^{-x}$") plt.xlabel("This is in units of 2$\pi$") plt.text(2.0, 0.4, '$\Delta t = \gamma\, \Delta t$', color='green', fontsize=36) """ Explanation: You can use SymPy to make $\LaTeX$ equations for you! End of explanation """ a = 1/( ( z + 2 ) * ( z + 1 ) ) print(sp.latex(a)) """ Explanation: $$ \frac{1}{\left(z + 1\right) \left(z + 2\right)} $$ End of explanation """ print(sp.latex(sp.Integral(z**2,z))) """ Explanation: $$ \int z^{2}\, dz $$ Astropy can output $\LaTeX$ tables End of explanation """ from astropy.io import ascii from astropy.table import QTable my_table = QTable.read('Zodiac.csv', format='ascii.csv') my_table[0:3] ascii.write(my_table, format='latex') """ Explanation: Some websites to open up for class: Special Relativity ShareLatex ShareLatex Docs and Help Latex Symbols Latex draw symbols The SAO/NASA Astrophysics Data System Latex wikibook Assignment for Week 9 End of explanation """ t = np.linspace(0,12*np.pi,2000) fig,ax = plt.subplots(1,1) # One window fig.set_size_inches(11,8.5) # (width,height) - letter paper landscape fig.tight_layout() # Make better use of space on plot """ Explanation: End of explanation """
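As a hand check on the three-equation system solved earlier with sp.Matrix, elimination and back-substitution give the exact solution (each value can be verified in the original equations):

```latex
\begin{aligned}
(1) - \tfrac{1}{2}(2) &: \; \tfrac{1}{2}y + \tfrac{9}{2}z = 6 \;\Rightarrow\; y = 12 - 9z \\
(3) - (2) &: \; -2y + 7z = -5 \;\Rightarrow\; -2(12 - 9z) + 7z = -5 \;\Rightarrow\; z = \tfrac{19}{25} \\
&\Rightarrow\; y = \tfrac{129}{25}, \qquad x = 10 - 3y - 5z = -\tfrac{232}{25}
\end{aligned}
```

This matches the column vector printed by AA**-1 * bb.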
tuanavu/python-cookbook-3rd
notebooks/ch01/12_determine_the_top_n_items_occurring_in_a_list.ipynb
mit
words = [ 'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes', 'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the', 'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into', 'my', 'eyes', "you're", 'under' ] from collections import Counter word_counts = Counter(words) top_three = word_counts.most_common(3) print(top_three) # outputs [('eyes', 8), ('the', 5), ('look', 4)] print(word_counts['not']) print(word_counts['eyes']) """ Explanation: Determining the Most Frequently Occurring Items in a Sequence Problem Determine the most frequently occurring items in the sequence. Solution most_common() method in collections.Counter End of explanation """ # morewords = ['why','are','you','not','looking','in','my','eyes'] # for word in morewords: # word_counts[word] += 1 """ Explanation: Discussion Increment the count manually End of explanation """ morewords = ['why','are','you','not','looking','in','my','eyes'] word_counts.update(morewords) print(word_counts.most_common(3)) """ Explanation: Update word counts using update() End of explanation """ a = Counter(words) b = Counter(morewords) print(a) print(b) # Combine counts c = a + b c # Subtract counts d = a - b d """ Explanation: You can use Counter to do mathematical operations. End of explanation """
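Besides + and -, Counter also supports intersection (&, element-wise minimum of counts) and union (|, element-wise maximum). A quick sketch:

```python
from collections import Counter

a = Counter(a=4, b=2, c=1)
b = Counter(a=1, b=3, d=2)

print(a & b)  # intersection: min of each count, only keys present in both
print(a | b)  # union: max of each count
```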
ledeprogram/algorithms
class6/donow/Lee_Dongjin_6_Donow.ipynb
gpl-3.0
import pandas as pd %matplotlib inline import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line) import statsmodels.formula.api as smf # package we'll be using for linear regression import numpy as np import scipy as sp """ Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model End of explanation """ df = pd.read_csv("data/hanford.csv") df """ Explanation: 2. Read in the hanford.csv file End of explanation """ df.describe() """ Explanation: 3. Calculate the basic descriptive statistics on the data End of explanation """ df.plot(kind='scatter', x='Exposure', y='Mortality') r = df.corr()['Exposure']['Mortality'] r """ Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation? End of explanation """ lm = smf.ols(formula="Mortality~Exposure",data=df).fit() intercept, slope = lm.params lm.params """ Explanation: Yes, there seems to be a correlation worthy of investigation. 5.
Create a linear regression model based on the available data to predict the mortality rate given a level of exposure End of explanation """ # Method 01 (What we've learned from the class) df.plot(kind='scatter', x='Exposure', y='Mortality') plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red") # Method 02 (Another version) _ so much harder ...than what we have learned def plot_correlation( ds, x, y, ylim=(100,240) ): plt.xlim(0,14) plt.ylim(ylim[0],ylim[1]) plt.scatter(ds[x], ds[y], alpha=0.6, s=50) for abc, row in ds.iterrows(): plt.text(row[x], row[y],abc ) plt.xlabel(x) plt.ylabel(y) # Correlation trend_variable = np.poly1d(np.polyfit(ds[x], ds[y], 1)) trendx = np.linspace(0, 14, 4) plt.plot(trendx, trend_variable(trendx), color='r') r = sp.stats.pearsonr(ds[x],ds[y]) plt.text(trendx[3], trend_variable(trendx[3]),'r={:.3f}'.format(r[0]), color = 'r' ) plt.tight_layout() plot_correlation(df,'Exposure','Mortality') r_squared = r **2 r_squared """ Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination) End of explanation """ def predicting_mortality_rate(exposure): return intercept + float(exposure) * slope predicting_mortality_rate(10) """ Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 End of explanation """
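The intercept/slope pair that smf.ols produced can be cross-checked with the closed-form least-squares formulas. A sketch on made-up numbers (not the Hanford data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / float(n)
    mean_y = sum(ys) / float(n)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Illustrative data lying exactly on y = 1 + 2x
print(fit_line([1, 2, 3, 4], [3, 5, 7, 9]))  # -> (1.0, 2.0)
```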
jtwhite79/pyemu
verification/henry/verify_unc_results.ipynb
bsd-3-clause
%matplotlib inline import os import numpy as np import matplotlib.pyplot as plt import pandas as pd import pyemu """ Explanation: verify pyEMU results with the henry problem End of explanation """ la = pyemu.Schur("pest.jco",verbose=False,forecasts=[]) la.drop_prior_information() jco_ord = la.jco.get(la.pst.obs_names,la.pst.adj_par_names) ord_base = "pest_ord" jco_ord.to_binary(ord_base + ".jco") la.pst.write(ord_base+".pst") """ Explanation: instantiate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian End of explanation """ pv_names = [] predictions = ["pd_ten", "c_obs10_2"] for pred in predictions: pv = jco_ord.extract(pred).T pv_name = pred + ".vec" pv.to_ascii(pv_name) pv_names.append(pv_name) """ Explanation: extract and save the forecast sensitivity vectors End of explanation """ prior_uncfile = "pest.unc" la.parcov.to_uncfile(prior_uncfile,covmat_file=None) """ Explanation: save the prior parameter covariance matrix as an uncertainty file End of explanation """ post_mat = "post.cov" post_unc = "post.unc" args = [ord_base + ".pst","1.0",prior_uncfile, post_mat,post_unc,"1"] pd7_in = "predunc7.in" f = open(pd7_in,'w') f.write('\n'.join(args)+'\n') f.close() out = "pd7.out" pd7 = os.path.join("i64predunc7.exe") os.system(pd7 + " <" + pd7_in + " >"+out) for line in open(out).readlines(): print(line) """ Explanation: PREDUNC7 write a response file to feed stdin to predunc7 End of explanation """ post_pd7 = pyemu.Cov.from_ascii(post_mat) la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions) post_pyemu = la_ord.posterior_parameter #post_pyemu = post_pyemu.get(post_pd7.row_names) """ Explanation: The cumulative
difference between the two posterior matrices: End of explanation """ args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"] pd1_in = "predunc1.in" pd1 = os.path.join("i64predunc1.exe") pd1_results = {} for pv_name in pv_names: args[3] = pv_name f = open(pd1_in, 'w') f.write('\n'.join(args) + '\n') f.close() out = "predunc1" + pv_name + ".out" os.system(pd1 + " <" + pd1_in + ">" + out) f = open(out,'r') for line in f: if "pre-cal " in line.lower(): pre_cal = float(line.strip().split()[-2]) elif "post-cal " in line.lower(): post_cal = float(line.strip().split()[-2]) f.close() pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal] """ Explanation: PREDUNC1 write a response file to feed stdin. Then run predunc1 for each forecast End of explanation """ pyemu_results = {} for pname in la_ord.prior_prediction.keys(): pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]), np.sqrt(la_ord.posterior_prediction[pname])] """ Explanation: organize the pyemu results into a structure for comparison End of explanation """ f = open("predunc1_textable.dat",'w') for pname in pd1_results.keys(): print(pname) f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\ .format(pd1_results[pname][0],pyemu_results[pname][0], pd1_results[pname][1],pyemu_results[pname][1])) print("prior",pname,pd1_results[pname][0],pyemu_results[pname][0]) print("post",pname,pd1_results[pname][1],pyemu_results[pname][1]) f.close() """ Explanation: compare the results: End of explanation """ f = open("pred_list.dat",'w') out_files = [] for pv in pv_names: out_name = pv+".predvar1b.out" out_files.append(out_name) f.write(pv+" "+out_name+"\n") f.close() args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"] for i in range(36): args.append(str(i)) args.append('') args.append("n") #no for most parameters args.append("y") #yes for mult f = open("predvar1b.in", 'w') f.write('\n'.join(args) + '\n') f.close() os.system("predvar1b.exe <predvar1b.in") pv1b_results = {} for out_file in 
out_files: pred_name = out_file.split('.')[0] f = open(out_file,'r') for _ in range(3): f.readline() arr = np.loadtxt(f) pv1b_results[pred_name] = arr """ Explanation: PREDVAR1b write the nessecary files to run predvar1b End of explanation """ la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco", predictions=predictions, omitted_parameters="mult1", verbose=False) df = la_ord_errvar.get_errvar_dataframe(np.arange(36)) df """ Explanation: now for pyemu End of explanation """ fig = plt.figure(figsize=(6,6)) max_idx = 15 idx = np.arange(max_idx) for ipred,pred in enumerate(predictions): arr = pv1b_results[pred][:max_idx,:] first = df[("first", pred)][:max_idx] second = df[("second", pred)][:max_idx] third = df[("third", pred)][:max_idx] ax = plt.subplot(len(predictions),1,ipred+1) #ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5) #ax.plot(first,color='b') #ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5) #ax.plot(second,color='g') #ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5) #ax.plot(third,color='r') ax.scatter(idx,arr[:,1],marker='x',s=40,color='g', label="PREDVAR1B - first term") ax.scatter(idx,arr[:,2],marker='x',s=40,color='b', label="PREDVAR1B - second term") ax.scatter(idx,arr[:,3],marker='x',s=40,color='r', label="PREVAR1B - third term") ax.scatter(idx,first,marker='o',facecolor='none', s=50,color='g',label='pyEMU - first term') ax.scatter(idx,second,marker='o',facecolor='none', s=50,color='b',label="pyEMU - second term") ax.scatter(idx,third,marker='o',facecolor='none', s=50,color='r',label="pyEMU - third term") ax.set_ylabel("forecast variance") ax.set_title("forecast: " + pred) if ipred == len(predictions) -1: ax.legend(loc="lower center",bbox_to_anchor=(0.5,-0.75), scatterpoints=1,ncol=2) ax.set_xlabel("singular values") #break plt.savefig("predvar1b_ver.eps") """ Explanation: generate some plots to verify End of explanation """ cmd_args = [os.path.join("i64identpar.exe"),ord_base,"5", "null","null","ident.out","/s"] cmd_line = ' 
'.join(cmd_args)+'\n' print(cmd_line) print(os.getcwd()) os.system(cmd_line) identpar_df = pd.read_csv("ident.out",delim_whitespace=True) la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco", predictions=predictions, verbose=False) df = la_ord_errvar.get_identifiability_dataframe(5) df """ Explanation: Identifiability End of explanation """ fig = plt.figure() ax = plt.subplot(111) axt = plt.twinx() ax.plot(identpar_df["identifiability"]) ax.plot(df["ident"].values) ax.set_xlim(-10,600) diff = identpar_df["identifiability"].values - df["ident"].values #print(diff) axt.plot(diff) axt.set_ylim(-1,1) ax.set_xlabel("parameter") ax.set_ylabel("identifiability") axt.set_ylabel("difference") """ Explanation: cheap plot to verify End of explanation """
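Both tools compute the posterior parameter covariance from the Schur complement, $\Sigma_{post} = \Sigma - \Sigma J^T (J \Sigma J^T + R)^{-1} J \Sigma$. In the scalar (one parameter, one observation) case this collapses to plain arithmetic; a sketch with illustrative numbers, not the Henry problem's values:

```python
prior_var = 1.0   # prior parameter variance
j = 2.0           # sensitivity (a 1x1 "Jacobian")
noise_var = 0.5   # observation noise variance

posterior_var = prior_var - prior_var * j * (1.0 / (j * prior_var * j + noise_var)) * j * prior_var
print(posterior_var)  # conditioning on data always shrinks the variance
```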
dmnfarrell/gordon-group
mirnaseq/miRNA_features_analysis.ipynb
apache-2.0
import glob,os import pandas as pd import numpy as np import mirnaseq.mirdeep2 as mdp from mirnaseq import base, analysis pd.set_option('display.width', 130) %matplotlib inline import pylab as plt from mpl_toolkits.mplot3d import Axes3D base.seabornsetup() plt.rcParams['savefig.dpi']=80 plt.rcParams['font.size']=16 home = os.path.expanduser("~") from sklearn import linear_model from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.grid_search import GridSearchCV, RandomizedSearchCV from sklearn.metrics import classification_report, accuracy_score from sklearn.decomposition.pca import PCA, RandomizedPCA from sklearn.lda import LDA #iconmap data path = os.path.join(home,'mirnaseq/data/iconmap/analysis/') os.chdir(path) respath = 'iconmap_results_mirdeep' ic = mdp.getResults(respath) ic = mdp.filterExprResults(ic,meanreads=150,freq=0.8) k = ic[ic.novel==False] condmap = mdp.getLabelMap(respath,'samplelabels.csv') condmap = condmap.sort('id') labels=['control','infected'] y=condmap.status.values #labels=[0,6] #y=condmap.month.values #douwe data path = os.path.join(home,'mirnaseq/data/douwe/analysis/') os.chdir(path) respath = 'douwe_mirdeep_rnafiltered' d = mdp.getResults(respath) d = mdp.filterExprResults(d,meanreads=150,freq=0.8) k = d[d.novel==False] condmap = mdp.getLabelMap(respath,'samplelabels.csv') condmap = condmap.sort('id') #labels=['START','EARLY','LATE'] #y=condmap.timegroup.values #labels=[1,2] #y=condmap.pool.values print condmap[:5] #use normalised columns scols = condmap.id+'(norm)' df = k.set_index('#miRNA') X=df[scols].T X=X.dropna(axis=1) X=X.drop_duplicates() names = list(X.columns) X=X.values #standardize data from sklearn import preprocessing #scaler = preprocessing.StandardScaler().fit(X) scaler = preprocessing.MinMaxScaler().fit(X) X = scaler.transform(X) def plot2D(X, x1=0, x2=1, ax=None): if ax == None: plt.figure(figsize=(6,6)) for c, i in zip("brgyc", labels): ax.scatter(X[y == i, x1], X[y == i, x2], 
c=c, s=100, label=i, alpha=0.8) ax.legend() return def plot3D(X): fig = plt.figure(figsize=(8,6)) ax = Axes3D(fig, rect=[0, 0, .95, 1]) for c, i in zip("brgyc", labels): ax.scatter(X[y == i, 0], X[y == i, 1], X[y == i, 2], c=c, s=100, label=i) plt.legend() for label in labels: ax.text3D(X[y == label, 0].mean(), X[y == label, 1].mean() + 1.5, X[y == label, 2].mean(), label, horizontalalignment='center', bbox=dict(alpha=.5, edgecolor='w', facecolor='w')) def doPCA(X): #do PCA pca = PCA(n_components=5) pca.fit(X) print pca.explained_variance_ratio_ Xt = pca.fit_transform(X) f,(ax1,ax2) = plt.subplots(1,2,figsize=(10,5)) plot2D(X, ax=ax1) plot2D(Xt, ax=ax2) plt.xlabel('PC1') plt.ylabel('PC2') plt.tight_layout() #plot3D(X) plot3D(Xt) return Xt def plotClassification(cl, X, ax=None): """Plot classifier result on a 2D axis""" if ax == None: f = plt.figure(figsize=(6,6)) ax = f.add_subplot(111) X = X[:, :2] cl.fit(X, y) h = .04 e=0.1 x_min, x_max = X[:, 0].min() - e, X[:, 0].max() + e y_min, y_max = X[:, 1].min() - e, X[:, 1].max() + e xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = cl.predict(np.c_[xx.ravel(), yy.ravel()]) #get cats as integers Z = pd.Categorical(Z).labels # Put the result into a color plot Z = Z.reshape(xx.shape) ax.pcolormesh(xx, yy, Z, cmap=plt.cm.coolwarm) for c, i in zip("brgyc", labels): ax.scatter(X[y == i, 0], X[y == i, 1], c=c, s=100, label=i, lw=2) ax.legend() return ax def classify(X, y, cl, name=''): """Test classification""" np.random.seed() ind = np.random.permutation(len(X)) from sklearn.cross_validation import train_test_split Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.4) cl.fit(Xtrain, ytrain) ypred = cl.predict(Xtest) print classification_report(ytest, ypred) #print accuracy_score(ytest, ypred) from sklearn import cross_validation yl = pd.Categorical(y).labels sc = cross_validation.cross_val_score(cl, X, yl, scoring='roc_auc', cv=5) print("AUC: %0.2f (+/- %0.2f)" % (sc.mean(), 
sc.std() * 2)) ax = plotClassification(cl, X) ax.set_title(name) return cl """ Explanation: feature selection and prediction testing for miRNA data Can specific miRNAs or a set of principal components be used to distinguish classes i.e. control vs infected. References: Y. Taguchi and Y. Murakami, "Principal component analysis based feature extraction approach to identify circulating microRNA biomarkers," PLoS One, vol. 8, no. 6, p. e66714, 2013. End of explanation """ labels = ['CTRL','MAP'] y = condmap.infection.values Xt = doPCA(X) """ Explanation: Do a principal component analysis with the normalised data and plot the first 2 PCs by category End of explanation """ knn = KNeighborsClassifier() logit = linear_model.LogisticRegression(C=1e5) svm = SVC(kernel='linear') classify(X, y, knn, 'knn') cl = classify(X, y, logit, 'logistic regression') cl2 = classify(Xt, y, knn, 'knn pca features') labels = ['Neg','Pos'] y = condmap.ELISA.values Xt = doPCA(X) cl = classify(X, y, knn, 'knn') """ Explanation: Create a classifier using the miRNA abundances as features End of explanation """ yl = pd.Categorical(y).labels #clf = KNeighborsClassifier() clf = linear_model.LogisticRegression(penalty='l1') from sklearn.feature_selection import RFE from sklearn.feature_selection import SelectKBest, SelectPercentile s = SelectKBest(k=10) #s = SelectPercentile(percentile = 60) s.fit_transform(X, y) '''res = sorted(zip(map(lambda x: round(x, 4), s.scores_), names), reverse=True) for i in res[:20]: print i''' # create the RFE model and select attributes fs = RFE(clf, 10) fs = fs.fit(X, y) res = sorted(zip(map(lambda x: round(x, 4), fs.ranking_), names)) for i in res[:20]: print i #using a pipeline to find best no.
of principal components as features for prediction labels = ['CTRL','MAP'] y = condmap.infection.values pca = RandomizedPCA() pca.fit(X) from sklearn.pipeline import Pipeline pipe = Pipeline(steps=[('pca', pca), ('knn', cl)]) components = range(3,30,2) Cs = np.logspace(-4, 4, 3) params = dict(pca__n_components=components) #,logistic__C=Cs) est = GridSearchCV(pipe, params, cv=3) est.fit(X, y) print(est.best_estimator_.named_steps['pca'].n_components) plt.figure(figsize=(6,3)) plt.plot(pca.explained_variance_, linewidth=2) plt.axvline(est.best_estimator_.named_steps['pca'].n_components, linestyle=':', label='n_components chosen') plt.legend() def doLDA(X,y): lda = LDA(n_components=2) Xl = lda.fit_transform(X,y) #ypred = lda.fit(X, y).predict(X) fig = plt.figure(figsize=(6,6)) for c, i in zip("grb", labels): #plt.scatter(Xl[y == i, 0], Xl[y == i, 1], c=c, s=100, label=i) plt.plot(Xl[y == i, 0], 'o', c=c, label=i) plt.legend() labels = ['Neg','Pos'] y = condmap.ELISA.values doLDA(X,y) """ Explanation: Find best parameters using feature selection with original column data (no PCA) End of explanation """
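The doPCA step above leans on scikit-learn's PCA, but the variance decomposition it prints can be reproduced with plain NumPy. A hedged sketch (the random matrix below is a hypothetical stand-in for the miRNA abundance matrix; condmap and the real data are not assumed): center the columns, take the SVD, and the squared singular values give the explained-variance ratios.

```python
import numpy as np

# Hypothetical stand-in for the miRNA abundance matrix (samples x miRNAs).
rng = np.random.RandomState(0)
X = rng.normal(size=(40, 8))

# Center each column, then decompose; squared singular values are
# proportional to the variance captured by each principal component.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance_ratio = (s ** 2) / np.sum(s ** 2)

# Project onto the first two components, analogous to the Xt passed to plot2D.
Xt = Xc @ Vt[:2].T
print(explained_variance_ratio.round(3))
print(Xt.shape)
```

The ratios come out sorted in decreasing order by construction, which matches what pca.explained_variance_ratio_ reports.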
hasadna/knesset-data-pipelines
jupyter-notebooks/Render site pages for development and debugging.ipynb
mit
!{'cd /pipelines; KNESSET_LOAD_FROM_URL=1 dpp run --concurrency 4 '\ './committees/kns_committee,'\ './people/committee-meeting-attendees,'\ './members/mk_individual'} """ Explanation: Render site pages dpp runs the knesset data pipelines periodically on our server. This notebook shows how to run pipelines that render pages for the static website at https://oknesset.org Load the source data Download the source data, can take a few minutes. End of explanation """ !{'cd /pipelines; dpp run --verbose ./committees/dist/build'} """ Explanation: Run the build pipeline This pipeline aggregates the relevant data and allows to filter for quicker development cycles. You can uncomment and modify the filter step in committees/dist/knesset.source-spec.yaml under the build pipeline to change the filter. The build pipeline can take a few minutes to process for the first time. End of explanation """ !{'pip install --upgrade dataflows'} """ Explanation: Download some protocol files for rendering upgrade to latest dataflows library End of explanation """ session_ids = [2063122, 2063126] from dataflows import Flow, load, printer, filter_rows sessions_data = Flow( load('/pipelines/data/committees/kns_committeesession/datapackage.json'), filter_rows(lambda row: row['CommitteeSessionID'] in session_ids), printer(tablefmt='html') ).results() import os import subprocess import sys for session in sessions_data[0][0]: for attr in ['text_parsed_filename', 'parts_parsed_filename']: pathpart = 'meeting_protocols_text' if attr == 'text_parsed_filename' else 'meeting_protocols_parts' url = 'https://production.oknesset.org/pipelines/data/committees/{}/{}'.format(pathpart, session[attr]) filename = '/pipelines/data/committees/{}/{}'.format(pathpart, session[attr]) os.makedirs(os.path.dirname(filename), exist_ok=True) cmd = 'curl -s -o {} {}'.format(filename, url) print(cmd, file=sys.stderr) subprocess.check_call(cmd, shell=True) """ Explanation: Restart the kernel if an upgrade was done Choose 
some session IDs to download protocol files for: End of explanation """ %%bash find /pipelines/data/committees/dist -type f -name '*.hash' -delete """ Explanation: Delete dist hash files End of explanation """ !{'cd /pipelines; dpp run ./committees/dist/render_meetings'} """ Explanation: Render pages Should run the render pipelines in the following order: Meetings: End of explanation """ from dataflows import Flow, load, printer, filter_rows, add_field def add_filenames(): def _add_filenames(row): for ext in ['html', 'json']: row['rendered_'+ext] = '/pipelines/data/committees/dist/dist/meetings/{}/{}/{}.{}'.format( str(row['CommitteeSessionID'])[0], str(row['CommitteeSessionID'])[1], str(row['CommitteeSessionID']), ext) return Flow( add_field('rendered_html', 'string'), add_field('rendered_json', 'string'), _add_filenames ) rendered_meetings = Flow( load('/pipelines/data/committees/dist/rendered_meetings_stats/datapackage.json'), add_filenames(), filter_rows(lambda row: row['CommitteeSessionID'] in session_ids), printer(tablefmt='html') ).results()[0][0] """ Explanation: Rendered meetings stats End of explanation """ !{'cd /pipelines; dpp run ./committees/dist/render_committees'} """ Explanation: Committees and homepage End of explanation """ !{'cd /pipelines; dpp run ./committees/dist/create_members,./committees/dist/build_positions,./committees/dist/create_factions'} """ Explanation: Members / Factions End of explanation """
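The add_filenames step above derives each rendered file's path from the session ID (first digit, second digit, full ID). As a sketch, the same path rule on its own, outside of the dataflows pipeline:

```python
def rendered_filename(session_id, ext):
    # Mirrors the rule in _add_filenames: first digit / second digit / full id.
    sid = str(session_id)
    return '/pipelines/data/committees/dist/dist/meetings/{}/{}/{}.{}'.format(
        sid[0], sid[1], sid, ext)

print(rendered_filename(2063122, 'html'))
# -> /pipelines/data/committees/dist/dist/meetings/2/0/2063122.html
```

Splitting on the leading digits keeps any one directory from accumulating every rendered meeting.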
tacticsiege/TacticToolkit
examples/2017-09-11_TacticToolkit_Intro.ipynb
mit
# until we can install, add parent dir to path so ttk is found import sys sys.path.insert(0, '..') # basic imports import pandas as pd import numpy as np import re import matplotlib %matplotlib inline matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) import matplotlib.pyplot as plt """ Explanation: TacticToolkit Introduction TacticToolkit is a codebase to assist with machine learning and natural language processing. We build on top of sklearn, tensorflow, keras, nltk, spaCy and other popular libraries. The TacticToolkit will help throughout; from data acquisition to preprocessing to training to inference. | Modules | Description | |---------------|------------------------------------------------------| | corpus | Load and work with text corpora | | data | Data generation and common data functions | | plotting | Predefined and customizable plots | | preprocessing | Transform and clean data in preparation for training | | sandbox | Newer experimental features and references | | text | Text manipulation and processing | End of explanation """ # simple text normalization # apply individually # apply to sentences # simple text tokenization # harder text tokenization # sentence tokenization # paragraph tokenization """ Explanation: Let's start with some text The ttk.text module includes classes and functions to make working with text easier. These are meant to supplement existing nltk and spaCy text processing, and often work in conjunction with these libraries. Below is an overview of some of the major components. We'll explore these objects with some simple text now. | Class | Purpose | |-----------------|------------------------------------------------------------------------| | Normalizer | Normalizes text by formatting, stemming and substitution | | Tokenizer | High level tokenizer, provides word, sentence and paragraph tokenizers | End of explanation """ from ttk.corpus import load_headline_corpus # load the dated corpus. 
# This will attempt to download the corpus from github if it is not present locally. corpus = load_headline_corpus(verbose=True) # inspect categories print (len(corpus.categories()), 'categories') for cat in corpus.categories(): print (cat) # all main corpus methods allow lists of categories and dates filters d = '2017-08-22' print (len(corpus.categories(dates=[d])), 'categories') for cat in corpus.categories(dates=[d]): print (cat) # use the Corpus Reporters to get summary reports from ttk.corpus import CategorizedDatedCorpusReporter reporter = CategorizedDatedCorpusReporter() # summarize categories print (reporter.category_summary(corpus)) # reporters can return str, list or dataframe for s in reporter.date_summary(corpus, dates=['2017-08-17', '2017-08-18', '2017-08-19',], output='list'): print (s) cat_frame = reporter.category_summary(corpus, categories=['BBC', 'CNBC', 'CNN', 'NPR',], output='dataframe') cat_frame.head() """ Explanation: Corpii? Corpuses? Corpora! The ttk.corpus module builds on the nltk.corpus model, adding new corpus readers and corpus processing objects. It also includes loading functions for the corpora included with ttk, which will download the content from github as needed. We'll use the Dated Headline corpus included with ttk. This corpus was created using ttk, and is maintained in a complimentary github project, TacticCorpora (https://github.com/tacticsiege/TacticCorpora). First, a quick look at the corpus module's major classes and functions. | Class | Purpose | |------------------------------|----------------------------------------------------------------------------------| |CategorizedDatedCorpusReader |Extends nltk's CategorizedPlainTextCorpusReader to include a second category, Date| |CategorizedDatedCorpusReporter|Summarizes corpora. 
Filterable, and output can be str, list or DataFrame | | Function | Purpose | |--------------------------------------|-------------------------------------------------------------------------| | load_headline_corpus(with_date=True) | Loads Categorized or CategorizedDated CorpusReader from headline data | End of explanation """
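The category and date filters above are specific to ttk's corpus readers, but the filtering idea itself is simple. A hedged sketch with a hypothetical in-memory corpus of (category, date, headline) records; this is not ttk's API, just the behaviour it exposes:

```python
# Hypothetical stand-in for a categorized, dated headline corpus.
corpus = [
    ('BBC', '2017-08-22', 'headline one'),
    ('CNN', '2017-08-22', 'headline two'),
    ('BBC', '2017-08-23', 'headline three'),
]

def categories(records, dates=None):
    # Distinct categories, optionally restricted to the given dates,
    # as in corpus.categories(dates=[d]) above.
    return sorted({cat for cat, day, _ in records
                   if dates is None or day in dates})

print(categories(corpus))                        # all categories
print(categories(corpus, dates=['2017-08-23']))  # only sources active that day
```

Restricting by date shrinks the category list, which is exactly the effect shown with the `d = '2017-08-22'` filter above.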
luofan18/deep-learning
sentiment-rnn/Sentiment_RNN_Solution.ipynb
mit
import numpy as np import tensorflow as tf with open('../sentiment-network/reviews.txt', 'r') as f: reviews = f.read() with open('../sentiment-network/labels.txt', 'r') as f: labels = f.read() reviews[:2000] """ Explanation: Sentiment Analysis with an RNN In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels. The architecture for this network is shown below. <img src="assets/network_diagram.png" width=400px> Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own. From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function. We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation """ from string import punctuation all_text = ''.join([c for c in reviews if c not in punctuation]) reviews = all_text.split('\n') all_text = ' '.join(reviews) words = all_text.split() all_text[:2000] words[:100] """ Explanation: Data preprocessing The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. End of explanation """ from collections import Counter counts = Counter(words) vocab = sorted(counts, key=counts.get, reverse=True) vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)} reviews_ints = [] for each in reviews: reviews_ints.append([vocab_to_int[word] for word in each.split()]) """ Explanation: Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation """ labels = labels.split('\n') labels = np.array([1 if each == 'positive' else 0 for each in labels]) review_lens = Counter([len(x) for x in reviews_ints]) print("Zero-length reviews: {}".format(review_lens[0])) print("Maximum review length: {}".format(max(review_lens))) """ Explanation: Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively. End of explanation """ non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0] len(non_zero_idx) reviews_ints[-1] """ Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters. Exercise: First, remove the review with zero length from the reviews_ints list. End of explanation """ reviews_ints = [reviews_ints[ii] for ii in non_zero_idx] labels = np.array([labels[ii] for ii in non_zero_idx]) """ Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general. End of explanation """ seq_len = 200 features = np.zeros((len(reviews_ints), seq_len), dtype=int) for i, row in enumerate(reviews_ints): features[i, -len(row):] = np.array(row)[:seq_len] features[:10,:100] """ Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128].
For reviews longer than 200, use only the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data. End of explanation """ split_frac = 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) """ Explanation: Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data. End of explanation """ lstm_size = 256 lstm_layers = 1 batch_size = 100 learning_rate = 0.001 tf.reset_default_graph() """ Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like: Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200) Build the graph Here, we'll build the graph. First up, defining the hyperparameters. lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. batch_size: The number of reviews to feed the network in one training pass.
Typically this should be set as high as you can go without running out of memory. learning_rate: Learning rate End of explanation """ n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1 # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') labels_ = tf.placeholder(tf.int32, [None, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='keep_prob') """ Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder. End of explanation """ # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 with graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs_) """ Explanation: Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. 
This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200]. End of explanation """ with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) """ Explanation: LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, your network will have better performance with more layers.
That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out. End of explanation """ with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) """ Explanation: RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN.
Remember that we're actually passing in vectors from the embedding layer, embed. End of explanation """ with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: Output We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_. End of explanation """ with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. End of explanation """ def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] """ Explanation: Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation """ epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") """ Explanation: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. End of explanation """ test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) """ Explanation: Testing End of explanation """
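The TensorFlow graph above needs the real dataset to run, but the preprocessing contract (word IDs starting at 1, with 0 reserved for left padding) can be checked in isolation. A small sketch with toy reviews standing in for the movie-review data:

```python
from collections import Counter

import numpy as np

# Toy stand-in for the review corpus.
reviews = ['best movie ever', 'worst movie']
counts = Counter(' '.join(reviews).split())
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}  # 0 kept for padding

# Left-pad (or truncate) each encoded review to a fixed length.
seq_len = 5
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, review in enumerate(reviews):
    ints = [vocab_to_int[w] for w in review.split()]
    features[i, -len(ints):] = np.array(ints)[:seq_len]
print(features)
```

Every row ends with the word IDs and begins with zeros, which is why the network's n_words count adds 1 for the padding symbol.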
FRESNA/atlite
examples/logfiles_and_messages.ipynb
gpl-3.0
import logging logging.basicConfig(level=logging.INFO) """ Explanation: Logfiles and messages Atlite uses the logging library for displaying messages with different purposes. Minimum information We recommend that you always use logging when using atlite with information messages enabled. The simplest way is to End of explanation """ import warnings import logging warnings.simplefilter('default', DeprecationWarning) logging.captureWarnings(True) logging.basicConfig(level=logging.INFO) """ Explanation: This will prompt messages with the priority level of "information". Our recommendation End of explanation """ import warnings import logging warnings.simplefilter('always', DeprecationWarning) logging.captureWarnings(True) logging.basicConfig(level=logging.DEBUG) """ Explanation: Maximum information (aka 'Information overload') This configuration will nag you about nearly everything there is every time. Use it for debugging. End of explanation """ logging.basicConfig(level=logging.INFO) """ Explanation: Adjusting the level of detail/verbosity Usually receiving information messages is enough verbosity, i.e. End of explanation """ logging.basicConfig(level=logging.DEBUG) """ Explanation: When debugging your program you might want to receive more detailed information on what is going on and include further debugging messages. To do so: End of explanation """ logging.basicConfig(filename='example.log', level=logging.INFO) """ Explanation: Creating logfiles When running automated scripts or for documentation purposes, you might want to redirect and save the logging output in a log file.
To do so, simply add the filename argument to your basicConfig call as shown below and substitute example.log for your preferred logfile name End of explanation """ import warnings warnings.simplefilter('always', DeprecationWarning) logging.captureWarnings(True) """ Explanation: (Deprecation) warnings When features of the programme (or underlying libraries) become obsolete (deprecated), users nowadays usually do not get a warning. (More details on that here) We recommend you turn on those warnings to get a heads-up if you are using features which will be removed in future releases. To enable the warnings and have the logging library handle them, use the following lines at the beginning of your code End of explanation """
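Putting captureWarnings together with a handler makes the routing visible without touching the filesystem; a sketch using an in-memory stream as a stand-in for example.log:

```python
import io
import logging
import warnings

# Warnings routed through logging land on the 'py.warnings' logger;
# attach a stream handler to capture them (stand-in for a log file).
stream = io.StringIO()
logging.getLogger('py.warnings').addHandler(logging.StreamHandler(stream))

warnings.simplefilter('always', DeprecationWarning)
logging.captureWarnings(True)
warnings.warn('old feature', DeprecationWarning)

print('DeprecationWarning' in stream.getvalue())
```

With basicConfig(filename='example.log') instead of the in-memory stream, the same warning text ends up in the logfile.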
karlnapf/shogun
doc/ipython-notebooks/classification/Classification.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from shogun import * import shogun as sg #Needed lists for the final plot classifiers_linear = []*10 classifiers_non_linear = []*10 classifiers_names = []*10 fadings = []*10 """ Explanation: Visual Comparison Between Different Classification Methods in Shogun Notebook by Youssef Emad El-Din (Github ID: <a href="https://github.com/youssef-emad/">youssef-emad</a>) This notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, where one is linear seperable, and one is not. <a href ="#section1">Data Generation and Visualization</a> <a href ="#section2">Support Vector Machine</a> <a href ="#section2a">Linear SVM</a> <a href ="#section2b">Gaussian Kernel</a> <a href ="#section2c">Sigmoid Kernel</a> <a href ="#section2d">Polynomial Kernel</a> <a href ="#section3">Naive Bayes</a> <a href ="#section4">Nearest Neighbors</a> <a href ="#section5">Linear Discriminant Analysis</a> <a href ="#section6">Quadratic Discriminat Analysis</a> <a href ="#section7">Gaussian Process</a> <a href ="#section7a">Logit Likelihood model</a> <a href ="#section7b">Probit Likelihood model</a> <a href ="#section8">Putting It All Together</a> End of explanation """ shogun_feats_linear = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat'))) shogun_labels_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat'))) shogun_feats_non_linear = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat'))) shogun_labels_non_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat'))) feats_linear = shogun_feats_linear.get('feature_matrix') 
labels_linear = shogun_labels_linear.get('labels') feats_non_linear = shogun_feats_non_linear.get('feature_matrix') labels_non_linear = shogun_labels_non_linear.get('labels') """ Explanation: <a id = "section1">Data Generation and Visualization</a> Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CDenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CBinaryLabels.html">BinaryLabels</a> classes. End of explanation """ def plot_binary_data(plot,X_train, y_train): """ This function plots 2D binary data with different colors for different labels. """ plot.xlabel(r"$x$") plot.ylabel(r"$y$") plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro') plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo') def compute_plot_isolines(classifier,feats,size=200,fading=True): """ This function computes the classification of points on the grid to get the decision boundaries used in plotting """ x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size) x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size) x, y = np.meshgrid(x1, x2) plot_features=features(np.array((np.ravel(x), np.ravel(y)))) if fading == True: plot_labels = classifier.apply(plot_features).get('current_values') else: plot_labels = classifier.apply(plot_features).get('labels') z = plot_labels.reshape((size, size)) return x,y,z def plot_model(plot,classifier,features,labels,fading=True): """ This function plots an input classification model """ x,y,z = compute_plot_isolines(classifier,features,fading=fading) plot.pcolor(x,y,z,cmap='RdBu_r') plot.contour(x, y, z, linewidths=1, colors='black') plot_binary_data(plot,features, labels) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Linear Features") plot_binary_data(plt,feats_linear, labels_linear) plt.subplot(122) plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear) """ Explanation: <a id = "section1">Data Generation and Visualization</a> Data visualization methods. End of explanation """ plt.figure(figsize=(15,5)) c = 0.5 epsilon =1e-3 svm_linear = LibLinear(c,shogun_feats_linear,shogun_labels_linear) svm_linear.put('liblinear_solver_type', L2R_L2LOSS_SVC) svm_linear.put('epsilon', epsilon) svm_linear.train() classifiers_linear.append(svm_linear) classifiers_names.append("SVM Linear") fadings.append(True) plt.subplot(121) plt.title("Linear SVM - Linear Features") plot_model(plt,svm_linear,feats_linear,labels_linear) svm_non_linear = LibLinear(c,shogun_feats_non_linear,shogun_labels_non_linear) svm_non_linear.put('liblinear_solver_type', L2R_L2LOSS_SVC) svm_non_linear.put('epsilon', epsilon) svm_non_linear.train() classifiers_non_linear.append(svm_non_linear) plt.subplot(122) plt.title("Linear SVM - Non Linear Features") plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSVM.html">Support Vector Machine</a> <a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibLinear.html">Linear SVM</a> Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibLinear.html">Liblinear</a>, which is a library for large-scale linear learning focusing on SVM, used for classification End of explanation """ gaussian_c=0.7 gaussian_kernel_linear=sg.kernel("GaussianKernel", log_width=np.log(100)) gaussian_svm_linear=sg.machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_linear, labels=shogun_labels_linear) gaussian_svm_linear.train(shogun_feats_linear) classifiers_linear.append(gaussian_svm_linear) fadings.append(True) gaussian_kernel_non_linear=sg.kernel("GaussianKernel", log_width=np.log(100)) gaussian_svm_non_linear=sg.machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear)
gaussian_svm_non_linear.train(shogun_feats_non_linear) classifiers_non_linear.append(gaussian_svm_non_linear) classifiers_names.append("SVM Gaussian Kernel") plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Gaussian Kernel - Linear Features") plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear) plt.subplot(122) plt.title("SVM Gaussian Kernel - Non Linear Features") plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear) """ Explanation: SVM - Kernels Shogun provides many options for using kernel functions. Kernels in Shogun are based on two classes which are <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernel.html">CKernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelMachine.html">CKernelMachine</a> base class. <a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">Gaussian Kernel</a> End of explanation """ sigmoid_c = 0.9 sigmoid_kernel_linear = SigmoidKernel(shogun_feats_linear,shogun_feats_linear,200,1,0.5) sigmoid_svm_linear = sg.machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_linear, labels=shogun_labels_linear) sigmoid_svm_linear.train() classifiers_linear.append(sigmoid_svm_linear) classifiers_names.append("SVM Sigmoid Kernel") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Sigmoid Kernel - Linear Features") plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear) sigmoid_kernel_non_linear = SigmoidKernel(shogun_feats_non_linear,shogun_feats_non_linear,400,2.5,2) sigmoid_svm_non_linear = sg.machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear) sigmoid_svm_non_linear.train() classifiers_non_linear.append(sigmoid_svm_non_linear) plt.subplot(122) plt.title("SVM Sigmoid Kernel - Non Linear Features") plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id 
="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a> End of explanation """ poly_c = 0.5 degree = 4 poly_kernel_linear = sg.kernel('PolyKernel', degree=degree, c=1.0) poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear) poly_svm_linear = sg.machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_linear, labels=shogun_labels_linear) poly_svm_linear.train() classifiers_linear.append(poly_svm_linear) classifiers_names.append("SVM Polynomial kernel") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Polynomial Kernel - Linear Features") plot_model(plt,poly_svm_linear,feats_linear,labels_linear) poly_kernel_non_linear = sg.kernel('PolyKernel', degree=degree, c=1.0) poly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear) poly_svm_non_linear = sg.machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear) poly_svm_non_linear.train() classifiers_non_linear.append(poly_svm_non_linear) plt.subplot(122) plt.title("SVM Polynomial Kernel - Non Linear Features") plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a> End of explanation """ multiclass_labels_linear = shogun_labels_linear.get('labels') for i in range(0,len(multiclass_labels_linear)): if multiclass_labels_linear[i] == -1: multiclass_labels_linear[i] = 0 multiclass_labels_non_linear = shogun_labels_non_linear.get('labels') for i in range(0,len(multiclass_labels_non_linear)): if multiclass_labels_non_linear[i] == -1: multiclass_labels_non_linear[i] = 0 shogun_multiclass_labels_linear = MulticlassLabels(multiclass_labels_linear) shogun_multiclass_labels_non_linear = MulticlassLabels(multiclass_labels_non_linear) naive_bayes_linear = GaussianNaiveBayes() naive_bayes_linear.put('features', 
shogun_feats_linear) naive_bayes_linear.put('labels', shogun_multiclass_labels_linear) naive_bayes_linear.train() classifiers_linear.append(naive_bayes_linear) classifiers_names.append("Naive Bayes") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Naive Bayes - Linear Features") plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False) naive_bayes_non_linear = GaussianNaiveBayes() naive_bayes_non_linear.put('features', shogun_feats_non_linear) naive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear) naive_bayes_non_linear.train() classifiers_non_linear.append(naive_bayes_non_linear) plt.subplot(122) plt.title("Naive Bayes - Non Linear Features") plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False) """ Explanation: <a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianNaiveBayes.html">Naive Bayes</a> End of explanation """ number_of_neighbors = 10 distances_linear = sg.distance('EuclideanDistance') distances_linear.init(shogun_feats_linear, shogun_feats_linear) knn_linear = KNN(number_of_neighbors,distances_linear,shogun_labels_linear) knn_linear.train() classifiers_linear.append(knn_linear) classifiers_names.append("Nearest Neighbors") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Nearest Neighbors - Linear Features") plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False) distances_non_linear = sg.distance('EuclideanDistance') distances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear) knn_non_linear = KNN(number_of_neighbors,distances_non_linear,shogun_labels_non_linear) knn_non_linear.train() classifiers_non_linear.append(knn_non_linear) plt.subplot(122) plt.title("Nearest Neighbors - Non Linear Features") plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False) """ Explanation: <a id ="section4" 
href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CKNN.html">Nearest Neighbors</a> End of explanation """ gamma = 0.1 lda_linear = sg.machine('LDA', gamma=gamma, labels=shogun_labels_linear) lda_linear.train(shogun_feats_linear) classifiers_linear.append(lda_linear) classifiers_names.append("LDA") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("LDA - Linear Features") plot_model(plt,lda_linear,feats_linear,labels_linear) lda_non_linear = sg.machine('LDA', gamma=gamma, labels=shogun_labels_non_linear) lda_non_linear.train(shogun_feats_non_linear) classifiers_non_linear.append(lda_non_linear) plt.subplot(122) plt.title("LDA - Non Linear Features") plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a> End of explanation """ qda_linear = QDA(shogun_feats_linear, shogun_multiclass_labels_linear) qda_linear.train() classifiers_linear.append(qda_linear) classifiers_names.append("QDA") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("QDA - Linear Features") plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False) qda_non_linear = QDA(shogun_feats_non_linear, shogun_multiclass_labels_non_linear) qda_non_linear.train() classifiers_non_linear.append(qda_non_linear) plt.subplot(122) plt.title("QDA - Non Linear Features") plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False) """ Explanation: <a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CQDA.html">Quadratic Discriminant Analysis</a> End of explanation """ # create Gaussian kernel with width = 2.0 kernel = sg.kernel("GaussianKernel", log_width=np.log(2)) # create zero mean function zero_mean = ZeroMean() # create logit likelihood model likelihood = LogitLikelihood() # specify EP approximation inference method inference_model_linear 
= EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood) # create and train GP classifier, which uses Laplace approximation gaussian_logit_linear = GaussianProcessClassification(inference_model_linear) gaussian_logit_linear.train() classifiers_linear.append(gaussian_logit_linear) classifiers_names.append("Gaussian Process Logit") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Gaussian Process - Logit - Linear Features") plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear) inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear, zero_mean, shogun_labels_non_linear, likelihood) gaussian_logit_non_linear = GaussianProcessClassification(inference_model_non_linear) gaussian_logit_non_linear.train() classifiers_non_linear.append(gaussian_logit_non_linear) plt.subplot(122) plt.title("Gaussian Process - Logit - Non Linear Features") plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CGaussianProcessBinaryClassification.html">Gaussian Process</a> <a id ="section7a">Logit Likelihood model</a> Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLogitLikelihood.html">CLogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CEPInferenceMethod.html">CEPInferenceMethod</a> classes are used. 
End of explanation """ likelihood = ProbitLikelihood() inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood) gaussian_probit_linear = GaussianProcessClassification(inference_model_linear) gaussian_probit_linear.train() classifiers_linear.append(gaussian_probit_linear) classifiers_names.append("Gaussian Process Probit") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Gaussian Process - Probit - Linear Features") plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear) inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear, zero_mean, shogun_labels_non_linear, likelihood) gaussian_probit_non_linear = GaussianProcessClassification(inference_model_non_linear) gaussian_probit_non_linear.train() classifiers_non_linear.append(gaussian_probit_non_linear) plt.subplot(122) plt.title("Gaussian Process - Probit - Non Linear Features") plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear) """ Explanation: <a id ="section7b">Probit Likelihood model</a> Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CProbitLikelihood.html">CProbitLikelihood</a> class is used. End of explanation """ figure = plt.figure(figsize=(30,9)) plt.subplot(2,11,1) plot_binary_data(plt,feats_linear, labels_linear) for i in range(0,10): plt.subplot(2,11,i+2) plt.title(classifiers_names[i]) plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i]) plt.subplot(2,11,12) plot_binary_data(plt,feats_non_linear, labels_non_linear) for i in range(0,10): plt.subplot(2,11,13+i) plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i]) """ Explanation: <a id="section8">Putting It All Together</a> End of explanation """
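Every panel in the comparison above is drawn with the same `compute_plot_isolines` pattern: build a grid with `np.meshgrid`, classify every grid point, and reshape the flat predictions back into a 2D array for contour plotting. A minimal NumPy sketch of that pattern, using a hypothetical stand-in classifier (a fixed linear rule) instead of a trained Shogun machine, since Shogun itself may not be installed:

```python
import numpy as np

def toy_classifier(points):
    # hypothetical stand-in for classifier.apply(); points has shape
    # (2, n_points), like the feature matrices used in this notebook
    return np.sign(points[0] + points[1])

def decision_grid(classify, feats, size=200):
    # same idea as compute_plot_isolines: grid -> predict -> reshape
    x1 = np.linspace(1.2 * feats[0].min(), 1.2 * feats[0].max(), size)
    x2 = np.linspace(1.2 * feats[1].min(), 1.2 * feats[1].max(), size)
    x, y = np.meshgrid(x1, x2)
    grid_points = np.array((np.ravel(x), np.ravel(y)))  # shape (2, size*size)
    z = classify(grid_points).reshape((size, size))
    return x, y, z

feats = np.random.RandomState(0).randn(2, 50)
x, y, z = decision_grid(toy_classifier, feats, size=50)
print(x.shape, y.shape, z.shape)
```

The resulting `x`, `y`, `z` arrays can be passed straight to `pcolor` and `contour`, exactly as `plot_model` does above.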
atulsingh0/MachineLearning
python_DC/Pandas_#1.ipynb
gpl-3.0
# # Create array of DataFrame values: np_vals
# np_vals = df.values
# # Create new array of base 10 logarithm values: np_vals_log10
# np_vals_log10 = np.log10(np_vals)
# # Create array of new DataFrame by passing df to np.log10(): df_log10
# df_log10 = np.log10(df)
# # Print original and new data containers
# print(type(np_vals), type(np_vals_log10))
# print(type(df), type(df_log10))
"""
Explanation: Import numpy using the standard alias np.
Assign the numerical values in the DataFrame df to an array np_vals using the attribute values.
Pass np_vals into the NumPy method log10() and store the results in np_vals_log10.
Pass the entire df DataFrame into the NumPy method log10() and store the results in df_log10.
Call print() and type() on both np_vals_log10 and df_log10, and compare. This has been done for you.
End of explanation
"""
# creating two lists
list_keys = ['Country', 'Total']
list_values = [['United States', 'Soviet Union', 'United Kingdom'], [1118, 473, 273]]
print(type(list_keys))
# creating a list of (key, value) tuples
zipped = list(zip(list_keys, list_values))
print(zipped)
# creating a dictionary from the zipped values
data = dict(zipped)
print(data, "\n", type(data))
# creating a dataframe
df = pd.DataFrame(data)
print(df)
# we can change the column names
df.columns = ['Country_Name', 'Total_1']
print(df)
# broadcasting of a scalar variable in a dataframe
state = 'UP'
cities = ['Kanpur', 'Agra', 'Lucknow']
# creating DataFrame
df = pd.DataFrame({'state':state, 'cities':cities})
print(df)
# # Read in the file: df1
# df1 = pd.read_csv('world_population.csv')
# # Create a list of the new column labels: new_labels
# new_labels = ['year', 'population']
# # Read in the file, specifying the header and names parameters: df2
# df2 = pd.read_csv('world_population.csv', header=0, names=new_labels)
# # Print both the DataFrames
# print(df1)
# print(df2)
"""
Explanation: Creating DataFrames
End of explanation
"""
# # Create a plot with color='red'
# 
df.plot(color='red') # # Add a title # plt.title('Temperature in Austin') # # Specify the x-axis label # plt.xlabel('Hours since midnight August 1, 2010') # # Specify the y-axis label # plt.ylabel('Temperature (degrees F)') # # Display the plot # plt.show() """ Explanation: Use pd.read_csv() without using any keyword arguments to read file_messy into a pandas DataFrame df1. Use .head() to print the first 5 rows of df1 and see how messy it is. Do this in the IPython Shell first so you can see how modifying read_csv() can clean up this mess. Using the keyword arguments delimiter=' ', header=3 and comment='#', use pd.read_csv() again to read file_messy into a new DataFrame df2. Print the output of df2.head() to verify the file was read correctly. Use the DataFrame method .to_csv() to save the DataFrame df2 to the file file_clean. Be sure to specify index=False. Use the DataFrame method .to_excel() to save the DataFrame df2 to the file 'file_clean.xlsx'. Again, remember to specify index=False. Create the plot with the DataFrame method df.plot(). Specify a color of 'red'. Note: c and color are interchangeable as parameters here, but we ask you to be explicit and specify color. Use plt.title() to give the plot a title of 'Temperature in Austin'. Use plt.xlabel() to give the plot an x-axis label of 'Hours since midnight August 1, 2010'. Use plt.ylabel() to give the plot a y-axis label of 'Temperature (degrees F)'. Finally, display the plot using plt.show(). 
End of explanation """ # # Plot all columns (default) # df.plot() # plt.show() # # Plot all columns as subplots # df.plot(subplots=True) # plt.show() # # Plot just the Dew Point data # column_list1 = ["Dew Point (deg F)"] # df[column_list1].plot() # plt.show() # # Plot the Dew Point and Temperature data, but not the Pressure data # column_list2 = ['Temperature (deg F)','Dew Point (deg F)'] # df[column_list2].plot() # plt.show() """ Explanation: Plot all columns together on one figure by calling df.plot(), and noting the vertical scaling problem. Plot all columns as subplots. To do so, you need to specify subplots=True inside .plot(). Plot a single column of dew point data. To do this, define a column list containing a single column name 'Dew Point (deg F)', and call df[column_list1].plot(). Plot two columns of data, 'Temperature (deg F)' and 'Dew Point (deg F)'. To do this, define a list containing those column names and pass it into df[], as df[column_list2].plot(). End of explanation """ # # Create a list of y-axis column names: y_columns # y_columns = ['AAPL', 'IBM'] # # Generate a line plot # df.plot(x='Month', y=y_columns) # # Add the title # plt.title('Monthly stock prices') # # Add the y-axis label # plt.ylabel('Price ($US)') # # Display the plot # plt.show() """ Explanation: Create a list of y-axis column names called y_columns consisting of 'AAPL' and 'IBM'. Generate a line plot with x='Month' and y=y_columns as inputs. Add a title to the plot. Specify the y-axis label. Display the plot. End of explanation """
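Selecting several columns for the y-axis with a list of names, as the exercise above describes, can be sketched without the exercise's data file. The stock prices below are made-up placeholder values, since the real dataset is not shown in the notebook:

```python
import pandas as pd

# hypothetical monthly stock prices, for illustration only
df = pd.DataFrame({'Month': ['Jan', 'Feb', 'Mar'],
                   'AAPL': [117.16, 128.46, 124.43],
                   'IBM': [156.08, 160.01, 159.81]})

# a list of column names selects several columns at once
y_columns = ['AAPL', 'IBM']
subset = df[y_columns]
print(subset.shape)          # (3, 2)
print(list(subset.columns))  # ['AAPL', 'IBM']
```

It is this list-based selection that `df.plot(x='Month', y=y_columns)` performs internally before drawing one line per selected column.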
girving/tensorflow
tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb
apache-2.0
!pip install unidecode
"""
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using an RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table>
This notebook demonstrates how to generate text with an RNN, using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level implementation that's useful to understand as prework before diving into deeper examples in a similar style, like Neural Machine Translation with Attention.
This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. The notebook will train a model, and use it to generate sample output.
Here is the output (with start_string='w') after training a single layer GRU for 30 epochs with the default settings below:
```
were to the death of him
And nothing of the field in the view of hell,
When I said, banish him, I will not burn thee that would live.
HENRY BOLINGBROKE:
My gracious uncle--
DUKE OF YORK:
As much disgraced to the court, the gods them speak,
And now in peace himself excuse thee in the world.
HORTENSIO:
Madam, 'tis not the cause of the counterfeit of the earth,
And leave me to the sun that set them on the earth
And leave the world and are revenged for thee.
GLOUCESTER:
I would they were talking with the very name of means
To make a puppet of a guest, and therefore, good Grumio,
Nor arm'd to prison, o' the clouds, of the whole field,
With the admire
With the feeding of thy chair, and we have heard it so,
I thank you, sir, he is a visor friendship with your silly your bed.
SAMPSON:
I do desire to live, I pray: some stand of the minds, make thee remedies
With the enemies of my soul.
MENENIUS:
I'll keep the cause of my mistress.
POLIXENES:
My brother Marcius!
Second Servant:
Will't ple
```
Of course, while some of the sentences are grammatical, most do not make sense. But, consider:
Our model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text).
The structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text).
Sentences generally end with a period.
If you look at the text from a distance (or don't read the individual words too closely), it appears as if it's an excerpt from a play.
As a next step, you can experiment with training the model on a different dataset - any large text file (ASCII) will do, and you can modify a single line of code below to make that change. Have fun!
Install unidecode library
A helpful library to convert unicode to ASCII.
End of explanation
"""
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
# Note: Once you enable eager execution, it cannot be disabled.
tf.enable_eager_execution()
import numpy as np
import os
import re
import random
import unidecode
import time
"""
Explanation: Import tensorflow and enable eager execution.
End of explanation
"""
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
"""
Explanation: Download the dataset
In this example, we will use the Shakespeare dataset. You can use any other dataset that you like.
End of explanation
"""
text = unidecode.unidecode(open(path_to_file).read())
# length of text is the number of characters in it
print (len(text))
"""
Explanation: Read the dataset
End of explanation
"""
# unique contains all the unique characters in the file
unique = sorted(set(text))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
# setting the maximum length sentence we want for a single input in characters
max_length = 100
# length of the vocabulary in chars
vocab_size = len(unique)
# the embedding dimension
embedding_dim = 256
# number of RNN (here GRU) units
units = 1024
# batch size
BATCH_SIZE = 64
# buffer size to shuffle our dataset
BUFFER_SIZE = 10000
"""
Explanation: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
End of explanation
"""
input_text = []
target_text = []
for f in range(0, len(text)-max_length, max_length):
    inps = text[f:f+max_length]
    targ = text[f+1:f+1+max_length]
    input_text.append([char2idx[i] for i in inps])
    target_text.append([char2idx[t] for t in targ])
print (np.array(input_text).shape)
print (np.array(target_text).shape)
"""
Explanation: Creating the input and output tensors
Vectorizing the input and the target text, because our model cannot understand strings, only numbers. But first, we need to create the input and output vectors. Remember the max_length we set above; we will use it here.
We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first. For example, consider that the string = 'tensorflow' and the max_length is 9 So, the input = 'tensorflo' and output = 'ensorflow' After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above. End of explanation """ dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) """ Explanation: Creating batches and shuffling them using tf.data End of explanation """ class Model(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, units, batch_size): super(Model, self).__init__() self.units = units self.batch_sz = batch_size self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) if tf.test.is_gpu_available(): self.gru = tf.keras.layers.CuDNNGRU(self.units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') else: self.gru = tf.keras.layers.GRU(self.units, return_sequences=True, return_state=True, recurrent_activation='sigmoid', recurrent_initializer='glorot_uniform') self.fc = tf.keras.layers.Dense(vocab_size) def call(self, x, hidden): x = self.embedding(x) # output shape == (batch_size, max_length, hidden_size) # states shape == (batch_size, hidden_size) # states variable to preserve the state of the model # this will be used to pass at every step to the model while training output, states = self.gru(x, initial_state=hidden) # reshaping the output so that we can pass it to the Dense layer # after reshaping the shape is (batch_size * max_length, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # The dense layer will output predictions for every time_steps(max_length) # output shape after the dense layer == (max_length * batch_size, vocab_size) x = self.fc(output) 
return x, states """ Explanation: Creating the model We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model. Embedding layer GRU layer (you can use an LSTM layer here) Fully connected layer End of explanation """ model = Model(vocab_size, embedding_dim, units, BATCH_SIZE) optimizer = tf.train.AdamOptimizer() # using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors def loss_function(real, preds): return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds) """ Explanation: Call the model and set the optimizer and the loss function End of explanation """ checkpoint_dir = './training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) """ Explanation: Checkpoints (Object-based saving) End of explanation """ # Training step EPOCHS = 20 for epoch in range(EPOCHS): start = time.time() # initializing the hidden state at the start of every epoch hidden = model.reset_states() for (batch, (inp, target)) in enumerate(dataset): with tf.GradientTape() as tape: # feeding the hidden state back into the model # This is the interesting step predictions, hidden = model(inp, hidden) # reshaping the target because that's how the # loss function expects it target = tf.reshape(target, (-1,)) loss = loss_function(target, predictions) grads = tape.gradient(loss, model.variables) optimizer.apply_gradients(zip(grads, model.variables)) if batch % 100 == 0: print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1, batch, loss)) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) """ Explanation: Train the model Here we will use a custom training loop with the help of GradientTape() We 
initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset (batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets the hidden state (initialized with 0), let's call that H0, and the first batch of input, let's call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1, which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted, and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above.
Then we calculate the gradients of the loss with respect to the model variables (input).
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note: if you are running this notebook in Colab, which has a Tesla K80 GPU, it takes about 23 seconds per epoch.
End of explanation
"""
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: Restore the latest checkpoint
End of explanation
"""
# Evaluation step (generating text using the model learned)
# number of characters to generate
num_generate = 1000
# You can change the start string to experiment
start_string = 'Q'
# converting our start string to numbers (vectorizing!)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# empty string to store our results
text_generated = ''
# low temperatures result in more predictable text.
# higher temperatures result in more surprising text
# experiment to find the best setting
temperature = 1.0
# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
    predictions, hidden = model(input_eval, hidden)
    # dividing the logits by the temperature before sampling:
    # low temperatures sharpen the distribution, high temperatures flatten it
    predictions = predictions / temperature
    # sampling from a multinomial distribution to pick the id of the next
    # character (tf.argmax would always pick the single most likely character,
    # which makes the temperature setting irrelevant and the text repetitive)
    predicted_id = tf.multinomial(predictions, num_samples=1)[0][0].numpy()
    # We pass the predicted character as the next input to the model
    # along with the previous hidden state
    input_eval = tf.expand_dims([predicted_id], 0)
    text_generated += idx2char[predicted_id]
print (start_string + text_generated)
"""
Explanation: Predicting using our trained model
The below code block is used to generate the text.
We start by choosing a start string, initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state.
Then we use a multinomial distribution to sample the index of the predicted character. We use this predicted character as our next input to the model.
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one character. After we predict the next character, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters.
If you look at the generated text, the model knows when to capitalize and make paragraphs, and the text follows a Shakespeare-like style of writing, which is pretty awesome!
End of explanation
"""
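The effect of the temperature parameter can be seen without TensorFlow: dividing the logits by a temperature below 1 sharpens the softmax distribution (more predictable samples), while a temperature above 1 flattens it (more surprising samples). A small NumPy sketch with made-up logits:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

# hypothetical logits over a 4-character vocabulary
logits = np.array([2.0, 1.0, 0.5, 0.1])

cold = softmax(logits / 0.5)   # low temperature: sharper distribution
base = softmax(logits / 1.0)   # temperature 1.0 leaves the logits unchanged
hot  = softmax(logits / 2.0)   # high temperature: flatter distribution

# the most likely character becomes even more likely at low temperature
print(cold.max() > base.max() > hot.max())  # True
```

Note that the ranking of characters never changes; only the probability of sampling something other than the top character does.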
WormLabCaltech/mprsq
src/stats_tutorials/Model Selection.ipynb
mit
# important stuff: import os import pandas as pd import numpy as np import statsmodels.tools.numdiff as smnd import scipy # Graphics import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns from matplotlib import rc rc('text', usetex=True) rc('text.latex', preamble=r'\usepackage{cmbright}') rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']}) # Magic function to make matplotlib inline; # other style specs must come AFTER %matplotlib inline # This enables SVG graphics inline. # There is a bug, so uncomment if it works. %config InlineBackend.figure_formats = {'png', 'retina'} # JB's favorite Seaborn settings for notebooks rc = {'lines.linewidth': 2, 'axes.labelsize': 18, 'axes.titlesize': 18, 'axes.facecolor': 'DFDFE5'} sns.set_context('notebook', rc=rc) sns.set_style("dark") mpl.rcParams['xtick.labelsize'] = 16 mpl.rcParams['ytick.labelsize'] = 16 mpl.rcParams['legend.fontsize'] = 14 """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Generating-synthetic-data" data-toc-modified-id="Generating-synthetic-data-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Generating synthetic data</a></div><div class="lev1 toc-item"><a href="#Line-fitting-using-Bayes'-theorem" data-toc-modified-id="Line-fitting-using-Bayes'-theorem-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Line fitting using Bayes' theorem</a></div><div class="lev1 toc-item"><a href="#Quantifying-the-probability-of-a-fixed-model:" data-toc-modified-id="Quantifying-the-probability-of-a-fixed-model:-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Quantifying the probability of a fixed model:</a></div><div class="lev1 toc-item"><a href="#Selecting-between-two-models" data-toc-modified-id="Selecting-between-two-models-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Selecting between two models</a></div><div class="lev2 toc-item"><a href="#Different-datasets-will-prefer-different-models" 
data-toc-modified-id="Different-datasets-will-prefer-different-models-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Different datasets will prefer different models</a></div><div class="lev1 toc-item"><a href="#The-larger-the-dataset,-the-more-resolving-power" data-toc-modified-id="The-larger-the-dataset,-the-more-resolving-power-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>The larger the dataset, the more resolving power</a></div> Welcome to our primer on Bayesian Model Selection. As always, we begin by loading our required libraries. End of explanation """ n = 50 # number of data points x = np.linspace(-10, 10, n) yerr = np.abs(np.random.normal(0, 2, n)) y = np.linspace(5, -5, n) + np.random.normal(0, yerr, n) plt.scatter(x, y) """ Explanation: Generating synthetic data First, we will generate the data. We will pick evenly spaced x-values. The y-values will be picked according to the equation $y=-\frac{1}{2}x$ but we will add Gaussian noise to each point. Each y-coordinate will have an associated error. The size of the error bar will be selected randomly. After we have picked the data, we will plot it to visualize it. It looks like a fairly straight line. End of explanation """ # bayes model fitting: def log_prior(theta): beta = theta return -1.5 * np.log(1 + beta ** 2) def log_likelihood(beta, x, y, yerr): sigma = yerr y_model = beta * x return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2) def log_posterior(theta, x, y, yerr): return log_prior(theta) + log_likelihood(theta, x, y, yerr) def neg_log_prob_free(theta, x, y, yerr): return -log_posterior(theta, x, y, yerr) """ Explanation: Line fitting using Bayes' theorem Now that we have generated our data, we would like to find the line of best fit given our data. To do this, we will perform a Bayesian regression. Briefly, Bayes equation is, $$ P(\alpha~|D, M_1) \propto P(D~|\alpha, M_1)P(\alpha~|M_1). 
$$
In other words, the probability of the slope given Model 1 (a line with unknown slope) and the data is proportional to the probability of the data given the model and alpha, times the probability of alpha given the model. Some necessary nomenclature at this point:
* $P(\alpha~|D, M_1)$ is called the posterior probability
* $P(\alpha~|M_1)$ is called the prior
* $P(D~|\alpha, M_1)$ is called the likelihood
I claim that a functional form that will allow me to fit a line through this data is:
$$
P(X|D) \propto \prod_{Data} \mathrm{exp}\left(-\frac{(y_{Obs} - \alpha x)^2}{2\sigma_{Obs}^2}\right)\cdot (1 + \alpha^2)^{-3/2}
$$
The first term in the equation measures the deviation between the observed y-coordinates and the predicted y-coordinates from a theoretical linear model, where $\alpha$ remains to be determined. We weight the result by the observed error, $\sigma_{Obs}$. Then, we multiply by a prior that tells us what values of $\alpha$ should be considered. How to pick a good prior is somewhat difficult and a bit of an art form. One way is to pick a prior that is uninformative for a given parameter. In this case, we want to make sure that we sample slopes between [0,1] as densely as we sample [1,$\infty$]. For a more thorough derivation and explanation, please see this excellent blog post by Jake Vanderplas.
The likelihood is the first term, and the prior is the second. We code it up in the next functions, with a minor difference. It is often computationally much more tractable to compute the natural logarithm of the posterior, and we do so here.
We can now use this equation to find the model we are looking for. How? Well, the equation above basically tells us which model is most likely given the data and the prior information on the model. If we maximize the probability of the model, whatever parameter combination satisfies that is the model we are interested in!
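Putting the two pieces together on the log scale (my own restatement of the prior and likelihood above — no new content):

```latex
\ln P(\alpha \mid D) = \mathrm{const}
    - \frac{3}{2}\ln\left(1 + \alpha^2\right)
    - \frac{1}{2}\sum_{Data}\left[\ln\left(2\pi\sigma_{Obs}^{2}\right)
    + \frac{\left(y_{Obs} - \alpha x\right)^{2}}{\sigma_{Obs}^{2}}\right]
```

This is exactly what log_prior and log_likelihood compute, term by term.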
End of explanation
"""
# calculate probability of free model:
res = scipy.optimize.minimize(neg_log_prob_free, 0, args=(x, y, yerr), method='Powell')

plt.scatter(x, y)
plt.plot(x, x*res.x, '-', color='g')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior(res.x, x, y, yerr))))
print('The best-fit line is y = {0:.4g}x'.format(np.float64(res.x)))
"""
Explanation: Specificity is necessary for credibility. Let's show that by optimizing the posterior function, we can fit a line. We optimize the line by using the function scipy.optimize.minimize. However, minimizing the logarithm of the posterior does not achieve anything! We are looking for the place at which the equation we derived above is maximal. That's OK. We will simply multiply the logarithm of the posterior by -1 and minimize that.
End of explanation
"""
# bayes model fitting:
def log_likelihood_fixed(x, y, yerr):
    sigma = yerr
    y_model = -1/2*x
    return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)

def log_posterior_fixed(x, y, yerr):
    return log_likelihood_fixed(x, y, yerr)

plt.scatter(x, y)
plt.plot(x, -0.5*x, '-', color='purple')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior_fixed(x, y, yerr))))
"""
Explanation: We can see that the model is very close to the model we drew the data from. It works! However, the probability of this model is not very large. Why? Well, that's because the posterior probability is spread out over a large number of parameters. Bayesians like to think that a parameter is actually a number plus or minus some jitter. Therefore, the probability of the parameter being exactly one number is usually smaller the larger the jitter. In this case, the jitter is not terribly large, but the probability of this one parameter being exactly -0.5005 is quite low, even though it is the best guess for the slope given the data.
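To make the jitter argument concrete (a sketch of my own, using a Gaussian approximation the notebook does not spell out): if the posterior near the best-fit slope $\alpha^*$ is approximately Gaussian with width $\sigma_\alpha$, its peak density is

```latex
P(\alpha^* \mid D) \approx \frac{1}{\sqrt{2\pi}\,\sigma_\alpha}
```

so the larger the jitter $\sigma_\alpha$, the smaller the probability assigned to any single slope value.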
Quantifying the probability of a fixed model:
Suppose now that we had a powerful theoretical tool that allowed us to make a very, very good guess as to what line the points should fall on. Suppose this powerful theory now tells us that the line should be:
$$
y = -\frac{1}{2}x.
$$
Using Bayes' theorem, we could quantify the probability that the model is correct, given the data. Now, the prior is simply going to be 1 when the slope is -0.5, and 0 otherwise. This makes the equation:
$$
P(X|D) \propto \prod_{Data}\mathrm{exp}\left(-\frac{(y_{Obs} + 0.5x)^2}{2\sigma_{Obs}^2}\right)
$$
Notice that this equation cannot be minimized. It is a fixed statement, and its value depends only on the data.
End of explanation
"""
def model_selection(X, Y, Yerr, **kwargs):
    guess = kwargs.pop('guess', -0.5)

    # calculate probability of free model:
    res = scipy.optimize.minimize(neg_log_prob_free, guess, args=(X, Y, Yerr), method='Powell')

    # Compute error bars
    second_derivative = scipy.misc.derivative(log_posterior, res.x, dx=1.0, n=2, args=(X, Y, Yerr), order=3)
    cov_free = -1/second_derivative
    alpha_free = np.float64(res.x)
    log_free = log_posterior(alpha_free, X, Y, Yerr)

    # log goodness of fit for fixed models
    log_MAP = log_posterior_fixed(X, Y, Yerr)

    good_fit = log_free - log_MAP

    # occam factor - only the free model has a penalty
    log_occam_factor = (-np.log(2 * np.pi) + np.log(cov_free)) / 2 + log_prior(alpha_free)

    # give more standing to simpler models. but just a little bit!
    lg = log_free - log_MAP + log_occam_factor - 2
    return lg
"""
Explanation: We can see that the probability of this model is very similar to the probability of the alternative model we fit above. How can we pick which one to use?
Selecting between two models
An initial approach to selecting between these two models would be to take the probability of each model given the data and to find the quotient, like so:
$$
OR = \frac{P(M_1~|D)}{P(M_2~|D)} = \frac{P(D~|M_1)P(M_1)}{P(D~|M_2)P(M_2)}
$$
However, this is tricky to evaluate.
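The core of the difficulty is that the numerator still contains the free slope; formally it must be integrated out (my own restatement of the marginalization step):

```latex
P(D \mid M_1) = \int P(D \mid \alpha, M_1)\, P(\alpha \mid M_1)\, d\alpha
```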
First of all, the equations we derived above are not solely in terms of $M_1$ and $D$. They also include $\alpha$ for the undetermined slope model. We can get rid of this parameter via a technique known as marginalization (basically, integrating the equations over $\alpha$). Even more philosophically difficult are the terms $P(M_i)$. How is one to evaluate the probability of a model being true? The usual solution to this is to set $P(M_i) \sim 1$ and let those terms cancel out. However, in the case of models that have been tested before or where there is a powerful theoretical reason to believe one is more likely than the other, it may be entirely reasonable to specify that one model is several times more likely than the other. For now, we set the $P(M_i)$ to unity.
We can approximate the odds-ratio for our case as follows:
$$
OR = \frac{P(D|\alpha^*)}{P(D|M_2)} \cdot \frac{P(\alpha^*|M_1)\,(2\pi)^{1/2}\,\sigma_{\alpha^*}}{1},
$$
where $\alpha^*$ is the parameter we found when we maximized the posterior earlier. Here, the second term we added represents the complexity of each model. The denominator in the second term is 1 because the fixed model cannot become any simpler. On the other hand, we penalize the model with free slope by multiplying the probability of the observed slope by the square root of two pi and then multiplying all of this by the uncertainty in the parameter $\alpha$. This is akin to saying that the less likely we think $\alpha$ should be *a priori*, or the more uncertain we are that $\alpha$ is actually a given number, the more points we should give to the simpler model.
End of explanation
"""
model_selection(x, y, yerr)
"""
Explanation: We performed the Odds Ratio calculation in logarithmic space, so negative values show that the simpler (fixed slope) model is preferred, whereas if the values are positive and large, the free-slope model is preferred.
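For example, the log odds ratio can be wrapped in a crude decision rule — a toy sketch (the function name and threshold are my own, not from the notebook):

```python
import math

def interpret_log_odds(lg, log_threshold=math.log(100)):
    """Map a log odds ratio to a coarse verdict (hypothetical helper)."""
    if lg >= log_threshold:
        return 'free-slope model preferred'
    if lg <= -log_threshold:
        return 'fixed-slope model preferred'
    return 'inconclusive'
```

The threshold of $\ln(100)$ mirrors the conventional cutoff mentioned next.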
As a guide, Bayesian statisticians usually suggest that $10^2$ or above is a good ratio to abandon one model completely in favor of another.
End of explanation
"""
n = 50  # number of data points
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
y = x*-0.55 + np.random.normal(0, yerr, n)
plt.scatter(x, y)

model_selection(x, y, yerr)
"""
Explanation: Different datasets will prefer different models
Let's try this again. Maybe the answer will change sign this time.
End of explanation
"""
def simulate_many_odds_ratios(n):
    """
    Given a number `n` of data points, simulate 1,000 datasets of `n` points
    drawn from a null model and an alternative model and compare the odds
    ratio for each.
    """
    iters = 1000
    lg1 = np.zeros(iters)
    lg2 = np.zeros(iters)

    for i in range(iters):
        x = np.linspace(-10, 10, n)
        yerr = np.abs(np.random.normal(0, 2, n))

        # simulate two models: only one matches the fixed model
        y1 = -0.5*x + np.random.normal(0, yerr, n)
        y2 = -0.46*x + np.random.normal(0, yerr, n)

        lg1[i] = model_selection(x, y1, yerr)

        m2 = model_selection(x, y2, yerr)
        # Truncate OR for ease of plotting
        if m2 < 10:
            lg2[i] = m2
        else:
            lg2[i] = 10

    return lg1, lg2

def make_figures(n):
    lg1, lg2 = simulate_many_odds_ratios(n)

    lg1 = np.sort(lg1)
    lg2 = np.sort(lg2)

    fifty_point1 = lg1[int(np.floor(len(lg1)/2))]
    fifty_point2 = lg2[int(np.floor(len(lg2)/2))]

    fig, ax = plt.subplots(ncols=2, figsize=(15, 7), sharey=True)
    fig.suptitle('Log Odds Ratio for n={0} data points'.format(n), fontsize=20)

    sns.kdeplot(lg1, label='slope=-0.5', ax=ax[0], cumulative=False)
    ax[0].axvline(x=fifty_point1, ls='--', color='k')
    ax[0].set_title('Data drawn from null model')
    ax[0].set_ylabel('Density')

    sns.kdeplot(lg2, label='slope=-0.46', ax=ax[1], cumulative=False)
    ax[1].axvline(x=fifty_point2, ls='--', color='k')
    ax[1].set_title('Data drawn from alternative model')

    fig.text(0.5, 0.04, 'Log Odds Ratio', ha='center', size=18)

    return fig, ax

fig, ax = make_figures(n=5)
"""
Explanation: Indeed, the answer changed sign.
Odds Ratios, p-values and everything else should always be interpreted conservatively. I prefer odds ratios that are very large, larger than 1,000, before stating that one model is definitively preferred. Otherwise, I tend to prefer the simpler model.
The larger the dataset, the more resolving power
What distribution of answers would you get if you obtained five points? Ten? Fifteen? I've written a couple of short functions to help us find out. In the functions below, I simulate two datasets. One dataset is drawn from points that obey the model
$$
y = -\frac{1}{2}x,
$$
whereas the second dataset is drawn from
$$
y = -0.46x.
$$
Clearly, the fixed model $y=-0.5x$ should only be preferred for the first dataset, and the free model is the correct one to use for the second dataset. Now let us find out if this is the case.
By the way, the function below trims odds ratios to keep them from becoming too large. If an odds ratio is bigger than 10, we set it equal to 10 for plotting purposes.
End of explanation
"""
fig, ax = make_figures(n=50)
"""
Explanation: Here we can see that with five data points, the odds ratio will tend to prefer the simpler model. We do not have too much information---why request the extra information? Note that for the second dataset in some cases the deviations are great enough that the alternative model is strongly preferred (right panel, extra bump at 10). However, this is rare.
End of explanation
"""
jrmontag/STLDecompose
STL-usage-example.ipynb
mit
def get_statsmodels_df():
    """Return packaged data in a pandas.DataFrame"""
    # some hijinks to get around outdated statsmodels code
    dataset = sm.datasets.co2.load()
    start = dataset.data['date'][0].decode('utf-8')
    index = pd.date_range(start=start, periods=len(dataset.data), freq='W-SAT')
    obs = pd.DataFrame(dataset.data['co2'], index=index, columns=['co2'])
    return obs

obs = get_statsmodels_df()

obs.head()
"""
Explanation: We'll use some of the data that comes pre-packaged with statsmodels to demonstrate the library functionality. The data set below comprises incomplete, weekly measurements of CO2 levels in Hawaii.
Note: at the time of this writing, the current release of statsmodels includes a utility method for loading these datasets as a pandas.DataFrame which appears to be broken. Below is a short hack inspired by the current master branch on the statsmodels GitHub page.
End of explanation
"""
obs = (obs
       .resample('D')
       .mean()
       .interpolate('linear'))

obs.head(10)

obs.index

obs.head(1000).plot()
"""
Explanation: Because it's based on some existing statsmodels functionality, STLDecompose requires two things of the input dataframe:
1. continuous observations (no missing data points)
2. a pandas DateTimeIndex
Since these are both very situation-dependent, we leave it to the user to define how they want to achieve these goals - pandas provides a number of ways to work with missing data. In particular, the functions shown below make these steps relatively straightforward. Below, we use linear interpolation, and resample to daily observations. The resulting frame meets both of our criteria.
End of explanation
"""
decomp = decompose(obs, period=365)

decomp
"""
Explanation: Decompose
One of the primary pieces of functionality is the STL decomposition. The associated method requires the observation frame, and the primary (largest) period of seasonality.
This period is specified in terms of index positions, and so care is needed for the user to correctly specify the periodicity in terms of their observations. For example, with daily observations and large annual cycles, period=365. For hourly observations with large daily cycles, period=24. Some inspection, and trial and error, may be helpful.
End of explanation
"""
decomp.plot();
"""
Explanation: The resulting object is an extended version of the statsmodels.tsa.seasonal.DecomposeResult. Like the statsmodels object, the arrays of values are available on the object (the observations; and the trend, seasonal, and residual components). An extra attribute (the average seasonal cycle) has been added for the purpose of forecasting. We inherit the built-in .plot() method on the object.
End of explanation
"""
len(obs)

short_obs = obs.head(10000)

# apply the decomp to the truncated observation
short_decomp = decompose(short_obs, period=365)

short_decomp
"""
Explanation: Forecast
While the STL decomposition is interesting on its own, STLDecompose also provides some relatively naive capabilities for using the decomposition to forecast based on our observations. We'll use the same data set, but pretend that we only had the first two thirds of observations. Then we can compare our forecast to the real observation data.
End of explanation
"""
fcast = forecast(short_decomp, steps=8000, fc_func=drift)

fcast.head()
"""
Explanation: The forecast() method requires the following arguments:
- the previously fit DecomposeResult
- the number of steps forward for which we'd like the forecast
- the specific forecasting function to be applied to the decomposition
There are a handful of predefined functions that can be imported from the stldecompose.forecast_funcs module. These implementations are based on Hyndman's online textbook. The user can also define their own forecast function, following the patterns demonstrated in the predefined functions.
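For instance, a user-defined function might look like this sketch (the exact signature is an assumption based on the pattern of the predefined functions, which receive the 1-d array of values and return the single next value):

```python
def naive(data):
    """Sketch of a custom forecast function: next value = last observed value."""
    return data[-1]
```

It would then be passed as fc_func=naive, and the output column would carry the function's name.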
The return type of the forecast() method is a pandas.DataFrame with a column name that represents the forecast function and an appropriate DatetimeIndex.
End of explanation
"""
plt.plot(obs, '--', label='truth')
plt.plot(short_obs, '--', label='obs')
plt.plot(short_decomp.trend, ':', label='decomp.trend')
plt.plot(fcast, '-', label=fcast.columns[0])

plt.xlim('1970','2004');
plt.ylim(330,380);
plt.legend();
"""
Explanation: If desired, we can then plot the corresponding components of the observation and forecast to check and verify the results.
End of explanation
"""
fcast = forecast(short_decomp, steps=8000, fc_func=drift, seasonal=True)

plt.plot(obs, '--', label='truth')
plt.plot(short_obs, '--', label='obs')
plt.plot(short_decomp.trend, ':', label='decomp.trend')
plt.plot(fcast, '-', label=fcast.columns[0])

plt.xlim('1970','2004');
plt.ylim(330,380);
plt.legend();

fcast.head()
"""
Explanation: To include the estimated seasonal component in the forecast, use the boolean seasonal keyword.
End of explanation
"""
jbn/fast_proportional_selection
index.ipynb
mit
import random
from bisect import bisect_left

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

%matplotlib inline
"""
Explanation: Fast Proportional Selection
Proportional selection -- or, roulette wheel selection -- comes up frequently when developing agent-based models. Based on the code I have read over the years, researchers tend to write proportional selection as either a linear walk or a bisecting search. I compare the two approaches, then introduce Lipowski and Lipowska's stochastic acceptance algorithm. For most of our uses, I argue that their algorithm is a better choice.
See also: This IPython notebook's repository on GitHub.
Preliminaries
I will only use Python's internal random module for random number generation. I include numpy and pandas for convenience only when running demos. I import seaborn because it overrides some matplotlib defaults in a pretty way.
End of explanation
"""
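Before the benchmarked implementations, note that the operation itself can be stated in one line with the standard library — a reference sketch only, not part of the comparison (random.choices appeared in Python 3.6, after much of the code this post critiques was written):

```python
import random

def roulette(freqs):
    """Pick index i with probability freqs[i] / sum(freqs)."""
    return random.choices(range(len(freqs)), weights=freqs, k=1)[0]
```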
Call the object like a dictionary to update a frequency. End of explanation """ class LinearWalk(PropSelection): def __init__(self, n): super(LinearWalk, self).__init__(n) self._total = 0 def __setitem__(self, i, x): self._total += (x - self._frequencies[i]) self._frequencies[i] = x def sample(self): terminal_cdf_point = random.randint(0, self._total - 1) accumulator = 0 for i, k in enumerate(self._frequencies): accumulator += k if accumulator > terminal_cdf_point: return i """ Explanation: Linear Walk Sampling via linear walk is $O(n)$. The algorithm generates a random number between 0 and the sum of the frequencies. Then, it walks through the array of frequencies, producing a running total. At some point the running total exceeds the generated threshold. The index at that point is the selection. The algorithm has no cost associated with updates to the underlying frequency distribution. End of explanation """ class BisectingSearch(PropSelection): def __init__(self, n): super(BisectingSearch, self).__init__(n) self._cdf = None self._total = 0 def __setitem__(self, i, x): self._total += (x - self._frequencies[i]) self._frequencies[i] = x def normalize(self): total = float(sum(self._frequencies)) cdf = [] accumulator = 0.0 for x in self._frequencies: accumulator += (x / float(total)) cdf.append(accumulator) self._cdf = cdf def sample(self): return bisect_left(self._cdf, random.random()) """ Explanation: Bisecting Search Sampling via bisecting search is $O(log~n)$. From an asymptotic perspective, this is better than a linear walk. However, the algorithm achieves this by spending some compute time up front. That is, before sampling occurs. It cannot sample directly over the frequency distribution. Instead, it transforms the frequencies into a cumulative density function (CDF). This is an $O(n)$ operation. It must occur every time an element in the frequency distribution changes. Given the CDF, the algorithm draws a random number from [0, 1). 
It then uses bisection to identify the insertion point in the CDF for this number. This point is the selected index. End of explanation """ class StochasticAcceptance(PropSelection): def __init__(self, n): super(StochasticAcceptance, self).__init__(n) self._max_value = 0 def __setitem__(self, i, x): last_x = self._frequencies[i] if x > self._max_value: self._max_value = float(x) elif last_x == self._max_value and x < last_x: self._max_value = float(max(self._frequencies)) self._frequencies[i] = x def sample(self): n = self._n max_value = self._max_value freqs = self._frequencies while True: i = int(n * random.random()) if random.random() < freqs[i] / max_value: return i """ Explanation: Stochastic Acceptance For sampling, stochastic acceptance is $O(1)$. With respect to time, this dominates both the linear walk and bisecting search methods. Yet, this is asymptotic. The algorithm generates many random numbers per selection. In fact, the number of random variates grows in proportion to $n$. So, the random number generator matters. This algorithm has another advantage. It can operate on the raw frequency distribution, like linear walk. It only needs to track the maximum value in the frequency distribution. 
End of explanation """ fig, ax = plt.subplots(1, 4, sharey=True, figsize=(10,2)) def plot_proportions(xs, ax, **kwargs): xs = pd.Series(xs) xs /= xs.sum() return xs.plot(kind='bar', ax=ax, **kwargs) def sample_and_plot(roulette_algo, ax, n_samples=10000, **kwargs): samples = [roulette_algo.sample() for _ in range(n_samples)] value_counts = pd.Series(samples).value_counts().sort_index() props = (value_counts / value_counts.sum()) props.plot(kind='bar', ax=ax, **kwargs) return samples freqs = np.random.randint(1, 100, 10) plot_proportions(freqs, ax[0], color=sns.color_palette()[1], title="Target Distribution") klasses = [LinearWalk, BisectingSearch, StochasticAcceptance] for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ xs = sample_and_plot(algo, ax=plt.subplot(ax[i+1]), title=name) """ Explanation: First Demonstration: Sampling The following code generates a target frequency distribution. Then, it instantiates each algorithm; copies the frequency distribution; and, draws 10,000 samples. For each algorithm, this code compiles the resulting probability distribution. For comparison, I plot these side by side. In the figure below, the target distribution is to the left (green). The linear walk, bisecting search, and stochastic acceptance algorithms are to the right of the targert distribution (blue). Visually, there is a compelling case for the distributions being equal. 
End of explanation """ import timeit def sample_n_times(algo, n): samples = [] for _ in range(n): start = timeit.default_timer() algo.sample() samples.append(timeit.default_timer() - start) return np.array(samples) timings = [] for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ timings.append((name, sample_n_times(algo, 10000))) """ Explanation: Second Demonstration: Performance Testing The following code times the sample method for each algorithm. I am using the timeit module's default_timer for timing. For such fast functions, this may lead to measurement error. But, over 10,000 samples, I expect these errors to wash out. End of explanation """ values = np.vstack([times for _, times in timings]).T values = values[np.all(values < np.percentile(values, 90, axis=0), axis=1)] sns.boxplot(values, names=[name for name, _ in timings]); """ Explanation: The graph immediately below plots the kernel density estimation of timings for each algorithm. I truncate the results, limiting the range to everything less than the 90th percentile. (I'll explain why momentarily.) Bisecting search appears to be the fastest and the most stable. This makes sense. It has nice worse-case properties. Stochastic acceptance and linear walk both display variability in timings. Again, the timer is not very precise. But, since bisecting search used the same timer, a comparison is possible. Linear walk has a worst case performance of $O(n)$. That is, if it starts at index 0 and generates the maximum value, it has to traverse the entire array. Bisecting search generates a stream of random numbers until finding an acceptable one. Technically, this algorithm has no limit. It could loop infinitely, waiting for a passing condition. But, probabilistically, this is fantastically unlikely. (Sometimes, you come across coders saying code like this is incorrect. That's pretty absurd. 
Most of the time, the probability of pathological conditions is so small, it's irrelevant. Most of the time, the machine running your code is more likely to crumb to dust before an error manifests.) For real-time code, timing variability matters. Introduce some jitter into something like a HFT algorithm, and you lose. But, for agent-based models and offline machine learning, variability doesn't matter. For us, averages matter. End of explanation """ values = np.vstack([times for _, times in timings]).T values = values[np.all(values > np.percentile(values, 90, axis=0), axis=1)] sns.boxplot(values, names=[name for name, _ in timings]); """ Explanation: The relationship between algorithms remains the same. But, the difference between linear walk and stochastic acceptance grows. Over the entire distribution, stochastic acceptance lags both linear walk and bisecting search. End of explanation """ import timeit def sample_n_times(algo, n): samples = [] for _ in range(n): start = timeit.default_timer() algo.sample() samples.append(timeit.default_timer() - start) return np.array(samples) averages = [] for n in [10, 100, 1000, 10000, 100000, 1000000]: row = {'n': n} freqs = np.random.randint(1, 100, n) for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ row[name] = np.mean(sample_n_times(algo, 10000)) averages.append(row) """ Explanation: Third Demonstration: Average Time as a Function of N The previous demonstrations fixed n to 10. What happens as n increases? End of explanation """ averages_df = pd.DataFrame(averages).set_index('n') averages_df.plot(logy=True, logx=True, style={'BisectingSearch': 'o-', 'LinearWalk': 's--', 'StochasticAcceptance': 'd:'})#marker='o') plt.ylabel('$Average runtime$'); """ Explanation: The following graph plots the average time as a function of n, the number of elements in the distribution. There is nothing unexpected. Linear walk gets increasingly terrible. 
It's $O(n)$. Bisecting search out-performs Stochastic acceptance. They appear to be converging. But, this convergence occurs at the extreme end of n. Few simulations sample over a distribution of 1,000,00 values. At this point, it seems like bisecting search is the best choice. End of explanation """ import timeit def normalize_sample_n_times(algo, n_samples, n): samples = [] for _ in range(n_samples): algo[random.randint(0, n-1)] = random.randint(1, 100) start = timeit.default_timer() algo.normalize() algo.sample() samples.append(timeit.default_timer() - start) return np.array(samples) averages = [] for n in [10, 100, 1000, 10000, 100000]: row = {'n': n} freqs = np.random.randint(1, 100, n) for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ row[name] = np.mean(normalize_sample_n_times(algo, 1000, n)) averages.append(row) averages_df = pd.DataFrame(averages).set_index('n') averages_df.plot(logy=True, logx=True, style={'BisectingSearch': 'o-', 'LinearWalk': 's--', 'StochasticAcceptance': 'd:'})#marker='o') plt.ylabel('$Average runtime$'); """ Explanation: Fourth Demonstration: Time Given a Dynamic Distribution Many of my simulations use proportional selection with dynamic proportions. For example, consider preferential attachment in social network generation. Edges form probabilistically, proportional to a node's degree. But, when an edge forms, the degree changes as well. In this case, the distribution changes for each sample! Below, I repeat the previous experiment, but I change the distribution and call normalize before each sample. Bisecting search is now the loser in this race. After each frequency alteration, it runs an expensive $O(n)$ operation. Then, it still must run its $O(log~n)$ operation at sample time. Linear walk and stochastic acceptance incur almost no performance penalty for alterations. Linear walk merely updates the total count. 
And, stochastic acceptance only runs a calculation if the alteration reduces the maximum value. (This hints at an important exception. As the range of frequencies narrows and the number of elements increases, performance suffers. The number of $O(n)$ searches for the new maximum becomes expensive.) End of explanation """
istinspring/products-matching
notebooks/0.2-group-sizes.ipynb
gpl-3.0
%matplotlib inline

import numpy as np
import pandas as pd
from scipy.stats import entropy
from tabulate import tabulate
from pymongo import MongoClient

import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (20,8)

db = MongoClient()['stores']

TOTAL_NUMBER_OF_PRODUCTS = db.data.count()

results = db.data.aggregate(
    [
        {
            "$group": {
                "_id": "$size",
                "count": {"$sum": 1},
            }
        },
        {
            "$sort": {
                "count": -1,
            }
        }
    ]
)

ALL_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print('Number of uniq. sizes: {}'.format(len(ALL_SIZES)))
"""
Explanation: Group sizes
Get all unique size labels from the database.
End of explanation
"""
DISTRIBUTORS = list(db.data.distinct("source"))

results = db.data.aggregate(
    [
        {
            "$group": {
                "_id": "$source",
                "sizes": {"$addToSet": "$size"},
            }
        },
        {
            "$project": {
                "_id": 1,
                "count": {"$size": "$sizes"}
            }
        },
        {
            "$sort": {
                "count": -1,
            }
        }
    ]
)

SIZES_PER_DISTRIBUTOR = [
    (str(x['_id']), x['count']) for x in list(results)
]

print(tabulate(SIZES_PER_DISTRIBUTOR, headers=['Distributor', 'Number of uniq. Sizes'], tablefmt="simple"))

df_values_by_key = pd.DataFrame(SIZES_PER_DISTRIBUTOR, index=[x[0] for x in SIZES_PER_DISTRIBUTOR], columns=['Distributor', 'Sizes'])
df_values_by_key.iloc[::-1].plot.barh()
"""
Explanation: Sizes per distributor
End of explanation
"""
import operator
from functools import reduce  # reduce is not a builtin in Python 3

all_sizes_table = []
number_of_sizes = 180
for sizes in zip(ALL_SIZES[0:number_of_sizes:3], ALL_SIZES[1:number_of_sizes:3], ALL_SIZES[2:number_of_sizes:3]):
    all_sizes_table.append(list(reduce(operator.add, sizes)))

print(
    tabulate(
        all_sizes_table[:60],
        headers=3*['Size', 'Number of Products'],
        tablefmt="simple"))
"""
Explanation: Print joint table with first 60 sizes.
End of explanation """ # calculate probability vector p = [x[1] for x in ALL_SIZES] size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS # calculate entropy first_entropy = entropy(size_prob_vector) print("Data entropy:", first_entropy) """ Explanation: Calculate entropy End of explanation """ # create new collection db.data.aggregate( [ { "$project": { "_id": 1, "source": 1, "size": 1, }, }, { "$out": "size_mapping" } ] ) print('Db "size_mapping" created') # create indexes db.size_mapping.create_index([("size", 1)]) db.size_mapping.create_index([("source", 1)]) print('Indexes "size", "source" for "size_mapping" created.') print(list(db.size_mapping.find().limit(5))) """ Explanation: Create new collection from data only with '_id', 'source' and 'size' fields End of explanation """ SIZES_LIST_PER_DISTRIBUTOR = db.size_mapping.aggregate( [ { "$group": { "_id": "$source", "sizes": {"$addToSet": "$size"}, }, }, { "$project": { "_id": 1, "sizes": 1, "number_of_sizes": {"$size": "$sizes"}, } }, { "$sort": { "number_of_sizes": -1 } } ] ) TABLE_SIZES_LIST_PER_DISTRIBUTOR = [ (str(x['_id']), x['sizes'], x['number_of_sizes']) for x in SIZES_LIST_PER_DISTRIBUTOR ] for distr, sizes, num in TABLE_SIZES_LIST_PER_DISTRIBUTOR: print('Sizes for: "{}"'.format(distr)) print(", ".join(sizes)) print(80*"-") """ Explanation: Sizes list per distributor End of explanation """ SIZES_MAPPING = { 'ALL': [], 'NO SIZE': ['PLAIN', 'CONE', 'BLANKET'], 'ONE': ['OS', 'ONE SIZE', '1 SIZ', 'O/S'], 'XS': ['XXS', 'XX-SMALL', '2XS'], 'S': ['SMALL', 'S/M'], 'M': ['MEDIUM', 'S/M', 'M/L'], 'L': ['LARGE', 'L/XL', 'M/L'], 'XL': ['EXTRA', 'XLT', 'XT', 'L/XL'], '2XL': ['2X', 'XXL', '2XT', '2XLL', '2X/', '2XLT'], '3XL': ['3X', '3XT', '3XLL', '3XLT'], '4XL': ['4X', '4XT', '4XLT'], '5XL': ['5X', '5XT', '5XLT'], '6XL': ['6X'], } def build_matching_table(matching_rules): """Build matching table from matching rules :param matching_rules: matching rules used to build matching table :type matching_rules: dict 
:return: matching table `{'S/M': ['S', 'M'], '2X': ['2XL'], ...}`
:rtype: dict
"""
matching_table = {}

# transform matching rules to the "shortcut": "group_key" table
for key, values in matching_rules.items():
    if not values:
        # skip undefined rules i.e. "[]"
        continue

    # add rule for key
    if key not in matching_table:
        # NOTE: set('ab') would be {'a', 'b'},
        # so we can't initialize with matching_table[key] = set(key)
        matching_table[key] = set()
        matching_table[key].add(key)

    for value in values:
        if value not in matching_table:
            matching_table[value] = set()
        matching_table[value].add(key)

return matching_table

MATCHING_TABLE = build_matching_table(SIZES_MAPPING)
print(tabulate(MATCHING_TABLE.items(), headers=['From', 'To'], tablefmt="simple"))

# process data into the new table
# def get_groups(mtable, size):
#     """Get size groups for the given `size` according to matching table
#     :param size: size (case insensitive)
#     :type size: str
#     :return: list of strings i.e. size groups or ``['UNDEFINED']``
#         if not found
#     :rtype: list or ['UNDEFINED']
#     """
#     return list(mtable.get(size, [size]))

# for k, v in MATCHING_TABLE.items():
#     res = db.size_mapping.update_many(
#         {"size": k},
#         {"$set": {"size": get_groups(MATCHING_TABLE, k)}})
#     print(res.raw_result)
"""
Explanation: Tagging according to size
Since the number of sizes is low (1117 unique sizes), the task can be solved with trivial brute force, i.e. mapping sizes through a mapping table.
While inspecting the data, I noticed that sizes are defined for adult, youth, kid, toddler and baby:
Adult: 'S', 'M', 'L' etc.
Youth: 'YS', 'YL' etc.
Kid: '4', '6' etc.
Toddler: '2T', '3T' etc.
Baby: '3M', '6M', 'NB' (new born) etc.
kid, toddler, baby sizes chart
youth sizes chart
I.e. we could tag products according to size.
python TAG_FROM_SIZE = { 'adult': ['XS', 'S', 'M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL'], 'youth': ['YXS', 'YSM', 'YMD', 'YLG', 'YXL', '8H', '10H', '12H', '14H', '16H', '18H', '20H'], 'kid': [] } End of explanation """ results = db.size_mapping.aggregate( [ { "$group": { "_id": "$size", "count": {"$sum": 1}, } }, { "$sort": { "count": -1, } } ] ) NEW_SIZES = [(str(x['_id']), x['count']) for x in list(results)] print( "\n" + tabulate(NEW_SIZES[:20], headers=['Size', 'Number of Products'], tablefmt="orgtbl") + "\n" ) # calculate probability vector p = [] for _, count in NEW_SIZES: p.append(count) size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS # calculate entropy first_entropy = entropy(size_prob_vector) print("Data entropy: ", first_entropy) from functools import reduce total_matched_products = (sum([x[1] for x in NEW_SIZES[:11]])) percent_from_db_total = round((total_matched_products / TOTAL_NUMBER_OF_PRODUCTS) * 100, 2) print("Matched: {} Percent from total: {}".format(total_matched_products, percent_from_db_total)) """ Explanation: Let's calculate data entropy for results End of explanation """
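The entropy drop measured above can be reproduced on toy data, without the database. The sketch below is illustrative only — the label counts and the tiny mapping table are made-up values in the spirit of `build_matching_table`'s output, not the real store data:

```python
import math

# Hypothetical raw label counts: several spellings of the same three sizes.
# These numbers are made up for illustration; they are not the real store data.
raw_counts = {"S": 400, "SMALL": 120, "S/M": 60,
              "M": 500, "MEDIUM": 90,
              "L": 430, "LARGE": 70}

# A tiny mapping collapsing each variant spelling onto a canonical group key.
mapping = {"SMALL": "S", "S/M": "S", "MEDIUM": "M", "LARGE": "L"}

def dist_entropy(counts):
    """Shannon entropy (in nats) of the normalized count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)

grouped = {}
for label, count in raw_counts.items():
    key = mapping.get(label, label)
    grouped[key] = grouped.get(key, 0) + count

before = dist_entropy(raw_counts)
after = dist_entropy(grouped)
```

Because grouping merges near-duplicate labels without changing the total product count, the entropy of the grouped distribution is always at most that of the raw one — the same effect the notebook observes on the real data.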
interactiveaudiolab/nussl
docs/recipes/wham/ideal_ratio_mask.ipynb
mit
from nussl import datasets, separation, evaluation import os import multiprocessing from concurrent.futures import ThreadPoolExecutor import logging import json import tqdm import glob import numpy as np import termtables # set up logging logger = logging.getLogger() logger.setLevel(logging.INFO) """ Explanation: Evaluating ideal ratio mask on WHAM! This recipe evaluates an oracle ideal ratio mask on the mix_clean and min subset in the WHAM dataset. This recipe is annotated as a notebook for documentation but can be run directly as a script in docs/recipes/ideal_ratio_mask.py. We evaluate three approaches to constructing the ideal ratio mask: Magnitude spectrum approximation Phase sensitive spectrum approximation Truncated phase sensitive spectrum approximation Imports End of explanation """ WHAM_ROOT = '/home/data/wham/' NUM_WORKERS = multiprocessing.cpu_count() // 4 OUTPUT_DIR = os.path.expanduser('~/.nussl/recipes/ideal_ratio_mask/') APPROACHES = { 'Phase-sensitive spectrum approx.': { 'kwargs': { 'range_min': -np.inf, 'range_max':np.inf }, 'approach': 'psa', 'dir': 'psa' }, 'Truncated phase-sensitive approx.': { 'kwargs': { 'range_min': 0.0, 'range_max': 1.0 }, 'approach': 'psa', 'dir': 'tpsa' }, 'Magnitude spectrum approximation': { 'kwargs': {}, 'approach': 'msa', 'dir': 'msa' } } RESULTS_DIR = os.path.join(OUTPUT_DIR, 'results') for key, val in APPROACHES.items(): _dir = os.path.join(RESULTS_DIR, val['dir']) os.makedirs(_dir, exist_ok=True) """ Explanation: Setting up Make sure to point WHAM_ROOT where you've actually built and saved the WHAM dataset. There's a few different ways to use ideal ratio masks, so we're going to set those up in a dictionary. 
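Before running the recipe, the three mask constructions named above can be written out directly. The snippet below is an illustrative numpy sketch on toy spectrogram values — it is not nussl's internal implementation, and the complex numbers are made up:

```python
import numpy as np

# Toy complex STFT values for one source and the mixture (made-up numbers,
# chosen only to show how the three mask constructions differ).
source = np.array([0.6 + 0.2j, 0.1 + 0.4j, 0.3 - 0.1j])
mixture = np.array([1.0 + 0.5j, 0.5 + 0.1j, 0.2 - 0.3j])

# Magnitude spectrum approximation (MSA): plain magnitude ratio.
msa = np.abs(source) / np.abs(mixture)

# Phase-sensitive spectrum approximation (PSA): the magnitude ratio scaled by
# the cosine of the phase difference between source and mixture.
cos_term = np.cos(np.angle(source) - np.angle(mixture))
psa = msa * cos_term

# Truncated PSA: the same mask clipped into [0, 1], which is what the
# range_min / range_max keyword arguments in the recipe express.
tpsa = np.clip(psa, 0.0, 1.0)
```

The unclipped PSA mask can go negative or exceed one wherever the phases disagree, which is exactly why the truncated variant exists.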
End of explanation """ test_dataset = datasets.WHAM(WHAM_ROOT, sample_rate=8000, split='tt') for key, val in APPROACHES.items(): def separate_and_evaluate(item): output_path = os.path.join( RESULTS_DIR, val['dir'], f"{item['mix'].file_name}.json") separator = separation.benchmark.IdealRatioMask( item['mix'], item['sources'], approach=val['approach'], mask_type='soft', **val['kwargs']) estimates = separator() evaluator = evaluation.BSSEvalScale( list(item['sources'].values()), estimates, compute_permutation=True) scores = evaluator.evaluate() with open(output_path, 'w') as f: json.dump(scores, f) pool = ThreadPoolExecutor(max_workers=NUM_WORKERS) for i, item in enumerate(tqdm.tqdm(test_dataset)): if i == 0: separate_and_evaluate(item) else: pool.submit(separate_and_evaluate, item) pool.shutdown(wait=True) json_files = glob.glob(f"{RESULTS_DIR}/{val['dir']}/*.json") df = evaluation.aggregate_score_files(json_files) overall = df.mean() print(''.join(['-' for i in range(len(key))])) print(key.upper()) print(''.join(['-' for i in range(len(key))])) headers = ["", f"OVERALL (N = {df.shape[0]})", ""] metrics = ["SAR", "SDR", "SIR"] data = np.array(df.mean()).T data = [metrics, data] termtables.print(data, header=headers, padding=(0, 1), alignment="ccc") """ Explanation: Evaluation End of explanation """
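Once the loop above has written one JSON file per mixture, the per-approach means can also be compared with a plain JSON walk, without re-running evaluation. The sketch below builds a tiny fake results tree mirroring the `RESULTS_DIR/<approach>/*.json` layout; the flat `{"SDR": ...}` schema and the score values are illustrative assumptions — the real files hold the full nested `BSSEvalScale` output, which is why the recipe uses `evaluation.aggregate_score_files` instead:

```python
import glob
import json
import os
import tempfile

# Fake results tree with made-up scores, mirroring RESULTS_DIR/<approach>/*.json.
root = tempfile.mkdtemp()
fake_scores = {"psa": [12.1, 11.8], "tpsa": [11.9, 11.5], "msa": [10.7, 10.2]}
for approach, sdrs in fake_scores.items():
    os.makedirs(os.path.join(root, approach), exist_ok=True)
    for i, sdr in enumerate(sdrs):
        with open(os.path.join(root, approach, f"{i}.json"), "w") as f:
            json.dump({"SDR": sdr}, f)

def mean_sdr(approach):
    """Average the per-file SDR values for one approach."""
    values = []
    for path in glob.glob(os.path.join(root, approach, "*.json")):
        with open(path) as f:
            values.append(json.load(f)["SDR"])
    return sum(values) / len(values)

summary = {approach: mean_sdr(approach) for approach in fake_scores}
```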
jmschrei/pomegranate
tutorials/B_Model_Tutorial_5_Bayes_Classifiers.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn; seaborn.set_style('whitegrid') import numpy from pomegranate import * numpy.random.seed(0) numpy.set_printoptions(suppress=True) %load_ext watermark %watermark -m -n -p numpy,scipy,pomegranate """ Explanation: Naive Bayes and Bayes Classifiers author: Jacob Schreiber <br> contact: jmschreiber91@gmail.com Bayes classifiers are some of the simplest machine learning models that exist, due to their intuitive probabilistic interpretation and simple fitting step. Each class is modeled as a probability distribution, and the data is interpreted as samples drawn from these underlying distributions. Fitting the model to data is as simple as calculating maximum likelihood parameters for the data that falls under each class, and making predictions is as simple as using Bayes' rule to determine which class is most likely given the distributions. Bayes' Rule is the following: \begin{equation} P(M|D) = \frac{P(D|M)P(M)}{P(D)} \end{equation} where M stands for the model and D stands for the data. $P(M)$ is known as the <i>prior</i>, because it is the probability that a sample is of a certain class before you even know what the sample is. This is generally just the frequency of each class. Intuitively, it makes sense that you would want to model this, because if one class occurs 10x more than another class, it is more likely that a given sample will belong to that distribution. $P(D|M)$ is the likelihood, or the probability, of the data under a given model. Lastly, $P(M|D)$ is the posterior, which is the probability of each component of the model, or class, being the component which generated the data. It is called the posterior because the prior corresponds to probabilities before seeing data, and the posterior corresponds to probabilities after observing the data. In cases where the prior is uniform, the posterior is just equal to the normalized likelihoods. 
This equation forms the basis of most probabilistic modeling, with interesting priors allowing the user to inject sophisticated expert knowledge into the problem directly. pomegranate implements two distinct models of this format with the simpler being the naive Bayes classifier. The naive Bayes classifier assumes that each feature is independent from each other feature, and so breaks down $P(D|M)$ to be $\prod\limits_{i=1}^{d} P(D_{i}|M_{i})$ where $i$ is a specific feature in a data set that has $d$ features in it. This typically means faster calculations because covariance across features doesn't need to be considered, but it also is a natural way to model each feature as a different distribution because it ignores the complexities of modeling the covariance between, say, an exponential distribution and a normal distribution. The more general model is the Bayes classifier which does not assume that each feature is independent of the others. This allows you to plug in anything for the likelihood function, whether it be a multivariate Gaussian distribution or a whole other compositional model. For instance, one could create a classifier whose components are each large mixture models. This enables much more complex models to be learned within the simple framework of Bayes' rule. 
End of explanation """ X = numpy.concatenate((numpy.random.normal(3, 1, 200), numpy.random.normal(10, 2, 1000))) y = numpy.concatenate((numpy.zeros(200), numpy.ones(1000))) x1 = X[:200] x2 = X[200:] plt.figure(figsize=(16, 5)) plt.hist(x1, bins=25, color='m', edgecolor='m', label="Class A") plt.hist(x2, bins=25, color='c', edgecolor='c', label="Class B") plt.xlabel("Value", fontsize=14) plt.ylabel("Count", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: Simple Gaussian Example End of explanation """ d1 = NormalDistribution.from_samples(x1) d2 = NormalDistribution.from_samples(x2) idxs = numpy.arange(0, 15, 0.1) p1 = list(map(d1.probability, idxs)) p2 = list(map(d2.probability, idxs)) plt.figure(figsize=(16, 5)) plt.plot(idxs, p1, color='m'); plt.fill_between(idxs, 0, p1, facecolor='m', alpha=0.2) plt.plot(idxs, p2, color='c'); plt.fill_between(idxs, 0, p2, facecolor='c', alpha=0.2) plt.xlabel("Value", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: The data seems like it comes from two normal distributions, with the cyan class being more prevalent than the magenta class. A natural way to model this data would be to create a normal distribution for the cyan data, and another for the magenta distribution. Let's take a look at doing that. All we need to do is use the from_samples class method of the NormalDistribution class. End of explanation """ magenta_prior = 1. * len(x1) / len(X) cyan_prior = 1. 
* len(x2) / len(X)

plt.figure(figsize=(4, 6))
plt.title("Prior Probabilities P(M)", fontsize=14)
plt.bar(0, magenta_prior, facecolor='m', edgecolor='m')
plt.bar(1, cyan_prior, facecolor='c', edgecolor='c')
plt.xticks([0, 1], ['P(Magenta)', 'P(Cyan)'], fontsize=14)
plt.yticks(fontsize=14)
plt.show()
"""
Explanation: It looks like some aspects of the data are captured well by doing things this way-- specifically the mean and variance of the normal distributions. This allows us to easily calculate $P(D|M)$ as the probability of a sample under either the cyan or magenta distributions using the normal (or Gaussian) probability density equation:
\begin{align}
P(D|M) &= P(x|\mu, \sigma) \
&= \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp \left(-\frac{(x-\mu)^{2}}{2\sigma^{2}} \right)
\end{align}
However, if we look at the original data, we see that the cyan distribution is both much wider than the magenta distribution and much taller, as there were more samples from that class in general. If we reduce the data down to these two distributions, we lose the class imbalance. We want our prior to model this class imbalance, with the reasoning being that if we randomly draw a sample from the samples observed thus far, it is far more likely to be a cyan than a magenta sample. Let's take a look at this class imbalance exactly.
End of explanation """ d1 = NormalDistribution.from_samples(x1) d2 = NormalDistribution.from_samples(x2) idxs = numpy.arange(0, 15, 0.1) p_magenta = numpy.array(list(map(d1.probability, idxs))) * magenta_prior p_cyan = numpy.array(list(map(d2.probability, idxs))) * cyan_prior plt.figure(figsize=(16, 5)) plt.plot(idxs, p_magenta, color='m'); plt.fill_between(idxs, 0, p_magenta, facecolor='m', alpha=0.2) plt.plot(idxs, p_cyan, color='c'); plt.fill_between(idxs, 0, p_cyan, facecolor='c', alpha=0.2) plt.xlabel("Value", fontsize=14) plt.ylabel("P(M)P(D|M)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: The prior $P(M)$ is a vector of probabilities over the classes that the model can predict, also known as components. In this case, if we draw a sample randomly from the data that we have, there is a ~83% chance that it will come from the cyan class and a ~17% chance that it will come from the magenta class. Let's multiply the probability densities we got before by this imbalance. End of explanation """ magenta_posterior = p_magenta / (p_magenta + p_cyan) cyan_posterior = p_cyan / (p_magenta + p_cyan) plt.figure(figsize=(16, 5)) plt.subplot(211) plt.plot(idxs, p_magenta, color='m'); plt.fill_between(idxs, 0, p_magenta, facecolor='m', alpha=0.2) plt.plot(idxs, p_cyan, color='c'); plt.fill_between(idxs, 0, p_cyan, facecolor='c', alpha=0.2) plt.xlabel("Value", fontsize=14) plt.ylabel("P(M)P(D|M)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(212) plt.plot(idxs, magenta_posterior, color='m') plt.plot(idxs, cyan_posterior, color='c') plt.xlabel("Value", fontsize=14) plt.ylabel("P(M|D)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: This looks a lot more faithful to the original data, and actually corresponds to $P(M)P(D|M)$, the prior multiplied by the likelihood. However, these aren't actually probability distributions anymore, as they no longer integrate to 1. 
This is why the $P(M)P(D|M)$ term has to be normalized by the $P(D)$ term in Bayes' rule in order to get a probability distribution over the components. However, $P(D)$ is difficult to determine exactly-- what is the probability of the data? Well, we can sum over the classes to get that value, since $P(D) = \sum_{i=1}^{c} P(D|M)P(M)$ for a problem with c classes. This translates into $P(D) = P(M=Cyan)P(D|M=Cyan) + P(M=Magenta)P(D|M=Magenta)$ for this specific problem, and those values can just be pulled from the unnormalized plots above. This gives us the full Bayes' rule, with the posterior $P(M|D)$ being the proportion of density of the above plot coming from each of the two distributions at any point on the line. Let's take a look at the posterior probabilities of the two classes on the same line. End of explanation """ idxs = idxs.reshape(idxs.shape[0], 1) X = X.reshape(X.shape[0], 1) model = NaiveBayes.from_samples(NormalDistribution, X, y) posteriors = model.predict_proba(idxs) plt.figure(figsize=(14, 4)) plt.plot(idxs, posteriors[:,0], color='m') plt.plot(idxs, posteriors[:,1], color='c') plt.xlabel("Value", fontsize=14) plt.ylabel("P(M|D)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: The top plot shows the same densities as before, while the bottom plot shows the proportion of the density belonging to either class at that point. This proportion is known as the posterior $P(M|D)$, and can be interpreted as the probability of that point belonging to each class. This is one of the native benefits of probabilistic models, that instead of providing a hard class label for each sample, they can provide a soft label in the form of the probability of belonging to each class. We can implement all of this simply in pomegranate using the NaiveBayes class. 
End of explanation """ X = numpy.concatenate([numpy.random.normal(3, 2, size=(150, 2)), numpy.random.normal(7, 1, size=(250, 2))]) y = numpy.concatenate([numpy.zeros(150), numpy.ones(250)]) plt.figure(figsize=(8, 8)) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: Looks like we're getting the same plots for the posteriors just through fitting the naive Bayes model directly to data. The predictions made will come directly from the posteriors in this plot, with cyan predictions happening whenever the cyan posterior is greater than the magenta posterior, and vice-versa. Naive Bayes In the univariate setting, naive Bayes is identical to a general Bayes classifier. The divergence occurs in the multivariate setting, the naive Bayes model assumes independence of all features, while a Bayes classifier is more general and can support more complicated interactions or covariances between features. Let's take a look at what this means in terms of Bayes' rule. \begin{align} P(M|D) &= \frac{P(M)P(D|M)}{P(D)} \ &= \frac{P(M)\prod_{i=1}^{d}P(D_{i}|M_{i})}{P(D)} \end{align} This looks fairly simple to compute, as we just need to pass each dimension into the appropriate distribution and then multiply the returned probabilities together. This simplicity is one of the reasons why naive Bayes is so widely used. Let's look closer at using this in pomegranate, starting off by generating two blobs of data that overlap a bit and inspecting them. 
End of explanation """ from sklearn.naive_bayes import GaussianNB model = NaiveBayes.from_samples(NormalDistribution, X, y) clf = GaussianNB().fit(X, y) xx, yy = numpy.meshgrid(numpy.arange(-2, 10, 0.02), numpy.arange(-4, 12, 0.02)) Z1 = model.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = clf.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(16, 8)) plt.subplot(121) plt.title("pomegranate naive Bayes", fontsize=16) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.contour(xx, yy, Z1, cmap='Blues') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(122) plt.title("sklearn naive Bayes", fontsize=16) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.contour(xx, yy, Z2, cmap='Blues') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() """ Explanation: Now, let's fit our naive Bayes model to this data using pomegranate. We can use the from_samples class method, pass in the distribution that we want to model each dimension, and then the data. We choose to use NormalDistribution in this particular case, but any supported distribution would work equally well, such as BernoulliDistribution or ExponentialDistribution. To ensure we get the correct decision boundary, let's also plot the boundary recovered by sklearn. 
End of explanation """ def plot_signal(X, n): plt.figure(figsize=(16, 6)) t_current = 0 for i in range(n): mu, std, t = X[i] chunk = numpy.random.normal(mu, std, int(t)) plt.plot(numpy.arange(t_current, t_current+t), chunk, c='cm'[i % 2]) t_current += t plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel("Time (s)", fontsize=14) plt.ylabel("Signal", fontsize=14) plt.ylim(20, 40) plt.show() def create_signal(n): X, y = [], [] for i in range(n): mu = numpy.random.normal(30.0, 0.4) std = numpy.random.lognormal(-0.1, 0.4) t = int(numpy.random.exponential(50)) + 1 X.append([mu, std, int(t)]) y.append(0) mu = numpy.random.normal(30.5, 0.8) std = numpy.random.lognormal(-0.3, 0.6) t = int(numpy.random.exponential(200)) + 1 X.append([mu, std, int(t)]) y.append(1) return numpy.array(X), numpy.array(y) X_train, y_train = create_signal(1000) X_test, y_test = create_signal(250) plot_signal(X_train, 20) """ Explanation: Drawing the decision boundary helps to verify that we've produced a good result by cleanly splitting the two blobs from each other. Bayes' rule provides a great deal of flexibility in terms of what the actually likelihood functions are. For example, when considering a multivariate distribution, there is no need for each dimension to be modeled by the same distribution. In fact, each dimension can be modeled by a different distribution, as long as we can multiply the $P(D|M)$ terms together. Classifying Signal with Different Distributions on Different Features Let's consider the example of some noisy signals that have been segmented. We know that they come from two underlying phenomena, the cyan phenomena and the magenta phenomena, and want to classify future segments. To do this, we have three features-- the mean signal of the segment, the standard deviation, and the duration. 
End of explanation """ model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train) print("Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()) clf = GaussianNB().fit(X_train, y_train) print("sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()) """ Explanation: We can start by modeling each variable as Gaussians, like before, and see what accuracy we get. End of explanation """ plt.figure(figsize=(14, 4)) plt.subplot(131) plt.title("Mean") plt.hist(X_train[y_train == 0, 0], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 0], color='m', alpha=0.5, bins=25) plt.subplot(132) plt.title("Standard Deviation") plt.hist(X_train[y_train == 0, 1], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 1], color='m', alpha=0.5, bins=25) plt.subplot(133) plt.title("Duration") plt.hist(X_train[y_train == 0, 2], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 2], color='m', alpha=0.5, bins=25) plt.show() """ Explanation: We get identical values for sklearn and for pomegranate, which is good. However, let's take a look at the data itself to see whether a Gaussian distribution is the appropriate distribution for the data. End of explanation """ model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train) print("Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()) clf = GaussianNB().fit(X_train, y_train) print("sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()) model = NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train) print("Heterogeneous Naive Bayes: ", (model.predict(X_test) == y_test).mean()) """ Explanation: So, unsurprisingly (since you can see that I used non-Gaussian distributions to generate the data originally), it looks like only the mean follows a normal distribution, whereas the standard deviation seems to follow either a gamma or a log-normal distribution. 
We can take advantage of that by explicitly using these distributions instead of approximating them as normal distributions. pomegranate is flexible enough to allow for this, whereas sklearn currently is not. End of explanation """ %timeit GaussianNB().fit(X_train, y_train) %timeit NaiveBayes.from_samples(NormalDistribution, X_train, y_train) %timeit NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train) """ Explanation: It looks like we're able to get a small improvement in accuracy just by using appropriate distributions for the features, without any type of data transformation or filtering. This certainly seems worthwhile if you can determine what the appropriate underlying distribution is. Next, there's obviously the issue of speed. Let's compare the speed of the pomegranate implementation and the sklearn implementation. End of explanation """ pom_time, skl_time = [], [] n1, n2 = 15000, 60000, for d in range(1, 101, 5): X = numpy.concatenate([numpy.random.normal(3, 2, size=(n1, d)), numpy.random.normal(7, 1, size=(n2, d))]) y = numpy.concatenate([numpy.zeros(n1), numpy.ones(n2)]) tic = time.time() for i in range(25): GaussianNB().fit(X, y) skl_time.append((time.time() - tic) / 25) tic = time.time() for i in range(25): NaiveBayes.from_samples(NormalDistribution, X, y) pom_time.append((time.time() - tic) / 25) plt.figure(figsize=(14, 6)) plt.plot(range(1, 101, 5), pom_time, color='c', label="pomegranate") plt.plot(range(1, 101, 5), skl_time, color='m', label="sklearn") plt.xticks(fontsize=14) plt.xlabel("Number of Dimensions", fontsize=14) plt.yticks(fontsize=14) plt.ylabel("Time (s)") plt.legend(fontsize=14) plt.show() """ Explanation: Looks as if on this small dataset they're all taking approximately the same time. This is pretty much expected, as the fitting step is fairly simple and both implementations use C-level numerics for the calculations. 
We can give a more thorough treatment of the speed comparison on larger datasets. Let's look at the average time it takes to fit a model to data of increasing dimensionality across 25 runs.
End of explanation
"""
import time  # needed for the timings below; not imported in the first cell

pom_time, skl_time = [], []

n1, n2 = 15000, 60000
for d in range(1, 101, 5):
    X = numpy.concatenate([numpy.random.normal(3, 2, size=(n1, d)), numpy.random.normal(7, 1, size=(n2, d))])
    y = numpy.concatenate([numpy.zeros(n1), numpy.ones(n2)])

    tic = time.time()
    for i in range(25):
        GaussianNB().fit(X, y)
    skl_time.append((time.time() - tic) / 25)

    tic = time.time()
    for i in range(25):
        NaiveBayes.from_samples(NormalDistribution, X, y)
    pom_time.append((time.time() - tic) / 25)

plt.figure(figsize=(14, 6))
plt.plot(range(1, 101, 5), pom_time, color='c', label="pomegranate")
plt.plot(range(1, 101, 5), skl_time, color='m', label="sklearn")
plt.xticks(fontsize=14)
plt.xlabel("Number of Dimensions", fontsize=14)
plt.yticks(fontsize=14)
plt.ylabel("Time (s)")
plt.legend(fontsize=14)
plt.show()
"""
Explanation: It appears as if the two implementations are basically the same speed. This is unsurprising given the simplicity of the calculations and, as mentioned before, the low-level implementation.
Multivariate Gaussian Bayes Classifiers
The natural generalization of the naive Bayes classifier is to allow any multivariate function to take the place of $P(D|M)$ instead of it being the product of several univariate probability distributions. One immediate difference is that now, instead of creating a Gaussian model with effectively a diagonal covariance matrix, you can create one with a full covariance matrix. Let's see an example of that at work.
End of explanation
"""
tilt_a = [[-2, 0.5], [5, 2]]
tilt_b = [[-1, 1.5], [3, 3]]

X = numpy.concatenate((numpy.random.normal(4, 1, size=(250, 2)).dot(tilt_a), numpy.random.normal(3, 1, size=(800, 2)).dot(tilt_b)))
y = numpy.concatenate((numpy.zeros(250), numpy.ones(800)))

model_a = NaiveBayes.from_samples(NormalDistribution, X, y)
model_b = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)

xx, yy = numpy.meshgrid(numpy.arange(-5, 30, 0.02), numpy.arange(0, 25, 0.02))
Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.figure(figsize=(18, 8))
plt.subplot(121)
plt.contour(xx, yy, Z1, cmap='Blues')
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.xlim(-5, 30)
plt.ylim(0, 25)

plt.subplot(122)
plt.contour(xx, yy, Z2, cmap='Blues')
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.xlim(-5, 30)
plt.ylim(0, 25)
plt.show()
"""
Explanation: It looks like we are able to get a better boundary between the two blobs of data. The primary reason for this is that the data don't form spherical clusters, as is assumed when you force a diagonal covariance matrix, but tilted ellipsoids, which can be better modeled by a full covariance matrix. We can quantify this quickly by looking at performance on the training data.
End of explanation
"""
print("naive training accuracy: {:4.4}".format((model_a.predict(X) == y).mean()))
print("bayes classifier training accuracy: {:4.4}".format((model_b.predict(X) == y).mean()))
"""
Explanation: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient.
Gaussian Mixture Model Bayes Classifier
While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that $P(D|M)$ has to even be a single simple distribution rather than a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single Gaussian distribution.
End of explanation
"""
X = numpy.empty(shape=(0, 2))
X = numpy.concatenate((X, numpy.random.normal(4, 1, size=(200, 2)).dot([[-2, 0.5], [2, 0.5]])))
X = numpy.concatenate((X, numpy.random.normal(3, 1, size=(350, 2)).dot([[-1, 2], [1, 0.8]])))
X = numpy.concatenate((X, numpy.random.normal(7, 1, size=(700, 2)).dot([[-0.75, 0.8], [0.9, 1.5]])))
X = numpy.concatenate((X, numpy.random.normal(6, 1, size=(120, 2)).dot([[-1.5, 1.2], [0.6, 1.2]])))
y = numpy.concatenate((numpy.zeros(550), numpy.ones(820)))

model_a = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)

gmm_a = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 0])
gmm_b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 1])
model_b = BayesClassifier([gmm_a, gmm_b], weights=numpy.array([1-y.mean(), y.mean()]))

xx, yy = numpy.meshgrid(numpy.arange(-10, 10, 0.02), numpy.arange(0, 25, 0.02))
Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

centroids1 = numpy.array([distribution.mu for distribution in model_a.distributions])
centroids2 = numpy.concatenate([[distribution.mu for distribution in component.distributions] for component in model_b.distributions])

plt.figure(figsize=(18, 8))
plt.subplot(121)
plt.contour(xx, yy, Z1, cmap='Blues')
plt.scatter(X[y ==
0, 1], color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.scatter(centroids1[:,0], centroids1[:,1], color='k', s=100) plt.subplot(122) plt.contour(xx, yy, Z2, cmap='Blues') plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.scatter(centroids2[:,0], centroids2[:,1], color='k', s=100) plt.show() """ Explanation: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient. Gaussian Mixture Model Bayes Classifier While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that the $P(D|M)$ has to even be a single simple distribution versus a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single gaussian distribution. End of explanation """
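As a closing sanity check, the posterior computation that every classifier in this tutorial shares can be written out by hand for the univariate Gaussian case. This is a plain-Python sketch of Bayes' rule using the same shapes as the first example — it is not pomegranate's internal code:

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Same setup as the first example: magenta ~ N(3, 1) with 200 samples and
# cyan ~ N(10, 2) with 1000 samples, so the priors are 1/6 and 5/6.
priors = {"magenta": 200 / 1200, "cyan": 1000 / 1200}
params = {"magenta": (3.0, 1.0), "cyan": (10.0, 2.0)}

def posterior(x):
    """P(M|D) for one point: prior times likelihood, normalized by P(D)."""
    joint = {c: priors[c] * normal_pdf(x, *params[c]) for c in priors}
    z = sum(joint.values())  # P(D), the normalizer in Bayes' rule
    return {c: joint[c] / z for c in joint}

post = posterior(4.0)
```

At x = 4 the magenta likelihood dominates even after the 5:1 prior favoring cyan, which matches the posterior plot earlier in the tutorial.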
turbomanage/training-data-analyst
courses/machine_learning/deepdive/06_structured/labs/6_deploy.ipynb
apache-2.0
# change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION os.environ['TFVERSION'] = '1.13' %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi %%bash # copy solution to Lab #5 (skip this step if you still have results from Lab 5 in your bucket) gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight/trained_model """ Explanation: <h1> Deploying and predicting with model </h1> This notebook illustrates: <ol> <li> Deploying model <li> Predicting with model </ol> End of explanation """ %%bash gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ %%bash MODEL_NAME="babyweight" MODEL_VERSION="ml_on_gcp" MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} #gcloud ai-platform models delete ${MODEL_NAME} #gcloud ai-platform models create ${MODEL_NAME} --regions $REGION #gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION """ Explanation: Task 1 What files are present in the model trained directory (gs://${BUCKET}/babyweight/trained_model)? Hint (highlight to see): <p style='color:white'> Run gsutil ls in a bash cell. Answer: model checkpoints are in the trained model directory and several exported models (model architecture + weights) are in the export/exporter subdirectory </p> <h2> Task 2: Deploy trained, exported model </h2> Uncomment and run the the appropriate gcloud lines ONE-BY-ONE to deploy the trained model to act as a REST web service. 
Hint (highlight to see): <p style='color:white'> The very first time, you need only the last two gcloud calls to create the model and the version. To experiment later, you might need to delete any deployed version, but should not have to recreate the model </p> End of explanation """ from oauth2client.client import GoogleCredentials import requests import json MODEL_NAME = 'babyweight' MODEL_VERSION = 'ml_on_gcp' token = GoogleCredentials.get_application_default().get_access_token().access_token api = 'https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict' \ .format(PROJECT, MODEL_NAME, MODEL_VERSION) headers = {'Authorization': 'Bearer ' + token } data = { 'instances': [ # TODO: complete { 'key': 'b1', 'is_male': 'True', 'mother_age': 26.0, 'plurality': 'Single(1)', 'gestation_weeks': 39 }, ] } response = requests.post(api, json=data, headers=headers) print(response.content) """ Explanation: Task 3: Write Python code to invoke the deployed model (online prediction) <p> Send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances. The deployed model requires the input instances to be formatted as follows: <pre> { 'key': 'b1', 'is_male': 'True', 'mother_age': 26.0, 'plurality': 'Single(1)', 'gestation_weeks': 39 }, </pre> The key is an arbitrary string. Allowed values for is_male are True, False and Unknown.
Allowed values for plurality are Single(1), Twins(2), Triplets(3), Multiple(2+). End of explanation """ %%writefile inputs.json {"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} {"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} %%bash INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs gsutil cp inputs.json $INPUT gsutil -m rm -rf $OUTPUT gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \ --data-format=TEXT --region ${REGION} \ --input-paths=$INPUT \ --output-path=$OUTPUT \ --model=babyweight --version=ml_on_gcp """ Explanation: <h2> Task 4: Try out batch prediction </h2> <p> Batch prediction is commonly used when you need thousands to millions of predictions. Create a file with one instance per line and submit using gcloud. End of explanation """
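When the batch job finishes, it writes newline-delimited JSON result files under $OUTPUT. A minimal sketch of joining those records back to their input keys — note the sample lines and field values here are hypothetical, since the exact record schema depends on the deployed model's serving signature:

```python
import json

# Hypothetical stand-ins for lines read from the batch-prediction output
# files (the real files live under the $OUTPUT GCS prefix).
sample_lines = [
    '{"key": "b1", "predictions": [7.45]}',
    '{"key": "g1", "predictions": [7.12]}',
]

# Map each instance key to its predicted value.
by_key = {rec['key']: rec['predictions'][0]
          for rec in (json.loads(line) for line in sample_lines)}
print(by_key['b1'])  # 7.45
```

Because each line is an independent JSON object, the same loop works whether you stream the files from GCS or copy them locally first.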
jo-tez/aima-python
knowledge_FOIL.ipynb
mit
from knowledge import * from notebook import pseudocode, psource """ Explanation: KNOWLEDGE The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach. Execute the cell below to get started. End of explanation """ psource(FOIL_container) """ Explanation: CONTENTS Overview Inductive Logic Programming (FOIL) OVERVIEW Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and to find a proper hypothesis. First-Order Logic Usually knowledge in this field is represented as first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called the goal predicate, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples. Representation In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions. For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example: {'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True} A hypothesis can be the following: [{'Species': 'Cat'}] which means an animal will take an umbrella if and only if it is a cat.
Consistency We say that an example e is consistent with a hypothesis h if the assignment from the hypothesis for e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value assigned to e is False while e['GOAL'] is True. Inductive Logic Programming (FOIL) Inductive logic programming (ILP) combines inductive methods with the power of first-order representations, concentrating in particular on the representation of hypotheses as logic programs. The general knowledge-based induction problem is to solve the entailment constraint: <br> <br> $ Background \land Hypothesis \land Descriptions \vDash Classifications $ for the unknown $Hypothesis$, given the $Background$ knowledge described by $Descriptions$ and $Classifications$. The first approach to ILP works by starting with a very general rule and gradually specializing it so that it fits the data. <br> This is essentially what happens in decision-tree learning, where a decision tree is gradually grown until it is consistent with the observations. <br> To do ILP we use first-order literals instead of attributes, and the $Hypothesis$ is a set of clauses (a set of first-order rules, where each rule is similar to a Horn clause) instead of a decision tree. <br> The FOIL algorithm learns new rules, one at a time, in order to cover all given positive and negative examples. <br> More precisely, FOIL contains an inner and an outer while loop. <br> - outer loop: <font color='blue'>(function foil()) </font> add rules until all positive examples are covered. <br> (each rule is a conjunction of literals, which are chosen inside the inner loop) inner loop: <font color ='blue'>(function new_clause()) </font> add new literals until all negative examples are covered, and some positive examples are covered.
<br> In each iteration, we select/add the most promising literal, according to an estimate of its utility. <font color ='blue'>(function new_literal()) </font> <br> The evaluation function to estimate the utility of adding literal $L$ to a set of rules $R$ is <font color ='blue'>(function gain()) </font> : $$ FoilGain(L,R) = t \big( \log_2{\frac{p_1}{p_1+n_1}} - \log_2{\frac{p_0}{p_0+n_0}} \big) $$ where: $p_0: \text{is the number of positive bindings of rule R } \\ n_0: \text{is the number of negative bindings of R} \\ p_1: \text{is the number of positive bindings of rule R'}\\ n_1: \text{is the number of negative bindings of R'}\\ t: \text{is the number of positive bindings of rule R that are still covered after adding literal L to R}$ Calculate the extended examples for the chosen literal <font color ='blue'>(function extend_example()) </font> <br> (the set of examples created by extending example with each possible constant value for each new variable in literal) Finally, the algorithm returns a disjunction of first order rules (= conjunction of literals) End of explanation """
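The gain formula above can be computed directly; this is a standalone sketch for illustration, not the module's actual gain() implementation:

```python
import math

def foil_gain(p0, n0, p1, n1, t):
    """FOIL gain for extending rule R (p0/n0 bindings) into R' (p1/n1 bindings).

    t is the number of positive bindings of R that R' still covers.
    """
    return t * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# A literal that keeps all 4 positive bindings and excludes all 4 negatives:
print(foil_gain(p0=4, n0=4, p1=4, n1=0, t=4))  # 4.0
```

The first log term goes to 0 as the extended rule becomes purely positive, so gain grows both with precision and with the number of positive bindings retained.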
expr('Andrew')}, {x: expr('Anne'), y: expr('Peter')}, {x: expr('Anne'), y: expr('Zara')}, {x: expr('Mark'), y: expr('Peter')}, {x: expr('Mark'), y: expr('Zara')}, {x: expr('Andrew'), y: expr('Beatrice')}, {x: expr('Andrew'), y: expr('Eugenie')}, {x: expr('Sarah'), y: expr('Beatrice')}, {x: expr('Sarah'), y: expr('Eugenie')}] examples_neg = [{x: expr('Anne'), y: expr('Eugenie')}, {x: expr('Beatrice'), y: expr('Eugenie')}, {x: expr('Mark'), y: expr('Elizabeth')}, {x: expr('Beatrice'), y: expr('Philip')}] # run the FOIL algorithm clauses = small_family.foil([examples_pos, examples_neg], target) print (clauses) """ Explanation: Example Family Suppose we have the following family relations: <br> <br> Given some positive and negative examples of the relation 'Parent(x,y)', we want to find a set of rules that satisfies all the examples. <br> A definition of Parent is $Parent(x,y) \Leftrightarrow Mother(x,y) \lor Father(x,y)$, which is the result that we expect from the algorithm. End of explanation """ target = expr('Grandparent(x, y)') examples_pos = [{x: expr('Elizabeth'), y: expr('Peter')}, {x: expr('Elizabeth'), y: expr('Zara')}, {x: expr('Elizabeth'), y: expr('Beatrice')}, {x: expr('Elizabeth'), y: expr('Eugenie')}, {x: expr('Philip'), y: expr('Peter')}, {x: expr('Philip'), y: expr('Zara')}, {x: expr('Philip'), y: expr('Beatrice')}, {x: expr('Philip'), y: expr('Eugenie')}] examples_neg = [{x: expr('Anne'), y: expr('Eugenie')}, {x: expr('Beatrice'), y: expr('Eugenie')}, {x: expr('Elizabeth'), y: expr('Andrew')}, {x: expr('Elizabeth'), y: expr('Anne')}, {x: expr('Elizabeth'), y: expr('Mark')}, {x: expr('Elizabeth'), y: expr('Sarah')}, {x: expr('Philip'), y: expr('Anne')}, {x: expr('Philip'), y: expr('Andrew')}, {x: expr('Anne'), y: expr('Peter')}, {x: expr('Anne'), y: expr('Zara')}, {x: expr('Mark'), y: expr('Peter')}, {x: expr('Mark'), y: expr('Zara')}, {x: expr('Andrew'), y: expr('Beatrice')}, {x: expr('Andrew'), y: expr('Eugenie')}, {x: expr('Sarah'), y: 
expr('Beatrice')}, {x: expr('Mark'), y: expr('Elizabeth')}, {x: expr('Beatrice'), y: expr('Philip')}, {x: expr('Peter'), y: expr('Andrew')}, {x: expr('Zara'), y: expr('Mark')}, {x: expr('Peter'), y: expr('Anne')}, {x: expr('Zara'), y: expr('Eugenie')}, ] clauses = small_family.foil([examples_pos, examples_neg], target) print(clauses) """ Explanation: Indeed the algorithm returned the rule: <br>$Parent(x,y) \Leftrightarrow Mother(x,y) \lor Father(x,y)$ Suppose that we have some positive and negative results for the relation 'GrandParent(x,y)' and we want to find a set of rules that satisfies the examples. <br> One possible set of rules for the relation $Grandparent(x,y)$ could be: <br> <br> Or, if $Background$ included the sentence $Parent(x,y) \Leftrightarrow [Mother(x,y) \lor Father(x,y)]$ then: $$Grandparent(x,y) \Leftrightarrow \exists \: z \quad Parent(x,z) \land Parent(z,y)$$ End of explanation """ """ A H |\ /| | \ / | v v v v B D-->E-->G-->I | / | | / | vv v C F """ small_network = FOIL_container([expr("Conn(A, B)"), expr("Conn(A ,D)"), expr("Conn(B, C)"), expr("Conn(D, C)"), expr("Conn(D, E)"), expr("Conn(E ,F)"), expr("Conn(E, G)"), expr("Conn(G, I)"), expr("Conn(H, G)"), expr("Conn(H, I)")]) target = expr('Reach(x, y)') examples_pos = [{x: A, y: B}, {x: A, y: C}, {x: A, y: D}, {x: A, y: E}, {x: A, y: F}, {x: A, y: G}, {x: A, y: I}, {x: B, y: C}, {x: D, y: C}, {x: D, y: E}, {x: D, y: F}, {x: D, y: G}, {x: D, y: I}, {x: E, y: F}, {x: E, y: G}, {x: E, y: I}, {x: G, y: I}, {x: H, y: G}, {x: H, y: I}] nodes = {A, B, C, D, E, F, G, H, I} examples_neg = [example for example in [{x: a, y: b} for a in nodes for b in nodes] if example not in examples_pos] clauses = small_network.foil([examples_pos, examples_neg], target) print(clauses) """ Explanation: Indeed the algorithm returned the rule: <br>$Grandparent(x,y) \Leftrightarrow \exists \: v \: \: Parent(x,v) \land Parent(v,y)$ Example Network Suppose that we have the following directed graph and we want to find a 
rule that describes the reachability between two nodes (Reach(x,y)). <br> Such a rule could be recursive, since y can be reached from x if and only if there is a sequence of adjacent nodes from x to y: $$ Reach(x,y) \Leftrightarrow \begin{cases} Conn(x,y), \: \text{(if there is a directed edge from x to y)} \ \lor \quad \exists \: z \quad Reach(x,z) \land Reach(z,y) \end{cases}$$ End of explanation """
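The hand-listed positive examples above can also be derived mechanically. The helper below (not part of the aima-python code; it uses plain strings in place of the expr node symbols) computes the transitive closure of the Conn edges:

```python
from collections import defaultdict

def reachable_pairs(edges):
    """Return every (x, y) with a directed path of length >= 1 from x to y."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
    pairs = set()
    def dfs(start, node):
        for nxt in adj[node]:
            if (start, nxt) not in pairs:
                pairs.add((start, nxt))
                dfs(start, nxt)
    for n in {v for e in edges for v in e}:
        dfs(n, n)
    return pairs

# Same directed graph as the Conn facts above, with string node names.
conn = [('A', 'B'), ('A', 'D'), ('B', 'C'), ('D', 'C'), ('D', 'E'),
        ('E', 'F'), ('E', 'G'), ('G', 'I'), ('H', 'G'), ('H', 'I')]
pairs = reachable_pairs(conn)
print(len(pairs))  # 19, matching the examples_pos list above
```

Generating examples_neg as the complement of this set over all node pairs is exactly what the list comprehension in the cell above does.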
mne-tools/mne-tools.github.io
0.17/_downloads/99b2d0c9aaf0ce2af85d098f7ac4577c/plot_head_positions.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) from os import path as op import mne print(__doc__) data_path = op.join(mne.datasets.testing.data_path(verbose=True), 'SSS') pos = mne.chpi.read_head_pos(op.join(data_path, 'test_move_anon_raw.pos')) """ Explanation: Visualize subject head movement Show how subjects move as a function of time. End of explanation """ mne.viz.plot_head_positions(pos, mode='traces') """ Explanation: Visualize the subject head movements as traces: End of explanation """ mne.viz.plot_head_positions(pos, mode='field') """ Explanation: Or we can visualize them as a continuous field (with the vectors pointing in the head-upward direction): End of explanation """
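A quick way to quantify the movement shown in those plots is to look at the translation columns of the pos array. This sketch uses a toy array in place of the real file, and assumes the (N, 10) column layout that read_head_pos documents — [t, q1, q2, q3, x, y, z, gof, err, v]:

```python
import numpy as np

# Toy stand-in for the (N, 10) array returned by mne.chpi.read_head_pos.
pos_toy = np.array([
    [0.0, 0, 0, 0, 0.00, 0.00, 0.04, 1, 0, 0],
    [1.0, 0, 0, 0, 0.00, 0.03, 0.04, 1, 0, 0],
    [2.0, 0, 0, 0, 0.04, 0.03, 0.04, 1, 0, 0],
])
xyz = pos_toy[:, 4:7]  # head translation columns, in meters
step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # movement between samples
print(round(step.sum(), 3))  # total path length in meters: 0.07
```

Large values of step.sum() (or large single steps) are the same movements that stand out visually in the 'traces' plot.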
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
apache-2.0
!pip install -q -U "tensorflow-text==2.8.*" !pip install -q tf-models-official==2.4.0 """ Explanation: Fine-Tuning a BERT Model Learning objectives Get the dataset from TensorFlow Datasets. Preprocess the data. Build the model. Train the model. Re-encoding a large dataset. Introduction In this notebook, you will work through fine-tuning a BERT model using the tensorflow-models PIP package. The pretrained BERT model this tutorial is based on is also available on TensorFlow Hub; to see how to use it, refer to the Hub Appendix. Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference. Setup Install the TensorFlow Model Garden pip package tf-models-official is the stable Model Garden package. Note that it may not include the latest changes in the tensorflow_models github repo. To include the latest changes, you may install tf-models-nightly, which is the nightly Model Garden package created daily automatically. pip will install all models and dependencies automatically.
End of explanation """ import os import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_hub as hub import tensorflow_datasets as tfds tfds.disable_progress_bar() from official.modeling import tf_utils from official import nlp from official.nlp import bert # Load the required submodules import official.nlp.optimization import official.nlp.bert.bert_models import official.nlp.bert.configs import official.nlp.bert.run_classifier import official.nlp.bert.tokenization import official.nlp.data.classifier_data_lib import official.nlp.modeling.losses import official.nlp.modeling.models import official.nlp.modeling.networks """ Explanation: Imports End of explanation """ gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12" tf.io.gfile.listdir(gs_folder_bert) """ Explanation: Resources This directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial: End of explanation """ hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3" """ Explanation: You can get a pre-trained BERT encoder from TensorFlow Hub: End of explanation """ glue, info = tfds.load('glue/mrpc', with_info=True, # It's small, load the whole dataset batch_size=-1) list(glue.keys()) """ Explanation: The data For this example we used the GLUE MRPC dataset from TFDS. This dataset is not set up so that it can be directly fed into the BERT model, so this section also handles the necessary preprocessing. Get the dataset from TensorFlow Datasets The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. Number of labels: 2. Size of training dataset: 3668. Size of evaluation dataset: 408. Maximum sequence length of training and evaluation dataset: 128. 
End of explanation """ info.features """ Explanation: The info object describes the dataset and its features: End of explanation """ info.features['label'].names """ Explanation: The two classes are: End of explanation """ glue_train = glue['train'] for key, value in glue_train.items(): print(f"{key:9s}: {value[0].numpy()}") """ Explanation: Here is one example from the training set: End of explanation """ # Set up tokenizer to generate Tensorflow dataset tokenizer = # TODO 1: Your code goes here print("Vocab size:", len(tokenizer.vocab)) """ Explanation: The BERT tokenizer To fine tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as you used during training. The BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). So you can't just plug it into your model as a keras.layer like you can with preprocessing.TextVectorization. The following code rebuilds the tokenizer that was used by the base model: End of explanation """ tokens = tokenizer.tokenize("Hello TensorFlow!") print(tokens) ids = tokenizer.convert_tokens_to_ids(tokens) print(ids) """ Explanation: Tokenize a sentence: End of explanation """ tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]']) """ Explanation: Preprocess the data This section manually preprocesses the dataset into the format expected by the model. This dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the tf_models library includes some tools for preprocessing and re-serializing a dataset. See Appendix: Re-encoding a large dataset for details. Encode the sentences The model expects its two input sentences to be concatenated together.
This input is expected to start with a [CLS] "This is a classification problem" token, and each sentence should end with a [SEP] "Separator" token: End of explanation """ def encode_sentence(s): tokens = list(tokenizer.tokenize(s.numpy())) tokens.append('[SEP]') return tokenizer.convert_tokens_to_ids(tokens) sentence1 = tf.ragged.constant([ encode_sentence(s) for s in glue_train["sentence1"]]) sentence2 = tf.ragged.constant([ encode_sentence(s) for s in glue_train["sentence2"]]) print("Sentence1 shape:", sentence1.shape.as_list()) print("Sentence2 shape:", sentence2.shape.as_list()) """ Explanation: Start by encoding all the sentences while appending a [SEP] token, and packing them into ragged-tensors: End of explanation """ cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0] input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1) _ = plt.pcolormesh(input_word_ids.to_tensor()) """ Explanation: Now prepend a [CLS] token, and concatenate the ragged tensors to form a single input_word_ids tensor for each example. RaggedTensor.to_tensor() zero pads to the longest sequence. End of explanation """ input_mask = tf.ones_like(input_word_ids).to_tensor() plt.pcolormesh(input_mask) """ Explanation: Mask and input type The model expects two additional inputs: The input mask The input type The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding. End of explanation """ type_cls = tf.zeros_like(cls) type_s1 = tf.zeros_like(sentence1) type_s2 = tf.ones_like(sentence2) input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor() plt.pcolormesh(input_type_ids) """ Explanation: The "input type" also has the same shape, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of. 
End of explanation """ def encode_sentence(s, tokenizer): tokens = list(tokenizer.tokenize(s)) tokens.append('[SEP]') return tokenizer.convert_tokens_to_ids(tokens) def bert_encode(glue_dict, tokenizer): num_examples = len(glue_dict["sentence1"]) sentence1 = tf.ragged.constant([ encode_sentence(s, tokenizer) for s in np.array(glue_dict["sentence1"])]) sentence2 = tf.ragged.constant([ encode_sentence(s, tokenizer) for s in np.array(glue_dict["sentence2"])]) cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0] input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1) input_mask = tf.ones_like(input_word_ids).to_tensor() type_cls = tf.zeros_like(cls) type_s1 = tf.zeros_like(sentence1) type_s2 = tf.ones_like(sentence2) input_type_ids = tf.concat( [type_cls, type_s1, type_s2], axis=-1).to_tensor() inputs = { 'input_word_ids': input_word_ids.to_tensor(), 'input_mask': input_mask, 'input_type_ids': input_type_ids} return inputs glue_train = bert_encode(glue['train'], tokenizer) glue_train_labels = glue['train']['label'] glue_validation = bert_encode(glue['validation'], tokenizer) glue_validation_labels = glue['validation']['label'] glue_test = bert_encode(glue['test'], tokenizer) glue_test_labels = glue['test']['label'] """ Explanation: Put it all together Collect the above text parsing code into a single function, and apply it to each split of the glue/mrpc dataset. End of explanation """ # Print the key value and shapes for key, value in glue_train.items(): # TODO 2: Your code goes here print(f'glue_train_labels shape: {glue_train_labels.shape}') """ Explanation: Each subset of the data has been converted to a dictionary of features, and a set of labels. 
Each feature in the input dictionary has the same shape, and the number of labels should match: End of explanation """ import json bert_config_file = os.path.join(gs_folder_bert, "bert_config.json") config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read()) bert_config = bert.configs.BertConfig.from_dict(config_dict) config_dict """ Explanation: The model Build the model The first step is to download the configuration for the pre-trained model. End of explanation """ bert_classifier, bert_encoder = bert.bert_models.classifier_model( bert_config, num_labels=2) """ Explanation: The config defines the core BERT Model, which is a Keras model to predict the outputs of num_classes from the inputs with maximum sequence length max_seq_length. This function returns both the encoder and the classifier. End of explanation """ tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48) """ Explanation: The classifier has three inputs and one output: End of explanation """ glue_batch = {key: val[:10] for key, val in glue_train.items()} bert_classifier( glue_batch, training=True ).numpy() """ Explanation: Run it on a test batch of 10 examples from the training set. The output is the logits for the two classes: End of explanation """ tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48) """ Explanation: The TransformerEncoder in the center of the classifier above is the bert_encoder. Inspecting the encoder, we see its stack of Transformer layers connected to those same three inputs: End of explanation """ checkpoint = tf.train.Checkpoint(encoder=bert_encoder) checkpoint.read( os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed() """ Explanation: Restore the encoder weights When built, the encoder is randomly initialized.
Restore the encoder's weights from the checkpoint: End of explanation """ # Set up epochs and steps epochs = 3 batch_size = 32 eval_batch_size = 32 train_data_size = len(glue_train_labels) steps_per_epoch = int(train_data_size / batch_size) num_train_steps = steps_per_epoch * epochs warmup_steps = int(epochs * train_data_size * 0.1 / batch_size) # creates an optimizer with learning rate schedule optimizer = # TODO 3: Your code goes here """ Explanation: Note: The pretrained TransformerEncoder is also available on TensorFlow Hub. See the Hub appendix for details. Set up the optimizer BERT adopts the Adam optimizer with weight decay (aka "AdamW"). It also employs a learning rate schedule that first warms up from 0 and then decays to 0. End of explanation """ type(optimizer) """ Explanation: This returns an AdamWeightDecay optimizer with the learning rate schedule set: End of explanation """ metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)] loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) bert_classifier.compile( optimizer=optimizer, loss=loss, metrics=metrics) # Train the model bert_classifier.fit(# TODO 4: Your code goes here) """ Explanation: To see an example of how to customize the optimizer and its schedule, see the Optimizer schedule appendix. Train the model The metric is accuracy and we use sparse categorical cross-entropy as loss. End of explanation """ my_examples = bert_encode( glue_dict = { 'sentence1':[ 'The rain in Spain falls mainly on the plain.', 'Look I fine tuned BERT.'], 'sentence2':[ 'It mostly rains on the flat lands of Spain.', 'Is it working? This does not match.'] }, tokenizer=tokenizer) """ Explanation: Now run the fine-tuned model on a custom example to see that it works.
Start by encoding some sentence pairs: End of explanation """ result = bert_classifier(my_examples, training=False) result = tf.argmax(result).numpy() result np.array(info.features['label'].names)[result] """ Explanation: The model should report class 1 "match" for the first example and class 0 "no-match" for the second: End of explanation """ export_dir='./saved_model' tf.saved_model.save(bert_classifier, export_dir=export_dir) reloaded = tf.saved_model.load(export_dir) reloaded_result = reloaded([my_examples['input_word_ids'], my_examples['input_mask'], my_examples['input_type_ids']], training=False) original_result = bert_classifier(my_examples, training=False) # The results are (nearly) identical: print(original_result.numpy()) print() print(reloaded_result.numpy()) """ Explanation: Save the model Often the goal of training a model is to use it for something, so export the model and then restore it to be sure that it works. End of explanation """ processor = nlp.data.classifier_data_lib.TfdsProcessor( tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2", process_text_fn=bert.tokenization.convert_to_unicode) """ Explanation: Appendix <a id=re_encoding_tools></a> Re-encoding a large dataset In this tutorial you re-encoded the dataset in memory, for clarity. This was only possible because glue/mrpc is a very small dataset. To deal with larger datasets the tf_models library includes some tools for processing and re-encoding a dataset for efficient training. The first step is to describe which features of the dataset should be transformed:
End of explanation """ training_dataset = bert.run_classifier.get_dataset_fn( train_data_output_path, max_seq_length, batch_size, is_training=True)() evaluation_dataset = bert.run_classifier.get_dataset_fn( eval_data_output_path, max_seq_length, eval_batch_size, is_training=False)() """ Explanation: Finally create tf.data input pipelines from those TFRecord files: End of explanation """ training_dataset.element_spec """ Explanation: The resulting tf.data.Datasets return (features, labels) pairs, as expected by keras.Model.fit: End of explanation """ def create_classifier_dataset(file_path, seq_length, batch_size, is_training): """Creates input dataset from (tf)records files for train/eval.""" dataset = tf.data.TFRecordDataset(file_path) if is_training: dataset = dataset.shuffle(100) dataset = dataset.repeat() def decode_record(record): name_to_features = { 'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64), 'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64), 'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64), 'label_ids': tf.io.FixedLenFeature([], tf.int64), } return tf.io.parse_single_example(record, name_to_features) def _select_data_from_record(record): x = { 'input_word_ids': record['input_ids'], 'input_mask': record['input_mask'], 'input_type_ids': record['segment_ids'] } y = record['label_ids'] return (x, y) dataset = dataset.map(decode_record, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.map( _select_data_from_record, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(batch_size, drop_remainder=is_training) dataset = dataset.prefetch(tf.data.AUTOTUNE) return dataset # Set up batch sizes batch_size = 32 eval_batch_size = 32 # Return Tensorflow dataset training_dataset = create_classifier_dataset( train_data_output_path, input_meta_data['max_seq_length'], batch_size, is_training=True) evaluation_dataset = create_classifier_dataset( eval_data_output_path, input_meta_data['max_seq_length'], eval_batch_size, 
is_training=False) training_dataset.element_spec """ Explanation: Create tf.data.Dataset for training and evaluation If you need to modify the data loading here is some code to get you started: End of explanation """ # Note: 350MB download. import tensorflow_hub as hub hub_model_name = "bert_en_uncased_L-12_H-768_A-12" hub_encoder = hub.KerasLayer(f"https://tfhub.dev/tensorflow/{hub_model_name}/3", trainable=True) print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables") """ Explanation: <a id="hub_bert"></a> TFModels BERT on TFHub You can get the BERT model off the shelf from TFHub. It would not be hard to add a classification head on top of this hub.KerasLayer End of explanation """ result = hub_encoder( inputs=dict( input_word_ids=glue_train['input_word_ids'][:10], input_mask=glue_train['input_mask'][:10], input_type_ids=glue_train['input_type_ids'][:10],), training=False, ) print("Pooled output shape:", result['pooled_output'].shape) print("Sequence output shape:", result['sequence_output'].shape) """ Explanation: Test run it on a batch of data: End of explanation """ hub_classifier = nlp.modeling.models.BertClassifier( bert_encoder, num_classes=2, dropout_rate=0.1, initializer=tf.keras.initializers.TruncatedNormal( stddev=0.02)) """ Explanation: At this point it would be simple to add a classification head yourself. The bert_models.classifier_model function can also build a classifier onto the encoder from TensorFlow Hub: End of explanation """ tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64) """ Explanation: The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. 
The BertEncoder model is now a single layer: End of explanation """ bert_encoder_config = config_dict.copy() # You need to rename a few fields to make this work: bert_encoder_config['attention_dropout_rate'] = bert_encoder_config.pop('attention_probs_dropout_prob') bert_encoder_config['activation'] = tf_utils.get_activation(bert_encoder_config.pop('hidden_act')) bert_encoder_config['dropout_rate'] = bert_encoder_config.pop('hidden_dropout_prob') bert_encoder_config['initializer'] = tf.keras.initializers.TruncatedNormal( stddev=bert_encoder_config.pop('initializer_range')) bert_encoder_config['max_sequence_length'] = bert_encoder_config.pop('max_position_embeddings') bert_encoder_config['num_layers'] = bert_encoder_config.pop('num_hidden_layers') bert_encoder_config manual_encoder = nlp.modeling.networks.BertEncoder(**bert_encoder_config) """ Explanation: <a id="model_builder_functions"></a> Low level model building If you need a more control over the construction of the model it's worth noting that the classifier_model function used earlier is really just a thin wrapper over the nlp.modeling.networks.BertEncoder and nlp.modeling.models.BertClassifier classes. Just remember that if you start modifying the architecture it may not be correct or possible to reload the pre-trained checkpoint so you'll need to retrain from scratch. 
Build the encoder: End of explanation """ checkpoint = tf.train.Checkpoint(encoder=manual_encoder) checkpoint.read( os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed() """ Explanation: Restore the weights: End of explanation """ result = manual_encoder(my_examples, training=True) print("Sequence output shape:", result[0].shape) print("Pooled output shape:", result[1].shape) """ Explanation: Test run it: End of explanation """ manual_classifier = nlp.modeling.models.BertClassifier( manual_encoder, num_classes=2, dropout_rate=bert_encoder_config['dropout_rate'], initializer=bert_encoder_config['initializer']) manual_classifier(my_examples, training=True).numpy() """ Explanation: Wrap it in a classifier: End of explanation """ optimizer = nlp.optimization.create_optimizer( 2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps) """ Explanation: <a id="optimizer_schedule"></a> Optimizers and schedules The optimizer used to train the model was created using the nlp.optimization.create_optimizer function: End of explanation """ epochs = 3 batch_size = 32 eval_batch_size = 32 train_data_size = len(glue_train_labels) steps_per_epoch = int(train_data_size / batch_size) num_train_steps = steps_per_epoch * epochs decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay( initial_learning_rate=2e-5, decay_steps=num_train_steps, end_learning_rate=0) plt.plot([decay_schedule(n) for n in range(num_train_steps)]) """ Explanation: That high-level wrapper sets up the learning rate schedules and the optimizer. The base learning rate schedule used here is a linear decay to zero over the training run: End of explanation """ warmup_steps = num_train_steps * 0.1 warmup_schedule = nlp.optimization.WarmUp( initial_learning_rate=2e-5, decay_schedule_fn=decay_schedule, warmup_steps=warmup_steps) # The warmup overshoots, because it warms up to the `initial_learning_rate` # following the original implementation. 
You can set # `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the # overshoot. plt.plot([warmup_schedule(n) for n in range(num_train_steps)]) """ Explanation: This, in turn, is wrapped in a WarmUp schedule that linearly increases the learning rate to the target value over the first 10% of training: End of explanation """ optimizer = nlp.optimization.AdamWeightDecay( learning_rate=warmup_schedule, weight_decay_rate=0.01, epsilon=1e-6, exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias']) """ Explanation: Then create the nlp.optimization.AdamWeightDecay using that schedule, configured for the BERT model: End of explanation """
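The warmup-then-decay shape those two schedules produce can be sketched without any TensorFlow machinery. The function below is a hypothetical, simplified stand-in for the `PolynomialDecay`-wrapped-in-`WarmUp` pair: a linear ramp to the peak rate over the first 10% of training, then a linear decay to zero (it deliberately ignores the overshoot behaviour noted in the comment above).

```python
def learning_rate_at(step, peak_lr, warmup_steps, total_steps):
    """Piecewise-linear schedule: ramp up to peak_lr, then decay to zero.

    A simplified analogue of wrapping a linear-decay schedule in a
    linear warmup, as described above; not the library implementation.
    """
    if step < warmup_steps:
        # Linear warmup: 0 -> peak_lr over the first warmup_steps.
        return peak_lr * step / warmup_steps
    # Linear decay: peak_lr at warmup_steps -> 0 at total_steps.
    remaining = total_steps - step
    decay_span = total_steps - warmup_steps
    return max(0.0, peak_lr * remaining / decay_span)

total_steps = 1000
warmup_steps = int(total_steps * 0.1)  # warm up over the first 10% of training
schedule = [learning_rate_at(s, 2e-5, warmup_steps, total_steps)
            for s in range(total_steps + 1)]
```

With these toy numbers the rate peaks at `2e-5` at step 100 and reaches zero at the final step, which is the triangular shape the two `plt.plot` calls above draw.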
PMEAL/OpenPNM
examples/simulations/transient/transient_fickian_diffusion.ipynb
mit
import numpy as np import openpnm as op %config InlineBackend.figure_formats = ['svg'] np.random.seed(10) %matplotlib inline np.set_printoptions(precision=5) """ Explanation: Transient Fickian Diffusion The package OpenPNM allows for the simulation of many transport phenomena in porous media such as Stokes flow, Fickian diffusion, advection-diffusion, transport of charged species, etc. Transient and steady-state simulations are both supported. An example of a transient Fickian diffusion simulation through a Cubic pore network is shown here. First, OpenPNM is imported. End of explanation """ ws = op.Workspace() ws.settings["loglevel"] = 40 proj = ws.new_project() """ Explanation: Define new workspace and project End of explanation """ shape = [13, 29, 1] net = op.network.Cubic(shape=shape, spacing=1e-4, project=proj) """ Explanation: Generate a pore network An arbitrary Cubic 3D pore network is generated consisting of a layer of $29\times13$ pores with a constant pore-to-pore center spacing of ${10}^{-4}{m}$. End of explanation """ geo = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts) """ Explanation: Create a geometry Here, a geometry corresponding to the created network is created. The geometry contains information about the size of pores and throats in the network such as length and diameter, etc. OpenPNM has many prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. In this example, a simple geometry known as SpheresAndCylinders that assigns random diameter values to pores and throats, with certain constraints, is used. End of explanation """ phase = op.phases.Water(network=net) """ Explanation: Add a phase Then, a phase (water in this example) is added to the simulation and assigned to the network. The phase contains the physical properties of the fluid considered in the simulation such as the viscosity, etc. 
Many predefined phases are available in OpenPNM. End of explanation """ phys = op.physics.GenericPhysics(network=net, phase=phase, geometry=geo) """ Explanation: Add a physics Next, a physics object is defined. The physics object stores information about the different physical models used in the simulation and is assigned to specific network, geometry and phase objects. This ensures that the different physical models will only have access to information about the network, geometry and phase objects to which they are assigned. In fact, models (such as Stokes flow or Fickian diffusion) require information about the network (such as the connectivity between pores), the geometry (such as the pore and throat diameters), and the phase (such as the diffusivity coefficient). End of explanation """ phase['pore.diffusivity'] = 2e-09 """ Explanation: The diffusivity coefficient of the considered chemical species in water is also defined. End of explanation """ mod = op.models.physics.diffusive_conductance.ordinary_diffusion phys.add_model(propname='throat.diffusive_conductance', model=mod, regen_mode='normal') """ Explanation: Defining a new model The physical model, consisting of Fickian diffusion, is defined and attached to the physics object previously defined. End of explanation """ fd = op.algorithms.TransientFickianDiffusion(network=net, phase=phase) """ Explanation: Define a transient Fickian diffusion algorithm Here, an algorithm for the simulation of transient Fickian diffusion is defined. It is assigned to the network and phase of interest to be able to retrieve all the information needed to build systems of linear equations. End of explanation """ fd.set_value_BC(pores=net.pores('back'), values=0.5) fd.set_value_BC(pores=net.pores('front'), values=0.2) """ Explanation: Add boundary conditions Next, Dirichlet boundary conditions are added over the back and front boundaries of the network. 
End of explanation """ x0 = 0.2 tspan = (0, 100) saveat = 5 """ Explanation: Define initial conditions Initial conditions must be specified when alg.run is called as alg.run(x0=x0), where x0 could either be a scalar (in which case it'll be broadcast to all pores), or an array. Setup the transient algorithm settings The settings of the transient algorithm are updated here. When calling alg.run, you can pass the following arguments: - x0: initial conditions - tspan: integration time span - saveat: the interval at which the solution is to be stored End of explanation """ print(fd.settings) """ Explanation: Print the algorithm settings One can print the algorithm's settings as shown here. End of explanation """ soln = fd.run(x0=x0, tspan=tspan, saveat=saveat) """ Explanation: Note that the quantity setting corresponds to the quantity being solved for. Run the algorithm The algorithm is run here. End of explanation """ print(fd) """ Explanation: Post process and export the results Once the simulation has been successfully performed, the solution at every time step is stored within the algorithm object. The algorithm's stored information is printed here. End of explanation """ soln(10) """ Explanation: Note that the solutions at every exported time step contain the @ character followed by the time value. Here the solution is exported every $5s$, as well as at the final time step. To print the solution at $t=10s$: End of explanation """ phase.update(fd.results()) """ Explanation: Here the solution is stored in the phase before export. 
End of explanation """ import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable c = fd.x.reshape(shape) fig, ax = plt.subplots(figsize=(6, 6)) im = ax.imshow(c[:,:,0]) ax.set_title('concentration (mol/m$^3$)') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="4%", pad=0.1) plt.colorbar(im, cax=cax); """ Explanation: Visualization using Matplotlib One can perform post-processing and visualization using the exported files in external software such as ParaView. Additionally, the Python library Matplotlib can be used as shown here to plot the concentration color map at steady state. End of explanation """
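As a side note, the kind of computation the transient algorithm performs can be illustrated with a toy 1-D analogue: explicit time stepping of a discretized diffusion equation between two fixed-concentration boundaries. Everything below is a simplified, hypothetical illustration (plain finite differences on a uniform chain of "pores"), not OpenPNM's actual pore-network formulation.

```python
import numpy as np

def diffuse_1d(c_left, c_right, n_pores, n_steps, alpha=0.25):
    """Explicit time stepping of 1-D diffusion with Dirichlet ends.

    alpha = D*dt/dx**2 must stay <= 0.5 for this explicit scheme to be
    stable.  Toy illustration of a transient diffusion solve only.
    """
    c = np.full(n_pores, c_right, dtype=float)
    c[0], c[-1] = c_left, c_right  # fixed-value (Dirichlet) boundaries
    for _ in range(n_steps):
        # New interior value is a convex combination of its neighbours.
        c[1:-1] = c[1:-1] + alpha * (c[2:] - 2 * c[1:-1] + c[:-2])
    return c

# Same boundary values as the network above: 0.5 on one side, 0.2 on the other.
profile = diffuse_1d(c_left=0.5, c_right=0.2, n_pores=29, n_steps=5000)
```

At long times the profile relaxes to a straight line between the two boundary values — the 1-D counterpart of the steady concentration map plotted above.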
elastic/examples
Machine Learning/Query Optimization/notebooks/1 - Query tuning.ipynb
apache-2.0
%load_ext autoreload %autoreload 2 import importlib import os import sys from elasticsearch import Elasticsearch from skopt.plots import plot_objective # project library sys.path.insert(0, os.path.abspath('..')) import qopt importlib.reload(qopt) from qopt.notebooks import evaluate_mrr100_dev, optimize_query_mrr100 from qopt.optimize import Config # use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/) # es = Elasticsearch('http://localhost:9200') es = Elasticsearch('http://35.234.93.126:9200') # set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls # max_concurrent_searches = 10 max_concurrent_searches = 30 index = 'msmarco-document' template_id = 'cross_fields' """ Explanation: Tuning query parameters for the MSMARCO Document dataset The following shows a principled, data-driven approach to tuning parameters of a basic query, such as field boosts, using the MSMARCO Document dataset. End of explanation """ %%time _ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params={ 'operator': 'OR', 'minimum_should_match': 50, # in percent/% 'tie_breaker': 0.0, 'url|boost': 1.0, 'title|boost': 1.0, 'body|boost': 1.0, }) """ Explanation: Baseline evaluation Let's look at a basic query that might be used to search documents that have multiple fields. This query uses the multi_match query of type cross_fields to search for query terms across the url, title, and body fields. Here's what the search template we'll be using looks like: json { "query": { "multi_match": { "type": "cross_fields", "query": "{{query_string}}", "operator": "{{operator}}", "minimum_should_match": "{{minimum_should_match}}%", "tie_breaker": "{{tie_breaker}}", "fields": [ "url^{{url|boost}}", "title^{{title|boost}}", "body^{{body|boost}}" ] } } } First we'll run an evaluation on the "development" (or dev in MSMARCO terms) dataset using the standardized metric for MSMARCO, MRR@100. 
This will show us what our baseline is that we want to improve against. We'll use default values for all parameters in the template above. End of explanation """ %%time _ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id, config_space=Config.parse({ 'method': 'grid', 'space': { 'operator': ['OR', 'AND'], 'minimum_should_match': [30, 40, 50, 60, 70], }, 'default': { 'tie_breaker': 0.0, 'url|boost': 1.0, 'title|boost': 1.0, 'body|boost': 1.0, } })) """ Explanation: Now we have a baseline score that we'll iterate towards improving. Getting started with tuning: low dimensionality and discrete parameter values There are a lot of parameters that we need to choose for this query. There are three boost parameters (one for each field) and then the operator, minimum_should_match, and tie_breaker parameters. We're going to break this up into steps and optimize using basic approaches first. Breaking things up like this allows us to tackle the problem in steps and avoids introducing a large amount of complexity and what is called a large parameter space. A parameter space is the total scope of all parameters combined, and all possible values of those parameters. If we have 3 parameters, we have a 3-dimensional space and each possible combination of parameters is what makes up the coordinates of that space. Let's start with just using all the default values but looking at the difference between the operator values ['OR', 'AND'] in combination with a few options for minimum_should_match, since these parameters sometimes have an effect together. Since we have just two dimensions and a few possible values each, the parameter space is very small. In this case it's operator:2 * minimum_should_match:5 = 10. That's pretty small and so we'll use a simple grid search that will test every possible combination of those parameters, 10 tests in all. 
When we do this search over the parameter space, we'll use a different dataset to avoid overfitting (coming up with parameter values that work only on one dataset) and then do another evaluation on the dev dataset (as above) after finding what we think are the optimal parameter values in order to check our results. To make this process a bit faster (although less robust), we'll continue to use MRR@100 but with a training query dataset of just 1,000 queries compared to the over 3,000 in the development dataset. End of explanation """ %%time _, _, final_params_msmtb, _ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id, config_space=Config.parse({ 'method': 'grid', 'space': { 'minimum_should_match': [30, 40, 50, 60, 70], 'tie_breaker': [0.0, 0.25, 0.5, 0.75, 1.0], }, 'default': { 'operator': 'OR', 'url|boost': 1.0, 'title|boost': 1.0, 'body|boost': 1.0, } })) """ Explanation: Well that looks pretty good and we see some improvements on the training set. 
Let's evaluate on the development dataset now using the best parameters we've found so far. This will show us where we are relative to the baseline query. End of explanation """ %%time _ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_msmtb) """ Explanation: Definitely a good improvement and all we've done is optimize a few basic query parameters! Advanced tuning: high dimensionality and continuous parameter values So the question now is, can we improve this query further by tuning each of the field boosts? One of the difficulties with picking field boosts intuitively is that they are not necessarily interpretable in relation to each other. That is, a weight of 2.0 on the title field does not mean that it is two times more important than the body field with a boost of 1.0. In order to find the best field boosts, we'll need to search over a parameter space that includes a continuous range from 0.0 to 10.0. We can't just use a grid search as we did above, testing each possible combination of the three field boosts, as an exhaustive search would require 1,000 evaluations if we test just at steps of 1 and no finer (10 steps per parameter, 3 parameters, makes 10 * 10 * 10 evaluations). Since the evaluation method is time-consuming and a grid search over all combinations would be prohibitive, we'll use Bayesian optimization to pick the combinations of boosts. We're going to use a lot of iterations and probably more than necessary, but it'll be useful in a later step to plot the parameter space. 
Note that there are two iteration parameters that we need to set. First, num_iterations controls the total number of iterations in the process. Second, num_initial_points controls the number of iterations that will randomly select parameter values from the parameter space, which are used to seed the Bayesian optimization process. That means the number of iterations that actually use Bayesian optimization (prior results used to select the next parameter values to try) is in fact num_iterations - num_initial_points. It's important to not set the num_initial_points too low or your Bayesian process may find the best parameters much more slowly (i.e. need more total num_iterations). Since we already found the optimal operator, minimum_should_match and tie_breaker parameters in the steps above, we'll bring those forward as default parameters that don't need to be optimized any further. The current best parameters are as follows. 
Note that there are two iteration parameters that we need to set. First the num_iterations which controls the total numnber of iterations in the process. Second the num_initial_points is used to control the number of iterations that will randomly select parameter values from the parameter space, which are used to seed the Bayesian optimization process. That means the number of iterations that actually use Bayesian optimization (prior results used to select the next parameter values to try) is in fact num_iterations - num_initial_points. It's important to not set the num_initial_points too low or your Bayesian process may find the best parameters much more slowly (i.e. need more total num_iterations). Since we already found the optimal operator, minimum_should_match and tie_breaker parameters in the steps above, we'll bring those forward as default parameters that don't need to optimized any further. The current best parameters are as follows. End of explanation """ _ = plot_objective(metadata_boosts, sample_source='result') """ Explanation: Great! It looks like we made an improvement over both the baseline and the preivous best parameters found. This example shows how important it is to not tune field boosts manually, as there is no intuitive relationship between the boost values of fields. Note that due to some randomness in executions of this process, re-running the optimization process may provide slightly different optimal boosts. Most importantly for field boost tuning in general, the relative value between the fields should be about the same. Exploring a parameter space Now that we see the results of a parameter tuning process we can actually look at the details to understand a little bit about the field boost parameter space in particlar. That is, for every combination of the three boost parameters that we tried, we can get a 3-dimensional space and look at what kind of relationships there are between the various dimensions in the parameter space. 
Here's a plot showing the three parameters and all the combinations that were attempted. Note that we invert the MRR score (multiply by -1) since the library we are relying on (scikit-optimize) wants to minimize a score, while MRR should be maximized. End of explanation """ _ = plot_objective(metadata_boosts, sample_source='result') """ Explanation: Experiment: all-in-one tuning You might be wondering why bother doing the tuning step-wise, first searching through a few parameters, then moving on to the next set. When parameters are dependent on others, it might make sense to put all the parameters into a single, giant parameter space and try to optimize all at once. This is a more difficult optimization problem as the space is huge, but let's give it a shot and see what happens. Same number of iterations, same parameter space First we'll use the same total number of iterations and combined parameter space as in the above steps. This makes it more of an apples-to-apples comparison. However, since we can easily search over continuous parameter spaces using Bayesian optimization techniques, we'll make it slightly harder but more complete by allowing for any value in our range of minimum_should_match and tie_breaker, instead of providing just the limited, discrete values as we did above (that was more for a grid search example than anything else). 
Note that in the following examples we suppress progress output since we are using so many iterations. End of explanation """ %%time _, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id, config_space=Config.parse({ 'method': 'bayesian', 'num_iterations': 75, 'num_initial_points': 30, 'space': { 'minimum_should_match': { 'low': 30, 'high': 70 }, 'tie_breaker': { 'low': 0.0, 'high': 1.0 }, 'url|boost': { 'low': 0.0, 'high': 10.0 }, 'title|boost': { 'low': 0.0, 'high': 10.0 }, 'body|boost': { 'low': 0.0, 'high': 10.0 }, }, 'default': { 'operator': 'OR', } }), verbose=False) _ = plot_objective(metadata_boosts, sample_source='result') %%time _ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts) """ Explanation: Ok, so not a big difference from the step-wise method we used above, but maybe it was a bit simpler to just throw in a huge parameter space. More iterations, smaller parameter space using hints from prior grid search Let's see if we can do even better by throwing more iterations into it, and by using a smaller search space for the parameters we already have a good range for (minimum_should_match and tie_breaker) from the grid search above. This is kind of a hint and maybe not a fair comparison, but let's see if it makes any difference. We're not against using any prior knowledge to our advantage! 
End of explanation """ %%time _, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id, config_space=Config.parse({ 'method': 'random', 'num_iterations': 75, 'space': { 'minimum_should_match': { 'low': 30, 'high': 70 }, 'tie_breaker': { 'low': 0.0, 'high': 1.0 }, 'url|boost': { 'low': 0.0, 'high': 10.0 }, 'title|boost': { 'low': 0.0, 'high': 10.0 }, 'body|boost': { 'low': 0.0, 'high': 10.0 }, }, 'default': { 'operator': 'OR', } }), verbose=False) _ = plot_objective(metadata_boosts, sample_source='result') %%time _ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts) """ Explanation: Looks like we did about the same as the other methods in terms of MRR@100. In terms of simplicity though, this approach definitely wins as we can throw all the parameters in at once and not have to think too much about order and parameter dependencies. Random search Something we haven't tried yet is a fully random search. When initializing Bayesian optimization, we're doing a uniform random sample from the parameter space, then using those points to seed the process. A common approach is actually to just do all your search iterations with random parameters. Let's use the same parameter space and try out a fully random search with a lot of iterations and see what happens. End of explanation """
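For illustration, the random search strategy used in that last run is simple enough to sketch from scratch: draw each parameter uniformly from its range and keep the best-scoring trial. Everything below is hypothetical — the scoring function is made up and merely stands in for the MRR@100 evaluation, which in the real setup is a round-trip to Elasticsearch.

```python
import random

def random_search(space, score_fn, n_iterations, seed=0):
    """Uniformly sample each parameter range and keep the best trial."""
    rng = random.Random(seed)
    best_params, best_score = None, float('-inf')
    for _ in range(n_iterations):
        params = {name: rng.uniform(low, high)
                  for name, (low, high) in space.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {'url|boost': (0.0, 10.0),
         'title|boost': (0.0, 10.0),
         'body|boost': (0.0, 10.0)}

# Stand-in objective (hypothetical): peaks when title is boosted twice as
# much as url and body sits near 5.  A real run would call the Rank
# Evaluation API here instead.
def toy_score(p):
    return -abs(p['title|boost'] - 2 * p['url|boost']) - abs(p['body|boost'] - 5)

best, score = random_search(space, toy_score, n_iterations=500)
```

With a real metric in place of `toy_score`, each iteration costs a full evaluation over the training queries, which is exactly why the iteration budget is the knob being traded off in the runs above.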
richjimenez/mysql-data-raspberry-pi
deliver/lesson-2-jupyter-notebook-for-data-analysis.ipynb
mit
%load_ext sql """ Explanation: Lesson 2: Setup Jupyter Notebook for Data Analysis Learning Objectives: <ol> <li>Create Python tools for data analysis using Jupyter Notebooks</li> <li>Learn how to access data from MySQL databases for data analysis</li> </ol> Exercise 1: Install Anaconda Access https://conda.io/miniconda.html and download the Windows Installer.<br> Run the following commands on the Anaconda command prompt:<br> <pre> conda install numpy pandas matplotlib conda update conda </pre> Sometimes data analysis requires previous versions of Python or other tools for a project.<br> Next we will set up three environments that can be used with various project requirements. Exercise 2: Configure conda environments for Python 2 and Python 3 data analysis To create a <b>Python 2</b> environment run the following from the Anaconda command prompt:<br> <pre> conda update conda -y conda create -n py2 python=2 anaconda jupyter notebook -y </pre> To activate the environment:<br> <pre>activate py2</pre> On MacOS or Linux: <pre>source activate py2</pre> To deactivate the environment:<br> <pre>deactivate py2</pre> On MacOS or Linux: <pre>source deactivate py2</pre> To create the <b>Python 3</b> environment run the following from the Anaconda command prompt: <pre> conda create -n py3 python=3 anaconda jupyter notebook -y </pre> To activate the environment: <pre>activate py3</pre> On MacOS or Linux: <pre>source activate py3</pre> To deactivate the environment: <pre>deactivate py3</pre> On MacOS or Linux: <pre>source deactivate py3</pre> Setup Jupyter Notebook to access data from MySQL databases Exercise 3: Load the mysql libraries into the environment and access data from MySQL database Run the following commands from the Anaconda command line:<br> <pre> pip install ipython-sql conda install mysql-python </pre> This will install sql magic capabilities in Jupyter Notebook. Load the sql magic jupyter notebook extension: End of explanation """ %config SqlMagic.autopandas=True """ Explanation: Configure sql magic to output queries as pandas dataframes: End of explanation """ import pandas as pd import numpy as np """ Explanation: Import the data analysis libraries: End of explanation """ import MySQLdb """ Explanation: Import the MySQLdb library End of explanation """ %%sql mysql://pilogger:foobar@172.20.101.81/pidata SELECT * FROM temps LIMIT 10; """ Explanation: Connect to the MySQL database using sql magic commands The connection to the MySQL database uses the following format: <pre> mysql://username:password@hostname/database </pre> To start a sql command block, type: <pre>%%sql</pre> Note: Make sure the %%sql is at the top of the cell<br> Then the remaining lines can contain SQL code.<br> Example: to connect to <b>pidata</b> database and select records from the <b>temps</b> table: End of explanation """ df = %sql SELECT * FROM temps WHERE datetime > date(now()); df """ Explanation: Example to create a pandas dataframe using the results of a mysql query End of explanation """ type(df) """ Explanation: Note the data type of the dataframe df: End of explanation """ %%sql use pidata; show tables; """ Explanation: Use %%sql to start a block of sql statements Example: Show tables in the pidata database End of explanation """ # Enter the values for your database connection database = "pidata" # e.g. "pidata" hostname = "172.20.101.81" # e.g.: "mydbinstance.xyz.us-east-1.rds.amazonaws.com" port = 3306 # e.g. 3306 uid = "pilogger" # e.g. "user1" pwd = "foobar" # e.g. 
"Password123" conn = MySQLdb.connect( host=hostname, user=uid, passwd=pwd, db=database ) cur = conn.cursor() """ Explanation: Exercise 4: Another way to access mysql data and load into a pandas dataframe Connect using the mysqldb python library: End of explanation """ new_dataframe = pd.read_sql("SELECT * \ FROM temps", con=conn) conn.close() new_dataframe """ Explanation: Create a dataframe from the results of a sql query using the pandas read_sql function: End of explanation """ %%sql mysql://admin:admin@172.20.101.81/pidata DROP TABLE if exists temps3; CREATE TABLE temps3 ( device varchar(20) DEFAULT NULL, datetime datetime DEFAULT NULL, temp float DEFAULT NULL, hum float DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; """ Explanation: Now let's create the table to hold the sensor data from our Raspberry Pi <pre> <b>Logon using an admin account and create a table called temps3 to hold sensor data:</b> The table contains the following fields: device -- VARCHAR, Name of the device that logged the data datetime -- DATETIME, Date time in ISO 8601 format YYYY-MM-DD HH:MM:SS temp -- FLOAT, temperature data hum -- FLOAT, humidity data </pre> End of explanation """ %%sql mysql://admin:admin@172.20.101.81 CREATE USER 'user1'@'%' IDENTIFIED BY 'logger'; GRANT SELECT, INSERT, DELETE, UPDATE ON pidata.temps3 TO 'user1'@'%'; FLUSH PRIVILEGES; %sql select * from mysql.user; """ Explanation: Next we will create a user to access the newly created table that will be used by the Raspberry Pi program Example: Start a connection using an admin account, create a new user called user1. 
Grant limited privileges to the pidata.temps3 table Note: Creating a user with @'%' allows the user to access the database from any host End of explanation """ %%sql mysql://user1:logger@172.20.101.81/pidata select * from temps3; """ Explanation: Next we will test access to the newly created table using the new user Start a new connection using the new user End of explanation """ for x in range(10): %sql INSERT INTO temps3 (device,datetime,temp,hum) VALUES('pi222',date(now()),73.2,22.0); %sql SELECT * FROM temps3; """ Explanation: Let's add some test data to make sure we can insert using the new user End of explanation """ %sql DELETE FROM temps3; %sql SELECT * FROM temps3; %%sql mysql://admin:admin@172.20.101.81 drop user if exists 'user1'@'%'; %sql select * from mysql.user; """ Explanation: Now we will delete the rows in the database End of explanation """
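The user-provisioning statements above can also be generated by a small helper rather than typed by hand. The function below is a hypothetical convenience that only renders the SQL as strings — it does no identifier or password escaping, so use it only with trusted inputs, and execute the output through your own admin connection (for example via the %sql magic).

```python
def provisioning_sql(user, password, table,
                     privileges=('SELECT', 'INSERT', 'DELETE', 'UPDATE'),
                     host='%'):
    """Render CREATE USER / GRANT / FLUSH statements for a limited user.

    Mirrors the statements used above to create the logging user.
    Values are interpolated verbatim (no escaping) -- trusted input only.
    """
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT {', '.join(privileges)} ON {table} TO '{user}'@'{host}';",
        "FLUSH PRIVILEGES;",
    ]

statements = provisioning_sql('user1', 'logger', 'pidata.temps3')
```

Granting only the four data-manipulation privileges (and no DDL rights) is what keeps the Raspberry Pi's credentials low-risk: even if they leak, they cannot drop tables or create users.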
kirichoi/tellurium
examples/notebooks/core/tellurium_stochastic.ipynb
apache-2.0
from __future__ import print_function import tellurium as te te.setDefaultPlottingEngine('matplotlib') %matplotlib inline import numpy as np r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 40') r.integrator = 'gillespie' r.integrator.seed = 1234 results = [] for k in range(1, 50): r.reset() s = r.simulate(0, 40) results.append(s) r.plot(s, show=False, alpha=0.7) te.show() """ Explanation: Back to the main Index Stochastic simulation Stochastic simulations can be run by changing the current integrator type to 'gillespie' or by using the r.gillespie function. End of explanation """ results = [] for k in range(1, 20): r.reset() r.setSeed(123456) s = r.simulate(0, 40) results.append(s) r.plot(s, show=False, loc=None, color='black', alpha=0.7) te.show() """ Explanation: Seed Setting the identical seed for all repeats results in identical traces in each simulation. End of explanation """ import tellurium as te import numpy as np r = te.loada('S1 -> S2; k1*S1; k1 = 0.02; S1 = 100') r.setSeed(1234) for k in range(1, 20): r.resetToOrigin() res1 = r.gillespie(0, 10) # change in parameter after the first half of the simulation r.k1 = r.k1*20 res2 = r.gillespie (10, 20) sim = np.vstack([res1, res2]) te.plot(sim[:,0], sim[:,1:], alpha=0.7, names=['S1', 'S2'], tags=['S1', 'S2'], show=False) te.show() """ Explanation: Combining Simulations You can combine two timecourse simulations and change e.g. parameter values in between each simulation. The gillespie method simulates up to the given end time 10, after which you can make arbitrary changes to the model, then simulate again. When using the te.plot function, you can pass the parameter names, which controls the names that will be used in the figure legend, and tags, which ensures that traces with the same tag will be drawn with the same color. End of explanation """
adelle207/pyladies.cz
original/v1/s010-data/data.ipynb
mit
import numpy pole = numpy.array([0, 1, 2, 3, 4, 5, 6]) pole pole[0] pole[1:-2] pole[0] = 9 pole """ Explanation: IPython IPython je nástroj, který zjednodušuje interaktivní práci v Pythonu, zvlášť výpočty a experimenty. Dá se spustit z příkazové řádky jako ipython – pak se chová podobně jako python, jen s barvičkama a různýma vychytávkama navíc. My si ale vyzkoušíme IPython Notebook, který běží v prohlížeči, a umožní nám se vracet k předchozím příkazům, zobrazovat obrázky, nebo třeba prokládat kód hezky formátovanými poznámkami (jako je ta, kterou právě čteš). Zadej: $ ipython notebook IPython Notebook se ovládá docela intuitivně: do políčka napíšeš kód, zmáčkneš <kbd>Shift+Enter</kbd>. Kód se provede, zobrazí se výsledek, a objeví se políčko pro další kousek kódu. Když je kód špatně, dá se opravit a pomocí <kbd>Shift+Enter</kbd> spustit znovu. Numpy Základní knihovna, používaná na výpočty, je Numpy, která definuje typ array (pole). Takové pole se chová v mnoha ohledech podobně jako seznam. Když budeme chtít pole vytvořit, použijeme nejčastěji právě seznam (nebo jiný objekt který se dá použít pro for cyklus): End of explanation """ pole.append(1) """ Explanation: V jiných ohledech se ale chová jinak. Například takovému poli nejde měnit velikost – nemá metody jako append: End of explanation """ pole[0] = 'ahoj' """ Explanation: Další zajímavost je, že pole má daný typ prvků. Když uděláme pole celých čísel, nejdou do něj pak vkládat třeba řetězce: End of explanation """ pole[0] = 3.9 pole """ Explanation: ... a pokud do takového pole přiřadíme desetinné číslo, zaokrouhlí se dolů (pomocí funkce int): End of explanation """ pole_float = numpy.array([0, 1, 2, 3, 4, 5, 6], dtype=float) pole_float pole_str = numpy.array([0, 1, 2, 3, 4, 5, 6], dtype=str) pole_str """ Explanation: Typ pole se dá zadat při jeho vytváření, pomocí argumentu dtype. 
End of explanation
"""
pole1 = numpy.array(range(10))
pole1
pole1 + 5
pole1 / 4
pole2 = numpy.array(range(-10, 20, 3))
pole2
pole1 + pole2
pole1 * pole2
"""
Explanation: Operations with arrays
Arithmetic operations such as addition are applied to the individual elements – it is not like with lists.
If we add a number to an array, it gets added to every element of the array; the same goes for multiplication and so on. If we add an array to an array, the corresponding elements are added together.
End of explanation
"""
numpy.hstack([pole1, pole2])
"""
Explanation: And how do you join two arrays together, the way + does with lists? Numpy has a special function for that: hstack.
End of explanation
"""
numpy.sin(pole)
"""
Explanation: All Numpy functions can be found in the documentation, although the uninitiated may find it quite hard to navigate at first. The basic mathematical functions are described under "Available ufuncs".
End of explanation
"""
pole1 < pole2
"""
Explanation: Also note that even the comparison operators work element by element. The result is an array of bools.
End of explanation
"""
if any(pole1 < pole2):
    print('Některé prvky pole1 jsou menší než odpovídající prvky pole2')
if all(pole1 < pole2):
    print('Všechny prvky pole1 jsou menší než odpovídající prvky pole2')
"""
Explanation: If you want to use such an array in an if, use the function any (returns True if any of the elements are true) or all (returns True if all of the elements are true).
End of explanation
"""
x = numpy.linspace(-10, 10, 1000000)
x
"""
Explanation: Arrays, mathematical functions, and graphs
In practice, the arrays used for data processing are huge. Make an array with a million elements, using the linspace function, which takes a minimum value, a maximum value, and the length of the array. (It will take several MB of memory; if you have a smaller computer, try just 10 thousand.)
End of explanation
"""
sin_x = numpy.sin(x)
sin_x
"""
Explanation: Then call the sin function on all of those numbers at once. Notice how fast it is – if you used a Python for loop for this, it would take much longer.
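That speed difference is easy to measure yourself. This sketch compares numpy.sin applied to a whole array against an equivalent element-by-element Python loop (the exact timings will vary by machine):

```python
import math
import time
import numpy as np

x = np.linspace(-10, 10, 100_000)

start = time.perf_counter()
vectorized = np.sin(x)                        # one call on the whole array
t_vec = time.perf_counter() - start

start = time.perf_counter()
looped = np.array([math.sin(v) for v in x])   # element-by-element Python loop
t_loop = time.perf_counter() - start

# both compute the same values; the vectorized version is typically much faster
```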
Generally, when doing computations with Numpy, it is good to use arrays and built-in Numpy functions instead of Python loops.
End of explanation
"""
from matplotlib import pyplot
%matplotlib inline
"""
Explanation: Now plot that sine output using the matplotlib library. First some imports and setup: the line starting with % is an IPython convenience that makes the graphs render directly in the browser.
End of explanation
"""
pyplot.plot(x, sin_x)
"""
Explanation: Now it is enough to use the function matplotlib.pyplot.plot, which (in its basic form) takes two arguments – an array of values for the x axis, and an array of values for the y axis.
End of explanation
"""
matice = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
matice
matice[0]
matice[0:-1]
matice[0][2]
"""
Explanation: Two-dimensional arrays
Numpy arrays do not have to be just rows of numbers, though. They can also be tables of numbers! Two- (and more-)dimensional arrays are called matrices, and they behave a bit like lists of lists.
End of explanation
"""
matice[0, 2]
matice[0, 1:]
matice[:, :2]
matice[::2, ::2]
"""
Explanation: Unlike lists of lists, matrices can be indexed with a pair of numbers (or more complex indices) separated by a comma. That makes it easy to select whole rows or columns.
End of explanation
"""
matice2 = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
matice2
matice2[1, 1] = 0
matice2
matice2[::2, ::2] = 0
matice2
matice2[::2, ::2] = numpy.array([[-1, -3], [-7, -9]])
matice2[1, 1] = -5
matice2
"""
Explanation: And values can also be assigned into the selected rows or columns.
End of explanation
"""
pyplot.imshow(matice2, interpolation='none', cmap='gray')
# Named arguments specify exactly how the graph is drawn:
# interpolation='none' makes sense for small matrices, and
# cmap='gray' draws in shades of grey:
# black is the smallest number, white the largest
"""
Explanation: Using the matplotlib library, we can "draw" a matrix as an image.
End of explanation
"""
obrazek = numpy.zeros([100, 100])
for x in range(-50, 50):
    for y in range(-50, 50):
        obrazek[x+50, y+50] = x ** 2 + y ** 2
pyplot.imshow(obrazek)
"""
Explanation: Each number gets assigned a colour, and together they form an image.
End of explanation
"""
n = numpy.linspace(-50, 50, 1000)
x, y = numpy.meshgrid(n, n)
z = x ** 2 + y ** 2
pyplot.imshow(z)
"""
Explanation: As written above, it is better not to use for loops, because with a large amount of numbers they would be slow. Numpy has plenty of functions that can replace such loops – you just need to know them, or be able to find them in the documentation.
End of explanation
"""
from PIL import Image
# Image © Mokele, http://en.wikipedia.org/wiki/User:HCA
# http://commons.wikimedia.org/wiki/File:Ball_python_lucy.JPG
krajta = numpy.array(Image.open('krajta.jpg'))
krajta
"""
Explanation: Three-dimensional arrays?!
Numpy arrays can also have more than two dimensions. A three-dimensional matrix is like a list of lists of lists. In such a matrix, each pixel of the image we drew a moment ago would itself contain several values.
An example of a three-dimensional matrix is a computer image – for each pixel we record the intensity of red, green, and blue (RGB) – that is, 3 values.
End of explanation
"""
pyplot.imshow(krajta)
"""
Explanation: The imshow function can handle such a thing, and draws an image in the right colours. (It just still thinks it is a graph, so it automatically adds axes.)
End of explanation
"""
pyplot.imshow(krajta[400:-200, 0:300])
"""
Explanation: Indexing works the same as with two-dimensional arrays. For images, the first number represents the row, the second the column, ...
End of explanation
"""
krajta[:, :, 2] = krajta[:, :, 2] / 2
pyplot.imshow(krajta)
"""
Explanation: ... and the third index is the colour.
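The same indexing works on any three-dimensional array, so the idea can be tried without an image file. This sketch builds a tiny 2×2 "image" by hand (the values are made up):

```python
import numpy as np

# a 2x2 image: the last axis holds the (red, green, blue) intensities
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 200        # fill the red channel everywhere
img[0, 0, 2] = 90        # blue only in the top-left pixel

red_channel = img[:, :, 0]    # 2x2 slice of red intensities
top_left = img[0, 0]          # the (r, g, b) triple of one pixel
```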
You can, for instance, reduce the intensity of blue to make the image turn yellow:
End of explanation
"""
krajta[..., 0] = krajta[..., 0] / 2
pyplot.imshow(krajta)
"""
Explanation: (And to avoid repeating :, :, :, : when indexing multi-dimensional arrays, it can be replaced with three dots:)
End of explanation
"""
import pandas
data = pandas.read_csv('SLDB_ZV.CSV', encoding='cp1250')
data.head()  # the head() method shows only the first few rows, so that the output is not so long.
"""
Explanation: Tabular data with Pandas
So, now that you know something about how Numpy and its arrays work, you can (with a bit of googling) boldly dive into analysing images, sounds, mathematical functions, or physical simulations. If you know e.g. Matlab, try googling "Python" plus the name of the Matlab function; you will often find that Numpy (or another library, such as Scipy) contains a similar function too. And that you do not need an expensive licence to use it :)
Most data that gets analysed will not be matrices as such, but tables with columns. We will try to process data from the census of people, houses and flats published by the Czech Statistical Office. For us, the most convenient format to download is CSV; to understand it, download the "Data description" PDF as well. Have a look at the downloaded file in a text editor, so you see what it is about. Then load it using Pandas:
End of explanation
"""
data.columns = ['typ', 'název', 'číslo', 'kód', 'obyvatel', 'muži', 'ženy',
                'obyv_0', 'obyv_15', 'obyv_65', 'aktivní', 'zamestnaní',
                'domy', 'byty', 'domácnosti']
data.head()
"""
Explanation: The column names from the Czech Statistical Office do not make much sense, so rename them:
End of explanation
"""
data.loc[10]
"""
Explanation: Data in a Pandas table can be indexed, but unlike with matrices, the indices select columns.
If you want to select a row, you have to write:
End of explanation
"""
data['obyvatel']
"""
Explanation: Columns are then indexed by name (a string), not by number:
End of explanation
"""
data.set_index([data.index, 'název'], drop=False, inplace=True)
data.head()
data['obyvatel']
"""
Explanation: You may have noticed the left-hand column in the previous output, the one containing the row numbers. Pandas calls it the index. It is a special column that "names" the whole row, and it is printed in appropriate places. It would be nice to have a name there instead of a number. The index can be changed, but you have to make sure it contains no duplicates. There are 15 municipalities in the Czech Republic named 'Nová Ves', so we cannot name rows unambiguously by name alone. Fortunately, Pandas supports so-called composite indexes, so we can combine the row number (for uniqueness) and the name (for readability):
End of explanation
"""
data['ženy'] / data['muži']
"""
Explanation: In arithmetic operations, the columns behave much like matrices: the operation is performed on the corresponding values. For example, it is easy to find the ratio between the numbers of women and men:
End of explanation
"""
data['poměr'] = data['ženy'] / data['muži']
data.head()
"""
Explanation: The following code adds the computed data into the table as a new column:
End of explanation
"""
data['typ'] == 'kraj'
"""
Explanation: Like /, most of the other operators can be used too, for example ==:
End of explanation
"""
kraje = data[data['typ'] == 'kraj']
kraje
"""
Explanation: And a special trick: a column of type bool can be used for indexing, which selects the corresponding rows of the table. This is how to create a table that contains only the regions:
End of explanation
"""
kraje[['název', 'číslo']]
"""
Explanation: If you want to select several columns, you can also index with a list of strings:
End of explanation
"""
kraje[['muži', 'ženy']].plot(kind='bar')
kraje.plot(kind='scatter', x='domy', y='byty', s=kraje['domácnosti']/1000)
"""
Explanation: And finally, let's show some graphs and pictures.
End of explanation
"""
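The boolean-indexing pattern used on the census table works on any table. Here is a self-contained miniature version with made-up data:

```python
import pandas as pd

data = pd.DataFrame({
    'typ': ['obec', 'kraj', 'obec', 'kraj'],
    'obyvatel': [1200, 500_000, 800, 300_000],
})
data['velky'] = data['obyvatel'] > 1000   # derived boolean column
kraje = data[data['typ'] == 'kraj']       # keep only rows where the condition holds
```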
ethen8181/machine-learning
model_deployment/onnxruntime/text_classification_onnxruntime.ipynb
mit
# code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', '..', 'notebook_format')) from formats import load_style load_style(css_style='custom2.css', plot_style=False) os.chdir(path) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 import os import time import numpy as np import pandas as pd import matplotlib.pyplot as plt %watermark -a 'Ethen' -d -t -v -p datasets,transformers,torch,tokenizers,numpy,pandas,matplotlib,onnxruntime """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Finetuning-Pre-trained-BERT-Model-on-Text-Classification-Task-And-Inferencing-with-ONNX-Runtime" data-toc-modified-id="Finetuning-Pre-trained-BERT-Model-on-Text-Classification-Task-And-Inferencing-with-ONNX-Runtime-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Finetuning Pre-trained BERT Model on Text Classification Task And Inferencing with ONNX Runtime</a></span><ul class="toc-item"><li><span><a href="#Tokenizer" data-toc-modified-id="Tokenizer-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Tokenizer</a></span></li><li><span><a href="#Model-FineTuning" data-toc-modified-id="Model-FineTuning-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Model FineTuning</a></span></li><li><span><a href="#Onnx-Runtime" data-toc-modified-id="Onnx-Runtime-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Onnx Runtime</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> End of explanation """ from datasets import load_dataset, DatasetDict, Dataset 
dataset_dict = load_dataset("quora")
dataset_dict
dataset_dict['train'][0]

test_size = 0.1
val_size = 0.1
dataset_dict_test = dataset_dict['train'].train_test_split(test_size=test_size)
dataset_dict_train_val = dataset_dict_test['train'].train_test_split(test_size=val_size)

dataset_dict = DatasetDict({
    "train": dataset_dict_train_val["train"],
    "val": dataset_dict_train_val["test"],
    "test": dataset_dict_test["test"]
})
dataset_dict
"""
Explanation: Finetuning Pre-trained BERT Model on Text Classification Task And Inferencing with ONNX Runtime
In this article, we'll be going over two main things:

Process of finetuning a pre-trained BERT model towards a text classification task, more specifically, the Quora Question Pairs challenge.
Process of converting our model into ONNX format, and performing an inferencing benchmark with ONNX Runtime.
End of explanation
"""
from transformers import AutoTokenizer

# https://huggingface.co/transformers/model_doc/distilbert.html
pretrained_model_name_or_path = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
tokenizer
"""
Explanation: Tokenizer
We won't be going over the details of the pre-trained tokenizer or model here, and will simply load a pre-trained one that is available from the huggingface model repository.
End of explanation
"""
encoded_input = tokenizer(
    'What is the step by step guide to invest in share market in india?',
    'What is the step by step guide to invest in share market?'
)
encoded_input
"""
Explanation: We can feed our tokenizer directly with a pair of sentences.
End of explanation
"""
tokenizer.decode(encoded_input["input_ids"])
"""
Explanation: Decoding the tokenized inputs shows that this model's tokenizer adds special tokens, such as [SEP], which is used to indicate which token belongs to which segment/pair.
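Schematically, pair encoding just concatenates the two token sequences with special markers in between. The toy sketch below illustrates only the [CLS]/[SEP] layout — it is not the real tokenizer, which also performs subword splitting (and note that DistilBERT itself does not produce token_type_ids, which is why they are absent from the output above):

```python
def encode_pair(tokens_a, tokens_b):
    """Toy sketch of BERT-style sentence-pair layout: [CLS] A [SEP] B [SEP]."""
    tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
    # segment ids: 0 for the first sentence (incl. [CLS] and its [SEP]), 1 afterwards
    token_type_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, token_type_ids

tokens, segments = encode_pair(['how', 'to', 'invest', '?'], ['guide', '?'])
```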
End of explanation
"""
def tokenize_fn(examples):
    labels = [int(label) for label in examples['is_duplicate']]
    texts = [question['text'] for question in examples['questions']]
    texts1 = [text[0] for text in texts]
    texts2 = [text[1] for text in texts]
    tokenized_examples = tokenizer(texts1, texts2)
    tokenized_examples['labels'] = labels
    return tokenized_examples

dataset_dict_tokenized = dataset_dict.map(
    tokenize_fn,
    batched=True,
    num_proc=8,
    remove_columns=['is_duplicate', 'questions']
)
dataset_dict_tokenized
dataset_dict_tokenized['train'][0]
"""
Explanation: The preprocessing step will be task specific; if we happen to be using another dataset, this function needs to be modified accordingly.
End of explanation
"""
model_checkpoint = 'text_classification'
num_labels = 2

from transformers import (
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
    DataCollatorWithPadding
)

# we'll save the model after fine tuning it once, so we can skip the fine tuning part during
# the second round if we detect that we already have one available
if os.path.isdir(model_checkpoint):
    model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
else:
    model = AutoModelForSequenceClassification.from_pretrained(pretrained_model_name_or_path,
                                                               num_labels=num_labels)

print('# of parameters: ', model.num_parameters())
model
data_collator = DataCollatorWithPadding(tokenizer, padding=True)
data_collator
"""
Explanation: Model FineTuning
Having preprocessed our raw dataset, for our text classification task we use the AutoModelForSequenceClassification class to load the pre-trained model; the only other argument we need to specify is the number of classes/labels our text classification task has.
Upon instantiating the model for the first time, we'll see some warnings telling us that we should fine-tune this model on our downstream task before using it.
End of explanation """ batch_size = 128 args = TrainingArguments( "quora", evaluation_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=2, weight_decay=0.01, load_best_model_at_end=True ) trainer = Trainer( model, args, data_collator=data_collator, train_dataset=dataset_dict_tokenized["train"], eval_dataset=dataset_dict_tokenized['val'] ) if not os.path.isdir(model_checkpoint): trainer.train() model.save_pretrained(model_checkpoint) """ Explanation: We can perform all sorts of hyper parameter tuning on the fine tuning step, here we'll pick some default parameters for illustration purposes. End of explanation """ import torch import torch.nn.functional as F def predict(model, example, round_digits: int = 5): input_ids = example['input_ids'].to(model.device) attention_mask = example['attention_mask'].to(model.device) batch_labels = example['labels'].detach().cpu().numpy().tolist() model.eval() with torch.no_grad(): batch_output = model(input_ids, attention_mask) batch_scores = F.softmax(batch_output.logits, dim=-1)[:, 1] batch_scores = np.round(batch_scores.detach().cpu().numpy(), round_digits).tolist() return batch_scores, batch_labels from torch.utils.data import DataLoader def predict_data_loader(model, data_loader: DataLoader) -> pd.DataFrame: scores = [] labels = [] for example in data_loader: batch_scores, batch_labels = predict(model, example) scores += batch_scores labels += batch_labels df_predictions = pd.DataFrame.from_dict({'scores': scores, 'labels': labels}) return df_predictions data_loader = DataLoader(dataset_dict_tokenized['test'], collate_fn=data_collator, batch_size=64) start = time.time() df_predictions = predict_data_loader(model, data_loader) end = time.time() print('elapsed: ', end - start) print(df_predictions.shape) df_predictions.head() import sklearn.metrics as metrics def compute_binary_classification_metrics(y_true, y_score, round_digits: int = 3): auc = 
metrics.roc_auc_score(y_true, y_score)
    log_loss = metrics.log_loss(y_true, y_score)

    precision, recall, threshold = metrics.precision_recall_curve(y_true, y_score)
    f1 = 2 * (precision * recall) / (precision + recall)

    mask = ~np.isnan(f1)
    f1 = f1[mask]
    precision = precision[mask]
    recall = recall[mask]

    best_index = np.argmax(f1)
    precision = precision[best_index]
    recall = recall[best_index]
    f1 = f1[best_index]

    return {
        'auc': round(auc, round_digits),
        'precision': round(precision, round_digits),
        'recall': round(recall, round_digits),
        'f1': round(f1, round_digits),
        'log_loss': round(log_loss, round_digits)
    }

compute_binary_classification_metrics(df_predictions['labels'], df_predictions['scores'])
"""
Explanation: The next couple of code chunks perform batch inferencing on our dataset and report standard binary classification evaluation metrics.
End of explanation
"""
data_loader = DataLoader(dataset_dict_tokenized['test'], collate_fn=data_collator, batch_size=8)
example = next(iter(data_loader))

model = model.to('cuda')
input_ids = example['input_ids'].to(model.device)
example['input_ids'][:2]
"""
Explanation: Onnx Runtime
This section walks through the process of serializing our Pytorch model into ONNX format, and using ONNX Runtime for inferencing. Exporting the model can be done via the torch.onnx.export function, which requires a sample input.
End of explanation
"""
opset_version = 12
onnx_model_path = 'text_classification.onnx'

torch.onnx.export(
    model,
    input_ids,
    onnx_model_path,
    opset_version=opset_version,
    input_names=['input_ids'],
    output_names=['output'],
    dynamic_axes={
        'input_ids': {0: 'batch_size', 1: 'sequence_len'},
        'output': {0: 'batch_size'},
    }
)
"""
Explanation: For our neural network computational graph, we expect a single input with a dynamic batch size and sequence length, as well as a single output, which has a dynamic batch size.
We need to specify dynamic_axes so that, during inferencing, onnx won't be limited to the sample input size we provided when exporting the model into onnx format.
End of explanation
"""
from onnxruntime.quantization import quantize_dynamic, QuantType

quantized_onnx_model_path = 'quantized_text_classification.onnx'
quantize_dynamic(
    onnx_model_path,
    quantized_onnx_model_path,
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QUInt8
)
print('ONNX full precision model size (MB):', os.path.getsize(onnx_model_path) / (1024 * 1024))
print('ONNX quantized model size (MB):', os.path.getsize(quantized_onnx_model_path) / (1024 * 1024))
"""
Explanation: We'll also experiment with dynamic quantization of our onnx model. Quantization here refers to converting the weights or activations in our models from a floating point representation to an integer representation. Based on the documentation, this is the simplest form of quantization, where our weights are quantized ahead of time but the activations are dynamically quantized during inference time.
Given an already trained model, quantization provides a quick way for us to trade off model performance against model size and latency, without the need to re-train our models with different parameter settings.
End of explanation
"""
from onnxruntime import InferenceSession, SessionOptions

def create_inference_session(
    model_path: str,
    intra_op_num_threads: int = 24,
    provider: str = 'CPUExecutionProvider'
) -> InferenceSession:
    options = SessionOptions()
    options.intra_op_num_threads = intra_op_num_threads
    # load the model as a onnx graph
    session = InferenceSession(model_path, options, providers=[provider])
    session.disable_fallback()
    return session
"""
Explanation: One of the most important parameters to tune when using onnxruntime is intra_op_num_threads.
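The arithmetic behind the dynamic quantization used above is worth making concrete: each float tensor is mapped to 8-bit integers through a scale and a zero point, and mapped back when needed. The following is a rough sketch of that affine mapping, not onnxruntime's internal implementation:

```python
import numpy as np

def quantize_uint8(x):
    """Affine-quantize a float array to uint8, returning (q, scale, zero_point)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = round(-lo / scale)           # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, zp = quantize_uint8(weights)
restored = dequantize(q, scale, zp)
```

The per-element error is bounded by roughly one quantization step, which is why a well-trained model usually loses little accuracy while shrinking to a quarter of the size.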
End of explanation """ model = model.to('cuda') # sample data input_ids = dataset_dict_tokenized['test']['input_ids'][:1] # gpu inferencing model.eval() with torch.no_grad(): torch_input_ids = torch.LongTensor(input_ids).to(model.device) torch_output = model(torch_input_ids).logits.detach().cpu().numpy() # onnx runtime inferencing input_feed = {'input_ids': input_ids} session = create_inference_session(onnx_model_path) onnx_output = session.run(['output'], input_feed)[0] quantized_session = create_inference_session(quantized_onnx_model_path) quantized_onnx_output = quantized_session.run(['output'], input_feed)[0] if np.allclose(torch_output, onnx_output, atol=1e-5): print("Exported model has been tested with ONNXRuntime, and the result looks good!") print('input_ids: ', input_ids) print('onnx output: ', onnx_output) print('quantized onnx output: ', quantized_onnx_output) """ Explanation: We test the inferencing API by feeding the original PyTorch model and ONNX model with a sample input. End of explanation """ from contextlib import contextmanager @contextmanager def track_infer_time(buffer): start = time.time() yield end = time.time() elasped_ms = (end - start) * 1000 buffer.append(elasped_ms) from tqdm import trange def benchmark_pytorch(model, input_ids, n_rounds: int, n_warmup_rounds: int = 0): for _ in range(n_warmup_rounds): model(input_ids) time_buffer = [] for _ in trange(n_rounds, desc="Benchmarking"): with track_infer_time(time_buffer): model(input_ids).logits.detach().cpu().numpy() return time_buffer def benchmark_onnx(session, input_ids, n_rounds: int, n_warmup_rounds: int = 0): """Expects the input_ids to be padded to the same sequence length, e.g. 
coming from DataLoader""" time.sleep(10) input_feed = {'input_ids': input_ids.detach().cpu().numpy()} for _ in range(n_warmup_rounds): session.run(['output'], input_feed)[0] time_buffer = [] for _ in trange(n_rounds, desc="Benchmarking"): with track_infer_time(time_buffer): onnx_output = session.run(['output'], input_feed)[0] return time_buffer def benchmark_dynamic_onnx(session, input_ids, n_rounds: int, n_warmup_rounds: int = 0): """ Expects the input ids to be of dynamic size (i.e. non-padded), e.g. coming from a slice of dataset. """ time.sleep(10) input_feed = {'input_ids': [input_ids[0]]} for _ in range(n_warmup_rounds): session.run(['output'], input_feed)[0] time_buffer = [] for _ in trange(n_rounds, desc="Benchmarking"): with track_infer_time(time_buffer): batch_onnx_output = [] for input_id in input_ids: input_feed = {'input_ids': [input_id]} onnx_output = session.run(['output'], input_feed)[0] batch_onnx_output.append(onnx_output) outputs = np.concatenate(batch_onnx_output) return time_buffer n_rounds = 50 batch_sizes = [1, 4, 8, 16, 32, 64] model_options = [ 'pytorch_gpu', # pytorch cpu was really slow, not including it in the benchmark 'onnx_cpu', 'dynamic_onnx_cpu', 'quantized_dynamic_onnx_cpu' ] batch_time_results = {} for batch_size in batch_sizes: results = {} for model_option in model_options: if model_option == 'pytorch_gpu': model = model.to('cuda') model = model.half() data_loader = DataLoader( dataset_dict_tokenized['test'], collate_fn=data_collator, batch_size=batch_size ) example = next(iter(data_loader)) input_ids = example['input_ids'].to(model.device) time_buffer = benchmark_pytorch(model, input_ids, n_rounds) elif model_option == 'onnx_cpu': data_loader = DataLoader( dataset_dict_tokenized['test'], collate_fn=data_collator, batch_size=batch_size ) example = next(iter(data_loader)) input_ids = example['input_ids'].to('cpu') time_buffer = benchmark_onnx(session, input_ids, n_rounds) elif model_option == 'dynamic_onnx_cpu': input_ids = 
dataset_dict_tokenized['test'][:batch_size]['input_ids']
            time_buffer = benchmark_dynamic_onnx(session, input_ids, n_rounds)
        elif model_option == 'quantized_dynamic_onnx_cpu':
            input_ids = dataset_dict_tokenized['test'][:batch_size]['input_ids']
            time_buffer = benchmark_dynamic_onnx(quantized_session, input_ids, n_rounds)
        else:
            raise ValueError(f'unknown model option {model_option}')

        results[model_option] = time_buffer

    time_results = {k: np.mean(v) for k, v in results.items()}
    for model_option, timing in time_results.items():
        if model_option in batch_time_results:
            batch_time_results[model_option].append(timing)
        else:
            batch_time_results[model_option] = [timing]

pd.DataFrame(batch_time_results)

# change default style figure and font size
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['font.size'] = 12

for model_name, time_list in batch_time_results.items():
    plt.plot(time_list, label=model_name)

plt.xticks(range(len(batch_sizes)), batch_sizes)
plt.xlabel('batch size')
plt.ylabel('time (ms)')
plt.title('Inference time Comparison')
plt.legend()
plt.show()
"""
Explanation: The next few code chunks run the benchmark on different batch sizes and inferencing options.
End of explanation
"""
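When comparing the timings above, a mean alone can hide tail behaviour. A small helper like the following (illustrative, not part of the notebook's code) summarizes a time buffer with percentiles as well:

```python
import numpy as np

def summarize_latency(time_buffer_ms):
    """Summarize a list of per-round latencies (in milliseconds)."""
    arr = np.asarray(time_buffer_ms, dtype=float)
    return {
        'mean': float(arr.mean()),
        'p50': float(np.percentile(arr, 50)),
        'p95': float(np.percentile(arr, 95)),
        'max': float(arr.max()),
    }

stats = summarize_latency([1.0, 1.1, 0.9, 1.0, 5.0])  # one slow outlier
```

Here the single outlier drags the mean well above the median, which is exactly the kind of effect a percentile summary exposes.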
roebius/deeplearning_keras2
nbs2/translate-pytorch.ipynb
apache-2.0
%matplotlib inline import re, pickle, collections, bcolz, numpy as np, keras, sklearn, math, operator from gensim.models import word2vec, KeyedVectors # - added KeyedVectors.load_word2vec_format import torch, torch.nn as nn from torch.autograd import Variable from torch import optim import torch.nn.functional as F # path='/data/datasets/fr-en-109-corpus/' # dpath = '/data/translate/' path='data/translate/fr-en-109-corpus/' dpath = 'data/translate/' """ Explanation: Translating French to English with Pytorch End of explanation """ fname=path+'giga-fren.release2.fixed' en_fname = fname+'.en' fr_fname = fname+'.fr' """ Explanation: Prepare corpus The French-English parallel corpus can be downloaded from http://www.statmt.org/wmt10/training-giga-fren.tar. It was created by Chris Callison-Burch, who crawled millions of web pages and then used 'a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other'. End of explanation """ re_eq = re.compile('^(Wh[^?.!]+\?)') re_fq = re.compile('^([^?.!]+\?)') lines = ((re_eq.search(eq), re_fq.search(fq)) for eq, fq in zip(open(en_fname), open(fr_fname))) qs = [(e.group(), f.group()) for e,f in lines if e and f]; len(qs) qs[:6] """ Explanation: To make this problem a little simpler so we can train our model more quickly, we'll just learn to translate questions that begin with 'Wh' (e.g. what, why, where which). Here are our regexps that filter the sentences we want. End of explanation """ pickle.dump(qs, open(dpath+'fr-en-qs.pkl', 'wb')) qs = pickle.load(open(dpath+'fr-en-qs.pkl', 'rb')) en_qs, fr_qs = zip(*qs) """ Explanation: Because it takes a while to load the data, we save the results to make it easier to load in later. 
End of explanation """ re_apos = re.compile(r"(\w)'s\b") # make 's a separate word re_mw_punc = re.compile(r"(\w[’'])(\w)") # other ' in a word creates 2 words re_punc = re.compile("([\"().,;:/_?!—])") # add spaces around punctuation re_mult_space = re.compile(r" *") # replace multiple spaces with just one def simple_toks(sent): sent = re_apos.sub(r"\1 's", sent) sent = re_mw_punc.sub(r"\1 \2", sent) sent = re_punc.sub(r" \1 ", sent).replace('-', ' ') sent = re_mult_space.sub(' ', sent) return sent.lower().split() fr_qtoks = list(map(simple_toks, fr_qs)); fr_qtoks[:4] en_qtoks = list(map(simple_toks, en_qs)); en_qtoks[:4] simple_toks("Rachel's baby is cuter than other's.") """ Explanation: Because we are translating at word level, we need to tokenize the text first. (Note that it is also possible to translate at character level, which doesn't require tokenizing.) There are many tokenizers available, but we found we got best results using these simple heuristics. End of explanation """ PAD = 0; SOS = 1 """ Explanation: Special tokens used to pad the end of sentences, and to mark the start of a sentence. End of explanation """ def toks2ids(sents): voc_cnt = collections.Counter(t for sent in sents for t in sent) vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True) vocab.insert(PAD, "<PAD>") vocab.insert(SOS, "<SOS>") w2id = {w:i for i,w in enumerate(vocab)} ids = [[w2id[t] for t in sent] for sent in sents] return ids, vocab, w2id, voc_cnt fr_ids, fr_vocab, fr_w2id, fr_counts = toks2ids(fr_qtoks) en_ids, en_vocab, en_w2id, en_counts = toks2ids(en_qtoks) """ Explanation: Enumerate the unique words (vocab) in the corpus, and also create the reverse map (word->index). Then use this mapping to encode every sentence as a list of int indices. 
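Concretely, here is a sketch of how these two special tokens typically get used on toy id sequences — post-padding to a fixed length with PAD, and shifting the decoder input right with SOS (pure Python, for illustration only):

```python
PAD, SOS = 0, 1

def pad_post(ids, maxlen, pad_value=PAD):
    """Truncate/pad a list of token ids at the end, like 'post' padding."""
    return (ids + [pad_value] * maxlen)[:maxlen]

batch = [[5, 9, 2], [7, 3, 3, 8, 4]]
padded = [pad_post(ids, maxlen=4) for ids in batch]
decoder_inputs = [[SOS] + ids[:-1] for ids in padded]  # shift right, prepend SOS
```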
End of explanation """ def load_glove(loc): return (bcolz.open(loc+'.dat')[:], pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'), pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1')) en_vecs, en_wv_word, en_wv_idx = load_glove('data/glove/results/6B.100d') en_w2v = {w: en_vecs[en_wv_idx[w]] for w in en_wv_word} n_en_vec, dim_en_vec = en_vecs.shape en_w2v['king'] """ Explanation: Word vectors Stanford's GloVe word vectors can be downloaded from https://nlp.stanford.edu/projects/glove/ (in the code below we have preprocessed them into a bcolz array). We use these because each individual word has a single word vector, which is what we need for translation. Word2vec, on the other hand, often uses multi-word phrases. End of explanation """ # w2v_path='/data/datasets/nlp/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin' w2v_path='data/frwac/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin' # fr_model = word2vec.Word2Vec.load_word2vec_format(w2v_path, binary=True) # - Deprecated fr_model = KeyedVectors.load_word2vec_format(w2v_path, binary=True) fr_voc = fr_model.vocab dim_fr_vec = 200 """ Explanation: For French word vectors, we're using those from http://fauconnier.github.io/index.html End of explanation """ def create_emb(w2v, targ_vocab, dim_vec): vocab_size = len(targ_vocab) emb = np.zeros((vocab_size, dim_vec)) found=0 for i, word in enumerate(targ_vocab): try: emb[i] = w2v[word]; found+=1 except KeyError: emb[i] = np.random.normal(scale=0.6, size=(dim_vec,)) return emb, found en_embs, found = create_emb(en_w2v, en_vocab, dim_en_vec); en_embs.shape, found fr_embs, found = create_emb(fr_model, fr_vocab, dim_fr_vec); fr_embs.shape, found """ Explanation: We need to map each word index in our vocabs to their word vector. Not every word in our vocabs will be in our word vectors, since our tokenization approach won't be identical to the word vector creators - in these cases we simply create a random vector. 
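The mapping built this way is invertible as long as vocab is kept around. A tiny round-trip check with made-up sentences (mirroring the structure of toks2ids) shows the idea:

```python
import collections

def build_vocab(sents, specials=('<PAD>', '<SOS>')):
    """Order words by frequency, reserving the first ids for special tokens."""
    counts = collections.Counter(t for sent in sents for t in sent)
    vocab = list(specials) + sorted(counts, key=counts.get, reverse=True)
    w2id = {w: i for i, w in enumerate(vocab)}
    return vocab, w2id

sents = [['what', 'is', 'this', '?'], ['what', 'now', '?']]
vocab, w2id = build_vocab(sents)
ids = [[w2id[t] for t in sent] for sent in sents]
restored = [[vocab[i] for i in sent] for sent in ids]
```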
End of explanation """ from keras.preprocessing.sequence import pad_sequences maxlen = 30 en_padded = pad_sequences(en_ids, maxlen, 'int64', "post", "post") fr_padded = pad_sequences(fr_ids, maxlen, 'int64', "post", "post") en_padded.shape, fr_padded.shape, en_embs.shape """ Explanation: Prep data Each sentence has to be of equal length. Keras has a convenient function pad_sequences to truncate and/or pad each sentence as required - even although we're not using keras for the neural net, we can still use any functions from it we need! End of explanation """ from sklearn import model_selection fr_train, fr_test, en_train, en_test = model_selection.train_test_split( fr_padded, en_padded, test_size=0.1) [o.shape for o in (fr_train, fr_test, en_train, en_test)] """ Explanation: And of course we need to separate our training and test sets... End of explanation """ fr_train[0], en_train[0] """ Explanation: Here's an example of a French and English sentence, after encoding and padding. End of explanation """ def long_t(arr): return Variable(torch.LongTensor(arr)).cuda() fr_emb_t = torch.FloatTensor(fr_embs).cuda() en_emb_t = torch.FloatTensor(en_embs).cuda() def create_emb(emb_mat, non_trainable=False): output_size, emb_size = emb_mat.size() emb = nn.Embedding(output_size, emb_size) emb.load_state_dict({'weight': emb_mat}) if non_trainable: for param in emb.parameters(): param.requires_grad = False return emb, emb_size, output_size """ Explanation: Model Basic encoder-decoder End of explanation """ class EncoderRNN(nn.Module): def __init__(self, embs, hidden_size, n_layers=2): super(EncoderRNN, self).__init__() self.emb, emb_size, output_size = create_emb(embs, True) self.n_layers = n_layers self.hidden_size = hidden_size self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers) # ,bidirectional=True) def forward(self, input, hidden): return self.gru(self.emb(input), hidden) def initHidden(self, batch_size): return Variable(torch.zeros(self.n_layers, 
batch_size, self.hidden_size)) def encode(inp, encoder): batch_size, input_length = inp.size() hidden = encoder.initHidden(batch_size).cuda() enc_outputs, hidden = encoder(inp, hidden) return long_t([SOS]*batch_size), enc_outputs, hidden """ Explanation: Turning a sequence into a representation can be done using an RNN (called the 'encoder'). This approach is useful because RNNs are able to keep track of state and memory, which is obviously important in forming a complete understanding of a sentence. * bidirectional=True passes the original sequence through an RNN, and the reversed sequence through a different RNN, and concatenates the results. This allows us to look forwards and backwards. * We do this because in language things that happen later often influence what came before (e.g. in Spanish, "el chico, la chica" means the boy, the girl; the word for "the" is determined by the gender of the subject, which comes after). End of explanation """ class DecoderRNN(nn.Module): def __init__(self, embs, hidden_size, n_layers=2): super(DecoderRNN, self).__init__() self.emb, emb_size, output_size = create_emb(embs) self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers) self.out = nn.Linear(hidden_size, output_size) def forward(self, inp, hidden): emb = self.emb(inp).unsqueeze(1) res, hidden = self.gru(emb, hidden) res = F.log_softmax(self.out(res[:,0])) return res, hidden """ Explanation: Finally, we arrive at a vector representation of the sequence which captures everything we need to translate it. We feed this vector into more RNNs, which try to generate the labels. After this, we make a classification for what each word is in the output sequence. End of explanation """ v=np.array([1,2,3]); v, v.shape m=np.array([v,v*2,v*3]); m, m.shape m+v v1=np.expand_dims(v,-1); v1, v1.shape m+v1 """ Explanation: This graph demonstrates the accuracy decay for a neural translation task.
With an encoding/decoding technique, larger input sequences result in less accuracy. <img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600"> This can be mitigated using an attentional model. Adding broadcasting to Pytorch Using broadcasting makes a lot of numerical programming far simpler. Here's a couple of examples, using numpy: End of explanation """ def unit_prefix(x, n=1): for i in range(n): x = x.unsqueeze(0) return x def align(x, y, start_dim=2): xd, yd = x.dim(), y.dim() if xd > yd: y = unit_prefix(y, xd - yd) elif yd > xd: x = unit_prefix(x, yd - xd) xs, ys = list(x.size()), list(y.size()) nd = len(ys) for i in range(start_dim, nd): td = nd-i-1 if ys[td]==1: ys[td] = xs[td] elif xs[td]==1: xs[td] = ys[td] return x.expand(*xs), y.expand(*ys) # def aligned_op(x,y,f): return f(*align(x,y,0)) # def add(x, y): return aligned_op(x, y, operator.add) # def sub(x, y): return aligned_op(x, y, operator.sub) # def mul(x, y): return aligned_op(x, y, operator.mul) # def div(x, y): return aligned_op(x, y, operator.truediv) # - Redefining the functions so that built-in Pytorch broadcasting will be used def add(x, y): return x + y def sub(x, y): return x - y def mul(x, y): return x * y def div(x, y): return x / y def dot(x, y): assert(1<y.dim()<5) x, y = align(x, y) if y.dim() == 2: return x.mm(y) elif y.dim() == 3: return x.bmm(y) else: xs,ys = x.size(), y.size() res = torch.zeros(*(xs[:-1] + (ys[-1],))) for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i])) return res """ Explanation: But Pytorch doesn't support broadcasting. 
So let's add it to the basic operators, and to a general tensor dot product: End of explanation """ def Arr(*sz): return torch.randn(sz)/math.sqrt(sz[0]) m = Arr(3, 2); m2 = Arr(4, 3) v = Arr(2) b = Arr(4,3,2); t = Arr(5,4,3,2) mt,bt,tt = m.transpose(0,1), b.transpose(1,2), t.transpose(2,3) def check_eq(x,y): assert(torch.equal(x,y)) check_eq(dot(m,mt),m.mm(mt)) check_eq(dot(v,mt), v.unsqueeze(0).mm(mt)) check_eq(dot(b,bt),b.bmm(bt)) check_eq(dot(b,mt),b.bmm(unit_prefix(mt).expand_as(bt))) exp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3) check_eq(dot(t,tt),exp) check_eq(add(m,v),m+unit_prefix(v).expand_as(m)) check_eq(add(v,m),m+unit_prefix(v).expand_as(m)) check_eq(add(m,t),t+unit_prefix(m,2).expand_as(t)) check_eq(sub(m,v),m-unit_prefix(v).expand_as(m)) check_eq(mul(m,v),m*unit_prefix(v).expand_as(m)) check_eq(div(m,v),m/unit_prefix(v).expand_as(m)) """ Explanation: Let's test! End of explanation """ def Var(*sz): return nn.Parameter(Arr(*sz)).cuda() class AttnDecoderRNN(nn.Module): def __init__(self, embs, hidden_size, n_layers=2, p=0.1): super(AttnDecoderRNN, self).__init__() self.emb, emb_size, output_size = create_emb(embs) self.W1 = Var(hidden_size, hidden_size) self.W2 = Var(hidden_size, hidden_size) self.W3 = Var(emb_size+hidden_size, hidden_size) self.b2 = Var(hidden_size) self.b3 = Var(hidden_size) self.V = Var(hidden_size) self.gru = nn.GRU(hidden_size, hidden_size, num_layers=2) self.out = nn.Linear(hidden_size, output_size) def forward(self, inp, hidden, enc_outputs): emb_inp = self.emb(inp) w1e = dot(enc_outputs, self.W1) w2h = add(dot(hidden[-1], self.W2), self.b2).unsqueeze(1) u = F.tanh(add(w1e, w2h)) a = mul(self.V,u).sum(2).squeeze(1) # - replaced .squeeze(2) that generates a dimension error a = F.softmax(a).unsqueeze(2) Xa = mul(a, enc_outputs).sum(1) res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), self.W3) res = add(res, self.b3).unsqueeze(0) res, hidden = self.gru(res, hidden) res = 
F.log_softmax(self.out(res.squeeze(0))) return res, hidden """ Explanation: Attentional model End of explanation """ def get_batch(x, y, batch_size=16): idxs = np.random.permutation(len(x))[:batch_size] return x[idxs], y[idxs] hidden_size = 128 fra, eng = get_batch(fr_train, en_train, 4) inp = long_t(fra) targ = long_t(eng) emb, emb_size, output_size = create_emb(en_emb_t) emb.cuda() inp.size() W1 = Var(hidden_size, hidden_size) W2 = Var(hidden_size, hidden_size) W3 = Var(emb_size+hidden_size, hidden_size) b2 = Var(1,hidden_size) b3 = Var(1,hidden_size) V = Var(1,1,hidden_size) gru = nn.GRU(hidden_size, hidden_size, num_layers=2).cuda() out = nn.Linear(hidden_size, output_size).cuda() # - Added the encoder creation in this cell encoder = EncoderRNN(fr_emb_t, hidden_size).cuda() dec_inputs, enc_outputs, hidden = encode(inp, encoder) enc_outputs.size(), hidden.size() emb_inp = emb(dec_inputs); emb_inp.size() w1e = dot(enc_outputs, W1); w1e.size() w2h = dot(hidden[-1], W2) w2h = (w2h+b2.expand_as(w2h)).unsqueeze(1); w2h.size() u = F.tanh(w1e + w2h.expand_as(w1e)) a = (V.expand_as(u)*u).sum(2).squeeze(1) # - replaced .squeeze(2) that generates a dimension error a = F.softmax(a).unsqueeze(2); a.size(),a.sum(1).squeeze(1) Xa = (a.expand_as(enc_outputs) * enc_outputs).sum(1); Xa.size() res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), W3) res = (res+b3.expand_as(res)).unsqueeze(0); res.size() res, hidden = gru(res, hidden); res.size(), hidden.size() res = F.log_softmax(out(res.squeeze(0))); res.size() """ Explanation: Attention testing Pytorch makes it easy to check intermediate results, when creating a custom architecture such as this one, since you can interactively run each function. 
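The score-softmax-weighted-sum pattern those cells walk through can be reduced to a few lines of plain Python. This is only a toy illustration with scalar encoder outputs, not the torch code above:

```python
import math

# Toy sketch of the attention step: softmax the per-timestep scores, then
# take the weighted sum of the encoder outputs (scalars here for clarity).
def attend(scores, enc_outputs):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]                       # softmax over timesteps
    return sum(w * h for w, h in zip(weights, enc_outputs))   # weighted context

print(attend([0.0, 0.0], [1.0, 3.0]))   # equal weights -> plain average: 2.0
```

With uniform scores the context is just the mean of the encoder outputs; sharper scores pull it toward the highest-scoring timestep.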
End of explanation """ def train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit): decoder_input, encoder_outputs, hidden = encode(inp, encoder) target_length = targ.size()[1] enc_opt.zero_grad(); dec_opt.zero_grad() loss = 0 for di in range(target_length): decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs) decoder_input = targ[:, di] loss += crit(decoder_output, decoder_input) loss.backward() enc_opt.step(); dec_opt.step() return loss.data[0] / target_length def req_grad_params(o): return (p for p in o.parameters() if p.requires_grad) def trainEpochs(encoder, decoder, n_epochs, print_every=1000, lr=0.01): loss_total = 0 # Reset every print_every enc_opt = optim.RMSprop(req_grad_params(encoder), lr=lr) dec_opt = optim.RMSprop(decoder.parameters(), lr=lr) crit = nn.NLLLoss().cuda() for epoch in range(n_epochs): fra, eng = get_batch(fr_train, en_train, 64) inp = long_t(fra) targ = long_t(eng) loss = train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit) loss_total += loss if epoch % print_every == print_every-1: print('%d %d%% %.4f' % (epoch, epoch / n_epochs * 100, loss_total / print_every)) loss_total = 0 """ Explanation: Train Pytorch has limited functionality for training models automatically - you will generally have to write your own training loops. However, Pytorch makes it far easier to customize how this training is done, such as using teacher forcing. 
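"Teacher forcing" means feeding the ground-truth target token back in as the decoder's next input - which is what decoder_input = targ[:, di] does in train() above. A toy sketch (the stand-in "model" below is made up, not the torch decoder):

```python
# Teacher forcing: at each step the decoder's next input is the true target
# token, not the token the model just predicted.
def decode_with_teacher_forcing(targets, step):
    outputs, inp = [], '<sos>'
    for t in targets:
        outputs.append(step(inp))   # predict from the current input
        inp = t                     # teacher forcing: feed the true token next
    return outputs

# a fake "model" that just upper-cases its input
print(decode_with_teacher_forcing(['the', 'cat'], lambda w: w.upper()))
# ['<SOS>', 'THE']
```

Without teacher forcing, `inp` would instead be set to the model's own prediction, so early mistakes compound during training.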
End of explanation """ hidden_size = 128 encoder = EncoderRNN(fr_emb_t, hidden_size).cuda() decoder = AttnDecoderRNN(en_emb_t, hidden_size).cuda() trainEpochs(encoder, decoder, 10000, print_every=500, lr=0.005) """ Explanation: Run End of explanation """ def evaluate(inp): decoder_input, encoder_outputs, hidden = encode(inp, encoder) target_length = maxlen decoded_words = [] for di in range(target_length): decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs) topv, topi = decoder_output.data.topk(1) ni = topi[0][0] if ni==PAD: break decoded_words.append(en_vocab[ni]) decoder_input = long_t([ni]) return decoded_words def sent2ids(sent): ids = [fr_w2id[t] for t in simple_toks(sent)] return pad_sequences([ids], maxlen, 'int64', "post", "post") def fr2en(sent): ids = long_t(sent2ids(sent)) trans = evaluate(ids) return ' '.join(trans) i=8 print(en_qs[i],fr_qs[i]) fr2en(fr_qs[i]) """ Explanation: Testing End of explanation """
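The greedy decoding loop in evaluate() can be sketched in plain Python - an illustrative stand-in (the vocab and scores below are made up, and PAD is taken to be index 0):

```python
# Greedy decoding sketch: pick the argmax token at each step (like
# decoder_output.data.topk(1) above) and stop at the padding token.
PAD = 0

def greedy_decode(step_scores, vocab):
    words = []
    for scores in step_scores:                                # one row per output position
        ni = max(range(len(scores)), key=scores.__getitem__)  # argmax
        if ni == PAD:
            break
        words.append(vocab[ni])
    return ' '.join(words)

vocab = ['<pad>', 'what', 'is', 'it']
scores = [[0.1, 0.8, 0.0, 0.1], [0.0, 0.1, 0.7, 0.2], [0.9, 0.0, 0.0, 0.1]]
print(greedy_decode(scores, vocab))   # what is
```

Greedy decoding commits to the single best token at each step; beam search would instead keep several candidate prefixes alive.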
ernestyalumni/servetheloop
packetDef/podCommands.ipynb
mit
# find out where we are in the file directory import os, sys print( os.getcwd()) print( os.listdir(os.getcwd())) """ Explanation: server/udp/podCommands.js - from node.js/JavaScript to Python (object) End of explanation """ wherepodCommandsis = os.getcwd()+'/reactGS/server/udp/' print(wherepodCommandsis) """ Explanation: The reactGS folder "mimics" the actual react-groundstation GitHub repository: it copies only the file directory structure, not the full source code (which is a lot). I wanted to keep these scripts/notebooks/files built on top of that repository separate from the actual working code. End of explanation """
json $\to$ Python list End of explanation """ f_podCmds = open(wherepodCommandsis+'podCommands.js','rb') raw_podCmds = f_podCmds.read() f_podCmds.close() print(type(raw_podCmds)) print(len(raw_podCmds)) # get the name of the functions cmdnameslst = [func[:func.find("(")].strip() for func in raw_podCmds.split("function ")] funcparamslst = [func[func.find("(")+1:func.find(")")] if func[func.find("(")+1:func.find(")")] is not '' else None for func in raw_podCmds.split("function ")] #raw_podCmds.split("function ")[3][ raw_podCmds.split("function ")[3].find("(")+1:raw_podCmds.split("function ")[3].find(")")] # more parsing of this list of strings funcparamslst_cleaned = [] for param in funcparamslst: if param is None: funcparamslst_cleaned.append(None) else: funcparamslst_cleaned.append( param.strip().split(',') ) print(len(raw_podCmds.split("function ")) ) # 106 commands # get the index value (e.g. starts at position 22) of where "udp.tx.transmitPodCommand" starts, treating it as a string #whereisudptransmit = [func.find("udp.tx.transmitPodCommand(") for func in raw_podCmds.split("function ")] whereisudptransmit = [] for func in raw_podCmds.split("function "): val = func.find("udp.tx.transmitPodCommand(") if val is not -1: if func.find("// ",val-4) is not -1 or func.find("// udp",val-4) is not -1: whereisudptransmit.append(None) else: whereisudptransmit.append(val) else: whereisudptransmit.append(None) #whereisudptransmit = [func.find("udp.tx.transmitPodCommand(") for func in raw_podCmds.split("function ")] # remove -1 values #whereisudptransmit = filter(lambda x : x != -1, whereisudptransmit) rawParams=[funcstr[ funcstr.find("(",val)+1:funcstr.find(")",val)] if val is not None else None for funcstr, val in zip(raw_podCmds.split("function "), whereisudptransmit)] funcparamslst_cleaned[:10] raw_podCmds.split("function ")[4].find("// ",116-4); # more parsing of this list of strings cleaningParams = [] for rawparam in rawParams: if rawparam is None: 
cleaningParams.append(None) else: cleanParam = [] cleanParam.append( rawparam.split(',')[0].strip("'") ) for strval in rawparam.split(',')[1:]: strval2 = strval.strip() try: strval2 = int(strval2,16) strval2 = hex(strval2) except ValueError: pass cleanParam.append(strval2) cleaningParams.append(cleanParam) cleaningParams[:10] # get the name of the functions #[func[:func.find("(")] # if func.find("()") is not -1 else None for func in raw_podCmds.split("function ")]; cmdnameslst = [func[:func.find("(")].strip() for func in raw_podCmds.split("function ")] # each node js function has its arguments; pair the names with them first podfunclst = zip(cmdnameslst, funcparamslst_cleaned) print(len(podfunclst)) podfunclst[:10]; # now pair each function (name, args) with its udp transmit parameters podCommandparams = zip(podfunclst, cleaningParams) print(len(podCommandparams)) podCommandparams[-2] """ Explanation: So the structure of our result is as follows: Python tuples (each of size 2 for each of the tuples) """ ( (Name of pod command as a string, None if there are no function parameters or Python list of function arguments), Python list [ Subsystem name as a string, parameter1 as a hex value, parameter2 as a hex value, parameter3 as a hex value, parameter4 as a hex value] ) """ Notice that in the original code, there's some TODOs still left (eek!)
so that those udp.tx.transmitPodCommand is commented out or left as TODO, and some are dependent upon arguments in the function (and thus will change, the parameter is a variable). End of explanation """ tocsv = [] for cmd in podCommandparams_recover: name = cmd[0][0] funcparam = cmd[0][1] if funcparam is None: fparam = None else: fparam = ";".join(funcparam) udpparam = cmd[1] if udpparam is None: uname = None uparam = None else: uname = udpparam[0] uparam = ";".join( udpparam[1:] ) tocsv.append([name,fparam,uname,uparam]) """ Explanation: Going to .csv @nuttwerx and @ernestyalumni decided upon separating the multiple entries in a field by the semicolon ";": End of explanation """ header = ["Command name","Function args", "Pod Node", "Command Args"] tocsv.insert(0,header) """ Explanation: Add the headers in manually: 1 = Command name; 2 = Function args; 3 = Pod Node; 4 = Command Args End of explanation """ import csv f_podCommands_tocsv = open("podCommands.csv",'w') tocsv_writer = csv.writer( f_podCommands_tocsv ) tocsv_writer.writerows(tocsv) f_podCommands_tocsv.close() #tocsv.insert(0,header) no need #tocsv[:10] no need """ Explanation: The csv fields format is as follows: (function name) , (function arguments (None is there are none)) , (UDP transmit name (None is there are no udp transmit command)), (UDP transmit parameters, 4 of them, separated by semicolon, or None if there are no udp transmit command ) End of explanation """
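As a sanity check on that convention, here is a sketch that splits a ";"-joined csv row back into the original nested structure (the row values below are made up for illustration, not taken from podCommands.csv):

```python
# Round-trip sketch for the csv convention above: ";"-joined fields split
# back into (name, function args) and (pod node, command args).
def row_to_record(row):
    name, fparam, uname, uparam = row
    func_args = fparam.split(';') if fparam else None   # None / "" -> no args
    udp_args = uparam.split(';') if uparam else None
    return ((name, func_args), (uname, udp_args))

row = ["somePodCommand", "strValue", "Flight Control", "0x1010;0x1;0x0;0x0"]
print(row_to_record(row))
```

This is the inverse of the ";".join() calls used when building tocsv, so reading the csv back recovers the pickled structure.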
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/mpi-esm-1-2-hr/aerosol.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'aerosol') """ Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: DWD Source ID: MPI-ESM-1-2-HR Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 69 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:57 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. 
Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. 
Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. 
Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) """ Explanation: 7.2. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) """ Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. 
Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmospheric aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs.
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11. Optical Radiative Properties --&gt; Absorption Absorption properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.3. Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixing rule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact internal mixture? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Model Aerosol model 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) """ Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation """
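Every property cell in this record follows the same pattern: DOC.set_id(...) selects a property, DOC.set_value(...) fills it in, and the surrounding notes state whether it is required, its type, its cardinality (0.1, 1.1, 0.N, 1.N), and, for ENUMs, the list of valid choices. As a rough illustration of how those constraints fit together — PropertySpec below is a hypothetical stand-in, not part of the real ES-DOC client — a minimal sketch:

```python
# Hypothetical sketch only -- this PropertySpec class is NOT the real ES-DOC API;
# it just illustrates the Type / Cardinality / Valid Choices rules quoted above.

class PropertySpec:
    def __init__(self, prop_id, required, choices=None, max_values=1):
        self.prop_id = prop_id        # e.g. 'cmip6.aerosol.model.scheme_type'
        self.required = required      # "Is Required: TRUE" / "FALSE"
        self.choices = choices        # "Valid Choices" for ENUM properties
        self.max_values = max_values  # 1 for cardinality x.1, None for x.N
        self.values = []

    def set_value(self, *values):
        if self.max_values is not None and len(values) > self.max_values:
            raise ValueError('cardinality allows at most %d value(s)' % self.max_values)
        for v in values:
            if self.choices is not None and v not in self.choices:
                raise ValueError('%r is not a valid choice for %s' % (v, self.prop_id))
        self.values = list(values)

scheme = PropertySpec('cmip6.aerosol.model.scheme_type', required=True,
                      choices=['Bulk', 'Modal', 'Bin', 'Other: [Please specify]'],
                      max_values=None)  # Cardinality 1.N: several schemes may coexist
scheme.set_value('Modal', 'Bin')
print(scheme.values)  # ['Modal', 'Bin']
```

Cardinality 1.N properties such as cmip6.aerosol.model.scheme_type accept any number of values, but each value must still come from the Valid Choices list.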
tensorflow/docs
site/en/tutorials/estimator/keras_model_to_estimator.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ import tensorflow as tf import numpy as np import tensorflow_datasets as tfds """ Explanation: Create an Estimator from a Keras model <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/keras_model_to_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. 
Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details. Overview TensorFlow Estimators are supported in TensorFlow, and can be created from new and existing tf.keras models. This tutorial contains a complete, minimal example of that process. Note: If you have a Keras model, you can use it directly with tf.distribute strategies without converting it to an estimator. As such, model_to_estimator is no longer recommended. Setup End of explanation """ model = tf.keras.models.Sequential([ tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(3) ]) """ Explanation: Create a simple Keras model. In Keras, you assemble layers to build models. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model. To build a simple, fully-connected network (i.e. multi-layer perceptron): End of explanation """ model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer='adam') model.summary() """ Explanation: Compile the model and get a summary. End of explanation """ def input_fn(): split = tfds.Split.TRAIN dataset = tfds.load('iris', split=split, as_supervised=True) dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels)) dataset = dataset.batch(32).repeat() return dataset """ Explanation: Create an input function Use the Datasets API to scale to large datasets or multi-device training. Estimators need control of when and how their input pipeline is built. To allow this, they require an "Input function" or input_fn. The Estimator will call this function with no arguments. The input_fn must return a tf.data.Dataset. 
End of explanation """ for features_batch, labels_batch in input_fn().take(1): print(features_batch) print(labels_batch) """ Explanation: Test out your input_fn End of explanation """ import tempfile model_dir = tempfile.mkdtemp() keras_estimator = tf.keras.estimator.model_to_estimator( keras_model=model, model_dir=model_dir) """ Explanation: Create an Estimator from the tf.keras model. A tf.keras.Model can be trained with the tf.estimator API by converting the model to an tf.estimator.Estimator object with tf.keras.estimator.model_to_estimator. End of explanation """ keras_estimator.train(input_fn=input_fn, steps=500) eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10) print('Eval result: {}'.format(eval_result)) """ Explanation: Train and evaluate the estimator. End of explanation """
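The input_fn above hands the Estimator a pipeline ending in .batch(32).repeat(). As a plain-Python sketch of what that combination yields — an illustration of the semantics only, not TensorFlow itself — batching groups consecutive examples (with a short final batch at the end of each pass, since nothing drops the remainder) and repeating cycles the data indefinitely:

```python
from itertools import islice

def batch_and_repeat(examples, batch_size):
    # Plain-Python illustration of dataset.batch(batch_size).repeat():
    # fixed-size batches (the last one per pass may be short), cycled forever.
    while True:  # .repeat() with no argument repeats indefinitely
        for start in range(0, len(examples), batch_size):
            yield examples[start:start + batch_size]

examples = list(range(7))
batches = list(islice(batch_and_repeat(examples, 3), 4))
print(batches)  # [[0, 1, 2], [3, 4, 5], [6], [0, 1, 2]]
```

Because the repeated dataset never ends, the later keras_estimator.train call bounds the work with steps=500 rather than by epochs.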
graphistry/pygraphistry
demos/demos_databases_apis/gpu_rapids/part_iii_gpu_blazingsql.ipynb
bsd-3-clause
!wget -Pq data/ https://blazingsql-colab.s3.amazonaws.com/netflow_parquet/1_0_0.parquet !wget -Pq data/ https://blazingsql-colab.s3.amazonaws.com/netflow_parquet/1_1_0.parquet !wget -Pq data/ https://blazingsql-colab.s3.amazonaws.com/netflow_parquet/1_2_0.parquet !wget -Pq data/ https://blazingsql-colab.s3.amazonaws.com/netflow_parquet/1_3_0.parquet !ls -alh data """ Explanation: BlazingSQL + Graphistry: Netflow analysis This tutorial shows running BlazingSQL (GPU-accelerated SQL) on raw parquet files and visually analyzing the result with Graphistry Download data End of explanation """ from blazingsql import BlazingContext bc = BlazingContext() local_path = !pwd local_path bc.create_table('netflow', local_path[0] + '/data/*_0.parquet') """ Explanation: Load data into table End of explanation """ %%time result = bc.sql(''' SELECT a.firstSeenSrcIp as source, a.firstSeenDestIp as destination, count(a.firstSeenDestPort) as targetPorts, SUM(a.firstSeenSrcTotalBytes) as bytesOut, SUM(a.firstSeenDestTotalBytes) as bytesIn, SUM(a.durationSeconds) as durationSeconds, MIN(parsedDate) as firstFlowDate, MAX(parsedDate) as lastFlowDate, COUNT(*) as attemptCount FROM netflow a GROUP BY a.firstSeenSrcIp, a.firstSeenDestIp ''').get() gdf = result.columns gdf.head(3) """ Explanation: Compute IP<>IP flow summaries End of explanation """ import graphistry # To specify Graphistry account & server, use: # graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com') # For more options, see https://github.com/graphistry/pygraphistry#configure len(gdf.to_pandas()) graphistry.bind(source='source', destination='destination').plot(gdf.to_pandas()) """ Explanation: Visualize network End of explanation """
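The GROUP BY in the query above reduces individual netflow records to one summary row per (source, destination) pair. A plain-Python sketch of that aggregation — illustrative only, with made-up toy records standing in for the firstSeen* columns; the real query runs on the GPU via BlazingSQL — looks like:

```python
# Toy flow records: (src_ip, dst_ip, dst_port, bytes_out, bytes_in) -- invented
# sample values standing in for the netflow table's firstSeen* columns.
flows = [
    ('10.0.0.1', '10.0.0.9', 80,  500, 1500),
    ('10.0.0.1', '10.0.0.9', 443, 200,  900),
    ('10.0.0.2', '10.0.0.9', 22,   50,  100),
]

summary = {}
for src, dst, port, b_out, b_in in flows:
    key = (src, dst)  # GROUP BY a.firstSeenSrcIp, a.firstSeenDestIp
    agg = summary.setdefault(key, {'bytesOut': 0, 'bytesIn': 0, 'attemptCount': 0})
    agg['bytesOut'] += b_out       # SUM(a.firstSeenSrcTotalBytes)
    agg['bytesIn'] += b_in         # SUM(a.firstSeenDestTotalBytes)
    agg['attemptCount'] += 1       # COUNT(*)

print(summary[('10.0.0.1', '10.0.0.9')])
# {'bytesOut': 700, 'bytesIn': 2400, 'attemptCount': 2}
```

Each resulting (source, destination) row then becomes one edge in the Graphistry plot below.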
elliotk/twitter_eda
develop/20171010_realdonaldtrump_tweet_counts.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns plt.style.use('fivethirtyeight') import tweepy import numpy as np import pandas as pd from collections import Counter from datetime import datetime # Turn on retina mode for high-quality inline plot resolution from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina') # Version of Python import platform platform.python_version() # Import Twitter API keys from credentials import * # Helper function to connect to Twitter API def twitter_setup(): auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET) api = tweepy.API(auth) return api # Extract Twitter data extractor = twitter_setup() # Twitter user twitter_handle = 'realdonaldtrump' # Get most recent two hundred tweets tweets = extractor.user_timeline(screen_name=twitter_handle, count=200) print('Number of tweets extracted: {}.\n'.format(len(tweets))) """ Explanation: Setup End of explanation """ # Inspect attributes of tweepy object print(dir(tweets[0])) # look at the first element/record """ Explanation: Tweet activity Let's explore counts by hour, day of the week, and weekday versus weekend hourly trends. End of explanation """ # What format is it in? answer: GMT, according to Twitter API print(tweets[0].created_at) # Create datetime index: convert to GMT then to Eastern daylight time EDT tweet_dates = pd.DatetimeIndex([tweet.created_at for tweet in tweets], tz='GMT').tz_convert('US/Eastern') """ Explanation: Hmmm, what's this created_at attribute? 
End of explanation """ # Count the number of tweets per hour num_per_hour = pd.DataFrame( { 'counts': Counter(tweet_dates.hour) }) # Create hours data frame hours = pd.DataFrame({'hours': np.arange(24)}) """ Explanation: Hourly counts: End of explanation """ # Merge data frame objects on common index, peform left outer join and fill NaN with zero-values hour_counts = pd.merge(hours, num_per_hour, left_index=True, right_index=True, how='left').fillna(0) """ Explanation: Because there are hours of the day where there are no tweets, one must explicitly add any zero-count hours to the index. End of explanation """ # Count the number of tweets by day of the week num_per_day = pd.DataFrame( { 'counts': Counter(tweet_dates.weekday) }) # Create days data frame days = pd.DataFrame({'day': np.arange(7)}) # Merge data frame objects on common index, perform left outer join and fill NaN with zero-values daily_counts = pd.merge(days, num_per_day, left_index=True, right_index=True, how='left').fillna(0) """ Explanation: Day of the week counts: End of explanation """ # Flag the weekend from weekday tweets weekend = np.where(tweet_dates.weekday < 5, 'weekday', 'weekend') # Construct multiply-indexed DataFrame obj indexed by weekday/weekend and by hour by_time = pd.DataFrame([tweet.created_at for tweet in tweets], columns=['counts'], index=tweet_dates).groupby([weekend, tweet_dates.hour]).count() # Optionally, set the names attribute of the index by_time.index.names=['daytype', 'hour'] # Show two-dimensional view of multiply-indexed DataFrame by_time.unstack() # Merge DataFrame on common index, perform left outer join and fill NaN with zero-values by_time = pd.merge(hours, by_time.unstack(level=0), left_index=True, right_index=True, how='left').fillna(0) # Show last five records by_time.tail() """ Explanation: Weekday vs weekend hourly counts: End of explanation """ # Optional: Create xtick labels in Standard am/pm time format xticks = pd.date_range('00:00', '23:00', freq='H', 
tz='US/Eastern').map(lambda x: pd.datetime.strftime(x, '%I %p')) """ Explanation: Visualize tweet counts By hour: End of explanation """ %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } # Plot ax = hour_counts.plot(x='hours', y='counts', kind='line', figsize=(12, 8)) ax.set_xticks(np.arange(24)) #ax.set_xticklabels(xticks, rotation=50) #ax.set_title('Number of Tweets per hour') #ax.set_xlabel('Hour') #ax.set_ylabel('No. of Tweets') #ax.set_yticklabels(labels=['0 ', '5 ', '10 ', '15 ', '20 ', '25 ', '30 ', '35 ', '40 ']) ax.tick_params(axis='both', which='major', labelsize=14) ax.axhline(y=0, color='black', linewidth=1.3, alpha=0.7) ax.set_xlim(left=-1, right=24) ax.xaxis.label.set_visible(False) now = datetime.strftime(datetime.now(), '%a, %Y-%b-%d at %I:%M %p EDT') ax.text(x=-2.25, y=-5., s = u"\u00A9" + 'THE_KLEI {} Source: Twitter, Inc. '.format(now), fontsize=14, color='#f0f0f0', backgroundcolor='grey') ax.text(x=-2.35, y=40, s="When does @{} tweet? - time of the day".format(twitter_handle), fontsize=26, weight='bold', alpha=0.75) ax.text(x=-2.35, y=38, s='Number of Tweets per hour based-on 200 most-recent tweets as of {}'.format(datetime.strftime(datetime.now(), '%b %d, %Y')), fontsize=19, alpha=0.85) plt.show() """ Explanation: Let's see if we can "fancy-it-up" a bit by making it 538 blog-like. Note: The following cell disables notebook autoscrolling for long outputs. Otherwise, the notebook will embed the plot inside a scrollable cell, which makes the plot more difficult to read.
End of explanation """ # Plot daily_counts.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'] daily_counts['counts'].plot(title='Daily tweet counts', figsize=(12, 8), legend=True) plt.show() """ Explanation: By day: End of explanation """ %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } # Plot fig, ax = plt.subplots(2, 1, figsize=(14, 12)) # weekdays by_time.loc[:, [('counts', 'weekday')]].plot(ax=ax[0], title='Weekdays', kind='line') # weekends by_time.loc[:, [('counts', 'weekend')]].plot(ax=ax[1], title='Weekend', kind='line') ax[0].set_xticks(np.arange(24)) #ax[0].set_xticklabels(xticks, rotation=50) ax[1].set_xticks(np.arange(24)) #ax[1].set_xticklabels(xticks, rotation=50) plt.show() """ Explanation: By weekday and weekend: End of explanation """
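The merge-with-fillna(0) step used throughout this record exists because Counter only reports hours that actually occur in the data. The same zero-filling idea can be sketched with the standard library alone — toy timestamps here, no Twitter API required:

```python
from collections import Counter
from datetime import datetime

# Stdlib sketch of the "merge then fillna(0)" step above: count events per hour,
# then explicitly fill in the hours that saw no tweets with zero.
timestamps = [datetime(2017, 10, 10, h) for h in (9, 9, 14, 23)]
raw_counts = Counter(t.hour for t in timestamps)          # only hours 9, 14, 23 appear
hour_counts = {hour: raw_counts.get(hour, 0) for hour in range(24)}
print(hour_counts[9], hour_counts[10])  # 2 0
```

Without that fill step, a line plot over the hours would silently skip the empty hours instead of dipping to zero.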
mroberge/hydrofunctions
docs/notebooks/Writing_Valid_Requests_for_NWIS.ipynb
mit
# First, import hydrofunctions. import hydrofunctions as hf """ Explanation: Writing Valid Requests for NWIS The USGS National Water Information System (NWIS) is capable of handling a wide range of requests. A few features in Hydrofunctions are set up to help you write a successful request. End of explanation """ minimum_request = hf.NWIS('01585200') """ Explanation: What can we specify? The NWIS can handle data requests that specify: Where: we need to specify which stations we are interested in. Service: the NWIS provides daily averages ('dv') and 'instantaneous values' ('iv') When: we can specify a range of dates, a period of time before now, or just get the most recent observation. What: we can specify which parameter we want, or just get everything collected at the site. the data service we want. The only required element is a station: End of explanation """ minimum_request """ Explanation: Since we only specified the where, the NWIS will assume the following elements: Service: if not specified, provide the daily average value ('dv') When: if a start_date or period is not given, then provide the most recent reading. What: if you don't ask for a specific parameter (parameterCd), you will get everything. Let's see what our request came back with: End of explanation """ minimum_request.df() """ Explanation: Here's what the data look like in table form: End of explanation """ # For example, let's mistpye one of our parameters that worked so well above: notSoGoodNWIS = hf.NWIS('01585200', 'xx', period='P200D') """ Explanation: Different ways to specify which site you want You can specify a site four different ways: as a number or list of site numbers using stateCd and a two letter postal code to retrieve every site in the state using countyCd and a FIPS code to retrieve every site in a county or list of counties using bBox to retrieve everything inside of a bounding box of latitudes and longitudes. You are required to set one of these parameters, but only one. 
All of these parameters are demonstrated in Selecting Sites Different ways to specify time You can specify time in three different ways: if you specify nothing, you'll get the most recent reading. period will return up to 999 days of the most recent data: period='P11D' start_date will return all of the data starting at this date: start_date='2014-12-31' If you specify a start_date, you can also specify an end_date, which is given in the same format. What happens when you make a bad request? The power of the NWIS also makes it easy to make mistakes. So, we've added a series of helpful error messages to let you know when something went wrong, and why it went wrong. End of explanation """ # Let's ask for the impossible: the start date is AFTER the end date: badRequest = hf.get_nwis('01585200', 'dv', '2017-12-31', '2017-01-01') """ Explanation: Okay, maybe I shouldn't have typed 'xx' for our service. Some errors get caught by hydrofunctions, but some don't. Sometimes we end up asking NWIS for something that doesn't make sense, or something that it doesn't have, or maybe NWIS isn't available. In this case, hydrofunctions will receive an error message from NWIS and help you figure out what went wrong. End of explanation """ # Use the help() function to see all of the parameters for a function, their default values, # and a short explanation of what it all means. Or you can type ?hf.NWIS to access the same information. help(hf.NWIS) # Use the dir() function to see what sort of methods you have available to you, # or type hf.NWIS.<TAB> to see the same list. dir(hf.NWIS) """ Explanation: Getting help I probably shouldn't have started with all of the things that go wrong! My point is that we've got ya. Where can you go to learn how to do things the RIGHT way? The User's Guide The USGS guide to their waterservices But we also have a few built-in helpers that you can use right here, right now: help() and ? 
will list the docstring for whatever object you are curious about; dir() and .\<TAB> will tell you about available methods. End of explanation """
const-yield/const-yield.github.io
notebooks/2017-10-13/Demonstration_of_Linear_Discriminant_Anysis_on_Sythetic_Data.ipynb
mit
import numpy as np import numpy.random as rand import matplotlib.pyplot as plt %matplotlib inline mu1, mu2, mu3 = [15,20], [24,25], [38,40] cov = [[10, 0], [0, 10]] n_samples = 5000 data1 = rand.multivariate_normal(mu1, cov, n_samples) data2 = rand.multivariate_normal(mu2, cov, n_samples) data3 = rand.multivariate_normal(mu3, cov, n_samples) data = np.vstack((data1, data2, data3)) plt.axis('equal') plt.plot(data1[:,0], data1[:,1], '^b', label='Class_1') plt.plot(data2[:,0], data2[:,1], 'sr', label='Class_2') plt.plot(data3[:,0], data3[:,1], 'ok', label='Class_3') plt.title('Original samples') plt.legend(loc='best') """ Explanation: Generate synthetic data points Data points are sampled from three normal distributions. End of explanation """ mu1 = np.mean(data1, 0) mu2 = np.mean(data2, 0) mu3 = np.mean(data3, 0) mu = np.mean(data, 0) """ Explanation: Compute mean values End of explanation """ s1 = np.outer(mu1-mu, mu1-mu)*data1.shape[0] s2 = np.outer(mu2-mu, mu2-mu)*data2.shape[0] s3 = np.outer(mu3-mu, mu3-mu)*data3.shape[0] S_b = s1 + s2 + s3 """ Explanation: Compute the between-class scatter matrix $S_b$ $S_b = \sum_{i=1}^{C} n_i (\mu^{i} - \mu)(\mu^{i} - \mu)^T$ End of explanation """ def compute_within_scatter_matrix(data, mu): """ Compute the within-class scatter matrix for a given class :param data: a numpy matrix of (n_samples, n_sample_dimensions) :param mu: a list of n_sample_dimensions """ matrix = np.zeros((data.shape[1], data.shape[1])) spread = data - mu for s in range(spread.shape[0]): matrix += np.outer(spread[s,:], spread[s,:]) return matrix s1 = compute_within_scatter_matrix(data1, mu1) s2 = compute_within_scatter_matrix(data2, mu2) s3 = compute_within_scatter_matrix(data3, mu3) S_w = s1 + s2 + s3 """ Explanation: Compute the within-class scatter matrix $S_w$ $S_w = \sum_{i=1}^{C} \sum_{j=1}^{n_i} (x^{i}_j - \mu^{i})(x^{i}_j - \mu^{i})^T$ End of explanation """ eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_w).dot(S_b)) for eig_idx, eig_val in enumerate(eig_vals): print('Eigvector #{}: {} (Eigvalue:{:.3f})'.format(eig_idx, eig_vecs[:, eig_idx], eig_val)) """ Explanation: Solve the generalized eigenvalue problem for the matrix $S_{w}^{-1}S_{b}$ End of explanation """ S = np.linalg.inv(S_w).dot(S_b) for eig_idx, eig_val in enumerate(eig_vals): eig_vec = eig_vecs[:, eig_idx] np.testing.assert_array_almost_equal(S.dot(eig_vec), eig_val*eig_vec, decimal=6, err_msg='', verbose=True) """ Explanation: Double-check the computed eigen-vectors and eigen-values End of explanation """ eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] eig_pairs = sorted( eig_pairs, key=lambda x:x[0], reverse=True) eigv_sum = sum(eig_vals) for eig_val, eig_vec in eig_pairs: print('Eigvector: {} (Eigvalue:\t{:.3f},\t{:.2%} variance explained)'.format(eig_vec, eig_val, (eig_val/eigv_sum))) """ Explanation: Sort the eigenvectors by decreasing eigenvalues End of explanation """ W = eig_pairs[0][1] print('Matrix W:\n', W.real) """ Explanation: If we take a look at the eigenvalues, we can already see that the second eigenvalue is much smaller than the first one. Since Rank(AB) $\leq$ Rank(A), and Rank(AB) $\leq$ Rank(B), we have Rank($S_w^{-1}S_b$) $\leq$ Rank($S_b$). Because $S_b$ is the sum of $C$ matrices with rank 1 or less, Rank($S_b$) can be $C$-1 at most, where $C$ is the number of classes. This means that FDA can find at most $C$-1 meaningful features. The remaining features discovered by FDA are arbitrary. Choose m eigenvectors with the largest eigenvalues After sorting the eigenpairs by decreasing eigenvalues, we can then construct our $d \times m$-dimensional transformation matrix $W$. Here we choose the most informative eigen-pair, as its eigenvalue explains 99.41% of the variance. As a result, the original d-dimensional (d=2) data points will be projected onto an m-dimensional feature space (m=1).
End of explanation """ X1_fda = W.dot(data1.T) X2_fda = W.dot(data2.T) X3_fda = W.dot(data3.T) """ Explanation: Transforming the samples onto the new space As the last step, we use the $1 \times 2$-dimensional matrix $W$ to transform our samples onto the embedding space via the equation $Y = W^TX$. FDA learns a linear transformation matrix $W \in R^{d \times m} (m \ll d)$, which maps each $d$-dimensional (d=2) $x_i$ to an $m$-dimensional (m=1) $y_i$: $y_i = W^T x_i$. End of explanation """ slope = W[1]/W[0] Y1_fda = slope * X1_fda Y2_fda = slope * X2_fda Y3_fda = slope * X3_fda plt.axis('equal') plt.plot(X1_fda, Y1_fda, '^b', label='Class_1') plt.plot(X2_fda, Y2_fda, 'sr', label='Class_2') plt.plot(X3_fda, Y3_fda, 'ok', label='Class_3') plt.title('Projected samples') plt.legend(loc='best') """ Explanation: Now the transformed samples are scalar values. They are essentially the projection of the original data samples onto the selected eigenvector, which corresponds to a straight line. To better visualize the projection, we plot the transformed samples along that straight line within the original 2-dimensional space. End of explanation """
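As an extra cross-check on the machinery above (an illustrative addition, not part of the original notebook): in the two-class special case, the leading discriminant direction has the closed form $w \propto S_w^{-1}(\mu^{1} - \mu^{2})$, so it can be computed without an eigensolver. A small, dependency-free sketch on two fixed point clouds:

```python
# Two tiny, fixed 2-D point clouds (deterministic, illustration only).
class1 = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0), (2.0, 1.0)]
class2 = [(6.0, 6.0), (7.0, 8.0), (8.0, 7.0), (7.0, 6.0)]

def mean(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def within_scatter(points, mu):
    # 2x2 sum of outer products of the centered points
    sxx = sum((p[0] - mu[0]) ** 2 for p in points)
    sxy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in points)
    syy = sum((p[1] - mu[1]) ** 2 for p in points)
    return [[sxx, sxy], [sxy, syy]]

mu1, mu2 = mean(class1), mean(class2)
s1, s2 = within_scatter(class1, mu1), within_scatter(class2, mu2)
S_w = [[s1[0][0] + s2[0][0], s1[0][1] + s2[0][1]],
       [s1[1][0] + s2[1][0], s1[1][1] + s2[1][1]]]

# w proportional to S_w^{-1}(mu1 - mu2), via the 2x2 inverse formula
d = (mu1[0] - mu2[0], mu1[1] - mu2[1])
det = S_w[0][0] * S_w[1][1] - S_w[0][1] * S_w[1][0]
w = ((S_w[1][1] * d[0] - S_w[0][1] * d[1]) / det,
     (-S_w[1][0] * d[0] + S_w[0][0] * d[1]) / det)

# The projected class means should be well separated along w.
gap = abs(w[0] * d[0] + w[1] * d[1]) / (w[0] ** 2 + w[1] ** 2) ** 0.5
print('w =', w, ' gap between projected means:', gap)
```

For more than two classes the eigen-decomposition used above is the general tool; the closed form is just a handy way to sanity-check it.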
wayfair/gists
data-science/ViolinPlot_BlogPost/ViolinPlots_BlogPost.ipynb
mit
%matplotlib inline import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from fuzzywuzzy import fuzz import numpy as np # some settings to be used throughout the notebook pd.set_option('max_colwidth', 70) wf_colors = ["#C7DEB1","#9763A4"] # make some fake data for a demo split-violin plot data1 = pd.DataFrame({'Variable': np.random.randn(100)*.2 + 1, 'Label':'Binary Case 1'}) data2 = pd.DataFrame({'Variable': np.random.randn(100)*.3, 'Label':'Binary Case 2'}) df = data1.append(data2) # violin plots in seaborn require 2 categorical variables ('x' and 'hue'). We use 'Label' for hue. df['Category'] = '' # placeholder for 'x' categorical variable # make the plot fig, ax = plt.subplots(1,1,figsize=(8, 6)) sns.violinplot(x='Category', y="Variable", hue="Label", data=df, split=True, ax=ax, palette=wf_colors) ax.set_xlabel(' ') ax.set_ylabel('Some Continuous Variable', fontsize=16) ax.set_title('Example Split Violin Plot', fontsize=18) plt.show() """ Explanation: Why Violin Plots are Awesome for Feature Engineering Using NLP to Identify Similar Products At Wayfair, technology and data expertise enable data scientists to transform new web datasets into intelligent machine algorithms that re-imagine how traditional commerce works. In this post, we introduce how visual tools like Violin Plots amplify our data acumen to unlock deep insights. The savvy data scientist recognizes the value of a Violin Plot when engineering new model features. We share how this method is applied in an e-commerce example where fuzzy text matching systems are developed to identify similar products sold online. Key article takeaways: * Skillful usage of Violin Plots can improve feature engineering and selection * A good Violin Plot communicates more information about data irregularities than standard summary statistics and correlation coefficients Good data visualizations are helpful at every step of a data science project.
When starting out, the right data visualizations can inform how one should frame their data science problem. Visualizations also can help guide decisions surrounding which data inputs to use, and are helpful when evaluating model accuracy and feature importance. When debugging an existing model, visualizations help diagnose data irregularities and bias in model predictions. Finally, when communicating with business stakeholders, the right visualization makes a clear point without any additional explanation. A type of data visualization that is particularly helpful when working on binary classification problems is the split violin plot. In my experience, this is a type of plot that is not nearly as famous as it should be. In brief, a split violin plot takes a variable grouped by two categories and plots a smoothed histogram of the variable in each group on opposite sides of a shared axis. The code below makes a quick example plot to illustrate. End of explanation """ # read in data data = pd.read_csv('productnames.csv') df = data[['Product1', 'Product2', 'Match']] # what does the data look like? df.head() """ Explanation: What I like most about violin plots is that they show you the entire distribution of your data. If data inputs violate your assumptions (e.g. multimodal, full of null values, skewed by bad imputation or extreme outliers) you see the problems at a quick glance and in incredible detail. This is better than a few representative percentiles as in a box and whisker plot, or a table of summary statistics. They avoid the problem of oversaturation prevalent in scatter plots with lots of points, and reveal outliers more clearly than a histogram would without a lot of fine-tuning. We'll illustrate these advantages in a simple example where we use fuzzy string matching to engineer features for a binary classification problem.
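The "smoothed histogram" on each side of a violin is a kernel density estimate. As a minimal illustration of the idea (seaborn's actual bandwidth selection and boundary handling are more sophisticated), a Gaussian KDE fits in a few lines:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a function estimating the density of `samples` at a point x.

    Each sample contributes a Gaussian bump of width `bandwidth`;
    the estimated density is the normalized sum of the bumps.
    """
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

density = gaussian_kde([0.0, 0.1, 0.2, 1.0], bandwidth=0.3)
print(density(0.1), density(2.0))  # high near the cluster, low far away
```

Evaluating a curve like this for each of the two groups, and mirroring one across the axis, is essentially what a split violin draws.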
An Example using NLP to Identify Similar Products At Wayfair, we develop sophisticated algorithms to parse large product catalogs and identify similar products. Part of this project involves engineering features for a model which flags two products as the same or not. Let's start from a dataset that provides several pairs of product names and a label indicating whether or not they refer to the same item. End of explanation """ print('Qratio: ', fuzz.QRatio('brown leather sofa', '12ft leather dark brown sofa')) print('Wratio: ', fuzz.WRatio('brown leather sofa', '12ft leather dark brown sofa')) print('token_set_ratio: ', fuzz.token_set_ratio('brown leather sofa', '12ft leather dark brown sofa')) """ Explanation: Fuzzywuzzy Similarity Scores For the purpose of this fuzzy text matching illustration, we'll use an open-source Python library called fuzzywuzzy (developed by the fine folks at SeatGeek). This library contains several functions for measuring the similarity between two strings. Each function takes in two strings and returns a number between 0 and 100 representing the similarity between the strings. Functions differ in their conventions, however, and consequently the results often differ from function to function. End of explanation """ def get_scores(df, func, score_name): """Function for getting fuzzy similarity scores using a specified function""" def _fuzzyscore(row, func=func): """Fuzzy matching score on two columns of pandas dataframe.
Called via df.apply() Args: row (df row instance): row of pandas DataFrame with columns 'Product1' and 'Product2' func (function): return numeric similarity score between 'Product1' and 'Product2', defaults to """ #get the actual scores df[score_name] = df.apply(_fuzzyscore, axis=1) #get scores for different fuzzy functions get_scores(df, fuzz.QRatio, 'QRatio') get_scores(df, fuzz.WRatio, 'WRatio') get_scores(df, fuzz.partial_ratio, 'partial_ratio') get_scores(df, fuzz.token_set_ratio, 'token_set_ratio') get_scores(df, fuzz.token_sort_ratio, 'token_sort_ratio') df.head() """ Explanation: It's rarely obvious which function is best for a given problem. Let's consider five different fuzzy matching methods and compute similarity scores for each pair of strings. Using these scores, we'll create some violin plots to determine which method is best for distinguishing between matches and not matches. (You could also consider combinations of scores though this comes at a higher computational cost.) End of explanation """
The purple distribution depicts a smoothed (sideways) histogram of fuzzy matching scores when Match is True, while the light-green shows the distribution of similarity scores when Match is False. When two distributions have little or no overlap along the y-axis, the fuzzy matching function will do a better job distinguishing between our binary classes. End of explanation """ df[['QRatio','WRatio', 'partial_ratio','token_set_ratio', 'token_sort_ratio', 'Match']].corr() """ Explanation: Generally, these fuzzy matching scores do a good job in distinguishing between observations where the two names refer to the same product. For any method, a pair of names with a similarity score of 50 or more will probably refer to the same product. Still, we can see that some fuzzy matching functions do a better job than others in distinguishing between True and False observations. The token-set-ratio plot seems to have the least overlap between the True and False distributions, followed by the plots for token-sort-ratio and WRatio. Of our five similarity scores, the scores from these methods should perform the best in any predictive model. In comparison, notice how much more the True and False distributions overlap for the partial_ratio and QRatio methods. Scores from these methods will be less helpful as features. Conclusion: Violin plots suggest that of our five similarity scores, token-set-ratio would be the best feature in a predictive model, especially compared to the partial-ratio or QRatio methods. Why Violin Plots are Superior to More Conventional Analyses For comparison, let’s look at the Pearson correlation coefficients between our fuzzy-matching scores and our indicator variable for whether the pair is a match or not. 
End of explanation """ def make_fake_data(low, high, n=300): """Stacks three draws from a uniform distribution w/ bounds given by 'low' and 'high' Args: low (list of ints): lower bounds for the three random draws high (list of ints): upper bounds for the three random draws """ rand_array = np.hstack((np.random.uniform(low=low[0], high=high[0], size=n), np.random.uniform(low=low[1], high=high[1], size=n), np.random.uniform(low=low[2], high=high[2], size=n) )) return rand_array # make fake data true1 = make_fake_data([3, 33, 63], [12, 44, 72]) false1 = make_fake_data([18, 48, 78], [27, 57, 84]) true2 = make_fake_data([0, 30, 60], [15, 45, 75]) false2 = make_fake_data([15, 45, 75], [30, 60, 90]) fake_match_df = pd.DataFrame({'score1': false1, 'score2': false2, 'Match': np.full_like(false1, 0, dtype=bool)}) true_match_df = pd.DataFrame({'score1': true1, 'score2':true2, 'Match': np.full_like(true1, 1, dtype=bool)}) df = true_match_df.append(fake_match_df) plot_df = pd.melt(df, id_vars=['Match'], value_vars=['score1', 'score2']) plot_df.columns = ['Match', 'Function', 'Fuzzy Score'] fig, ax = plt.subplots(1,1, figsize=(12, 5)) sns.violinplot(x='Function', y='Fuzzy Score', hue="Match", data=plot_df, split=True, ax=ax, bw=.1, palette=["#C7DEB1","#9763A4"]) ax.set_ylabel('Similarity Score', fontsize=18) ax.set_xlabel('') ax.legend(loc='upper right', fontsize=12, ncol=2) ax.tick_params(axis='both', which='major', labelsize=16) ax.set_title('Irregular Data: Why Violin Plots are Better than Correlation Coefficients', fontsize=20) fig.savefig('blog_pic2.png') """ Explanation: For this data, the correlation coefficients give a similar ranking as achieved using the violin plots. The token-set-ratio method gives the strongest correlation to the Match variable while the QRatio method gives the weakest correlation. If our goal was only to identify the best fuzzywuzzy function to use, we apparently could have made our selection using correlation coefficients instead of violin plots. 
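A quick numeric aside (an illustrative addition, not from the original analysis) shows why this can break down: Pearson correlation measures only linear association, so it can be exactly zero even when one variable is a deterministic function of the other:

```python
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x ** 2 for x in xs]   # y depends perfectly on x, but not monotonically

def pearson(a, b):
    """Plain-Python Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

print('Pearson r:', pearson(xs, ys))  # ~0: the dependence is invisible to r
```

A violin plot of the two groups would expose the structure immediately, while the correlation coefficient reports nothing.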
In general, however, violin plots are much more reliable and informative. Consider the following (pathological) example. End of explanation """ df.corr() """ Explanation: In these violin plots, the similarity scores on the left appear to be more helpful in separating between matches and not-matches. There is less overlap between the True and False observations and the observations are more tightly clustered into their respective groups. However, notice that the relationship between the similarity scores and the True/False indicator is not at all linear or even monotone. As a result, correlation coefficients can fail to correctly guide our decision on which set of scores to use. Is this true? Let’s take a look. End of explanation """
trangel/Data-Science
deep_learning_ai/Convolution+model+-+Application+-+v1.ipynb
gpl-3.0
import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * %matplotlib inline np.random.seed(1) """ Explanation: Convolutional Neural Networks: Application Welcome to Course 4's second assignment! In this notebook, you will: Implement helper functions that you will use when implementing a TensorFlow model Implement a fully functioning ConvNet using TensorFlow After this assignment you will be able to: Build and train a ConvNet in TensorFlow for a classification problem We assume here that you are already familiar with TensorFlow. If you are not, please refer the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks"). 1.0 - TensorFlow model In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages. End of explanation """ # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() """ Explanation: Run the next cell to load the "SIGNS" dataset you are going to use. End of explanation """ # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) """ Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. <img src="images/SIGNS.png" style="width:800px;height:300px;"> The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples. End of explanation """ X_train = X_train_orig/255. X_test = X_test_orig/255. 
Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} """ Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. To get started, let's examine the shapes of your data. End of explanation """ # GRADED FUNCTION: create_placeholders def create_placeholders(n_H0, n_W0, n_C0, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_H0 -- scalar, height of an input image n_W0 -- scalar, width of an input image n_C0 -- scalar, number of channels of the input n_y -- scalar, number of classes Returns: X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float" Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float" """ ### START CODE HERE ### (≈2 lines) X = tf.placeholder(dtype='float', shape=(None, n_H0, n_W0, n_C0), name='X') Y = tf.placeholder(dtype='float', shape=(None, n_y), name='Y') ### END CODE HERE ### return X, Y X, Y = create_placeholders(64, 64, 3, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) """ Explanation: 1.1 - Create placeholders TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session. Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. 
Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint. End of explanation """ # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 2 lines of code) W1 = tf.get_variable('W1', [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0)) W2 = tf.get_variable('W2', [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1 = " + str(parameters["W1"].eval()[1,1,1])) print("W2 = " + str(parameters["W2"].eval()[1,1,1])) """ Explanation: Expected Output <table> <tr> <td> X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) </td> </tr> <tr> <td> Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) </td> </tr> </table> 1.2 - Initialize parameters You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment. Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use: python W = tf.get_variable("W", [1,2,3,4], initializer = ...) More Info. 
End of explanation """ # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "W2" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] W2 = parameters['W2'] ### START CODE HERE ### # CONV2D: stride of 1, padding 'SAME' Z1 = tf.nn.conv2d(X, W1, strides = [1,1,1,1], padding = 'SAME') # RELU A1 = tf.nn.relu(Z1) # MAXPOOL: window 8x8, stride 8, padding 'SAME' P1 = tf.nn.max_pool(A1, ksize=[1,8,8,1], strides=[1,8,8,1], padding = 'SAME') # CONV2D: filters W2, stride 1, padding 'SAME' Z2 = tf.nn.conv2d(P1, W2, strides=[1,1,1,1], padding='SAME') # RELU A2 = tf.nn.relu(Z2) # MAXPOOL: window 4x4, stride 4, padding 'SAME' P2 = tf.nn.max_pool(A2, ksize=[1,4,4,1], strides=[1,4,4,1], padding='SAME') # FLATTEN P2 = tf.contrib.layers.flatten(P2) # FULLY-CONNECTED without non-linear activation function (do not call softmax). # 6 neurons in output layer.
# Hint: one of the arguments should be "activation_fn=None" Z3 = tf.contrib.layers.fully_connected(P2, num_outputs=6, activation_fn=None) ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) init = tf.global_variables_initializer() sess.run(init) a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)}) print("Z3 = " + str(a)) """ Explanation: Expected Output: <table> <tr> <td> W1 = </td> <td> [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br> -0.06847463 0.05245192] </td> </tr> <tr> <td> W2 = </td> <td> [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br> -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br> -0.22779644 -0.1601823 -0.16117483 -0.10286498] </td> </tr> </table> 1.2 - Forward propagation In TensorFlow, there are built-in functions that carry out the convolution steps for you. tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here. tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here. In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. Exercise: Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above. In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation """ # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) """ Explanation: Expected Output: <table> <td> Z3 = </td> <td> [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br> [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] </td> </table> 1.3 - Compute cost Implement the compute cost function below. You might find these two functions helpful: tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here. tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here. Exercise: Compute the cost below using the function above. 
End of explanation """ # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): """ Implements a three-layer ConvNet in Tensorflow: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X_train -- training set, of shape (None, 64, 64, 3) Y_train -- test set, of shape (None, n_y = 6) X_test -- training set, of shape (None, 64, 64, 3) Y_test -- test set, of shape (None, n_y = 6) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: train_accuracy -- real number, accuracy on the train set (X_train) test_accuracy -- real number, testing accuracy on the test set (X_test) parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep results consistent (tensorflow seed) seed = 3 # to keep results consistent (numpy seed) (m, n_H0, n_W0, n_C0) = X_train.shape n_y = Y_train.shape[1] costs = [] # To keep track of the cost # Create Placeholders of the correct shape ### START CODE HERE ### (1 line) X, Y = create_placeholders(64, 64, 3, 6) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost. 
### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables globally init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): minibatch_cost = 0. num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y). ### START CODE HERE ### (1 line) _ , temp_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### minibatch_cost += temp_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 5 == 0: print ("Cost after epoch %i: %f" % (epoch, minibatch_cost)) if print_cost == True and epoch % 1 == 0: costs.append(minibatch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # Calculate the correct predictions predict_op = tf.argmax(Z3, 1) correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy) train_accuracy = accuracy.eval({X: X_train, Y: Y_train}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test}) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) return train_accuracy, test_accuracy, parameters """ Explanation: Expected Output: <table> <td> cost = </td> <td> 2.91034 </td> </table> 1.4 Model 
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Then you will create a session and run a for loop for num_epochs, get the mini-batches, and for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
"""
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
"""
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
"""
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
"""
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation
"""
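The random_mini_batches() helper referenced above returns a list of mini-batches sliced from the (shuffled) training set. As an illustration of the slicing part only, here is a pure-Python sketch of my own (not the course implementation, and without the shuffling and numpy array handling of the real helper):

```python
def make_mini_batches(examples, minibatch_size):
    """Partition a list of examples into consecutive mini-batches.

    The last batch is smaller when len(examples) is not a multiple
    of minibatch_size, mirroring how the leftover examples are kept.
    """
    batches = []
    for start in range(0, len(examples), minibatch_size):
        batches.append(examples[start:start + minibatch_size])
    return batches

batches = make_mini_batches(list(range(10)), 4)
print([len(b) for b in batches])  # [4, 4, 2]
```

With minibatch_size = 64 and m training examples, this yields m // 64 full batches plus one partial batch, which is why the training loop above averages temp_cost over num_minibatches.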
arongdari/python-topic-model
notebook/HMM_LDA_example.ipynb
apache-2.0
import logging

from ptm.nltk_corpus import get_reuters_token_list_by_sentence
from ptm import HMM_LDA
from ptm.utils import get_top_words

logger = logging.getLogger('HMM_LDA')
logger.propagate = False
"""
Explanation: Example of HMM-LDA
End of explanation
"""
n_docs = 1000
voca, corpus = get_reuters_token_list_by_sentence(num_doc=n_docs)
print('Vocabulary size', len(voca))
"""
Explanation: Read corpus
corpus is a nested list of documents, sentences, and word tokens, respectively.
End of explanation
"""
n_docs = len(corpus)
n_voca = len(voca)

n_topic = 50
n_class = 20
max_iter = 100
alpha = 0.1
beta = 0.01
gamma = 0.1
eta = 0.1

model = HMM_LDA(n_docs, n_voca, n_topic, n_class, alpha=alpha, beta=beta, gamma=gamma, eta=eta, verbose=False)
model.fit(corpus, max_iter=max_iter)
"""
Explanation: Training HMM LDA
End of explanation
"""
for ti in range(n_topic):
    top_words = get_top_words(model.TW, voca, ti, n_words=10)
    print('Topic', ti, ': ', ','.join(top_words))

for ci in range(1, n_class):
    top_words = get_top_words(model.CW, voca, ci, n_words=10)
    print('Class', ci, ': ', ','.join(top_words))
"""
Explanation: Print Top 10 words for each class and topic
End of explanation
"""
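For reference, get_top_words above essentially ranks one row of a count matrix (model.TW for topics, model.CW for classes) and maps the largest entries back to vocabulary strings. Here is a standalone sketch of that idea on a tiny hand-made matrix (an illustration of the concept, not the actual ptm implementation):

```python
def top_words(matrix, voca, row, n_words):
    """Return the n_words vocabulary entries with the largest
    counts in matrix[row], ordered from most to least frequent."""
    counts = matrix[row]
    ranked = sorted(range(len(counts)), key=lambda j: counts[j], reverse=True)
    return [voca[j] for j in ranked[:n_words]]

# toy 2-topic, 4-word count matrix
TW = [[5, 1, 0, 2],
      [0, 3, 7, 1]]
voca = ['oil', 'price', 'bank', 'trade']

print(top_words(TW, voca, 0, 2))  # ['oil', 'trade']
print(top_words(TW, voca, 1, 2))  # ['bank', 'price']
```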
donK23/pyData-Projects
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
apache-2.0
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Modeling
ML Tasks
End of explanation
"""
from sklearn.datasets import load_files

corpus = load_files("../data/")

doc_count = len(corpus.data)
print("Doc count:", doc_count)
assert doc_count == 56, "Wrong number of documents loaded, should be 56 (56 stories)"
"""
Explanation: Input
End of explanation
"""
from helpers.tokenizer import TextWrangler
from sklearn.feature_extraction.text import CountVectorizer

bow = CountVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="lemma"))
X_bow = bow.fit_transform(corpus.data)
"""
Explanation: Vectorizer
End of explanation
"""
from sklearn.cluster import KMeans

kmeans = KMeans(n_jobs=-1, random_state=23)

from yellowbrick.cluster import KElbowVisualizer

viz = KElbowVisualizer(kmeans, k=(2, 28), metric="silhouette")
viz.fit(X_bow)
#viz.poof(outpath="plots/KElbow_bow_lemma_silhoutte.png")
viz.poof()

from yellowbrick.cluster import SilhouetteVisualizer

def plot_silhouette_plots(max_n):
    for i in range(2, max_n + 1):
        plt.clf()
        n_cluster = i
        viz = SilhouetteVisualizer(KMeans(n_clusters=n_cluster, random_state=23))
        viz.fit(X_bow)
        path = "plots/SilhouetteViz" + str(n_cluster)
        viz.poof(outpath=path)

#plot_silhouette_plots(28)
"""
Explanation: Decided on BOW vectors containing lemmatized words. BOW resulted (in this case) in better cluster performance than tf-idf vectors. Lemmatization worked slightly better than stemming. (-> KElbow plots in the plots/ dir).
Models
End of explanation
"""
from yellowbrick.cluster import SilhouetteVisualizer

n_clusters = 3
model = KMeans(n_clusters=n_clusters, n_jobs=-1, random_state=23)
viz = SilhouetteVisualizer(model)
viz.fit(X_bow)
viz.poof()
"""
Explanation: Decided on 3 clusters, because of the highest avg Silhouette score compared to other cluster sizes.
End of explanation """ from sklearn.pipeline import Pipeline pipe = Pipeline([("bow", bow), ("kmeans", model)]) pipe.fit(corpus.data) pred = pipe.predict(corpus.data) """ Explanation: Nonetheless, the assignment isn't perfect. Cluster #1 looks good, but the many negative vals in cluster #0 & #1 suggest that there exist a cluster with more similar docs than in the actual assigned cluster. As a cluster size of 2 also leads to an inhomogen cluster and has a lower avg Silhoutte score, we go with the size of 3. Nevertheless, in general those findings suggest that the Sherlock Holmes stories should be represented in a single collection only. Training End of explanation """ from sklearn.metrics import silhouette_score print("Avg Silhoutte score:", silhouette_score(X_bow, pred), "(novel collections)") """ Explanation: Evaluation Cluster density Silhoutte coefficient: [-1,1], where 1 is most dense and negative vals correspond to ill seperation. End of explanation """ print("AVG Silhoutte score", silhouette_score(X_bow, corpus.target), "(original collections)") """ Explanation: Compared to original collections by Sir Arthur Conan Doyle: End of explanation """ from yellowbrick.text import TSNEVisualizer # Map target names of original collections to target vals collections_map = {} for i, collection_name in enumerate(corpus.target_names): collections_map[i] = collection_name # Plot tsne_original = TSNEVisualizer() labels = [collections_map[c] for c in corpus.target] tsne_original.fit(X_bow, labels) tsne_original.poof() """ Explanation: Average Silhoutte coefficient is at least slightly positive and much better than the score of the original assignment (which is even negative). Success. Visual Inspection We come from the original assignment by Sir Arthur Conan Doyle... End of explanation """ # Plot tsne_novel = TSNEVisualizer() labels = ["c{}".format(c) for c in pipe.named_steps.kmeans.labels_] tsne_novel.fit(X_bow, labels) tsne_novel.poof() """ Explanation: ... 
to the novel collection assignment:
End of explanation
"""
# Novel titles, can be more creative ;>
novel_collections_map = {0: "The Unassignable Adventures of Cluster 0",
                         1: "The Adventures of Sherlock Holmes in Cluster 1",
                         2: "The Case-Book of Cluster 2"}
"""
Explanation: Confirms the findings from the Silhouette plot above (in the Models section): cluster #1 looks very coherent, cluster #2 is separated, and the two documents of cluster #0 fly somewhere around. Nonetheless, compared to the original collection, this looks far better. Success.
Document-Cluster Assignment
Finally, we want to assign the Sherlock Holmes stories to the novel collections created by clustering, right? Create artificial titles for the collections created from clusters.
End of explanation
"""
orig_assignment = [collections_map[c] for c in corpus.target]
novel_assignment = [novel_collections_map[p] for p in pred]
titles = [" ".join(f_name.split("/")[-1].split(".")[0].split("_")) for f_name in corpus.filenames]

# Final df, compares original with new assignment
df_documents = pd.DataFrame([orig_assignment, novel_assignment],
                            columns=titles, index=["Original Collection", "Novel Collection"]).T
df_documents.to_csv("collections.csv")
df_documents

df_documents["Novel Collection"].value_counts()
"""
Explanation: Let's see how the books are assigned differently to collections by Sir Arthur Conan Doyle (Original Collection) and by the clustering algo (Novel Collection).
End of explanation
"""
tsne_novel_named = TSNEVisualizer(colormap="Accent")
tsne_novel_named.fit(X_bow, novel_assignment)
tsne_novel_named.poof(outpath="plots/Novel_Sherlock_Holmes_Collections.png")
"""
Explanation: Collections are unevenly assigned. Cluster #1 is the predominant one. Looks like cluster #0 subsumes the (rationally) unassignable stories. The t-SNE plot eventually looks like this:
End of explanation
"""
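Since the whole evaluation above leans on the silhouette coefficient, here is a toy hand computation of it for a single point, a pure-Python sketch for intuition only (not the sklearn implementation, and on 1-D toy data rather than BOW vectors):

```python
def silhouette(point, own, others):
    """Silhouette coefficient of one point:
    a = mean distance to the other members of its own cluster,
    b = mean distance to the nearest other cluster,
    s = (b - a) / max(a, b), a value in [-1, 1]."""
    a = sum(abs(point - p) for p in own) / len(own)
    b = min(sum(abs(point - p) for p in c) / len(c) for c in others)
    return (b - a) / max(a, b)

cluster_a = [0.0, 0.2]
cluster_b = [10.0, 10.4]

# well-separated clusters give scores close to +1
s = silhouette(0.0, [0.2], [cluster_b])
print(round(s, 3))  # ≈ 0.98
```

A point closer to a foreign cluster than to its own would get a negative score, which is exactly what the negative bars in the Silhouette plots above indicate.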
karst87/ml
01_openlibs/tensorflow/00_resource/tf_examples/notebooks/0_Prerequisite/mnist_dataset_intro.ipynb
mit
# Import MNIST
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Load data
X_train = mnist.train.images
Y_train = mnist.train.labels
X_test = mnist.test.images
Y_test = mnist.test.labels
"""
Explanation: MNIST Dataset Introduction
Most examples use the MNIST dataset of handwritten digits. It has 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image, so each sample is represented as a matrix of size 28x28 with values from 0 to 1.
Overview
Usage
In our examples, we are using the TensorFlow input_data.py script to load that dataset. It is quite useful for managing our data, and handles:
Dataset downloading
Loading the entire dataset into a numpy array:
End of explanation
"""
# Get the next 64 images array and labels
batch_X, batch_Y = mnist.train.next_batch(64)
"""
Explanation: A next_batch function that can iterate over the whole dataset and return only the desired fraction of the dataset samples (in order to save memory and avoid loading the entire dataset).
End of explanation
"""
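Conceptually, next_batch keeps an internal cursor into the dataset and wraps around when an epoch ends. A minimal sketch of that idea (a simplified, hypothetical stand-in; the real TensorFlow helper also shuffles between epochs and returns numpy arrays):

```python
class BatchCursor:
    """Cycle through a dataset, handing out fixed-size batches."""

    def __init__(self, data):
        self.data = data
        self.pos = 0

    def next_batch(self, size):
        batch = []
        while len(batch) < size:
            batch.append(self.data[self.pos])
            self.pos = (self.pos + 1) % len(self.data)  # wrap to a new epoch
        return batch

cursor = BatchCursor(list(range(5)))
print(cursor.next_batch(3))  # [0, 1, 2]
print(cursor.next_batch(3))  # [3, 4, 0]
```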
Bio204-class/bio204-notebooks
2016-03-30-ANOVA-simulations.ipynb
cc0-1.0
## simulate one way ANOVA under the null hypothesis of no
## difference in group means

groupmeans = [0, 0, 0, 0]
k = len(groupmeans)   # number of groups
groupstds = [1] * k   # standard deviations equal across groups
n = 25                # sample size

# generate samples
samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]
"""
Explanation: One-way ANOVA, general setup
We'll start by simulating data for a one-way ANOVA, under the null hypothesis. In this simulation we'll simulate four groups, all drawn from the same underlying distribution: $N(\mu=0,\sigma=1)$.
End of explanation
"""
# draw a figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4))

clrs = sbn.color_palette("Set1", n_colors=k)

for i, sample in enumerate(samples):
    sbn.kdeplot(sample, color=clrs[i], ax=ax1)

ax1_ymax = ax1.get_ylim()[1]

for i, sample in enumerate(samples):
    ax2.vlines(np.mean(sample), 0, ax1_ymax, linestyle="dashed", color=clrs[i])

ax2.set_xlim(ax1.get_xlim())
ax2.set_ylim(ax1.get_ylim())

ax1.set_title("Group Sample Distributions")
ax2.set_title("Group Means")
ax1.set_xlabel("X")
ax1.set_ylabel("Density")
ax2.set_xlabel("mean(X)")
ax2.set_ylabel("Density")
pass
"""
Explanation: Illustrate sample distributions and group means
We then draw the simulated data, showing the group distributions on the left and the distribution of group means on the right.
End of explanation
"""
# Between-group and within-group estimates of variance
sample_group_means = [np.mean(s) for s in samples]
sample_group_var = [np.var(s, ddof=1) for s in samples]

Vbtw = n * np.var(sample_group_means, ddof=1)
Vwin = np.mean(sample_group_var)
Fstat = Vbtw/Vwin

print("Between group estimate of population variance:", Vbtw)
print("Within group estimate of population variance:", Vwin)
print("Fstat = Vbtw/Vwin = ", Fstat)
"""
Explanation: F-statistic
We calculate an F-statistic, which is the ratio of the "between group" variance to the "within group" variance.
The calculation below is appropriate when all the group sizes are the same.
End of explanation
"""
# now carry out many such simulations to estimate the sampling distribution
# of our F-test statistic

groupmeans = [0, 0, 0, 0]
k = len(groupmeans)   # number of groups
groupstds = [1] * k   # standard deviations equal across groups
n = 25                # sample size

nsims = 1000
Fstats = []

for sim in range(nsims):
    samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]
    sample_group_means = [np.mean(s) for s in samples]
    sample_group_var = [np.var(s, ddof=1) for s in samples]
    Vbtw = n * np.var(sample_group_means, ddof=1)
    Vwin = np.mean(sample_group_var)
    Fstat = Vbtw/Vwin
    Fstats.append(Fstat)

Fstats = np.array(Fstats)
"""
Explanation: Simulating the sampling distribution of the F-test statistic
To understand how surprising our observed data is, relative to what we would expect under the null hypothesis, we need to understand the sampling distribution of the F-statistic. Here we use simulation to estimate this sampling distribution.
End of explanation
"""
fig, ax = plt.subplots()
sbn.distplot(Fstats, ax=ax, label="Simulation", kde_kws=dict(alpha=0.5, linewidth=2))

# plot the theoretical F-distribution for
# corresponding degrees of freedom
df1 = k - 1
df2 = n*k - k
x = np.linspace(0,9,500)
Ftheory = stats.f.pdf(x, df1, df2)
plt.plot(x,Ftheory, linestyle='dashed', linewidth=2, label="Theory")

# axes, legends, title
ax.set_xlim(0, )
ax.set_xlabel("F-statistic")
ax.set_ylabel("Density")
ax.legend()

title = \
"""Comparison of Simulated and Theoretical
F-distribution for F(df1={}, df2={})"""
ax.set_title(title.format(df1, df2))
pass
"""
Explanation: Draw a figure to compare our simulated sampling distribution of the F-statistic to the theoretical expectation
Let's create a plot comparing our simulated sampling distribution to the theoretical sampling distribution determined analytically. As we see below they compare well.
End of explanation """ # draw F distribution x = np.linspace(0,9,500) Ftheory = stats.f.pdf(x, df1, df2) plt.plot(x, Ftheory, linestyle='solid', linewidth=2, label="Theoretical\nExpectation") # draw vertical line at threshold threshold = stats.f.ppf(0.95, df1, df2) plt.vlines(threshold, 0, stats.f.pdf(threshold, df1, df2), linestyle='solid') # shade area under curve to right of threshold areax = np.linspace(threshold, 9, 250) plt.fill_between(areax, stats.f.pdf(areax, df1, df2), color='gray', alpha=0.75) # axes, legends, title plt.xlim(0, ) plt.xlabel("F-statistic") plt.ylabel("Density") plt.legend() title = \ r""" $\alpha$ = 0.05 threshold for F-distribution with df1 = {}, df2={}""" plt.title(title.format(df1, df2)) print("The α =0.05 significance threshold is:", threshold) pass """ Explanation: Determining signficance thresholds To determine whether we would reject the null hypothesis for an observed value of the F-statistic we need to calculate the appropriate cutoff value for a given significance threshold, $\alpha$. Here we consider the standard signficiance threshold $\alpha$ = 0.05. End of explanation """ # now simulate case where one of the group means is different groupmeans = [0, 0, 0, 1] k = len(groupmeans) # number of groups groupstds = [1] * k # standard deviations equal across groups n = 25 # sample size nsims = 1000 Fstats = [] for sim in range(nsims): samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)] sample_group_means = [np.mean(s) for s in samples] sample_group_var = [np.var(s, ddof=1) for s in samples] Vbtw = n * np.var(sample_group_means, ddof=1) Vwin = np.mean(sample_group_var) Fstat = Vbtw/Vwin Fstats.append(Fstat) Fstats = np.array(Fstats) """ Explanation: Note that the F-distribution above is specific to the particular degrees of freedom. We would typically refer to that distribution as $F_{3,96}$. 
In this case, for $\alpha=0.05$, we would reject the null hypothesis if the observed value of the F-statistic was greater than 2.70.
Simulation where $H_A$ holds
As we've done in previous cases, it's informative to simulate the situation where the null hypothesis is false (i.e. the alternative hypothesis $H_A$ is true). Here we simulate the case where one of the four groups is drawn from a normal distribution with a mean that is different from the other three groups -- $N(\mu=1, \sigma=1)$ rather than $N(\mu=0, \sigma=1)$.
End of explanation
"""
fig, ax = plt.subplots()
sbn.distplot(Fstats, ax=ax, label="Simulated $H_A$", kde_kws=dict(alpha=0.5, linewidth=2))

# plot the theoretical F-distribution for
# corresponding degrees of freedom
df1 = k - 1
df2 = n*k - k
x = np.linspace(0,9,500)
Ftheory = stats.f.pdf(x, df1, df2)
plt.plot(x,Ftheory, linestyle='dashed', linewidth=2, label="Theory")

ymin, ymax = ax.get_ylim()

# Draw threshold
alpha = 0.05
ax.vlines(stats.f.ppf(0.95, df1, df2), 0, ymax, linestyle='dotted', color='k', label=r"Threshold for $\alpha=0.05$")

# axes, legends, title
ax.set_xlim(0, )
ax.set_ylim(0, ymax)
ax.set_xlabel("F-statistic")
ax.set_ylabel("Density")
ax.legend()

title = \
"""Comparison of Theoretical F-distribution and
F-distribution when $H_A$ is true"""
ax.set_title(title.format(df1, df2))
pass
"""
Explanation: We then plot the distribution of the F-statistic under this specific $H_A$ versus the distribution of F under the null hypothesis.
End of explanation
"""
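The Vbtw/Vwin computation used throughout this notebook can be verified by hand on a tiny balanced example. The following pure-Python sketch mirrors the same formulas (valid, like the code above, only for equal group sizes):

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs, ddof=1):
    # sample variance with the same ddof convention as np.var(..., ddof=1)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

def f_statistic(groups):
    """F = (n * variance of the group means) / (mean within-group variance),
    for k groups of equal size n."""
    n = len(groups[0])
    group_means = [mean(g) for g in groups]
    v_btw = n * var(group_means)
    v_win = mean([var(g) for g in groups])
    return v_btw / v_win

# three groups of n=3; group means are 2, 3, 4, so Vbtw = 3 * 1 = 3;
# each group has sample variance 1, so Vwin = 1 and F = 3
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(f_statistic(groups))  # 3.0
```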
glasslion/data-science-notebooks
thinkstats/chapter1.ipynb
mit
import matplotlib
import pandas as pd

%matplotlib inline
"""
Explanation: Chapter 1 Exploratory data analysis
Anecdotal evidence usually fails, because:
- Small number of observations
- Selection bias
- Confirmation bias
- Inaccuracy
To address the limitations of anecdotes, we will use the tools of statistics, which include:
- Data collection
  - large data
  - valid data
- Descriptive statistics
  - summary statistics
  - visualization
- Exploratory data analysis
  - patterns
  - differences
  - inconsistencies & limitations
- Estimation
  - sample, population
- Hypothesis testing
  - group
End of explanation
"""
import nsfg

df = nsfg.ReadFemPreg()
df.head()

pregordr = df['pregordr']
pregordr[2:5]
"""
Explanation: DataFrames
DataFrame is the fundamental data structure provided by pandas. A DataFrame contains a row for each record. In addition to the data, a DataFrame also contains the variable names and their types, and it provides methods for accessing and modifying the data.
We can easily access the data frame and its columns with scripts in the https://github.com/AllenDowney/ThinkStats2 repo.
End of explanation
"""
birthord_counts = df.birthord.value_counts().sort_index()
birthord_counts

birthord_counts.plot(kind='bar')
"""
Explanation: Exercise 1
Print value counts for <tt>birthord</tt> and compare to results published in the codebook.
End of explanation
"""
df['prglngth_cut'] = pd.cut(df.prglngth, bins=[0,13,26,50])
df.prglngth_cut.value_counts().sort_index()
"""
Explanation: Print value counts for <tt>prglngth</tt> and compare to results published in the codebook.
End of explanation
"""
df.totalwgt_lb.mean()
"""
Explanation: Compute the mean birthweight.
End of explanation
"""
df['totalwgt_kg'] = 0.45359237 * df.totalwgt_lb
df.totalwgt_kg.mean()
"""
Explanation: Create a new column named <tt>totalwgt_kg</tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
End of explanation """ lve_birth = df.outcome == 1 lve_birth.tail() """ Explanation: One important note: when you add a new column to a DataFrame, you must use dictionary syntax, like this ```python CORRECT df['totalwgt_lb'] = df.birthwgt_lb + df.birthwgt_oz / 16.0 Not dot notation, like this:python WRONG! df.totalwgt_lb = df.birthwgt_lb + df.birthwgt_oz / 16.0 ``` The version with dot notation adds an attribute to the DataFrame object, but that attribute is not treated as a new column. Create a boolean Series. End of explanation """ live = df[df.outcome == 1] len(live) """ Explanation: Use a boolean Series to select the records for the pregnancies that ended in live birth. End of explanation """ len(live[(0<=live.birthwgt_lb) & (live.birthwgt_lb<=5)]) """ Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (including both). The result should be 1125. End of explanation """ len(live[(9<=live.birthwgt_lb) & (live.birthwgt_lb<95)]) """ Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (including both). The result should be 798 End of explanation """ firsts = df[df.birthord==1] others = df[df.birthord>1] len(firsts), len(others) """ Explanation: Use <tt>birthord</tt> to select the records for first babies and others. How many are there of each? End of explanation """ firsts.totalwgt_lb.mean(), others.totalwgt_lb.mean() """ Explanation: Compute the mean weight for first babies and others. End of explanation """ firsts.prglngth.mean(), others.prglngth.mean() """ Explanation: Compute the mean <tt>prglngth</tt> for first babies and others. Compute the difference in means, expressed in hours. 
End of explanation """ import thinkstats2 resp = thinkstats2.ReadStataDct('2002FemResp.dct').ReadFixedWidth('2002FemResp.dat.gz', compression='gzip') preg = nsfg.ReadFemPreg() preg_map = nsfg.MakePregMap(preg) for index, pregnum in resp.pregnum.iteritems(): caseid = resp.caseid[index] indices = preg_map[caseid] # check that pregnum from the respondent file equals # the number of records in the pregnancy file if len(indices) != pregnum: print(caseid, len(indices), pregnum) break """ Explanation: Exercise 2 End of explanation """
jo-tez/aima-python
games.ipynb
mit
from games import * from notebook import psource, pseudocode """ Explanation: GAMES OR ADVERSARIAL SEARCH This notebook serves as supporting material for topics covered in Chapter 5 - Adversarial Search in the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from games.py module. Let's import required classes, methods, global variables etc., from games module. CONTENTS Game Representation Game Examples Tic-Tac-Toe Figure 5.2 Game Min-Max Alpha-Beta Players Let's Play Some Games! End of explanation """ %psource Game """ Explanation: GAME REPRESENTATION To represent games we make use of the Game class, which we can subclass and override its functions to represent our own games. A helper tool is the namedtuple GameState, which in some cases can come in handy, especially when our game needs us to remember a board (like chess). GameState namedtuple GameState is a namedtuple which represents the current state of a game. It is used to help represent games whose states can't be easily represented normally, or for games that require memory of a board, like Tic-Tac-Toe. Gamestate is defined as follows: GameState = namedtuple('GameState', 'to_move, utility, board, moves') to_move: It represents whose turn it is to move next. utility: It stores the utility of the game state. Storing this utility is a good idea, because, when you do a Minimax Search or an Alphabeta Search, you generate many recursive calls, which travel all the way down to the terminal states. When these recursive calls go back up to the original callee, we have calculated utilities for many game states. We store these utilities in their respective GameStates to avoid calculating them all over again. board: A dict that stores the board of the game. moves: It stores the list of legal moves possible from the current position. Game class Let's have a look at the class Game in our module. 
We see that it has functions, namely actions, result, utility, terminal_test, to_move and display. We see that these functions have not actually been implemented. This class is just a template class; we are supposed to create the class for our game, by inheriting this Game class and implementing all the methods mentioned in Game. End of explanation """ %psource TicTacToe """ Explanation: Now let's get into details of all the methods in our Game class. You have to implement these methods when you create new classes that would represent your game. actions(self, state): Given a game state, this method generates all the legal actions possible from this state, as a list or a generator. Returning a generator rather than a list has the advantage that it saves space and you can still operate on it as a list. result(self, state, move): Given a game state and a move, this method returns the game state that you get by making that move on this game state. utility(self, state, player): Given a terminal game state and a player, this method returns the utility for that player in the given terminal game state. While implementing this method assume that the game state is a terminal game state. The logic in this module is such that this method will be called only on terminal game states. terminal_test(self, state): Given a game state, this method should return True if this game state is a terminal state, and False otherwise. to_move(self, state): Given a game state, this method returns the player who is to play next. This information is typically stored in the game state, so all this method does is extract this information and return it. display(self, state): This method prints/displays the current state of the game. GAME EXAMPLES Below we give some examples for games you can create and experiment on. Tic-Tac-Toe Take a look at the class TicTacToe. All the methods mentioned in the class Game have been implemented here. 
End of explanation """ moves = dict(A=dict(a1='B', a2='C', a3='D'), B=dict(b1='B1', b2='B2', b3='B3'), C=dict(c1='C1', c2='C2', c3='C3'), D=dict(d1='D1', d2='D2', d3='D3')) utils = dict(B1=3, B2=12, B3=8, C1=2, C2=4, C3=6, D1=14, D2=5, D3=2) initial = 'A' """ Explanation: The class TicTacToe has been inherited from the class Game. As mentioned earlier, you really want to do this. Catching bugs and errors becomes a whole lot easier. Additional methods in TicTacToe: __init__(self, h=3, v=3, k=3) : When you create a class inherited from the Game class (class TicTacToe in our case), you'll have to create an object of this inherited class to initialize the game. This initialization might require some additional information which would be passed to __init__ as variables. For the case of our TicTacToe game, this additional information would be the number of rows h, number of columns v and how many consecutive X's or O's are needed in a row, column or diagonal for a win k. Also, the initial game state has to be defined here in __init__. compute_utility(self, board, move, player) : A method to calculate the utility of TicTacToe game. If 'X' wins with this move, this method returns 1; if 'O' wins return -1; else return 0. k_in_row(self, board, move, player, delta_x_y) : This method returns True if there is a line formed on TicTacToe board with the latest move else False. TicTacToe GameState Now, before we start implementing our TicTacToe game, we need to decide how we will be representing our game state. Typically, a game state will give you all the current information about the game at any point in time. When you are given a game state, you should be able to tell whose turn it is next, how the game will look like on a real-life board (if it has one) etc. A game state need not include the history of the game. If you can play the game further given a game state, you game state representation is acceptable. 
While we might like to include all kinds of information in our game state, we wouldn't want to put too much information into it. Modifying this game state to generate a new one would be a real pain then. Now, as for our TicTacToe game state, would storing only the positions of all the X's and O's be sufficient to represent all the game information at that point in time? Well, does it tell us whose turn it is next? Looking at the 'X's and O's on the board and counting them should tell us that. But that would mean extra computing. To avoid this, we will also store whose move it is next in the game state. Think about what we've done here. We have reduced extra computation by storing additional information in a game state. Now, this information might not be absolutely essential to tell us about the state of the game, but it does save us additional computation time. We'll do more of this later on. To store game states will will use the GameState namedtuple. to_move: A string of a single character, either 'X' or 'O'. utility: 1 for win, -1 for loss, 0 otherwise. board: All the positions of X's and O's on the board. moves: All the possible moves from the current state. Note here, that storing the moves as a list, as it is done here, increases the space complexity of Minimax Search from O(m) to O(bm). Refer to Sec. 5.2.1 of the book. Representing a move in TicTacToe game Now that we have decided how our game state will be represented, it's time to decide how our move will be represented. Becomes easy to use this move to modify a current game state to generate a new one. For our TicTacToe game, we'll just represent a move by a tuple, where the first and the second elements of the tuple will represent the row and column, respectively, where the next move is to be made. Whether to make an 'X' or an 'O' will be decided by the to_move in the GameState namedtuple. Fig52 Game For a more trivial example we will represent the game in Figure 5.2 of the book. 
<img src="images/fig_5_2.png" width="75%"> The states are represented with capital letters inside the triangles (eg. "A") while moves are the labels on the edges between states (eg. "a1"). Terminal nodes carry utility values. Note that the terminal nodes are named in this example 'B1', 'B2' and 'B2' for the nodes below 'B', and so forth. We will model the moves, utilities and initial state like this: End of explanation """ print(moves['A']['a1']) """ Explanation: In moves, we have a nested dictionary system. The outer's dictionary has keys as the states and values the possible moves from that state (as a dictionary). The inner dictionary of moves has keys the move names and values the next state after the move is complete. Below is an example that showcases moves. We want the next state after move 'a1' from 'A', which is 'B'. A quick glance at the above image confirms that this is indeed the case. End of explanation """ fig52 = Fig52Game() """ Explanation: We will now take a look at the functions we need to implement. First we need to create an object of the Fig52Game class. End of explanation """ psource(Fig52Game.actions) print(fig52.actions('B')) """ Explanation: actions: Returns the list of moves one can make from a given state. End of explanation """ psource(Fig52Game.result) print(fig52.result('A', 'a1')) """ Explanation: result: Returns the next state after we make a specific move. End of explanation """ psource(Fig52Game.utility) print(fig52.utility('B1', 'MAX')) print(fig52.utility('B1', 'MIN')) """ Explanation: utility: Returns the value of the terminal state for a player ('MAX' and 'MIN'). Note that for 'MIN' the value returned is the negative of the utility. End of explanation """ psource(Fig52Game.terminal_test) print(fig52.terminal_test('C3')) """ Explanation: terminal_test: Returns True if the given state is a terminal state, False otherwise. 
End of explanation """ psource(Fig52Game.to_move) print(fig52.to_move('A')) """ Explanation: to_move: Return the player who will move in this state. End of explanation """ psource(Fig52Game) """ Explanation: As a whole the class Fig52 that inherits from the class Game and overrides its functions: End of explanation """ pseudocode("Minimax-Decision") """ Explanation: MIN-MAX Overview This algorithm (often called Minimax) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the utility value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree). It is worth noting that the algorithm works in a depth-first manner. The pseudocode can be found below: End of explanation """ psource(minimax_decision) """ Explanation: Implementation In the implementation we are using two functions, max_value and min_value to calculate the best move for MAX and MIN respectively. These functions interact in an alternating recursion; one calls the other until a terminal state is reached. When the recursion halts, we are left with scores for each move. We return the max. Despite returning the max, it will work for MIN too since for MIN the values are their negative (hence the order of values is reversed, so the higher the better for MIN too). End of explanation """ print(minimax_decision('B', fig52)) print(minimax_decision('C', fig52)) print(minimax_decision('D', fig52)) """ Explanation: Example We will now play the Fig52 game using this algorithm. Take a look at the Fig52Game from above to follow along. It is the turn of MAX to move, and he is at state A. He can move to B, C or D, using moves a1, a2 and a3 respectively. MAX's goal is to maximize the end value. So, to make a decision, MAX needs to know the values at the aforementioned nodes and pick the greatest one. 
After MAX, it is MIN's turn to play. So MAX wants to know what the values of B, C and D will be after MIN plays. The problem then becomes what move MIN will make at B, C and D. The successor states of all these nodes are terminal states, so MIN will pick the smallest value for each node. So, for B he will pick 3 (from move b1), for C he will pick 2 (from move c1) and for D he will again pick 2 (from move d3). Let's see this in code: End of explanation """ print(minimax_decision('A', fig52)) """ Explanation: Now MAX knows that the values for B, C and D are 3, 2 and 2 (produced by the above moves of MIN). The greatest is 3, which he will get with move a1. This is then the move MAX will make. Let's see the algorithm in full action: End of explanation """ from notebook import Canvas_minimax from random import randint minimax_viz = Canvas_minimax('minimax_viz', [randint(1, 50) for i in range(27)]) """ Explanation: Visualization Below we have a simple game visualization using the algorithm. After you run the command, click on the cell to move the game along. You can input your own values via a list of 27 integers. End of explanation """ pseudocode("Alpha-Beta-Search") """ Explanation: ALPHA-BETA Overview While Minimax is great for computing a move, it can get tricky when the number of game states gets bigger. The algorithm needs to search all the leaves of the tree, the number of which increases exponentially with its depth. For Tic-Tac-Toe, where the depth of the tree is 9 (after the 9th move, the game ends), we can have at most 9! terminal states (at most because not all terminal nodes are at the last level of the tree; some are higher up because the game ended before the 9th move). This isn't so bad, but for more complex problems like chess, we have over $10^{40}$ terminal nodes. Unfortunately we have not found a way to cut the exponent away, but we nevertheless have found ways to alleviate the workload. 
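The 9! bound quoted above is quick to check:

```python
import math

# Upper bound on Tic-Tac-Toe terminal states discussed above: at most
# 9! orderings of the nine moves that fill the board.
print(math.factorial(9))   # 362880
```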
Here we examine pruning the game tree, which means removing parts of it that we do not need to examine. The particular type of pruning is called alpha-beta, and the search as a whole is called alpha-beta search. To showcase what parts of the tree we don't need to search, we will take a look at the example Fig52Game. In the example game, we need to find the best move for player MAX at state A, which is the maximum value of MIN's possible moves at successor states. MAX(A) = MAX( MIN(B), MIN(C), MIN(D) ) MIN(B) is the minimum of 3, 12, 8, which is 3. So the above formula becomes: MAX(A) = MAX( 3, MIN(C), MIN(D) ) The next move we will check is c1, which leads to a terminal state with utility of 2. Before we continue searching under state C, let's pop back into our formula with the new value: MAX(A) = MAX( 3, MIN(2, c2, .... cN), MIN(D) ) We do not know how many moves state C allows, but we know that the first one results in a value of 2. Do we need to keep searching under C? The answer is no. The value MIN will pick on C will be at most 2. Since MAX already has the option to pick something greater than that, 3 from B, he does not need to keep searching under C. In alpha-beta we make use of two additional parameters for each state/node, a and b, that describe bounds on the possible moves. The parameter a denotes the best choice (highest value) for MAX along that path, while b denotes the best choice (lowest value) for MIN. As we go along we update a and b and prune a node's branch when the value of the node is worse than the value of a and b for MAX and MIN respectively. In the above example, after the search under state B, MAX had an a value of 3. So, when we found a value less than that (2) while searching under node C, we stopped searching that subtree. 
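The pruning rule just described can be sketched standalone on the same toy tree; the counter shows that the search never inspects C's later children (leaf values partly quoted in this notebook, partly illustrative):

```python
# Alpha-beta on the toy tree; `visited` records which leaves the search
# actually inspects (c2 and c3 should be pruned).
tree = {'a1': {'b1': 3, 'b2': 12, 'b3': 8},
        'a2': {'c1': 2, 'c2': 4, 'c3': 6},
        'a3': {'d1': 14, 'd2': 5, 'd3': 2}}
visited = []

def ab(node, a, b, maximizing):
    if not isinstance(node, dict):        # leaf
        visited.append(node)
        return node
    best = float('-inf') if maximizing else float('inf')
    for child in node.values():
        v = ab(child, a, b, not maximizing)
        if maximizing:
            best, a = max(best, v), max(a, v)
        else:
            best, b = min(best, v), min(b, v)
        if a >= b:                        # remaining siblings cannot matter
            break
    return best

root_value = ab(tree, float('-inf'), float('inf'), True)
print(root_value)     # 3
print(len(visited))   # 7 of the 9 leaves; c2 and c3 were never inspected
```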
You can read the pseudocode below: End of explanation """ %psource alphabeta_search """ Explanation: Implementation Like minimax, we again make use of functions max_value and min_value, but this time we utilise the a and b values, updating them and stopping the recursive call if we end up on nodes with values worse than a and b (for MAX and MIN). The algorithm finds the maximum value and returns the move that results in it. The implementation: End of explanation """ print(alphabeta_search('A', fig52)) """ Explanation: Example We will play the Fig52 Game with the alpha-beta search algorithm. It is the turn of MAX to play at state A. End of explanation """ print(alphabeta_search('B', fig52)) print(alphabeta_search('C', fig52)) print(alphabeta_search('D', fig52)) """ Explanation: The optimal move for MAX is a1, for the reasons given above. MIN will pick move b1 for B resulting in a value of 3, updating the a value of MAX to 3. Then, when we find under C a node of value 2, we will stop searching under that sub-tree since it is less than a. From D we have a value of 2. So, the best move for MAX is the one resulting in a value of 3, which is a1. Below we see the best moves for MIN starting from B, C and D respectively. Note that the algorithm in these cases works the same way as minimax, since all the nodes below the aforementioned states are terminal. End of explanation """ from notebook import Canvas_alphabeta from random import randint alphabeta_viz = Canvas_alphabeta('alphabeta_viz', [randint(1, 50) for i in range(27)]) """ Explanation: Visualization Below you will find the visualization of the alpha-beta algorithm for a simple game. Click on the cell after you run the command to move the game along. You can input your own values via a list of 27 integers. End of explanation """ game52 = Fig52Game() """ Explanation: PLAYERS So, we have finished the implementation of the TicTacToe and Fig52Game classes. What these classes do is define the rules of the games. 
We need more to create an AI that can actually play games. This is where random_player and alphabeta_player come in. query_player The query_player function allows you, a human opponent, to play the game. This function requires a display method to be implemented in your game class, so that successive game states can be displayed on the terminal, making it easier for you to visualize the game and play accordingly. random_player The random_player is a function that plays random moves in the game. That's it. There isn't much more to this guy. alphabeta_player The alphabeta_player, on the other hand, calls the alphabeta_search function, which returns the best move in the current game state. Thus, the alphabeta_player always plays the best move given a game state, assuming that the game tree is small enough to search entirely. play_game The play_game function will be the one that will actually be used to play the game. You pass as arguments to it an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches! LET'S PLAY SOME GAMES! Game52 Let's start by experimenting with the Fig52Game first. For that we'll create an instance of the subclass Fig52Game, inherited from the class Game: End of explanation """ print(random_player(game52, 'A')) print(random_player(game52, 'A')) """ Explanation: First we try out our random_player(game, state). Given a game state it will give us a random move every time: End of explanation """ print( alphabeta_player(game52, 'A') ) print( alphabeta_player(game52, 'B') ) print( alphabeta_player(game52, 'C') ) """ Explanation: The alphabeta_player(game, state) will always give us the best move possible, for the relevant player (MAX or MIN): End of explanation """ minimax_decision('A', game52) alphabeta_search('A', game52) """ Explanation: What the alphabeta_player does is simply call the alphabeta_full_search method; the two are essentially the same. 
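The play_game pattern described above (alternate the supplied player functions until a terminal state, then report the utility) can be sketched generically on the toy tree; this is a rough stand-in, not the module's implementation:

```python
import random

# A tiny driver in the spirit of play_game: call each player function in
# turn until a leaf is reached, then report the utility for MAX.
tree = {'A': {'a1': 'B', 'a2': 'C', 'a3': 'D'},
        'B': {'b1': 3, 'b2': 12, 'b3': 8},
        'C': {'c1': 2, 'c2': 4, 'c3': 6},
        'D': {'d1': 14, 'd2': 5, 'd3': 2}}

def play(state, *players):
    for player in players:                   # two plies in this toy game
        nxt = tree[state][player(state)]
        if not isinstance(nxt, str):
            return nxt                       # leaf: MAX's utility
        state = nxt

def random_move(state):
    return random.choice(list(tree[state]))

def min_reply(state):
    # stands in for an optimal MIN on this one-ply horizon
    return min(tree[state], key=lambda m: tree[state][m])

print(play('A', lambda s: 'a1', min_reply))  # 3
print(play('A', random_move, min_reply))     # 3 or 2, depending on MAX's luck
```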
In the module, both alphabeta_full_search and minimax_decision have been implemented. They both do the same job and return the same thing, which is the best move in the current state. It's just that alphabeta_full_search is more efficient in terms of time because it prunes the search tree and hence explores a smaller number of states. End of explanation """ game52.play_game(alphabeta_player, alphabeta_player) game52.play_game(alphabeta_player, random_player) game52.play_game(query_player, alphabeta_player) game52.play_game(alphabeta_player, query_player) """ Explanation: Note that if you are the first player then alphabeta_player plays as MIN, and if you are the second player then alphabeta_player plays as MAX. This happens because that's the way the game is defined in the class Fig52Game. Having a look at the code of this class should make it clear. TicTacToe Now let's play TicTacToe. First we initialize the game by creating an instance of the subclass TicTacToe, inherited from the class Game: End of explanation """ ttt = TicTacToe() """ Explanation: We can print a state using the display method: End of explanation """ ttt.display(ttt.initial) """ Explanation: Hmm, so that's the initial state of the game; no X's and no O's. Let us create a new game state by ourselves to experiment: End of explanation """ my_state = GameState( to_move = 'X', utility = 0, board = {(1,1): 'X', (1,2): 'O', (1,3): 'X', (2,1): 'O', (2,3): 'O', (3,1): 'X', }, moves = [(2,2), (3,2), (3,3)] ) """ Explanation: So, what does this game state look like? End of explanation """ ttt.display(my_state) """ Explanation: The random_player will behave as it is supposed to, i.e. 
pseudo-randomly: End of explanation """ alphabeta_player(ttt, my_state) """ Explanation: But the alphabeta_player will always give the best move, as expected: End of explanation """ ttt.play_game(random_player, alphabeta_player) """ Explanation: Now let's make two players play against each other. We use the play_game function for this. The play_game function makes players play the match against each other and returns the utility for the first player, of the terminal state reached when the game ends. Hence, for our TicTacToe game, if we get the output +1, the first player wins, -1 if the second player wins, and 0 if the match ends in a draw. End of explanation """ for _ in range(10): print(ttt.play_game(alphabeta_player, alphabeta_player)) """ Explanation: The output is (usually) -1, because random_player loses to alphabeta_player. Sometimes, however, random_player manages to draw with alphabeta_player. Since an alphabeta_player plays perfectly, a match between two alphabeta_players should always end in a draw. Let's see if this happens: End of explanation """ for _ in range(10): print(ttt.play_game(random_player, alphabeta_player)) """ Explanation: A random_player should never win against an alphabeta_player. Let's test that. End of explanation """ from notebook import Canvas_TicTacToe bot_play = Canvas_TicTacToe('bot_play', 'random', 'alphabeta') """ Explanation: Canvas_TicTacToe(Canvas) This subclass is used to play TicTacToe game interactively in Jupyter notebooks. TicTacToe class is called while initializing this subclass. Let's have a match between random_player and alphabeta_player. Click on the board to call players to make a move. End of explanation """ rand_play = Canvas_TicTacToe('rand_play', 'human', 'random') """ Explanation: Now, let's play a game ourselves against a random_player: End of explanation """ ab_play = Canvas_TicTacToe('ab_play', 'human', 'alphabeta') """ Explanation: Yay! We (usually) win. 
But we cannot win against an alphabeta_player, however hard we try. End of explanation """
tclaudioe/Scientific-Computing
SC1v2/Bonus - 11 - Pendulum, double pendulum and chaos.ipynb
bsd-3-clause
from ipywidgets import interact, fixed, IntSlider, FloatSlider, Checkbox import sympy as sym sym.init_printing() import numpy as np import ipywidgets as widgets from scipy.integrate import odeint import matplotlib.pyplot as plt import matplotlib.cm as cm import matplotlib matplotlib.rc('xtick', labelsize=20) matplotlib.rc('ytick', labelsize=20) from matplotlib import rc rc('text', usetex=True) from mpl_toolkits.mplot3d import Axes3D from matplotlib.patches import Wedge import matplotlib.patches as mpatches from matplotlib.collections import PatchCollection """ Explanation: <center> <img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%"> <h1> INF285 - Computación Científica </h1> <h2> Pendulum, double pendulum and chaos </h2> <h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2> <h2> Version: 1.02</h2> </center> <div id='toc' /> Table of Contents Source The Pendulum Double Pendulum and chaos Challenge: Adding friction and forcing Acknowledgements End of explanation """ plt.figure(figsize=(8,8)) ax=plt.gca() theta0=np.pi/4 x=np.sin(theta0) y=-np.cos(theta0) plt.plot([0, x],[0, y],'-k') plt.scatter(x, y, s=200, marker='o', c='b') plt.scatter(0, 0, s=200, marker='o', c='k') plt.xlim([-1.5,1.5]) plt.ylim([-1.5,0.5]) plt.grid(True) patches=[] wedge = mpatches.Wedge((0, 0), 0.7, 270, 270+45, ec="none") patches.append(wedge) collection = PatchCollection(patches, cmap=plt.cm.hsv, alpha=0.3) ax.add_collection(collection) plt.text(0.1, -0.4, r'$\theta$', fontsize=20) plt.text(0.8, -0.7, r'$m$', fontsize=20) plt.text(0.35, -0.25, r'$l$', fontsize=20) plt.show() """ Explanation: <div id='source' /> Sources Back to TOC See Textbook: Numerical Analysis, Timothy Sauer, 2nd Edition, page 305. 
https://en.wikipedia.org/wiki/Pendulum_(mathematics) https://en.wikipedia.org/wiki/Double_pendulum https://scienceworld.wolfram.com/physics/DoublePendulum.html https://demonstrations.wolfram.com/DoublePendulum/ World Pendulum Alliance: http://wpa.tecnico.ulisboa.pt/~wpa.daemon/ World Pendulum Alliance at USM: http://wpa.tecnico.ulisboa.pt/~wpa.daemon/hei-partners/p10-universidad-tecnica-federico-santa-maria-utfsm/ <div id='source' /> The Pendulum Back to TOC End of explanation """ def myFirstPendulum(y,t,g,l,m): theta, omega = y return np.array([omega,-g/l*np.sin(theta)]) """ Explanation: In the previous Figure we showed the components that will be included in the modeling of The Pendulum without friction under the influence of gravity. This means we have the pivot denoted by the black point at coordinates $(0,0)$, the mass $m$ hanging from the rigid rod of length $l$, and the angle $\theta$ generated between the vertical line below the pivot and the rigid rod. Differential equation for the evolution of the angle of a pendulum: \begin{align} \ddot{\theta}(t) &=-\dfrac{g}{l}\,\sin(\theta(t))\\ \theta(0) &= \theta_0\\ \dot{\theta}(0) &= \omega_0 \end{align} This is actually an Initial Value Problem, i.e. it is a differential equation that models the evolution of the angle between the vertical axis and a rigid pendulum. 
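As a quick, self-contained sanity check on this IVP (independent of the odeint-based solver used below), a hand-rolled RK4 integrator can be compared against the small-angle solution $\theta(t) \approx \theta_0\cos(\sqrt{g/l}\,t)$; all parameter values here are illustrative:

```python
import math

# Hand-rolled RK4 for the system (theta, omega) with
# theta' = omega, omega' = -(g/l) sin(theta); a self-contained sketch,
# independent of the odeint-based solver used in this notebook.
def rk4_pendulum(theta0, omega0, g=9.8, l=1.0, T=1.0, dt=1e-3):
    def f(theta, omega):
        return omega, -(g / l) * math.sin(theta)
    th, om = theta0, omega0
    for _ in range(round(T / dt)):
        k1t, k1o = f(th, om)
        k2t, k2o = f(th + 0.5*dt*k1t, om + 0.5*dt*k1o)
        k3t, k3o = f(th + 0.5*dt*k2t, om + 0.5*dt*k2o)
        k4t, k4o = f(th + dt*k3t, om + dt*k3o)
        th += dt * (k1t + 2*k2t + 2*k3t + k4t) / 6
        om += dt * (k1o + 2*k2o + 2*k3o + k4o) / 6
    return th, om

# For a small initial angle, the solution should track the linearized
# pendulum theta(t) = theta0 * cos(sqrt(g/l) * t).
theta_T, _ = rk4_pendulum(theta0=0.05, omega0=0.0, T=1.0)
linear = 0.05 * math.cos(math.sqrt(9.8 / 1.0) * 1.0)
print(abs(theta_T - linear) < 1e-4)   # True
```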
Translating the equation for the pendulum into a code: End of explanation """ def solve_pendulum(N=1000, T=10,y0=np.array([3*np.pi/2, 0.0]),args=(9.8,1,1)): t = np.linspace(0, T, N) sol = odeint(myFirstPendulum, y0, t, args=args) return t, sol '''def solve_pendulum0(N=1000): y0 = [3*np.pi/2, 0.0] t = np.linspace(0, 10, N) sol = odeint(myFirstPendulum, y0, t, args=(9.8,0.2,0.1)) thetas = sol[:,0] return t, sol''' t, sol = solve_pendulum() thetas=sol[:,0] omegas=sol[:,1] plt.figure(figsize=(5,5)) plt.plot(t, thetas, 'b', label=r'$\theta$(t)') plt.plot(t, omegas, 'g', label=r'$\omega$(t)') plt.legend(loc='lower right',fontsize=20) plt.xlabel('t') plt.grid() plt.title('What are we seeing?', fontsize=20) plt.show() """ Explanation: Now solving the differential equation End of explanation """ def plot_pendulum(k,thetas,t): plt.figure(figsize=(10,5)) ax1 = plt.subplot(121) ax2 = plt.subplot(122) x = np.sin(thetas[k]) y = -np.cos(thetas[k]) ax1.plot([0, x],[0, y],'-k') ax1.scatter(x, y, s=200, marker='o', c='b') ax1.set_xlim([-1.5,1.5]) ax1.set_ylim([-1.5,1.5]) ax1.grid(True) ax1.set_title("Time t= %.2f" % (t[k]), fontsize=20) ax2.plot(t[:k], thetas[:k], 'b', label=r'$\theta$(t)') ax2.legend(loc='lower right',fontsize=20) ax2.set_xlabel('t', fontsize=20) ax2.set_xlim([0,t[-1]]) ax2.set_ylim([np.min(thetas)*0.9,np.max(thetas)*1.1]) ax2.grid(True) plt.show() interact(plot_pendulum, k=widgets.IntSlider(min=0, max=1000, step=10, value=0), thetas=fixed(thetas), t=fixed(t)) """ Explanation: Now, this is a pendulum 'moving'! (with interact) End of explanation """ def plot_pendulum2(k,theta0, omega0): plt.figure(figsize=(10,5)) # We need to solve the whole problem again for each different initial angle, # since we are changing the initial condition. 
t, sol = solve_pendulum(y0=np.array([theta0, omega0])) thetas = sol[:,0] ax1 = plt.subplot(121) ax2 = plt.subplot(122) x = np.sin(thetas[k]) y = -np.cos(thetas[k]) ax1.plot([0, x],[0, y],'-k') ax1.scatter(x, y, s=200, marker='o', c='b') ax1.set_xlim([-1.5,1.5]) ax1.set_ylim([-1.5,1.5]) ax1.grid(True) ax1.set_title("Time t= %.2f" % (t[k]), fontsize=20) ax2.plot(t[:k], thetas[:k], 'b', label=r'$\theta$(t)') ax2.legend(loc='lower right',fontsize=20) ax2.set_xlabel('t', fontsize=20) ax2.set_xlim([0,t[-1]]) ax2.set_ylim([np.min(thetas)*0.9,np.max(thetas)*1.1]) ax2.grid(True) plt.show() interact(plot_pendulum2,k=widgets.IntSlider(min=0, max=999, step=10, value=0), theta0=widgets.FloatSlider(min=0, max=2*np.pi, step=2*np.pi/50, value=3*np.pi/2), omega0=widgets.FloatSlider(min=-5, max=5, step=0.01, value=0)) """ Explanation: In this case we can change the initial angle $\theta_0$ and the initial angular velocity $\omega_0$. We only plot $\theta(t)$. End of explanation """ def plot_pendulum3(k, theta0, omega0, M, FLAG_3D=False): plt.figure(figsize=(16,8)) colors = cm.rainbow(np.linspace(0, 1, M)) ax1 = plt.subplot(121) if FLAG_3D: ax2 = plt.subplot(122, projection='3d') else: ax2 = plt.subplot(122) Ls=np.linspace(0.05,0.2,10)[::-1] for i in np.arange(M): l=Ls[i] t, sol = solve_pendulum(y0=np.array([theta0, omega0]), args=(9.8,l,1)) thetas = sol[:,0] # We are just dividing by 0.2 to scale the rods for plottting purposes. 
x = np.sin(thetas[k])*l/0.2 y = -np.cos(thetas[k])*l/0.2 ax1.plot([0, x],[0, y],'-k',alpha=0.2) ax1.scatter(x, y, s=200, marker='o', color=colors[i]) ax1.set_xlim([-1.5,1.5]) ax1.set_ylim([-1.5,1.5]) ax1.grid(True) ax1.set_title("Time t= %.2f" % (t[k]), fontsize=20) if FLAG_3D: ax2.plot(t[:k], t[:k]*0+l, thetas[:k]) else: ax2.plot(t[:k], thetas[:k], color=colors[i]) ax2.set_ylim([np.min(thetas)*0.9, np.max(thetas)*1.1]) ax2.grid(True) if FLAG_3D: ax2.view_init(elev=60,azim=35) ax2.set_ylabel(r'$l$', fontsize=20) ax2.set_zlabel(r'$\theta$', fontsize=20) ax2.set_xlabel(r'$t$', fontsize=20) plt.show() k_widget3 = IntSlider(min=0, max=1000-1, step=10, value=0) theta0_widget3 = FloatSlider(min=0, max=2*np.pi, step=2*np.pi/50, value=3*np.pi/2) omega0_widget3 = FloatSlider(min=-5, max=5, step=0.1, value=0) M_widget3 = IntSlider(min=1, max=10, step=1, value=5) interact(plot_pendulum3, k=k_widget3, theta0=theta0_widget3, omega0=omega0_widget3, M=M_widget3, FLAG_3D=Checkbox(value=False, description='Show 3D plot')) """ Explanation: Now, we model $M$ pendulums with different rod length $l$. 
End of explanation """ plt.figure(figsize=(8,8)) ax=plt.gca() theta0=np.pi/4 theta1=np.pi/3 x1=np.sin(theta0) y1=-np.cos(theta0) x2=x1+0.7*np.sin(theta1) y2=y1-0.7*np.cos(theta1) plt.plot([0, x1],[0, y1],'-k') plt.plot([x1, x2],[y1, y2],'-k') plt.scatter(x1, y1, s=200, marker='o', c='b') plt.scatter(x2, y2, s=200, marker='o', c='r') plt.scatter(0, 0, s=200, marker='o', c='k') plt.xlim([-1.5,2.5]) plt.ylim([-1.5,0.5]) plt.grid(True) patches=[] wedge1 = mpatches.Wedge((0, 0), 0.7, 270, 270+45, ec='none') wedge2 = mpatches.Wedge((x1, y1), 0.7, 270, 270+60, ec='none') patches.append(wedge1) patches.append(wedge2) collection = PatchCollection(patches, cmap=plt.cm.hsv, alpha=0.3) ax.add_collection(collection) plt.text(0.1, -0.4, r'$\theta_1$', fontsize=20) plt.text(x1+0.1, y1-0.3, r'$\theta_2$', fontsize=20) plt.text(0.8, -0.7, r'$m_1$', fontsize=20) plt.text(x2+0.1, y2, r'$m_2$', fontsize=20) plt.text(0.35, -0.25, r'$l_1$', fontsize=20) plt.text(0.5*(x1+x2)+0.1, 0.5*(y1+y2), r'$l_2$', fontsize=20) plt.show() """ Explanation: <div id='doublependulum' /> Double Pendulum and chaos Back to TOC Source: https://demonstrations.wolfram.com/DoublePendulum/ and https://scienceworld.wolfram.com/physics/DoublePendulum.html In this case the components are pretty much the same; the main difference is that now we have a second pendulum that is attached to the mass $m_1$. See the Figure below for a sketch of how the second pendulum is included. 
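The Cartesian positions of the two masses follow from the angles exactly as in the plotting code above; as a minimal standalone check (the rod lengths are illustrative):

```python
import math

# Cartesian positions of the two masses from the angles, matching the
# geometry used in the plotting code above.
def positions(th1, th2, l1=1.0, l2=0.7):
    x1, y1 = l1 * math.sin(th1), -l1 * math.cos(th1)
    x2, y2 = x1 + l2 * math.sin(th2), y1 - l2 * math.cos(th2)
    return (x1, y1), (x2, y2)

# Both rods hanging straight down: masses at -l1 and -(l1 + l2).
print(positions(0.0, 0.0))   # ((0.0, -1.0), (0.0, -1.7))
```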
End of explanation """ def doublePendulum(y, t, l1, l2, m1, m2, g): th1 = y[0] th2 = y[1] th1p = y[2] th2p = y[3] th1pp = -((g*(2*m1 + m2)*np.sin(th1) + g*m2*np.sin(th1 - 2*th2) + 2*m2*(l2*th2p**2 + l1*th1p**2*np.cos(th1 - th2))*np.sin(th1 - th2)) /(2*l1*(m1 + m2 - m2*np.cos(th1 - th2)**2))) th2pp = (((m1 + m2)*(l1*th1p**2 + g*np.cos(th1))+ l2*m2*th2p**2*np.cos(th1 - th2))* np.sin(th1 - th2))/(l2*(m1 + m2 - m2*np.cos(th1 - th2)**2)) return np.array([th1p,th2p,th1pp,th2pp]) def plotDoblePendulum(k=0, T=10, N=1000, th10=np.pi, ome10=2, th20=-np.pi/2, ome20=-2): l1 = 1 l2 = 1 m1 = 1 m2 = 1 g = 9.8 t = np.linspace(0,T,N) y0 = np.array([th10, th20, ome10, ome20]) sol = odeint(doublePendulum, y0, t, args=(l1, l2, m1, m2, g)) # Plotting trajectories theta1 = sol[:k,0] theta2 = sol[:k,1] x1 = l1*np.sin(theta1) y1 = -l1*np.cos(theta1) x2 = x1+l2*np.sin(theta2) y2 = y1-l2*np.cos(theta2) plt.figure(figsize=(16,8)) ax1 = plt.subplot(121) ax2 = plt.subplot(122) ax1.plot(x1,y1,'-ob', alpha=0.05) ax1.plot(x2,y2,'-or', alpha=0.05) # Plotting function values ax2.plot(t[:k], theta1, 'b', label=r'$\theta_1$(t)') ax2.plot(t[:k], theta2, 'r', label=r'$\theta_2$(t)') ax2.legend(loc='lower right',fontsize=20) ax2.set_xlabel(r'$t$', fontsize=20) ax2.set_xlim([0,t[-1]]) ax2.grid(True) # Plotting masses theta1 = sol[k,0] theta2 = sol[k,1] x1 = l1*np.sin(theta1) y1 = -l1*np.cos(theta1) x2 = x1+l2*np.sin(theta2) y2 = y1-l2*np.cos(theta2) ax1.plot([0, x1],[0, y1],'-k') ax1.plot([x1, x2],[y1, y2],'-k') ax1.scatter(x1, y1, s=200, marker='o', c='b') ax1.scatter(x2, y2, s=200, marker='o', c='r') # Extra plottting configurations ax1.grid(True) ax1.set_title("Time t= %.2f" % (t[k]), fontsize=20) #plt.axis('equal') ax1.set_xlim([-2.5,2.5]) ax1.set_ylim([-2.5,2.5]) plt.show() k_widget = IntSlider(min=0, max=1000-1, step=10, value=0) """ Explanation: The equations will be omitted but we refer to the interested reader to look at https://demonstrations.wolfram.com/DoublePendulum/ and 
https://scienceworld.wolfram.com/physics/DoublePendulum.html. End of explanation """ interact(plotDoblePendulum,k=k_widget,T=(10,50,10), N=fixed(1000), th10=(-np.pi,np.pi,np.pi/8), ome10=(-5,5,0.1), th20=(-np.pi,np.pi,np.pi/8), ome20=(-5,5,0.1)) """ Explanation: Running the double pendulum numerical simulation Here, we added the trajectory of both pendulums. In the previous case, i.e. only one pendulum, the trajectory was always within a circle, however the second pendulum does not have to follow that! End of explanation """ def solveDoblePendulum(T=10 , N=100, y0=np.array([np.pi/2,np.pi/2,2,-2]), args=(1,1,1,1,9.8)): l1, l2, m1, m2, g = args t = np.linspace(0,T,N) #y0 = np.array([np.pi/2,np.pi/2,2,-2]) sol = odeint(doublePendulum, y0, t, args=(l1, l2, m1, m2, g)) return t, sol def plotFastDoblePendulum(axT, axF, t, sol, T, k=0, args=(1,1,1,1,9.8), color_1st_mass='b', color_2nd_mass='r'): l1, l2, m1, m2, g = args theta1 = sol[:k,0] theta2 = sol[:k,1] # Plotting angles axF.plot(t[:k],theta1,'-', c=color_1st_mass) axF.plot(t[:k],theta2,'-', c=color_2nd_mass) axF.grid(True) axF.set_title("Time t= %.2f" % (t[k]), fontsize=20) axF.set_xlim([0, T]) # Plotting Trajectories x1 = l1*np.sin(theta1) y1 = -l1*np.cos(theta1) x2 = x1+l2*np.sin(theta2) y2 = y1-l2*np.cos(theta2) axT.plot(x1,y1,'-o', alpha=0.05, c=color_1st_mass) axT.plot(x2,y2,'-o', alpha=0.05, c=color_2nd_mass) # Plotting masses Trajectories theta1 = sol[k,0] theta2 = sol[k,1] x1 = l1*np.sin(theta1) y1 = -l1*np.cos(theta1) x2 = x1+l2*np.sin(theta2) y2 = y1-l2*np.cos(theta2) axT.plot([0, x1],[0, y1],'-k') axT.plot([x1, x2],[y1, y2],'-k') axT.scatter(x1, y1, s=200, marker='o', c=color_1st_mass) axT.scatter(x2, y2, s=200, marker='o', c=color_2nd_mass) # Extra plottting configurations axT.grid(True) axT.set_title("Time t= %.2f" % (t[k]), fontsize=20) axT.set_xlim([-2.5,2.5]) axT.set_ylim([-2.5,2.5]) """ Explanation: Showing Chaos in a double pendulum! 
The main idea here is to run two simulations where the only difference between them is that one initial condition was perturbed with respect to the other. This perturbation will be denoted as $\delta$ in the code. See the comments in the next cell. End of explanation """ delta = -0.1 y01=np.array([np.pi/2,np.pi/2,2,-2]) y02=np.array([np.pi/2,np.pi/2,2,-2+delta]) N = 1000 T = 20 args = (1,1,1,1,9.8) t1, sol1 = solveDoblePendulum(T=T, N=N, y0=y01, args=args) t2, sol2 = solveDoblePendulum(T=T, N=N, y0=y02, args=args) def plotSeveralPendulums(k=10): plt.figure(figsize=(15,10)) ax1 = plt.subplot(231) ax2 = plt.subplot(232) ax3 = plt.subplot(233) ax4 = plt.subplot(234) ax5 = plt.subplot(235) ax6 = plt.subplot(236) plotFastDoblePendulum(ax1, ax4, t1, sol1, T, k, args=args) plotFastDoblePendulum(ax2, ax5, t1, sol1, T, k, args=args) plotFastDoblePendulum(ax2, ax5, t2, sol2, T, k, args=args, color_1st_mass='m', color_2nd_mass='g') plotFastDoblePendulum(ax3, ax6, t2, sol2, T, k, args=args, color_1st_mass='m', color_2nd_mass='g') min_y = min(np.min(sol1[:]),np.min(sol2[:])) max_y = max(np.max(sol1[:]),np.max(sol2[:])) ax4.set_ylim([min_y, max_y]) ax5.set_ylim([min_y, max_y]) ax6.set_ylim([min_y, max_y]) plt.show() """ Explanation: Here we add the perturbation $\delta$. Please play with the values and analyze the output! End of explanation """ interact(plotSeveralPendulums,k=(0,N-1,10)) """ Explanation: Notice that the original double pendulum is on the first column of plots, the perturbed double pendulum is on the third column, and we include the data for both of them in the second (central) column of plots. End of explanation """
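One way to quantify the sensitivity just demonstrated is to integrate the same equations of motion for two nearby initial conditions and measure how the angular separation grows. The sketch below is self-contained: it reuses the acceleration expressions from doublePendulum above, swaps odeint for a hand-rolled RK4 stepper, and the size of the nudge is illustrative:

```python
import math

# The same accelerations as in doublePendulum above, in plain Python,
# so this sketch stands alone; RK4 replaces odeint here.
def accel(th1, th2, w1, w2, l1=1.0, l2=1.0, m1=1.0, m2=1.0, g=9.8):
    d = th1 - th2
    den = m1 + m2 - m2 * math.cos(d) ** 2
    a1 = -((g * (2*m1 + m2) * math.sin(th1) + g * m2 * math.sin(th1 - 2*th2)
            + 2 * m2 * (l2 * w2**2 + l1 * w1**2 * math.cos(d)) * math.sin(d))
           / (2 * l1 * den))
    a2 = (((m1 + m2) * (l1 * w1**2 + g * math.cos(th1))
           + l2 * m2 * w2**2 * math.cos(d)) * math.sin(d)) / (l2 * den)
    return a1, a2

def rk4_step(y, dt):
    def f(s):
        th1, th2, w1, w2 = s
        a1, a2 = accel(th1, th2, w1, w2)
        return (w1, w2, a1, a2)
    k1 = f(y)
    k2 = f(tuple(y[i] + 0.5 * dt * k1[i] for i in range(4)))
    k3 = f(tuple(y[i] + 0.5 * dt * k2[i] for i in range(4)))
    k4 = f(tuple(y[i] + dt * k3[i] for i in range(4)))
    return tuple(y[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6
                 for i in range(4))

def angular_separation(delta=1e-6, T=10.0, dt=1e-3):
    ya = (math.pi/2, math.pi/2, 2.0, -2.0)          # same start as above
    yb = (math.pi/2, math.pi/2, 2.0, -2.0 + delta)  # omega_2 nudged
    for _ in range(round(T / dt)):
        ya, yb = rk4_step(ya, dt), rk4_step(yb, dt)
    return math.hypot(ya[0] - yb[0], ya[1] - yb[1])

# A 1e-6 nudge in one velocity grows into a macroscopic angular
# difference: the hallmark of the chaos demonstrated above.
print(angular_separation())
```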
quantopian/research_public
notebooks/data/quandl.fred_gdpdef/notebook.ipynb
apache-2.0
# import the dataset from quantopian.interactive.data.quandl import fred_gdpdef # Since this data is public domain and provided by Quandl for free, there is no _free version of this # data set, as found in the premium sets. This import gets you the entirety of this data set. # import data operations from odo import odo # import other libraries we will use import pandas as pd import matplotlib.pyplot as plt fred_gdpdef.sort('asof_date') """ Explanation: Quandl: US GDP, Implicit Price Deflator In this notebook, we'll take a look at data set , available on Quantopian. This dataset spans from 1947 through the current day. It contains the value for the United States GDP, using the implicit price deflator by the US Federal Reserve via the FRED data initiative. We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website. Blaze Before we dig into the data, we want to tell you about how you generally access Quantopian partner data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets. Some of these sets (though not this one) are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side. To learn more about using Blaze and generally accessing Quantopian partner data, clone this tutorial notebook. With preamble in place, let's get started: End of explanation """ fred_gdpdef.count() """ Explanation: The data goes all the way back to 1947 and is updated quarterly. Blaze provides us with the first 10 rows of the data for display. 
Just to confirm, let's just count the number of rows in the Blaze expression: End of explanation """ gdpdef_df = odo(fred_gdpdef, pd.DataFrame) gdpdef_df.plot(x='asof_date', y='value') plt.xlabel("As Of Date (asof_date)") plt.ylabel("GDP Rate") plt.title("US GDP, Implicit Price Deflator") plt.legend().set_visible(False) """ Explanation: Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame End of explanation """
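With the deflator in a DataFrame, a natural follow-up is the implied inflation rate; pandas' pct_change method computes exactly the quarter-over-quarter growth sketched below on made-up numbers:

```python
# Quarter-over-quarter growth of a price deflator is an inflation rate;
# the deflator values below are invented for illustration only.
deflator = [100.0, 101.0, 102.01, 103.03]

def pct_change(series):
    return [b / a - 1.0 for a, b in zip(series, series[1:])]

quarterly = pct_change(deflator)
annualized = [(1.0 + q) ** 4 - 1.0 for q in quarterly]
print(round(quarterly[0], 4))    # 0.01  (1% in the quarter)
print(round(annualized[0], 4))   # 0.0406 (about 4.06% at an annual rate)
```

On the real data, the same quantity would come from gdpdef_df sorted by asof_date, using the DataFrame's pct_change method on the value column.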
sdpython/ensae_teaching_cs
_doc/notebooks/data/deal_flow_espace_vert.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline """ Explanation: Deal flow espaces verts 2018 - 2019 This dataset is provided for a Python module project whose goal is to share a plotting function. An example of such a project is available: td2a_plotting. End of explanation """ from ensae_teaching_cs.data import deal_flow_espace_vert_2018_2019 filenames = deal_flow_espace_vert_2018_2019() filenames import pandas df19 = pandas.read_excel(filenames[0], engine='openpyxl') df19.head() """ Explanation: Retrieving the data End of explanation """ import chardet with open(filenames[1], 'rb') as f: c = f.read() chardet.detect(c) df18 = pandas.read_csv(filenames[1], encoding="Windows-1252", sep=";", decimal=',') df18.head() """ Explanation: The encoding of the csv file is not obvious to guess. We use the chardet module for that. End of explanation """ df18.columns df19.columns """ Explanation: Let's compare the columns. End of explanation """ columns = list(sorted(set(list(df18.columns) + list(df19.columns)))) + ['montant'] from ipywidgets import interact @interact(annee=[2018, 2019], column=columns, x=(0, 10000000, 100000)) def show_rows(annee=2018, column='montant', x=100000): if annee == 2018: if column == 'montant': column = 'iMontantTotal' return df18[df18[column] >= x].sort_values(column).T else: if column == 'montant': column = 'iMontant total' return df19[df19[column] >= x].sort_values(column).T """ Explanation: Dynamic views Let's take a quick look at the data. 
End of explanation """ lim_metropole = [-5, 10, 41, 52] df18_metro = df18[((df18.COMMUNE_X >= lim_metropole[0]) & (df18.COMMUNE_X <= lim_metropole[1]) & (df18.COMMUNE_Y >= lim_metropole[2]) & (df18.COMMUNE_Y <= lim_metropole[3]))] df18.shape, df18_metro.shape import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt fig = plt.figure(figsize=(7, 7)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree()) ax.set_extent(lim_metropole) ax.add_feature(cfeature.OCEAN.with_scale('50m')) ax.add_feature(cfeature.COASTLINE.with_scale('50m')) ax.add_feature(cfeature.RIVERS.with_scale('50m')) ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':') ax.scatter(df18_metro.COMMUNE_X, df18_metro.COMMUNE_Y, s=df18_metro.iMontantTotal ** 0.5 / 20, alpha=0.5) ax.set_title('France 2018\ninvestissements verts'); """ Explanation: A map We look at the projects in metropolitan France in 2018. End of explanation """
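The metropolitan-France selection above is just a bounding-box test on coordinate pairs; on plain tuples it reduces to the following (the sample points are invented):

```python
# Keep only points inside [xmin, xmax] x [ymin, ymax]; the box matches
# lim_metropole above, the sample points are invented.
xmin, xmax, ymin, ymax = -5, 10, 41, 52

def in_metropole(x, y):
    return xmin <= x <= xmax and ymin <= y <= ymax

points = [(2.35, 48.85), (55.45, -20.88), (5.37, 43.30)]  # two inside, one not
kept = [p for p in points if in_metropole(*p)]
print(kept)   # [(2.35, 48.85), (5.37, 43.3)]
```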
StingraySoftware/notebooks
Lightcurve/Analyze light curves chunk by chunk - an example.ipynb
mit
from stingray.simulator.simulator import Simulator from scipy.ndimage.filters import gaussian_filter1d from stingray.utils import baseline_als from scipy.interpolate import interp1d np.random.seed(1034232) # Simulate a light curve with increasing variability and flux length = 10000 dt = 0.1 times = np.arange(0, length, dt) # Create a light curve with powerlaw variability (index 1), # and smooth it to eliminate some Gaussian noise. We will simulate proper # noise with the `np.random.poisson` function. # Both should not be used together, because they alter the noise properties. sim = Simulator(dt=dt, N=int(length/dt), mean=50, rms=0.4) counts_cont = sim.simulate(1).counts counts_cont_init = gaussian_filter1d(counts_cont, 200) # --------------------- # Renormalize so that the light curve has increasing flux and r.m.s. # variability. # --------------------- # The baseline function cannot be used with too large arrays. # Since it's just an approximation, we will just use one every # ten array elements to calculate the baseline mask = np.zeros_like(times, dtype=bool) mask[::10] = True print (counts_cont_init[mask]) baseline = baseline_als(times[mask], counts_cont_init[mask], 1e10, 0.001) base_func = interp1d(times[mask], baseline, bounds_error=False, fill_value='extrapolate') counts_cont = counts_cont_init - base_func(times) counts_cont -= np.min(counts_cont) counts_cont += 1 counts_cont *= times * 0.003 # counts_cont += 500 counts_cont += 500 # Finally, Poissonize it! counts = np.random.poisson(counts_cont) plt.plot(times, counts_cont, zorder=10, label='Continuous light curve') plt.plot(times, counts, label='Final light curve') plt.legend() """ Explanation: R.m.s. - intensity diagram This diagram is used to characterize the variability of black hole binaries and AGN (see e.g. Plant et al., arXiv:1404.7498; McHardy 2010 2010LNP...794..203M for a review). In Stingray it is very easy to calculate. 
Setup: simulate a light curve with a variable rms and rate
We simulate a light curve with powerlaw variability, and then we rescale it so that it has increasing flux and r.m.s. variability.
End of explanation
"""

# This function can be found in stingray.utils
def excess_variance(lc, normalization='fvar'):
    """Calculate the excess variance.

    Vaughan et al. 2003, MNRAS 345, 1271 give three measurements
    of source intrinsic variance: the *excess variance*, defined as

    .. math:: \sigma_{XS} = S^2 - \overline{\sigma_{err}^2}

    the *normalized excess variance*, defined as

    .. math:: \sigma_{NXS} = \sigma_{XS} / \overline{x}^2

    and the *fractional mean square variability amplitude*, or
    :math:`F_{var}`, defined as

    .. math:: F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x}^2}}

    Parameters
    ----------
    lc : a :class:`Lightcurve` object
    normalization : str
        if 'fvar', return the fractional mean square variability
        :math:`F_{var}`.
        If 'none', return the unnormalized excess variance
        :math:`\sigma_{XS}`.
        If 'norm_xs', return the normalized excess variance
        :math:`\sigma_{NXS}`

    Returns
    -------
    var_xs : float
    var_xs_err : float
    """
    lc_mean_var = np.mean(lc.counts_err ** 2)
    lc_actual_var = np.var(lc.counts)
    var_xs = lc_actual_var - lc_mean_var
    mean_lc = np.mean(lc.counts)
    mean_ctvar = mean_lc ** 2
    var_nxs = var_xs / mean_lc ** 2
    fvar = np.sqrt(var_xs / mean_ctvar)

    N = len(lc.counts)
    var_nxs_err_A = np.sqrt(2 / N) * lc_mean_var / mean_lc ** 2
    var_nxs_err_B = np.sqrt(mean_lc ** 2 / N) * 2 * fvar / mean_lc
    var_nxs_err = np.sqrt(var_nxs_err_A ** 2 + var_nxs_err_B ** 2)
    fvar_err = var_nxs_err / (2 * fvar)

    if normalization == 'fvar':
        return fvar, fvar_err
    elif normalization == 'norm_xs':
        return var_nxs, var_nxs_err
    elif normalization == 'none' or normalization is None:
        return var_xs, var_nxs_err * mean_lc ** 2

def fvar_fun(lc):
    return excess_variance(lc, normalization='fvar')

def norm_exc_var_fun(lc):
    return excess_variance(lc, normalization='norm_xs')

def exc_var_fun(lc):
    return excess_variance(lc, normalization='none')

def rate_fun(lc):
    return lc.meancounts, np.std(lc.counts)

lc = Lightcurve(times, counts, gti=[[-0.5*dt, length - 0.5*dt]], dt=dt)

start, stop, res = lc.analyze_lc_chunks(1000, np.var)
var = res

start, stop, res = lc.analyze_lc_chunks(1000, rate_fun)
rate, rate_err = res

start, stop, res = lc.analyze_lc_chunks(1000, fvar_fun)
fvar, fvar_err = res

start, stop, res = lc.analyze_lc_chunks(1000, exc_var_fun)
evar, evar_err = res

start, stop, res = lc.analyze_lc_chunks(1000, norm_exc_var_fun)
nvar, nvar_err = res

plt.errorbar(rate, fvar, xerr=rate_err, yerr=fvar_err, fmt='none')
plt.loglog()
plt.xlabel('Count rate')
plt.ylabel(r'$F_{\rm var}$')

tmean = (start + stop)/2

from matplotlib.gridspec import GridSpec

plt.figure(figsize=(15, 20))
gs = GridSpec(5, 1)
ax_lc = plt.subplot(gs[0])
ax_mean = plt.subplot(gs[1], sharex=ax_lc)
ax_evar = plt.subplot(gs[2], sharex=ax_lc)
ax_nvar = plt.subplot(gs[3], sharex=ax_lc)
ax_fvar = plt.subplot(gs[4], sharex=ax_lc)
ax_lc.plot(lc.time, lc.counts)
ax_lc.set_ylabel('Counts')

ax_mean.scatter(tmean, rate)
ax_mean.set_ylabel('Counts')

ax_evar.errorbar(tmean, evar, yerr=evar_err, fmt='o')
ax_evar.set_ylabel(r'$\sigma_{XS}$')

ax_fvar.errorbar(tmean, fvar, yerr=fvar_err, fmt='o')
ax_fvar.set_ylabel(r'$F_{var}$')

ax_nvar.errorbar(tmean, nvar, yerr=nvar_err, fmt='o')
ax_nvar.set_ylabel(r'$\sigma_{NXS}$')
""" Explanation: R.m.s. - intensity diagram
We use the analyze_lc_chunks method in Lightcurve to calculate two quantities: the rate and the excess variance, normalized as $F_{\rm var}$ (Vaughan et al. 2003).
analyze_lc_chunks() requires an input function that just accepts a light curve. Therefore, we define small wrapper functions (rate_fun, fvar_fun, exc_var_fun, norm_exc_var_fun) around the existing functionality in Stingray.
Then, we plot the results.
Done!
End of explanation
"""
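The Vaughan et al. (2003) quantities used above can be sanity-checked with plain NumPy, independently of Stingray. A sketch on synthetic data, assuming √counts as the per-bin error: for a purely Poissonian light curve with no intrinsic variability, the excess variance should come out consistent with zero.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(100, size=10000).astype(float)
errors = np.sqrt(counts)  # assumed Poisson error per bin

mean_err2 = np.mean(errors ** 2)         # <sigma_err^2>
raw_var = np.var(counts)                 # S^2
var_xs = raw_var - mean_err2             # excess variance, sigma_XS
var_nxs = var_xs / np.mean(counts) ** 2  # normalized excess variance

# With no intrinsic variability, sigma_XS is much smaller than the
# raw variance S^2 (which is ~100 here).
print(raw_var, var_xs, var_nxs)
```

This is not the Stingray implementation, just a numerical check of the definitions in the docstring above.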
dietmarw/EK5312_ElectricalMachines
Chapman/Ch1-Problem_1-18.ipynb
unlicense
%pylab notebook
""" Explanation: Exercises Electric Machinery Fundamentals
Chapter 1 Problem 1-18
End of explanation
"""
V = 208.0 * exp(-1j*30/180*pi) # [V]
I = 2.0 * exp( 1j*20/180*pi) # [A]
""" Explanation: Description
Assume that the voltage applied to a load is $\vec{V} = 208\,V\angle -30^\circ$ and the current flowing through the load $\vec{I} = 2\,A\angle 20^\circ$.
(a) Calculate the complex power $S$ consumed by this load.
(b) Is this load inductive or capacitive?
(c) Calculate the power factor of this load.
(d) Calculate the reactive power consumed or supplied by this load. Does the load consume reactive power from the source or supply it to the source?
End of explanation
"""
S = V * conjugate(I) # The complex conjugate of a complex number is
                     # obtained by changing the sign of its imaginary part.
S_angle = arctan(S.imag/S.real)
print('S = {:.1f} VA ∠{:.1f}°'.format(*(abs(S), S_angle/pi*180)))
print('====================')
""" Explanation: SOLUTION
(a) The complex power $S$ consumed by this load is:
$$ S = V\cdot I^* $$
End of explanation
"""
PF = cos(S_angle)
print('PF = {:.3f} leading'.format(PF))
print('==================')
""" Explanation: (b) This is a capacitive load.
(c) The power factor of this load is leading and:
End of explanation
"""
Q = abs(S)*sin(S_angle)
print('Q = {:.1f} var'.format(Q))
print('==============')
""" Explanation: (d) This load supplies reactive power to the source. The reactive power of the load is:
$$ Q = VI\sin\theta = S\sin\theta$$
End of explanation
"""
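The `%pylab` solution above can be cross-checked with explicit NumPy calls (a sketch with the same numbers, just without relying on the pylab namespace):

```python
import numpy as np

V = 208.0 * np.exp(-1j * np.deg2rad(30.0))  # 208 V at -30 degrees
I = 2.0 * np.exp(1j * np.deg2rad(20.0))     # 2 A at +20 degrees

S = V * np.conj(I)        # complex power, S = V I*
PF = np.cos(np.angle(S))  # power factor
Q = S.imag                # reactive power [var]

# |S| = 416 VA at an angle of -50 degrees; Q < 0 confirms that the
# load is capacitive and supplies reactive power to the source.
print(abs(S), np.degrees(np.angle(S)), PF, Q)
```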
harishkrao/DSE200x
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
mit
import pandas as pd from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier """ Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"> Classification of Weather Data <br><br> using scikit-learn <br><br> </p> <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Daily Weather Data Analysis</p> In this notebook, we will use scikit-learn to perform a decision tree based classification of weather data. <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Importing the Necessary Libraries<br></p> End of explanation """ data = pd.read_csv('./weather/daily_weather.csv') """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Creating a Pandas DataFrame from a CSV file<br></p> End of explanation """ data.columns """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Daily Weather Data Description</p> <br> The file daily_weather.csv is a comma-separated file that contains weather data. This data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.<br><br> Let's now check all the columns in the data. End of explanation """ data data[data.isnull().any(axis=1)] """ Explanation: <br>Each row in daily_weather.csv captures weather data for a separate day. <br><br> Sensor measurements from the weather station were captured at one-minute intervals. These measurements were then processed to generate values to describe daily weather. 
Since this dataset was created to classify low-humidity days vs. non-low-humidity days (that is, days with normal or high humidity), the variables included are weather measurements in the morning, with one measurement, namely relative humidity, in the afternoon. The idea is to use the morning weather values to predict whether the day will be low-humidity or not based on the afternoon measurement of relative humidity.
Each row, or sample, consists of the following variables:
number: unique number for each row
air_pressure_9am: air pressure averaged over a period from 8:55am to 9:04am (Unit: hectopascals)
air_temp_9am: air temperature averaged over a period from 8:55am to 9:04am (Unit: degrees Fahrenheit)
avg_wind_direction_9am: wind direction averaged over a period from 8:55am to 9:04am (Unit: degrees, with 0 meaning the wind comes from the North, and increasing clockwise)
avg_wind_speed_9am: wind speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
max_wind_direction_9am: wind gust direction averaged over a period from 8:55am to 9:10am (Unit: degrees, with 0 being North and increasing clockwise)
max_wind_speed_9am: wind gust speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
rain_accumulation_9am: amount of rain accumulated in the 24 hours prior to 9am (Unit: millimeters)
rain_duration_9am: amount of time rain was recorded in the 24 hours prior to 9am (Unit: seconds)
relative_humidity_9am: relative humidity averaged over a period from 8:55am to 9:04am (Unit: percent)
relative_humidity_3pm: relative humidity averaged over a period from 2:55pm to 3:04pm (Unit: percent)
End of explanation
"""
del data['number']
""" Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>

Data Cleaning Steps<br><br></p>
We will not need the number column for each row, so we can remove it.
End of explanation """ before_rows = data.shape[0] print(before_rows) data = data.dropna() after_rows = data.shape[0] print(after_rows) """ Explanation: Now let's drop null values using the pandas dropna function. End of explanation """ before_rows - after_rows """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> How many rows dropped due to cleaning?<br><br></p> End of explanation """ clean_data = data.copy() clean_data['high_humidity_label'] = (clean_data['relative_humidity_3pm'] > 24.99)*1 print(clean_data['high_humidity_label']) """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"> Convert to a Classification Task <br><br></p> Binarize the relative_humidity_3pm to 0 or 1.<br> End of explanation """ y=clean_data[['high_humidity_label']].copy() #y clean_data['relative_humidity_3pm'].head() y.head() """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Target is stored in 'y'. <br><br></p> End of explanation """ morning_features = ['air_pressure_9am','air_temp_9am','avg_wind_direction_9am','avg_wind_speed_9am', 'max_wind_direction_9am','max_wind_speed_9am','rain_accumulation_9am', 'rain_duration_9am'] X = clean_data[morning_features].copy() X.columns y.columns """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Use 9am Sensor Signals as Features to Predict Humidity at 3pm <br><br></p> End of explanation """ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324) #type(X_train) #type(X_test) #type(y_train) #type(y_test) #X_train.head() #y_train.describe() """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Perform Test and Train split <br><br></p> REMINDER: Training Phase In the training phase, the learning algorithm uses the training data to adjust the model’s parameters to minimize errors. 
At the end of the training phase, you get the trained model. <img src="TrainingVSTesting.jpg" align="middle" style="width:550px;height:360px;"/> <BR> In the testing phase, the trained model is applied to test data. Test data is separate from the training data, and is previously unseen by the model. The model is then evaluated on how it performs on the test data. The goal in building a classifier model is to have the model perform well on training as well as test data. End of explanation """ humidity_classifier = DecisionTreeClassifier(max_leaf_nodes=10, random_state=0) humidity_classifier.fit(X_train, y_train) type(humidity_classifier) """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Fit on Train Set <br><br></p> End of explanation """ predictions = humidity_classifier.predict(X_test) predictions[:10] y_test['high_humidity_label'][:10] """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Predict on Test Set <br><br></p> End of explanation """ accuracy_score(y_true = y_test, y_pred = predictions) """ Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Measure Accuracy of the Classifier <br><br></p> End of explanation """
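For reference, `accuracy_score` is just the fraction of matching labels. A toy sketch, with made-up label arrays standing in for `y_test` and `predictions` (the weather CSV is assumed to be available only locally):

```python
import numpy as np

# Toy stand-ins for the true test labels and the classifier's predictions
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

accuracy = np.mean(y_true == y_pred)  # fraction of correct predictions
print(accuracy)  # 6 of 8 labels match -> 0.75
```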
feststelltaste/software-analytics
courses/big_data_meetup/Production Coverage Demo Notebook.ipynb
gpl-3.0
import pandas as pd
coverage = pd.read_csv("datasets/jacoco.csv")
coverage = coverage[['PACKAGE', 'CLASS', 'LINE_COVERED', 'LINE_MISSED']]
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED
coverage.head(1)
""" Explanation: Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command:
bash
java -jar jacococli.jar report "C:\Temp\jacoco.exec" --classfiles \ C:\dev\repos\buschmais-spring-petclinic\target\classes --csv jacoco.csv
The CSV file contains all lines of code that were passed through during the measurement's time span. We just take the relevant data and add an additional LINES column to be able to calculate the ratio between covered and missed lines later on.
End of explanation
"""
grouped_by_packages = coverage.groupby("PACKAGE").sum()
grouped_by_packages['RATIO'] = grouped_by_packages.LINE_COVERED / grouped_by_packages.LINES
grouped_by_packages = grouped_by_packages.sort_values(by='RATIO')
grouped_by_packages
""" Explanation: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
End of explanation
"""
%matplotlib inline
grouped_by_packages[['RATIO']].plot(kind="barh", figsize=(8,2))
""" Explanation: We plot the data for the coverage ratio to get a brief overview of the result.
End of explanation
"""
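Beyond the per-package ratio, an overall production-coverage figure follows from the same columns. A sketch on a toy frame standing in for the JaCoCo export (column names taken from the CSV above, data made up):

```python
import pandas as pd

# Toy stand-in for the JaCoCo CSV export
coverage = pd.DataFrame({
    'PACKAGE': ['a', 'a', 'b'],
    'LINE_COVERED': [80, 20, 0],
    'LINE_MISSED': [20, 30, 50],
})
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED

# Same aggregation as in the notebook, per package ...
per_package = coverage.groupby('PACKAGE').sum()
per_package['RATIO'] = per_package.LINE_COVERED / per_package.LINES

# ... plus one overall number for the whole code base
overall = coverage.LINE_COVERED.sum() / coverage.LINES.sum()
print(per_package['RATIO'].to_dict(), overall)
```

A package such as 'b' with a ratio of 0 was never executed in production and is a candidate for removal.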
vascotenner/holoviews
doc/Tutorials/Dynamic_Map.ipynb
bsd-3-clause
import holoviews as hv
import numpy as np
hv.notebook_extension()
""" Explanation: The Containers Tutorial introduced the HoloMap, a core HoloViews data structure that allows easy exploration of parameter spaces. The essence of a HoloMap is that it contains a collection of Elements (e.g. Images and Curves) that you can easily select and visualize.
HoloMaps hold fully constructed Elements at specifically sampled points in a multidimensional space. Although HoloMaps are useful for exploring high-dimensional parameter spaces, they can very quickly consume huge amounts of memory to store all these Elements. For instance, a hundred samples along four orthogonal dimensions would need a HoloMap containing a hundred million Elements, each of which could be a substantial object that takes time to create and costs memory to store. Thus HoloMaps have some clear limitations:
HoloMaps may require the generation of millions of Elements before the first element can be viewed.
HoloMaps can easily exhaust all the memory available to Python.
HoloMaps can even more easily exhaust all the memory in the browser when displayed.
Static export of a notebook containing HoloMaps can result in impractically large HTML files.
The DynamicMap addresses these issues by computing and displaying elements dynamically, allowing exploration of much larger datasets:
DynamicMaps generate elements on the fly, allowing the process of exploration to begin immediately.
DynamicMaps do not require fixed sampling, allowing exploration of parameters with arbitrary resolution.
DynamicMaps are lazy in the sense that they compute only as much data as the user wishes to explore.
Of course, these advantages come with some limitations:
DynamicMaps require a live notebook server and cannot be fully exported to static HTML.
DynamicMaps store only a portion of the underlying data, in the form of an Element cache, which reduces the utility of pickling a DynamicMap.
DynamicMaps (and particularly their element caches) are typically stateful (with values that depend on patterns of user interaction), which can make them more difficult to reason about.
In order to handle the various situations in which one might want to use a DynamicMap, DynamicMaps can be defined in various "modes" that will each be described separately below:
Bounded mode: All dimension ranges specified, allowing exploration within the ranges (i.e., within a bounded region of the multidimensional parameter space).
Sampled mode: Some dimension ranges left unspecified, making the DynamicMap not viewable directly, but useful in combination with other objects or for later specifying samples or ranges.
Open mode: Dimension ranges not specified, with new elements created on request using a generator.
Counter mode: Dimension ranges not specified, with new elements created based on a shared counter state, e.g. from a simulation.
All this will make much more sense once we've tried out some DynamicMaps and shown how they work, so let's create one!
<center><div class="alert alert-info" role="alert">To visualize and use a <b>DynamicMap</b> you need to be running a live Jupyter server.<br>This tutorial assumes that it will be run in a live notebook environment.<br>
When viewed statically, DynamicMaps will only show the first available Element,<br> and will thus not have any slider widgets, making it difficult to follow the descriptions below.<br><br>
It's also best to run this notebook one cell at a time, not via "Run All",<br> so that subsequent cells can reflect your dynamic interaction with widgets in previous cells.</div></center>
DynamicMap <a id='DynamicMap'></a>
Let's start by importing HoloViews and loading the extension:
End of explanation
"""
x,y = np.mgrid[-50:51, -50:51] * 0.1

def sine_array(phase, freq):
    return np.sin(phase + (freq*x**2+freq*y**2))
""" Explanation: We will now create the DynamicMap equivalent of the HoloMap introduced in the Containers Tutorial. The HoloMap in that tutorial consisted of Image elements containing sine ring arrays as defined by the sine_array function:
End of explanation
"""
sine_array(0,1).shape
""" Explanation: This function returns NumPy arrays when called:
End of explanation
"""
def sine_image(phase, freq):
    return hv.Image(np.sin(phase + (freq*x**2+freq*y**2)))

sine_image(0,1) + sine_image(0.5,2)
""" Explanation: Now we can demonstrate the first type of exploration enabled by a DynamicMap, called 'bounded' mode.
Bounded mode <a id='BoundedMode'></a>
A 'bounded' mode DynamicMap is simply one where all the key dimensions have finite bounds.
Bounded mode has the following properties: The limits of the space and/or the allowable values must be declared for all the key dimensions (unless sampled mode is also enabled). You can explore within the declared bounds at any resolution. The data for the DynamicMap is defined using a callable that must be a function of its arguments (i.e., the output is strictly determined by the input arguments). We can now create a DynamicMap by simply declaring the ranges of the two dimensions and passing the sine_image function as the .data: End of explanation """ repr(dmap) """ Explanation: This object is created instantly, because it doesn't generate any hv.Image objects initially. We can now look at the repr of this object: End of explanation """ dmap """ Explanation: All DynamicMaps will look similar, only differing in the listed dimensions. Now let's see how this dynamic map visualizes itself: End of explanation """ dmap + hv.DynamicMap(sine_image, kdims=[hv.Dimension('phase',range=(0, np.pi)), hv.Dimension('frequency', range=(0.01,np.pi))]) """ Explanation: Here each hv.Image object visualizing a particular sine ring pattern with the given parameters is created dynamically, whenever the slider is set to that value. Any value in the allowable range can be requested, with a rough step size supported by sliding, and more precise values available by using the left and right arrow keys to change the value in small steps. In each case the new image is dynamically generated based on whatever the slider's values are. 
As for any HoloViews Element, you can combine DynamicMaps to create a Layout using the + operator: End of explanation """ def shapes(N, radius=0.5): # Positional keyword arguments are fine paths = [hv.Path([[(radius*np.sin(a), radius*np.cos(a)) for a in np.linspace(-np.pi, np.pi, n+2)]], extents=(-1,-1,1,1)) for n in range(N,N+3)] return hv.Overlay(paths) %%opts Path (linewidth=1.5) dmap = hv.DynamicMap(shapes, kdims=[hv.Dimension('N', range=(2,20)), hv.Dimension('radius', range=(0.5,1))]) dmap """ Explanation: As both elements are DynamicMaps with the same dimension ranges, the continuous sliders are retained. If one or more HoloMaps is used with a DynamicMap, the sliders will snap to the samples available in any HoloMap in the layout. For bounded DynamicMaps that do not require ranges to be declared, see sampled mode below. If you are running this tutorial in a live notebook, the above cell should look like the HoloMap in the Containers Tutorial. DynamicMap is in fact a subclass of HoloMap with some crucial differences: You can now pick any value of phase or frequency up to the precision allowed by the slider. What you see in the cell above will not be exported in any HTML snapshot of the notebook. Using your own callable You can use any callable to define a DynamicMap in closed mode. A valid DynamicMap is defined by the following criteria: There must be as many positional arguments in the callable signature as key dimensions. The argument order in the callable signature must match the order of the declared key dimensions. All key dimensions are defined with a bounded range or values parameter (for categorical dimensions). Here is another example of a bounded DynamicMap: End of explanation """ %opts Path (linewidth=1.5) """ Explanation: As you can see, you can return Overlays from DynamicMaps, and DynamicMaps can be styled in exactly the same way as HoloMaps. 
Note that currently, Overlay objects should be returned from the callable itself; the * operator is not yet supported at the DynamicMap level (i.e., between a DynamicMap and other Elements). End of explanation """ dmap.data """ Explanation: The DynamicMap cache Above we mentioned that DynamicMap is an instance of HoloMap. Does this mean it has a .data attribute? End of explanation """ hv.HoloMap(dmap) """ Explanation: This is exactly the same sort of .data as the equivalent HoloMap, except that this value will vary according to how much you explored the parameter space of dmap using the sliders above. In a HoloMap, .data contains a defined sampling along the different dimensions, whereas in a DynamicMap, the .data is simply the cache. The cache serves two purposes: Avoids recomputation of an element should we revisit a particular point in the parameter space. This works well for categorical or integer dimensions, but doesn't help much when using continuous sliders for real-valued dimensions. Records the space that has been explored with the DynamicMap for any later conversion to a HoloMap. Ensures that we store only a finite history of generator output when using open mode together with infinite generators. We can always convert any DynamicMap directly to a HoloMap as follows: End of explanation """ dmap[{(2,0.5), (2,1.0), (3,0.5), (3,1.0)}] # Returns a *new* DynamicMap with the specified keys in its cache """ Explanation: This is in fact equivalent to declaring a HoloMap with the same parameters (dimensions, etc.) using dmap.data as input, but is more convenient. Although creating a HoloMap this way is easy, the result is poorly controlled, as the keys in the HoloMap are defined by how you moved the sliders around. For instance, if you run this tutorial straight through using Run All, the above cell won't have any sliders at all, because only a single element will be in the dmap's cache until the DynamicMaps sliders are dragged. 
If you instead want to specify a specific set of samples, you can easily do so by using the same key-selection semantics as for a HoloMap to define exactly which elements are initially sampled in the cache: End of explanation """ dmap[{2,3},{0.5,1.0}] dmap.data """ Explanation: This object behaves the same way as before it was sampled, but now this DynamicMap can now be exported to static HTML with the allowed slider positions as specified in the cache, without even having to cast to a HoloMap. Of course, if the intent is primarily to have something statically exportable, then it's still a good idea to explicitly cast it to a HoloMap so that it will clearly contain only a finite set of Elements. The key selection above happens to define a Cartesian product, which is one of the most common way to sample across dimensions. Because the list of such dimension values can quickly get very large when enumerated as above, we provide a way to specify a Cartesian product directly, which also works with HoloMaps. Here is an equivalent way of defining the same set of four points in that two-dimensional space: End of explanation """ dmap[dmap.keys()[-1]] """ Explanation: Note that you can index a DynamicMap with a literal key in exactly the same way as a HoloMap. If the key exists in the cache, it is returned directly, otherwise a suitable element will be generated. Here is an example of how you can access the last key in the cache, creating such an element if it didn't already exist: End of explanation """ sliced = dmap[4:8, :] sliced """ Explanation: The default cache size of 500 Elements is relatively high so that interactive exploration will work smoothly, but you can reduce it using the cache_size parameter if you find you are running into issues with memory consumption. A bounded DynamicMap with cache_size=1 requires the least memory, but will recompute a new Element every time the sliders are moved, making it less responsive. 
Slicing bounded DynamicMaps The declared dimension ranges define the absolute limits allowed for exploration in a bounded DynamicMap. That said, you can use the soft_range parameter to view subregions within that range. Setting the soft_range parameter on dimensions can be done conveniently using slicing on bounded DynamicMaps: End of explanation """ sliced[:, 0.8:1.0] """ Explanation: Notice that N is now restricted to the range 4:8. Open slices are used to release any soft_range values, which resets the limits back to those defined by the full range: End of explanation """ hv.DynamicMap(shapes, kdims=[hv.Dimension('N', values=[2,3,4,5]), hv.Dimension('radius', values=[0.7,0.8,0.9,1])]) """ Explanation: The [:] slice leaves the soft_range values alone and can be used as a convenient way to clone a DynamicMap. Note that mixing slices with any other object type is not supported. In other words, once you use a single slice, you can only use slices in that indexing operation. Sampling DynamicMaps We have now seen one way of sampling a DynamicMap, which is to populate the cache with a set of keys. This approach is designed to make conversion of a DynamicMap into a HoloMap easy. One disadvantage of this type of sampling is that populating the cache consumes memory, resulting in many of the same limitations as HoloMap. To avoid this, there are two other ways of sampling a bounded DynamicMap: Dimension values If you want a fixed sampling instead of continuous sliders, yet still wish to retain the online generation of elements as the sliders are moved, you can simply declare the dimension to have a fixed list of values. 
The result appears to the user just like a HoloMap, and will show the same data as the pre-cached version above, but now generates the data dynamically to reduce memory requirements and speed up the initial display: End of explanation """ dmap = hv.DynamicMap(shapes, kdims=['N', 'radius'], sampled=True) dmap """ Explanation: Sampled mode <a id='SampledMode'></a> A bounded DynamicMap in sampled mode is the least restricted type of DynamicMap, as it can be declared without any information about the allowable dimension ranges or values: End of explanation """ dmap[{2,3},{0.5,1.0}] """ Explanation: As you can see, this type of DynamicMap cannot be visualized in isolation, because there are no values or ranges specified for the dimensions. You could view it by explicitly specifying such values to get cached samples (and can then cast it to a HoloMap if desired): End of explanation """ dmap + hv.HoloMap({(N,r):shapes(N, r) for N in [3,4,5] for r in [0.5,0.75]}, kdims=['N', 'radius']) """ Explanation: Still, on its own, a sampled-mode DynamicMap may not seem very useful. When they become very convenient is when combined with one or more HoloMaps in a Layout. Because a sampled DynamicMap doesn't have explicitly declared dimension ranges, it can always adopt the set of sample values from HoloMaps in the layout. End of explanation """ xs = np.linspace(0, 2*np.pi) def sin(ph, f, amp): return hv.Curve((xs, np.sin(xs*f+ph)*amp)) kdims=[hv.Dimension('phase', range=(0, np.pi)), hv.Dimension('frequency', values=[0.1, 1, 2, 5, 10]), hv.Dimension('amplitude', values=[0.5, 5, 10])] sine_dmap = hv.DynamicMap(sin, kdims=kdims) """ Explanation: In this way, you only need to worry about choosing specific values or ranges for your dimensions for a single (HoloMap) object, with the others left open-ended as sampled-mode DynamicMaps. This convenience is subject to three particular restrictions: Sampled DynamicMaps do not visualize themselves in isolation (as we have already seen). 
You cannot build a layout consisting of sampled-mode DynamicMaps only, because at least one HoloMap is needed to define the samples. There currently cannot be more dimensions declared in the sampled DynamicMap than across the rest of the layout. We hope to relax this restriction in future. Using groupby to discretize a DynamicMap A DynamicMap also makes it easy to partially or completely discretize a function to evaluate in a complex plot. By grouping over specific dimensions that define a fixed sampling via the Dimension values parameter, the DynamicMap can be viewed as a GridSpace, NdLayout, or NdOverlay. If a dimension specifies only a continuous range it can't be grouped over, but it may still be explored using the widgets. This means we can plot partial or completely discretized views of a parameter space easily. Partially discretize The implementation for all the groupby operations uses the .groupby method internally, but we also provide three higher-level convenience methods to group dimensions into an NdOverlay (.overlay), GridSpace (.grid), or NdLayout (.layout). Here we will evaluate a simple sine function with three dimensions, the phase, frequency, and amplitude. 
We assign the frequency and amplitude discrete samples, while defining a continuous range for the phase: End of explanation """ %%opts GridSpace [show_legend=True fig_size=200] sine_dmap.overlay('amplitude').grid('frequency') """ Explanation: Next we define the amplitude dimension to be overlaid and the frequency dimension to be gridded: End of explanation """ %opts Path (linewidth=1 color=Palette('Blues')) def spiral_equation(f, ph, ph2): r = np.arange(0, 1, 0.005) xs, ys = (r * fn(f*np.pi*np.sin(r+ph)+ph2) for fn in (np.cos, np.sin)) return hv.Path((xs, ys)) kdims=[hv.Dimension('f', values=list(np.linspace(1, 10, 10))), hv.Dimension('ph', values=list(np.linspace(0, np.pi, 10))), hv.Dimension('ph2', values=list(np.linspace(0, np.pi, 4)))] spiral_dmap = hv.DynamicMap(spiral_equation, kdims=kdims) """ Explanation: As you can see, instead of having three sliders (one per dimension), we've now laid out the frequency dimension as a discrete set of values in a grid, and the amplitude dimension as a discrete set of values in an overlay, leaving one slider for the remaining dimension (phase). This approach can help you visualize a large, multi-dimensional space efficiently, with full control over how each dimension is made visible. Fully discretize Given a continuous function defined over a space, we could sample it manually, but here we'll look at an example of evaluating it using the groupby method. Let's look at a spiral function with a frequency and first- and second-order phase terms. Then we define the dimension values for all the parameters and declare the DynamicMap: End of explanation """ %%opts GridSpace [xaxis=None yaxis=None] Path [bgcolor='w' xaxis=None yaxis=None] spiral_dmap.groupby(['f', 'ph'], group_type=hv.NdOverlay, container_type=hv.GridSpace) """ Explanation: Now we can make use of the .groupby method to group over the frequency and phase dimensions, which we will display as part of a GridSpace by setting the container_type. 
This leaves the second phase variable, which we assign to an NdOverlay by setting the group_type: End of explanation """ def gaussian_histogram(samples, scale): frequencies, edges = np.histogram([np.random.normal(scale=scale) for i in range(samples)], 20) return hv.Histogram(frequencies, edges).relabel('Gaussian distribution') gaussian_histogram(100,1) + gaussian_histogram(150,1.5) """ Explanation: This grid shows a range of frequencies f on the x axis, a range of the first phase variable ph on the y axis, and a range of different ph2 phases as overlays within each location in the grid. As you can see, these techniques can help you visualize multidimensional parameter spaces compactly and conveniently. Open mode <a id='OpenMode'></a> DynamicMap also allows unconstrained exploration over unbounded dimensions in 'open' mode. There are two key differences between open mode and bounded mode: Instead of a callable, the input to an open DynamicMap is a generator. Once created, the generator is only used via next(). At least one of the declared key dimensions must have an unbounded range (i.e., with an upper or lower bound not specified). An open mode DynamicMap can run forever, or until a StopIteration exception is raised. Open mode DynamicMaps can be stateful, with an irreversible direction of time. 
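The difference between the two modes can be illustrated in plain Python, independent of HoloViews; `bounded` and `open_mode` below are hypothetical stand-ins for a bounded-mode callable and an open-mode generator:

```python
# Bounded mode: a pure callable; the same input always yields the same
# output, so any frame can be recomputed on demand.
def bounded(phase):
    return phase ** 2

# Open mode: a generator; every next() call advances internal state,
# and there is no way to rewind to an earlier frame.
def open_mode():
    step = 0
    while True:
        yield step ** 2
        step += 1

gen = open_mode()
frames = [next(gen) for _ in range(3)]   # first three frames: 0, 1, 4
```

Calling `bounded(2)` any number of times gives the same answer, while each `next(gen)` irreversibly consumes a frame.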
Infinite generators Our first example will be using an infinite generator which plots the histogram for a given number of random samples drawn from a Gaussian distribution: End of explanation """ def gaussian_sampler(samples=10, delta=10, scale=1.0): np.random.seed(1) while True: yield gaussian_histogram(samples, scale) samples+=delta gaussian_sampler() """ Explanation: Lets now use this in the following generator: End of explanation """ dmap = hv.DynamicMap(gaussian_sampler(), kdims=['step']) dmap """ Explanation: Which allows us to define the following infinite DynamicMap: End of explanation """ def gaussian_sampler_kv(samples=10, delta=10, scale=1.0): np.random.seed(1) while True: yield (samples, gaussian_histogram(samples, scale)) samples+=delta hv.DynamicMap(gaussian_sampler_kv(), kdims=['samples']) """ Explanation: Note that step is shown as an integer. This is the default behavior and corresponds to the call count (i.e the number of times next() has been called on the generator. If we want to show the actual number of samples properly, we need our generator to return a (key, element) pair: End of explanation """ def gaussian_sampler_2D(samples=10, scale=1.0, delta=10): np.random.seed(1) while True: yield ((samples, scale), gaussian_histogram(samples, scale)) samples=(samples + delta) if scale==2 else samples scale = 2 if scale == 1 else 1 dmap = hv.DynamicMap(gaussian_sampler_2D(), kdims=['samples', 'scale']) dmap """ Explanation: Note that if you pause the DynamicMap, you can scrub back to previous frames in the cache. In other words, you can view a limited history of elements already output by the generator, which does not re-execute the generator in any way (as it is indeed impossible to rewind generator state). If you have a stateful generator that, say, depends on the current wind speed in Scotland, this history may be misleading, in which case you can simply set the cache_size parameter to 1. 
Multi-dimensional generators In open mode, elements are naturally serialized by a linear sequence of next() calls, yet multiple key dimensions can still be defined: End of explanation """ hv.HoloMap(dmap) """ Explanation: Here we bin the histogram for two different scale values. Above we can visualize this linear sequence of next() calls, but by casting this open map to a HoloMap, we can obtain a multi-dimensional parameter space that we can freely explore: End of explanation """ def sample_distributions(samples=10, delta=50, tol=0.04): np.random.seed(42) while True: gauss1 = np.random.normal(size=samples) gauss2 = np.random.normal(size=samples) data = (['A']*samples + ['B']*samples, np.hstack([gauss1, gauss2])) diff = abs(gauss1.mean() - gauss2.mean()) if abs(gauss1.mean() - gauss2.mean()) > tol: yield ((samples, diff), hv.BoxWhisker(data, kdims=['Group'], vdims=['Value'])) else: raise StopIteration samples+=delta dmap = hv.DynamicMap(sample_distributions(), kdims=['samples', '$\delta$']) dmap """ Explanation: Note that if you ran this notebook using Run All, only a single frame will be available in the above cell, with no sliders, but if you ran it interactively and viewed a range of values in the previous cell, you'll have multiple sliders in this cell allowing you to explore whatever range of frames is in the cache from the previous cell. Finite generators Open mode DynamicMaps are finite and terminate if StopIteration is raised. This example terminates when the means of two sets of gaussian samples fall within a certain distance of each other: End of explanation """ list(dmap) # The cache """ Explanation: Now if you are familiar with generators in Python, you might be wondering what happens when a finite generator is exhausted. 
First we should mention that casting a DynamicMap to a list is always finite, because __iter__ returns the cache instead of a potentially infinite generator: End of explanation """ while True: try: next(dmap) # Returns Image elements except StopIteration: print("The dynamic map is exhausted.") break """ Explanation: As we know this DynamicMap is finite, we can make sure it is exhausted as follows: End of explanation """ dmap """ Explanation: Now let's have a look at the dynamic map: End of explanation """ def time_gen(time=1): while True: yield time time += 1 time = time_gen() """ Explanation: Here, we are given only the text-based repr, to indicate that the generator is exhausted. However, as the process of iteration has populated the cache, we can still view the output as a HoloMap using hv.HoloMap(dmap) as before. Counter mode and temporal state <a id='CounterMode'></a> Open mode is intended to use live data streams or ongoing simulations with HoloViews. The DynamicMap will generate live visualizations for as long as new data is requested. Although this works for simple cases, Python generators have problematic limitations that can be resolved using 'counter' mode. In this example, let's say we have a simulation or data recording where time increases in integer steps: End of explanation """ ls = np.linspace(0, 10, 200) xx, yy = np.meshgrid(ls, ls) def cells(): while True: t = next(time) arr = np.sin(xx+t)*np.cos(yy+t) yield hv.Image(arr) def cells_noisy(): while True: t = next(time) arr = np.sin(xx+t)*np.cos(yy+t) yield hv.Image(arr + 0.2*np.random.rand(200,200)) """ Explanation: Now let's create two generators that return Images that are a function of the simulation time. 
Here, they have identical output except one of the outputs includes additive noise: End of explanation """ hv.DynamicMap(cells(), kdims=['time']) + hv.DynamicMap(cells_noisy(), kdims=['time']) """ Explanation: Now let's create a Layout using these two generators: End of explanation """ ls = np.linspace(0, 10, 200) xx, yy = np.meshgrid(ls, ls) def cells_counter(t): arr = np.sin(xx+t)*np.cos(yy+t) return hv.Image(arr) def cells_noisy_counter(t): arr = np.sin(xx+t)*np.cos(yy+t) return hv.Image(arr + 0.2*np.random.rand(200,200)) """ Explanation: If you pause the animation, you'll see that these two outputs are not in phase, despite the fact that the generators are defined identically (modulo the additive noise)! The issue is that generators are used via the next() interface, and so when either generator is called, the simulation time is increased. In other words, the noisy version in subfigure B actually corresponds to a later time than in subfigure A. This is a fundamental issue, as the next method does not take arguments. What we want is for all the DynamicMaps presented in a Layout to share a common simulation time, which is only incremented by interaction with the scrubber widget. This is exactly the sort of situation where you want to use counter mode. Handling time-dependent state To define a DynamicMap in counter mode: Leave one or more dimensions unbounded (as in open mode) Supply a callable (as in bounded mode) that accepts one argument This callable should act in the same way as the generators of open mode, except the output is controlled by the single counter argument. 
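The phase problem and its counter-mode fix can be sketched in plain Python (all names hypothetical): two generators sharing one clock drift apart, while two callables of the same counter cannot:

```python
def make_clock():
    t = 0
    while True:
        t += 1
        yield t

clock = make_clock()                      # shared, irreversible simulation time

def view(label):
    # open mode: every next() on either view advances the shared clock
    while True:
        yield (label, next(clock))

a, b = view('A'), view('B')
pairs = [(next(a), next(b)) for _ in range(2)]
# the two views never see the same time: ('A', 1) pairs with ('B', 2), and so on

def frame(label, t):
    # counter mode: the frame is a pure function of one explicit counter
    return (label, t)
```

Any two `frame` calls with the same counter agree on the time, which is what the counter-mode DynamicMaps below rely on.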
End of explanation """ hv.DynamicMap(cells_counter, kdims=['time']) + hv.DynamicMap(cells_noisy_counter, kdims=['time']) """ Explanation: Now if we supply these functions instead of generators, A and B will correctly be in phase: End of explanation """ ls = np.linspace(0, 10, 200) xx, yy = np.meshgrid(ls, ls) # Example of a global simulation time # typical in many applications t = 0 def cells_counter_kv(c): global t t = 0.1 * c arr = np.sin(xx+t)*np.cos(yy+t) return (t, hv.Image(arr)) def cells_noisy_counter_kv(c): global t t = 0.1 * c arr = np.sin(xx+t)*np.cos(yy+t) return (t, hv.Image(arr + 0.2*np.random.rand(200,200))) hv.DynamicMap(cells_counter_kv, kdims=['time']) + hv.DynamicMap(cells_noisy_counter_kv, kdims=['time']) print("The global simulation time is now t=%f" % t) """ Explanation: Unfortunately, an integer counter is often too simple to describe simulation time, which may be a float with real-world units. To address this, we can simply return the actual key values we want along the time dimension, just as was demonstrated in open mode using generators: End of explanation """ def sine_kv_gen(phase=0, freq=0.5): while True: yield (phase, hv.Image(np.sin(phase + (freq*x**2+freq*y**2)))) phase+=0.2 dmap = hv.DynamicMap(sine_kv_gen(), kdims=['phase']) """ Explanation: Ensuring that the HoloViews counter maps to a suitable simulation time is the responsibility of the user. However, once a consistent scheme is configured, the callable in each DynamicMap can specify the desired simulation time. If the requested simulation time is the same as the current simulation time, nothing needs to happen. Otherwise, the simulator can be run forward by the requested amount. In this way, HoloViews can provide a rich graphical interface for controlling and visualizing an external simulator, with very little code required. 
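A minimal sketch of that pattern, with a hypothetical `ToySimulator` standing in for a real external simulator:

```python
class ToySimulator:
    """Hypothetical external simulator with an irreversible clock."""
    def __init__(self):
        self.t = 0.0
        self.state = 0.0

    def run_until(self, t_requested):
        # advance only while the requested time is ahead of the current time
        while self.t < t_requested:
            self.t += 0.1
            self.state += self.t          # stand-in for real dynamics
        return self.state

sim = ToySimulator()

def frame(counter):
    # map the integer counter onto float simulation time
    t = 0.1 * counter
    return (t, sim.run_until(t))

_, s1 = frame(3)      # runs the simulator forward to t = 0.3
_, s2 = frame(3)      # same time requested again: the simulator does not move
```

Requesting the same counter twice is a no-op, while a larger counter steps the simulator forward by exactly the required amount.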
Slicing in open and counter mode Slicing open and counter mode DynamicMaps has the exact same semantics as normal HoloMap slicing, except now the .data attribute corresponds to the cache. For instance: End of explanation """ for i in range(21): dmap.next() print("Min key value in cache:%s\nMax key value in cache:%s" % (min(dmap.keys()), max(dmap.keys()))) sliced = dmap[1:3.1] print("Min key value in cache:%s\nMax key value in cache:%s" % (min(sliced.keys()), max(sliced.keys()))) """ Explanation: Let's fill the cache with some elements: End of explanation """ %%opts Image {+axiswise} ls = np.linspace(0, 10, 200) xx, yy = np.meshgrid(ls, ls) def cells(vrange=False): "The range is set on the value dimension when vrange is True " time = time_gen() while True: t = next(time) arr = t*np.sin(xx+t)*np.cos(yy+t) vdims=[hv.Dimension('Intensity', range=(0,10))] if vrange else ['Intensity'] yield hv.Image(arr, vdims=vdims) hv.DynamicMap(cells(vrange=False), kdims=['time']) + hv.DynamicMap(cells(vrange=True), kdims=['time']) """ Explanation: DynamicMaps and normalization By default, a HoloMap normalizes the display of elements according the minimum and maximum values found across the HoloMap. This automatic behavior is not possible in a DynamicMap, where arbitrary new elements are being generated on the fly. Consider the following examples where the arrays contained within the returned Image objects are scaled with time: End of explanation """
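The difference between per-frame normalization and a fixed value-dimension range can be sketched in plain Python (`normalize` is a hypothetical helper, not a HoloViews function):

```python
def normalize(values, vrange=None):
    """Scale values to [0, 1]; a fixed vrange keeps frames comparable."""
    lo, hi = vrange if vrange is not None else (min(values), max(values))
    return [(v - lo) / (hi - lo) for v in values]

frame1 = [0, 5]      # early frame, small amplitude
frame2 = [0, 10]     # later frame, the amplitude has grown

per_frame_1 = normalize(frame1)          # spans the full [0, 1] range
per_frame_2 = normalize(frame2)          # also spans [0, 1]: growth is hidden

fixed_1 = normalize(frame1, (0, 10))     # like setting the Dimension range
```

Per-frame scaling makes every frame fill the color range, hiding the growth over time; the fixed range, like `hv.Dimension('Intensity', range=(0,10))` above, preserves it.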
StudyExchange/Udacity
MachineLearning(Advanced)/p2_finding_donors/.ipynb_checkpoints/finding_donors-checkpoint.ipynb
mit
# 为这个项目导入需要的库 import numpy as np import pandas as pd from time import time from IPython.display import display # 允许为DataFrame使用display() # 导入附加的可视化代码visuals.py import visuals as vs # 为notebook提供更加漂亮的可视化 %matplotlib inline # 导入人口普查数据 data = pd.read_csv("census.csv") # 成功 - 显示第一条记录 display(data.head()) """ Explanation: 机器学习纳米学位 监督学习 项目2: 为CharityML寻找捐献者 欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示! 除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。 提示:Code 和 Markdown 区域可通过Shift + Enter快捷键运行。此外,Markdown可以通过双击进入编辑模式。 开始 在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。 这个项目的数据集来自UCI机器学习知识库。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的在线版本中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征'fnlwgt' 以及一些遗失的或者是格式不正确的记录。 探索数据 运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列'income'将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。 End of explanation """ # TODO:总的记录数 n_records = data.count().income # TODO:被调查者的收入大于$50,000的人数 n_greater_50k = data[data.income == '>50K'].shape[0] # TODO:被调查者的收入最多为$50,000的人数 n_at_most_50k = data[data.income == '<=50K'].shape[0] # TODO:被调查者收入大于$50,000所占的比例 greater_percent = 100.0*n_greater_50k/n_records # 打印结果 print "Total number of records: {}".format(n_records) print "Individuals making more than $50,000: {}".format(n_greater_50k) print "Individuals making at most $50,000: {}".format(n_at_most_50k) print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent) """ Explanation: 练习:数据探索 
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量: 总的记录数量,'n_records' 年收入大于50,000美元的人数,'n_greater_50k'. 年收入最多为50,000美元的人数 'n_at_most_50k'. 年收入大于50,000美元的人所占的比例, 'greater_percent'. 提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。 End of explanation """ # 将数据切分成特征和对应的标签 income_raw = data['income'] features_raw = data.drop('income', axis = 1) # 可视化原来数据的倾斜的连续特征 vs.distribution(data) """ Explanation: 准备数据 在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。 转换倾斜的连续特征 一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。 运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。 End of explanation """ # 对于倾斜的数据使用Log转换 skewed = ['capital-gain', 'capital-loss'] features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1)) # 可视化经过log之后的数据分布 vs.distribution(features_raw, transformed = True) """ Explanation: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">对数转换</a>,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。 运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。 End of explanation """ # 导入sklearn.preprocessing.StandardScaler from sklearn.preprocessing import MinMaxScaler # 初始化一个 scaler,并将它施加到特征上 scaler = MinMaxScaler() numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] features_raw[numerical] = scaler.fit_transform(data[numerical]) # 显示一个经过缩放的样例记录 display(data.head()) display(features_raw.head()) """ Explanation: 规一化数字特征 除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。 运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。 End of 
explanation """ print 'Origin features:' display(features_raw.head()) print 'Origin income:' display(income_raw.head()) # TODO:使用pandas.get_dummies()对'features_raw'数据进行独热编码 features = pd.get_dummies(features_raw) print type(income_raw) # TODO:将'income_raw'编码成数字值 income = income_raw.replace({'<=50K':0, '>50K':1}) # 打印经过独热编码之后的特征数量 encoded = list(features.columns) print "{} total features after one-hot encoding.".format(len(encoded)) # 移除下面一行的注释以观察编码的特征名字 print encoded """ Explanation: 练习:数据预处理 从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C. | | 一些特征 | | 特征_A | 特征_B | 特征_C | | :-: | :-: | | :-: | :-: | :-: | | 0 | B | | 0 | 1 | 0 | | 1 | C | ----> 独热编码 ----> | 0 | 0 | 1 | | 2 | A | | 1 | 0 | 0 | 此外,对于非数字的特征,我们需要将非数字的标签'income'转换成数值以保证学习算法能够正常工作。因为这个标签只有两种可能的类别("<=50K"和">50K"),我们不必要使用独热编码,可以直接将他们编码分别成两个类0和1,在下面的代码单元中你将实现以下功能: - 使用pandas.get_dummies()对'features_raw'数据来施加一个独热编码。 - 将目标标签'income_raw'转换成数字项。 - 将"<=50K"转换成0;将">50K"转换成1。 End of explanation """ # 导入 train_test_split from sklearn.model_selection import train_test_split # 将'features'和'income'数据切分成训练集和测试集 X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0) # 显示切分的结果 print "Training set has {} samples.".format(X_train.shape[0]) print "Testing set has {} samples.".format(X_test.shape[0]) """ Explanation: 混洗和切分数据 现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。 运行下面的代码单元来完成切分。 End of explanation """ # TODO: 计算准确率 tp = income[income==1].shape[0] fp = income[income==0].shape[0] tn = 0 fn = 0 accuracy = 1.0*tp/income.shape[0] precision = 1.0*tp/(tp+fp) recall = 1.0*tp/(tp+fn) # TODO: 使用上面的公式,并设置beta=0.5计算F-score beta = 0.5 fscore = 1.0*(1+pow(beta,2))*precision*recall / ((pow(beta,2)*precision)+recall) # 打印结果 print "Naive Predictor: 
[Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore) """ Explanation: 评价模型性能 在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。这里面的三个将是你选择的监督学习器,而第四种算法被称为一个朴素的预测器。 评价方法和朴素的预测器 CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率: $$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$ 尤其是,当$\beta = 0.5$的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。 通过查看不同类别的数据分布(那些最多赚\$50,000和那些能够赚更多的),我们能发现:很明显的是很多的被调查者年收入没有超过\$50,000。这点会显著地影响准确率,因为我们可以简单地预测说“这个人的收入没有超过\$50,000”,这样我们甚至不用看数据就能做到我们的预测在一般情况下是正确的!做这样一个预测被称作是朴素的,因为我们没有任何信息去证实这种说法。通常考虑对你的数据使用一个朴素的预测器是十分重要的,这样能够帮助我们建立一个模型的表现是否好的基准。那有人说,使用这样一个预测是没有意义的:如果我们预测所有人的收入都低于\$50,000,那么CharityML就不会有人捐款了。 问题 1 - 朴素预测器的性能 如果我们选择一个无论什么情况都预测被调查者年收入大于\$50,000的模型,那么这个模型在这个数据集上的准确率和F-score是多少? 
注意: 你必须使用下面的代码单元将你的计算结果赋值给'accuracy' 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。 注意:朴素预测器由于不是训练出来的,所以我们可以用全部数据来进行评估(也有人认为保证条件一致仅用测试数据来评估)。 End of explanation """ # TODO:从sklearn中导入两个评价指标 - fbeta_score和accuracy_score from sklearn.metrics import fbeta_score, accuracy_score def train_predict(learner, sample_size, X_train, y_train, X_test, y_test): ''' inputs: - learner: the learning algorithm to be trained and predicted on - sample_size: the size of samples (number) to be drawn from training set - X_train: features training set - y_train: income training set - X_test: features testing set - y_test: income testing set ''' results = {} # TODO:使用sample_size大小的训练数据来拟合学习器 # TODO: Fit the learner to the training data using slicing with 'sample_size' start = time() # 获得程序开始时间 learner = learner.fit(X_train[0:sample_size], y_train[0:sample_size]) end = time() # 获得程序结束时间 # TODO:计算训练时间 results['train_time'] = end - start # TODO: 得到在测试集上的预测值 # 然后得到对前300个训练数据的预测结果 start = time() # 获得程序开始时间 predictions_test = learner.predict(X_test) predictions_train = learner.predict(X_train[0:300]) end = time() # 获得程序结束时间 # TODO:计算预测用时 results['pred_time'] = end - start # TODO:计算在最前面的300个训练数据的准确率 results['acc_train'] = accuracy_score(y_train[0:300], predictions_train) # TODO:计算在测试集上的准确率 results['acc_test'] = accuracy_score(y_test, predictions_test) # TODO:计算在最前面300个训练数据上的F-score results['f_train'] = fbeta_score(y_train[0:300], predictions_train, 0.5) # TODO:计算测试集上的F-score results['f_test'] = fbeta_score(y_test, predictions_test, 0.5) # 成功 print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size) # 返回结果 return results """ Explanation: 监督学习模型 下面的监督学习模型是现在在 scikit-learn 中你能够选择的模型 - 高斯朴素贝叶斯 (GaussianNB) - 决策树 - 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting) - K近邻 (KNeighbors) - 随机梯度下降分类器 (SGDC) - 支撑向量机 (SVM) - Logistic回归 问题 2 - 模型应用 列出从上面的监督学习模型中选择的三个适合我们这个问题的模型,你将在人口普查数据上测试这每个算法。对于你选择的每一个算法: 描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处) 
这个模型的优势是什么?他什么情况下表现最好? 这个模型的缺点是什么?什么条件下它表现很差? 根据我们当前数据集的特点,为什么这个模型适合这个问题。 回答: 本项目特点:输入多个数值特征,输出2分类,数据比较丰富 - 决策树 - 应用场景:垃圾邮件过滤 - 优点:专家系统 - 计算复杂度不高。 - 可以处理不相关特征数据。 - 易于理解和理解。树可以形象化。 - 对中间值的缺失不敏感。 - 需要很少的数据准备。其他技术通常需要数据标准化,需要创建虚拟变量,并删除空白值。注意,这个模块不支持丢失的值。 - 使用树的成本(即。预测数据)是用于对树进行训练的数据点的对数。 - 能够处理数值和分类数据。其他技术通常是专门分析只有一种变量的数据集。 - 能够处理多输出问题。 - 使用白盒模型。如果一个给定的情况在模型中可以观察到,那么这个条件的解释很容易用布尔逻辑来解释。相比之下,在黑盒模型中(例如:在人工神经网络中,结果可能更难解释。 - 可以使用统计测试验证模型。这样就可以解释模型的可靠性。 - 即使它的假设在某种程度上违反了生成数据的真实模型,也会表现得很好。 - 缺点: - 决策树学习者可以创建那些不能很好地推广数据的过于复杂的树。这就是所谓的过度拟合。修剪(目前不支持)的机制,设置叶片节点所需的最小样本数目或设置树的最大深度是避免此问题的必要条件。 - 决策树可能不稳定,因为数据中的小变化可能导致生成完全不同的树。这个问题通过在一个集合中使用决策树来减轻。 - 我们知道,学习一种最优决策树的问题在最优性甚至是简单概念的几个方面都是np完备性的。因此,实际的决策树学习算法是基于启发式算法的,例如在每个节点上进行局部最优决策的贪婪算法。这种算法不能保证返回全局最优决策树。通过在集合学习者中培训多个树,可以减少这种情况,在这里,特征和样本是随机抽取的。 - 有些概念很难学,因为决策树无法很容易地表达它们,例如XOR、奇偶性或多路复用问题。 - 决策树学习者创建有偏见的树,如果某些类占主导地位。因此,建议在匹配决策树之前平衡数据集。 - 在本项目中,输出可以是比较直观形象的结果,易于理解 - Gradient Boosting - 应用场景:搜索网站网页排名 - 优点:泛化错误率低,易编码,可以应用在大部分分类器上,无参数调整 - 缺点:对离群点敏感 - 本项目中,是典型的二分类应用,应该效果会比较好 - K近邻 (KNeighbors) - 应用场景:人脸识别 - 优点: - 精确度高,对异常值不敏感,无数据输入假定。 - 缺点: - 计算复杂度高,空间复杂度高。 - 在本项目中,高精确度的区分人群,是个不错的选择 - 支撑向量机 (SVM)(CodeReview20170717弃用) - 应用场景:手写字体识别 - 优点: - 在高维空间中有效。 - 在决策函数中,使用一个训练点的子集(称为支持向量),因此它可以有效的存储。 - 缺点: - 如果特性的数量远远大于样本的数量,则SVM方法可能会表现欠佳。 - SVM方法不直接提供概率性估计。 - 在本项目中,SVM可以处理多维度的问题,并且,通过设置软间隔,可以处理线性不可分的情况。 References: 1. [http://scikit-learn.org/stable/modules/tree.html#classification] 2. [http://scikit-learn.org/stable/modules/svm.html#implementation-details] 3. [https://en.wikipedia.org/wiki/Support_vector_machine] 4. Machine learning in action. Peter Harrington 5. 
[https://en.wikipedia.org/wiki/Gradient_boosting] 练习 - 创建一个训练和预测的流水线 为了正确评估你选择的每一个模型的性能,创建一个能够帮助你快速有效地使用不同大小的训练集并在测试集上做预测的训练和测试的流水线是十分重要的。 你在这里实现的功能将会在接下来的部分中被用到。在下面的代码单元中,你将实现以下功能: 从sklearn.metrics中导入fbeta_score和accuracy_score。 用样例训练集拟合学习器,并记录训练时间。 用学习器来对训练集进行预测并记录预测时间。 在最前面的300个训练数据上做预测。 计算训练数据和测试数据的准确率。 计算训练数据和测试数据的F-score。 End of explanation """ %%time %pdb on # TODO:从sklearn中导入三个监督学习模型 from sklearn import tree, svm, neighbors, ensemble # TODO:初始化三个模型 clf_A = tree.DecisionTreeClassifier(random_state=20) clf_B = neighbors.KNeighborsClassifier() # clf_C = svm.SVC(random_state=20) clf_C = ensemble.GradientBoostingClassifier(random_state=20) # TODO:计算1%, 10%, 100%的训练数据分别对应多少点 samples_1 = int(X_train.shape[0]*0.01) samples_10 = int(X_train.shape[0]*0.1) samples_100 = int(X_train.shape[0]*1.0) # 收集学习器的结果 results = {} for clf in [clf_A, clf_B, clf_C]: clf_name = clf.__class__.__name__ results[clf_name] = {} for i, samples in enumerate([samples_1, samples_10, samples_100]): results[clf_name][i] = \ train_predict(clf, samples, X_train, y_train, X_test, y_test) for k in results.keys(): result_df = pd.DataFrame.from_dict(results[k]).T result_df.index = ['1%', '10%', '100%'] print k display(result_df) # 对选择的三个模型得到的评价结果进行可视化 vs.evaluate(results, accuracy, fscore) """ Explanation: 练习:初始模型的评估 在下面的代码单元中,您将需要实现以下功能: - 导入你在前面讨论的三个监督学习模型。 - 初始化三个模型并存储在'clf_A','clf_B'和'clf_C'中。 - 如果可能对每一个模型都设置一个random_state。 - 注意:这里先使用每一个模型的默认参数,在接下来的部分中你将需要对某一个模型的参数进行调整。 - 计算记录的数目等于1%,10%,和100%的训练数据,并将这些值存储在'samples'中 注意:取决于你选择的算法,下面实现的代码可能需要一些时间来运行! random_state的作用主要有两个: 让别人能够复现你的结果 (如reviewer)   你可以确定调参带来的优化是参数调整带来的而不是random_state引来的波动. 可以参考这个帖子: http://discussions.youdaxue.com/t/svr-random-state/30506 另外,模型初始化时,请使用默认参数,因此除了可能需要设置的random_state, 不需要设置其他参数. 
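The F-score used throughout these evaluations is F-beta with beta = 0.5, which weights precision above recall; a stdlib-only sketch of the formula (the name `fbeta` is illustrative, not sklearn's API):

```python
def fbeta(precision, recall, beta=0.5):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# with beta = 0.5 the score rewards precision more than recall...
precision_heavy = fbeta(0.8, 0.4)
recall_heavy = fbeta(0.4, 0.8)
# ...while beta = 1 treats the two symmetrically
```

Swapping precision and recall changes the beta = 0.5 score but leaves the beta = 1 score unchanged, which is why beta = 0.5 suits CharityML's preference for precise predictions.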
End of explanation """ %%time %pdb on # TODO:导入'GridSearchCV', 'make_scorer'和其他一些需要的库 from sklearn.metrics import fbeta_score, make_scorer, accuracy_score from sklearn.model_selection import GridSearchCV from sklearn import ensemble # TODO:初始化分类器 clf = ensemble.GradientBoostingClassifier(random_state=20) # TODO:创建你希望调节的参数列表 #parameters = {'n_neighbors':range(5,10,5), 'algorithm':['ball_tree', 'brute']} parameters = {'max_depth':range(2,10,1)} # TODO:创建一个fbeta_score打分对象 scorer = make_scorer(fbeta_score, beta=0.5) # TODO:在分类器上使用网格搜索,使用'scorer'作为评价函数 grid_obj = GridSearchCV(clf, parameters, scorer) # TODO:用训练数据拟合网格搜索对象并找到最佳参数 print "Start to GridSearchCV" grid_obj.fit(X_train, y_train) print "Start to fit origin model" clf.fit(X_train, y_train) # 得到estimator best_clf = grid_obj.best_estimator_ # 使用没有调优的模型做预测 print "Start to predict" predictions = clf.predict(X_test) best_predictions = best_clf.predict(X_test) # 汇报调参前和调参后的分数 print "Unoptimized model\n------" print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)) print "\nOptimized Model\n------" print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)) print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)) print "Best parameter:" print grid_obj.best_params_ """ Explanation: 提高效果 在这最后一节中,您将从三个有监督的学习模型中选择最好的模型来使用学生数据。你将在整个训练集(X_train和y_train)上通过使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的F-score。 问题 3 - 选择最佳的模型 基于你前面做的评价,用一到两段向CharityML解释这三个模型中哪一个对于判断被调查者的年收入大于\$50,000是最合适的。 提示:你的答案应该包括关于评价指标,预测/训练时间,以及该算法是否适合这里的数据的讨论。 回答:Gradient Boosting最合适。 决策树的准确率和f-score在训练数据和测试数据之间差异明显,说明,它的泛化能力较差。K-近邻算法和Gradient Boosting则比较好。 K-近邻预测时间太长,增长太快,从0.59(%1),5.209(%10)到34.008(%100)。实际应用中,预测大规模的数据,执行时间太久,不能使用。 Gradient Boosting虽然,模型训练较慢,但是预测速度快。随着,预测数据的增加,预测执行时间基本没变化,维持在0.04秒左右。 问题 4 - 用通俗的话解释模型 
用一到两段话,向CharityML用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。 回答: Booting(提升)是将几个弱分类器提升为强分类器的思想。方法可以是,将这几个弱分类器直接相加或加权相加。 训练是从一棵参数很随机的决策树开始,它的预测结果仅比随机拆测要好一点。然后,把预测结果与真实结果比较,看与真实结果的差距,即损失函数的大小。使用损失函数的负梯度方向更新决策树的组合参数,使损失函数逐渐变小到满意的程度。 练习:模型调优 调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需给出并尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能: 导入sklearn.model_selection.GridSearchCV和sklearn.metrics.make_scorer. 初始化你选择的分类器,并将其存储在clf中。 如果能够设置的话,设置random_state。 创建一个对于这个模型你希望调整参数的字典。 例如: parameters = {'parameter' : [list of values]}。 注意: 如果你的学习器(learner)有 max_features 参数,请不要调节它! 使用make_scorer来创建一个fbeta_score评分对象(设置$\beta = 0.5$)。 在分类器clf上用'scorer'作为评价函数运行网格搜索,并将结果存储在grid_obj中。 用训练集(X_train, y_train)训练grid search object,并将结果存储在grid_fit中。 注意: 取决于你选择的参数列表,下面实现的代码可能需要花一些时间运行! End of explanation """ %%time # TODO:导入一个有'feature_importances_'的监督学习模型 from sklearn.ensemble import GradientBoostingClassifier # TODO:在训练集上训练一个监督学习模型 model = GradientBoostingClassifier() model.fit(X_train, y_train) # TODO: 提取特征重要性 importances = model.feature_importances_ # 绘图 vs.feature_plot(importances, X_train, y_train) """ Explanation: 问题 5 - 最终模型评估 你的最优模型在测试数据上的准确率和F-score是多少?这些分数比没有优化的模型好还是差?你优化的结果相比于你在问题 1中得到的朴素预测器怎么样? 注意:请在下面的表格中填写你的结果,然后在答案框中提供讨论。 结果: | 评价指标 | 基准预测器 | 未优化的模型 | 优化的模型 | | :------------: | :-----------------: | :---------------: | :-------------: | | 准确率 | 0.2478 | 0.8630 | 0.8697 | | F-score | 0.2917 | 0.7395 | 0.7504 | 回答:最优模型在测试数据上的准确率是0.8697,F-score是0.7504。这个结果比没有优化的模型有明显的提升,比问题1中的朴素预测期好太多。 特征的重要性 在数据上(比如我们这里使用的人口普查的数据)使用监督学习算法的一个重要的任务是决定哪些特征能够提供最强的预测能力。通过专注于一些少量的有效特征和标签之间的关系,我们能够更加简单地理解这些现象,这在很多情况下都是十分有用的。在这个项目的情境下这表示我们希望选择一小部分特征,这些特征能够在预测被调查者是否年收入大于\$50,000这个问题上有很强的预测能力。 选择一个有feature_importance_属性(这是一个根据这个选择的分类器来对特征的重要性进行排序的函数)的scikit学习分类器(例如,AdaBoost,随机森林)。在下一个Python代码单元中用这个分类器拟合训练集数据并使用这个属性来决定这个人口普查数据中最重要的5个特征。 问题 6 - 观察特征相关性 当探索数据的时候,它显示在这个人口普查数据集中每一条记录我们有十三个可用的特征。 在这十三个记录中,你认为哪五个特征对于预测是最重要的,你会怎样对他们排序?理由是什么? 
回答:最重要的5个特征依次是:年龄,教育年限(教育等级一般于这个有一定的正相关),种族,职业和资本收益 1. 年龄是影响最大的,因为个人能力和职位的上升,财富的积累都需要时间 2. 教育水平更高,相对的收入增长会更快更大,这也许就是大家投资教育的动力吧 3. 白人一般教育水平,社交圈的水平会更高 4. 有一个好的职业,当然能赚到更多的钱 5. 资本收益高,说明是有钱人呀 练习 - 提取特征重要性 选择一个scikit-learn中有feature_importance_属性的监督学习分类器,这个属性是一个在做预测的时候根据所选择的算法来对特征重要性进行排序的功能。 在下面的代码单元中,你将要实现以下功能: - 如果这个模型和你前面使用的三个模型不一样的话从sklearn中导入一个监督学习模型。 - 在整个训练集上训练一个监督学习模型。 - 使用模型中的'.feature_importances_'提取特征的重要性。 End of explanation """ %%time # 导入克隆模型的功能 from sklearn.base import clone # 减小特征空间 X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]] X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]] # 在前面的网格搜索的基础上训练一个“最好的”模型 # 这里使用前面变量model里面AdaBoostClassifier() clf = (clone(best_clf)).fit(X_train_reduced, y_train) # 做一个新的预测 best_predictions = model.predict(X_test) reduced_predictions = clf.predict(X_test_reduced) # 对于每一个版本的数据汇报最终模型的分数 print "Final Model trained on full data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)) print "\nFinal Model trained on reduced data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)) """ Explanation: 问题 7 - 提取特征重要性 观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。 这五个特征和你在问题 6中讨论的特征比较怎么样?如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?如果你的选择不相近,那么为什么你觉得这些特征更加相关? 回答:这个结果与我之前的预测差异明显。 1. capital-loss(资本损失),也许有较大的资本损失,才是高收入的人群的最显著的特性,他们回去投资,贫穷一点的人根本就没那么多资产。 2. capital-gain(资本收益),与Capital-loss类似,高收入人群会去投资,资本收益也会越大。 3. marital-status_Married-civ-spouse(与正常的配偶结婚),这个没想到会有那么高的权重。说明,良好的婚姻,能够促进财富增长。 4. ge(年龄),这个与之前的预测比较符合,财富,职位的增长需要时间。 5. 
education-num(教育年限),这个比预想的重要性小了很多,但是,高收入的人群确实有更多的教育。 特征选择 如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中所有特征中超过一半的重要性。这提示我们可以尝试去减小特征空间,并简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并只使用五个最重要的特征在相同的训练集上训练模型。 End of explanation """
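The feature-reduction step can be sketched without numpy: rank features by importance and keep the top k (the names and values below are illustrative, not the model's actual importances):

```python
def top_k_features(names, importances, k=5):
    # rank features by importance, descending (stdlib stand-in for np.argsort)
    ranked = sorted(zip(names, importances), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

names = ['capital-loss', 'capital-gain', 'married', 'age', 'edu-num', 'hours']
imps = [0.20, 0.18, 0.12, 0.10, 0.08, 0.05]

selected = top_k_features(names, imps, k=2)
```

Training on `X_train[selected]` then trades a small drop in accuracy and F-score for a much smaller feature space and faster training.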
arasdar/DL
impl-dl/etc/misc/nn_smartwatch.ipynb
unlicense
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Introduction: Neural Network for learning/processing time-series/historical signal data In this project, we'll build a neural network (NN) to learn from and process historical/time-series signal data. The dataset used in this project is the smartwatch multisensor dataset acquired and collected by the AnEAR app installed on a smartphone. This dataset is provided in the framework of the ED-EAR project, which is financially supported by NIH.
End of explanation """ # The smartwatch historical/time-series data to visualize data_path_1xn = 'data/smartwatch_data/experimental_data_analysis/Basis_Watch_Data.csv' watch_txn = pd.read_csv(data_path_1xn) """ Explanation: Load, explore, prepare, and preprocess the dataset The first critical step is loading/downloading, exploring, and preparing the dataset correctly. Variables/values on different scales make it difficult for the NN to efficiently learn the correct parameters (e.g. weights and biases). End of explanation """ # txn: time-space notation (t rows over time, n feature columns) watch_txn.head() # Exploring the data: rows (t) and columns (n) watch_txn[:20] # Getting rid of NaN values watch_txn = watch_txn.fillna(value=0.0) watch_txn[:100] # Plotting the smartwatch data watch_txn[:100].plot() """ Explanation: Downloading/checking out the smartwatch dataset The smartwatch dataset contains time-series data from a number of sensors. The sensor data recorded from the smartwatch are the following: GSR, heart rate, temperature, lighting, contact, and more. Below is a plot showing the smartwatch multisensor dataset. Looking at the plot, we can get information about the different sensors' modalities and values.
End of explanation """ # For the validation set, we use 10% of the remaining dataset # txn: t is time/row (num of records) and n is space/col (input feature space dims) # txm: t is time/row (num of records) and m is space/col (output feature space dims) train_features_txn, train_targets_txm = features_txn[:-(features_txn.shape[0]//10)], targets_txm[:-(targets_txm.shape[0]//10)] valid_features_txn, valid_targets_txm = features_txn[-(features_txn.shape[0]//10):], targets_txm[-(targets_txm.shape[0]//10):] train_features_txn.shape, train_targets_txm.shape, valid_features_txn.shape, valid_targets_txm.shape """ Explanation: Dividing the remaining data into training and validation sets to avoid overfitting and underfitting during training We'll split the remaining data into two sets: one for training and one for validation while the NN is being trained. Since this is a time-series dataset, we'll train on historical data, then try to predict on future data (the validation set). End of explanation """ class NN(object): # n: num_input_units in input layer, # h: num_hidden_units in the hidden layer, # m: num_out_units in the output layer, and # lr: learning_rate def __init__(self, n, h, m, lr): # Initialize parameters: weights and biases self.w_nxh = np.random.normal(loc=0.0, scale=n**(-0.5), size=(n, h)) self.b_1xh = np.zeros(shape=(1, h)) self.w_hxm = np.random.normal(loc=0.0, scale=h**(-0.5), size=(h, m)) self.b_1xm = np.zeros(shape=(1, m)) self.lr = lr # Train to update NN parameters (w, b) in each epoch using NN hyper-parameters def train(self, X_txn, Y_txm): ''' Train the network on a batch of features (X_txn) and targets (Y_txm). Arguments --------- features: X_txn is a 2D array, each row is one data record (t), each column is a feature (n) txn follows the rows-x-cols naming convention (like ixj or hxw) targets: Y_txm is a 2D array as well.
''' dw_nxh = np.zeros_like(self.w_nxh) db_1xh = np.zeros_like(self.b_1xh) dw_hxm = np.zeros_like(self.w_hxm) db_1xm = np.zeros_like(self.b_1xm) for each_X, each_Y in zip(X_txn, Y_txm): #### Implement the forward pass here #### ### Forward pass ### x_1xn = np.array(each_X, ndmin=2) # [[each]] y_1xm = np.array(each_Y, ndmin=2) # [[each]] # TODO: Hidden layer - Replace these values with your calculations. h_in_1xh = (x_1xn @ self.w_nxh) + self.b_1xh # signals into hidden layer h_out_1xh = np.tanh(h_in_1xh) # TODO: Output layer - Replace these values with your calculations. out_logits_1xm = (h_out_1xh @ self.w_hxm) + self.b_1xm # signals into final output layer y_pred_1xm = np.tanh(out_logits_1xm) # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### dy_1xm = y_pred_1xm - y_1xm # Output layer error: prediction minus target. # TODO: Output error - Replace this value with your calculations. dout_logits_1xm = dy_1xm * (1-(np.tanh(out_logits_1xm)**2)) # dtanh = (1-(np.tanh(x))**2) dh_out_1xh = dout_logits_1xm @ self.w_hxm.T # TODO: Calculate the hidden layer's contribution to the error dh_in_1xh = dh_out_1xh * (1-(np.tanh(h_in_1xh)**2)) dx_1xn = dh_in_1xh @ self.w_nxh.T # dx_1xn is not needed for the parameter updates (the input has no parameters) # TODO: Backpropagated error terms - Replace these values with your calculations.
db_1xm += dout_logits_1xm dw_hxm += (dout_logits_1xm.T @ h_out_1xh).T db_1xh += dh_in_1xh dw_nxh += (dh_in_1xh.T @ x_1xn).T # TODO: Update the NN parameters (w, b) in each epoch of training self.w_hxm -= self.lr * dw_hxm # update hidden-to-output weights with gradient descent step self.b_1xm -= self.lr * db_1xm # output units/neurons/cells/nodes self.w_nxh -= self.lr * dw_nxh # update input-to-hidden weights with gradient descent step self.b_1xh -= self.lr * db_1xh # hidden units/cells/neurons/nodes def run(self, X_txn): ''' Run a forward pass through the network with input features Arguments --------- features: X_txn is a 2D array of records (t as row) and their features (n as col) ''' #### Implement the forward pass here #### ### Forward pass ### x_txn = X_txn # TODO: Hidden layer - Replace these values with your calculations. h_in_txh = (x_txn @ self.w_nxh) + self.b_1xh # signals into hidden layer h_out_txh = np.tanh(h_in_txh) # TODO: Output layer - Replace these values with your calculations. out_logits_txm = (h_out_txh @ self.w_hxm) + self.b_1xm # signals into final output layer y_pred_txm = np.tanh(out_logits_txm) # signals from final output layer return y_pred_txm # Mean Squared Error (MSE) def MSE(Y_pred_1xt, Y_1xt): return np.mean((Y_pred_1xt-Y_1xt)**2) """ Explanation: How to build the NN Below, we build the NN architecture for learning and processing the time-series dataset, implementing both the forward and the backward pass. We'll set the hyperparameters: the learning rate, the number of hidden units, the number of input units, the number of output units, and the number of epochs for training passes (updating the NN parameters, i.e. the weights and biases). The NN has three kinds of layers: one input layer, one or more hidden layers, and one output layer. The hidden layers use a non-linear activation function such as the sigmoid or tanh.
The output layer, in our case, has one node and is used for regression. In a pure regression setup, the output node would be a linear unit (LU) $f(x)=x$, i.e. the output of the node is the same as its input, and its derivative is simply 1. In this implementation, however, the output node applies a tanh (see y_pred_1xm in the code above), so the derivative used in the backward pass is $1-\tanh^2(x)$; note that this bounds the (standardized) predictions to the interval $(-1, 1)$. A function that takes the input signal and generates an output signal, possibly taking a threshold into account, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons of the next layer. This process, which happens in the forward pass, is called forward propagation. We use the NN weights to propagate signals forward (in the forward pass) from the input to the output layer, and we use the same weights to propagate the error backwards (in the backward pass) from the output all the way back into the NN to update the weights. This is called backpropagation. Hint: we'll need the derivative of the output activation function for the backpropagation implementation. Below, we'll build the NN as follows: 1. Implement/apply an activation function such as sigmoid, tanh, or ReLU. 2. Implement the forward pass in the train method for forward propagation of the input data. 3. Implement the backpropagation algorithm in the train method in the backward pass by calculating the output error and the parameter gradients. 4. Implement the forward pass again at the end in the run method for the actual predictions on the validation and test data.
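A standard way to verify a backward pass like the one above is a finite-difference gradient check. This sketch uses a deliberately tiny scalar network with a tanh output and made-up numbers; it is independent of the class defined in this notebook but uses the same tanh-derivative rule:

```python
import numpy as np

def loss(w1, w2, x, y):
    # Half squared error of a 1-hidden-unit, all-scalar tanh network
    h = np.tanh(w1 * x)
    pred = np.tanh(w2 * h)
    return 0.5 * (pred - y) ** 2

def analytic_grads(w1, w2, x, y):
    # Backward pass by hand, mirroring the chain rule used in the NN class
    h = np.tanh(w1 * x)
    z = w2 * h
    pred = np.tanh(z)
    dpred = pred - y
    dz = dpred * (1 - np.tanh(z) ** 2)          # same dtanh = 1 - tanh(x)**2 as above
    dw2 = dz * h
    dh = dz * w2
    dw1 = dh * (1 - np.tanh(w1 * x) ** 2) * x
    return dw1, dw2

w1, w2, x, y, eps = 0.3, -0.7, 0.9, 0.2, 1e-6
g1, g2 = analytic_grads(w1, w2, x, y)
# Central finite differences should match the analytic gradients very closely
n1 = (loss(w1 + eps, w2, x, y) - loss(w1 - eps, w2, x, y)) / (2 * eps)
n2 = (loss(w1, w2 + eps, x, y) - loss(w1, w2 - eps, x, y)) / (2 * eps)
```

If the analytic and numeric gradients disagree beyond roundoff, the backward pass has a bug.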
End of explanation """ ### Set the hyperparameters here ### num_epochs = 100*2 # updating NN parameters (w, b) learning_rate = 2 * 10/train_features_txn.shape[0] # train_features = x_txn, t: number of recorded samples/records hidden_nodes = 10 output_nodes = 1 # y_tx1 input_nodes = train_features_txn.shape[1] # x_txn # Building the NN by initializing/instantiating the NN class nn = NN(h=hidden_nodes, lr=learning_rate, m=output_nodes, n=input_nodes) # Training-validating the NN - learning process losses_tx2 = {'train':[], 'valid':[]} for each_epoch in range(num_epochs): # Go through a random minibatch of 1024 records from the training data set random_minibatch = np.random.choice(train_features_txn.index, size=1024) x_txn, y_tx1 = train_features_txn.loc[random_minibatch].values, train_targets_txm.loc[random_minibatch].values # # Go through the full batch of records in the training data set # x_txn , y_tx1 = train_features_txn.values, train_targets_txm.values nn.train(X_txn=x_txn, Y_txm=y_tx1) # Printing out the training progress train_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=train_features_txn).T, Y_1xt=train_targets_txm.values.T) valid_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=valid_features_txn).T, Y_1xt=valid_targets_txm.values.T) print('each_epoch:', each_epoch, 'num_epochs:', num_epochs, 'train_loss:', train_loss_1x1_value, 'valid_loss:', valid_loss_1x1_value) losses_tx2['train'].append(train_loss_1x1_value) losses_tx2['valid'].append(valid_loss_1x1_value) plt.plot(losses_tx2['train'], label='Train loss') plt.plot(losses_tx2['valid'], label='Valid loss') plt.legend() _ = plt.ylim() """ Explanation: Training the NN First, we'll set the hyperparameters for the NN. The strategy here is to find hyperparameters such that the error on the training set is low enough, but you're not underfitting or overfitting the training set.
If you train the NN too long or the NN has too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the NN with backpropagation. The idea is that for each training pass, you grab a random minibatch of the data instead of using the whole data set (the full batch). We could also use batch gradient descent (BGD), but with SGD each pass is much faster. That is why, as the data grows in number of samples, BGD becomes infeasible for training the NN, and we use SGD instead to respect our hardware limitations, specifically the memory limits, i.e. RAM and cache. Epochs: choose the number of iterations for updating NN parameters This is the number of times the NN parameters are updated using the training dataset as we train the NN. The more iterations we use, the better the model might fit the data. However, if you use too many epochs, the model might not generalize to the data but instead memorize it, which is known as overfitting. You want to find a number here where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Learning rate This scales the size of the NN parameter updates. If it is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the NN has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps in the weight updates and the longer it takes for the NN to converge. Number of hidden nodes The more hidden nodes you have, the more complex the relationships the model can fit.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the NN performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation """ fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features_5x2['steps'] predictions_1xt = nn.run(X_txn=test_features_txn).T*std + mean ax.plot(predictions_1xt[0], label='Prediction', color='b') ax.plot((test_targets_txm*std + mean).values, label='Data', color='g') ax.legend() fig, ax_pred = plt.subplots(figsize=(8,4)) ax_pred.plot(predictions_1xt[0], label='Prediction', color='b') ax_pred.legend() fig, ax_data = plt.subplots(figsize=(8,4)) ax_data.plot((test_targets_txm*std + mean).values, label='Data', color='g') ax_data.legend() """ Explanation: Test predictions Here, we test our NN on the test data to view how well the NN is modeling/predicting the test dataset. If something is wrong, the NN is NOT implemented correctly. End of explanation """
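The training loop earlier draws a random minibatch of row indices with `np.random.choice`. A small sketch of that sampling (synthetic index, hypothetical sizes) is below; note that the default samples with replacement, so a minibatch may contain duplicate rows, and `replace=False` is the alternative when unique rows are wanted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, batch_size = 5000, 1024

# With replacement (matches the default behaviour used in the training loop)
batch_idx = rng.choice(n_rows, size=batch_size)
# Without replacement: every sampled row index is unique
batch_idx_unique = rng.choice(n_rows, size=batch_size, replace=False)
```

Either way, each epoch sees only `batch_size` rows instead of the full training set, which is what makes an SGD pass cheaper than a full-batch pass.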
dolittle007/dolittle007.github.io
notebooks/GLM-rolling-regression.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt """ Explanation: Rolling Regression Author: Thomas Wiecki Pairs trading is a famous technique in algorithmic trading that plays two stocks against each other. For this to work, stocks must be correlated (cointegrated). One common example is the price of gold (GLD) and the price of gold mining operations (GFI). End of explanation """ # from pandas_datareader import data # prices = data.GoogleDailyReader(symbols=['GLD', 'GFI'], end='2014-8-1').read().loc['Open', :, :] prices = pd.read_csv(pm.get_data('stock_prices.csv')) prices['Date'] = pd.DatetimeIndex(prices['Date']) prices = prices.set_index('Date') prices.head() finite_idx = (np.isfinite(prices.GLD.values)) & (np.isfinite(prices.GFI.values)) prices = prices.iloc[finite_idx] """ Explanation: Let's load the prices of GFI and GLD. End of explanation """ fig = plt.figure(figsize=(9, 6)) ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$') colors = np.linspace(0.1, 1, len(prices)) mymap = plt.get_cmap("winter") sc = ax.scatter(prices.GFI, prices.GLD, c=colors, cmap=mymap, lw=0) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); """ Explanation: Plotting the prices over time suggests a strong correlation. However, the correlation seems to change over time. End of explanation """ with pm.Model() as model_reg: pm.glm.GLM.from_formula('GLD ~ GFI', prices) trace_reg = pm.sample(2000) """ Explanation: A naive approach would be to estimate a linear model and ignore the time domain.
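One quick, model-free way to see a relationship drifting over time is a rolling-window correlation. This is a generic pandas sketch on synthetic series (not the GLD/GFI data) where the relationship flips sign halfway through:

```python
import numpy as np
import pandas as pd

n = 400
t = np.arange(n)
x = pd.Series(np.sin(t / 7.0) + 0.01 * t)    # synthetic "price"
y = x.copy()
y.iloc[n // 2:] = -x.iloc[n // 2:].values    # relationship flips sign halfway through

# Correlation computed within a sliding 60-point window
roll_corr = x.rolling(window=60).corr(y)
```

Windows entirely in the first half see a perfect positive relationship, windows entirely in the second half a perfect negative one; a single static regression would average the two away.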
End of explanation """ fig = plt.figure(figsize=(9, 6)) ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$', title='Posterior predictive regression lines') sc = ax.scatter(prices.GFI, prices.GLD, c=colors, cmap=mymap, lw=0) pm.plot_posterior_predictive_glm(trace_reg[100:], samples=100, label='posterior predictive regression lines', lm=lambda x, sample: sample['Intercept'] + sample['GFI'] * x, eval=np.linspace(prices.GFI.min(), prices.GFI.max(), 100)) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); ax.legend(loc=0); """ Explanation: The posterior predictive plot shows how bad the fit is. End of explanation """ model_randomwalk = pm.Model() with model_randomwalk: # std of random walk, best sampled in log space. sigma_alpha = pm.Exponential('sigma_alpha', 1./.02, testval = .1) sigma_beta = pm.Exponential('sigma_beta', 1./.02, testval = .1) """ Explanation: Rolling regression Next, we will build an improved model that will allow for changes in the regression coefficients over time. Specifically, we will assume that intercept and slope follow a random-walk through time. That idea is similar to the stochastic volatility model. $$ \alpha_t \sim \mathcal{N}(\alpha_{t-1}, \sigma_\alpha^2) $$ $$ \beta_t \sim \mathcal{N}(\beta_{t-1}, \sigma_\beta^2) $$ First, let's define the hyper-priors for $\sigma_\alpha^2$ and $\sigma_\beta^2$. These parameters can be interpreted as the volatility of the regression coefficients.
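The random-walk priors above state that each coefficient takes a Gaussian step from its previous value. Simulating one such walk forward in plain NumPy (a generic sketch, independent of PyMC3, with an arbitrary step scale) makes the dependence structure concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_beta = 0.02   # arbitrary step-size scale, mirroring the role of the hyper-prior above
n_steps = 200

# beta_t ~ Normal(beta_{t-1}, sigma_beta**2) is just a cumulative sum of Gaussian increments
increments = rng.normal(loc=0.0, scale=sigma_beta, size=n_steps)
beta_path = np.cumsum(increments)
```

Every consecutive difference of `beta_path` is exactly one Gaussian increment, which is the only dependence the prior encodes; the variance of the walk grows linearly with time.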
End of explanation """ import theano.tensor as tt # To make the model simpler, we will apply the same coefficient to 50 data points at a time subsample_n = 50 lendata = len(prices) ncoef = lendata // subsample_n idx = range(ncoef * subsample_n) with model_randomwalk: alpha = pm.GaussianRandomWalk('alpha', sigma_alpha**-2, shape=ncoef) beta = pm.GaussianRandomWalk('beta', sigma_beta**-2, shape=ncoef) # Make coefficients have the same length as prices alpha_r = tt.repeat(alpha, subsample_n) beta_r = tt.repeat(beta, subsample_n) """ Explanation: Next, we define the regression parameters that are not a single random variable but rather a random vector with the above stated dependence structure. So as not to fit a coefficient to a single data point, we will chunk the data into bins of 50 and apply the same coefficients to all data points in a single bin. End of explanation """ with model_randomwalk: # Define regression regression = alpha_r + beta_r * prices.GFI.values[idx] # Assume prices are Normally distributed, the mean comes from the regression. sd = pm.Uniform('sd', 0, 20) likelihood = pm.Normal('y', mu=regression, sd=sd, observed=prices.GLD.values[idx]) """ Explanation: Perform the regression given coefficients and data and link to the data via the likelihood. End of explanation """ with model_randomwalk: trace_rw = pm.sample(2000, njobs=2) """ Explanation: Inference. Despite this being quite a complex model, NUTS handles it well. End of explanation """ fig = plt.figure(figsize=(8, 6)) ax = plt.subplot(111, xlabel='time', ylabel='alpha', title='Change of alpha over time.') ax.plot(trace_rw[-1000:]['alpha'].T, 'r', alpha=.05); ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]); """ Explanation: Analysis of results $\alpha$, the intercept, does not seem to change over time.
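The `tt.repeat` step above stretches one coefficient per 50-point bin back out to the length of the data. The same indexing in plain NumPy, with illustrative values in place of the sampled coefficients:

```python
import numpy as np

subsample_n = 50
alpha_coefs = np.array([1.0, 2.0, 3.0])           # one (made-up) coefficient per bin
alpha_full = np.repeat(alpha_coefs, subsample_n)  # each value covers all 50 points of its bin
```

This is why `idx = range(ncoef * subsample_n)` trims the data: any trailing points that do not fill a whole bin have no coefficient assigned to them.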
End of explanation """ fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111, xlabel='time', ylabel='beta', title='Change of beta over time') ax.plot(trace_rw[-1000:]['beta'].T, 'b', alpha=.05); ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]); """ Explanation: However, the slope does. End of explanation """ fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$', title='Posterior predictive regression lines') colors = np.linspace(0.1, 1, len(prices)) colors_sc = np.linspace(0.1, 1, len(trace_rw[-500::10]['alpha'].T)) mymap = plt.get_cmap('winter') mymap_sc = plt.get_cmap('winter') xi = np.linspace(prices.GFI.min(), prices.GFI.max(), 50) for i, (alpha, beta) in enumerate(zip(trace_rw[-500::10]['alpha'].T, trace_rw[-500::10]['beta'].T)): for a, b in zip(alpha, beta): ax.plot(xi, a + b*xi, alpha=.05, lw=1, c=mymap_sc(colors_sc[i])) sc = ax.scatter(prices.GFI, prices.GLD, label='data', cmap=mymap, c=colors) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); """ Explanation: The posterior predictive plot shows that we capture the change in regression over time much better. Note that we should have used returns instead of prices. The model would still work the same, but the visualisations would not be quite as clear. End of explanation """
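The closing remark above suggests modelling returns instead of raw prices. In pandas that transformation is a one-liner; the prices below are synthetic, not the notebook's data:

```python
import numpy as np
import pandas as pd

prices_demo = pd.Series([100.0, 102.0, 101.0, 103.02])
# Simple returns: (p_t - p_{t-1}) / p_{t-1}; the first element is NaN and is dropped
returns = prices_demo.pct_change().dropna()
```

The rolling-regression model itself would be unchanged; only the observed series (and hence the scale of the plots) would differ.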
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-3/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: NCC Source ID: SANDBOX-3 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:25 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. 
Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. 
Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Do the soil properties vary continuously with depth? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependencies of the snow free albedo calculations End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3.
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe in general how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
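# NOTE (editor's sketch, hypothetical values): the drainage "types" property
# below is an ENUM with Cardinality 0.N - it is optional and may take several
# of the listed choices, entered with one DOC.set_value call per choice, e.g.
#     DOC.set_value("Gravity drainage")
#     DOC.set_value("Horton mechanism")
# A quick local check that intended answers are among the listed choices:
_drainage_choices = [
    "Gravity drainage", "Horton mechanism", "topmodel-based",
    "Dunne mechanism", "Lateral subsurface flow",
    "Baseflow from groundwater",
]
_intended_drainage_types = ["Gravity drainage", "Horton mechanism"]  # hypothetical
assert all(t in _drainage_choices for t in _intended_drainage_types)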
DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. 
Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. 
Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, describe the dependencies of the snow albedo calculations* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile vary with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10.
Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19.
Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. 
Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. 
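The fill-in pattern used throughout this notebook — select a property with DOC.set_id, then supply one value per DOC.set_value call (with repeated calls for 1.N / 0.N cardinalities) — can be sketched with a minimal stand-in recorder. This is a hypothetical illustration only: the class below is not part of pyesdoc, the real DOC object also validates property ids and ENUM choices against the CMIP6 controlled vocabulary, and the answer strings are invented examples.

```python
# Hypothetical stand-in for the notebook's DOC helper (sketch only; the
# real DOC comes from pyesdoc and validates against the CMIP6 CV).
class DocSketch:
    def __init__(self):
        self._current_id = None   # property selected by set_id
        self.values = {}          # property id -> list of entered values

    def set_id(self, property_id):
        # Select the property that subsequent set_value calls populate.
        self._current_id = property_id

    def set_value(self, value):
        # Properties with cardinality 1.N / 0.N are filled by calling
        # set_value once per applicable value.
        self.values.setdefault(self._current_id, []).append(value)

doc = DocSketch()

# STRING, cardinality 0.1: a single free-text answer (invented here).
doc.set_id('cmip6.land.carbon_cycle.tiling')
doc.set_value("One carbon tile per vegetation type")

# ENUM, cardinality 0.N: one call per applicable choice.
doc.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
doc.set_value("residence time")
doc.set_value("decay time")
```

Cardinality 1.1 properties take exactly one such call; 0.1 properties may be left unset.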
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2.
Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2.
Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. 
Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. 
Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2.
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. 
Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
hzzyyy/pymcef
Quickstart tutorial.ipynb
bsd-3-clause
import pandas as pd returns = pd.read_json('data/Russel3k_return.json') """ Explanation: PyMCEF Quickstart tutorial <br> <br> Prerequisites Install Please install the package PyMCEF through either conda or pip: <pre> $ conda install -c hzzyyy pymcef $ pip install pymcef </pre> conda packages are available on anaconda cloud for python 2.7, 3.5 and 3.6 on MacOS, Linux and Windows. pip packages are available on pypi for python 2.7, 3.5 and 3.6 on MacOS and Windows. pypi doesn't support linux binary distributions; to install from pip on linux, please download the wheel files from this repo and install locally. <pre> $ pip install your_download_path\pymcef-downloaded-filename.whl </pre> This package is only available on 64-bit OSes; in addition, the C++11 runtime library is also required. As a result, this package will NOT work on Redhat EL6 with the default configuration. If you mainly work with Redhat, you can either upgrade to EL7 or customize EL6 with C++11 support. The implementation quality of PyMCEF is validated through several benchmarks. Efficient frontier This package helps to find the efficient frontier based on (jointly) simulated returns of all investible assets. In other words, this package tries to solve the following problem numerically: given the joint distribution of the returns of all the investible assets, what is the best choice of weights to assign to each asset, for every possible risk preference? The efficient frontier is the set of all 'best' choices of asset weights (a.k.a. portfolios) over all possible risk preferences. Heuristically, the most risk-taking portfolio simply puts all weight on the asset with the largest expected return. The most risk-averse portfolio is very diversified, with many positions hedging each other.
Portfolio optimization Each portfolio on the efficient frontier is obtained by solving the following optimization problem on the choice of weight vector $w$: \begin{align} \underset{w}{\mathrm{argmin}}\quad & \mathrm{Risk}\left(w\right)-\lambda\cdot\mathrm{Reward}\left(w\right),\\ \mathrm{subject\ to}\quad & \sum_{i}w_{i} = 1,\\ & \ \ \ \quad w_{i} > 0. \end{align} The Lagrange multiplier $\lambda$ here uniquely defines a risk preference and its corresponding solution. With $\lambda$ going to infinity, the solution is the most risk-taking portfolio; with $\lambda$ equal to zero, the solution is the most risk-averse portfolio. Risk measures We haven't defined the formulas for the risk and reward functions in the above optimization problem. Not surprisingly, the reward function is just the expected return of the whole portfolio. Suppose the returns of all the assets form the random vector: $$ Y = \{Y_i\}. $$ Then the reward function is simply: \begin{eqnarray} \mathrm{Reward}\left(w\right) & = & \mathbb{E}\left[\sum_{i}w_{i}Y_{i}\right]\\ & = & \mathbb{E}\left[X\right], \end{eqnarray} where $X=\sum_{i}w_{i}Y_{i}$ denotes the portfolio return. The less obvious risk function, formally known as a risk measure, is a function of the portfolio return random variable $X$. In the Markowitz mean-variance model, the risk measure is the variance of $X$, which is theoretically flawed (check this example). In PyMCEF, the following two more sophisticated risk measures are used: Absolute Semideviation \begin{eqnarray} \mathrm{Risk}\left(w\right) & = & \mathbb{E}\left[\max\left(\mathbb{E}X-X,\ 0\right)\right]\\ & = & \mathbb{E}\left[\left(\mathbb{E}X-X\right)^{+}\right]. \end{eqnarray} Fixed-target under-performance \begin{eqnarray} \mathrm{Risk}\left(w\right) & = & \mathbb{E}\left[\max\left(t-X,\ 0\right)\right]\\ & = & \mathbb{E}\left[\left(t-X\right)^{+}\right], \end{eqnarray} where $t$ is a given target. The riskless return is a sensible choice.
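To make these definitions concrete, here is a minimal plain-numpy sketch (an illustration only, not part of PyMCEF's API; the function names and the toy return matrix are invented for this example) that estimates the reward and both risk measures from simulated returns by replacing each expectation with a sample mean:

```python
import numpy as np

def reward(weights, simulated_returns):
    """Sample estimate of E[X], where X = sum_i w_i * Y_i.
    simulated_returns has shape (n_assets, n_samples)."""
    portfolio = weights @ simulated_returns  # one portfolio return per sample
    return portfolio.mean()

def absolute_semideviation(weights, simulated_returns):
    """Sample estimate of E[(E[X] - X)^+]."""
    portfolio = weights @ simulated_returns
    return np.maximum(portfolio.mean() - portfolio, 0.0).mean()

def fixed_target_underperformance(weights, simulated_returns, target=0.0):
    """Sample estimate of E[(t - X)^+] for a given target t."""
    portfolio = weights @ simulated_returns
    return np.maximum(target - portfolio, 0.0).mean()

# toy check: two assets, four equally likely simulated scenarios
Y = np.array([[0.10, -0.05, 0.02, 0.01],
              [0.00,  0.03, -0.01, 0.02]])
w = np.array([0.5, 0.5])
print(round(reward(w, Y), 6), round(absolute_semideviation(w, Y), 6))
```

With estimates like these, the objective of the optimization problem above is simply risk minus $\lambda$ times reward.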
According to axiomatized portfolio theory, these two risk measures are better than the variance because they are more consistent with the stochastic dominance rule. Stochastic programming The (joint) distribution of the asset returns $\{Y_i\}$ can be parametric. A convenient choice is to assume a Normal or log-Normal distribution. More complicated parameterizations certainly exist, and they lead to different optimization problems. In addition, with $\lambda$ varying as a positive real number, we face a continuum of optimization problems. Alternatively, we can work with (finite) samples of $\{Y_i\}$. In this case, we replace the expectation in the risk and reward functions with the sample mean and work with a stochastic programming problem. PyMCEF is implemented in this way, and there is a huge advantage in terms of flexibility: the input for PyMCEF is just Monte Carlo simulated returns, and no knowledge of the underlying distribution function is needed for this package to work. In Bayesian inference, it is common practice to directly simulate the posterior predictive distribution with Markov Chain Monte Carlo, where the posterior density function is known only up to a scaling factor. In other words, in practice, samples of a distribution are more available than its parametric form. Another benefit comes with the linearization of the problem. With slack variables, the constraints and objective function can be linearized such that a solution is valid for an interval of $\lambda$. As a result, the (approximated) efficient frontier has only a finite number of portfolios. With stochastic programming, we solve an approximated problem whose solution is much easier to describe. One example We take the daily returns of US stocks in Russell 3000 from 2015-01-02 to 2017-01-04. Each asset has 505 returns, which are treated as simulated daily returns.
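As a sanity check on the stochastic-programming idea, the sample objective $\mathrm{Risk}(w)-\lambda\cdot\mathrm{Reward}(w)$ for one fixed $\lambda$ can be minimized by brute force. This is deliberately naive and is not the algorithm PyMCEF uses (PyMCEF solves the linearized problem exactly for all $\lambda$ intervals), and the simulated data here are made up:

```python
import numpy as np

def sample_objective(w, Y, lam):
    """Risk(w) - lam * Reward(w) with expectations replaced by sample means;
    the risk is the absolute semideviation. Y has shape (n_assets, n_samples)."""
    x = w @ Y
    risk = np.maximum(x.mean() - x, 0.0).mean()
    return risk - lam * x.mean()

rng = np.random.default_rng(0)
# two hypothetical assets: higher mean with higher volatility vs. the reverse
Y = rng.normal(loc=[[0.001], [0.0005]], scale=[[0.02], [0.01]], size=(2, 500))

best_w, best_val = None, np.inf
for _ in range(2000):                      # crude random search on the simplex
    w = rng.dirichlet([1.0, 1.0])
    val = sample_objective(w, Y, lam=0.1)
    if val < best_val:
        best_w, best_val = w, val
print(best_w)  # an approximately optimal portfolio for this particular lambda
```

Sweeping $\lambda$ over a grid and repeating this search would trace out a rough approximation of the efficient frontier; PyMCEF does the equivalent exactly and far faster.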
The direct use of historical data is not Monte Carlo; however, it is good enough to demonstrate how to use PyMCEF. We can consider this predictive distribution as a largely overfitted model with not much predictive power. However, it is a good starting point for better models. We removed stocks with prices lower than $5 (pink sheets) to avoid liquidity problems. The data is stored in this git repository and can be loaded into a pandas data frame: End of explanation """ returns_filted = returns[(returns > returns.quantile(0.01) * 4) & \ (returns < returns.quantile(0.99) * 4)].dropna(axis=1) """ Explanation: Instead of smoothing, we directly exclude those stocks with extremely large jumps in prices (possibly erroneous). Here, a gain or loss of more than four times the 99% or 1% quantile is considered extreme. End of explanation """ len(returns_filted.columns) """ Explanation: After this filtering process, the number of remaining assets is: End of explanation """ from time import time import numpy as np from pymcef import SimpleEFp tic = time() frt = SimpleEFp(training_set=np.transpose(returns_filted.values),\ validation_set=np.transpose(returns_filted.values),\ asset_names=returns_filted.columns) print(time() - tic) """ Explanation: Next we use this return data as both the training set and the validation set to construct an efficient frontier with PyMCEF. The risk measure is not specified, so the default absolute semideviation is used. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'svg' fig_ef = frt.plot_ef() """ Explanation: As we can see, the full efficient frontier is obtained in less than one minute.
Let's take a look at the efficient frontier: End of explanation """ fig_pf = frt.plot_performance() """ Explanation: The performances of all portfolio is also visualized: End of explanation """ fig_ws = frt.plot_weights() """ Explanation: And how the weights vary with different values of $\lambda$: End of explanation """ from __future__ import print_function prt0 = frt.frontier_in_sample[0] for k, v in prt0['weight'].items(): print(frt.asset_names[k], v) """ Explanation: All the weights are stored in the instance of SimpleEFp. The first one is the most risk-seeking: End of explanation """ prt1 = frt.frontier_in_sample[1] for k, v in prt1['weight'].items(): print(frt.asset_names[k], v) prt3 = frt.frontier_in_sample[3] for k, v in prt3['weight'].items(): print(frt.asset_names[k], v) """ Explanation: Starting from the second portfolio, there are more than one assets: End of explanation """ print(len(frt.frontier_in_sample[-1]['weight'])) """ Explanation: And the number of assets contained in the most risk-averse portfolio is: End of explanation """
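As a closing illustration, the frontier portfolios above are plain dicts mapping an asset index to a weight, so they are easy to post-process without any PyMCEF-specific tooling. The helper below, and the fake two-portfolio frontier it is demonstrated on, are hypothetical; only the {'weight': {index: value}} shape is taken from the cells above:

```python
def summarize_frontier(frontier, asset_names):
    """For each portfolio (a dict with a 'weight' mapping of asset index ->
    weight), return (number of assets held, name of the largest holding)."""
    rows = []
    for prt in frontier:
        w = prt['weight']
        top = max(w, key=w.get)            # index with the largest weight
        rows.append((len(w), asset_names[top]))
    return rows

# demonstrated on a fake two-portfolio frontier (hypothetical data)
names = ['AAA', 'BBB', 'CCC']
fake_frontier = [{'weight': {0: 1.0}},
                 {'weight': {0: 0.6, 2: 0.4}}]
print(summarize_frontier(fake_frontier, names))  # -> [(1, 'AAA'), (2, 'AAA')]
```

The same call applied to frt.frontier_in_sample with frt.asset_names would tabulate how diversification grows as the portfolios become more risk-averse.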
arogozhnikov/einops
docs/2-einops-for-deep-learning.ipynb
mit
from einops import rearrange, reduce import numpy as np x = np.random.RandomState(42).normal(size=[10, 32, 100, 200]) # utility to hide answers from utils import guess """ Explanation: Einops tutorial, part 2: deep learning The previous part of the tutorial provides visual examples with numpy. What's in this tutorial? working with deep learning packages important cases for deep learning models einops.asnumpy and einops.layers End of explanation """ # select one from 'chainer', 'gluon', 'tensorflow', 'pytorch' flavour = 'pytorch' print('selected {} backend'.format(flavour)) if flavour == 'tensorflow': import tensorflow as tf tape = tf.GradientTape(persistent=True) tape.__enter__() x = tf.Variable(x) + 0 elif flavour == 'pytorch': import torch x = torch.from_numpy(x) x.requires_grad = True elif flavour == 'chainer': import chainer x = chainer.Variable(x) else: assert flavour == 'gluon' import mxnet as mx mx.autograd.set_recording(True) x = mx.nd.array(x, dtype=x.dtype) x.attach_grad() type(x), x.shape """ Explanation: Select your flavour Switch to the framework you're most comfortable with. End of explanation """
Backpropagation gradients are a cornerstone of deep learning You can back-propagate through einops operations <br /> (just as with framework native operations) End of explanation """ from einops import asnumpy y3_numpy = asnumpy(y3) print(type(y3_numpy)) """ Explanation: Meet einops.asnumpy Just converts tensors to numpy (and pulls them off the GPU if necessary) End of explanation """ y = rearrange(x, 'b c h w -> b (c h w)') guess(y.shape) """ Explanation: Common building blocks of deep learning Let's check how some familiar operations can be written with einops Flattening is a common operation; it frequently appears at the boundary between convolutional layers and fully connected layers End of explanation """ y = rearrange(x, 'b c (h h1) (w w1) -> b (h1 w1 c) h w', h1=2, w1=2) guess(y.shape) """ Explanation: space-to-depth End of explanation """ y = rearrange(x, 'b (h1 w1 c) h w -> b c (h h1) (w w1)', h1=2, w1=2) guess(y.shape) """ Explanation: depth-to-space (notice that it's the reverse of the previous) End of explanation """ y = reduce(x, 'b c h w -> b c', reduction='mean') guess(y.shape) """ Explanation: Reductions Simple global average pooling. End of explanation """ y = reduce(x, 'b c (h h1) (w w1) -> b c h w', reduction='max', h1=2, w1=2) guess(y.shape) # you can skip names for reduced axes y = reduce(x, 'b c (h 2) (w 2) -> b c h w', reduction='max') guess(y.shape) """ Explanation: max-pooling with a kernel 2x2 End of explanation """ # models typically work only with batches, # so to predict a single image ... image = rearrange(x[0, :3], 'c h w -> h w c') # ... create a dummy 1-element axis ... y = rearrange(image, 'h w c -> () c h w') # ... imagine you predicted this with a convolutional network for classification, # we'll just flatten axes ... predictions = rearrange(y, 'b c h w -> b (c h w)') # ... 
finally, decompose (remove) dummy axis predictions = rearrange(predictions, '() classes -> classes') """ Explanation: 1d, 2d and 3d pooling are defined in a similar way for sequential 1-d models, you'll probably want pooling over time python reduce(x, '(t 2) b c -> t b c', reduction='max') for volumetric models, all three dimensions are pooled python reduce(x, 'b c (x 2) (y 2) (z 2) -> b c x y z', reduction='max') Uniformity is a strong point of einops, and you don't need a specific operation for each particular case. Good exercises write a version of space-to-depth for 1d and 3d (2d is provided above) write an average / max pooling for 1d models. Squeeze and unsqueeze (expand_dims) End of explanation """ y = x - reduce(x, 'b c h w -> b c 1 1', 'mean') guess(y.shape) """ Explanation: keepdims-like behavior for reductions empty composition () provides dimensions of length 1, which are broadcastable. alternatively, you can use just 1 to introduce a new axis; that's a synonym for () per-channel mean-normalization for each image: End of explanation """ y = x - reduce(x, 'b c h w -> 1 c 1 1', 'mean') guess(y.shape) """ Explanation: per-channel mean-normalization for the whole batch: End of explanation """ list_of_tensors = list(x) """ Explanation: Stacking let's take a list of tensors End of explanation """ tensors = rearrange(list_of_tensors, 'b c h w -> b h w c') guess(tensors.shape) # or maybe stack along last dimension? tensors = rearrange(list_of_tensors, 'b c h w -> h w c b') guess(tensors.shape) """ Explanation: The new axis (one that enumerates tensors) appears first on the left side of the expression. Just as if you were indexing a list: first you'd get the tensor by its index End of explanation """ tensors = rearrange(list_of_tensors, 'b c h w -> (b h) w c') guess(tensors.shape) """ Explanation: Concatenation concatenate over the first dimension? 
End of explanation """ tensors = rearrange(list_of_tensors, 'b c h w -> h w (b c)') guess(tensors.shape) """ Explanation: or maybe concatenate along the last dimension? End of explanation """ y = rearrange(x, 'b (g1 g2 c) h w-> b (g2 g1 c) h w', g1=4, g2=4) guess(y.shape) """ Explanation: Shuffling within a dimension channel shuffle (as it is drawn in the ShuffleNet paper) End of explanation """ y = rearrange(x, 'b (g c) h w-> b (c g) h w', g=4) guess(y.shape) """ Explanation: simpler version of channel shuffle End of explanation """ bbox_x, bbox_y, bbox_w, bbox_h = \ rearrange(x, 'b (coord bbox) h w -> coord b bbox h w', coord=4, bbox=8) # now you can operate on individual variables max_bbox_area = reduce(bbox_w * bbox_h, 'b bbox h w -> b h w', 'max') guess(bbox_x.shape) guess(max_bbox_area.shape) """ Explanation: Split a dimension Here's a super-convenient trick. Example: when a network predicts several bboxes for each position Assume we got 8 bboxes, 4 coordinates each. <br /> To get the coordinates into 4 separate variables, you move the corresponding dimension to the front and unpack the tuple. End of explanation """ from einops import parse_shape def convolve_2d(x): # imagine we have a simple 2d convolution with padding, # so output has same shape as input. # Sorry for laziness, use imagination! return x # imagine we are working with 3d data x_5d = rearrange(x, 'b c x (y z) -> b c x y z', z=20) # but we have only 2d convolutions. 
# That's not a problem, since we can apply y = rearrange(x_5d, 'b c x y z -> (b z) c x y') y = convolve_2d(y) # parse_shape not only supplies the additional axis sizes, it also verifies that all dimensions match y = rearrange(y, '(b z) c x y -> b c x y z', **parse_shape(x_5d, 'b c x y z')) parse_shape(x_5d, 'b c x y z') # we can skip some dimensions by writing an underscore parse_shape(x_5d, 'batch c _ _ _') """ Explanation: Getting into the weeds of tensor packing you can skip this part; it explains why it is worth making a habit of defining splits and packs explicitly when implementing custom gated activation (like GLU), a split is needed: python y1, y2 = rearrange(x, 'b (split c) h w -> split b c h w', split=2) result = y1 * sigmoid(y2) # or tanh ... but we could split differently python y1, y2 = rearrange(x, 'b (c split) h w -> split b c h w', split=2) the first one splits channels into consecutive groups: y1 = x[:, :x.shape[1] // 2, :, :] while the second takes channels with a step: y1 = x[:, 0::2, :, :] This may lead to very surprising results when the input is - a result of group convolution - a result of bidirectional LSTM/RNN - multi-head attention Let's focus on the second case (LSTM/RNN), since it is less obvious. For instance, cuDNN concatenates LSTM outputs for the forward-in-time and backward-in-time directions Also, in PyTorch, GLU splits channels into consecutive groups (the first way) So when the LSTM's output comes to GLU, - forward-in-time produces the linear part, and backward-in-time produces the activation ... - the roles of the directions are different, and the gradients coming to the two parts are different - that's not what you expect from simple GLU(BLSTM(x)), right? 
einops notation makes such inconsistencies explicit and easy to detect Shape parsing Just a handy utility End of explanation """ # each image is split into subgrids, each is now a separate "image" y = rearrange(x, 'b c (h hs) (w ws) -> (hs ws b) c h w', hs=2, ws=2) y = convolve_2d(y) # pack subgrids back to an image y = rearrange(y, '(hs ws b) c h w -> b c (h hs) (w ws)', hs=2, ws=2) assert y.shape == x.shape """ Explanation: Striding anything Finally, how to convert any operation into a strided operation? <br /> (like convolution with strides, aka dilated/atrous convolution) End of explanation """
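The subgrid trick just shown can be reproduced in plain numpy (an illustrative sketch; the notebook itself uses einops) to confirm that the split and the pack really are exact inverses:

```python
import numpy as np

x = np.random.rand(2, 3, 8, 10)          # b c h w
b, c, H, W = x.shape
hs, ws = 2, 2

# 'b c (h hs) (w ws) -> (hs ws b) c h w'
y = (x.reshape(b, c, H // hs, hs, W // ws, ws)
      .transpose(3, 5, 0, 1, 2, 4)
      .reshape(hs * ws * b, c, H // hs, W // ws))

# inverse: '(hs ws b) c h w -> b c (h hs) (w ws)'
z = (y.reshape(hs, ws, b, c, H // hs, W // ws)
      .transpose(2, 3, 4, 0, 5, 1)
      .reshape(b, c, H, W))

print(np.array_equal(z, x))  # True
```

Every subgrid becomes its own "image" in `y`, so any 2d operation applied to `y` effectively runs with stride 2 on the original image.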
repo_name: ML4DS/ML4all
path: C4.Classification_SVM/.ipynb_checkpoints/SupportVectorMachines-checkpoint.ipynb
license: mit
# To visualize plots in the notebook %matplotlib inline # Imported libraries import csv import random #import matplotlib import matplotlib.pyplot as plt #import pylab import numpy as np #from mpl_toolkits.mplot3d import Axes3D from sklearn.preprocessing import PolynomialFeatures #from sklearn import linear_model from sklearn import svm """ Explanation: Support Vector Machines Notebook version: 1.0 (Oct 26, 2015) Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es) Changes: v.1.0 - First version End of explanation """ X = [[0, 0], [1, 1]] y = [0, 1] clf = svm.SVC() clf.fit(X, y) """ Explanation: 1. Introduction Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. The advantages of support vector machines are: Effective in high dimensional spaces. Still effective in cases where the number of dimensions is greater than the number of samples. Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels. The disadvantages of support vector machines include: If the number of features is much greater than the number of samples, the method is likely to give poor performance. SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below). The support vector machines in scikit-learn support both dense (numpy.ndarray and convertible to that by numpy.asarray) and sparse (any scipy.sparse) sample vectors as input. However, to use an SVM to make predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered numpy.ndarray (dense) or scipy.sparse.csr_matrix (sparse) with dtype=float64. 
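The layout advice above ("C-ordered numpy.ndarray with dtype=float64") can be checked directly in numpy; a small illustrative sketch:

```python
import numpy as np

# A Fortran-ordered float32 array: NOT the layout scikit-learn prefers
X = np.asfortranarray(np.arange(6, dtype=np.float32).reshape(2, 3))
print(X.flags['C_CONTIGUOUS'], X.dtype)          # False float32

# Convert to a C-ordered float64 copy before fitting an SVM
X_opt = np.ascontiguousarray(X, dtype=np.float64)
print(X_opt.flags['C_CONTIGUOUS'], X_opt.dtype)  # True float64
```

The values are unchanged by the conversion; only the memory layout and dtype differ.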
SVM implementations in Scikit Learn SVC, NuSVC and LinearSVC are classes capable of performing multi-class classification on a dataset. SVC and NuSVC are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see section Mathematical formulation). On the other hand, LinearSVC is another implementation of Support Vector Classification for the case of a linear kernel. Note that LinearSVC does not accept keyword kernel, as this is assumed to be linear. It also lacks some of the members of SVC and NuSVC, like support_. As other classifiers, SVC, NuSVC and LinearSVC take as input two arrays: an array X of size [n_samples, n_features] holding the training samples, and an array y of class labels (strings or integers), size [n_samples]: End of explanation """ clf.predict([[2., 2.]]) """ Explanation: After being fitted, the model can then be used to predict new values: End of explanation """ # get support vectors print clf.support_vectors_ # get indices of support vectors print clf.support_ # get number of support vectors for each class print clf.n_support_ """ Explanation: SVMs decision function depends on some subset of the training data, called the support vectors. 
Some properties of these support vectors can be found in members support_vectors_, support_ and n_support_: End of explanation """ # Adapted from a notebook by Jason Brownlee def loadDataset(filename, split): xTrain = [] cTrain = [] xTest = [] cTest = [] with open(filename, 'rb') as csvfile: lines = csv.reader(csvfile) dataset = list(lines) for i in range(len(dataset)-1): for y in range(4): dataset[i][y] = float(dataset[i][y]) item = dataset[i] if random.random() < split: xTrain.append(item[0:4]) cTrain.append(item[4]) else: xTest.append(item[0:4]) cTest.append(item[4]) return xTrain, cTrain, xTest, cTest with open('iris.data', 'rb') as csvfile: lines = csv.reader(csvfile) xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66) nTrain_all = len(xTrain_all) nTest_all = len(xTest_all) print 'Train: ' + str(nTrain_all) print 'Test: ' + str(nTest_all) """ Explanation: 3.2.1 Example: Iris Dataset. As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (setosa, versicolor or virginica). Each instance contains 4 measurements of a given flower: sepal length, sepal width, petal length and petal width, all in centimeters. We will try to fit the logistic regression model to discriminate between two classes using only two attributes. First, we load the dataset and split it into training and test subsets. 
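The random-threshold idea used by `loadDataset` can be isolated in a small self-contained sketch (an illustration, not the original helper): each row goes to the training set with probability `split`, and to the test set otherwise, so the two sets always form a disjoint partition of the data:

```python
import random

def split_dataset(items, split, seed=42):
    """Randomly assign each item to train (with probability `split`) or test."""
    rng = random.Random(seed)
    train, test = [], []
    for item in items:
        (train if rng.random() < split else test).append(item)
    return train, test

train, test = split_dataset(list(range(150)), split=0.66)
print(len(train), len(test))  # roughly a 2:1 split of the 150 iris rows
```

Note that the sizes are only approximately 2:1; the split is random, which is why the notebook prints the actual counts.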
End of explanation """ # Select attributes i = 0 # Try 0,1,2,3 j = 1 # Try 0,1,2,3 with j!=i # Select two classes c0 = 'Iris-versicolor' c1 = 'Iris-virginica' # Select two coordinates ind = [i, j] # Take training set X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1]) C_tr = [cTrain_all[n] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1] Y_tr = np.array([int(c==c1) for c in C_tr]) n_tr = len(X_tr) # Take test set X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1]) C_tst = [cTest_all[n] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1] Y_tst = np.array([int(c==c1) for c in C_tst]) n_tst = len(X_tst) """ Explanation: Now, we select two classes and two attributes. End of explanation """ def normalize(X, mx=None, sx=None): # Compute means and standard deviations if mx is None: mx = np.mean(X, axis=0) if sx is None: sx = np.std(X, axis=0) # Normalize X0 = (X-mx)/sx return X0, mx, sx """ Explanation: 3.2.2. Data normalization Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show fewer instabilities and convergence problems when data are normalized. We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance. End of explanation """ # Normalize data Xn_tr, mx, sx = normalize(X_tr) Xn_tst, mx, sx = normalize(X_tst, mx, sx) """ Explanation: Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set. 
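The train-statistics rule can be illustrated with a self-contained sketch (it re-defines `normalize` locally and uses synthetic data standing in for the iris measurements): after normalization the training columns have exactly zero mean and unit variance, while the test columns generally do not, precisely because they reuse the training statistics:

```python
import numpy as np

def normalize(X, mx=None, sx=None):
    # Compute means and standard deviations from X only when not given
    if mx is None:
        mx = np.mean(X, axis=0)
    if sx is None:
        sx = np.std(X, axis=0)
    return (X - mx) / sx, mx, sx

rng = np.random.RandomState(0)
X_tr = rng.normal(loc=5.0, scale=2.0, size=(100, 2))
X_tst = rng.normal(loc=5.0, scale=2.0, size=(40, 2))

Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, _, _ = normalize(X_tst, mx, sx)   # reuse TRAINING statistics

print(np.allclose(Xn_tr.mean(axis=0), 0))   # True: exactly centered
print(np.allclose(Xn_tst.mean(axis=0), 0))  # False: test statistics differ
```

Fitting the normalization on the test set instead would leak test information into the preprocessing and make train and test features inconsistent.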
End of explanation """ # Separate components of x into different arrays (just for the plots) x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0] x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0] x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1] x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1] # Scatterplot. labels = {'Iris-setosa': 'Setosa', 'Iris-versicolor': 'Versicolor', 'Iris-virginica': 'Virginica'} plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.legend(loc='best') plt.axis('equal') """ Explanation: The following figure generates a plot of the normalized training data. End of explanation """ def logregFit(Z_tr, Y_tr, rho, n_it): # Data dimension n_dim = Z_tr.shape[1] # Initialize variables nll_tr = np.zeros(n_it) pe_tr = np.zeros(n_it) w = np.random.randn(n_dim,1) # Running the gradient descent algorithm for n in range(n_it): # Compute posterior probabilities for weight w p1_tr = logistic(np.dot(Z_tr, w)) p0_tr = logistic(-np.dot(Z_tr, w)) # Compute negative log-likelihood nll_tr[n] = - np.dot(Y_tr.T, np.log(p1_tr)) - np.dot((1-Y_tr).T, np.log(p0_tr)) # Update weights w += rho*np.dot(Z_tr.T, Y_tr - p1_tr) return w, nll_tr def logregPredict(Z, w): # Compute posterior probability of class 1 for weights w. p = logistic(np.dot(Z, w)) # Class D = [int(round(pn)) for pn in p] return p, D """ Explanation: In order to apply the gradient descent rule, we need to define two methods: - A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations. - A predict method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions. 
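Note that `logregFit` and `logregPredict` call a `logistic` function that is not defined anywhere in this excerpt; a standard, numerically stable definition (an assumption about the missing cell, not the notebook's own code) would be:

```python
import numpy as np

def logistic(t):
    """Numerically stable sigmoid: 1 / (1 + exp(-t))."""
    out = np.empty_like(t, dtype=float)
    pos = t >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-t[pos]))
    exp_t = np.exp(t[~pos])          # safe: t < 0 here, so no overflow
    out[~pos] = exp_t / (1.0 + exp_t)
    return out

t = np.array([-800.0, 0.0, 800.0])
print(logistic(t))   # [0.  0.5 1. ]
```

The two-branch form avoids computing `exp` of large positive arguments, which would overflow for the extreme inputs shown.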
End of explanation """ # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 200 # Number of iterations # Compute Z's Z_tr = np.c_[np.ones(n_tr), Xn_tr] Z_tst = np.c_[np.ones(n_tst), Xn_tst] n_dim = Z_tr.shape[1] # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print "The optimal weights are:" print w print "The final error rates are:" print "- Training: " + str(pe_tr) print "- Test: " + str(pe_tst) print "The NLL after training is " + str(nll_tr[len(nll_tr)-1]) """ Explanation: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$. End of explanation """ # Create a rectangular grid. x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max() y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max() dx = x_max - x_min dy = y_max - y_min h = dy / 400 xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h), np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h)) X_grid = np.array([xx.ravel(), yy.ravel()]).T # Compute Z's Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid] # Compute the classifier output for all samples in the grid. 
pp, dd = logregPredict(Z_grid, w) # Put the result into a color plot plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.legend(loc='best') plt.axis('equal') pp = pp.reshape(xx.shape) plt.contourf(xx, yy, pp, cmap=plt.cm.copper) """ Explanation: 3.2.3. Free parameters Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors: Number of iterations Initialization Learning step Exercise: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values. Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code slide inside the loop, with some proper modifications. To plot a histogram of the values in array p with nbins, you can use plt.hist(p, n) 3.2.3.1. Learning step The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence is too slow and more iterations are required for a good convergence. Exercise 3: Observe the evolution of the negative log-likelihood with the number of iterations, for different values of $\rho$. It is easy to check that, for $\rho$ large enough, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ stating a boundary between convergence and non-convergence? 
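The learning-step sweep suggested in Exercise 3 can be set up with a self-contained sketch (its own tiny synthetic data set and a clipped logistic, standing in for the notebook's variables; it is an illustration of the experiment, not its solution):

```python
import numpy as np

def logistic(t):
    # clipped for numerical safety when the weights grow large
    return np.clip(1.0 / (1.0 + np.exp(-t)), 1e-12, 1 - 1e-12)

def fit_logreg(Z, Y, rho, n_it):
    """Gradient ascent on the log-likelihood, as in logregFit above."""
    w = np.zeros((Z.shape[1], 1))
    nll = []
    for _ in range(n_it):
        p1 = logistic(np.dot(Z, w))
        nll.append(float(-np.dot(Y.T, np.log(p1)) - np.dot((1 - Y).T, np.log(1 - p1))))
        w += rho * np.dot(Z.T, Y - p1)
    return w, nll

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 1))
Y = (X + 0.3 * rng.normal(size=(200, 1)) > 0).astype(float)
Z = np.hstack([np.ones((200, 1)), X])

# Sweep the learning step on a logarithmic scale and compare initial/final NLL;
# with the largest step the NLL may oscillate or grow instead of decreasing
for rho in [1e-3, 1e-2, 1e-1]:
    w, nll = fit_logreg(Z, Y, rho, n_it=50)
    print(rho, round(nll[0], 1), round(nll[-1], 1))
```

Plotting `nll` against the iteration index for each `rho` reproduces the kind of convergence/divergence curves the exercise asks for.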
Exercise 4: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$. Note that you should explore the values of $\rho$ on a logarithmic scale. For instance, you can take $\rho = 1, 1/10, 1/100, 1/1000, \ldots$ In practice, the selection of $\rho$ may be a matter of trial and error. Also there is some theoretical evidence that the learning step should decrease over time to zero, and the sequence $\rho_n$ should satisfy two conditions: - C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (decrease quickly enough) - C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but do not decrease too quickly) For instance, we can take $\rho_n= 1/n$. Another common choice is $\rho_n = \alpha/(1+\beta n)$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method. 3.2.4. Visualizing the posterior map. We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights. End of explanation """ # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 500 # Number of iterations g = 5 # Degree of polynomial # Compute Z_tr poly = PolynomialFeatures(degree=g) Z_tr = poly.fit_transform(Xn_tr) # Normalize columns (this is useful to make algorithms more stable). 
Zn, mz, sz = normalize(Z_tr[:,1:]) Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1) # Compute Z_tst Z_tst = poly.fit_transform(Xn_tst) Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz) Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1) # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print "The optimal weights are:" print w print "The final error rates are:" print "- Training: " + str(pe_tr) print "- Test: " + str(pe_tst) print "The NLL after training is " + str(nll_tr[len(nll_tr)-1]) """ Explanation: 3.2.5. Polynomial Logistic Regression The error rates of the logistic regression model can potentially be reduced by using polynomial transformations. To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing. End of explanation """
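To see what a polynomial transformation produces, here is a small pure-Python sketch (an illustration, not scikit-learn's implementation) that builds all monomials of two features up to a total degree and counts them with the binomial formula $C(n+g, g)$:

```python
from itertools import combinations_with_replacement
from math import comb

def poly_features(x, degree):
    """All monomials of the entries of x up to the given total degree,
    including the constant term (mirroring PolynomialFeatures' layout)."""
    feats = []
    for d in range(degree + 1):
        for combo in combinations_with_replacement(range(len(x)), d):
            prod = 1.0
            for idx in combo:
                prod *= x[idx]
            feats.append(prod)
    return feats

z = poly_features([2.0, 3.0], degree=2)
print(z)                          # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
print(len(z) == comb(2 + 2, 2))   # True: C(n+g, g) monomials
```

The rapid growth of $C(n+g, g)$ with the degree $g$ is exactly why normalizing the polynomial columns (as the notebook does above) matters for stability.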
repo_name: Kaggle/learntools
path: notebooks/feature_engineering_new/raw/tut5.ipynb
license: apache-2.0
#$HIDE_INPUT$ import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from IPython.display import display from sklearn.feature_selection import mutual_info_regression plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=14, titlepad=10, ) def plot_variance(pca, width=8, dpi=100): # Create figure fig, axs = plt.subplots(1, 2) n = pca.n_components_ grid = np.arange(1, n + 1) # Explained variance evr = pca.explained_variance_ratio_ axs[0].bar(grid, evr) axs[0].set( xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0) ) # Cumulative Variance cv = np.cumsum(evr) axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-") axs[1].set( xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0) ) # Set up figure fig.set(figwidth=8, dpi=100) return axs def make_mi_scores(X, y, discrete_features): mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features) mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns) mi_scores = mi_scores.sort_values(ascending=False) return mi_scores df = pd.read_csv("../input/fe-course-data/autos.csv") """ Explanation: Introduction In the previous lesson we looked at our first model-based method for feature engineering: clustering. In this lesson we look at our next: principal component analysis (PCA). Just like clustering is a partitioning of the dataset based on proximity, you could think of PCA as a partitioning of the variation in the data. PCA is a great tool to help you discover important relationships in the data and can also be used to create more informative features. (Technical note: PCA is typically applied to standardized data. With standardized data "variation" means "correlation". With unstandardized data "variation" means "covariance". All data in this course will be standardized before applying PCA.) 
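The technical note that "variation means correlation" for standardized data can be verified numerically; a quick sketch: the covariance matrix of the standardized columns equals the correlation matrix of the raw columns:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated columns

# Standardize (ddof=1 to match np.cov's default normalization)
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

print(np.allclose(np.cov(Z, rowvar=False), np.corrcoef(X, rowvar=False)))  # True
```

This is why standardizing before PCA makes the components describe correlation structure rather than raw covariance, which would be dominated by whichever feature has the largest units.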
Principal Component Analysis In the Abalone dataset are physical measurements taken from several thousand Tasmanian abalone. (An abalone is a sea creature much like a clam or an oyster.) We'll just look at a couple features for now: the 'Height' and 'Diameter' of their shells. You could imagine that within this data are "axes of variation" that describe the ways the abalone tend to differ from one another. Pictorially, these axes appear as perpendicular lines running along the natural dimensions of the data, one axis for each original feature. <figure style="padding: 1em;"> <img src="https://i.imgur.com/rr8NCDy.png" width=300, alt=""> <figcaption style="textalign: center; font-style: italic"><center> </center></figcaption> </figure> Often, we can give names to these axes of variation. The longer axis we might call the "Size" component: small height and small diameter (lower left) contrasted with large height and large diameter (upper right). The shorter axis we might call the "Shape" component: small height and large diameter (flat shape) contrasted with large height and small diameter (round shape). Notice that instead of describing abalones by their 'Height' and 'Diameter', we could just as well describe them by their 'Size' and 'Shape'. This, in fact, is the whole idea of PCA: instead of describing the data with the original features, we describe it with its axes of variation. The axes of variation become the new features. <figure style="padding: 1em;"> <img src="https://i.imgur.com/XQlRD1q.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center>The principal components become the new features by a rotation of the dataset in the feature space. 
</center></figcaption> </figure> The new features PCA constructs are actually just linear combinations (weighted sums) of the original features: df["Size"] = 0.707 * X["Height"] + 0.707 * X["Diameter"] df["Shape"] = 0.707 * X["Height"] - 0.707 * X["Diameter"] These new features are called the principal components of the data. The weights themselves are called loadings. There will be as many principal components as there are features in the original dataset: if we had used ten features instead of two, we would have ended up with ten components. A component's loadings tell us what variation it expresses through signs and magnitudes: | Features \ Components | Size (PC1) | Shape (PC2) | |-----------------------|------------|-------------| | Height | 0.707 | 0.707 | | Diameter | 0.707 | -0.707 | This table of loadings is telling us that in the Size component, Height and Diameter vary in the same direction (same sign), but in the Shape component they vary in opposite directions (opposite sign). In each component, the loadings are all of the same magnitude and so the features contribute equally in both. PCA also tells us the amount of variation in each component. We can see from the figures that there is more variation in the data along the Size component than along the Shape component. PCA makes this precise through each component's percent of explained variance. <figure style="padding: 1em;"> <img src="https://i.imgur.com/xWTvqDA.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center> Size accounts for about 96% and the Shape for about 4% of the variance between Height and Diameter. </center></figcaption> </figure> The Size component captures the majority of the variation between Height and Diameter. It's important to remember, however, that the amount of variance in a component doesn't necessarily correspond to how good it is as a predictor: it depends on what you're trying to predict. 
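The 0.707 loadings above are not magic numbers: for two standardized features with equal variance, the covariance eigenvectors are exactly $(1, 1)/\sqrt{2}$ and $(1, -1)/\sqrt{2}$. A from-scratch sketch (numpy only, with an assumed correlation value standing in for the real Height/Diameter data):

```python
import numpy as np

rho = 0.92                                 # assumed Height/Diameter correlation
C = np.array([[1.0, rho],                  # covariance of standardized features
              [rho, 1.0]])

evals, evecs = np.linalg.eigh(C)           # returned in ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

loadings = evecs                           # columns = PC1 ("Size"), PC2 ("Shape")
explained = evals / evals.sum()

print(np.round(np.abs(loadings), 3))       # all entries 0.707
print(np.round(explained, 2))              # [0.96 0.04]
```

With equal variances, the explained-variance split is $(1+\rho)/2$ and $(1-\rho)/2$, so the more correlated the two measurements are, the more dominant the "Size" component becomes.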
PCA for Feature Engineering There are two ways you could use PCA for feature engineering. The first way is to use it as a descriptive technique. Since the components tell you about the variation, you could compute the MI scores for the components and see what kind of variation is most predictive of your target. That could give you ideas for kinds of features to create -- a product of 'Height' and 'Diameter' if 'Size' is important, say, or a ratio of 'Height' and 'Diameter' if Shape is important. You could even try clustering on one or more of the high-scoring components. The second way is to use the components themselves as features. Because the components expose the variational structure of the data directly, they can often be more informative than the original features. Here are some use-cases: - Dimensionality reduction: When your features are highly redundant (multicollinear, specifically), PCA will partition out the redundancy into one or more near-zero variance components, which you can then drop since they will contain little or no information. - Anomaly detection: Unusual variation, not apparent from the original features, will often show up in the low-variance components. These components could be highly informative in an anomaly or outlier detection task. - Noise reduction: A collection of sensor readings will often share some common background noise. PCA can sometimes collect the (informative) signal into a smaller number of features while leaving the noise alone, thus boosting the signal-to-noise ratio. - Decorrelation: Some ML algorithms struggle with highly-correlated features. PCA transforms correlated features into uncorrelated components, which could be easier for your algorithm to work with. PCA basically gives you direct access to the correlational structure of your data. You'll no doubt come up with applications of your own! 
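The decorrelation use-case is easy to demonstrate from scratch; a sketch of PCA via SVD (an illustration of the idea, not scikit-learn's estimator), checking that the resulting components have a diagonal covariance matrix:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated features

Xc = X - X.mean(axis=0)             # PCA assumes centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt.T              # principal component scores

cov = np.cov(components, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
print(np.allclose(off_diag, 0, atol=1e-6))  # True: components are decorrelated
```

Dropping the last columns of `components` (those with the smallest singular values) is the dimensionality-reduction use-case from the same list.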
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;"> <strong>PCA Best Practices</strong><br> There are a few things to keep in mind when applying PCA: <ul> <li> PCA only works with numeric features, like continuous quantities or counts. <li> PCA is sensitive to scale. It's good practice to standardize your data before applying PCA, unless you know you have good reason not to. <li> Consider removing or constraining outliers, since they can have an undue influence on the results. </ul> </blockquote> Example - 1985 Automobiles In this example, we'll return to our Automobile dataset and apply PCA, using it as a descriptive technique to discover features. We'll look at other use-cases in the exercise. This hidden cell loads the data and defines the functions plot_variance and make_mi_scores. End of explanation """ features = ["highway_mpg", "engine_size", "horsepower", "curb_weight"] X = df.copy() y = X.pop('price') X = X.loc[:, features] # Standardize X_scaled = (X - X.mean(axis=0)) / X.std(axis=0) """ Explanation: We've selected four features that cover a range of properties. Each of these features also has a high MI score with the target, price. We'll standardize the data since these features aren't naturally on the same scale. End of explanation """ from sklearn.decomposition import PCA # Create principal components pca = PCA() X_pca = pca.fit_transform(X_scaled) # Convert to dataframe component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])] X_pca = pd.DataFrame(X_pca, columns=component_names) X_pca.head() """ Explanation: Now we can fit scikit-learn's PCA estimator and create the principal components. You can see here the first few rows of the transformed dataset. 
End of explanation """ loadings = pd.DataFrame( pca.components_.T, # transpose the matrix of loadings columns=component_names, # so the columns are the principal components index=X.columns, # and the rows are the original features ) loadings """ Explanation: After fitting, the PCA instance contains the loadings in its components_ attribute. (Terminology for PCA is inconsistent, unfortunately. We're following the convention that calls the transformed columns in X_pca the components, which otherwise don't have a name.) We'll wrap the loadings up in a dataframe. End of explanation """ # Look at explained variance plot_variance(pca); """ Explanation: Recall that the signs and magnitudes of a component's loadings tell us what kind of variation it's captured. The first component (PC1) shows a contrast between large, powerful vehicles with poor gas milage, and smaller, more economical vehicles with good gas milage. We might call this the "Luxury/Economy" axis. The next figure shows that our four chosen features mostly vary along the Luxury/Economy axis. End of explanation """ mi_scores = make_mi_scores(X_pca, y, discrete_features=False) mi_scores """ Explanation: Let's also look at the MI scores of the components. Not surprisingly, PC1 is highly informative, though the remaining components, despite their small variance, still have a significant relationship with price. Examining those components could be worthwhile to find relationships not captured by the main Luxury/Economy axis. End of explanation """ # Show dataframe sorted by PC3 idx = X_pca["PC3"].sort_values(ascending=False).index cols = ["make", "body_style", "horsepower", "curb_weight"] df.loc[idx, cols] """ Explanation: The third component shows a contrast between horsepower and curb_weight -- sports cars vs. wagons, it seems. 
End of explanation """ df["sports_or_wagon"] = X.curb_weight / X.horsepower sns.regplot(x="sports_or_wagon", y='price', data=df, order=2); """ Explanation: To express this contrast, let's create a new ratio feature: End of explanation """
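As an aside to the loadings discussion above, the components and loadings that scikit-learn produces can also be recovered by hand with an SVD of the standardized data. This is a minimal sketch on a small synthetic array (the data here is made up for illustration, not the Automobile dataset; `np.linalg.svd` does the work that `PCA.fit` performs internally):

```python
import numpy as np

# Small synthetic standardized dataset: 8 samples, 3 features
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# SVD of the (already centered) data; the rows of Vt are the loadings
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Transformed data, analogous to PCA().fit_transform(X) up to sign
X_pca = X @ Vt.T

# Variance explained by each component, largest first
explained_variance = S**2 / (len(X) - 1)
print(explained_variance)
```

Because the loadings matrix is a rotation, the components are uncorrelated and the total variance is preserved, which is what makes `plot_variance`-style explained-variance plots meaningful.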
sdpython/ensae_teaching_cs
_doc/notebooks/1a/nbheap.ipynb
mit
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: Heap
The heap structure ("tas" in French) is used for sorting. It can also be used to retrieve the first k elements of a list.
End of explanation
"""
%matplotlib inline
"""
Explanation: A heap can be seen as an array $T$ satisfying a fairly simple condition: for every index $i$, $T[i] \geqslant \max(T[2i+1], T[2i+2])$. It follows that the first element of the array is necessarily the largest. Now, how do we transform an array into one that respects this constraint?
Turning an array into a heap
End of explanation
"""
def swap(tab, i, j):
    "Swap two elements."
    tab[i], tab[j] = tab[j], tab[i]

def entas(heap):
    "Arrange a collection as a heap."
    modif = 1
    while modif > 0:
        modif = 0
        i = len(heap) - 1
        while i > 0:
            root = (i-1) // 2
            if heap[root] < heap[i]:
                swap(heap, root, i)
                modif += 1
            i -= 1
    return heap

ens = [1,2,3,4,7,10,5,6,11,12,3]
entas(ens)
"""
Explanation: Since it is not easy to check that this really is a heap, let's draw it.
Drawing a heap
End of explanation
"""
from pyensae.graphhelper import draw_diagram

def dessine_tas(heap):
    rows = ["blockdiag {"]
    for i, v in enumerate(heap):
        if i*2+1 < len(heap):
            rows.append('"[{}]={}" -> "[{}]={}";'.format(
                i, heap[i], i * 2 + 1, heap[i*2+1]))
        if i*2+2 < len(heap):
            rows.append('"[{}]={}" -> "[{}]={}";'.format(
                i, heap[i], i * 2 + 2, heap[i*2+2]))
    rows.append("}")
    return draw_diagram("\n".join(rows))

ens = [1,2,3,4,7,10,5,6,11,12,3]
dessine_tas(entas(ens))
"""
Explanation: The number in brackets is the position, the other number is the value at that position. This representation reveals a binary-tree structure.
First version
End of explanation
"""
def swap(tab, i, j):
    "Swap two elements."
    tab[i], tab[j] = tab[j], tab[i]

def _heapify_max_bottom(heap):
    "Arrange a collection as a heap."
    modif = 1
    while modif > 0:
        modif = 0
        i = len(heap) - 1
        while i > 0:
            root = (i-1) // 2
            if heap[root] < heap[i]:
                swap(heap, root, i)
                modif += 1
            i -= 1

def _heapify_max_up(heap):
    "Arrange a collection as a heap."
i = 0
    while True:
        left = 2*i + 1
        right = left+1
        if right < len(heap):
            # swap with the larger of the two children
            if heap[left] >= heap[right] and heap[left] > heap[i]:
                swap(heap, i, left)
                i = left
            elif heap[right] > heap[i]:
                swap(heap, i, right)
                i = right
            else:
                break
        elif left < len(heap) and heap[left] > heap[i]:
            swap(heap, i, left)
            i = left
        else:
            break

def topk_min(ens, k):
    "Return the k smallest elements of a collection."
    heap = ens[:k]
    _heapify_max_bottom(heap)
    for el in ens[k:]:
        if el < heap[0]:
            heap[0] = el
            _heapify_max_up(heap)
    return heap

ens = [1,2,3,4,7,10,5,6,11,12,3]
for k in range(1, len(ens)-1):
    print(k, topk_min(ens, k))
"""
Explanation: Same thing with indices instead of values
End of explanation
"""
def _heapify_max_bottom_position(ens, pos):
    "Arrange a collection as a heap."
    modif = 1
    while modif > 0:
        modif = 0
        i = len(pos) - 1
        while i > 0:
            root = (i-1) // 2
            if ens[pos[root]] < ens[pos[i]]:
                swap(pos, root, i)
                modif += 1
            i -= 1

def _heapify_max_up_position(ens, pos):
    "Arrange a collection as a heap."
    i = 0
    while True:
        left = 2*i + 1
        right = left+1
        if right < len(pos):
            # swap with the larger of the two children
            if ens[pos[left]] >= ens[pos[right]] and ens[pos[left]] > ens[pos[i]]:
                swap(pos, i, left)
                i = left
            elif ens[pos[right]] > ens[pos[i]]:
                swap(pos, i, right)
                i = right
            else:
                break
        elif left < len(pos) and ens[pos[left]] > ens[pos[i]]:
            swap(pos, i, left)
            i = left
        else:
            break

def topk_min_position(ens, k):
    "Return the positions of the k smallest elements of a collection."
pos = list(range(k)) _heapify_max_bottom_position(ens, pos) for i, el in enumerate(ens[k:]): if el < ens[pos[0]]: pos[0] = k + i _heapify_max_up_position(ens, pos) return pos ens = [1,2,3,7,10,4,5,6,11,12,3] for k in range(1, len(ens)-1): pos = topk_min_position(ens, k) print(k, pos, [ens[i] for i in pos]) import numpy.random as rnd X = rnd.randn(10000) %timeit topk_min(X, 20) %timeit topk_min_position(X, 20) """ Explanation: Même chose avec les indices au lieu des valeurs End of explanation """ from cpyquickhelper.numbers import measure_time from tqdm import tqdm from pandas import DataFrame rows = [] for n in tqdm(list(range(1000, 20001, 1000))): X = rnd.randn(n) res = measure_time('topk_min_position(X, 100)', {'X': X, 'topk_min_position': topk_min_position}, div_by_number=True, number=10) res["size"] = n rows.append(res) df = DataFrame(rows) df.head() import matplotlib.pyplot as plt df[['size', 'average']].set_index('size').plot() plt.title("Coût topk en fonction de la taille du tableau"); """ Explanation: Coût de l'algorithme End of explanation """ rows = [] X = rnd.randn(10000) for k in tqdm(list(range(500, 2001, 150))): res = measure_time('topk_min_position(X, k)', {'X': X, 'topk_min_position': topk_min_position, 'k': k}, div_by_number=True, number=5) res["k"] = k rows.append(res) df = DataFrame(rows) df.head() df[['k', 'average']].set_index('k').plot() plt.title("Coût topk en fonction de k"); """ Explanation: A peu près linéaire comme attendu. End of explanation """ def _heapify_max_up_position_simple(ens, pos, first): "Organise un ensemble selon un tas." 
i = first
    while True:
        left = 2*i + 1
        right = left+1
        if right < len(pos):
            # swap with the larger of the two children
            if ens[pos[left]] >= ens[pos[right]] and ens[pos[left]] > ens[pos[i]]:
                swap(pos, i, left)
                i = left
            elif ens[pos[right]] > ens[pos[i]]:
                swap(pos, i, right)
                i = right
            else:
                break
        elif left < len(pos) and ens[pos[left]] > ens[pos[i]]:
            swap(pos, i, left)
            i = left
        else:
            break

def topk_min_position_simple(ens, k):
    "Return the positions of the k smallest elements of a collection."
    pos = list(range(k))
    pos[k-1] = 0
    for i in range(1, k):
        pos[k-i-1] = i
        _heapify_max_up_position_simple(ens, pos, k-i-1)
    for i, el in enumerate(ens[k:]):
        if el < ens[pos[0]]:
            pos[0] = k + i
            _heapify_max_up_position_simple(ens, pos, 0)
    return pos

ens = [1,2,3,7,10,4,5,6,11,12,3]
for k in range(1, len(ens)-1):
    pos = topk_min_position_simple(ens, k)
    print(k, pos, [ens[i] for i in pos])

X = rnd.randn(10000)
%timeit topk_min_position_simple(X, 20)
"""
Explanation: Hard to say; at worst $O(n\ln n)$, at best $O(n)$.
Simplified version
Do we really need _heapify_max_bottom_position?
End of explanation
"""
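The hand-rolled top-k functions above can be cross-checked against the standard library: `heapq.nsmallest` returns the same k smallest elements, and the heap invariant $T[i] \geqslant \max(T[2i+1], T[2i+2])$ can be verified directly. A small sketch (the helper `is_max_heap` is mine, not part of the notebook):

```python
import heapq

def is_max_heap(tab):
    """Check the max-heap property: tab[i] >= tab[2i+1] and tab[i] >= tab[2i+2]."""
    n = len(tab)
    return all(tab[i] >= tab[c]
               for i in range(n)
               for c in (2 * i + 1, 2 * i + 2) if c < n)

ens = [1, 2, 3, 4, 7, 10, 5, 6, 11, 12, 3]

# The k smallest elements, sorted, straight from the standard library
print(heapq.nsmallest(4, ens))  # [1, 2, 3, 3]

# A valid max-heap arrangement of ens, and an arrangement that is not one
print(is_max_heap([12, 11, 10, 6, 7, 3, 5, 1, 4, 2, 3]))  # True
print(is_max_heap(ens))                                    # False
```

`heapq` maintains a min-heap over a plain list, so `nsmallest` is the library counterpart of `topk_min`; comparing their outputs on random data is a quick regression test for the custom versions.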
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/ukesm1-0-mmh/ocnbgchem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-mmh', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: MOHC Source ID: UKESM1-0-MMH Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:15 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. 
Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. 
Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --&gt; Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2.
River Input
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --&gt; Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2.
CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. 
Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. 
Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. 
Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
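Most of the properties above are ENUMs with a fixed list of valid choices, so a typo in a hand-entered value is easy to make. A small guard that checks a proposed value against the listed choices before calling `DOC.set_value` can catch this early. This is an illustrative sketch only: `check_choice` and the inlined choice list are assumptions, not part of the ES-DOC API.

```python
def check_choice(value, valid_choices):
    """Return value unchanged if it is a listed choice, otherwise raise ValueError."""
    if value not in valid_choices:
        raise ValueError("%r is not a valid choice; expected one of %s"
                         % (value, sorted(valid_choices)))
    return value

# The alkalinity choices listed in the cell above (stray parenthesis removed).
alkalinity_choices = ["Prognostic", "Diagnostic"]
print(check_choice("Prognostic", alkalinity_choices))  # -> Prognostic
```

In practice you would wrap the call site, e.g. `DOC.set_value(check_choice(v, choices))`, so an invalid value fails loudly instead of being silently recorded.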
dssg/diogenes
doc/notebooks/read.ipynb
mit
import diogenes sample_csv_text = 'id,name,age\n0,Anne,57\n1,Bill,76\n2,Cecil,26\n' with open('sample.csv', 'w') as csv_in: csv_in.write(sample_csv_text) sample_table = diogenes.read.open_csv('sample.csv') """ Explanation: The Read Module The :mod:diogenes.read module provides tools for reading data from external sources into Diogenes' preferred Numpy structured array format. The module can read from either: A local CSV file A remote CSV file Any sort of SQL database Local CSV Files We can read local CSV files using :func:diogenes.read.read.open_csv. End of explanation """ print sample_table.dtype print sample_table """ Explanation: sample_table is a structured array (more specifically, a record array). End of explanation """ remote_csv = diogenes.read.open_csv_url('http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv', delimiter=';') print remote_csv.dtype print remote_csv[:10] """ Explanation: Remote CSV files We read remote CSV files with :func:diogenes.read.read.open_csv_url, using a url End of explanation """ conn = diogenes.read.read.connect_sql('sqlite://') conn.execute('CREATE TABLE sample_table (id INT, name TEXT, age INT)') for row in sample_table: conn.execute('INSERT INTO sample_table (id, name, age) VALUES (?, ?, ?)', row) """ Explanation: SQL databases We read from, and write to, databases using :func:diogenes.read.read.connect_sql. When we pass an SQLAlchemy connection string to connect_sql, we get an instance of :class:diogenes.read.read.SQLConnection, which can run SQL queries with :meth:diogenes.read.read.SQLConnection.execute in a way that resembles (but does not strictly adhere to) DBAPI 2.0. End of explanation """ sql_result = conn.execute('SELECT * FROM sample_table') print sql_result.dtype print sql_result """ Explanation: An important difference between most Python SQL libraries and Diogenes is that Diogenes returns queries in structured arrays. End of explanation """
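For readers without diogenes installed, the core of what `open_csv` does, turning rows of text into typed records, can be sketched with nothing but the standard library `csv` module. This is an illustrative stand-in, not diogenes code, and it returns plain tuples rather than a NumPy structured array:

```python
import csv
import io

sample_csv_text = 'id,name,age\n0,Anne,57\n1,Bill,76\n2,Cecil,26\n'

def read_records(text):
    """Parse CSV text into a list of (id, name, age) tuples with typed columns."""
    reader = csv.DictReader(io.StringIO(text))
    return [(int(row['id']), row['name'], int(row['age'])) for row in reader]

records = read_records(sample_csv_text)
print(records)  # -> [(0, 'Anne', 57), (1, 'Bill', 76), (2, 'Cecil', 26)]
```

A real structured array additionally carries a dtype with named, typed fields, which is what makes column access like `arr['age']` possible.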
theideasmith/theideasmith.github.io
_notebooks/AsymptoticConvergence/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization.ipynb
mit
from pylab import * from numpy import random as random random.seed(1) N=1000. w = array([14., 30.]); x = zeros((2, int(N))).astype(float32) x[0,:] = arange(N).astype(float32) x[1,:] = 1 y = w.dot(x) + random.normal(size=int(N), scale=100.) """ Explanation: Supplementary Materials This code accompanies the paper Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization (Lipshitz, 2017) Initialization End of explanation """ yh = lambda xs, ws: \ ws.dot(xs) grad = lambda ys, yhs, xs: \ (1./xs.shape[1])*sum((yhs-ys)*xs).astype(float32) delta = lambda gs, a: \ a*gs def regress(y, x, alpha, T=1000, wh=None, **kwargs): wh = random.normal(2, size=2) whs = zeros((T, 2)) whs[0,:] = wh for i in xrange(1,T): wh+=delta(grad(y,yh(x,wh), x), alpha) whs[i,:] = wh.copy() return wh, whs def regrSample(y, x, alpha, T=1000, N=10, **kwargs): out = map( lambda a: \ regress(y,x, alpha, T=T), xrange(N) ) trains = array([o[1] for o in out]) wDist = array([o[0] for o in out]) return wDist, trains def statsRegr(*args, **kwargs): wDist, trains = regrSample(*args, **kwargs) return np.mean(trains, axis=0), np.std(trains, axis=0) """ Explanation: Defining Regression End of explanation """ def plotDynamicsForAlpha(alpha, axTitle, T=1000, N=10): t = np.arange(T) mu, sig = statsRegr(y, x, alpha, T=T, N=N) plot(mu[:,0], 'r:', label='$w_1$') plot(mu[:,1], 'b:', label='$w_2$') fill_between(t, \ mu[:,0]+sig[:,0], \ mu[:,0]-sig[:,0], \ facecolor='red', alpha=0.5) fill_between(t,\ mu[:,1]+sig[:,1], \ mu[:,1]-sig[:,1], \ facecolor='blue', alpha=0.5) xlabel("t [Iterations]", fontdict={'fontsize':fs*.8}) yl = ylabel("$w_{i,t}$",fontdict={'fontsize':fs*.8}) yl.set_rotation('horizontal') title(axTitle, fontdict={'fontsize':fs}) tight_layout() return mu, sig alphaData = [ ("$a=2$", 2), ("$a=0$",0.), ("$a=-0.5N/x^2$",-0.5*N/linalg.norm(x[0,:])**2), ("$a=-N/x^2$", -N/linalg.norm(x[0,:])**2), ("$a=-1.3N/x^2$", -1.3*N/linalg.norm(x[0,:])**2), ("$a=-1.6N/x^2$", 
-1.6*N/linalg.norm(x[0,:])**2), ("$a=-1.99N/x^2$", -1.99*N/linalg.norm(x[0,:])**2), ("$a=-2N/x^2$", -2.0*N/linalg.norm(x[0,:])**2) ] %matplotlib inline from scipy.stats import norm import seaborn as sns fs = 15 figure(figsize=(10,3*len(alphaData))) outs = [] for i, d in enumerate(alphaData): k, v = d # subplot(len(alphaData),1, i+1) figure(figsize=(10,3)) outs.append(plotDynamicsForAlpha(v, k, T=300 )) tight_layout() # suptitle("Dynamical Learning Trajectories for Significant Alpha Values", y=1.08, fontdict={'fontsize':20}); for i, axtitle in enumerate(alphaData): axtitle, axnum = axtitle mu, sig = outs[i] figure(figsize=(10,3)) if np.sum(np.isnan(mu)) > 0: k=2 idx0=argwhere(~np.isnan(mu[:,0]))[-1]-1 idx1=argwhere(~np.isnan(sig[:,0]))[-1]-1 idx = min(idx0, idx1) xmin = max(mu[idx,0]-k*sig[idx,0], mu[idx,0]-k*sig[idx,0]) xmax = min(mu[idx,0]+k*sig[idx,0], mu[idx,0]+k*sig[idx,0]) x_axis = np.linspace(xmin,xmax, num=300); else: xmin = max(mu[-1,0]-3*sig[-1,0], mu[-1,0]-3*sig[-1,0]) xmax = min(mu[-1,0]+3*sig[-1,0], mu[-1,0]+3*sig[-1,0]) x_axis = np.linspace(xmin,xmax, num=300); plt.plot(x_axis, norm.pdf(x_axis,mu[-1,0],sig[-1,0]),'r:'); plt.plot(x_axis, norm.pdf(x_axis,mu[-1,1],sig[-1,1]), 'b:'); xlim(xmin = xmin, xmax=xmax) p, v = yticks() plt.yticks(p,map(lambda w: round(w, 2),linspace(0, 1, num=len(p)))) title(axtitle) tight_layout() x.shape figure(figsize=(10,10)) subplot(2,1,1) title("Closed Form Expression", fontdict={'fontsize':10}) T = 300 w0 = random.normal(2, size=2) t = np.arange(T) a = -2.1*N/linalg.norm(x[0,:])**2 beta2 = (1/N)*a*x[0,:].dot(x[0,:]) beta1 = -(1/N)*a*x[0,:].dot(y) ws = w0[0]*(beta2+1)**t - beta1*(1-(beta2+1)**t)/beta2 # ws = w0[0]*(-1)**t + ((-1)**t -1)*x[0,:].dot(y)/linalg.norm(x[0,:])**2 plot(ws) subplot(2,1,2) title("Simulation", fontdict={'fontsize':10}) wh = w0 whs = zeros((T, 2)) whs[0,:] = wh for i in xrange(1,T): wh+=delta(grad(y,yh(x,wh), x), a) whs[i,:] = wh.copy() plot(whs[:,0]) suptitle(("Asymptotic Behavior " "of Closed form and
Simulated Learning: $a = -2.1N/x^2$"), fontdict={"fontsize":20}) """ Explanation: Running Regression above and Below the Upper Bound on $\alpha$ The theoretically derived bounds on $\alpha$ are $$\alpha \in \left( -2\frac{N}{|\mathbf{x}|^2}, 0 \right]$$ Other $\alpha$ values diverge End of explanation """ t = arange(0,10) ws = (0**t)*(w0[0]+x[0,:].dot(y)/linalg.norm(x[0,:])**2) + x[0,:].dot(y)/linalg.norm(x[0,:])**2 figure() ax = subplot(111) ax.set_title("alpha = sup A") ax.plot(ws) t = arange(0,10) ws = ((-1)**t)*w0[0] - (x[0,:].dot(y)/linalg.norm(x[0,:])**2) + (-2)**t*x[0,:].dot(y)/linalg.norm(x[0,:])**2 figure() ax = subplot(111) ax.set_title("alpha = sup A") ax.plot(ws) """ Explanation: $\alpha = \sup A$ End of explanation """
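The stability condition explored above can also be checked with a dependency-free sketch. Below, a one-parameter least-squares problem is solved both in closed form and by gradient descent. Note that this sketch uses the standard sign convention (w is updated as w - eta*grad), under which the stable step sizes are 0 < eta < N/||x||^2 for L(w) = (1/N) * sum((w*x_i - y_i)^2); the notebook's negative-alpha convention is the mirror image of this.

```python
def descend(xs, ys, eta, steps=500, w=0.0):
    """One-parameter least squares y ~ w*x by gradient descent."""
    n = len(xs)
    for _ in range(steps):
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= eta * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # exactly y = 2x
w_closed = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
w_gd = descend(xs, ys, eta=0.05)   # 0.05 < N/||x||^2 = 4/30, so this converges
print(round(w_closed, 6), round(w_gd, 6))  # -> 2.0 2.0
```

Choosing eta above the threshold turns each update into a non-contracting affine map and the iterates diverge, mirroring the divergent regimes plotted above.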
jmlon/PythonTutorials
scipy/scipyOptimize.ipynb
gpl-3.0
from scipy.optimize import minimize import numpy as np """ Explanation: scipy: Optimization First we import the optimization and numpy modules End of explanation """ # Some constants a=1 b=2 # The function to optimize def parabola(x): return (x[0]-a)**2+(x[1]-b)**2 # An initial point to start the process x0 = np.array([0.,0.]) # Minimization with the 'nelder-mead' method results = minimize(parabola, x0, method='nelder-mead', options={'xtol': 1e-8, 'disp': True}) results # Ok, but we only want the solution point results.x """ Explanation: There are many optimization (minimization/maximization) methods. The optimize module implements several of them; they are all invoked through the minimize function, but the parameters can vary depending on the optimization method. A translated parabola in $R^2$ End of explanation """ # Translation xt = np.array([ 1.,2.,3.,4.,5. ]) # The function to optimize def parabolaN(x): return ((x-xt)**2).sum() """ Explanation: Generalization: a parabola in $R^n$ End of explanation """ # An initial point to start the process x0 = np.zeros( 5 ) # Minimization with the 'nelder-mead' method results = minimize(parabolaN, x0, method='nelder-mead', options={'xtol': 1e-6, 'disp': True}) results.x """ Explanation: We define an initial point and invoke the minimization function. Having the derivatives of the objective function opens up the possibility of using methods that converge more quickly.
For example, the gradient (or Jacobian) of the parabola in $R^n$ is given by the function: End of explanation """ # Compute the Jacobian, the Nx1 matrix of first derivatives of the function def jacobian(x): return 2*(x-xt) """ Explanation: and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method can make use of the derivatives like this: End of explanation """ # Minimization with the 'BFGS' method, passing the Jacobian function results = minimize(parabolaN, x0, method='BFGS', jac=jacobian, options={'disp': True}) results.x """ Explanation: Moreover, if the second derivatives (the Hessian) are available, even more efficient methods such as Newton-Conjugate-Gradient can be used End of explanation """ # Compute the Hessian, the nxn matrix of second derivatives of the function def hessian(x): return np.diag( 2*np.ones(x.shape) ) # Minimization with the 'Newton-CG' method, passing the Jacobian and Hessian functions results = minimize(parabolaN, x0, method='Newton-CG', jac=jacobian, hess=hessian, options={'disp': True}) results.x """ Explanation: Curve fitting by least squares The least squares method is commonly used to determine the parameters of a model that best fit a set of experimental observations.
Suppose we have the following set of observations: End of explanation """ obs = np.array([ 2.52483651, 2.31425344, 2.6848971 , 2.70174083, 3.0463613 , 2.99765209, 3.44428727, 3.5561104 , 3.90656829, 4.04416544, 4.42440144, 4.84459361, 4.79804493, 4.84273557, 5.08542649, 5.0700754 , 4.79993924, 4.82872352, 4.75390864, 4.25474467, 4.42672152, 4.02190748, 3.91485327, 3.43295015, 3.43173336, 2.92309118, 3.10279758, 2.82707051, 2.23905861, 2.29927556, 2.46353113, 2.09561343, 2.22926611, 2.03924848, 1.96606807, 1.9339189 , 2.03639402, 2.10496801, 2.01822495, 2.10666009, 2.13145425, 2.10118512, 1.95166921, 1.7079767 , 2.05553464, 1.83273161, 1.88716236, 1.8912594 , 2.07873222, 1.96578077]) """ Explanation: which correspond to 50 points in time between 0 and 5: End of explanation """ t = np.linspace(0, 5, 50) # 50 evenly spaced samples """ Explanation: The plot of the observations is: End of explanation """ import matplotlib.pyplot as plt plt.plot(t, obs, 'xb') plt.xlabel('time') plt.ylabel('obs') plt.show() """ Explanation: and that the model describing the phenomenon has the form: $$ y(t) = A + B \exp\left\{ - \left( \frac{t-\mu}{\sigma} \right)^2 \right\} $$ where A, B, $\mu$, $\sigma$ are the model parameters and $t$ is the independent variable. Let coeffs[] be the vector of model parameters. Then the function describing the model is: End of explanation """ def modelo(t, coeffs): return coeffs[0]+coeffs[1]*np.exp(-((t-coeffs[2])/coeffs[3])**2) """ Explanation: The residual error is the difference between the observations and the value predicted by the model. The least squares method seeks to minimize the sum of the squares of the residual error over all observations.
Therefore we must define a function that computes the residual error: End of explanation """ def residuals(coeffs,obs,t): return obs-modelo(t,coeffs) """ Explanation: And with the model, the observations, and the residual error we can find the model coefficients that minimize the squared error: End of explanation """ # Use the least squares function from the scipy module from scipy.optimize import leastsq # Define a starting point for the minimization process x0 = np.ones(4) # Invoke the least squares fitting algorithm x, flag = leastsq(residuals, x0, args=(obs, t)) x """ Explanation: Once the coefficients are known, the model $y(t)$ can be evaluated and the observations plotted against the fitted model End of explanation """ y = modelo(t, x) plt.plot(t, obs, 'xb', t, y, '-r') plt.xlabel('time') plt.ylabel('y(t)') plt.title('Best fit curve') plt.show() """ Explanation: End of explanation """
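When an analytic gradient such as the `jacobian` function defined above is supplied to `minimize`, it is worth sanity-checking it against finite differences (scipy.optimize also ships a `check_grad` helper for this). Here is a dependency-free sketch of the same idea on the translated-parabola example; the function and point names are illustrative:

```python
def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient of f at point x (a list of floats)."""
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xm = list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

xt = [1.0, 2.0, 3.0]  # translation vector, as in the notebook

def parabola_n(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, xt))

def analytic_grad(x):
    return [2 * (xi - ti) for xi, ti in zip(x, xt)]

point = [0.0, 0.0, 0.0]
num = fd_gradient(parabola_n, point)
ana = analytic_grad(point)
print(all(abs(a - b) < 1e-4 for a, b in zip(num, ana)))  # -> True
```

If the two gradients disagree beyond a small tolerance, the analytic derivative is almost certainly wrong, and gradient-based methods like BFGS or Newton-CG will converge slowly or to the wrong point.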
ajul/zerosum
python/examples/super_street_fighter_2_turbo.ipynb
bsd-3-clause
import _initpath import numpy import dataset.matchup import dataset.csv import zerosum.balance from pandas import DataFrame # Balances a Super Street Fighter 2 Turbo matchup chart using a logistic handicap. # Produces a .csv file for the initial game and the resulting game. init = dataset.matchup.ssf2t.sorted_by_sum() dataset.csv.write_csv('out/ssf2t_init.csv', init.data, init.row_names, numeric_format = '%0.4f') balance = zerosum.balance.LogisticSymmetricBalance(init.data) opt = balance.optimize() dataset.csv.write_csv('out/ssf2t_opt.csv', opt.F, init.row_names, numeric_format = '%0.4f') """ Explanation: Super Street Fighter 2 Turbo example This example applies a logistic handicap to a Super Street Fighter 2 Turbo matchup chart. End of explanation """ DataFrame(data = init.data, index = init.row_names, columns = init.col_names) """ Explanation: Initial matchup chart End of explanation """ DataFrame(data = opt.F, index = init.row_names, columns = init.col_names) """ Explanation: Matchup chart after balancing with a logistic handicap End of explanation """
saturn77/CythonBootstrap
.ipynb_checkpoints/memoize-checkpoint.ipynb
gpl-2.0
from __future__ import print_function """ Explanation: This is based on the previous dojo-20150131-memoization notebook. End of explanation """ def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) """ Explanation: We start with a function that calculates fibonacci numbers. Its code is very much like the definition of Fibonacci numbers. End of explanation """ [fib(i) for i in range(10)] """ Explanation: It is correct. Check the output in the following cell. End of explanation """ n = 33 %timeit fib(n) fib(n) """ Explanation: However, it takes an exponential amount of time to calculate fibonacci numbers. This is not so bad for small values, but exponential things become big quickly. End of explanation """ from collections import Counter def fib(n): global c c[n] += 1 if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) c = Counter() fib(n) for i in sorted(c, reverse=True): print(i, c[i]) print() print(sum(c.values())) """ Explanation: The problem is that the naïve function recalculates Fibonacci numbers. Some code is added to keep track of how many times the function is called for each input value. End of explanation """ for item in sorted(c.items(), key=(lambda x: x[0]), reverse=True): print(item) print() print(sum(c.values())) """ Explanation: No wonder it is so slow. By the way, do you see a pattern in the numbers? By the way, here's another way of sorting and iterating through the counts. End of explanation """ for item in sorted(c.items(), key=(lambda x: x[0]), reverse=True): print(item[0], item[1]) print() print(sum(c.values())) """ Explanation: The above output has parentheses, because item is a tuple. We can avoid the parentheses, by explicitly indexing the elements of the tuple as in the next cell. 
End of explanation """ for item in sorted(c.items(), key=(lambda x: x[0]), reverse=True): print(*item) print() print(sum(c.values())) """ Explanation: Another example of how to avoid the parentheses, without having to explicitly index each element of the tuple, is to use a leading '*' to unpack the tuple into separate arguments to the print function. End of explanation """ def fib(n): global c global cache c[n] += 1 if n in cache: return cache[n] if n == 0: f = 0 elif n == 1: f = 1 else: f = fib(n-1) + fib(n-2) cache[n] = f return f # just to make them global c = None cache = None def foo(n): global c global cache c = Counter() cache = {} fib(n) %timeit foo(n) """ Explanation: We can make the function quicker, by calculating each Fibonacci number only once. When a previously calculated value is needed, we just return that previously calculated number. End of explanation """ c = Counter() cache = {} fib(n) for i in sorted(c.keys(), reverse=True): print(i, cache[i], c[i]) print() print(sum(c.values())) """ Explanation: That is over 40,000 times faster than the naïve version. Let's look at the details. End of explanation """ def fib(n): global cache if n in cache: return cache[n] if n == 0: f = 0 elif n == 1: f = 1 else: f = fib(n-1) + fib(n-2) cache[n] = f return f def foo(n): global cache cache = {} fib(n) %timeit foo(n) """ Explanation: Now let's clean up the code. First, let's remove the counting stuff, since we know the answer for that now. End of explanation """ def memoize(f): results = {} def helper(x): if x not in results: results[x] = f(x) return results[x] return helper def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) fib = memoize(fib) def foo(n): def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) fib = memoize(fib) %timeit -n 1 fib(n) for i in range(20): foo(n) """ Explanation: Taking out the diagnostic counting code made it faster yet. 
The caching code makes the fib() function ugly and I don't like needing to remember to initialize the cache, so let's separate the caching into a separate function. Note that memoize() doesn't know anything about the fib() function, except that it takes a single argument. memoize() can be applied to many single argument functions. End of explanation """ %timeit fib(n) """ Explanation: That yielded yet another big speed increase. It's now over a million times faster than the original function. End of explanation """ @memoize def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) %timeit fib(n) """ Explanation: That's faster yet, but perhaps deceptively so. %timeit did fib(n) many times, but initialized the results cache only once. Next, let's use the syntactic sugar of a function decorator to eliminate the fib = memoize(fib) statement. The @memoize decorator can be used for many single argument functions. End of explanation """ # By the way, this computer is modest. !lscpu """ Explanation: That yielded about the same speed. The code is now both simple and fast. How would one measure the speed of starting with an empty cache each time? Review What are the limitations of memoize() and @memoize? Only works for functions that accept a single argument. The function argument must be acceptable as a dictionary key. Only works for functions, not generators. Size of results dictionary might become too large. How would you work around those limitations? Only works for functions that accept a single argument. Try args and *kwargs stuff. The function argument(s) must be acceptable as a dictionary key. Might try a partial solution. Save the result when the input can be used as a dictionary key, otherwise, just calculate the result. Only works for functions, not generators. Could use cache internal to generator (but would lack elegant general wrapper function or decorator like memoize). Size of results dictionary might become too large. 
Might limit the size of the dictionary. Beyond that, just calculate the result. Might also save only the results for the most common input, or the results which are the most "expensive" to calculate. Might have multi-stage caching, where first cache is dictionary within Python, and secondary or tertiary caches are saved outside of Python, such as in a file, a database, or a server. End of explanation """ def memoize(f): results = {} def helper(x): results[x] = results.get(x, f(x)) return results[x] return helper @memoize def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) %timeit fib(n) fib(n) """ Explanation: After the 2015-02-23 CohPy presentation of the above, it was suggested to use the .get() method of dictionaries to avoid the if statement inside helper(). The point is to make the code simpler and more elegant. Unfortunately, the .get() method would not change the dictionary, so the return value from the .get() method would have to be saved. That new memoize function would look like the following. It does succeed in getting rid the if statement, but the 'results[x] = results' looks ugly. Even worse, f(x) is always called, defeating the memoization. End of explanation """ def memoize(f): results = {} def helper(x): return results.setdefault(x, f(x)) return helper @memoize def fib(n): if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) %timeit fib(n) fib(n) """ Explanation: Using the .setdefault() method instead of the .get() method looks better, but suffers the same flaw of defeating the memoization by always calling f(x). End of explanation """
metpy/MetPy
v0.5/_downloads/Point_Interpolation.ipynb
bsd-3-clause
import cartopy import cartopy.crs as ccrs from matplotlib.colors import BoundaryNorm import matplotlib.pyplot as plt import numpy as np from metpy.cbook import get_test_data from metpy.gridding.gridding_functions import (interpolate, remove_nan_observations, remove_repeat_coordinates) def basic_map(map_proj): """Make our basic default map for plotting""" fig = plt.figure(figsize=(15, 10)) view = fig.add_axes([0, 0, 1, 1], projection=to_proj) view.set_extent([-120, -70, 20, 50]) view.add_feature(cartopy.feature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lakes', scale='50m', facecolor='none')) view.add_feature(cartopy.feature.OCEAN) view.add_feature(cartopy.feature.COASTLINE) view.add_feature(cartopy.feature.BORDERS, linestyle=':') return view def station_test_data(variable_names, proj_from=None, proj_to=None): f = get_test_data('station_data.txt') all_data = np.loadtxt(f, skiprows=1, delimiter=',', usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19), dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'), ('slp', 'f'), ('air_temperature', 'f'), ('cloud_fraction', 'f'), ('dewpoint', 'f'), ('weather', '16S'), ('wind_dir', 'f'), ('wind_speed', 'f')])) all_stids = [s.decode('ascii') for s in all_data['stid']] data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids]) value = data[variable_names] lon = data['lon'] lat = data['lat'] if proj_from is not None and proj_to is not None: try: proj_points = proj_to.transform_points(proj_from, lon, lat) return proj_points[:, 0], proj_points[:, 1], value except Exception as e: print(e) return None return lon, lat, value from_proj = ccrs.Geodetic() to_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000) levels = list(range(-20, 20, 1)) cmap = plt.get_cmap('magma') norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True) x, y, temp = station_test_data('air_temperature', from_proj, to_proj) x, y, temp = remove_nan_observations(x, y, temp) x, y, temp = 
remove_repeat_coordinates(x, y, temp) """ Explanation: Point Interpolation Compares different point interpolation approaches. End of explanation """ gx, gy, img = interpolate(x, y, temp, interp_type='linear', hres=75000) img = np.ma.masked_where(np.isnan(img), img) view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) plt.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) """ Explanation: Scipy.interpolate linear End of explanation """ gx, gy, img = interpolate(x, y, temp, interp_type='natural_neighbor', hres=75000) img = np.ma.masked_where(np.isnan(img), img) view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) plt.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) """ Explanation: Natural neighbor interpolation (MetPy implementation) Reference &lt;https://github.com/Unidata/MetPy/files/138653/cwp-657.pdf&gt;_ End of explanation """ gx, gy, img = interpolate(x, y, temp, interp_type='cressman', minimum_neighbors=1, hres=75000, search_radius=100000) img = np.ma.masked_where(np.isnan(img), img) view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) plt.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) """ Explanation: Cressman interpolation search_radius = 100 km grid resolution = 25 km min_neighbors = 1 End of explanation """ gx, gy, img1 = interpolate(x, y, temp, interp_type='barnes', hres=75000, search_radius=100000) img1 = np.ma.masked_where(np.isnan(img1), img1) view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm) plt.colorbar(mmb, shrink=.4, pad=0, boundaries=levels) """ Explanation: Barnes Interpolation search_radius = 100km min_neighbors = 3 End of explanation """ gx, gy, img = interpolate(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear', rbf_smooth=0) img = np.ma.masked_where(np.isnan(img), img) view = basic_map(to_proj) mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm) plt.colorbar(mmb, shrink=.4, pad=0, 
boundaries=levels) plt.show() """ Explanation: Radial basis function interpolation linear End of explanation """
bjshaw/phys202-2015-work
assignments/assignment09/IntegrationEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy import integrate """ Explanation: Integration Exercise 1 Imports End of explanation """ def trapz(f, a, b, N): """Integrate the function f(x) over the range [a,b] with N points.""" h = (b-a)/N k = np.arange(1,N) I = h * (0.5*f(a)+0.5*f(b)+np.sum(f(a+k*h))) return I f = lambda x: x**2 g = lambda x: np.sin(x) I = trapz(f, 0, 1, 1000) assert np.allclose(I, 0.33333349999999995) J = trapz(g, 0, np.pi, 1000) assert np.allclose(J, 1.9999983550656628) """ Explanation: Trapezoidal rule The trapezoidal rule generates a numerical approximation to the 1d integral: $$ I(a,b) = \int_a^b f(x) dx $$ by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$: $$ h = (b-a)/N $$ Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points. Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points). End of explanation """ If, errf = integrate.quad(f,0,1) print("Integral:",If) print("Error:",errf) Ig, errg = integrate.quad(g,0,np.pi) print("Integral:",Ig) print("Error:",errg) """ Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. End of explanation """ assert True # leave this cell to grade the previous one """ Explanation: Results are closer to the actual values using the scipy.integrate.quad function rather than the trapz function End of explanation """
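The vectorized `trapz` above can also be written without NumPy; this dependency-free sketch applies the same composite rule, h*(f(a)/2 + f(b)/2 + sum of f(a + k*h) for interior points), and reproduces the two test values from the exercise:

```python
import math

def trapz_py(f, a, b, n):
    """Composite trapezoidal rule with n subdivisions (n + 1 sample points)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

print(abs(trapz_py(lambda x: x ** 2, 0.0, 1.0, 1000) - 1.0 / 3.0) < 1e-6)        # -> True
print(abs(trapz_py(math.sin, 0.0, math.pi, 1000) - 2.0) < 1e-4)                  # -> True
```

The error of the composite rule shrinks like O(h^2), which is why 1000 subdivisions already agree with the exact integrals to six or more digits here.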
recrm/Udebs
tutorial.ipynb
mit
import udebs game_config = """ <udebs> <config> <logging>True</logging> </config> <entities> <xplayer /> <oplayer /> </entities> </udebs> """ game = udebs.battleStart(game_config) """ Explanation: Udebs -- A discrete game analysis engine for python Udebs is a game engine that reads in rules from an xml configuration and enforces those rules in a single python class. The engine is useful for a number of purposes. It allows a programmer to implement a game by focusing on its rules and not how they should be enforced. It allows other programs to explore the state spaces of a game without worrying about entering illegal states. It allows easy modifications to the rules of a game without worrying about breaking the entire game. So let's work through an example by building a game of tic tac toe and see what udebs can do. End of explanation """ game_config = """ <udebs> <config> <name>tictactoe</name> </config> <map> <dim> <x>3</x> <y>3</y> </dim> </map> <definitions> <strings> <token /> </strings> </definitions> <entities> <x /> <o /> <xplayer> <token>x</token> </xplayer> <oplayer> <token>o</token> </oplayer> <placement> <effect>($caster STAT token) RECRUIT $target</effect> </placement> </entities> </udebs> """ game = udebs.battleStart(game_config) game.castMove("xplayer", (1,1), "placement") game.castMove("oplayer", (0,0), "placement") game.printMap() """ Explanation: The above snippet of code is a minimal example of how to initiate a udebs game instance. We have created a game that contains two objects: the xplayer and the oplayer. Unfortunately, neither of these objects can do anything yet, but we will fix that soon enough. As well, it's important to note that by default udebs logs every action the game engine takes. We can turn that off by setting logging to False in the configuration. Now let's actually build out a playable game.
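A udebs config is ordinary XML, so it can be inspected with the standard library before handing it to the engine. A small sketch (not part of the udebs API) that pulls the entity names out of the minimal config above:

```python
import xml.etree.ElementTree as ET

game_config = """
<udebs>
  <config>
    <logging>True</logging>
  </config>
  <entities>
    <xplayer />
    <oplayer />
  </entities>
</udebs>
"""

root = ET.fromstring(game_config)
entity_names = [child.tag for child in root.find('entities')]
print(entity_names)  # -> ['xplayer', 'oplayer']
```

The same approach works for the larger configs below, which is handy for validating a modified ruleset before starting a battle.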
Game One: Actions End of explanation """ game_config = """ <udebs> <config> <name>tictactoe</name> </config> <map> <dim> <x>3</x> <y>3</y> </dim> </map> <definitions> <strings> <token /> </strings> <stats> <act /> </stats> </definitions> <entities> <!-- tokens --> <x /> <o /> <!-- players --> <players /> <xplayer> <group>players</group> <token>x</token> <act>2</act> </xplayer> <oplayer> <group>players</group> <token>o</token> <act>1</act> </oplayer> <!-- actions --> <placement> <require> <i>($target NAME) == empty</i> <i>($caster STAT act) >= 2</i> </require> <effect> <i>($caster STAT token) RECRUIT $target</i> <i>$caster act -= 2</i> </effect> </placement> <tick> <effect>(ALL players) act += 1</effect> </tick> </entities> </udebs> """ game = udebs.battleStart(game_config) game.castMove("xplayer", (1,1), "placement") game.castMove("xplayer", (0,0), "placement") game.castMove("xplayer", (1,1), "placement") game.printMap() """ Explanation: So now we have added a few important elements to our game. The first is that we have defined a board that the game can be played on. The "map" attribute defines a 3 x 3 square grid for our game. Alternatively, we could define a hex grid by setting the type attribute on the map tag (\<map type="hex">). Secondly, we created an action that the players can perform: placement. An action is a udebs object that has an 'effect' attribute. Actions are usually initiated by another udebs entity onto a third one. The castMove method is the primary way that actions are performed. This method takes three arguments, [ caster target action ]. The caster and target are stored in the caster and target variables respectively and can be accessed in an action's effect. (udebs has two other methods for initiating actions. castInit and castAction. CastInit is used when the action just activates and there is no caster or target. CastAction is useful when there is a caster but no target.) 
Finally, we have also defined an attribute on the player objects. This attribute is a string that is simply a reference to another udebs object. In this case it is the token that each player places on the board. Game Two: Time and requirements Our game still has a bunch of problems. We currently do not enforce turn order, there is nothing stopping a player from playing in a non-empty square, and we have no way of knowing when the game is finished and who won. End of explanation """ game = udebs.battleStart(game_config) game.castMove("xplayer", (1,1), "placement") game.controlTime() game.castMove("oplayer", (0,0), "placement") game.controlTime() game.castMove("xplayer", (0,1), "placement") game.printMap() """ Explanation: To force turn order and to prevent playing in non-empty squares we need to introduce the concept of a requirement. A requirement is a condition that must be true for the action to trigger. If the requirements are not met, udebs will treat it as an illegal action and refuse to trigger the action. In this case we have defined a second attribute "act" that udebs will track. It is a numerical value or "stat". Then we added a requirement to our placement action saying that a player must have an act value of at least two in order to activate. Likewise we have also added a requirement that the placement be in an empty square. This will prevent a player from placing in a spot that has already been played in. Note: the \<i> tags are useful in effects and requirements when more than one action must be taken. As shown, the xplayer tries to play twice in a row. Since the player does not have enough act to move twice, udebs refuses to perform the second action. In the third action the player tried to play in a square that already had been played in. Udebs also refused to act on this action. We must also create a method for increasing a player's act after every play. To do this we will use udebs' built-in timer. 
We defined a new action called tick which is a special action that is triggered every time the in-game timer increments. This action will increment the act of every object in the group "players". To trigger the in-game timer we must simply use the udebs method controlTime. End of explanation """ game_config = """ <udebs> <config> <name>tictactoe</name> <immutable>True</immutable> </config> <map> <dim> <x>3</x> <y>3</y> </dim> </map> <definitions> <strings> <token /> </strings> <stats> <act /> </stats> </definitions> <entities> <!-- tokens --> <x /> <o /> <!-- players --> <players /> <xplayer immutable="False"> <group>players</group> <token>x</token> <act>2</act> </xplayer> <oplayer immutable="False"> <group>players</group> <token>o</token> <act>1</act> </oplayer> <!-- actions --> <force_order> <require>($target NAME) == empty</require> <effect>($caster STAT token) RECRUIT $target</effect> </force_order> <placement> <group>force_order</group> <require>($target NAME) == empty</require> <effect>($caster STAT token) RECRUIT $target</effect> </placement> <tick> <effect>(ALL players) act += 1</effect> </tick> </entities> </udebs> """ game = udebs.battleStart(game_config) game.castMove("xplayer", (1,1), "placement") game.controlTime() game.castMove("oplayer", (0,0), "placement") game.controlTime() game.castMove("xplayer", (0,1), "placement") game.printMap() """ Explanation: Game Three: Inheritance and Immutability Before we talk about detecting the end of the game, let's talk a little more about engine details. 
End of explanation """ udebs.register("self") def ENDSTATE(state): def rows(gameMap): """Iterate over possible win conditions in game map.""" size = len(gameMap) for i in range(size): yield gameMap[i] yield [j[i] for j in gameMap] yield [gameMap[i][i] for i in range(size)] yield [gameMap[size - 1 - i][i] for i in range(size)] # Check for a win tie = True for i in rows(state.getMap().map): value = set(i) if "empty" in value: tie = False elif len(value) == 1: if i[0] == "x": return 1 elif i[0] == "o": return -1 if tie: return 0 game_config = """ <udebs> <config> <name>tictactoe</name> <immutable>True</immutable> </config> <map> <dim> <x>3</x> <y>3</y> </dim> </map> <definitions> <strings> <token /> </strings> <stats> <act /> </stats> </definitions> <entities> <!-- tokens --> <x /> <o /> <!-- players --> <players /> <xplayer immutable="False"> <group>players</group> <token>x</token> <act>2</act> </xplayer> <oplayer immutable="False"> <group>players</group> <token>o</token> <act>1</act> </oplayer> <!-- actions --> <force_order> <require>($target NAME) == empty</require> <effect>($caster STAT token) RECRUIT $target</effect> </force_order> <placement> <group>force_order</group> <require>($target NAME) == empty</require> <effect>($caster STAT token) RECRUIT $target</effect> </placement> <tick> <effect> <i>(ALL players) act += 1</i> <i>INIT end</i> </effect> </tick> <end> <require> <i>score = (ENDSTATE)</i> <i>$score != None</i> </require> <effect>EXIT $score</effect> </end> </entities> </udebs> """ game = udebs.battleStart(game_config) game.castMove("xplayer", (1,1), "placement") game.controlTime() game.castMove("oplayer", (0,0), "placement") game.controlTime() game.castMove("xplayer", (0,1), "placement") game.controlTime() game.castMove("oplayer", (0,2), "placement") game.controlTime() game.castMove("xplayer", (2,1), "placement") game.controlTime() game.printMap() """ Explanation: Some quick notes: By default udebs assumes that any object could hold some information 
about the current game space. So when we used placement udebs created a copy of the x tile and placed it in the map. We can change this behaviour by explicitly telling udebs that the x and o tiles will never hold gamestate by creating them as immutable objects. The default assumption for immutability can be set in the udebs configuration block. Individual entities can be set using a tag when defining the entity. In our case, the only objects that hold state are the player objects. So we set all objects to immutable by default and explicitly set the player objects to mutable. This allows udebs to stop creating copies of the x and o tiles every time we place them. In this example the only effect is that the printMap method stops showing numbers next to the tiles. However, for treesearch and other more intense processes the speedup can be considerable. Secondly: Udebs objects inherit properties from their group. So if we wanted to create several actions that would exhaust a player's turn, they can all inherit from the force_order object instead of writing the same effects and requires constantly in them all. Game Four: Game Loop and Detecting Completion End of explanation """
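The win/tie scan inside ENDSTATE above is plain Python and can be exercised without udebs at all; here is a standalone sketch of the same row/column/diagonal logic (the sample boards are made up for illustration):

```python
def rows(game_map):
    """Yield every line that can decide the game: rows, columns, diagonals."""
    size = len(game_map)
    for i in range(size):
        yield game_map[i]                      # row i
        yield [row[i] for row in game_map]     # column i
    yield [game_map[i][i] for i in range(size)]             # main diagonal
    yield [game_map[size - 1 - i][i] for i in range(size)]  # anti-diagonal

def endstate(game_map):
    """Return 1 for an x win, -1 for an o win, 0 for a tie, None if ongoing."""
    tie = True
    for line in rows(game_map):
        values = set(line)
        if "empty" in values:
            tie = False
        elif len(values) == 1:
            if line[0] == "x":
                return 1
            elif line[0] == "o":
                return -1
    return 0 if tie else None

x_wins = [["x", "o", "o"], ["empty", "x", "empty"], ["o", "empty", "x"]]
ongoing = [["x", "empty", "empty"], ["empty", "o", "empty"], ["empty", "empty", "empty"]]
tie_board = [["x", "o", "x"], ["x", "o", "o"], ["o", "x", "x"]]
```

Testing this scan in isolation is useful before wiring it into the end action, since a wrong score silently corrupts any tree search built on top of the engine.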
EvanBianco/striplog
tutorial/Well_object.ipynb
apache-2.0
from striplog import Well print(Well.__doc__) fname = 'P-129_out.LAS' well = Well(fname) well.data['GR'] well.well.DATE.data """ Explanation: Make a well End of explanation """ from striplog import Striplog, Legend legend = Legend.default() f = 'P-129_280_1935.png' name, start, stop = f.strip('.png').split('_') striplog = Striplog.from_img(f, float(start), float(stop), legend=legend, tolerance=35) %matplotlib inline striplog.plot(legend, ladder=True, interval=(5,50), aspect=5) well.add_striplog(striplog, "striplog") """ Explanation: Make a striplog and add it to the well End of explanation """ well.striplog well.striplog.striplog.source well.striplog.striplog.start """ Explanation: Striplogs are added as a dictionary, but you can access them via attributes too: End of explanation """ import xlrd xls = "_Cuttings.xlsx" book = xlrd.open_workbook(xls) sh = book.sheet_by_name("P-129") tops = [c.value for c in sh.col_slice(0, 4)] bases = [c.value for c in sh.col_slice(1, 4)] descr = [c.value for c in sh.col_slice(3, 4)] rows = [i for i in zip(tops, bases, descr)] rows[:5] from striplog import Lexicon lexicon = Lexicon.default() cuttings = Striplog.from_array(rows, lexicon) cuttings cuttings.plot(interval=(5,50), aspect=5) print(cuttings[:5]) well.add_striplog(cuttings, "cuttings") well.striplog.cuttings[3:5] """ Explanation: Make another striplog and add it We can also make striplogs from cuttings data, for example. So let's read cuttings data from an Excel spreadsheet and add another striplog to the well. 
End of explanation """ print(well.striplogs_to_las3(use_descriptions=True)) """ Explanation: Export to LAS 3.0 End of explanation """ fname = 'P-129_striplog_from_image.las' p129 = Well(fname, lexicon=lexicon, unknown_as_other=True) p129.striplog p129.striplog.lithology.plot(legend, interval=(10,50)) """ Explanation: Reading from LAS files End of explanation """ import matplotlib.pyplot as plt """ Explanation: Plotting with logs End of explanation """ z = well.data['DEPT'] log = well.data['GR'] lineweight = 0.5 plot_min = 0 plot_max = 200 # Set up the figure. fig = plt.figure(figsize=(4,16)) # Plot into the figure. # First, the lith log, the full width of the log. ax = fig.add_subplot(111) well.striplog.striplog.plot_axis(ax, legend, default_width=plot_max) # Plot the GR with a white fill to fake the curve fill. ax.plot(log, z, color='k', lw=lineweight) ax.fill_betweenx(z, log, plot_max, color='w', zorder = 2) # Limit axes. ax.set_xlim(plot_min, plot_max) ax.set_ylim(z[-1], 0) # Show the figure. #plt.savefig('/home/matt/filled_log.png') plt.show() """ Explanation: Now we have all the well data in one place, we should be able to make some nice plots. For example, use the striplog in a log plot, or use the striplog to define categories for colouring a cross-plot. First, let's try filling an ordinary curve – the GR log – with lithology colours. A classic workaround for this is to plot the striplog, then plot the GR log, then fill the GR log with white to mask the striplog. 
End of explanation """ z, lith = well.striplog.striplog.to_log(start=well.start, stop=well.stop, step=well.step, legend=legend) import matplotlib.pyplot as plt fig = plt.figure(figsize=(4, 10)) ax = fig.add_subplot(121) ax.plot(lith, z) ax.set_ylim(z[-1], 0) ax.get_yaxis().set_tick_params(direction='out') ax2 = fig.add_subplot(122) striplog.plot_axis(ax2, legend=legend) ax2.set_ylim(z[-1], 0) ax2.get_xaxis().set_ticks([]) ax2.get_yaxis().set_ticks([]) #plt.savefig('/home/matt/discretized.png') plt.show() """ Explanation: Crossplots Now let's try a cross-plot. To facilitate this, generate a new log with the striplog.to_log() method: End of explanation """ import matplotlib.colors as clr cmap = clr.ListedColormap([i.colour for i in legend]) plt.figure(figsize=(10,7)) plt.scatter(well.data['GR'], well.data['DT'], c=lith, edgecolors='none', alpha=0.8, cmap=cmap, vmin=1) plt.xlim(0, 200); plt.ylim(0,150) plt.xlabel('GR'); plt.ylabel('DT') plt.grid() ticks = [int(i) for i in list(set(lith))] ix = [int(i)-1 for i in list(set(lith)) if i] labels = [i.component.summary() for i in legend[ix]] cbar = plt.colorbar() cbar.set_ticks(ticks) cbar.set_ticklabels(labels) plt.show() """ Explanation: Next step: make a colourmap from the legend. This could be a method of the legend, but it's so easy it hardly seems worth the trouble. End of explanation """ from matplotlib.patches import Rectangle # Start the plot. fig = plt.figure(figsize=(12,10)) # Crossplot. ax = fig.add_axes([0.15,0.15,0.6,0.6]) ax.scatter(well.data['GR'], well.data['DT'], c=lith, edgecolors='none', alpha=0.67, cmap=cmap, vmin=1) ax.set_xlim(0, 200); plt.ylim(0,150) ax.set_xlabel('GR $API$'); ax.set_ylabel(r'DT $\mu s/m$') ax.grid() # Draw the legend. 
axc = fig.add_axes([0.775,0.2,0.15,0.5]) i = 0 for d in legend: if i+1 in lith: tcolour = 'k' talpha = 1.0 tstyle = 'medium' else: tcolour = 'k' talpha = 0.25 tstyle = 'normal' rect = Rectangle((0, i), 1, 1, color=d.colour, alpha=0.67 ) axc.add_patch(rect) text = axc.text(0.5, 0.5+i, d.component.summary(default="Unassigned"), color=tcolour, ha='center', va='center', fontsize=10, weight=tstyle, alpha=talpha) i += 1 axc.set_xlim(0,1); axc.set_ylim(len(legend), 0) axc.set_xticks([]); axc.set_yticks([]) axc.set_title('Lithology', fontsize=12) # Finish. #plt.savefig('/home/matt/crossplot.png') plt.show() """ Explanation: We can get a bit fancier still: End of explanation """
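The striplog.to_log() step used above (turning a set of depth intervals into a regularly sampled curve) can be mimicked in plain NumPy. A sketch with made-up interval tops, bases and lithology codes, not the real P-129 data:

```python
import numpy as np

def intervals_to_log(intervals, start, stop, step, fill=0):
    """Sample (top, base, code) intervals onto a regular depth grid,
    in the spirit of striplog's to_log()."""
    z = np.arange(start, stop, step)
    log = np.full(z.shape, fill, dtype=int)
    for top, base, code in intervals:
        log[(z >= top) & (z < base)] = code  # boolean-mask assignment
    return z, log

# Hypothetical lithology codes: 1 = sandstone, 2 = shale.
intervals = [(280.0, 299.0, 1), (299.0, 351.0, 2)]
z, lith_log = intervals_to_log(intervals, 280.0, 351.0, 1.0)
```

Each depth sample picks up the code of the interval that contains it, which is exactly the kind of discretized "blocky" curve plotted next to the striplog above, and the same integer codes can drive a ListedColormap in a crossplot.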
duncanwp/python_for_climate_scientists
course_content/notebooks/numpy_intro.ipynb
gpl-3.0
import numpy as np """ Explanation: An introduction to NumPy NumPy provides an efficient representation of multidimensional datasets like vectors and matrices, and tools for linear algebra and general matrix manipulations - essential building blocks of virtually all technical computing. Typically NumPy is imported as np: End of explanation """ lst = [10, 20, 30, 40] arr = np.array([10, 20, 30, 40]) print(lst) print(arr) """ Explanation: NumPy, at its core, provides a powerful array object. Let's start by exploring how the NumPy array differs from a Python list. We start by creating a simple Python list and a NumPy array with identical contents: End of explanation """ print(lst[0], arr[0]) print(lst[-1], arr[-1]) print(lst[2:], arr[2:]) """ Explanation: Element indexing Elements of a one-dimensional array are accessed with the same syntax as a list: End of explanation """ lst[-1] = 'a string inside a list' lst """ Explanation: Differences between arrays and lists The first difference to note between lists and arrays is that arrays are homogeneous; i.e. all elements of an array must be of the same type. In contrast, lists can contain elements of arbitrary type. For example, we can change the last element in our list above to be a string: End of explanation """ arr[-1] = 'a string inside an array' """ Explanation: But the same cannot be done with an array, as we get an error message: End of explanation """ print('Data type :', arr.dtype) print('Total number of elements :', arr.size) print('Number of dimensions :', arr.ndim) print('Shape (dimensionality) :', arr.shape) print('Memory used (in bytes) :', arr.nbytes) """ Explanation: Caveat, it can be done, but really don't do it; lists are generally better at non-homogeneous collections. 
Array Properties and Methods The following provide basic information about the size, shape and data in the array: End of explanation """ print('Minimum and maximum :', arr.min(), arr.max()) print('Sum and product of all elements :', arr.sum(), arr.prod()) print('Mean and standard deviation :', arr.mean(), arr.std()) """ Explanation: Arrays also have many useful statistical/mathematical methods: End of explanation """ arr.dtype """ Explanation: Data types The information about the type of an array is contained in its dtype attribute. End of explanation """ arr[-1] = 1.234 arr """ Explanation: Once an array has been created, its dtype is fixed (in this case to an 8 byte/64 bit signed integer) and it can only store elements of the same type. For this example where the dtype is integer, if we try storing a floating point number in the array it will be automatically converted into an integer: End of explanation """ np.array(256, dtype=np.uint8) float_info = ('{finfo.dtype}: max={finfo.max:<18}, ' 'approx decimal precision={finfo.precision};') print(float_info.format(finfo=np.finfo(np.float32))) print(float_info.format(finfo=np.finfo(np.float64))) """ Explanation: NumPy comes with most of the common data types (and some uncommon ones too). The most used (and portable) dtypes are: bool uint8 int (machine dependent) int8 int32 int64 float (machine dependent) float32 float64 Full details can be found at http://docs.scipy.org/doc/numpy/user/basics.types.html. What are the limits of the common NumPy integer types? End of explanation """ np.array(1, dtype=np.uint8).astype(np.float32) """ Explanation: Floating point precision is covered in detail at http://en.wikipedia.org/wiki/Floating_point. However, we can convert an array from one type to another with the astype method End of explanation """ np.zeros(5, dtype=np.float) np.zeros(3, dtype=np.int) """ Explanation: Creating Arrays Above we created an array from an existing list. 
Now let's look into other ways in which we can create arrays. A common need is to have an array initialized with a constant value. Very often this value is 0 or 1. zeros creates arrays of all zeros, with any desired dtype: End of explanation """ print('5 ones:', np.ones(5, dtype=np.int)) """ Explanation: and similarly for ones: End of explanation """ a = np.empty(4, dtype=np.float) a.fill(5.5) a """ Explanation: If we want an array initialized with an arbitrary value, we can create an empty array and then use the fill method to put the value we want into the array: End of explanation """ np.arange(10, dtype=np.float64) np.arange(5, 7, 0.1) """ Explanation: Alternatives, such as: np.ones(4) * 5.5 np.zeros(4) + 5.5 are generally less efficient, but are also reasonable. Filling arrays with sequences NumPy also offers the arange function, which works like the builtin range but returns an array instead of a list: End of explanation """ print("A linear grid between 0 and 1:") print(np.linspace(0, 1, 5)) print("A logarithmic grid between 10**2 and 10**4:") print(np.logspace(2, 4, 3)) """ Explanation: The linspace and logspace functions create linearly and logarithmically-spaced grids respectively, with a fixed number of points that include both ends of the specified interval: End of explanation """ import numpy as np import numpy.random """ Explanation: Finally, it is often useful to create arrays with random numbers that follow a specific distribution. The np.random module contains a number of functions that can be used to this effect. For more details see http://docs.scipy.org/doc/numpy/reference/routines.random.html. Creating random arrays 
First, we must import it: End of explanation """ print(np.random.randn(5)) """ Explanation: To produce an array of 5 random samples taken from a standard normal distribution (0 mean and variance 1): End of explanation """ norm10 = np.random.normal(10, 3, 5) print(norm10) """ Explanation: For an array of 5 samples from the normal distribution with a mean of 10 and a standard deviation of 3: End of explanation """ mask = norm10 > 9 mask """ Explanation: Indexing with other arrays Above we saw how to index NumPy arrays with single numbers and slices, just like Python lists. Arrays also allow for a more sophisticated kind of indexing that is very powerful: you can index an array with another array, and in particular with an array of boolean values. This is particularly useful to extract information from an array that matches a certain condition. Consider for example that in the array norm10 we want to replace all values above 9 with the value 0. We can do so by first finding the mask that indicates where this condition is true or false: End of explanation """ print('Values above 9:', norm10[mask]) print('Resetting all values above 9 to 0...') norm10[mask] = 0 print(norm10) """ Explanation: Now that we have this mask, we can use it to either read those values or to reset them to 0: End of explanation """ lst2 = [[1, 2, 3], [4, 5, 6]] arr2 = np.array([[1, 2, 3], [4, 5, 6]]) print(arr2) print(arr2.shape) """ Explanation: Whilst beyond the scope of this course, it is also worth knowing that a specific masked array object exists in NumPy. Further details are available at http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html Arrays with more than one dimension Up until now all our examples have used one-dimensional arrays. NumPy can also create arrays of arbitrary dimensions, and all the methods illustrated in the previous section work on arrays with more than one dimension. 
A list of lists can be used to initialize a two dimensional array: End of explanation """ print(lst2[0][1]) print(arr2[0, 1]) """ Explanation: With two-dimensional arrays we start seeing the power of NumPy: while nested lists can be indexed by repeatedly using the [ ] operator, multidimensional arrays support a much more natural indexing syntax using a single [ ] and a set of indices separated by commas: End of explanation """ print(lst2[0:2][1]) print(arr2[0:2, 1]) """ Explanation: Question: Why does the following example produce different results? End of explanation """ np.zeros((2, 3)) np.random.normal(10, 3, size=(2, 4)) """ Explanation: Multidimensional array creation The array creation functions listed previously can also be used to create arrays with more than one dimension. For example: End of explanation """ arr = np.arange(8).reshape(2, 4) print(arr) """ Explanation: In fact, the shape of an array can be changed at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is: End of explanation """ arr = np.arange(2, 18, 2).reshape(2, 4) print(arr) print('Second element from dimension 0, last 2 elements from dimension one:') print(arr[1, 2:]) """ Explanation: Slices With multidimensional arrays you can also index using slices, and you can mix and match slices and single indices in the different dimensions: End of explanation """ print('First row: ', arr[0], 'is equivalent to', arr[0, :]) print('Second row: ', arr[1], 'is equivalent to', arr[1, :]) """ Explanation: If you only provide one index to slice a multidimensional array, then the slice will be expanded to ":" for all of the remaining dimensions: End of explanation """ arr1 = np.empty((4, 6, 3)) print('Orig shape: ', arr1.shape) print(arr1[...].shape) print(arr1[..., 0:2].shape) print(arr1[2:4, ..., ::2].shape) print(arr1[2:4, :, ..., ::-1].shape) """ Explanation: This is also known as "ellipsis". 
Ellipsis can be specified explicitly with "...". It will automatically expand to ":" for each of the unspecified dimensions in the array, and can even be used at the beginning of the slice: End of explanation """ arr1 = np.arange(4) arr2 = np.arange(10, 14) print(arr1, '+', arr2, '=', arr1 + arr2) """ Explanation: Operating with arrays Arrays support all regular arithmetic operators, and the NumPy library also contains a complete collection of basic mathematical functions that operate on arrays. It is important to remember that in general, all operations with arrays are applied element-wise, i.e., are applied to all the elements of the array at the same time. For example: End of explanation """ print(arr1, '*', arr2, '=', arr1 * arr2) """ Explanation: Importantly, even the multiplication operator is by default applied element-wise: It is not the matrix multiplication from linear algebra: End of explanation """ 1.5 * arr1 """ Explanation: We may also multiply an array by a scalar: End of explanation """ print(np.arange(3)) print(np.arange(3) + 5) """ Explanation: This is an example of broadcasting. Broadcasting The fact that NumPy operates on an element-wise basis means that in principle arrays must always match one another's shape. However, NumPy will also helpfully "broadcast" dimensions when possible. 
Here is an example of broadcasting a scalar to a 1D array: End of explanation """ np.ones((3, 3)) + np.arange(3) """ Explanation: We can also broadcast a 1D array to a 2D array, in this case adding a vector to all rows of a matrix: End of explanation """ a = np.arange(3).reshape((3, 1)) b = np.arange(3) print(a, '+', b, '=\n', a + b) """ Explanation: We can also broadcast in two directions at a time: End of explanation """ arr1 = np.ones((2, 3)) arr2 = np.ones((2, 1)) # arr1 + arr2 arr1 = np.ones((2, 3)) arr2 = np.ones(3) # arr1 + arr2 arr1 = np.ones((1, 3)) arr2 = np.ones((2, 1)) # arr1 + arr2 arr1 = np.ones((1, 3)) arr2 = np.ones((1, 2)) # arr1 + arr2 arr1 = np.ones((1, 3)) arr3 = arr2[:, :, np.newaxis] # arr1 + arr3 """ Explanation: Pictorially: (image source) Rules of Broadcasting Broadcasting follows these three rules: If the two arrays differ in their number of dimensions, the shape of the array with fewer dimensions is padded with ones on its leading (left) side. If the shape of the two arrays does not match in any dimension, either array with shape equal to 1 in a given dimension is stretched to match the other shape. If in any dimension the sizes disagree and neither has shape equal to 1, an error is raised. Note that all of this happens without ever actually creating the expanded arrays in memory! This broadcasting behavior is in practice enormously powerful, especially given that when NumPy broadcasts to create new dimensions or to 'stretch' existing ones, it doesn't actually duplicate the data. In the example above the operation is carried out as if the scalar 1.5 was a 1D array with 1.5 in all of its entries, but no actual array is ever created. This can save lots of memory in cases when the arrays in question are large. As such this can have significant performance implications. Broadcasting Examples So when we do... 
np.arange(3) + 5 The scalar 5 is: first 'promoted' to a 1-dimensional array of length 1 (rule 1) then, this array is 'stretched' to length 3 to match the first array. (rule 2) After these two operations are complete, the addition can proceed as now both operands are one-dimensional arrays of length 3. When we do np.ones((3, 3)) + np.arange(3) The second array is: first 'promoted' to a 2-dimensional array of shape (1, 3) (rule 1) then axis 0 is 'stretched' to length 3 to match the first array (rule 2) When we do np.arange(3).reshape((3, 1)) + np.arange(3) The second array is: first 'promoted' to a 2-dimensional array of shape (1, 3) (rule 1) then axis 0 is 'stretched' to form an array of shape (3, 3) (rule 2) and the first array's axis 1 is 'stretched' to form an array of shape (3, 3) (rule 2) Then the operation proceeds as if on two 3 $\times$ 3 arrays. The general rule is: when operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way left, creating dimensions of length 1 as needed. Two dimensions are considered compatible when they are equal to begin with, or one of them is 1; in this case NumPy will do the 'stretching' to make them equal. If these conditions are not met, a ValueError: operands could not be broadcast together exception is thrown, indicating that the arrays have incompatible shapes. Questions: What will the result of adding arr1 with arr2 be in the following cases? End of explanation """ print('For the following array:\n', arr) print('The sum of elements along the rows is :', arr.sum(axis=1)) print('The sum of elements along the columns is :', arr.sum(axis=0)) """ Explanation: Exercise 1 1. Use np.arange and reshape to create the array A = [[1 2 3 4] [5 6 7 8]] 2. Use np.array to create the array B = [1 2] 3. Use broadcasting to add B to A to create the final array A + B = [[2 3 4 5] [7 8 9 10]] Hint: what shape does B have to be changed to? Array Properties and Methods (cont.) 
For multidimensional arrays it is possible to carry out computations along a single dimension by passing the axis parameter: End of explanation """ np.zeros((3, 4, 5, 6)).sum(axis=2).shape """ Explanation: As you can see in this example, the value of the axis parameter is the dimension that will be consumed once the operation has been carried out. This is why to sum along the columns we use axis=0. This can be easily illustrated with an example that has more dimensions; we create an array with 4 dimensions and shape (3,4,5,6) and sum along the axis number 2 (i.e. the third axis, since in Python all counts are 0-based). That consumes the dimension whose length was 5, leaving us with a new array that has shape (3,4,6): End of explanation """ print('Array:\n', arr) print('Transpose:\n', arr.T) """ Explanation: Another widely used property of arrays is the .T attribute, which allows you to access the transpose of the array: End of explanation """ x = np.linspace(0, 9, 3) y = np.linspace(-8, 4, 3) x2d, y2d = np.meshgrid(x, y) print(x2d) print(y2d) """ Explanation: Generating 2D coordinate arrays A common task is to generate a pair of arrays that represent the coordinates of our data. When orthogonal 1d coordinate arrays already exist, NumPy's meshgrid function is very useful: End of explanation """ np.arange(6).reshape((1, -1)) np.arange(6).reshape((2, -1)) """ Explanation: Reshape and newaxis Reshaping arrays is a common task in order to make the best of NumPy's powerful broadcasting. A useful tip with the reshape method is that it is possible to provide a -1 length for at most one of the dimensions. 
This indicates that NumPy should automatically calculate the length of this dimension: End of explanation """ arr = np.arange(6) print(arr[np.newaxis, :, np.newaxis].shape) """ Explanation: Another way to increase the dimensionality of an array is to use the newaxis keyword: End of explanation """ arr = np.arange(8) arr_view = arr.reshape(2, 4) """ Explanation: Views, not Copies Note that reshaping (like most NumPy operations), wherever possible, provides a view of the same memory: End of explanation """ # Print the "view" array from reshape. print('Before\n', arr_view) # Update the first element of the original array. arr[0] = 1000 # Print the "view" array from reshape again, # noticing the first value has changed. print('After\n', arr_view) """ Explanation: What this means is that if one array is modified, the other will also be updated: End of explanation """ x = np.linspace(0, 2*np.pi, 100) y = np.sin(x) """ Explanation: This lack of copying allows for very efficient vectorized operations, but this power should be used carefully - if used badly it can lead to some bugs that are hard to track down. If in doubt, you can always copy the data to a different block of memory with the copy() method. Element-wise Functions NumPy ships with a full complement of mathematical functions that work on entire arrays, including logarithms, exponentials, trigonometric and hyperbolic trigonometric functions, etc. 
For example, sampling the sine function at 100 points between $0$ and $2\pi$ is as simple as: End of explanation """ x = np.arange(-5, 5.5, 0.5) y = np.exp(x) """ Explanation: Or to sample the exponential function between $-5$ and $5$ at intervals of $0.5$: End of explanation """ v1 = np.array([2, 3, 4]) v2 = np.array([1, 0, 1]) print(v1, '.', v2, '=', np.dot(v1, v2)) """ Explanation: Linear algebra in NumPy NumPy ships with a basic linear algebra library, and all arrays have a dot method whose behavior is that of the scalar dot product when its arguments are vectors (one-dimensional arrays) and the traditional matrix multiplication when one or both of its arguments are two-dimensional arrays: End of explanation """ A = np.arange(6).reshape(2, 3) print(A, '\n') print(np.dot(A, A.T)) """ Explanation: For matrix-matrix multiplication, the regular $matrix \times matrix$ rules must be satisfied. For example $A \times A^T$: End of explanation """ print(np.dot(A.T, A)) """ Explanation: results in a (2, 2) array, yet $A^T \times A$ results in a (3, 3). Why is this?: End of explanation """ print(A, 'x', v1, '=', np.dot(A, v1)) """ Explanation: NumPy makes no distinction between row and column vectors and simply verifies that the dimensions match the required rules of matrix multiplication. Below is an example of matrix-vector multiplication, and in this case we have a $2 \times 3$ matrix multiplied by a 3-vector, which produces a 2-vector: End of explanation """
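One footnote not covered in the original notebook: since Python 3.5 the @ operator (PEP 465) performs the same matrix multiplication as np.dot, so the examples above can be written more compactly. A quick check with the same arrays:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
v1 = np.array([2, 3, 4])

# Matrix-vector product two ways; both give the same 2-vector.
with_dot = np.dot(A, v1)
with_operator = A @ v1  # [11, 38] for these inputs
```

As with np.dot, the operands must still satisfy the usual matrix-multiplication shape rules; @ does not broadcast element-wise the way * does.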
repo_name: ES-DOC/esdoc-jupyterhub
path: notebooks/pcmdi/cmip6/models/sandbox-3/land.ipynb
license: gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: PCMDI Source ID: SANDBOX-3 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:36 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. 
Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. 
Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Do the soil properties vary continuously with depth? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependencies of snow free albedo calculations End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. 
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. 
Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. 
Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "open shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile vary with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10.
Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. 
Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. 
Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2.
Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintenance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2.
Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. 
Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. 
Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2.
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. 
Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
dipanjank/ml
text_classification_and_clustering/problem_statement.ipynb
gpl-3.0
%matplotlib inline import matplotlib.pyplot as plt plt.style.use('ggplot') import pandas as pd import numpy as np import seaborn as sns raw_input = pd.read_pickle('input.pkl') raw_input.head() """ Explanation: <h1 align="center">EF Machine Learning Homework</h1> The Machine Learning Tasks The task is to build a system to classify the Level of writing samples by English language learners, using a data set gathered from users. Each Level is comprised of multiple units. Learners progress in linear order from one Level to the next, although within a Level, they may jump around from one unit to another at will. Typically one of the last units in a Level is the “written task”, in which the learners write freely on a given topic. Convert XML to Pandas DataFrame ( <a href="https://github.com/dipanjank/ml/blob/master/ef_homework/step_1_data_prep.ipynb">step_1_data_prep.ipynb</a>) In this step, I converted the raw XML (https://www.dropbox.com/s/rizbq4co1hlfpft/EFWritingData.xml?dl=0) into a pandas DataFrame. Each row in the DataFrame is a flat representation of one &lt;writing&gt; element, its sub-elements and attributes. I excluded the article date in the conversion since I wasn't sure about using it in the classification step. End of explanation """ ax = plt.subplots(1, 1, figsize=(12, 8)) ax = sns.boxplot(x='level', y='topic_id', data=raw_input) """ Explanation: Analysis of MetaData I first investigated the relationships between the metadata fields and the level attribute. The metadata fields excluding date are: article_id grade level topic_id topic_text unit The figure below shows the boxplot of topic_id for each distinct value of level. End of explanation """ _, ax = plt.subplots(1, 2, figsize=(16, 4)) _ = sns.boxplot(x='level', y='grade', data=raw_input, ax=ax[0]) _ = sns.boxplot(x='level', y='unit', data=raw_input, ax=ax[1]) """ Explanation: Each level belongs to a mutually exclusive set of topic IDs.
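This one-to-one mapping between topic and level can be checked directly with a groupby; below is a minimal sketch on toy data (the toy frame and its values are illustrative, the real check would run on raw_input itself):

```python
import pandas as pd

# Toy stand-in for raw_input with the property described above:
# every topic_id appears under exactly one level.
toy = pd.DataFrame({
    'topic_id': [1, 1, 2, 2, 50, 50, 59],
    'level':    [1, 1, 1, 1, 7,  7,  8],
})

# If the mapping is one-to-one, each topic_id touches exactly one level.
levels_per_topic = toy.groupby('topic_id')['level'].nunique()
mapping_is_deterministic = bool((levels_per_topic == 1).all())
print(mapping_is_deterministic)
```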
We can deterministically figure out the level from the topic_id without using any Machine Learning. However, since this homework is for text classification, I chose to go ahead and build a text classifier without using the topic_id or the topic_text in the model. The figure below shows the boxplots per level for the other two metadata features, unit and grade. Visually they didn't seem to be good predictors for level. End of explanation """ nmf_error = pd.read_csv('nmf_rec_errors.csv', index_col=0, squeeze=True) ax = nmf_error.plot(kind='bar', title='NMF reconstruction errors vs. Component Size', rot=0) _ = ax.set(xlabel='NMF Component Size', ylabel='Reconstruction Error') """ Explanation: Text Classification I used the experience from the metadata analysis to build my working hypothesis for a text-based classifier: Each writing sample at a given level is about a topic. The topic uniquely determines the level. Different topics produce different distributions of words. As instructed in the homework, I set aside 20% of the total data (with the same proportion of levels) as a test set. I used the other 80% for cross-validation and training of the model(s) under consideration. I also constructed a smaller training set by taking random samples of 1000 records for each level from the training set. This allowed me to run different experiments quickly (the full training set has ~900K records). Small Sample Analysis (<a href="https://github.com/dipanjank/ml/blob/master/ef_homework/step_2_classification_of_sample_dataset.ipynb">step_2_classification_of_sample_dataset.ipynb</a>) I converted the smaller training set of 16000 records into term frequency (tf) and inverse document frequency (idf) based features and then used a maximum entropy classifier to maximize the log-likelihood of these features given the class labels from the training data.
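The stratified 80/20 hold-out described above can be sketched with scikit-learn's train_test_split; the toy texts and labels here are placeholders for the real articles and levels:

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# 100 fake documents spread evenly over 4 fake levels.
texts = ['doc%d' % i for i in range(100)]
levels = [i % 4 for i in range(100)]

train_x, test_x, train_y, test_y = train_test_split(
    texts, levels, test_size=0.2, stratify=levels, random_state=42)

# stratify=levels keeps the label proportions the same in both pieces.
print(Counter(train_y), Counter(test_y))
```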
Then I extracted the out-of-sample predictions using 10-fold cross-validation and then generated the precision, recall, f1-score and confusion matrix for the true and predicted labels. The main decision points with tf-idf features are: Stopword Removal: I used NLTK's default english language stopwords. Stemming / lemmatization: I didn't consider stemming. Removing frequent and infrequent words: These are hyperparameters for the classification problem. I ran the classification pipeline on the sample dataset a few times with and without stopwords, different values for the frequent and infrequent word thresholds and maximum number of features to keep. The main driver of f1-score seemed to be the maximum number of features which also required more time to run. For example, the parameters below: counter = TfidfVectorizer( ngram_range=(1, 2), stop_words=en_stopwords, max_df=0.4, min_df=25, max_features=3000, sublinear_tf=True ) scaler = StandardScaler(with_mean=False) model = LogisticRegression(penalty='l2', max_iter=200, random_state=4321) pipeline = make_pipeline(counter, scaler, model) produces the classification result from cross-validation: precision recall f1-score support 1 0.90 0.89 0.90 1007 2 0.89 0.88 0.88 1019 3 0.85 0.88 0.87 972 4 0.86 0.85 0.85 1010 5 0.86 0.87 0.87 993 6 0.88 0.91 0.90 966 7 0.81 0.81 0.81 1000 8 0.80 0.78 0.79 1025 9 0.82 0.82 0.82 999 10 0.85 0.87 0.86 978 11 0.77 0.76 0.76 1023 12 0.86 0.87 0.87 996 13 0.91 0.90 0.90 1013 14 0.86 0.86 0.86 1002 15 0.81 0.80 0.80 1010 16 0.85 0.86 0.86 987 avg / total 0.85 0.85 0.85 16000 In order to improve the model, I then looked at the misclassified records in my sample dataset. The biggest misclassification occurred between level 7 and 8. 
The distribution of topics for the 7 <---> 8 misclassification is: topic_id topic_text level level_predicted number of misclassifications 50 Planning for the future 7 8 28 59 Making a 'to do' list of your dreams 8 7 21 62 Responding to written invitations 8 7 6 60 Describing a business trip 8 7 5 63 Congratulating a friend on an award 8 7 4 52 Writing about a memorable experience 7 8 4 Next I extracted the word count matrices for a subset of these articles, compared them and concluded that binary features indicating presence / absence of rare words could be a better indicator of level. This approach improved the composite f1-score to 0.87 from 0.85. Full Dataset Analysis (<a href="https://github.com/dipanjank/ml/blob/master/ef_homework/step_3_classification_of_full_dataset.ipynb">step_3_classification_of_full_dataset.ipynb</a>) Level Classification Next I executed 10-fold cross-validation of the binary feature + maximum entropy classifier model on the full training set. This produced a composite f1-score of 0.95, but the precision on levels 15 and 16 (the levels with the lowest number of samples) was only 65%. This is most likely due to label imbalance.
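One standard way to make a maximum entropy classifier imbalance-aware (and presumably what the adjustment described next amounts to) is to reweight classes inversely to their frequency; a sketch with scikit-learn's LogisticRegression on synthetic imbalanced data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced problem: 950 majority vs 50 minority samples.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(950, 5)),
               rng.normal(1.0, 1.0, size=(50, 5))])
y = np.array([0] * 950 + [1] * 50)

# class_weight='balanced' scales each class by n_samples / (n_classes * n_c),
# so mistakes on the rare class cost proportionally more during fitting.
clf = LogisticRegression(class_weight='balanced', max_iter=1000)
clf.fit(X, y)
minority_recall = clf.score(X[y == 1], y[y == 1])
print(minority_recall)
```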
Final Classification Result (on Test Set): precision recall f1-score support 1 0.97 0.98 0.98 69973 2 0.96 0.96 0.96 32966 3 0.94 0.93 0.94 22245 4 0.95 0.97 0.96 33412 5 0.95 0.94 0.94 17268 6 0.95 0.93 0.94 10754 7 0.94 0.96 0.95 19290 8 0.89 0.87 0.88 8550 9 0.91 0.90 0.91 5800 10 0.95 0.94 0.95 7324 11 0.88 0.85 0.86 3232 12 0.89 0.87 0.88 1896 13 0.89 0.89 0.89 1769 14 0.83 0.80 0.81 750 15 0.76 0.77 0.77 442 16 0.80 0.80 0.80 391 avg / total 0.95 0.95 0.95 236062 Group Classification This is essentially the same classification problem as the level classification but with collapsed categories, so my intuition is that the same feature-classifier combination will work well and should produce slightly better performance. This seems to hold true. Final Classification Result (on Test Set): precision recall f1-score support A1 0.98 0.99 0.98 124702 A2 0.95 0.95 0.95 61427 B1 0.94 0.94 0.94 33760 B2 0.92 0.90 0.91 12650 C1 0.88 0.82 0.85 3135 C2 0.78 0.78 0.78 388 avg / total 0.96 0.96 0.96 236062 Text Clustering (<a href="https://github.com/dipanjank/ml/blob/master/ef_homework/step_4_text_clustering.ipynb">step_4_text_clustering.ipynb</a>) The third task in the assignment is to build a text clustering system on the same set of text documents. To solve this problem, I chose the following approach: Tf-Idf based Feature Extraction: This is very similar to the feature extraction method using TfidfVectorizer as above. Dimension Reduction of the sparse feature matrix: I experimented with two methods of dimension reduction: Singular Value Decomposition (SVD) This transforms the (sparse) n_samples * vocab_size tf-idf feature matrix to n_samples * k where k < vocab_size and corresponds to the top k eigenvectors, or orthonormal bases, of the feature matrix. I rejected this approach since it was much slower than NMF and even after increasing the number of principal components to 5000, the percentage of variance explained remained less than 60%.
Non-negative Matrix Factorization (NMF) This transforms tf-idf feature matrix to n_samples * k where each of the k k axes (k << vocab_size) represents a topic. Given that we already know that the documents belong to specific set of topics, this seemed a natural fit for this problem. K-Means Clustering: Finally, I used K-Means clustering to assign cluster labels to the NMF transformed feature matrix. Selection of Number of NMF Components To select the optimal number of NMF components, I plotted the reconstruction error from the NMF step for different component sizes on the 16000 sample dataset. End of explanation """ sse = pd.read_csv('kmeans_sse.csv', index_col=0, squeeze=True) fig, ax = plt.subplots(1, 1, figsize=(10, 4)) ax = sse.plot(kind='bar', title='Cluster SSE vs Cluster Size (16K features)', ax=ax, rot=0) _ = ax.set(xlabel='number of clusters', ylabel='SSE') """ Explanation: Based on this, I chose the number of components as 20. Optimal Cluster Size To select the optimal cluster size for KMeans, I plotted the inertia (Sum of squared distances of each sample from the its cluster centroid) vs. the number of clusters ranging from 2 to 30 on the 16000 sample dataset. End of explanation """ cluster_counts = pd.read_csv('label_counts_all.csv', index_col=0, squeeze=True) level_counts = raw_input.level.value_counts() _, ax = plt.subplots(1, 2, figsize=(14, 6)) ax_0 = cluster_counts.plot(ax=ax[0], kind='bar', title='Value Counts per Cluster', rot=0) _ = ax_0.set(xlabel='Cluster ID', ylabel='Samples') ax_1 = level_counts.plot(ax=ax[1], kind='bar', title='Value Counts per Level', rot=0) _ = ax_1.set(xlabel='Level', ylabel='Samples') """ Explanation: This is commonly known as the "Elbow Plot" and the optimal cluster size is the point where the SSE curve starts to flatten out. In this case, I chose the optimal cluster size as 20. 
Cluster Assignments and Comparison with Level Next I executed the feature construction, dimension reduction and clustering steps for the full dataset. The figure shows the value counts per cluster and value counts per level. End of explanation """ ct_full = pd.read_csv('cluster_assignment_vs_labels_all.csv', index_col=0) fig, ax = plt.subplots(1, 1, figsize=(16, 8)) ax = sns.heatmap(ct_full, annot=True, fmt='d', ax=ax) _ = ax.set(title='Cluster assignments vs Level (Full Dataset)') _ = ax.set(xlabel='Cluster ID', ylabel='Level') """ Explanation: Visually, the clustering results from this approach aren't similar to the levels. To get a more quantitative measure of agreement between the two, I calculated the Adjusted Rand Index (which is near 0 for unrelated partitions and 1 for identical ones). The Adjusted Rand Index between this cluster assignment and the level attribute is 0.089, indicating that they are quite dissimilar. The figure below shows a cross-tabulation of Cluster IDs vs. the level attribute: End of explanation """
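The Adjusted Rand Index computation used above takes one line with scikit-learn; a small sketch showing its behaviour at the two extremes:

```python
from sklearn.metrics import adjusted_rand_score

# ARI compares two labelings while ignoring the label ids themselves.
truth     = [0, 0, 1, 1, 2, 2]
renamed   = [5, 5, 9, 9, 7, 7]   # same partition, different names
unrelated = [0, 1, 2, 0, 1, 2]   # no pair of samples grouped the same way

print(adjusted_rand_score(truth, renamed))    # identical partitions score 1.0
print(adjusted_rand_score(truth, unrelated))  # unrelated partitions score ~0 or below
```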
weikang9009/pysal
notebooks/viz/splot/libpysal_non_planar_joins_viz.ipynb
bsd-3-clause
from pysal.lib.weights.contiguity import Queen import pysal.lib from pysal.lib import examples import matplotlib.pyplot as plt import geopandas as gpd %matplotlib inline from pysal.viz.splot.pysal.lib import plot_spatial_weights """ Explanation: splot.pysal.lib: assessing neigbors & spatial weights In spatial analysis workflows it is often important and necessary to asses the relationships of neighbouring polygons. pysal.lib and splot can help you to inspect if two neighbouring polygons share an edge or not. Content: * Imports * Data Preparation * Plotting Imports End of explanation """ examples.explain('rio_grande_do_sul') """ Explanation: Data Preparation Let's first have a look at the dataset with pysal.lib.examples.explain End of explanation """ gdf = gpd.read_file(examples.get_path('map_RS_BR.shp')) gdf.head() weights = Queen.from_dataframe(gdf) """ Explanation: Load data into a geopandas geodataframe End of explanation """ plot_spatial_weights(weights, gdf) plt.show() """ Explanation: This warning tells us that our dataset contains islands. Islands are polygons that do not share edges and nodes with adjacent polygones. This can for example be the case if polygones are truly not neighbouring, eg. when two land parcels are seperated by a river. However, these islands often stems from human error when digitizing features into polygons. This unwanted error can be assessed using splot.pysal.lib plot_spatial_weights functionality: Plotting End of explanation """ wnp = pysal.lib.weights.util.nonplanar_neighbors(weights, gdf) """ Explanation: This visualisation depicts the spatial weights network, a network of connections of the centroid of each polygon to the centroid of its neighbour. As we can see, there are many polygons in the south and west of this map, that are not connected to it's neighbors. This stems from digitization errors and needs to be corrected before we can start our statistical analysis. 
pysal.lib offers a tool to correct this error by 'snapping' incorrectly separated neighbours back together: End of explanation """ plot_spatial_weights(wnp, gdf) plt.show() """ Explanation: We can now visualise if the nonplanar_neighbors tool adjusted all errors correctly: End of explanation """ plot_spatial_weights(wnp, gdf, nonplanar_edge_kws=dict(color='#4393c3')) plt.show() """ Explanation: The visualization shows that all erroneous islands are now stored as neighbors in our new weights object, depicted by the new joins displayed in orange. We can now adapt our visualization to show all joins in the same color, by using the nonplanar_edge_kws argument in plot_spatial_weights: End of explanation """
astroNN/astroNN
notebooks/2_fullyconnected.ipynb
mit
import h5py import numpy as np import tensorflow as tf # this package from astronn.data import fetch_notMNIST """ Explanation: Deep Learning Assignment 2 Previously in 1_notmnist.ipynb, we 1. Downloaded a test dataset with training, development, and testing subsets (based on the notMNIST dataset). 2. We visualized and explored the data. 3. We trained an out-of-box logistic regression model to classify the data and found that it performs very badly The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow. End of explanation """ cache_file = fetch_notMNIST() def randomize(data, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_data = data[permutation] shuffled_labels = labels[permutation] return shuffled_data, shuffled_labels with h5py.File(cache_file, 'r') as f: train_dataset, train_labels = randomize(f['train']['images'][:], f['train']['labels'][:]) valid_dataset, valid_labels = randomize(f['validate']['images'][:], f['validate']['labels'][:]) test_dataset, test_labels = randomize(f['test']['images'][:], f['test']['labels'][:]) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) image_size = train_dataset.shape[-1] num_labels = np.unique(train_labels).size """ Explanation: First load and randomize the data, as we did in 1_notmnist.ipynb End of explanation """ def labels_to_1hot(labels): # this uses numpy array broadcasting to do this operation in a memory efficient way: # see: http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html one_hots = (np.arange(num_labels) == labels[:,np.newaxis]).astype(np.float32) return one_hots def flatten_images(image_data): # -1 in the reshape tells numpy to infer the length along that axis # float32 takes up less memory return image_data.reshape((-1, image_size*image_size)).astype(np.float32) train_labels = 
labels_to_1hot(train_labels) valid_labels = labels_to_1hot(valid_labels) test_labels = labels_to_1hot(test_labels) train_dataset = flatten_images(train_dataset) valid_dataset = flatten_images(valid_dataset) test_dataset = flatten_images(test_dataset) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) """ Explanation: Here we reformat the data into a shape that's more adapted to the models we're going to train: - we'll flatten the image data (so, instead of 28 by 28 images, length-784 arrays) - we'll turn the labels into float 1-hot encodings (read about 1-hots in this excellent StackOverflow answer). End of explanation """ # With gradient descent training, even this much data is prohibitive. # Subset the training data for faster turnaround. train_subset = 10000 # This is where we set up the graph and define variables that we will use, # however *nothing actually gets executed here*! This only establishes and # defines variables and operations on the graph. graph = tf.Graph() with graph.as_default(): # Input data. # Load the training, validation and test data into constants that are # attached to the graph. tf_train_dataset = tf.constant(train_dataset[:train_subset, :]) tf_train_labels = tf.constant(train_labels[:train_subset]) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. # These are the parameters that we are going to be training. The weight # matrix will be initialized using random values following a (truncated) # normal distribution. The biases get initialized to zero. weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. # We multiply the inputs with the weight matrix, and add biases. 
We compute # the softmax and cross-entropy (it's one operation in TensorFlow, because # it's very common, and it can be optimized). We take the average of this # cross-entropy across all training examples: that's our loss. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. # We are going to find the minimum of this loss using gradient descent. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # 0.5 is the learning rate # Predictions for the training, validation, and test data. # These are not part of training, but merely here so that we can report # accuracy figures as we train. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) """ Explanation: We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this: * First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below: with graph.as_default(): ... Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below: with tf.Session(graph=graph) as session: ... Let's load all the data into TensorFlow and build the computation graph corresponding to our training: End of explanation """ num_steps = 801 def accuracy(predictions, labels): # a simple comparison of the predicted labels vs.
the truth to get the prediction accuracy return (100.0 * np.sum(np.argmax(predictions, axis=1) == np.argmax(labels, axis=1)) / predictions.shape[0]) # define a Session with the graph we defined above with tf.Session(graph=graph) as session: # This is a one-time operation which ensures the parameters get initialized as # we described in the graph: random weights for the matrix, zeros for the biases. tf.initialize_all_variables().run() print('Initialized') for step in range(num_steps): # Run the computations. We tell run() that we want to run the optimizer, # and get the loss value and the training predictions returned as numpy arrays. # these three variables (optimizer, loss, and train_prediction) are defined and # attached to the graph in the previous cell. _, l, predictions = session.run([optimizer, loss, train_prediction]) if (step % 100 == 0): print('Loss at step', step, ':', l) print('Training accuracy: {:.1f}%'.format(accuracy(predictions, train_labels[:train_subset, :]))) # Calling .eval() on valid_prediction is basically like calling run(), but # just to get that one numpy array. Note that it recomputes all its graph # dependencies. print('Validation accuracy: {:.1f}%'.format(accuracy(valid_prediction.eval(), valid_labels))) print('Test accuracy: {:.1f}%'.format(accuracy(test_prediction.eval(), test_labels))) """ Explanation: The graph above holds the entire training set in memory as a constant() node on the graph. Below, we will slightly modify the graph to instead contain placeholder() nodes for the training data, and during the training we will pass in smaller (mini)batches of the training data. These placeholders will be fed the actual training data at every call of session.run(). End of explanation """ batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables.
weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) """ Explanation: The graph above holds the entire training set in memory as a constant() node on the graph. Below, we will slightly modify the graph to instead constain placeholder() nodes for the training data, and during the training we will pass in smaller (mini)batches of the training data. These placeholders will be fed the actual training data at every call of sesion.run(). End of explanation """ num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step", step, ":", l) print("Minibatch accuracy: {:.1f}%".format(accuracy(predictions, batch_labels))) print("Validation accuracy: {:.1f}%".format(accuracy(valid_prediction.eval(), valid_labels))) print("Test accuracy: {:.1f}%".format(accuracy(test_prediction.eval(), test_labels))) """ Explanation: Let's run it: End of explanation """ n_hidden = 1024 batch_size = 128 def model(X, weights, biases): # Hidden layer with RELU activation # ADD CODE HERE: # ... pass graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. # Similar to the above SGD model, we will need weights and biases, but now we # are building a neural network with one hidden layer. Meaning, we will need # two sets of weights and biases: one set for the hidden layer, one set for # the output layer. ADD CODE HERE: # ... # Training computation. logits = model(tf_train_dataset, weights, biases) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. 
train_prediction = tf.nn.softmax(logits) valid_prediction = model(tf_valid_dataset, weights, biases) test_prediction = model(tf_test_dataset, weights, biases) """ Explanation: Problem 1 Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve the validation / test accuracy compared to the SGD model above. End of explanation """ num_steps = 1001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 100 == 0): print("Minibatch loss at step", step, ":", l) print("Minibatch accuracy: {:.1f}%".format(accuracy(predictions, batch_labels))) print("Validation accuracy: {:.1f}%".format(accuracy(valid_prediction.eval(), valid_labels))) print("Test accuracy: {:.1f}%".format(accuracy(test_prediction.eval(), test_labels))) """ Explanation: Note: you should be able to run the graph exactly the same way as the SGD model above: End of explanation """
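As a structural hint for Problem 1 (not the TensorFlow solution itself), the forward pass of a 1-hidden-layer ReLU network can be sketched in plain NumPy; weights1/biases1 produce the hidden layer, weights2/biases2 the output logits:

```python
import numpy as np

rng = np.random.RandomState(0)
n_inputs, n_hidden, n_out = 784, 1024, 10

# Two pairs of parameters, mirroring the two tf.Variable pairs the graph needs.
weights1 = rng.normal(scale=0.05, size=(n_inputs, n_hidden))
biases1 = np.zeros(n_hidden)
weights2 = rng.normal(scale=0.05, size=(n_hidden, n_out))
biases2 = np.zeros(n_out)

def forward(X):
    hidden = np.maximum(X.dot(weights1) + biases1, 0.0)  # ReLU, i.e. tf.nn.relu
    return hidden.dot(weights2) + biases2                # logits

logits = forward(rng.normal(size=(4, n_inputs)))
print(logits.shape)
```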
googledatalab/notebooks
samples/contrib/mlworkbench/text_classification_20newsgroup/Text Classification --- 20NewsGroup (small data).ipynb
apache-2.0
import numpy as np import pandas as pd import os import re import csv from sklearn.datasets import fetch_20newsgroups # data will be downloaded. Note that an error message saying something like "No handlers could be found for # logger sklearn.datasets.twenty_newsgroups" might be printed, but this is not an error. news_train_data = fetch_20newsgroups(subset='train', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes')) news_test_data = fetch_20newsgroups(subset='test', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes')) """ Explanation: <h1>About this Notebook</h1> This notebook demonstrates the experience of using ML Workbench to create a machine learning model for text classification and setting it up for online prediction. Training the model is done "locally" inside Datalab. The next notebook (Text Classification --- 20NewsGroup (large data)) demonstrates how to do the same using Cloud ML Engine services. If you have any feedback, please send it to datalab-feedback@google.com. Data The 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics. The classification problem is to identify the newsgroup a post was submitted to, given the text of the post. There are a few versions of this dataset from different sources online. Below, we use the version within scikit-learn which is already split into a train and test/eval set.
For a longer introduction to this dataset, see the scikit-learn website Download Data End of explanation """ news_train_data.data[2], news_train_data.target_names[news_train_data.target[2]] def clean_and_tokenize_text(news_data): """Cleans some issues with the text data Args: news_data: list of text strings Returns: For each text string, an array of tokenized words is returned in a list """ cleaned_text = [] for text in news_data: x = re.sub('[^\w]|_', ' ', text) # only keep numbers and letters and spaces x = x.lower() x = re.sub(r'[^\x00-\x7f]',r'', x) # remove non ascii texts tokens = [y for y in x.split(' ') if y] # remove empty words tokens = ['[number]' if x.isdigit() else x for x in tokens] # convert all numbers to '[number]' to reduce vocab size. cleaned_text.append(tokens) return cleaned_text clean_train_tokens = clean_and_tokenize_text(news_train_data.data) clean_test_tokens = clean_and_tokenize_text(news_test_data.data) """ Explanation: Cleaning the Raw Data Printing the 3rd element in the training dataset shows the data contains text with newlines, punctuation, misspellings, and other items common in text documents. To build a model, we will clean up the text by removing some of these issues. End of explanation """ def get_unique_tokens_per_row(text_token_list): """Collect unique tokens per row. Args: text_token_list: list, where each element is a list containing tokenized text Returns: One list containing the unique tokens in every row. For example, if row one contained ['pizza', 'pizza'] while row two contained ['pizza', 'cake', 'cake'], then the output list would contain ['pizza' (from row 1), 'pizza' (from row 2), 'cake' (from row 2)] """ words = [] for row in text_token_list: words.extend(list(set(row))) return words # Make a plot where the x-axis is a token, and the y-axis is how many text documents # that token is in.
words = pd.DataFrame(get_unique_tokens_per_row(clean_train_tokens), columns=['words'])
token_frequency = words['words'].value_counts()  # how many documents contain each token.

token_frequency.plot(logy=True)

vocab = token_frequency[np.logical_and(token_frequency < 1000, token_frequency > 10)]
vocab.plot(logy=True)

def filter_text_by_vocab(news_data, vocab):
    """Removes tokens if not in vocab.
    Args:
        news_data: list, where each element is a token list
        vocab: set containing the tokens to keep.
    Returns:
        List of strings containing the final cleaned text data
    """
    text_strs = []
    for row in news_data:
        words_to_keep = [token for token in row if token in vocab or token == '[number]']
        text_strs.append(' '.join(words_to_keep))
    return text_strs

clean_train_data = filter_text_by_vocab(clean_train_tokens, set(vocab.index))
clean_test_data = filter_text_by_vocab(clean_test_tokens, set(vocab.index))

# Check a few instances of cleaned data
clean_train_data[:3]
"""
Explanation: Get Vocabulary
We need to filter the vocabulary: very frequent tokens (here, those appearing in more than 1000 documents) are mostly stop words that carry little signal, while very rare tokens (those appearing in fewer than 10 documents) are unlikely to help the model generalize. Keeping only the tokens in between also keeps the vocabulary size manageable.
End of explanation
"""
!mkdir -p ./data

with open('./data/train.csv', 'w') as f:
    writer = csv.writer(f, lineterminator='\n')
    for target, text in zip(news_train_data.target, clean_train_data):
        writer.writerow([news_train_data.target_names[target], text])

with open('./data/eval.csv', 'w') as f:
    writer = csv.writer(f, lineterminator='\n')
    for target, text in zip(news_test_data.target, clean_test_data):
        writer.writerow([news_test_data.target_names[target], text])

# Also save the vocab, which will be useful in making new predictions.
with open('./data/vocab.txt', 'w') as f:
    vocab.to_csv(f)
"""
Explanation: Save the Cleaned Data For Training
End of explanation
"""
import google.datalab.contrib.mlworkbench.commands  # This loads the '%%ml' magics
"""
Explanation: Create Model with ML Workbench
The MLWorkbench Magics are a set of Datalab commands that provide an easy, code-free experience for training, deploying, and predicting with ML models. This notebook takes the cleaned data produced above and builds a text classification model. The MLWorkbench Magics are a collection of magic commands for each step in ML workflows: analyzing input data to build transforms, transforming data, training a model, evaluating a model, and deploying a model.
For details of each command, run it with --help. For example, "%%ml train --help".
When the dataset is small (like the 20 newsgroup data), there is little benefit in using cloud services. This notebook runs the analyze, transform, and training steps locally. However, we will take the locally trained model, deploy it to ML Engine, and show how to make real predictions on a deployed model. Every MLWorkbench magic can run locally or use cloud services (by adding the --cloud flag). The next notebook (Text Classification --- 20NewsGroup (large data)) in this sequence shows the cloud version of every command, which is the normal experience when building models on large datasets. However, it still uses the 20 newsgroup data.
End of explanation
"""
%%ml dataset create
name: newsgroup_data
format: csv
train: ./data/train.csv
eval: ./data/eval.csv
schema:
    - name: news_label
      type: STRING
    - name: text
      type: STRING

%%ml dataset explore
name: newsgroup_data
"""
Explanation: First, define the dataset we are going to use for training.
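Before creating the dataset, it can be worth a quick sanity check that the CSV files really contain the two declared string columns. Below is a minimal pure-Python sketch; the helper name and checks are our own illustration, not part of ML Workbench:

```python
import csv
import io

# Mirrors the two-column schema declared for the newsgroup_data dataset.
SCHEMA_COLUMNS = ["news_label", "text"]

def find_bad_rows(stream, n_columns=len(SCHEMA_COLUMNS)):
    """Return indices of rows that do not have exactly n_columns non-empty values."""
    bad = []
    for i, row in enumerate(csv.reader(stream)):
        if len(row) != n_columns or any(not value.strip() for value in row):
            bad.append(i)
    return bad

sample = io.StringIO("rec.autos,engine oil change\ncomp.windows.x,,display server\n")
print(find_bad_rows(sample))  # the second row has an extra (empty) column
```

Running a check like this before training surfaces malformed rows early, instead of partway through an analyze or train run.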
End of explanation
"""
%%ml analyze
output: ./analysis
data: newsgroup_data
features:
    news_label:
        transform: target
    text:
        transform: bag_of_words

!ls ./analysis
"""
Explanation: Step 1: Analyze
The first step in the MLWorkbench workflow is to analyze the data for the requested transformations. We are going to build a bag-of-words representation of the text and use it in a linear model, so the analyze step will compute the vocabularies and related statistics of the data for training.
End of explanation
"""
!rm -rf ./transform

%%ml transform --shuffle
output: ./transform
analysis: ./analysis
data: newsgroup_data

# note: the errors_* files are all 0 size, which means no error.
!ls ./transform/ -l -h
"""
Explanation: Step 2: Transform
This step is optional, as training can start from csv data (the same data used in the analysis step). The transform step performs some transformations on the input data and saves the results to a special TensorFlow file called a TFRecord file, which contains TF.Example protocol buffers. This allows training to start from preprocessed data. If this step is skipped, training has to perform the same preprocessing on every row of csv data each time the row is read. Because TensorFlow reads the same data row multiple times during training, the same row would be preprocessed multiple times. By writing the preprocessed data to disk once, we can speed up training.
Because the 20 newsgroups data is small, this step does not matter much, but we do it anyway for illustration. This step is recommended if there are text columns in a dataset, and required if there are image columns.
We run the transform step for both the training and eval data.
End of explanation
"""
%%ml dataset create
name: newsgroup_transformed
train: ./transform/train-*
eval: ./transform/eval-*
format: transformed
"""
Explanation: Create a "transformed dataset" to use in the next step.
End of explanation
"""
# Training should use an empty output folder.
# So if you run training multiple times, use different folders or remove the output
# from the previous run.
!rm -fr ./train
"""
Explanation: Step 3: Training
MLWorkbench automatically builds standard TensorFlow models without you having to write any TensorFlow code.
End of explanation
"""
%%ml train
output: ./train
analysis: ./analysis/
data: newsgroup_transformed
model_args:
    model: linear_classification
    top-n: 5
"""
Explanation: The following training step takes about 10~15 minutes.
End of explanation
"""
# You can also plot the summary events which will be saved with the notebook.
from google.datalab.ml import Summary

summary = Summary('./train')
summary.list_events()
summary.plot(['loss', 'accuracy'])
"""
Explanation: Go to TensorBoard (link shown above) to monitor the training progress. Note that training stops when it detects that accuracy is no longer increasing on the eval data.
End of explanation
"""
!ls ./train/
"""
Explanation: The output of training is two models, one in ./train/model and another in ./train/evaluation_model. These TensorFlow models are identical, except that the latter assumes the target column is part of the input and copies the target value to the output. Therefore, the latter is ideal for evaluation.
End of explanation
"""
%%ml batch_predict
model: ./train/evaluation_model/
output: ./batch_predict
format: csv
data:
    csv: ./data/eval.csv

# It creates a results csv file, and a results schema json file.
!ls ./batch_predict
"""
Explanation: Step 4: Evaluation using batch prediction
Below, we use the evaluation model and run batch prediction locally. Batch prediction is needed for large datasets where the data cannot fit in memory. For demo purposes, we will use the training evaluation data again.
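The point of batch prediction — scoring data too large to hold in memory — can be illustrated with a small streaming sketch. The helper and the toy scorer below are hypothetical stand-ins for the real model, not the ML Workbench implementation:

```python
import csv

def stream_batch_predict(input_rows, predict_fn, batch_size=64):
    """Yield (prediction, text) pairs, scoring batch_size rows at a time so the
    whole dataset never has to be loaded into memory at once."""
    batch = []
    for row in csv.reader(input_rows):
        batch.append(row[-1])  # the text column, as in ./data/eval.csv
        if len(batch) == batch_size:
            for text, label in zip(batch, predict_fn(batch)):
                yield label, text
            batch = []
    for text, label in zip(batch, predict_fn(batch)):  # final partial batch
        yield label, text

# A dummy scorer standing in for the trained model.
toy_model = lambda texts: ["sci.space" if "nasa" in t else "other" for t in texts]

rows = ["sci.space,nasa launch window", "rec.autos,engine oil"]
print(list(stream_batch_predict(rows, toy_model, batch_size=1)))
```

Because the generator only ever holds one batch of rows, the same code works whether the input is two rows or two hundred million.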
End of explanation
"""
!head -n 5 ./batch_predict/predict_results_eval.csv

%%ml evaluate confusion_matrix --plot
csv: ./batch_predict/predict_results_eval.csv

%%ml evaluate accuracy
csv: ./batch_predict/predict_results_eval.csv
"""
Explanation: Note that the output of prediction is a csv file containing the score for each label class. 'predicted_n' is the label with the nth largest score. We care about 'predicted', the final model prediction.
End of explanation
"""
# Create bucket
!gsutil mb gs://bq-mlworkbench-20news-lab
!gsutil cp -r ./batch_predict/predict_results_eval.csv gs://bq-mlworkbench-20news-lab

# Use Datalab's BigQuery API to load CSV files into a table.
import google.datalab.bigquery as bq
import json

with open('./batch_predict/predict_results_schema.json', 'r') as f:
    schema = json.load(f)

# Create BQ Dataset
bq.Dataset('newspredict').create()

# Create the table
table = bq.Table('newspredict.result1').create(schema=schema, overwrite=True)
table.load('gs://bq-mlworkbench-20news-lab/predict_results_eval.csv', mode='overwrite',
           source_format='csv', csv_options=bq.CSVOptions(skip_leading_rows=1))
"""
Explanation: Step 5: BigQuery to analyze evaluation results
Sometimes you want to query your prediction/evaluation results using SQL. That is easy: load the results into BigQuery and query them there.
End of explanation
"""
%%bq query
SELECT * FROM newspredict.result1 WHERE predicted != target
"""
Explanation: Now you can run any SQL queries on the table newspredict.result1. Below we query all wrong predictions.
End of explanation
"""
%%ml predict
model: ./train/model/
headers: text
data:
    - nasa
    - windows xp
"""
Explanation: Prediction
Local Instant Prediction
The MLWorkbench also supports running prediction and displaying the results within the notebook. Note that we use the non-evaluation model below (./train/model), which takes input with no target column.
End of explanation
"""
# Pick some data from the eval csv file. The instances are cleaned text.
# The truth labels for the following 3 instances are # - rec.autos # - comp.windows.x # - talk.politics.mideast instance0 = ('little confused models [number] [number] heard le se someone tell differences far features ' + 'performance curious book value [number] model less book value usually words demand ' + 'year heard mid spring early summer best buy') instance1 = ('hi requirement closing opening different display servers within x application manner display ' + 'associated client proper done during transition problems') instance2 = ('attacking drive kuwait country whose citizens close blood business ties saudi citizens thinks ' + 'helped saudi arabia least eastern muslim country doing anything help kuwait protect saudi arabia ' + 'indeed masses citizens demonstrating favor butcher saddam killed muslims killing relatively rich ' + 'muslims nose west saudi arabia rolled iraqi invasion charge saudi arabia idea governments official ' + 'religion de facto de human nature always ones rise power world country citizens leader slick ' + 'operator sound guys angels posting edited stuff following friday york times reported group definitely ' + 'conservative followers house rule country enough reported besides complaining government conservative ' + 'enough asserted approx [number] [number] kingdom charge under saudi islamic law brings death penalty ' + 'diplomatic guy bin isn called severe punishment [number] women drove public while protest ban women ' + 'driving guy group said al said women fired jobs happen heard muslims ban women driving basis qur etc ' + 'yet folks ban women called choose rally behind hate women allowed tv radio immoral kingdom house neither ' + 'least nor favorite government earth restrict religious political lot among things likely replacements ' + 'going lot worse citizens country house feeling heat lately last six months read religious police ' + 'government western women fully stupid women imo sends wrong signals morality read cracked down few home 
' + 'based religious posted government owned newspapers offering money turns group dare worship homes secret ' +
    'place government grown try take wind conservative opposition things small taste happen guys house trying ' +
    'long run others general west evil zionists rule hate west crowd')

data = [instance0, instance1, instance2]

%%ml predict
model: ./train/model/
headers: text
data: $data
"""
Explanation: Why Does My Model Predict This? Prediction Explanation.
"%%ml explain" gives you insight into which features in the prediction data contribute positively or negatively to a given label. Under the hood, "%%ml explain" uses LIME. (LIME is an open-source library for feature sensitivity analysis, based on the work presented in this paper. LIME is included in Datalab.)
In this case, we will check which words in the text contribute most to the predicted label.
End of explanation
"""
%%ml explain --detailview_only
model: ./train/model
labels: rec.autos
type: text
data: $instance0

%%ml explain --detailview_only
model: ./train/model
labels: comp.windows.x
type: text
data: $instance1
"""
Explanation: The first and second instances are predicted correctly. The third is wrong. Below we run "%%ml explain" to understand more.
End of explanation
"""
%%ml explain --detailview_only
model: ./train/model
labels: talk.politics.guns,talk.politics.mideast
type: text
data: $instance2
"""
Explanation: On instance 2, the top prediction does not match the truth label: the model predicts "talk.politics.guns" while the truth is "talk.politics.mideast". So let's analyze these two labels.
End of explanation
"""
!gsutil -q mb gs://bq-mlworkbench-20news-lab

# Move the regular model to GCS
!gsutil -m cp -r ./train/model gs://bq-mlworkbench-20news-lab
"""
Explanation: Deploying Model to ML Engine
Now that we have a trained model, have analyzed the results, and have tested the model output locally, we are ready to deploy it to the cloud for real predictions.
Deploying a model requires the files to be on GCS. The next few cells make a bucket on GCS, copy the locally trained model there, and deploy it.
End of explanation
"""
%%ml model deploy
path: gs://bq-mlworkbench-20news-lab
name: news.alpha
"""
Explanation: See this doc https://cloud.google.com/ml-engine/docs/how-tos/managing-models-jobs for the definition of ML Engine models and versions. An ML Engine version runs predictions and is contained in an ML Engine model. We will create a new ML Engine model, and deploy the TensorFlow graph as an ML Engine version. This can be done using gcloud (see https://cloud.google.com/ml-engine/docs/how-tos/deploying-models), or Datalab, which we use below.
End of explanation
"""
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

# Store your project ID, model name, and version name in the format the API needs.
api_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(
    your_project_ID=google.datalab.Context.default().project_id,
    model_name='news',
    version_name='alpha')

# Get application default credentials (possible only if the gcloud tool is
# configured on your machine). See
# https://developers.google.com/identity/protocols/application-default-credentials
# for more info.
credentials = GoogleCredentials.get_application_default()

# Build a representation of the Cloud ML API.
ml = discovery.build('ml', 'v1', credentials=credentials)

# Create a dictionary containing data to predict.
# Note that the data is a list of csv strings.
body = {'instances': ['nasa', 'windows ex']}

# Create a request
request = ml.projects().predict(
    name=api_path,
    body=body)

print('The JSON request: \n')
print(request.to_json())

# Make the call.
try:
    response = request.execute()
    print('\nThe response:\n')
    print(json.dumps(response, indent=2))
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error. Check the details:')
    print(err._get_reason())
"""
Explanation: How to Build Your Own Prediction Client
A common task is to call a deployed model from different applications. Below is an example of writing a Python client to run prediction. Covering model permission topics is outside the scope of this notebook, but for more information see https://cloud.google.com/ml-engine/docs/tutorials/python-guide and https://developers.google.com/identity/protocols/application-default-credentials .
End of explanation
"""
# The output of this cell is placed in the name box

# Store your project ID, model name, and version name in the format the API needs.
api_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(
    your_project_ID=google.datalab.Context.default().project_id,
    model_name='news',
    version_name='alpha')

print('Place the following in the name box')
print(api_path)
"""
Explanation: To demonstrate the prediction client further, check out the API Explorer (https://developers.google.com/apis-explorer). It allows you to send raw HTTP requests to many Google APIs. This is useful for understanding the requests and responses, and helps you build your own client in your favorite language. Please visit https://developers.google.com/apis-explorer/#search/ml%20engine/ml/v1/ml.projects.predict and enter the following values for each text box.
End of explanation
"""
print('Place the following in the request body box')
request = {'instances': ['nasa', 'windows xp']}
print(json.dumps(request))
"""
Explanation: The fields text box can be empty. Note that because we deployed the non-evaluation model, our deployed model takes a csv input with only one column. In general, "instances" is a list of csv strings for models trained by MLWorkbench.
Click in the request body box, and note that a small drop-down menu appears at the FAR RIGHT of the input box. Select "Freeform editor". Then enter the following in the request body box.
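The request body used above can also be built and checked programmatically. A small sketch — the helper is our own, but the {'instances': [...]} shape is the format the predict API expects:

```python
import json

def make_predict_body(texts):
    """Build the JSON body for the predict call: a list of csv strings
    under the 'instances' key."""
    if not all(isinstance(t, str) for t in texts):
        raise TypeError("every instance must be a csv string")
    return json.dumps({"instances": list(texts)})

body = make_predict_body(["nasa", "windows xp"])
print(body)  # {"instances": ["nasa", "windows xp"]}
```

Validating the instance types up front turns a confusing server-side 400 error into an immediate, local TypeError.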
End of explanation
"""
%%ml model delete
name: news.alpha

%%ml model delete
name: news

# Delete the GCS bucket
!gsutil -m rm -r gs://bq-mlworkbench-20news-lab

# Delete the BQ dataset and its tables
bq.Dataset('newspredict').delete(delete_contents=True)
"""
Explanation: Then click the "Authorize and execute" button. The prediction results are returned in the browser.
Cleaning up the deployed model
End of explanation
"""
InsightSoftwareConsortium/SimpleITK-Notebooks
Python/300_Segmentation_Overview.ipynb
apache-2.0
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

import SimpleITK as sitk

# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata

from myshow import myshow, myshow3d

img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))

# To visualize the labels image in RGB we need an image with a 0-255 range
img_T1_255 = sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8)
img_T2_255 = sitk.Cast(sitk.RescaleIntensity(img_T2), sitk.sitkUInt8)

myshow3d(img_T1)
"""
Explanation: Introduction to ITK Segmentation in SimpleITK Notebooks
<a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F300_Segmentation_Overview.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
<b>Goal</b>: To become familiar with basic segmentation algorithms available in ITK, and interactively explore their parameter space.
Image segmentation filters process an image to partition it into (hopefully) meaningful regions. The output is commonly an image of integers where each integer can represent an object. The value 0 is commonly used for the background, and 1 (sometimes 255) for a foreground object.
End of explanation
"""
seg = img_T1 > 200
myshow(sitk.LabelOverlay(img_T1_255, seg), "Basic Thresholding")

seg = sitk.BinaryThreshold(
    img_T1, lowerThreshold=100, upperThreshold=400, insideValue=1, outsideValue=0
)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Binary Thresholding")
"""
Explanation: Thresholding
Thresholding is the most basic form of segmentation. It simply labels the pixels of an image based on an intensity range, without regard to geometry or connectivity.
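The two thresholding operations above can be mimicked on a toy NumPy array (a small stand-in for the MRI volume, with made-up values) to see exactly what they compute:

```python
import numpy as np

img = np.array([[ 50, 120, 300],
                [410, 150,  90],
                [200, 220,  10]])

# Equivalent of `img_T1 > 200`: a single threshold.
basic = (img > 200).astype(np.uint8)

# Equivalent of BinaryThreshold with lower/upper bounds and inside/outside values.
binary = np.where((img >= 100) & (img <= 400), 1, 0).astype(np.uint8)

print(basic.sum(), binary.sum())  # 3 5
```

Note that the two rules label different pixel sets: 410 passes the single threshold but falls outside the [100, 400] window.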
End of explanation
"""
otsu_filter = sitk.OtsuThresholdImageFilter()
otsu_filter.SetInsideValue(0)
otsu_filter.SetOutsideValue(1)
seg = otsu_filter.Execute(img_T1)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Otsu Thresholding")

print(otsu_filter.GetThreshold())
"""
Explanation: ITK has a number of histogram-based automatic thresholding filters, including Huang, MaximumEntropy, Triangle, and the popular Otsu's method. These methods create a histogram and then use a heuristic to determine a threshold value.
End of explanation
"""
seed = (132, 142, 96)
seg = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
seg.CopyInformation(img_T1)
seg[seed] = 1
seg = sitk.BinaryDilate(seg, [3] * 3)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Initial Seed")

seg = sitk.ConnectedThreshold(img_T1, seedList=[seed], lower=100, upper=190)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Connected Threshold")
"""
Explanation: Region Growing Segmentation
The first improvement upon naive thresholding is a class of algorithms called region growing. This includes:
<ul>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ConnectedThresholdImageFilter.html">ConnectedThreshold</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ConfidenceConnectedImageFilter.html">ConfidenceConnected</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1VectorConfidenceConnectedImageFilter.html">VectorConfidenceConnected</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1NeighborhoodConnectedImageFilter.html">NeighborhoodConnected</a></li>
</ul>
Earlier we used 3D Slicer to determine that index (132, 142, 96) was a good seed for the left lateral ventricle.
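The idea behind ConnectedThreshold — accept a pixel only if its intensity lies in the window and it is connected to the seed — can be sketched in pure Python/NumPy. This is a simplified 2D, 4-connected toy version, not the ITK implementation:

```python
from collections import deque
import numpy as np

def connected_threshold(img, seed, lower, upper):
    """Grow a region from `seed`, accepting 4-connected neighbors whose
    intensity lies in [lower, upper]."""
    seg = np.zeros_like(img, dtype=np.uint8)
    if not (lower <= img[seed] <= upper):
        return seg  # seed itself is out of range: empty segmentation
    q = deque([seed])
    seg[seed] = 1
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not seg[nr, nc] and lower <= img[nr, nc] <= upper):
                seg[nr, nc] = 1
                q.append((nr, nc))
    return seg
```

Unlike plain thresholding, pixels that are in range but disconnected from the seed stay unlabeled — which is exactly why region growing avoids picking up distant structures with similar intensities.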
End of explanation """ seg = sitk.ConfidenceConnected( img_T1, seedList=[seed], numberOfIterations=1, multiplier=2.5, initialNeighborhoodRadius=1, replaceValue=1, ) myshow(sitk.LabelOverlay(img_T1_255, seg), "ConfidenceConnected") img_multi = sitk.Compose(img_T1, img_T2) seg = sitk.VectorConfidenceConnected( img_multi, seedList=[seed], numberOfIterations=1, multiplier=2.5, initialNeighborhoodRadius=1, ) myshow(sitk.LabelOverlay(img_T2_255, seg)) """ Explanation: Improving upon this is the ConfidenceConnected filter, which uses the initial seed or current segmentation to estimate the threshold range. End of explanation """ seed = (132, 142, 96) feature_img = sitk.GradientMagnitudeRecursiveGaussian(img_T1, sigma=0.5) speed_img = sitk.BoundedReciprocal( feature_img ) # This is parameter free unlike the Sigmoid myshow(speed_img) """ Explanation: Fast Marching Segmentation The FastMarchingImageFilter implements a fast marching solution to a simple level set evolution problem (eikonal equation). In this example, the speed term used in the differential equation is provided in the form of an image. The speed image is based on the gradient magnitude and mapped with the bounded reciprocal $1/(1+x)$. End of explanation """ fm_filter = sitk.FastMarchingBaseImageFilter() fm_filter.SetTrialPoints([seed]) fm_filter.SetStoppingValue(1000) fm_img = fm_filter.Execute(speed_img) myshow( sitk.Threshold( fm_img, lower=0.0, upper=fm_filter.GetStoppingValue(), outsideValue=fm_filter.GetStoppingValue() + 1, ) ) def fm_callback(img, time, z): seg = img < time myshow(sitk.LabelOverlay(img_T1_255[:, :, z], seg[:, :, z])) interact( lambda **kwargs: fm_callback(fm_img, **kwargs), time=FloatSlider(min=0.05, max=1000.0, step=0.05, value=100.0), z=(0, fm_img.GetSize()[2] - 1), ) """ Explanation: The output of the FastMarchingImageFilter is a <b>time-crossing map</b> that indicates, for each pixel, how much time it would take for the front to arrive at the pixel location. 
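A time-crossing map of this kind can be approximated with a Dijkstra-style sweep on a toy grid — a rough sketch of what fast marching computes, where the cost of entering a pixel is 1/speed (the real algorithm solves the eikonal equation more accurately):

```python
import heapq
import numpy as np

def arrival_times(speed, seed):
    """Dijkstra-style approximation of a fast-marching time-crossing map:
    the time to cross into a pixel is 1/speed at that pixel."""
    times = np.full(speed.shape, np.inf)
    times[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times[r, c]:
            continue  # stale entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < speed.shape[0] and 0 <= nc < speed.shape[1]:
                nt = t + 1.0 / speed[nr, nc]
                if nt < times[nr, nc]:
                    times[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times

print(arrival_times(np.ones((3, 3)), (0, 0)))
```

Thresholding this map at a time value, just like `fm_img < time` in the callback above, yields the segmentation at that moment of the front's evolution.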
End of explanation
"""
fm_filter = sitk.FastMarchingBaseImageFilter()
fm_filter.SetTrialPoints([seed])
fm_filter.SetStoppingValue(1000)
fm_img = fm_filter.Execute(speed_img)
myshow(
    sitk.Threshold(
        fm_img,
        lower=0.0,
        upper=fm_filter.GetStoppingValue(),
        outsideValue=fm_filter.GetStoppingValue() + 1,
    )
)

def fm_callback(img, time, z):
    seg = img < time
    myshow(sitk.LabelOverlay(img_T1_255[:, :, z], seg[:, :, z]))

interact(
    lambda **kwargs: fm_callback(fm_img, **kwargs),
    time=FloatSlider(min=0.05, max=1000.0, step=0.05, value=100.0),
    z=(0, fm_img.GetSize()[2] - 1),
)
"""
Explanation: Level-Set Segmentation
There are a variety of level-set based segmentation filters available in ITK:
<ul>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1GeodesicActiveContourLevelSetImageFilter.html">GeodesicActiveContour</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ShapeDetectionLevelSetImageFilter.html">ShapeDetection</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ThresholdSegmentationLevelSetImageFilter.html">ThresholdSegmentation</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1LaplacianSegmentationLevelSetImageFilter.html">LaplacianSegmentation</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScalarChanAndVeseDenseLevelSetImageFilter.html">ScalarChanAndVese</a></li>
</ul>
There is also a <a href="http://www.itk.org/Doxygen/html/group__ITKLevelSetsv4.html">modular Level-set framework</a> which allows composition of terms and easy extension in C++.
First we create a label image from our seed.
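Level-set evolution is typically initialized with a signed distance map of the seed label, with the convention that the value is positive inside the region and negative outside. A pure-NumPy/BFS sketch of that convention — using city-block distance rather than the exact Euclidean distance a filter like SignedMaurerDistanceMap computes:

```python
import numpy as np
from collections import deque

def _bfs_distance(sources, shape):
    """Multi-source city-block BFS distance from a set of source pixels."""
    dist = np.full(shape, np.inf)
    q = deque(sources)
    for s in sources:
        dist[s] = 0.0
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < shape[0] and 0 <= nc < shape[1] and np.isinf(dist[nr, nc]):
                dist[nr, nc] = dist[r, c] + 1.0
                q.append((nr, nc))
    return dist

def signed_distance(mask):
    """Positive inside the mask, negative outside (the insideIsPositive convention)."""
    inside = list(zip(*np.nonzero(mask)))
    outside = list(zip(*np.nonzero(~mask)))
    d_to_mask = _bfs_distance(inside, mask.shape)  # 0 on the mask, grows outward
    d_to_bg = _bfs_distance(outside, mask.shape)   # 0 off the mask, grows inward
    return np.where(mask, d_to_bg, -d_to_mask)
```

The zero crossing of this map is the initial contour that the level-set filter then evolves.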
End of explanation """ stats = sitk.LabelStatisticsImageFilter() stats.Execute(img_T1, seg) factor = 3.5 lower_threshold = stats.GetMean(1) - factor * stats.GetSigma(1) upper_threshold = stats.GetMean(1) + factor * stats.GetSigma(1) print(lower_threshold, upper_threshold) init_ls = sitk.SignedMaurerDistanceMap(seg, insideIsPositive=True, useImageSpacing=True) lsFilter = sitk.ThresholdSegmentationLevelSetImageFilter() lsFilter.SetLowerThreshold(lower_threshold) lsFilter.SetUpperThreshold(upper_threshold) lsFilter.SetMaximumRMSError(0.02) lsFilter.SetNumberOfIterations(1000) lsFilter.SetCurvatureScaling(0.5) lsFilter.SetPropagationScaling(1) lsFilter.ReverseExpansionDirectionOn() ls = lsFilter.Execute(init_ls, sitk.Cast(img_T1, sitk.sitkFloat32)) print(lsFilter) myshow(sitk.LabelOverlay(img_T1_255, ls > 0)) """ Explanation: Use the seed to estimate a reasonable threshold range. End of explanation """
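The mean ± factor·sigma window computed above is easy to reproduce directly. A plain-NumPy sketch — the sample intensities are illustrative, not taken from the MRI:

```python
import numpy as np

def threshold_range(values, factor=3.5):
    """Intensity window for the threshold level set: mean +/- factor * standard
    deviation of the pixel values under the seed label."""
    mean = float(np.mean(values))
    sigma = float(np.std(values))
    return mean - factor * sigma, mean + factor * sigma

lower, upper = threshold_range([100, 110, 120, 130, 140], factor=2.0)
print(round(lower, 1), round(upper, 1))  # 91.7 148.3
```

Widening `factor` grows the accepted window and lets the contour expand into more tissue; shrinking it keeps the segmentation tight around intensities similar to the seed region.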