Source: zaqwes8811/micro-apps, self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/02-Discrete-Bayes.ipynb (MIT license)
from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() """ Explanation: Table of Contents Discrete Bayes Filter End of explanation """ import numpy as np belief = np.array([1./10]*10) print(belief) """ Explanation: The Kalman filter belongs to a family of filters called Bayesian filters. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level. That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader. I will develop the topic a different way, and for this approach I owe a great debt to the work of Dieter Fox and Sebastian Thrun. It depends on building an intuition of how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing this book in favor of the first few lessons of that course, and then coming back to this book for a deeper dive into the topic. Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking. Tracking a Dog Let's begin with a simple problem. We have a dog-friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them.
So during a hackathon somebody invented a sonar sensor to attach to the dog's collar. It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to the network via wifi and sends an update once a second. I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read door, hall, hall, and so on. How can I use that information to determine where Simon is? To keep the problem small enough to plot easily we will assume that there are only 10 positions in the hallway, which we will number 0 to 9, where 1 is to the right of 0. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0. When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. From my perspective he is equally likely to be in any position. There are 10 positions, so the probability that he is in any given position is 1/10. Let's represent our belief of his position in a NumPy array. I could use a Python list, but NumPy arrays offer functionality that we will be using soon. End of explanation """ hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0]) """ Explanation: In Bayesian statistics this is called a prior. It is the probability prior to incorporating measurements or other information. More completely, this is called the prior probability distribution. A probability distribution is a collection of all possible probabilities for an event. 
Probability distributions always sum to 1 because something had to happen; the distribution lists all possible events and the probability of each. I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats probability as a belief about a single event. Let's take an example. I know that if I flip a fair coin infinitely many times I will get 50% heads and 50% tails. This is called frequentist statistics to distinguish it from Bayesian statistics. Computations are based on the frequency in which events occur. I flip the coin one more time and let it land. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. In some ways it is meaningless to assign a probability to the current state of the coin. It is either heads or tails, we just don't know which. Bayes treats this as a belief about a single event - the strength of my belief or knowledge that this specific coin flip is heads is 50%. Some object to the term "belief"; belief can imply holding something to be true without evidence. In this book it always is a measure of the strength of our knowledge. We'll learn more about this as we go. Bayesian statistics takes past information (the prior) into account. We observe that it rains 4 times every 100 days. From this I could state that the chance of rain tomorrow is 1/25. This is not how weather prediction is done. If I know it is raining today and the storm front is stalled, it is likely to rain tomorrow. Weather prediction is Bayesian. In practice statisticians use a mix of frequentist and Bayesian techniques. Sometimes finding the prior is difficult or impossible, and frequentist techniques rule. In this book we can find the prior. 
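A quick simulation makes the frequentist statement concrete. This is a minimal sketch (the seed and number of flips are arbitrary choices of mine, not from the text):

```python
import numpy as np

# Frequentist probability: the long-run frequency of heads over many flips.
rng = np.random.default_rng(seed=1)
flips = rng.integers(0, 2, size=100_000)  # 0 = tails, 1 = heads
freq_heads = flips.mean()
print(freq_heads)  # close to 0.5, but says nothing about any single flip
```

No matter how many flips we simulate, the result describes the ensemble of flips, not the coin currently lying on the table - that is the gap that the Bayesian notion of belief fills.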
When I talk about the probability of something I am referring to the probability that some specific thing is true given past events. When I do that I'm taking the Bayesian approach. Now let's create a map of the hallway. We'll place the first two doors close together, and then another door further away. We will use 1 for doors, and 0 for walls: End of explanation """ import kf_book.book_plots as book_plots from kf_book.book_plots import figsize, set_figsize import matplotlib.pyplot as plt belief = np.array([1./3, 1./3, 0, 0, 0, 0, 0, 0, 1/3, 0]) book_plots.bar_plot(belief) """ Explanation: I start listening to Simon's transmissions on the network, and the first data I get from the sensor is door. For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no reason to believe he is in front of the first, second, or third door. What I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door. End of explanation """ belief = hallway * (1./3) print(belief) """ Explanation: This distribution is called a categorical distribution, which is a discrete distribution describing the probability of observing $n$ outcomes. It is a multimodal distribution because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that we have narrowed down our knowledge to one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of being at door 0, 33.3% at door 1, and a 33.3% chance of being at door 8. This is an improvement in two ways. I've rejected a number of hallway positions as impossible, and the strength of my belief in the remaining positions has increased from 10% to 33%. This will always happen. As our knowledge improves the probabilities will get closer to 100%. 
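The same categorical distribution can be built without hard-coding the 1/3: dividing the map by its sum normalizes any hallway layout. A minimal sketch, equivalent to the hallway * (1./3) computation for this particular map:

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# Spread probability evenly over the positions matching the 'door' reading.
belief = hallway / hallway.sum()
print(belief)        # 1/3 at positions 0, 1, and 8; zero elsewhere
print(belief.sum())  # 1.0 - a valid probability distribution
```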
A few words about the [mode](https://en.wikipedia.org/wiki/Mode_(statistics&#41;) of a distribution. Given a set of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the mode is the number that occurs most often. For this set the mode is 2. A set can contain more than one mode. The set {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former set is unimodal, and the latter is multimodal. Another term used for this distribution is a histogram. Histograms graphically depict the distribution of a set of numbers. The bar chart above is a histogram. I hand-coded the belief array in the code above. How would we implement this in code? We represent doors with 1, and walls as 0, so we will multiply the hallway variable by the percentage, like so: End of explanation """ belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]) """ Explanation: Extracting Information from Sensor Readings Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor: door move right door Can we deduce Simon's location? Of course! Given the hallway's layout there is only one place from which you can get this sequence, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. The only possibility is that he is now in front of the second door. Our belief is: End of explanation """ def update_belief(hall, belief, z, correct_scale): for i, val in enumerate(hall): if val == z: belief[i] *= correct_scale belief = np.array([0.1] * 10) reading = 1 # 1 is 'door' update_belief(hallway, belief, z=reading, correct_scale=3.)
print('belief:', belief) print('sum =', sum(belief)) plt.figure() book_plots.bar_plot(belief) """ Explanation: I designed the hallway layout and sensor readings to give us an exact answer quickly. Real problems are not so clear-cut. But this should trigger your intuition - the first sensor reading only gave us low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we know more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors, then after several sensor readings and position updates we would either know where Simon was, or have narrowed the possibilities down to a small number. This is possible when a set of sensor readings matches only one or a few starting locations. We could implement this solution now, but instead let's consider a real-world complication to the problem. Noisy Sensors Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or might misread if he is not facing down the hallway. Thus when I get door I cannot use 1/3 as the probability. I have to assign less than 1/3 to each door, and assign a small probability to each blank wall position. Something like Python [.31, .31, .01, .01, .01, .01, .01, .01, .31, .01] At first this may seem insurmountable. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure? The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise. Say we get a reading of door, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door.
If we do that the result will no longer be a probability distribution, but we will learn how to fix that in a moment. Let's look at that in Python code. Here I use the variable z to denote the measurement. z or y are customary choices in the literature for the measurement. As a programmer I prefer meaningful variable names, but I want you to be able to read the literature and/or other filtering code, so I will start introducing these abbreviated names now. End of explanation """ belief / sum(belief) """ Explanation: This is not a probability distribution because it does not sum to 1.0. But the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list. That is easy with NumPy: End of explanation """ hallway == 1 """ Explanation: FilterPy implements this with the normalize function: Python from filterpy.discrete_bayes import normalize normalize(belief) It is a bit odd to say "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that. The equation for that is $$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}}{1-prob_{correct}}$$ Also, the for loop is cumbersome. As a general rule you will want to avoid using for loops in NumPy code. NumPy is implemented in C and Fortran, so if you avoid for loops the result often runs 100x faster than the equivalent loop. How do we get rid of this for loop? NumPy lets you index arrays with boolean arrays. You create a boolean array with logical operators. We can find all the doors in the hallway with: End of explanation """ from filterpy.discrete_bayes import normalize
def scaled_update(hall, belief, z, z_prob): scale = z_prob / (1. - z_prob) belief[hall==z] *= scale normalize(belief) belief = np.array([0.1] * 10) scaled_update(hallway, belief, z=1, z_prob=.75) print('sum =', sum(belief)) print('probability of door =', belief[0]) print('probability of wall =', belief[2]) book_plots.bar_plot(belief, ylim=(0, .3)) """ Explanation: When you use the boolean array as an index to another array it returns only the elements where the index is True. Thus we can replace the for loop with python belief[hall==z] *= scale and only the elements which equal z will be multiplied by scale. Teaching you NumPy is beyond the scope of this book. I will use idiomatic NumPy constructs and explain them the first time I present them. If you are new to NumPy there are many blog posts and videos on how to use NumPy efficiently and idiomatically. Here is our improved version: End of explanation """ def scaled_update(hall, belief, z, z_prob): scale = z_prob / (1. - z_prob) likelihood = np.ones(len(hall)) likelihood[hall==z] *= scale return normalize(likelihood * belief) """ Explanation: We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions. This result is called the posterior, which is short for posterior probability distribution. All this means is a probability distribution after incorporating the measurement information (posterior means 'after' in this context). To review, the prior is the probability distribution before including the measurement's information. Another term is the likelihood.
When we computed belief[hall==z] *= scale we were computing how likely each position was given the measurement. The likelihood is not a probability distribution because it does not sum to one. The combination of these gives the equation $$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$ When we talk about the filter's output we typically call the state after performing the prediction the prior or prediction, and we call the state after the update either the posterior or the estimated state. It is very important to learn and internalize these terms as most of the literature uses them extensively. Does scaled_update() perform this computation? It does. Let me recast it into this form: End of explanation """ from filterpy.discrete_bayes import update def lh_hallway(hall, z, z_prob): """ compute likelihood that a measurement matches positions in the hallway.""" try: scale = z_prob / (1. - z_prob) except ZeroDivisionError: scale = 1e8 likelihood = np.ones(len(hall)) likelihood[hall==z] *= scale return likelihood belief = np.array([0.1] * 10) likelihood = lh_hallway(hallway, z=1, z_prob=.75) update(likelihood, belief) """ Explanation: This function is not fully general. It contains knowledge about the hallway, and how we match measurements to it. We always strive to write general functions. Here we will remove the computation of the likelihood from the function, and require the caller to compute the likelihood themselves. Here is a full implementation of the algorithm: python def update(likelihood, prior): return normalize(likelihood * prior) Computation of the likelihood varies per problem. For example, the sensor might not return just 1 or 0, but a float between 0 and 1 indicating the probability of being in front of a door. It might use computer vision and report a blob shape that you then probabilistically match to a door. It might use sonar and return a distance reading. 
In each case the computation of the likelihood will be different. We will see many examples of this throughout the book, and learn how to perform these calculations. FilterPy implements update. Here is the previous example in a fully general form: End of explanation """ def perfect_predict(belief, move): """ move the position by `move` spaces, where positive is to the right, and negative is to the left """ n = len(belief) result = np.zeros(n) for i in range(n): result[i] = belief[(i-move) % n] return result belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05]) plt.subplot(121) book_plots.bar_plot(belief, title='Before prediction', ylim=(0, .4)) belief = perfect_predict(belief, 1) plt.subplot(122) book_plots.bar_plot(belief, title='After prediction', ylim=(0, .4)) """ Explanation: Incorporating Movement Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors? Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math. First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our belief array? I hope that after a moment's thought it is clear that we should shift all the values one space to the right. 
If we previously thought there was a 50% chance of Simon being at position 3, then after he moved one position to the right we should believe that there is a 50% chance he is at position 4. The hallway is circular, so we will use modulo arithmetic to perform the shift. End of explanation """ from ipywidgets import interact, IntSlider belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05]) beliefs = [] for _ in range(20): # Simon takes one step to the right belief = perfect_predict(belief, 1) beliefs.append(belief) def simulate(time_step): book_plots.bar_plot(beliefs[time_step], ylim=(0, .4)) interact(simulate, time_step=IntSlider(value=0, max=len(beliefs)-1)); """ Explanation: We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning. The next cell animates this so you can see it in action. Use the slider to move forwards and backwards in time. This simulates Simon walking around and around the hallway. It does not yet incorporate new measurements so the probability distribution does not change shape, only position. End of explanation """ def predict_move(belief, move, p_under, p_correct, p_over): n = len(belief) prior = np.zeros(n) for i in range(n): prior[i] = ( belief[(i-move) % n] * p_correct + belief[(i-move-1) % n] * p_over + belief[(i-move+1) % n] * p_under) return prior belief = [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.] prior = predict_move(belief, 2, .1, .8, .1) book_plots.plot_belief_vs_prior(belief, prior) """ Explanation: Terminology Let's pause a moment to review terminology. I introduced this terminology in the last chapter, but let's take a second to help solidify your knowledge. The system is what we are trying to model or filter. Here the system is our dog. The state is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the estimated state of the system. 
In practice this often gets called the state, so be careful to understand the context. One cycle of prediction and updating with a measurement is called the state or system evolution, which is short for time evolution [7]. Another term is system propagation. It refers to how the state of the system changes over time. For filters, time is usually a discrete step, such as 1 second. For our dog tracker the system state is the position of the dog, and the state evolution is the position after a discrete amount of time has passed. We model the system behavior with the process model. Here, our process model is that the dog moves one or more positions at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the system error or process error. The prediction is our new prior. Time has moved forward and we made a prediction without benefit of knowing the measurements. Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds? Clearly, $$ \begin{aligned} \bar x &= 17 + (15 \cdot 2) \\ &= 47 \end{aligned}$$ I use bars over variables to indicate that they are priors (predictions). We can write the equation for the process model like this: $$ \bar x_{k+1} = f_x(\bullet) + x_k$$ $x_k$ is the current position or state. If the dog is at 17 m then $x_k = 17$. $f_x(\bullet)$ is the state propagation function for x. It describes how much $x_k$ changes over one time step. For our example it performs the computation $15 \cdot 2$ so we would define it as $$f_x(v_x, t) = v_x t$$ Adding Uncertainty to the Prediction perfect_predict() assumes perfect measurements, but all sensors have noise. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? This may sound like an insurmountable problem, but let's model it and see what happens.
Assume that the sensor's movement measurement is 80% likely to be correct, 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if the movement measurement is 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces. Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider the reported movement of 2. If we are 100% certain the dog started from position 3, then there is an 80% chance he is at 5, and a 10% chance for either 4 or 6. Let's try coding that: End of explanation """ belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0] prior = predict_move(belief, 2, .1, .8, .1) book_plots.plot_belief_vs_prior(belief, prior) prior """ Explanation: It appears to work correctly. Now what happens when our belief is not 100% certain? End of explanation """ belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) beliefs = [] for i in range(100): belief = predict_move(belief, 1, .1, .8, .1) beliefs.append(belief) print('Final Belief:', belief) # make interactive plot def show_prior(step): book_plots.bar_plot(beliefs[step-1]) plt.title('Step {}'.format(step)) interact(show_prior, step=IntSlider(value=1, max=len(beliefs))); print('Final Belief:', belief) """ Explanation: Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step. 
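Following the author's advice to work examples, here is a self-contained sketch (predict_move repeated from above so the example runs on its own) that checks the uncertain-belief case against hand calculations:

```python
import numpy as np

def predict_move(belief, move, p_under, p_correct, p_over):
    # Same loop as in the text: each prior cell sums the correct,
    # overshoot, and undershoot contributions from the shifted belief.
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (belief[(i-move) % n]   * p_correct +
                    belief[(i-move-1) % n] * p_over +
                    belief[(i-move+1) % n] * p_under)
    return prior

belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)

# By hand: prior[3] = 0.4*0.1             = 0.04  (0.4 belief undershot)
#          prior[4] = 0.4*0.8 + 0.6*0.1   = 0.38
#          prior[5] = 0.6*0.8 + 0.4*0.1   = 0.52
#          prior[6] = 0.6*0.1             = 0.06  (0.6 belief overshot)
print(prior)
```

Note that the four nonzero entries still sum to 1: the prediction spreads probability around, but never creates or destroys it.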
If you look at the probabilities after performing the update you might be dismayed. In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the update the probabilities are not only lowered, but they are strewn out across the map. This is not a coincidence, or the result of a carefully chosen example - it is always true of the prediction. If the sensor is noisy we lose some information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the belief array. Let's try this with 100 iterations. The plot is animated; use the slider to change the step number. End of explanation """ def predict_move_convolution(pdf, offset, kernel): N = len(pdf) kN = len(kernel) width = int((kN - 1) / 2) prior = np.zeros(N) for i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N prior[i] += pdf[index] * kernel[k] return prior """ Explanation: After 100 iterations we have lost almost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of differing number of updates. For example, after 100 updates a small amount of information is left, after 50 a lot is left, but by 200 iterations essentially all information is lost. And, if you are viewing this online here is an animation of that output. <img src="animations/02_no_info.gif"> I will not generate these standalone animations through the rest of the book. Please see the preface for instructions to run this book on the web, for free, or install IPython on your computer. This will allow you to run all of the cells and see the animations. It's very important that you practice with this code, not just read passively. 
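The claim that repeated predictions destroy all information can be tested directly. A sketch, with the iteration count (500) my own choice, large enough for convergence on this 10-position hallway:

```python
import numpy as np

def predict_move(belief, move, p_under, p_correct, p_over):
    # Prediction step from the text, repeated so this sketch is standalone.
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (belief[(i-move) % n]   * p_correct +
                    belief[(i-move-1) % n] * p_over +
                    belief[(i-move+1) % n] * p_under)
    return prior

belief = np.zeros(10)
belief[0] = 1.0  # 100% certain the dog starts at position 0

for _ in range(500):
    belief = predict_move(belief, 1, .1, .8, .1)

# Every noisy prediction spreads probability, so after many iterations
# the belief is essentially uniform - all information is gone.
print(belief.max() - belief.min())  # very close to 0
```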
Generalizing with Convolution We made the assumption that the movement error is at most one position. But it is possible for the error to be two, three, or more positions. As programmers we always want to generalize our code so that it works for all cases. This is easily solved with convolution. Convolution modifies one function with another function. In our case we are modifying a probability distribution with the error function of the sensor. The implementation of predict_move() is a convolution, though we did not call it that. Formally, convolution is defined as $$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$ where $f\ast g$ is the notation for convolving f by g. It does not mean multiply. Integrals are for continuous functions, but we are using discrete functions. We replace the integral with a summation, and the parentheses with array brackets. $$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$ Comparison shows that predict_move() is computing this equation - it computes the sum of a series of multiplications. Khan Academy [4] has a good introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear. You slide an array called the kernel across another array, multiplying the neighbors of the current cell with the values of the second array. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array [0.1, 0.8, 0.1]. All we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named it pdf.
End of explanation """ from filterpy.discrete_bayes import predict belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05] prior = predict(belief, offset=1, kernel=[.1, .8, .1]) book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6)) """ Explanation: This illustrates the algorithm, but it runs very slow. SciPy provides a convolution routine convolve() in the ndimage.filters module. We need to shift the pdf by offset before convolution; np.roll() does that. The move and predict algorithm can be implemented with one line: python convolve(np.roll(pdf, offset), kernel, mode='wrap') FilterPy implements this with discrete_bayes' predict() function. End of explanation """ prior = predict(belief, offset=3, kernel=[.05, .05, .6, .2, .1]) book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6)) """ Explanation: All of the elements are unchanged except the middle ones. The values in position 4 and 6 should be $$(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$ Position 5 should be $$(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$$ Let's ensure that it shifts the positions correctly for movements greater than one and for asymmetric kernels. End of explanation """ from filterpy.discrete_bayes import update hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0]) prior = np.array([.1] * 10) likelihood = lh_hallway(hallway, z=1, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5)) """ Explanation: The position was correctly shifted by 3 positions and we give more weight to the likelihood of an overshoot vs an undershoot, so this looks correct. Make sure you understand what we are doing. We are making a prediction of where the dog is moving, and convolving the probabilities to get the prior. 
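Those hand calculations can be verified with a standalone version of the prediction, repeated here (the same double loop as predict_move_convolution above) so the check runs without FilterPy or the plotting helpers:

```python
import numpy as np

def predict(pdf, offset, kernel):
    # Shift by offset and convolve with the kernel, wrapping circularly.
    pdf = np.asarray(pdf)
    kernel = np.asarray(kernel)
    n, kn = len(pdf), len(kernel)
    width = (kn - 1) // 2
    prior = np.zeros(n)
    for i in range(n):
        for k in range(kn):
            prior[i] += pdf[(i + (width - k) - offset) % n] * kernel[k]
    return prior

belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict(belief, offset=1, kernel=[.1, .8, .1])

# Positions 4 and 6: (0.1*0.05) + (0.8*0.05) + (0.1*0.55) = 0.1
# Position 5:        (0.1*0.05) + (0.8*0.55) + (0.1*0.05) = 0.45
print(prior[4], prior[5], prior[6])
```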
If we weren't using probabilities we would use this equation that I gave earlier: $$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$ The prior, our prediction of where the dog will be, is the amount the dog moved plus his current position. The dog was at 10, he moved 5 meters, so he is now at 15 m. It couldn't be simpler. But we are using probabilities to model this, so our equation is: $$ \bar{\mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$ We are convolving the current probabilistic position estimate with a probabilistic estimate of how much we think the dog moved. It's the same concept, but the math is slightly different. $\mathbf x$ is bold to denote that it is an array of numbers. Integrating Measurements and Movement Updates The problem of losing information during a prediction may make it seem as if our system would quickly devolve into having no knowledge. However, each prediction is followed by an update where we incorporate the measurement into the estimate. The update improves our knowledge. The output of the update step is fed into the next prediction. The prediction degrades our certainty. That is passed into another update, where certainty is again increased. Let's think about this intuitively. Consider a simple case - you are tracking a dog while he sits still. During each prediction you predict he doesn't move. Your filter quickly converges on an accurate estimate of his position. Then the microwave in the kitchen turns on, and he goes streaking off. You don't know this, so at the next prediction you predict he is in the same spot. But the measurements tell a different story. As you incorporate the measurements your belief will be smeared along the hallway, leading towards the kitchen. On every epoch (cycle) your belief that he is sitting still will get smaller, and your belief that he is inbound towards the kitchen at a startling rate of speed increases. That is what intuition tells us. What does the math tell us?
We have already programmed the update and predict steps. All we need to do is feed the result of one into the other, and we will have implemented a dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right one position each epoch. As in a real world application, we will start with no knowledge of his position by assigning equal probability to all positions. End of explanation """ kernel = (.1, .8, .1) prior = predict(posterior, 1, kernel) book_plots.plot_prior_vs_posterior(prior, posterior, True, ylim=(0,.5)) """ Explanation: After the first update we have assigned a high probability to each door position, and a low probability to each wall position. End of explanation """ likelihood = lh_hallway(hallway, z=1, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5)) """ Explanation: The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense. End of explanation """ prior = predict(posterior, 1, kernel) likelihood = lh_hallway(hallway, z=0, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5)) """ Explanation: Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall. End of explanation """ prior = predict(posterior, 1, kernel) likelihood = lh_hallway(hallway, z=0, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5)) """ Explanation: This is exciting! We have a very prominent bar at position 2 with a value of around 35%. 
It is over twice the value of any other bar in the plot, and is about 4% larger than our last plot, where the tallest bar was around 31%. Let's see one more cycle. End of explanation """ book_plots.predict_update_chart() """ Explanation: I ignored an important issue. Earlier I assumed that we had a motion sensor for the predict step; then, when talking about the dog and the microwave I assumed that you had no knowledge that he suddenly began running. I mentioned that your belief that the dog is running would increase over time, but I did not provide any code for this. In short, how do we detect and/or estimate changes in the process model if we aren't directly measuring it? For now I want to ignore this problem. In later chapters we will learn the mathematics behind this estimation; for now it is a large enough task just to learn this algorithm. It is profoundly important to solve this problem, but we haven't yet built enough of the mathematical apparatus that is required, and so for the remainder of the chapter we will ignore the problem by assuming we have a sensor that senses movement. 
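Before formalizing the algorithm, the full predict/update cycle is small enough to sketch without FilterPy at all. The helpers below are minimal stand-ins for the ones used above (the likelihood scales positions that agree with the reading by the odds z_prob/(1-z_prob)); treat this as a standalone illustration — running it inside the notebook would shadow the FilterPy imports:

```python
import numpy as np

def predict(pdf, offset, kernel):
    # shift the belief by `offset`, spreading it with the movement kernel
    prior, width = np.zeros(len(pdf)), (len(kernel) - 1) // 2
    for i, p in enumerate(pdf):
        for j, k in enumerate(kernel):
            prior[(i + offset + j - width) % len(pdf)] += p * k
    return prior

def update(likelihood, prior):
    posterior = likelihood * prior
    return posterior / posterior.sum()   # normalize to a valid distribution

def lh_hallway(hall, z, z_prob):
    # positions consistent with the reading get the odds ratio as weight
    likelihood = np.ones(len(hall))
    likelihood[hall == z] *= z_prob / (1. - z_prob)
    return likelihood

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
posterior = np.array([.1] * 10)          # start with no knowledge
for z in [1, 0, 0, 0]:                   # dog starts at 0, moves right 1/epoch
    prior = predict(posterior, 1, (.1, .8, .1))
    posterior = update(lh_hallway(hallway, z, .75), prior)
print(np.argmax(posterior))              # → 4
```

Four noisy cycles are enough to concentrate the belief on position 4, the only ending position consistent with seeing door, wall, wall, wall while moving right one step per epoch.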
The Discrete Bayes Algorithm This chart illustrates the algorithm: End of explanation
"""
book_plots.predict_update_chart()
"""
Explanation: This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter. The filter equations are: $$\begin{aligned} \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{Predict Step} \\ \mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{Update Step}\end{aligned}$$ $\mathcal L$ is the usual way to write the likelihood function, so I use that. The $\|\cdot\|$ notation denotes taking the norm. We need to normalize the product of the likelihood with the prior to ensure $\mathbf x$ is a probability distribution that sums to one. We can express this in pseudocode. Initialization 1. Initialize our belief in the state Predict 1. Based on the system behavior, predict state for the next time step 2. Adjust belief to account for the uncertainty in prediction Update 1. Get a measurement and associated belief about its accuracy 2. 
Compute how likely it is the measurement matches each state 3. Update state belief with this likelihood When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ. Algorithms in this form are sometimes called predictor-correctors. We make a prediction, then correct it. Let's animate this. First, let's write functions to perform the filtering and to plot the results at any step. I've plotted the position of the doorways in black. Priors are drawn in orange, and the posterior in blue. I draw a thick vertical line to indicate where Simon really is. This is not an output of the filter - we know where Simon is only because we are simulating his movement. End of explanation
"""
def discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway):
    posterior = np.array([.1]*10)
    priors, posteriors = [], []
    for i, z in enumerate(measurements):
        prior = predict(posterior, 1, kernel)
        priors.append(prior)

        likelihood = lh_hallway(hallway, z, z_prob)
        posterior = update(likelihood, prior)
        posteriors.append(posterior)
    return priors, posteriors

def plot_posterior(posteriors, i):
    plt.title('Posterior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(posteriors[i], ylim=(0, 1.0))
    plt.axvline(i % len(hallway), lw=5)

def plot_prior(priors, i):
    plt.title('Prior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(priors[i], ylim=(0, 1.0), c='#ff8015')
    plt.axvline(i % len(hallway), lw=5)

def animate_discrete_bayes(step):
    step -= 1
    i = step // 2
    if step % 2 == 0:
        plot_prior(priors, i)
    else:
        plot_posterior(posteriors, i)
"""
Explanation: Let's run the filter and animate it. End of explanation
"""
# change these numbers to alter the simulation
kernel = (.1, .8, .1)
z_prob = 1.0
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# measurements with no noise
zs = [hallway[i % len(hallway)] for i in range(50)]

priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway)
interact(animate_discrete_bayes, step=IntSlider(value=1, max=len(zs)*2));
"""
Explanation: Now we can see the results. You can see how the prior shifts the position and reduces certainty, and the posterior stays in the same position and increases certainty as it incorporates the information from the measurement. I've made the measurement perfect with the line z_prob = 1.0; we will explore the effect of imperfect measurements in the next section. Finally, another thing to note is how accurate our estimate becomes when we are in front of a door, and how it degrades when in the middle of the hallway. 
This should make intuitive sense. There are only a few doorways, so when the sensor tells us we are in front of a door this boosts our certainty in our position. A long stretch of no doors reduces our certainty. The Effect of Bad Sensor Data You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a filter - it should filter out bad sensor measurements. Does it do that? To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways, and run the algorithm on 6 correct measurements: End of explanation """ measurements = [1, 0, 1, 0, 0, 1, 1] priors, posteriors = discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway); plot_posterior(posteriors, 6) """ Explanation: We have identified the likely cases of having started at position 0 or 5, because we saw this sequence of doors and walls: 1,0,1,0,0. Now I inject a bad measurement. The next measurement should be 0, but instead we get a 1: End of explanation """ with figsize(y=5.5): measurements = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0] for i, m in enumerate(measurements): likelihood = lh_hallway(hallway, z=m, z_prob=.75) posterior = update(likelihood, prior) prior = predict(posterior, 1, kernel) plt.subplot(5, 2, i+1) book_plots.bar_plot(posterior, ylim=(0, .4), title='step {}'.format(i+1)) plt.tight_layout() """ Explanation: That one bad measurement has significantly eroded our knowledge. Now let's continue with a series of correct measurements. 
End of explanation
"""
with figsize(y=5.5):
    measurements = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
    for i, m in enumerate(measurements):
        likelihood = lh_hallway(hallway, z=m, z_prob=.75)
        posterior = update(likelihood, prior)
        prior = predict(posterior, 1, kernel)

        plt.subplot(5, 2, i+1)
        book_plots.bar_plot(posterior, ylim=(0, .4), title='step {}'.format(i+1))
    plt.tight_layout()
"""
Explanation: We quickly filtered out the bad sensor reading and converged on the most likely positions for our dog. Drawbacks and Limitations Do not be misled by the simplicity of the examples I chose. This is a robust and complete filter, and you may use the code in real-world solutions. If you need a multimodal, discrete filter, this filter works. With that said, this filter is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book. The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of an array we use a multidimensional grid to store the probabilities at each discrete location. Each update() and predict() step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time per time step. 
Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements. The second problem is that the filter is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. A 100 meter hallway requires 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. A $100\times 100$ m courtyard requires 100,000,000 bins to get 1cm accuracy. A third problem is that the filter is multimodal. In the last example we ended up with strong beliefs that the dog was in position 4 or 9. This is not always a problem. Particle filters, which we will study later, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, and 30% sure you are on Willow Avenue. A fourth problem is that it requires a measurement of the change in state. We need a motion sensor to detect how much the dog moves. There are ways to work around this problem, but it would complicate the exposition of this chapter, so, given the aforementioned problems, I will not discuss it further. With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues. Tracking and Control We have been passively tracking an autonomously moving object. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go there. But train tracks and robot motors are imperfect. 
Wheel slippage and imperfect motors mean that the robot is unlikely to travel to exactly the position you command. There is more than one robot, and we need to know where they all are so we do not cause them to crash. So we add sensors. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we count 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. We can use the code from the previous section to track our robot since magnet counting is very similar to doorway sensing. But we are not done. We've learned to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? We know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. If I send the command 'move left 1 unit' I expect that one second from now the robot will be 1 unit to the left of where it is now. This is a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. Wheels and motors are imperfect. The robot might end up 0.9 units away, or maybe 1.2 units. Now the entire solution is clear. We assumed that the dog kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on an assumption about behavior we will feed in the command that we sent to the robot! In other words, when we call predict() we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement. Simulating the Train Behavior We need to simulate an imperfect train. 
When we command it to move it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value. End of explanation
"""
class Train(object):

    def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9):
        self.track_len = track_len
        self.pos = 0
        self.kernel = kernel
        self.sensor_accuracy = sensor_accuracy

    def move(self, distance=1):
        """ move in the specified direction with some small chance of error"""
        self.pos += distance
        # insert random movement error according to kernel
        r = random.random()
        s = 0
        offset = -(len(self.kernel) - 1) / 2
        for k in self.kernel:
            s += k
            if r <= s:
                break
            offset += 1
        self.pos = int((self.pos + offset) % self.track_len)
        return self.pos

    def sense(self):
        pos = self.pos
        # insert random sensor error
        if random.random() > self.sensor_accuracy:
            if random.random() > 0.5:
                pos += 1
            else:
                pos -= 1
        return pos
"""
Explanation: With that we are ready to write the filter. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say, 10,000 units, with the magnet pattern repeated every 10 units. A length of 10 makes it easier to plot and inspect. End of explanation
"""
def train_filter(iterations, kernel, sensor_accuracy, move_distance, do_print=True):
    track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    prior = np.array([.9] + [0.01]*9)
    posterior = prior[:]
    normalize(prior)

    robot = Train(len(track), kernel, sensor_accuracy)
    for i in range(iterations):
        # move the robot
        robot.move(distance=move_distance)

        # perform the prediction
        prior = predict(posterior, move_distance, kernel)

        # and update the filter
        m = robot.sense()
        likelihood = lh_hallway(track, m, sensor_accuracy)
        posterior = update(likelihood, prior)
        index = np.argmax(posterior)

        if do_print:
            print('''time {}: pos {}, sensed {}, '''
                  '''at position {}'''.format(
                    i, robot.pos, m, track[robot.pos]))
            print('''        estimated position is {}'''
                  ''' with confidence {:.4f}%:'''.format(
                    index, posterior[index]*100))

    book_plots.bar_plot(posterior)
    if do_print:
        print()
        print('final position is', robot.pos)
        index = np.argmax(posterior)
        print('''Estimated position is {} with '''
              '''confidence {:.4f}%:'''.format(
                index, posterior[index]*100))
"""
Explanation: Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. 
The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works make sure you read through it carefully to solidify your understanding. End of explanation
"""
import random

random.seed(3)
np.set_printoptions(precision=2, suppress=True, linewidth=60)
train_filter(4, kernel=[1.], sensor_accuracy=.999,
             move_distance=4, do_print=True)
"""
Explanation: We can see that the code was able to perfectly track the robot so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors. End of explanation
"""
random.seed(5)
train_filter(4, kernel=[.1, .8, .1], sensor_accuracy=.9,
             move_distance=4, do_print=True)
"""
Explanation: There was a sensing error at time 1, but we are still quite confident in our position. Now let's run a very long simulation and see how the filter responds to errors. End of explanation
"""
with figsize(y=5.5):
    for i in range(4):
        random.seed(3)
        plt.subplot(221+i)
        train_filter(148+i, kernel=[.1, .8, .1],
                     sensor_accuracy=.8,
                     move_distance=4, do_print=False)
        plt.title('iteration {}'.format(148+i))
"""
Explanation: 
cbpygit/pypmj
examples/Extensions explained - materials.ipynb
gpl-3.0
import os os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg' """ Explanation: Imports and configuration We set the path to the config.cfg file using the environment variable 'PYPMJ_CONFIG_FILE'. If you do not have a configuration file yet, please look into the Setting up a configuration file example. End of explanation """ import sys sys.path.append('..') import pypmj as jpy import numpy as np """ Explanation: Now we can import pypmj and numpy. Since the parent directory, which contains the pypmj module, is not automatically in our path, we need to append it before. End of explanation """ jpy.load_extension('materials') """ Explanation: Load the materials extension. End of explanation """ jpy.MaterialData? """ Explanation: What this extension is for The materials extension provides access to tabulated and formula-based optical material property data. It can load such data from different data bases, extract additional information such as citations, as well as automatically interpolate, extrapolate and plot the data. Usage The functionality is provided by the class MaterialData. End of explanation """ jpy.MaterialData.materials.keys() """ Explanation: Note: The current implementation is a bit inflexible/incomplete and will probably be changed/completed in a future version. There is currently only access to the following materials, although adding more materials is easily done by extending this dict in the materials.py file: End of explanation """ GaAs = jpy.MaterialData(material = 'gallium_arsenide') """ Explanation: We show the abilities using the gallium arsenide data set. End of explanation """ GaAs.getAllInfo() """ Explanation: Get some metadata: End of explanation """ wvl = 600.e-9 # = 600nm GaAs.getNKdata(wvl) """ Explanation: The default for unitOfLength is 1., which defaults to meter. 
We can get refractive index data (here called $n$-$k$-data, where $n$ is the real and $k$ the imaginary part of the complex refractive index) for specific wavelengths like this End of explanation
"""
wvls = np.linspace(600.e-9, 1000.e-9, 6) # = 600nm to 1000nm
GaAs.getNKdata(wvls)
"""
Explanation: Or for multiple wavelength values End of explanation
"""
GaAs.getPermittivity(wvls)
"""
Explanation: Or we can get the permittivity. End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt

plt.figure(figsize=(8,6))
GaAs.plotData()
"""
Explanation: We can also plot the complete known data to get an overview of the data set. End of explanation
"""
plt.figure(figsize=(8,6))
GaAs.plotData(wvlRange=(200.e-9, 1000.e-9), plotKnownValues=True)
"""
Explanation: Or we can plot data in a specific wavelength range, this time also showing the known (tabulated) points to showcase the interpolation. End of explanation
"""
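A closing note on how the two accessors above relate: for a non-magnetic material the relative permittivity is the square of the complex refractive index, $\epsilon = (n + ik)^2$, so the permittivity should agree with squaring the $n$-$k$ data (whether pypmj's two accessors use exactly this convention is an assumption here). A quick standalone check of the relation itself, in plain NumPy with illustrative values rather than real GaAs data:

```python
import numpy as np

# Illustrative complex refractive indices N = n + ik (not tabulated GaAs values)
N = np.array([3.9 + 0.2j, 3.6 + 0.1j])
eps = N**2  # relative permittivity: eps = (n**2 - k**2) + 2j*n*k
print(eps)
```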
BrownDwarf/ApJdataFrames
notebooks/Douglas2017_extra_1.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import pandas as pd pd.options.display.max_columns = 150 %config InlineBackend.figure_format = 'retina' import astropy from astropy.table import Table from astropy.io import ascii import numpy as np """ Explanation: ApJdataFrames Douglas_2017 Extra, 1 Title: Poking the Beehive from Space: K2 Rotation Periods for Praesepe Authors: S. T. Douglas, M. A. Agüeros, K. R. Covey, and A. Kraus Data is from this paper: http://iopscience.iop.org/article/10.3847/1538-4357/aa6e52/meta End of explanation """ df2 = pd.read_csv('../data/Douglas2017/tab2.csv') df3 = pd.read_csv('../data/Douglas2017/tab3.csv') """ Explanation: Douglas et al. 2016 Tables 2 & 3 End of explanation """ c16_GO_url = 'https://keplerscience.arc.nasa.gov/data/campaigns/c16/K2Campaign16targets.csv' df_GO_c16 = pd.read_csv(c16_GO_url) df3.shape, df_GO_c16.shape df_C16 = pd.merge(df3, df_GO_c16, how = 'inner', left_on='EPIC', right_on='EPIC ID') df_C16.shape """ Explanation: Spectroscopic selection for iSHELL C16 monitoring Selection criteria for iSHELL Previously observed by Douglas et al. 2017 Being observed in K2 C16 Bright enough for iSHELL Large amplitude of starspot modulation Medium-to-low period (lower vsini, on average) The K2 website has the list of targets in C16 End of explanation """ plt.figure(figsize=(5,5)) plt.plot(df_C16.Kmag, df_C16['Raw-Amp'], '.', ms=3, color='#BBBBBB') plt.xlim(8, 20) plt.ylim(0, 0.11) plt.xlabel('$K$ (mag)') plt.ylabel('Amplitude (mag)') """ Explanation: OK, so 308 targets are being re-observed. Good. 
End of explanation """ df_C16['Raw-Amp'].max() plt.figure(figsize=(5,10)) plt.axvspan(9.5, 10.9, alpha=0.5, color='#f39c12') plt.axhspan(0.03, 0.285, alpha=0.5, color='#1abc9c') plt.plot(df_C16.Kmag, df_C16['Raw-Amp'], 'o', ms=3, color='#2980b9', mec='#ecf0f1') plt.xlim(8, 20) plt.ylim(0, 0.30) plt.xlabel('$K$ (mag)') plt.ylabel('Amplitude (mag)') plt.savefig('/Users/obsidian/Desktop/beehive/iSHELL_beehive.png', dpi=300) """ Explanation: The iSHELL exposure time calculator indicates a 10.9 magnitude can be achieved in 10 minutes with 6 cycles with a 0.75 arcsecond slit in 0.8 arcsecond seeing. We don't want to go too bright, because those targets are likely to be more massive stars, which don't show starspots, or are facula dominated. End of explanation """ subset = (df_C16.Kmag > 9.5) & (df_C16.Kmag < 10.9) & (df_C16['Raw-Amp'] > 0.03) df_C16[subset] df_C16.columns df_C16[subset] good_cols = ['NAME', 'EPIC', 'Mass', "r'mag", 'Kmag', 'Prot1', 'Clean?', 'Multi-Prot?', 'Spot-Evol?', 'Blended?', 'Binary', 'Raw-Amp', 'Smoothed-Amp', 'Prot-Flag', 'Kpmag', 'EPIC ID', ' RA (J2000) [deg]', ' Dec (J2000) [deg]', ' magnitude'] df_C16.columns df_C16[subset][good_cols].T pd.merge(df_C16[subset][good_cols], df2, on='EPIC').T """ Explanation: Well, there are a few objects in the target range. End of explanation """ df_vdb = pd.read_csv('~/Desktop/beehive/c05/211900000/35518/' + \ 'hlsp_k2sff_k2_lightcurve_211935518-c05_kepler_v1_llc-default-aper.txt', usecols=[0,1]) df_vdb.columns #plt.plot(df_vdb['BJD - 2454833'], df_vdb[' Corrected Flux']) #phased = np.mod(df_vdb['BJD - 2454833'], 3.932626) #plt.plot(phased, df_vdb[' Corrected Flux'], '.') """ Explanation: Read in the lightcurve End of explanation """
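The commented-out lines above hint at the usual next step: phase-folding the light curve at a trial rotation period (here 3.932626 days). A standalone sketch of that fold — the period is taken from the comment above; the time stamps and flux are synthetic, purely for illustration:

```python
import numpy as np

period = 3.932626                      # days, trial rotation period from above
t = np.linspace(0., 40., 2001)         # synthetic time stamps [days]
flux = 1. + 0.01 * np.sin(2 * np.pi * t / period)  # toy spot-modulation signal
phase = np.mod(t, period) / period     # fold: every point mapped to [0, 1)
# points at equal phase now line up regardless of which cycle they came from
```

Plotting flux against phase instead of time stacks all ~10 cycles on top of each other, which is what makes a periodic modulation obvious by eye.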
thiagoqd/queirozdias-deep-learning
seq2seq/sequence_to_sequence_implementation.ipynb
mit
import numpy as np
import time

import helper

source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'

source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
"""
Explanation: Character Sequence to Sequence In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews. <img src="images/sequence-to-sequence.jpg"/> Dataset The dataset lives in the /data/ folder. At the moment, it is made up of the following files: * letters_source.txt: The list of input letter sequences. Each sequence is its own line. * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number. End of explanation
"""
source_sentences[:50].split('\n')
"""
Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols. End of explanation
"""
target_sentences[:50].split('\n')
"""
Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains the sorted characters of the line. 
End of explanation
"""
def extract_character_vocab(data):
    special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']

    set_words = set([character for line in data.split('\n') for character in line])
    int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
    vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}

    return int_to_vocab, vocab_to_int

# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)

# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]

print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
"""
Explanation: Preprocess To do anything useful with it, we'll need to turn each string into a list of characters: <img src="images/source_and_target_arrays.png"/> Then convert the characters to their int values as declared in our vocabulary: End of explanation
"""
from distutils.version import LooseVersion
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
"""
Explanation: This is the final shape we need them to be in. We can now proceed to building the model. 
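The vocabulary logic above is worth verifying in isolation: the special tokens occupy the first four ids, and the character-to-id mapping must round-trip. A minimal sketch (using a sorted character set for determinism, unlike the unordered set() above):

```python
# Toy round-trip through the same vocabulary scheme used above
special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']
chars = sorted(set('bsaqq'))                     # deterministic toy "corpus"
int_to_vocab = {i: w for i, w in enumerate(special_words + chars)}
vocab_to_int = {w: i for i, w in int_to_vocab.items()}

ids = [vocab_to_int.get(c, vocab_to_int['<UNK>']) for c in 'bsaqq']
restored = ''.join(int_to_vocab[i] for i in ids)
print(restored)  # → bsaqq
```

An unseen character would map to <UNK> instead of raising a KeyError, which is exactly why .get() is used in the conversion above.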
Model Check the Version of TensorFlow This will check to make sure you have the correct version of TensorFlow End of explanation """ # Number of Epochs epochs = 60 # Batch Size batch_size = 128 # RNN Size rnn_size = 50 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 15 decoding_embedding_size = 15 # Learning Rate learning_rate = 0.001 """ Explanation: Hyperparameters End of explanation """ def get_model_inputs(): input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length """ Explanation: Input End of explanation """ def encoding_layer(input_data, rnn_size, num_layers, source_sequence_length, source_vocab_size, encoding_embedding_size): # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ Explanation: Sequence to Sequence Model We can now start defining the functions that will build the seq2seq model. 
We are building it from the bottom up with the following components: 2.1 Encoder - Embedding - Encoder cell 2.2 Decoder 1- Process decoder inputs 2- Set up the decoder - Embedding - Decoder cell - Dense output layer - Training decoder - Inference decoder 2.3 Seq2seq model connecting the encoder and decoder 2.4 Build the training graph hooking up the model with the optimizer 2.1 Encoder The first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder. Embed the input data using tf.contrib.layers.embed_sequence <img src="images/embed_sequence.png" /> Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output. <img src="images/encoder.png" /> End of explanation
"""
# Process the input we'll feed to the decoder
def process_decoder_input(target_data, vocab_to_int, batch_size):
    '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch'''
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)

    return dec_input
"""
Explanation: 2.2 Decoder The decoder is probably the most involved part of this model. The following steps are needed to create it: 1- Process decoder inputs 2- Set up the decoder components - Embedding - Decoder cell - Dense output layer - Training decoder - Inference decoder Process Decoder Input In the training process, the target sequences will be used in two different places: Using them to calculate the loss Feeding them to the decoder during training to make the model more robust. Now we need to address the second point. Let's assume our targets look like this in their letter/word form (we're doing this for readability. 
At this point in the code, these sequences would be in int form): <img src="images/targets_1.png"/> We need to do a simple transformation on the tensor before feeding it to the decoder: 1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. We do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting. <img src="images/strided_slice_1.png"/> 2- The first item in each sequence we feed to the decoder has to be GO symbol. So We'll add that to the beginning. <img src="images/targets_add_go.png"/> Now the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols): <img src="images/targets_after_processing_1.png"/> End of explanation """ def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size, target_sequence_length, max_target_sequence_length, enc_state, dec_input): # 1. Decoder Embedding target_vocab_size = len(target_letter_to_int) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. 
# Set up a training decoder and an inference decoder
    # Training Decoder
    with tf.variable_scope("decode"):
        # Helper for the training process. Used by BasicDecoder to read inputs.
        training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
                                                            sequence_length=target_sequence_length,
                                                            time_major=False)
        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper,
                                                           enc_state, output_layer)
        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                                    impute_finished=True,
                                                                    maximum_iterations=max_target_sequence_length)[0]

    # 5. Inference Decoder
    # Reuses the same parameters trained by the training process
    with tf.variable_scope("decode", reuse=True):
        start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32),
                               [batch_size], name='start_tokens')
        # Helper for the inference process.
        inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
                                                                    start_tokens,
                                                                    target_letter_to_int['<EOS>'])
        # Basic decoder
        inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper,
                                                            enc_state, output_layer)
        # Perform dynamic decoding using the decoder
        inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                                     impute_finished=True,
                                                                     maximum_iterations=max_target_sequence_length)[0]

    return training_decoder_output, inference_decoder_output

"""
Explanation: Set up the decoder components
- Embedding
- Decoder cell
- Dense output layer
- Training decoder
- Inference decoder
1- Embedding
Now that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. We'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:
<img src="images/embeddings.png" />
2- Decoder Cell
Then we declare our decoder cell. Just like the encoder, we'll use a tf.contrib.rnn.LSTMCell here as well.
We need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
3- Dense output layer
Before we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.
4- Training decoder
Essentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both are created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the target sequences as inputs to the training decoder at each time step to make it more robust.
We can think of the training decoder as looking like this (except that it works with sequences in batches):
<img src="images/sequence-to-sequence-training-decoder.png"/>
The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).
5- Inference decoder
The inference decoder is the one we'll use when we deploy our model to the wild.
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to both the training and inference decoders and have each produce its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.
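To make the inference loop concrete, here is a toy sketch in plain Python — not the tf.contrib.seq2seq API — of the feedback the inference decoder performs: start from GO, feed each predicted id back in as the next input, and stop at EOS or a step limit. The names GO, EOS, and step are invented for this illustration.

```python
# Conceptual sketch of greedy inference decoding (toy, framework-free).
# `step` stands in for one decoder-cell + dense-layer call that returns
# the argmax token id and the updated RNN state.
GO, EOS = 0, 1  # made-up token ids for this example

def greedy_decode(step, state, max_steps=10):
    token, out = GO, []
    for _ in range(max_steps):
        token, state = step(token, state)
        if token == EOS:
            break
        out.append(token)
    return out

# Dummy `step` that emits 5, 6, 7 and then <EOS>, ignoring the state.
script = iter([5, 6, 7, EOS])
print(greedy_decode(lambda t, s: (next(script), s), state=None))  # [5, 6, 7]
```

The real GreedyEmbeddingHelper performs exactly this feedback, with the embedding lookup and logits computation handled inside TensorFlow.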
End of explanation """ def seq2seq_model(input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers): # Pass the input data through the encoder. We'll ignore the encoder output, but use the state _, enc_state = encoding_layer(input_data, rnn_size, num_layers, source_sequence_length, source_vocab_size, encoding_embedding_size) # Prepare the target sequences we'll feed to the decoder in training mode dec_input = process_decoder_input(targets, target_letter_to_int, batch_size) # Pass encoder state and decoder inputs to the decoders training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size, target_sequence_length, max_target_sequence_length, enc_state, dec_input) return training_decoder_output, inference_decoder_output """ Explanation: 2.3 Seq2seq model Let's now go a step above, and hook up the encoder and decoder using the methods we just declared End of explanation """ # Build the graph train_graph = tf.Graph() # Set the graph to default to ensure that it is ready for training with train_graph.as_default(): # Load the model inputs input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs() # Create the training and inference logits training_decoder_output, inference_decoder_output = seq2seq_model(input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length, len(source_letter_to_int), len(target_letter_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers) # Create tensors for the training logits and inference logits training_logits = tf.identity(training_decoder_output.rnn_output, 'logits') inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions') # Create the weights for sequence_loss masks = 
tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this: <img src="images/logits.png"/> The logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient. End of explanation """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths """ Explanation: Get 
Batches
There's little processing involved when we retrieve the batches. This is a simple example assuming batch_size = 2
Source sequences (it's actually in int form, we're showing the characters for clarity):
<img src="images/source_batch.png" />
Target sequences (also in int, but showing letters for clarity):
<img src="images/target_batch.png" />
End of explanation
"""
# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(
    get_batches(valid_target, valid_source, batch_size,
                source_letter_to_int['<PAD>'],
                target_letter_to_int['<PAD>']))

display_step = 20  # Check training loss after every 20 batches

checkpoint = "best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(1, epochs+1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                            source_letter_to_int['<PAD>'],
                            target_letter_to_int['<PAD>'])):

            # Training step
            _, loss = sess.run(
                [train_op, cost],
                {input_data: sources_batch,
                 targets: targets_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths})

            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:

                # Calculate validation cost
                validation_loss = sess.run(
                    [cost],
                    {input_data: valid_sources_batch,
                     targets: valid_targets_batch,
                     lr: learning_rate,
                     target_sequence_length: valid_targets_lengths,
                     source_sequence_length: valid_sources_lengths})

                print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
                      .format(epoch_i, epochs, batch_i, len(train_source) // batch_size,
                              loss, validation_loss[0]))

    # Save Model
    saver =
tf.train.Saver() saver.save(sess, checkpoint) print('Model Trained and Saved') """ Explanation: Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size. End of explanation """ def source_to_seq(text): '''Prepare the text for the model''' sequence_length = 7 return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text)) input_sentence = 'hello' text = source_to_seq(input_sentence) checkpoint = "./best_model.ckpt" loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(checkpoint + '.meta') loader.restore(sess, checkpoint) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') #Multiply by batch_size to match the model's input parameters answer_logits = sess.run(logits, {input_data: [text]*batch_size, target_sequence_length: [len(text)]*batch_size, source_sequence_length: [len(text)]*batch_size})[0] pad = source_letter_to_int["<PAD>"] print('Original Text:', input_sentence) print('\nSource') print(' Word Ids: {}'.format([i for i in text])) print(' Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text]))) print('\nTarget') print(' Word Ids: {}'.format([i for i in answer_logits if i != pad])) print(' Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad]))) """ Explanation: Prediction End of explanation """
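As a standalone illustration of the preprocessing in source_to_seq above, here is a toy version with an invented six-entry vocabulary (the real source_letter_to_int mapping comes from the notebook's data): unseen characters fall back to <UNK>, and the sequence is right-padded with <PAD> to a fixed length.

```python
# Toy vocabulary for this sketch only
source_letter_to_int = {'<PAD>': 0, '<UNK>': 1, 'h': 2, 'e': 3, 'l': 4, 'o': 5}

def source_to_seq(text, sequence_length=7):
    '''Map characters to ids, defaulting to <UNK>, then pad to sequence_length.'''
    ids = [source_letter_to_int.get(ch, source_letter_to_int['<UNK>']) for ch in text]
    return ids + [source_letter_to_int['<PAD>']] * (sequence_length - len(text))

print(source_to_seq('hello'))  # [2, 3, 4, 4, 5, 0, 0]
print(source_to_seq('hex'))    # 'x' is out of vocabulary: [2, 3, 1, 0, 0, 0, 0]
```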
sueiras/training
sklearn/03 - Control overfit and hyperparameter optimization.ipynb
gpl-3.0
from __future__ import print_function
from sklearn import __version__ as sklearn_version
print('Sklearn version:', sklearn_version)
"""
Explanation: Sklearn control overfit example
- Use the California housing dataset to show how to control overfitting by tuning the model parameters
End of explanation
"""
from sklearn import datasets

all_data = datasets.california_housing.fetch_california_housing()
print(all_data.DESCR)

# Randomize, separate train & test and normalize
from sklearn.utils import shuffle
X, y = shuffle(all_data.data, all_data.target, random_state=0)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Normalize the data
from sklearn.preprocessing import Normalizer
normal = Normalizer()
X_train = normal.fit_transform(X_train)
X_test = normal.transform(X_test)

# Create a basic decision tree
from sklearn import tree
from sklearn.metrics import mean_absolute_error

clf = tree.DecisionTreeRegressor()
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))

# Define a function to evaluate the error over models with different max_depth
def acc(md):
    '''
    Calculate error of a tree with a specific max_depth

    Parameters:
        md: max depth of the tree

    Returns:
        Mean absolute error of the fitted tree
    '''
    # Define model
    ...
    # Fit model
    ...
    # Evaluate and return the error
    ...
    return ...

# Evaluate from max_depth=1 to max_depth=29
index = []
accuracy = []
for i in range(1,30):
    accuracy_step = acc(i)
    index += [i]
    accuracy += [accuracy_step]
    print('Max depth - Error:', i, accuracy_step)

# Plot the error vs max_depth
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(index,accuracy)
"""
Explanation: Load data
End of explanation
"""
# Define the model with the best parametrization
...
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))

# Plot the scatterplot
plt.scatter(y_test, clf.predict(X_test))
"""
Explanation: Fit the best model
End of explanation
"""
import numpy as np
from time import time
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

# Define estimator. No parameters
...

# specify parameters and distributions to sample from (COMPLETE)
param_dist = {"max_depth": randint(3, 20),
              "min_samples_leaf": ...}

# Define randomized search. Complete the function parameters
random_search = RandomizedSearchCV(...)

# Run the randomized search
start = time()
random_search.fit(X_train, y_train)
print("RandomizedSearchCV took %.2f seconds for %d candidate parameter settings."
      % ((time() - start), n_iter_search))

# Utility function to report best scores
def report(results, n_top=3):
    for i in range(1, n_top + 1):
        candidate = np.argmax(results['rank_test_score'] == i)
        print("Model with rank: ", i)
        print("Mean validation score: ", results['mean_test_score'][candidate])
        print("Parameters: ", results['params'][candidate], "\n")

report(random_search.cv_results_)

# Build the tree with the optimal parametrization
# Define the model with the best parametrization
...

clf.fit(X_train, y_train)
print(mean_absolute_error(y_test, clf.predict(X_test)))
plt.scatter(y_test, clf.predict(X_test))
"""
Explanation: A better way. Use a model_selection tool: RandomizedSearchCV
End of explanation
"""
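For reference, one possible completion of the acc() exercise above — a sketch, not an official answer key. It assumes a DecisionTreeRegressor as in the notebook; since X_train and friends live in the notebook, a small synthetic regression set is generated here so the snippet runs on its own.

```python
import numpy as np
from sklearn import tree
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Small synthetic stand-in for the California housing split used above
rng = np.random.RandomState(0)
X = rng.rand(400, 3)
y = X[:, 0] + 0.1 * rng.randn(400)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

def acc(md):
    '''Mean absolute test error of a decision tree with max_depth=md.'''
    clf = tree.DecisionTreeRegressor(max_depth=md)  # define model
    clf.fit(X_train, y_train)                       # fit model
    return mean_absolute_error(y_test, clf.predict(X_test))  # evaluate

errors = {md: acc(md) for md in (2, 5, 20)}
print(errors)  # past some depth the test error typically stops improving (overfit)
```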
Lattecom/HYStudy
scripts/[HYStudy 14th] SymPy, Matplotlib 1.ipynb
mit
import sympy
sympy.init_printing(use_latex='mathjax')
"""
Explanation: Symbolic operation with sympy
End of explanation
"""
# define symbol
x = sympy.symbols('x')
print(type(x))
x

# define function
f = x**2 + 4*x
f

# differentiation
sympy.diff(f)

# simplify function
sympy.simplify(f)

# solving equation
from sympy import solve
solve(f)

# factorize
from sympy import factor
sympy.factor(f)

# partial differentiation
x, y = sympy.symbols('x y')
f = x**2 + 4*x*y + y**2
f

# sympy.diff(a, b) ## a: function, b: variable
sympy.diff(f, x)
sympy.diff(f, y)
"""
Explanation: reference for LaTeX commands in MathJax
http://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm
http://oeis.org/wiki/List_of_LaTeX_mathematical_symbols
End of explanation
"""
# draw a 3rd degree function
def f2(x):
    return x**3 + 2*x**2 - 20

x = np.linspace(-21, 21, 500)
y = f2(x)
plt.plot(x, y)
plt.show()
"""
Explanation: Draw function graph
End of explanation
"""
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt

# function definition
def f(x, y):
    return 3*x**2 + 4*x*y + 4*y**2 - 50*x - 20*y + 100

# coordinate range
xx = np.linspace(-11, 16, 500)
yy = np.linspace(-11, 16, 500)

# make coordinate point
X, Y = np.meshgrid(xx, yy)

# dependent variable point on coordinate
Z = f(X, Y)

from mpl_toolkits.mplot3d import Axes3D

# draw surface plot
fig = plt.figure(figsize=(15, 10))
fig.gca(projection='3d').plot_surface(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

# gradient vector(x)
def gx(x, y):
    return 6*x + 4*y - 50

# gradient vector(y)
def gy(x, y):
    return 8*y + 4*x - 20

# gradient vector point and coordinate
xx2 = np.linspace(-10, 15, 10)
yy2 = np.linspace(-10, 15, 10)
X2, Y2 = np.meshgrid(xx2, yy2)
GX = gx(X2, Y2)
GY = gy(X2, Y2)

# gradient vector quiver plot
plt.figure(figsize=(10, 10))

## make contour plot
contour = plt.contour(X, Y, Z, cmap='pink', levels=[-100, 0, 100, 200, 400, 800, 1600])

## contour plot labeling
plt.clabel(contour, inline=1, fontsize=10)

## make
quiver plot
## plt.quiver(x, y, gx, gy): draw the vector (gx, gy) anchored at the point (x, y)
plt.quiver(X2, Y2, GX, GY, color='pink', width=0.003, scale=600)

## plot labeling
plt.axis('equal')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Gradient vector, quiver & contour plot
End of explanation
"""
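As a quick cross-check — a sketch using the sympy tools from earlier in this notebook — the hand-coded gradient functions gx and gy above can be verified against the symbolic partial derivatives of f:

```python
import sympy

x, y = sympy.symbols('x y')
f = 3*x**2 + 4*x*y + 4*y**2 - 50*x - 20*y + 100

# partial derivatives: these should equal 6*x + 4*y - 50 (gx) and 4*x + 8*y - 20 (gy)
print(sympy.diff(f, x))
print(sympy.diff(f, y))
```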
NYUDataBootcamp/Materials
Code/notebooks/bootcamp_practice_a_answerkey.ipynb
mit
# to make sure things are working, run this import pandas as pd print('Pandas version: ', pd.__version__) """ Explanation: Data Bootcamp: Code Practice A (answerkey) Optional Code Practice A: Jupyter basics and Python's graphics tools (the Matplotlib package). The goals are to become familiar with Jupyter and Matplotlib and to explore some datasets. The data management part of this goes beyond what we've done in class to date. We recommend you just run the code provided and focus on the graphs for now. This notebook written by Dave Backus for the NYU Stern course Data Bootcamp. Check Jupyter before we start. Run the code below and make sure it works. End of explanation """ import pandas as pd import matplotlib.pyplot as plt import datetime as dt %matplotlib inline """ Explanation: If you get something like "Pandas version: 0.17.1" you're fine. If you get an error, bring your computer by and ask for help. If you're unusually brave, go to StackOverflow and read the instructions. Then come ask for help. (This has to do with how your computer processes unicode. When you hear that word -- unicode -- you should run away at high speed.) Question 1. Setup Import packages, arrange for graphs to display in the notebook. End of explanation """ url = 'http://pages.stern.nyu.edu/~dbackus/Data/beer_production_1947-2004.xlsx' beer = pd.read_excel(url, skiprows=12, index_col=0) print('Dimensions:', beer.shape) beer[list(range(1,11))].head(3) vars = list(range(1,101)) # extract top 100 firms pdf = beer[vars].T # transpose (flip rows and columns) pdf[[1947, 1967, 1987, 2004]].head() """ Explanation: Remind yourself: What does the pandas package do? [data management] What does the matplotlib package do? [graphics] What does %matplotlib inline do? [displays plots in the notebook] Question 2. Jupyter basics We refer to the cell that's highlighted as the current cell. Clicking once on any cell makes it the current cell. Clicking again allows you to edit it. 
The + in the toolbar at the top creates a new cell below the current cell.
Change a cell from Code to Markdown (in other words, text) with the dropdown menu in the toolbar.
To run a cell, hit shift-enter or click on the run-cell icon in the toolbar (sideways triangle and vertical line).
For more information, click on Help at the top. User Interface Tour is a good place to start.
Practice with the following:

Make this cell the current cell.
Add an empty cell below it.
Add text to the new cell: your name and the date, for example. Optional: Add a link to your LinkedIn or Facebook page. Hint: Look at the text in the top cell to find an example of a link.
Run the cell.

Question 3. Winner take all and the long tail in the US beer industry
The internet has produced some interesting market behavior, music being a great example. Among them:

Winner take all. The large producers (Beyonce, for example) take larger shares of the market than they had in the past.
The long tail. At the same time, small producers in aggregate increase their share.

Curiously enough, we see the same thing in the US beer industry: Scale economies and a reduction in transportation costs (the interstate highway system was built in the 1950s and 60s) led to consolidation, with the large firms getting larger, and the small ones either selling out or going bankrupt. (How many beer brands can you think of that no longer exist?) Starting in the 1980s, we saw a significant increase in the market share of small firms ("craft brewers") overall, even though each of them remains small.
We illustrate this with data from Victor and Carol Tremblay that describe the output of the top 100 US beer producers from 1947 to 2004. This is background data from their book, The US Brewing Industry, MIT Press, 2004. See here for the names of the brewers. Output is measured in thousands of 31-gallon barrels.
Data manipulation. The data manipulation goes beyond what we've done in class.
You're free to ignore it, but here's the idea. The spreadsheet contains output by firms ranked 1 to 100 in size. Each row refers to a specific year and includes the outputs of firms in order of size. We don't have their names.
We transpose this so that the columns are years and include output for the top-100 firms. The row labels are the size rank of the firm.
We then plot the size against the rank for four years to see how it has changed.
End of explanation
"""
url = 'http://pages.stern.nyu.edu/~dbackus/Data/beer_production_1947-2004.xlsx'
beer = pd.read_excel(url, skiprows=12, index_col=0)
print('Dimensions:', beer.shape)
beer[list(range(1,11))].head(3)

vars = list(range(1,101))   # extract top 100 firms
pdf = beer[vars].T          # transpose (flip rows and columns)
pdf[[1947, 1967, 1987, 2004]].head()
"""
Explanation: Question. Can you see consolidation here?
End of explanation
"""
# a basic plot
fig, ax = plt.subplots()
pdf[1947].plot(ax=ax, logy=True)
pdf[1967].plot(ax=ax, logy=True)
pdf[1987].plot(ax=ax, logy=True)
pdf[2004].plot(ax=ax, logy=True)
ax.legend()
"""
Explanation: Answer these questions below. Code is sufficient, but it's often helpful to add comments to remind yourself what you did, and why.

Get help for the set_title method by typing ax.set_title? in a new cell and running it. Note that you can open the documentation this produces in a separate tab with the icon in the upper right (hover text = "Open the pager in an external window").
Add a title with ax.set_title('Your title').
Change the fontsize of the title to 14.
What happens if we add the argument/parameter lw=2 to the ax.plot() statements?
Add a label to the x axis with ax.set_xlabel(). Add a label to the y axis. Why did we use a log scale (logy=True)? What happens if we don't? Use the color argument/parameter to choose a more effective set of colors. In what sense do you see "winner takes all"? A "long tail"? Put each answer in a separate code cell. End of explanation """ # data input (takes about 20 seconds on a wireless network) url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/' url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/' url3 = 'WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLSX' url = url1 + url2 + url3 cols = [2, 4, 5] + list(range(6,28)) prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…']) print('Dimensions: ', prj.shape) print('Column labels: ', prj.columns) # rename some variables pop = prj pop = pop.rename(columns={'Reference date (as of 1 July)': 'Year', 'Region, subregion, country or area *': 'Country', 'Country code': 'Code'}) # select countries and years countries = ['Japan'] years = [2015, 2035, 2055, 2075, 2095] pop = pop[pop['Country'].isin(countries) & pop['Year'].isin(years)] pop = pop.drop(['Country', 'Code'], axis=1) pop = pop.set_index('Year').T pop = pop/1000 # convert population from thousands to millions pop.head() pop.tail() """ Explanation: Question 4. Japan's aging population Populations are getting older throughout the world, but Japan is a striking example. One of our favorite quotes: Last year, for the first time, sales of adult diapers in Japan exceeded those for babies.  Let's see what the numbers look like using projections fron the United Nations' Population Division. They have several projections; we use what they call the "medium variant." We have a similar issue with the data: population by age for a given country and date goes across rows, not down columns. So we choose the ones we want and transpose them. Again, more than we've done so far. 
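The select-with-isin-then-transpose step described here can be sketched on a toy frame (the numbers below are made up; the real data comes from the UN spreadsheet):

```python
import pandas as pd

df = pd.DataFrame({'Country': ['Japan', 'Japan', 'US'],
                   'Year': [2015, 2035, 2015],
                   '0-4': [5.3, 4.1, 19.9],
                   '5-9': [5.5, 4.3, 20.5]})
# keep only the country/years we want, then flip so years become columns
sub = df[df['Country'].isin(['Japan']) & df['Year'].isin([2015, 2035])]
out = sub.drop('Country', axis=1).set_index('Year').T
print(out)  # columns are the years, rows the age groups
```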
End of explanation
"""
# data input (takes about 20 seconds on a wireless network)
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLSX'
url = url1 + url2 + url3

cols = [2, 4, 5] + list(range(6,28))
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
print('Dimensions: ', prj.shape)
print('Column labels: ', prj.columns)

# rename some variables
pop = prj
pop = pop.rename(columns={'Reference date (as of 1 July)': 'Year',
                          'Region, subregion, country or area *': 'Country',
                          'Country code': 'Code'})

# select countries and years
countries = ['Japan']
years = [2015, 2035, 2055, 2075, 2095]
pop = pop[pop['Country'].isin(countries) & pop['Year'].isin(years)]
pop = pop.drop(['Country', 'Code'], axis=1)
pop = pop.set_index('Year').T
pop = pop/1000    # convert population from thousands to millions
pop.head()
pop.tail()
"""
Explanation: Comment. Now we have the number of people in any five-year age group running down columns. The column labels are the years.
With the dataframe pop:

Plot the current age distribution with pop[[2015]].plot(). Note that 2015 here does not have quotes around it: it's an unusual case of integer column labels.
Plot the current age distribution as a bar chart. Which do you think looks better?
Create figure and axis objects
Use the axis object to plot the age distribution for all the years in the dataframe.
Add titles and axis labels.
Plot the age distribution for each date in a separate subplot. Which argument/parameter does this?
Bonus points: Change the size of the figure to accommodate the subplots.
End of explanation
"""
pop[[2015]].plot()
pop[[2015]].plot(kind='bar')   # my fav
pop[[2015]].plot(kind='barh')
fig, ax = plt.subplots(figsize=(10,6))
pop.plot(ax=ax)
ax.set_title('Population by age')
ax.set_xlabel('Population (millions)')
ax.set_ylabel('Age Range')
pop.plot(kind='bar', subplots=True, figsize=(6,8), sharey=True)
"""
Explanation: Question 5. Dynamics of the yield curve
One of our favorite topics is the yield curve: a plot of the yield to maturity on a bond against the bond's maturity. The foundation here is yields on zero coupon bonds, which are simpler objects than yields on coupon bonds.
We often refer to bond yields rising or falling, but in fact the yield curve often does different things at different maturities. We will see that here. For several years, short yields have been stuck at zero, yet yields for bond with maturities of two years and above have varied quite a bit. We use the Fed's well-known Gurkaynak, Sack, and Wright data, which provides daily data on US Treasury yields from 1961 to the present. The Fed posts the data, but it's in an unfriendly format. So we saved it as a csv file, which we read in below. The variables are yields: SVENYnn is the yield for maturity nn years. End of explanation """ fig, ax = plt.subplots() ylds.plot(ax=ax) ax.set_title('US Treasury Yields') ax.set_ylabel('Yield') ax.set_xlabel('Maturity in Years') ybar = ylds.mean(axis=1) ybar.plot(ax=ax, color='black', linewidth=3, linestyle='dashed') """ Explanation: With the dataframe ylds: Create figure and axis objects Use the axis object to plot the yield curve for all the years in the dataframe. Add a title and axis labels. Explain what you see: What happened to the yield curve over the past six years? Challenging. Compute the mean yield for each maturity. Plot them on the same graph in black. End of explanation """
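The resample('A', how='last') call above is what collapses the daily GSW quotes to end-of-year curves; here is a minimal sketch of that step on dummy data (how='last' is an older pandas spelling — current versions write .resample('A').last()):

```python
import pandas as pd

# two years of daily observations, values just count the days
idx = pd.date_range('2010-01-01', '2011-12-31', freq='D')
daily = pd.DataFrame({'y': range(len(idx))}, index=idx)

# collapse to one row per year, keeping the last observation of each year
annual = daily.resample('A').last()
print(annual)
```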
walkon302/CDIPS_Recommender
notebooks/Plotting_Sequences_in_low_dimensions.ipynb
apache-2.0
# our lib from lib.resnet50 import ResNet50 from lib.imagenet_utils import preprocess_input, decode_predictions #keras from keras.preprocessing import image from keras.models import Model import glob def preprocess_img(img_path): img = image.load_img(img_path, target_size=(224, 224)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) return(x,img) # instantiate the model base_model = ResNet50(include_top=False, weights='imagenet') #this will pull the weights from the folder # cut the model to lower levels only model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output) user_id = 106144465 #get images folder = '../data_img_sample_item_view_sequences/' img_files = glob.glob(folder+'*'+str(user_id)+'*') print(img_files) # make features trajectory_features = np.empty((len(img_files),2048)) for i,img_file in enumerate(img_files): x,img = preprocess_img(img_file) # preprocess trajectory_features[i,:] = model.predict(x)[0,0,0,:] red_traj = np.dot(trajectory_features,red_weights) print('target class') plt.figure(figsize=(12,6)) len_seq = len(img_files) fig,axes = plt.subplots(2,len_seq) # make color color_red_black = pd.Series(red_traj>0).map({False:'k',True:'r'}).as_matrix() for i in range(len_seq): img = image.load_img(img_files[i], target_size=(224, 224)) # images axes[0,i].imshow(img) axes[0,i].set_xticklabels([]) #axes[0,i].get_xaxis().set_visible(False) axes[0,i].get_xaxis().set_ticks([]) axes[0,i].get_yaxis().set_visible(False) if i<(len_seq-1): axes[0,i].set_xlabel('view '+str(i)) else: axes[0,i].set_xlabel('buy') # bar axes[1,i].bar(0,red_traj[i],color=color_red_black[i]) axes[1,i].set_ylim([-10,5]) axes[1,i].get_xaxis().set_visible(False) axes[1,i].axhline(y=0,linestyle='--',color='w') if i==0: print('here') axes[1,i].set_ylabel('red classification') else: axes[1,i].get_yaxis().set_visible(False) sns.despine() savefile = '../figures/example_sequence_interpretable_features_ui_'+str(user_id)+'.png' 
plt.savefig(savefile,dpi=300) reload(src.s3_data_management) from src import s3_data_management s3_data_management.push_results_to_s3(os.path.basename(savefile),savefile) """ Explanation: Plot Trajectories with Pictures take dimension (e.g. red) that I've trained the nn features to classify and plot sequences in that dimension. use sequences that have images End of explanation """ # load weights from the nn red_weights = np.loadtxt('../data_nn_features/class_weights_LR_redpink.txt') # load smaller user behavior dataset user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl') user_sample = user_profile.user_id.unique() print(len(user_profile)) print(len(user_sample)) user_profile.head() # read nn features spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl") spu_fea.head() # sample users size = 10 np.random.seed(1000) user_ids = np.random.choice(user_profile.user_id.unique(),size=size) fig,axes = plt.subplots(size,1,figsize=(16,3*size),sharex=True,sharey=True) for ui,user_id in enumerate(user_ids): # get his trajectory trajectory = user_profile.loc[user_profile.user_id==user_id,] # get trajectory features (make a separate function # ) trajectory_features = np.empty((len(trajectory),2048)) for i,(index,row) in enumerate(trajectory.iterrows()): trajectory_features[i,:] = spu_fea.loc[spu_fea.spu_id==row['view_spu'],'features'].as_matrix()[0] # project onto red dimension red_traj = np.dot(trajectory_features,red_weights) # plot axes[ui].plot(np.arange(len(red_traj)),red_traj) axes[ui].axhline(y=0,linestyle='--',color='k') axes[ui].set_ylabel('red features') sns.despine() plt.xlabel('positition in sequence') savefile = '../figures/example_sequences_red_10_users.png' plt.savefig(savefile,dpi=300) from sklearn.decomposition import PCA trajectory.head() #spu_fea.spu_id['spu_'] # read nn features spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl") # reduce dimensionality 
pca = pickle.load(open('../data_nn_features/pca_all_items_sample1000.pkl','rb')) plt.plot(red_traj,np.arange(len(red_traj))) plt.xlabel('red features') plt.ylabel('positition in sequence') sns.despine() len(spu_fea['features'].as_matrix()[0]) #spu_fea[row['view_spu']] projection = pca.transform(X_item_feature[0,:].reshape(-1,1).T) projection.shape plt.plot(np.arange(100),projection[0,0:100]) plt.xlabel('component') plt.ylabel('projection') sns.despine() # sample users size = 10 np.random.seed(1000) user_ids = np.random.choice(user_profile.user_id.unique(),size=size) fig,axes = plt.subplots(size,1,figsize=(16,3*size),sharex=True,sharey=True) for ui,user_id in enumerate(user_ids): # get his trajectory trajectory = user_profile.loc[user_profile.user_id==user_id,] # get trajectory features (make a separate function # ) trajectory_features = np.empty((len(trajectory),2048)) for i,(index,row) in enumerate(trajectory.iterrows()): trajectory_features[i,:] = spu_fea.loc[spu_fea.spu_id==row['view_spu'],'features'].as_matrix()[0] # project onto pca dimension projected_traj = pca.transform(trajectory_features) # get first dimension traj_PC1 = projected_traj[:,0] traj_PC2 = projected_traj[:,1] traj_PC3 = projected_traj[:,2] # plot axes[ui].plot(traj_PC1,label='PC1') axes[ui].plot(traj_PC2,label='PC2') axes[ui].plot(traj_PC3,label='PC3') plt.legend() axes[ui].axhline(y=0,linestyle='--',color='k') axes[ui].set_ylabel('red features') sns.despine() plt.xlabel('positition in sequence') savefile = '../figures/example_sequences_PCA_10_users.png' plt.savefig(savefile,dpi=300) """ Explanation: Plot Trajectories from User Profile Eval Dataset same as above, but without images. 
End of explanation
"""
%%bash
#jupyter nbconvert --to slides Plotting_Sequences_in_low_dimensions.ipynb && mv Plotting_Sequences_in_low_dimensions.slides.html ../notebook_slides/Plotting_Sequences_in_low_dimensions_v1.slides.html
jupyter nbconvert --to html Plotting_Sequences_in_low_dimensions.ipynb && mv Plotting_Sequences_in_low_dimensions.html ../notebook_htmls/Plotting_Sequences_in_low_dimensions_v1.html
cp Plotting_Sequences_in_low_dimensions.ipynb ../notebook_versions/Plotting_Sequences_in_low_dimensions_v1.ipynb
"""
Explanation: Save
End of explanation
"""
ericmjl/systems-microbiology-hiv
Problem Set.ipynb
mit
# This cell loads the data and cleans it for you, and log10 transforms the drug resistance values.
# Remember to run this cell if you want to have the data loaded into memory.
DATA_HANDLE = 'drug_data/hiv-protease-data.csv' # specify the relative path to the protease drug resistance data
N_DATA = 8 # specify the number of columns in the CSV file that are drug resistance measurements.
CONSENSUS = 'sequences/hiv-protease-consensus.fasta' # specify the relative path to the HIV protease consensus sequence

data, drug_cols, feat_cols = cf.read_data(DATA_HANDLE, N_DATA)
consensus_map = cf.read_consensus(CONSENSUS)
data = cf.clean_data(data, feat_cols, consensus_map)
for name in drug_cols:
    data[name] = data[name].apply(np.log10)
data.head()
"""
Complete the function below to compute the correlation score.
Use the scipy.stats.pearsonr(x, y) function to find the correlation score between two arrays of things.
You do not need to type the whole name, as I have imported the pearsonr name for you, so you only have to do:
    pearsonr(x, y)
Procedure:
1. Select two columns' names to compare.
2. Make sure to drop NaN values. The pearsonr function cannot deal with NaN values. (Refer to the Lecture notebook if you forgot how to do this.)
3. Pass the data in to pearsonr().
"""
def corr_score(drug1, drug2):
    ### BEGIN SOLUTION
    # Get the subset of data, while dropping columns that have NaN in them.

    # Return the pearsonr score.
    return pearsonr(____________, ____________)
    ### END SOLUTION

assert corr_score('IDV', 'FPV') == (0.79921991532901282, 2.6346448659104859e-306)
assert corr_score('ATV', 'FPV') == (0.82009597442033089, 2.5199367322520278e-231)
assert corr_score('NFV', 'DRV') == (0.69148264851159791, 4.0640711263961111e-82)
assert corr_score('LPV', 'SQV') == (0.76682619729899326, 4.2705737581002648e-234)
"""
Explanation: Problem Set on Machine Learning
Problem 1
Identify an academic literature reference that describes the PhenoSense assay.
Paste the URL to the PubMed article below, and write a 1-2 sentence summary on what is measured in the assay, and how it relates to drug resistance. Compare and contrast it with the plaque reduction assay as mentioned in the literature - what would be one advantage of the plaque reduction assay that is lacking in PhenoSense, and vice versa? Answer Double-click on this cell to type in your answer. Use Markdown formatting if you'd like. A new paragraph is delineated by having a line in between them. You can bold or italicize text. Bulleted Lists are done this way. 4 spaces for indents. Numbered Lists are done this way. 4 spaces for indents. The numbering is automatically parsed! Leave the answer at the top so Claire can know where your answer is! Problem 2 Write code below to calculate the correlation between two drugs' resistance profiles. Identify the protease drugs for which the two drugs' resistance values are correlated. Speculate as to why they would be correlated. End of explanation """ def return_cleaned_data(drug_name, data): # Select the subsets of columns of interest. cols_of_interest = [] cols_of_interest.append(drug_name) cols_of_interest.extend(feat_cols) subset = data[cols_of_interest].dropna() Y = subset[drug_name] X = subset[feat_cols] # Binarize the columns. lb = LabelBinarizer() lb.fit(list('CHIMSVAGLPTRFYWDNEQK')) X_binarized = pd.DataFrame() for col in X.columns: binarized_cols = lb.transform(X[col]) for i, c in enumerate(lb.classes_): X_binarized[col + '_' + c] = binarized_cols[:,i] return X_binarized, Y X_binarized, Y = return_cleaned_data('FPV', data) len(X_binarized), len(Y) num_estimators = [_________] # fill in the list of estimators to try here. models = {'Random Forest':RandomForestRegressor, } # fill in the other models here # Initialize a dictionary to hold the models' MSE values. 
mses = dict() for model_name, model in models.items(): mses[model_name] = dict() for n in num_estimators: mses[model_name][n] = 0 # Iterate over the models, and number of estimators. for model_name, model in models.items(): for n_est in num_estimators: print(model_name, n_est) ### Begin Here # Set up the cross-validation iterator # Initialize the model # Collect the cross-validation scores. Remember that mse will be negative, and needs to # be transformed to be positive. ### End Here # Store the mean MSEs. mses[model_name][n_est] = np.mean(-cv_scores) # When you're done, run the following cell to make your plot. pd.DataFrame(mses).plot() plt.xlabel('Num Estimators') plt.ylabel('MSE') """ Explanation: Question: Which two drugs are most correlated? Answer Question: Why might they be correlated? (Hint: you can look online for what they look like.) Answer Problem 3 Fill in the code below to plot the relationship between number of estimators (X-axis) and the MSE value for each of the estimators. Try 10, 30, 50, 80, 100, 300, 500 and 800 estimators. Use the ShuffleSplit iterator with cross-validation. Use mean of at least 5 cross-validated MSE scores. End of explanation """ # Load in the data and binarize it. proteases = [s for s in SeqIO.parse('sequences/HIV1-protease.fasta', 'fasta') if len(s) == 99] alignment = MultipleSeqAlignment(proteases) proteases_df = pd.DataFrame(np.array([list(rec) for rec in alignment], str)) proteases_df.index = [s.id for s in proteases] proteases_df.columns = [i for i in range(1, 100)] X_global = cf.binarize_seqfeature(proteases_df) # Train your model here, with optimized parameters for best MSE minimization. ### BEGIN model = ________________(__________) # put your best model here, with optimized parameters. model.fit(______________) preds = model.predict(______________) plt.hist(preds) ### END """ Explanation: Question: Given the data above, consider the following question from the viewpoint of a data scientist/data analyst. 
What factors do you need to consider when tweaking model parameters? Answer Problem 4 Pick the best model from above, and re-train it on the dataset again. Refer to the Lecture notebook for a version of the code that may help here! Now, use it to make predictions on the global HIV protease dataset. Plot the global distribution. End of explanation """
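The binarization used above (LabelBinarizer in `return_cleaned_data`, and `cf.binarize_seqfeature` here) maps each amino-acid letter to a 20-element 0/1 indicator vector, one row per sequence position. The same transform in plain numpy, on a toy 4-letter sequence:

```python
import numpy as np

classes = list('CHIMSVAGLPTRFYWDNEQK')  # the 20 amino acids, as used above
seq = list('CHIC')                      # a toy sequence "column"

# One row per position, one 0/1 column per amino acid class.
binarized = np.array([[1 if aa == c else 0 for c in classes] for aa in seq])
print(binarized.shape)  # (4, 20); each row contains exactly one 1
```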
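Problem 3's ShuffleSplit cross-validation can also be pictured without sklearn: repeatedly draw a random train/test partition, fit on one part, score on the held-out part, and average. A numpy-only sketch using a deliberately dumb "model" (predict the training mean) so it stays self-contained:

```python
import numpy as np

def shuffle_split_mse(y, n_splits=5, test_size=0.2, seed=0):
    """Average held-out MSE of a mean-predictor over random splits."""
    rng = np.random.RandomState(seed)
    n = len(y)
    n_test = int(n * test_size)
    mses = []
    for _ in range(n_splits):
        idx = rng.permutation(n)            # fresh random shuffle each split
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        pred = y[train_idx].mean()          # stand-in for model.fit / model.predict
        mses.append(np.mean((y[test_idx] - pred) ** 2))
    return np.mean(mses)

y = np.random.RandomState(1).randn(200)
print(shuffle_split_mse(y))  # close to the variance of y
```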
whitead/numerical_stats
unit_9/hw_2017/problem_set_2.ipynb
gpl-3.0
from scipy.integrate import quad
import numpy as np
quad(np.sin, 0, np.pi)[0]
"""
Explanation: Problem 1
Instructions
Evaluate the following definite integrals using the quad function and the lambda keyword. You may only report your answer in Python and you should only print the integral area and nothing else. Do not define a function; use lambda to define your integrands if necessary.
1.1
$$ \int_0^\pi \sin x\, dx $$
End of explanation
"""
quad(np.log, 0, 1)[0]
"""
Explanation: 1.2
$$ \int_0^1 \ln x\, dx $$
End of explanation
"""
quad(lambda x: x**2, -1, 1)[0]
"""
Explanation: 1.3
$$ \int_{-1}^1 x^2 \, dx $$
End of explanation
"""
quad(lambda y: y**3 - 2 * y, 0, 10)[0]
"""
Explanation: 1.4
$$ \int_0^{10} y^3 - 2 y \, dy $$
End of explanation
"""
import matplotlib.pyplot as plt
def my_cdf(x):
    '''Returns the cumulative distribution function for a standard normal from -inf to x
    Args:
        x: the upper bound of definite integral
    Returns:
        The probability of the interval
    '''
    #use np.inf for lower bound
    return quad(lambda x: 1 / np.sqrt(2 * np.pi) * np.exp(-x**2 / 2), -np.inf, x)[0]
#need to vectorize it
v_cdf = np.vectorize(my_cdf)
#plot from -3 to 3 with 1000 points
x = np.linspace(-3,3,1000)
plt.plot(x, v_cdf(x))
plt.xlabel('$a$')
plt.ylabel('$P(-\infty < x < a)$')
plt.show()
"""
Explanation: Problem 2
Instruction
Answer the following questions in Python
2.1 Create and plot a function which integrates the standard normal distribution from $-\infty$ to the given argument. Plot from $-3$ to $3$. You may not use scipy.stats and you should evaluate the integral yourself using quad.
End of explanation
"""
def fxn(x):
    if np.abs(x) < 1:
        return x**2
    return x
#need to vectorize it to use with numpy
v_fxn = np.vectorize(fxn)
#integrate it
A = quad(v_fxn, -3, 3)[0]
print('The integral is {}'.format(A))
x = np.linspace(-3, 3, 1000)
plt.plot(x, v_fxn(x))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
"""
Explanation: 2.2
Define the following piecewise function in Python, plot it and integrate it from -3 to 3:
$$ f(x) = \; \left\{ \begin{array}{llr} x^2 & \textrm{if} & |x| < 1\\ x & \textrm{otherwise} & \\ \end{array}\right. $$
End of explanation
"""
def inner_integral(x):
    return quad(lambda y: x**2 - y**2, 0, x)[0]
quad(inner_integral, 0, 3.5)[0]
"""
Explanation: 2.3
Evaluate the following integral:
$$ \int_0^{3.5}\int_0^x x^2 - y^2 \,dy\, dx $$
End of explanation
"""
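The nested quad result in 2.3 can be checked by hand: the inner integral is $x^3 - x^3/3 = \tfrac{2}{3}x^3$, so the double integral equals $\tfrac{2}{3}\cdot\tfrac{3.5^4}{4} = 3.5^4/6 \approx 25.0104$. A crude midpoint Riemann sum (no scipy needed) lands on the same number:

```python
import numpy as np

exact = 3.5 ** 4 / 6  # closed form of the double integral in 2.3

n = 400
xs = (np.arange(n) + 0.5) * 3.5 / n         # midpoints in x
total = 0.0
for x in xs:
    ys = (np.arange(n) + 0.5) * x / n       # midpoints in y, 0 <= y <= x
    inner = np.sum(x**2 - ys**2) * (x / n)  # inner integral at this x
    total += inner * (3.5 / n)

print(exact, total)  # both ≈ 25.0104
```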
Dataweekends/odsc_intro_to_data_science
Titanic Survival Workshop.ipynb
mit
import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Predicting survival of Titanic Passengers This notebook explores a dataset containing information of passengers of the Titanic. The dataset can be downloaded from Kaggle Tutorial goals Explore the dataset Build a simple predictive modeling Iterate and improve your score Optional: upload your prediction to Kaggle using the test dataset How to follow along: git clone https://github.com/Dataweekends/odsc_intro_to_data_science cd odsc_intro_to_data_science ipython notebook We start by importing the necessary libraries: End of explanation """ df = pd.read_csv('titanic-train.csv') """ Explanation: 1) Explore the dataset Numerical exploration Load the csv file into memory using Pandas Describe each attribute is it discrete? is it continuous? is it a number? is it text? Identify the target Check if any values are missing Load the csv file into memory using Pandas End of explanation """ df.head(3) """ Explanation: What's the content of df ? End of explanation """ df.info() """ Explanation: Describe each attribute (is it discrete? is it continuous? is it a number? is it text?) End of explanation """ df['Pclass'].value_counts() """ Explanation: Is Pclass a continuous or discrete class? End of explanation """ df['SibSp'].value_counts() df['Parch'].value_counts() """ Explanation: What about these: ('SibSp', 'Parch')? End of explanation """ df[['Ticket', 'Fare', 'Cabin']].head(3) df['Embarked'].value_counts() """ Explanation: and what about these: ('Ticket', 'Fare', 'Cabin', 'Embarked')? End of explanation """ df['Survived'].value_counts() """ Explanation: Identify the target What are we trying to predict? ah, yes... Survival! 
End of explanation """ df.info() """ Explanation: Check if any values are missing End of explanation """ df['Age'].plot(kind='hist', figsize=(10,6)) plt.title('Distribution of Age', size = '20') plt.xlabel('Age', size = '20') plt.ylabel('Number of passengers', size = '20') median_age = df['Age'].median() plt.axvline(median_age, color = 'r') median_age """ Explanation: Mental notes so far: Dataset contains 891 entries 1 Target column (Survived) 11 Features: 6 numerical, 5 text 1 useless (PassengerId) 3 categorical (Pclass, Sex, Embarked) 4 numerical, > 0 (Age, SibSp, Parch, Fare) 3 not sure how to treat (Name, Ticket, Cabin) Age is only available for 714 passengers Cabin is only available for 204 passengers Embarked is missing for 2 passengers Visual exploration plot the distribution of Age impute the missing values for Age using the median Age check the influence of Age, Sex and Class on Survival Plot the distribution of Age End of explanation """ df['Age'].fillna(median_age, inplace = True) df.info() """ Explanation: impute the missing values for Age using the median Age End of explanation """ df[df['Survived']==1]['Age'].plot(kind='hist', bins = 10, range = (0,100), figsize=(10,6), alpha = 0.3, color = 'g') df[df['Survived']==0]['Age'].plot(kind='hist', bins = 10, range = (0,100), figsize=(10,6), alpha = 0.3, color = 'r') plt.title('Distribution of Age', size = '20') plt.xlabel('Age', size = '20') plt.ylabel('Number of passengers', size = '20') plt.legend(['Survived', 'Dead']) plt.show() """ Explanation: check the influence of Age End of explanation """ survival_by_gender = df[['Sex','Survived']].pivot_table(columns = ['Survived'], index = ['Sex'], aggfunc=len) survival_by_gender survival_by_gender.plot(kind = 'bar', stacked = True) plt.show() """ Explanation: Check the influence of Sex on Survival End of explanation """ survival_by_Pclass = df[['Pclass','Survived']].pivot_table(columns = ['Survived'], index = ['Pclass'], aggfunc=len) survival_by_Pclass 
survival_by_Pclass.plot(kind = 'bar', stacked = True) plt.show() """ Explanation: Check the influence of Pclass on Survival End of explanation """ df['Male'] = df['Sex'].map({'male': 1, 'female': 0}) df[['Sex', 'Male']].head() """ Explanation: Ok, so, Age and Pclass seem to have some influence on survival rate. Let's build a simple model to test that Define a new feature called "Male" that is 1 if Sex = 'male' and 0 otherwise End of explanation """ actual_dead = len(df[df['Survived'] == 0]) total_passengers = len(df) ratio_of_dead = actual_dead / float(total_passengers) print "If I predict everybody dies, I'm correct %0.1f %% of the time" % (100 * ratio_of_dead) df['Survived'].value_counts() """ Explanation: Define simplest model as benchmark The simplest model is a model that predicts 0 for everybody, i.e. no survival. How good is it? End of explanation """ X = df[['Male', 'Pclass', 'Age']] y = df['Survived'] """ Explanation: We need to do better than that Define features (X) and target (y) variables End of explanation """ from sklearn.tree import DecisionTreeClassifier model = DecisionTreeClassifier(random_state=0) model """ Explanation: Initialize a decision tree model End of explanation """ from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=0) """ Explanation: Split the features and the target into a Train and a Test subsets. Ratio should be 80/20 End of explanation """ model.fit(X_train, y_train) """ Explanation: Train the model End of explanation """ my_score = model.score(X_test, y_test) print "Classification Score: %0.2f" % my_score """ Explanation: Calculate the model score End of explanation """ from sklearn.metrics import confusion_matrix y_pred = model.predict(X_test) print "\n=======confusion matrix==========" print confusion_matrix(y_test, y_pred) """ Explanation: Print the confusion matrix for the decision tree model End of explanation """
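The confusion matrix printed above holds four counts — true negatives, false positives, false negatives, true positives — and accuracy, precision, and recall all fall out of them. A self-contained numpy sketch with made-up labels (not the actual Titanic split):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tp = int(np.sum((y_true == 1) & (y_pred == 1)))
tn = int(np.sum((y_true == 0) & (y_pred == 0)))
fp = int(np.sum((y_true == 0) & (y_pred == 1)))
fn = int(np.sum((y_true == 1) & (y_pred == 0)))

accuracy = float(tp + tn) / len(y_true)
precision = float(tp) / (tp + fp)
recall = float(tp) / (tp + fn)
print(accuracy, precision, recall)  # 0.75 0.75 0.75 for these labels
```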
xpharry/Udacity-DLFoudation
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation """ data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() """ Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation """ rides[:24*10].plot(x='dteday', y='cnt') """ Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. 
End of explanation """ dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() """ Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation """ quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std """ Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation """ # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] """ Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. 
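The scaling loop above is a plain z-score: subtract the mean, divide by the standard deviation, and stash (mean, std) so predictions can be converted back to real ridership counts later. In isolation, with made-up numbers:

```python
import numpy as np

cnt = np.array([16.0, 40.0, 32.0, 13.0, 1.0])  # made-up hourly counts
mean, std = cnt.mean(), cnt.std()

scaled = (cnt - mean) / std     # what the network trains on
restored = scaled * std + mean  # what we report as a prediction

print(scaled.mean(), scaled.std())  # ≈ 0.0 and 1.0
```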
End of explanation """ # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] """ Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation """ class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = self.sigmoid def sigmoid(self, x): return 1 / (1 + np.exp(-x)) def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer final_outputs = self.activation_function(final_inputs) # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error output_errors = targets - final_outputs # Output 
layer error is the difference between desired target and actual output.

        # TODO: Backpropagated error
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
        hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients

        # TODO: Update the weights
        self.weights_hidden_to_output += self.lr * np.dot(hidden_outputs, output_errors).T # update hidden-to-output weights with gradient descent step
        self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T) # update input-to-hidden weights with gradient descent step

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T

        #### Implement the forward pass here ####
        # TODO: Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer

        # TODO: Output layer
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
        final_outputs = self.activation_function(final_inputs) # signals from final output layer

        return final_outputs

def MSE(y, Y):
    return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function.
We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation """ import sys ### Set the hyperparameters here ### epochs = 1000 learning_rate = 0.05 hidden_nodes = 3 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... 
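One identity the backward pass leans on: the sigmoid's derivative can be written from its own output, $\sigma'(x) = \sigma(x)\,(1-\sigma(x))$, which is why `hidden_outputs * (1 - hidden_outputs)` needs no extra exp calls. A quick numerical check (illustration only, not the project solution):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4, 4, 9)
s = sigmoid(x)

analytic = s * (1 - s)                                 # derivative via the identity
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference

print(np.max(np.abs(analytic - numeric)))  # tiny: the two agree
```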
Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5) """ Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. 
Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation """ fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) """ Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. 
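The learning-rate advice above ("too big and the weights tend to explode") is easy to demonstrate on a one-dimensional quadratic loss $L(w)=w^2$: plain gradient descent shrinks $w$ for small steps but diverges once the per-step factor $|1-2\,\mathrm{lr}|$ exceeds 1 (i.e. lr > 1). A toy sketch, unrelated to the bike data:

```python
def descend(lr, steps=50, w0=1.0):
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of L(w) = w**2 is 2w
    return w

small = descend(lr=0.1)  # decays toward the minimum at w = 0
big = descend(lr=1.5)    # overshoots harder every step and blows up
print(small, big)
```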
End of explanation """ import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) """ Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. 
When you want to render the text, press control + enter Your answer below Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation """
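The expected value in test_run (≈ 0.09998924) can be reproduced by hand: push the test input through sigmoid hidden units and an identity output node — the $f(x)=x$ output activation the project description calls for. A standalone numpy check:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]])  # (2, 3), as in the tests
w_h_o = np.array([[0.3, -0.1]])                          # (1, 2)

x = np.array([[0.5, -0.2, 0.1]]).T  # (3, 1) column vector
hidden = sigmoid(np.dot(w_i_h, x))  # (2, 1)
output = np.dot(w_h_o, hidden)      # identity activation on the output node

print(output.item())  # ≈ 0.09998924
```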
shapiromatron/bmds-server
scripts/tdist-approximation.ipynb
mit
%matplotlib inline import json import numpy as np import pandas as pd from scipy.stats import t """ Explanation: tdist estimation proof in javascript This notebook should act as a proof to the reliability of the javascript estimation. This method is used for plotting confidence intervals on group summary data. Since it's only used for plotting, it doesn't need to be extremely precise. End of explanation """ dfs = np.arange(1,350) lowers = t.ppf(0.025, dfs) uppers = t.ppf(0.975, dfs) # prove that lowers == uppers *-1; so we dont' need to capture both assert np.allclose(lowers* -1, uppers) is True df = pd.DataFrame(data=dict(df=dfs, uppers=t.ppf(0.975, dfs))) df.plot("df", "uppers", logx=True) """ Explanation: Generate "truth" End of explanation """ def inv_tdist_05(df): if df < 1 or df >350: raise ValueError() if df == 1: return 12.7062047361747; if (df < 12): b = [ 7.9703237683e-5, -3.5145890027e-3, 0.063259191874, -0.5963723075, 3.129413441, -8.8538894383, 13.358101926, ]; elif (df < 62): b = [ 1.1184055716e-10, -2.7885328039e-8, 2.8618499662e-6, -1.5585120701e-4, 4.8300645273e-3, -0.084316656676, 2.7109288893, ]; else: b = [ 5.1474329765e-16, -7.262226388e-13, 4.2142967681e-10, -1.2973354626e-7, 2.275308052e-5, -2.2594979441e-3, 2.0766977669, ]; p = np.poly1d(b) return p(df) simulated = [inv_tdist_05(i) for i in dfs] df2 = df.copy() df2.loc[:, "simulated"] = simulated df2.plot("df", logx=True) """ Explanation: Translate javascript to python: As shown in permalink: source javascript inv_tdist_05 = function(df) { // Calculates the inverse t-distribution using a piecewise linear form for // the degrees of freedom specified. Assumes a two-tailed distribution with // an alpha of 0.05. Based on curve-fitting using Excel's T.INV.2T function // with a maximum absolute error of 0.00924 and percent error of 0.33%. 
// // Roughly equivalent to scipy.stats.t.ppf(0.975, df) var b; if (df < 1) { return NaN; } else if (df == 1) { return 12.7062047361747; } else if (df < 12) { b = [ 7.9703237683e-5, -3.5145890027e-3, 0.063259191874, -0.5963723075, 3.129413441, -8.8538894383, 13.358101926, ]; } else if (df < 62) { b = [ 1.1184055716e-10, -2.7885328039e-8, 2.8618499662e-6, -1.5585120701e-4, 4.8300645273e-3, -0.084316656676, 2.7109288893, ]; } else { b = [ 5.1474329765e-16, -7.262226388e-13, 4.2142967681e-10, -1.2973354626e-7, 2.275308052e-5, -2.2594979441e-3, 2.0766977669, ]; if (df > 350) { console.warn("Extrapolating beyond inv_tdist_05 regression range (N>350)."); return undefined; } } return ( b[0] * Math.pow(df, 6) + b[1] * Math.pow(df, 5) + b[2] * Math.pow(df, 4) + b[3] * Math.pow(df, 3) + b[4] * Math.pow(df, 2) + b[5] * Math.pow(df, 1) + b[6] ); } End of explanation """ df2.loc[:, "abs_err"] = (df2.simulated - df2.uppers) df2.loc[:, "rel_err"] = (df2.simulated - df2.uppers) / df2.uppers df2.plot.scatter("df", "abs_err", ylabel="Absolute error") df2.plot.scatter("df", "rel_err", ylabel="Relative error (%)") print(f"Maximum relative error: {df2.rel_err.abs().max():.2%}") """ Explanation: No difference visually, that's a good sign... Calculating absolute and relative errors: End of explanation """ df2.set_index('df')[:30] df2.describe() """ Explanation: That should work!
Showing the first 30 rows: End of explanation """ txt = '{"values":[12.7062047361747,4.301779582509312,3.1871553520088085,2.7672099913287695,2.576719698609363,2.4503335932468495,2.35906152566238,2.3041482182027515,2.2683337371729486,2.2245002959999134,2.2017053895295042,2.1780859284726732,2.1606116586136483,2.145318168287048,2.13192540154941,2.1201830443959344,2.1098683461365377,2.1007840212971045,2.0927562320459434,2.08563265114544,2.0792806054289166,2.073585299802687,2.068448121773319,2.0637850265000908,2.0595250023726566,2.0556086171139034,2.0519866444080206,2.0486187710537602,2.0454723846429066,2.04252144176394,2.0397454167309066,2.0371283308374872,2.03465786213627,2.032324535743218,2.0301209946673473,2.0280413511655966,2.026080618622904,2.024234223957484,2.022497600551303,2.0208658617057607,2.019333554622569,2.0178944949098296,2.0165416816133206,2.0152672927729807,2.014062761504584,2.012918932606644,2.0118262996924767,2.010775322847504,2.00975682681175,2.0087624796875003,2.0077853521722373,2.006820557316689,2.0058659708081494,2.00492303177898,2.0039976241402675,2.0031010384407653,2.002251014250958,2.0014728630723946,2.000800671772158,2.0002785865425583,1.9999621773861023,1.9987437069143683,1.9981671078955452,1.997603659522643,1.9970530657624572,1.9965150358777874,1.995989284364195,1.9954755308871324,1.9949735002194409,1.9944829221792224,1.994003531568078,1.9935350681097201,1.9930772763889537,1.9926299057910293,1.9921927104413641,1.9917654491456378,1.9913478853302549,1.99093978698318,1.9905409265951435,1.990151081101217,1.989770031822761,1.9893975644097397,1.9890334687834117,1.9886775390793867,1.9883295735910542,1.987989374713385,1.9876567488870995,1.9873315065432098,1.9870134620479303,1.9867024336479606,1.9863982434161382,1.9861007171974614,1.985809684555483,1.9855249787190759,1.9852464365295674,1.9849738983882452,1.984707208204234,1.9844462133427427,1.9841907645736814,1.9839407160206501,1.9836959251102977,1.9834562525220514,1.983221562138216,1.9829917209944465
,1.9827665992305876,1.9825460700418862,1.9823300096305743,1.9821182971578224,1.9819108146960625,1.9817074471816825,1.9815080823680926,1.9813126107791592,1.9811209256630125,1.9809329229462223,1.9807485011883463,1.9805675615368479,1.980390007682385,1.9802157458144691,1.9800446845774955,1.9798767350271438,1.979711810587149,1.9795498270064436,1.9793907023166695,1.9792343567900617,1.9790807128977006,1.9789296952681386,1.9787812306463928,1.978635247853311,1.9784916777453083,1.9783504531744742,1.9782115089490484,1.9780747817942703,1.9779402103135968,1.9778077349502912,1.977677297949384,1.9775488433200024,1.9774223167980716,1.9772976658093868,1.9771748394330546,1.977053788365307,1.976934464883684,1.9768168228115877,1.9767008174832077,1.976586405708816,1.9764735457404328,1.9763621972378638,1.9762523212351057,1.9761438801071263,1.9760368375370105,1.9759311584834822,1.9758268091487907,1.9757237569469732,1.975621970472485,1.9755214194692008,1.975422074799787,1.9753239084154448,1.9752268933260224,1.9751310035705003,1.9750362141878446,1.9749425011882331,1.9748498415246514,1.9747582130648573,1.9746675945637207,1.9745779656359284,1.9744893067290645,1.9744015990970583,1.9743148247740039,1.97422896654835,1.9741440079374613,1.9740599331625495,1.973976727123975,1.9738943753769196,1.97381286410743,1.973732180108831,1.9736523107585102,1.9735732439950733,1.9734949682958693,1.9734174726548872,1.9733407465610215,1.9732647799767125,1.973189563316951,1.9731150874286594,1.97304134357044,1.9729683233926956,1.9728960189181195,1.972824422522556,1.9727535269162348,1.9726833251253697,1.9726138104741346,1.9725449765670058,1.9724768172714762,1.972409326701141,1.972342499199152,1.972276329322045,1.9722108118239352,1.9721459416410856,1.9720817138768438,1.972018123786952,1.971955166765225,1.9718928383296002,1.9718311341085586,1.9717700498279147,1.9717095812979797,1.9716497244010922,1.9715904750795215,1.9715318293237418,1.971473783161076,1.9714163326447092,1.9713594738430755,1.9713032028296136,1.97124751
56728921,1.97119240842711,1.9711378771229593,1.971083917758868,1.9710305262926076,1.9709776986332717,1.970925430633627,1.970873718082835,1.970822556699542,1.970771942125342,1.9707218699186095,1.9706723355487021,1.970623334390535,1.9705748617195264,1.9705269127069098,1.9704794824154235,1.9704325657953645,1.9703861576810158,1.9703402527874445,1.970294845707671,1.9702499309102048,1.9702055027369578,1.9701615554015217,1.9701180829878193,1.9700750794491264,1.9700325386074633,1.9699904541533573,1.969948819645976,1.9699076285136317,1.9698668740546537,1.9698265494386356,1.9697866477080503,1.9697471617802351,1.9697080844497505,1.9696694083911057,1.9696311261618573,1.96959323020608,1.969555712858202,1.9695185663472183,1.96948178280127,1.9694453542525947,1.96940927264285,1.9693735298288046,1.9693381175884013,1.9693030276271912,1.9692682515851376,1.9692337810437914,1.969199607533835,1.9691657225430008,1.9691321175243561,1.969098783904959,1.969065713094892,1.9690328964966524,1.9690003255149293,1.9689679915667382,1.9689358860919326,1.9689040005640863,1.9688723265017445,1.9688408554800438,1.9688095791427087,1.9687784892144131,1.968747577513513,1.9687168359651541,1.9686862566147454,1.9686558316418057,1.9686255533741792,1.968595414302624,1.9685654070957685,1.968535524615441,1.9685057599323672,1.968476106342242,1.9684465573821681,1.9684171068474676,1.9683877488088617,1.9683584776300247,1.9683292879855072,1.9683001748790279,1.9682711336621366,1.9682421600532514,1.9682132501570626,1.9681844004843083,1.9681556079719194,1.9681268700035401,1.9680981844304126,1.9680695495926364,1.9680409643407955,1.9680124280579612,1.9679839406820596,1.9679555027286129,1.96792711531385,1.9678987801781909,1.9678704997100958,1.9678422769702895,1.9678141157163576,1.9677860204277073,1.9677579963309046,1.967730049425379,1.967702186509499,1.967674415207022,1.9676467439939085,1.9676191822255102,1.9675917401641314,1.967564429006956,1.9675372609143478,1.967510249038523,1.9674834075525867,1.9674567516799493,1.967430
2977241083,1.9674040630987975,1.9673780663585156,1.9673523272294176,1.9673268666405792,1.9673017067556313,1.9672768710047754,1.9672523841171432,1.9672282721535608,1.9672045625396537,1.9671812840993452,1.9671584670887095,1.9671361432302037,1.9671143457472664,1.9670931093992925,1.9670724705169695,1.9670524670379934,1.9670331385431519,1.967014526292775,1.96699667326356,1.9669796241857693,1.9669634255807922,1.9669481257990793,1.9669337750584557,1.9669204254827894,1.9669081311410483,1.9668969480867098,1.9668869343975561,1.9668781502158295,1.9668706577887685,1.9668645215095006,1.9668598079583202]}' df2.loc[:, "simulated_js"] = json.loads(txt)["values"] (df2.simulated - df2.simulated_js).abs().describe() """ Explanation: Confirm javascript implemenation is the same as python To generate in javascript: javascript console.log(JSON.stringify({values:_.range(1,350).map(inv_tdist_05)})) The output is copied below and compared to our python simulated data: End of explanation """
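As a footnote to the comparison above, the same piecewise polynomial fit can be evaluated in a vectorized way on the Python side. This is our own sketch (the name `inv_tdist_05_vec` is hypothetical, not from the original code); it reuses the coefficient sets from the notebook with `np.polyval`:

```python
import numpy as np

# Coefficient sets copied from the piecewise fit above (highest power first).
B_LOW = [7.9703237683e-5, -3.5145890027e-3, 0.063259191874,
         -0.5963723075, 3.129413441, -8.8538894383, 13.358101926]
B_MID = [1.1184055716e-10, -2.7885328039e-8, 2.8618499662e-6,
         -1.5585120701e-4, 4.8300645273e-3, -0.084316656676, 2.7109288893]
B_HIGH = [5.1474329765e-16, -7.262226388e-13, 4.2142967681e-10,
          -1.2973354626e-7, 2.275308052e-5, -2.2594979441e-3, 2.0766977669]

def inv_tdist_05_vec(dfs):
    """Vectorized evaluation of the piecewise fit (valid for 1 <= df <= 350)."""
    dfs = np.asarray(dfs, dtype=float)
    return np.piecewise(
        dfs,
        [dfs == 1, (dfs > 1) & (dfs < 12), (dfs >= 12) & (dfs < 62), dfs >= 62],
        [12.7062047361747,
         lambda d: np.polyval(B_LOW, d),
         lambda d: np.polyval(B_MID, d),
         lambda d: np.polyval(B_HIGH, d)],
    )
```

For example, `inv_tdist_05_vec([1, 100])` returns the exact df=1 value and a value within the fit's stated error of `t.ppf(0.975, 100)`.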
mathinmse/mathinmse.github.io
Lecture-22-Phase-Field-Basics.ipynb
mit
import matplotlib.pyplot as plt import numpy as np %matplotlib notebook def plot_p_and_g(): phi = np.linspace(-0.1, 1.1, 200) g = phi**2*(1-phi)**2 p = phi**3*(6*phi**2-15*phi+10) plt.figure(1, figsize=(12,6)) plt.subplot(121) plt.plot(phi, g, linewidth=1.0) plt.xlabel(r'$\phi$', fontsize=18) plt.ylabel(r'$g(\phi)$', fontsize=18) plt.subplot(122) plt.plot(phi, p, linewidth=1.0) plt.xlabel(r'$\phi$', fontsize=18) plt.ylabel(r'$p(\phi)$', fontsize=18) return plot_p_and_g() """ Explanation: Lecture 22: Phase Field Models Sections Introduction Learning Goals On Your Own The Order Parameter In Class The Free Energy Functional The Equation of Motion Analytical Solution [Numerical Solution (FiPy)](#Numerical-Solution-(FiPy)) In 2D Homework Summary Looking Ahead Reading Assignments and Practice Introduction This workbook/lecture is derived from Boettinger, et al. in Annual Review of Materials Research, v32, p163-194 (2002). doi: 10.1146/annurev.matsci.32.101901.155803 The phase field method makes possible the study of complex microstructural morphologies such as dendritic and eutectic solidification as well as polycrystalline growth. The major contribution of the method is the introduction of an order parameter used to delineate phases such as solid/liquid, $\alpha~/~\beta$, etc. The concept of an order parameter is not new. However, smoothly varying this order parameter through an interphase interface frees us from tracking the interface position and applying boundary conditions at interfaces having complex morphologies. Top of Page Learning Goals Introduction to the idea of an "order parameter". Observe a practical use for the Calculus of Variations. Introduction to what is meant by a non-homogeneous thermodynamic system. Code a simple microstructure simulation.
Top of Page On Your Own Read this small excerpt from Boettinger's paper: The method employs a phase-field variable, e.g., $\phi$, which is a function of position and time, to describe whether the material is liquid or solid. The behavior of this variable is governed by an equation that is coupled to equations for heat and solute transport. Interfaces between liquid and solid are described by smooth but highly localized changes of this variable between fixed values that represent solid and liquid (in this review, 0 and 1, respectively). Therein is the key feature of phase field models. The order (or the phase) is described by a field variable ($\phi$) coupled to heat and mass transfer. The result is that complex interface shapes do not require tracking of the position of the interface. This may not have significance to you; however, there was a time when knowledge of the position of the interface was required for solidification calculations. This boundary condition made dendrite computation difficult if not impossible. The Order Parameter The order parameter can be thought of as an envelope around probability amplitudes of atomic positions. In the picture below we have a probability density of finding an atom at a particular position. In this picture $\phi = 0$ might be considered the solid, and $\phi = 1$ would be the liquid. Using the order parameter in this way makes it easier to calculate solidification microstructures - we no longer have to track the interface (you'll see how this works below). The shape of this interface is a balance between two forces: the energy increase for intermediate states between solid and liquid (from the bulk free energy), and the energy cost associated with steep gradients in the phase-field order parameter. Top of Page Review: Calculus of Variations The calculus of variations is a rich mathematical subject. There are many books on the topic.
One of the canonical problems in the subject area is to compute the shortest arc between two points on a plane. This is a good place to begin your study of the topic. For now I'll describe the major output of the CoV and the points relevant to phase field. The analogy between calculus and CoV is good. If finding the minimum of a function can be done by inspecting derivatives then the minimum of a functional can be found by inspecting the so-called 'variational derivative'. In particular the first chapter of Lev. D. Elsgolc's book "Calculus of Variations" presents this idea nicely. The CoV gives us the Euler-Lagrange equation. This is the main tool of CoV: $$ \frac{\delta F}{\delta \phi} = \frac{\partial F}{\partial \phi} - \frac{\partial}{\partial x} \frac{\partial F}{\partial \nabla \phi} = 0$$ The scalar and gradient terms in $\phi$ are treated as independent variables. This equation is telling us that the function that minimizes the functional is the solution to the above differential equation. Top of Page In Class The Free Energy Functional The Lagrangian is constructed from the free energy functional (and integrated over all space, V), which, in turn, is constructed from the bulk free energy and the gradient energy thus: $$L(\phi,\nabla\phi) = \int_V \Big[ ~~f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2~\Big]~ dV$$ Where L is the Lagrangian, the functional (hereafter F) is in the square brackets, V is the volume of the system, $\phi$ is the order parameter, T is the temperature, $\epsilon$ is the gradient energy coefficient and $\nabla$ has the usual meaning. $f(\phi,T)$ is the free energy density. This is often referred to as the 'bulk' free energy term and the terms in $\nabla\phi$ are the gradient energy terms. You may also hear these terms referred to as the 'non-classical' terms.
At equilibrium the variational derivatives must satisfy the following: $$ \frac{\delta F}{\delta \phi} = \frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi = 0$$ Recall the definition of the Euler-Lagrange equation: $$ \frac{\delta F}{\delta \phi} = \frac{\partial F}{\partial \phi} - \frac{\partial}{\partial x} \frac{\partial F}{\partial \nabla \phi} = 0$$ and the free energy functional: $$f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2$$ The first equation above tells us that the equilibrium profile $\phi(x,t)$ is unchanging in time. We will compute this interface profile below. To develop a kinetic expression we make an educated guess (also known as an "ansatz" in the phase field literature) about the relaxation of a system towards equilibrium. Top of Page The Equation of Motion We assume the following functional form for the equation of motion: $$\frac{\partial \phi}{\partial t} = - M_\phi \frac{\delta F}{\delta \phi}$$ This is the simplest expression that guarantees the free energy of the system will decrease over time. In this form the phase-field variable, $\phi$, is non-conserved. The conserved form takes the divergence of the expression above, as is done when expressing accumulation in a control volume (as in Fick's second law). Our equation of motion is therefore: $$\frac{\partial \phi}{\partial t} = - M_\phi \Big[\frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi \Big]$$ Top of Page Building the Bulk Free Energy '$f$' There are two so-called helper functions that homogenize the free energy: the interpolating function $p(\phi)$ and the double well function $g(\phi)$.
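Before plotting them, it is worth checking numerically why these two particular polynomials are chosen. The following is a small self-contained sketch (our own illustration, not part of the original lecture):

```python
def g(phi):  # double well: zero (a minimum) at both phi = 0 and phi = 1
    return phi**2 * (1 - phi)**2

def p(phi):  # smooth interpolant between the two phases
    return phi**3 * (6*phi**2 - 15*phi + 10)

# endpoint values: g vanishes at both wells, p runs from 0 to 1
assert g(0.0) == 0.0 and g(1.0) == 0.0
assert p(0.0) == 0.0 and p(1.0) == 1.0

# zero slope at the endpoints, so p shifts the bulk energy without
# moving the minima of g (finite-difference check)
h = 1e-6
assert abs((p(h) - p(0.0)) / h) < 1e-4
assert abs((p(1.0) - p(1.0 - h)) / h) < 1e-4
print("helper-function properties verified")
```

These endpoint properties are what make the homogenized free energy keep its minima exactly at the pure solid and pure liquid states.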
End of explanation """ %matplotlib notebook import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D from matplotlib.ticker import LinearLocator, FormatStrFormatter def plot_homogeneous_F(): plt.fig = plt.figure(2, figsize=(10,10)) plt.ax = plt.fig.gca(projection='3d') phi = np.linspace(0.0, 1.0, 100) temperature = np.linspace(0.0, 1.0, 100) phi,temperature = np.meshgrid(phi,temperature) W=30.0 L=1.0 Tm=0.5 g=phi**2*(1-phi)**2 p=phi**3*(6*phi**2-15*phi+10) f = W*g+L*p*(Tm-temperature)/Tm energyPlot = plt.ax.plot_surface(phi, temperature, f, label=None, cmap=plt.cm.coolwarm, rstride=5, cstride=5, alpha=0.5) energyPlot = plt.contour(phi, temperature, f,20) plt.clabel(energyPlot, inline=1, fontsize=10) plt.ax.set_xlabel('$\phi$') plt.ax.set_ylabel('T') plt.ax.set_zlabel('$f(\phi,t)$') return plot_homogeneous_F() """ Explanation: We start by using the ordinary free energy of the pure components: Pure A, liquid phase - $f_A^L(T)$ Pure A, solid phase - $f_A^S(T)$ As we will be limiting ourselves to a pure material at this time, these are the only two free energies we need. Near the melting point these free energies are often modeled as straight lines using the relationship for the Gibbs free energy: $$G = H - TS$$ taking H and S to be constants. Following conventions of the phase diagram modeling community we take the reference state of the component A to be the equilibrium phase at STP. If this were a metal like Cu then the reference state would be the FCC phase. For us, that will be the SOLID. This sets: $$f_A^S(T) = 0$$ Expanding the difference in free energy between the solid and the liquid around the melting point results in: $$f_A^L(T)-f_A^S(T) = L_A \frac{(T_M^A - T)}{T_M^A}$$ The next step is to homogenize the free energy for component A. 
We build the free energy $f(\phi,T)_A$ as follows: $$f(\phi,T)_A = W_A~g(\phi) + f_L p(\phi) + f_S (1-p(\phi))$$ so that: $$f(\phi,T)_A = W_A~g(\phi) + L_A \frac{(T_M^A - T)}{T_M^A}p(\phi)$$ Let us plot this and see what it looks like. End of explanation """ %matplotlib notebook import matplotlib.pyplot as plt import numpy as np from ipywidgets import interact, fixed fig = None def plot_equilibrium(W=500.0, epsilon=1.0): global fig if fig: plt.close(fig) fig = plt.figure() x = np.linspace(-1.0, 1.0, 200) phi = 0.5*(1+np.tanh(x*np.sqrt(2*W)/(2*epsilon))) plt.plot(x, phi, linewidth=1.0) plt.xlabel('$x$', fontsize=24) plt.ylabel('$\phi(x)$', fontsize=24) return interact(plot_equilibrium, W=(1,1000,10), epsilon=fixed(1.0)) """ Explanation: $$L(\phi,\nabla\phi) = \int_V \Big[ ~~f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2~\Big]~ dV$$ From here it should be clear that the free energy space is homogeneous in the order parameter and temperature. Now that the description of the bulk energy is complete we can return to the Euler-Lagrange equations and proceed to develop our equilibrium solution and our equations of motion. Top of Page Analytical Solution The full expression for the equation of motion is: $$\frac{\partial \phi}{\partial t} = - M_\phi \epsilon^2 \Big[\nabla^2\phi- \frac{2W_A}{\epsilon^2} \phi(1-\phi)(1-2\phi)\Big]-\frac{30 M_\phi L_A}{T_M^A}(T_M^A - T)\phi^2(1-\phi)^2$$ While this is correct - it is often not explicitly written out like this when solving the equations numerically. Further, it is better if you don't fully expand the derivatives when attempting the analytical solution. There is a fair bit of algebra and a few assumptions that enable the solution to the Euler-Lagrange equation above. I will state the procedure and leave out the gory detail. Remember we are after the expression that gives us $\phi(x)$. Keep this in mind as you read through the following bullet points. First - assume that you are at the melting temperature.
The rationale is that this is the only temperature where BOTH phases CO-EXIST. Any other temperature and it does not make sense to discuss an interface. This removes the second term on the RHS of the above expression. Second - you can use $\frac{d\phi}{dx}$ as an integrating factor to take the first integral of the Euler-Lagrange equation. Third - after evaluating the constant (C=0 is the answer, but why it is zero will test your reasoning skills) the equation is separable and can be integrated. The result is: $$\phi(x) = \frac{1}{2} \Big[ 1 + \tanh \Big( \frac{x}{2\delta} \Big) \Big]$$ where $\delta$ is related to the W and $\epsilon$ parameters. End of explanation """ %matplotlib osx from fipy import * L = 1. nx = 400 dx = L/nx mesh = Grid1D(dx=dx, nx=nx) phase = CellVariable(name="phase", mesh=mesh) viewer = MatplotlibViewer(vars=(phase,), datamin=-0.1, datamax=1.1, legend=None) """ Explanation: W and $\epsilon$ can be parameterized in terms of the surface energy and the interface thickness to make a connection with the physical world. Top of Page Numerical Solution (FiPy) End of explanation """ x = mesh.cellCenters phase.setValue(1.) phase.setValue(0., where=x > L/2) viewer.plot() """ Explanation: This cell sets the initial conditions. There is a helper attribute cellCenters that fetches a list of the x points. The setValue helper functions and the 'where' keyword help you to set the initial conditions. FiPy is linked to Matplotlib, and once you have created the viewer object you call .plot() to update. End of explanation """ import sympy as sp phi = sp.symbols('phi') sp.init_printing() ((1-phi)**2*(phi**2)).diff(phi).simplify() (phi**3*(6*phi**2-15*phi+10)).diff(phi).simplify() """ Explanation: Top of Page DIY: Code the terms to complete the phase field description.
$$\frac{\partial \phi}{\partial t} = - M_\phi \epsilon^2 \Big[\nabla^2\phi- \frac{2W_A}{\epsilon^2} \phi(1-\phi)(1-2\phi)\Big]-\frac{30 M_\phi L_A}{T_M^A}(T_M^A - T)\phi^2(1-\phi)^2$$ End of explanation """ eps_sqrd = 0.00025 M = 1.0 W = 0.5 Lv = 1. Tm = 1. T = 1.0 enthalpy = Lv*(Tm-T)/Tm S0 = W*2.0*phase*(phase-1.0)*(2*phase-1.0) + 30*phase**2*(phase**2-2*phase+1)*enthalpy """ Explanation: $$\frac{\partial \phi}{\partial t} = - M_\phi \Big[\frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi \Big]$$ $$f(\phi,T)_A = W_A~g(\phi) + L_A \frac{(T_M^A - T)}{T_M^A}p(\phi)$$ End of explanation """ eq = TransientTerm() == DiffusionTerm(coeff=eps_sqrd*M) - S0 for i in range(50): eq.solve(var=phase, dt=0.1) viewer.plot() """ Explanation: This is our general statement of a diffusive PDE. There is a transient term and a source term. Translate from the description of the phase field model above. End of explanation """ %matplotlib osx from fipy import * L = 1. nx = 200 dx = L/nx dy = L/nx mesh = Grid2D(dx=dx, dy=dx, nx=nx, ny=nx) phase = CellVariable(name="phase", mesh=mesh) x = mesh.cellCenters[0] y = mesh.cellCenters[1] phase.setValue(1.) x0 = 0.0 y0 = 0.0 #phase.setValue(0., where=( # ((x-x0)**2+(y-y0)**2 > L/3) & ((x-L)**2+(y-L)**2 > 0.2) # ) # ) phase.setValue(ExponentialNoiseVariable(mesh=mesh, mean=0.5)) viewer = Matplotlib2DGridViewer(vars=phase, datamin=0.0, datamax=1.0) viewer.plot() """ Explanation: Just re-execute this cell after you change parameters. You can execute it over and over until you are satisfied that you've reached equilibrium. You can try changing eps_sqrd, W, and T. Changing T from the melting temperature will result in a moving interface. This is where things get interesting! Top of Page In 2D This one is important. We will simulate a pair of curved particles (each with a different radius) at the melting temperature. What do you think will happen? End of explanation """ eps_sqrd = 0.00025 M = 1.0 W = 0.5 Lv = 1. Tm = 1. T = 1.
enthalpy = Lv*(Tm-T)/Tm S0 = W*2.0*phase*(phase-1.0)*(2*phase-1.0) + 30*phase**2*(phase**2-2*phase+1)*enthalpy eq = TransientTerm() == DiffusionTerm(coeff=eps_sqrd*M) - S0 for i in range(500): eq.solve(var=phase, dt=0.05) viewer.plot() """ Explanation: The strength of FiPy is that you can use the same code here in 2D as above. End of explanation """
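FiPy does the heavy lifting above, but the same 1D equation of motion at the melting temperature can be stepped explicitly with plain numpy. This is a minimal sketch of our own (M = 1, zero-flux boundaries, parameter values reused from the cells above, and a small explicit time step chosen for stability; not part of the original lecture):

```python
import numpy as np

def evolve_phase(nx=200, nsteps=200, dx=0.005, dt=1.0e-3,
                 eps_sqrd=0.00025, W=0.5):
    """Explicit Euler steps of dphi/dt = eps^2 * lap(phi) - df/dphi (M = 1, T = Tm)."""
    phi = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)  # solid on the left, liquid right
    for _ in range(nsteps):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        # bulk driving force df/dphi = W * g'(phi) = 2*W*phi*(1-phi)*(1-2*phi) at T = Tm
        dfdphi = 2.0 * W * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
        phi = phi + dt * (eps_sqrd * lap - dfdphi)
        phi[0], phi[-1] = phi[1], phi[-2]  # zero-flux boundary conditions
    return phi

profile = evolve_phase()
```

After a few hundred steps the initial step relaxes toward the tanh interface profile derived analytically above, while the bulk values stay pinned at 0 and 1.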
lhcb/opendata-project
Example-Analysis.ipynb
gpl-2.0
from __future__ import print_function from __future__ import division %pylab inline exec(open('Data/setup_example.py').read()) """ Explanation: Analysis of Nobel prize winners Welcome to the programming example page. This page shows an example analysis of Nobel prize winners. The coding commands and techniques that are demonstrated in this analysis are similar to those that are needed for your particle physics analysis. IMPORTANT: For every code box with code already in it, like the one below, you must click in it and press shift+enter to run the code. This is how you also run your own code. If the In [x]: to the left of a codebox changes to In [*]:, that means the code in that box is currently running. If you ever want more space to display output of code you can press the + button in the toolbar to the right of the save button to create another input box. For the sliders in the example histograms to work you will have to run all the codeboxes in this notebook. You can either do this as you read and try changing the code to see what happens, or select cell in the toolbar at the top and select run all. First we load in the libraries we require and read in the file that contains the data. End of explanation """ data.head(5) # Displaying some of the data so you can see what form it takes in the DataFrame """ Explanation: Lets now view the first few lines of the data table. The rows of the data table are each of the Nobel prizes awarded and the columns are the information about who won the prize. We have put the data into a pandas DataFrame, so we can now use all the functions associated with DataFrames. A useful function is .head(), which prints out the first few lines of the data table. End of explanation """ # print the earliest year in the data print(data.Year.min()) # print the latest year in the data print(data.Year.max()) """ Explanation: Plotting a histogram Lets learn how to plot histograms. We will plot the number of prizes awarded per year.
Nobel prizes can be awarded to up to three people per category. As each winner is recorded as an individual entry, the histogram will tell us if there has been a trend of increasing or decreasing multiple prize winners in one year. However, before we plot the histogram we should find out information about the data so that we can check the range of the data we want to plot. End of explanation """ # filter out the Economics prizes from the data data_without_economics = data.query("Category != 'economics'") print('Number of economics prizes in "data_without_economics":') print(len(data_without_economics.query("Category == 'economics'"))) """ Explanation: The data set also contains entries for economics. Economics was not one of the original Nobel prizes and has only been given out since 1969. If we want to do a proper comparison we will need to filter this data out. We can do this with a pandas query. We can then check there are no economics prizes left by finding the length of the data after applying a query to only select economics prizes. This will be used in the main analysis to count the number of $B^+$ and $B^-$ mesons. End of explanation """ # plot the histogram of number of winners against year H_WinnersPerYear = data_without_economics.Year.hist(bins=11, range=[1900, 2010]) xlabel('Year') ylabel('Number of Winners') """ Explanation: We can now plot the histogram over a sensible range using the hist function from matplotlib. You will use this throughout the main analysis. End of explanation """ def plot_hist(bins): changingBins = data_without_economics.Year.hist(bins=bins, range=[1900,2010]) xlabel('Year') ylabel('Number of People Given Prizes') BinSize = round(110/bins, 2) # width of each bin in years (the plotted range spans 110 years) print(BinSize) interact(plot_hist, bins=[2, 50, 1]) """ Explanation: From the histogram we can see that there has been a recent trend of more multiple prize winners in the same year.
However there is a drop in the range 1940 - 1950; this was due to prizes being awarded intermittently during World War II. To isolate this gap we can change the bin size (by changing the number of bins variable) to contain this range. Try changing the slider below (you will have to click in the code box and press shift+enter to activate it) and see how the number of bins affects the look of the histogram. End of explanation """
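One syntax detail worth flagging for the query examples that follow: pandas `.query()` combines conditions with `&`/`|` (or the keywords `and`/`or`), not the C-style `&&`. A quick sketch on a toy frame (made-up rows and values, not the real dataset):

```python
import pandas as pd

# toy stand-in for the Nobel data (column names assumed from the examples)
toy = pd.DataFrame({
    "Category": ["physics", "physics", "chemistry"],
    "Year": [2007, 1950, 2007],
})

# & combines the two boolean conditions inside the query string
modern_physics = toy.query("(Category == 'physics') & (Year > 2005)")
print(len(modern_physics))  # only the 2007 physics row survives
```

The same pattern works with any number of parenthesised conditions in one string.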
End of explanation """ # Create new variable in the dataframe physicsOnlyDataFrame['AgeAwarded'] = physicsOnlyDataFrame.Year - physicsOnlyDataFrame.BirthYear physicsOnlyDataFrame.head(5) """ Explanation: We have now successfully plotted the histogram of just the physics prizes after applying our pre-selection. Calculations, Scatter Plots and 2D Histogram Adding New Data to a Data Frame You will find this section useful for when it comes to creating a Dalitz plot in the particle physics analysis. We want to see what ages people have been awarded Nobel prizes and measure the spread in the ages. Then we'll consider if over time people have been getting awarded Nobel prizes earlier or later in their life. First we'll need to calculate the age or the winners at the time the prize was awarded based on the Year and Birthdate columns. We create an AgeAwarded variable and add this to the data. End of explanation """ # plot a histogram of the laureates ages H_AgeAwarded = physicsOnlyDataFrame.AgeAwarded.hist(bins=15) """ Explanation: Lets make a plot of the age of the winners at the time they were awarded the prize End of explanation """ # count number of entries NumEntries = len(physicsOnlyDataFrame) # calculate square of ages physicsOnlyDataFrame['AgeAwardedSquared'] = physicsOnlyDataFrame.AgeAwarded**2 # calculate sum of square of ages, and sum of ages AgeSqSum = physicsOnlyDataFrame['AgeAwardedSquared'].sum() AgeSum = physicsOnlyDataFrame['AgeAwarded'].sum() # calculate std and print it std = sqrt((AgeSqSum-(AgeSum**2/NumEntries)) / NumEntries) print(std) """ Explanation: Making Calculations Lets calculate a measure of the spread in ages of the laureates. We will calculate the standard deviation of the distribution. 
End of explanation """ # calculate standard deviation (rms) of distribution print(physicsOnlyDataFrame['AgeAwarded'].std()) """ Explanation: There is actually a function that would calculate the rms for you, but we wanted to teach you how to manipulate data to make calculations! End of explanation """ scatter(physicsOnlyDataFrame['Year'], physicsOnlyDataFrame['AgeAwarded']) plt.xlim(1900, 2010) # change the x axis range plt.ylim(20, 100) # change the y axis range xlabel('Year Awarded') ylabel('Age Awarded') """ Explanation: Scatter Plot Now lets plot a scatter plot of Age vs Date awarded End of explanation """ hist2d(physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded, bins=10) colorbar() # Add a colour legend xlabel('Year Awarded') ylabel('Age Awarded') """ Explanation: 2D Histogram We can also plot a 2D histogram and bin the results. The number of entries in the data set is relatively low so we will need to use reasonably large bins to have acceptable statistics in each bin. We have given you the ability to change the number of bins so you can see how the plot changes. Note that the number of total bins is the value of the slider squared. This is because the value of bins given in the hist2d function is the number of bins on one axis. 
End of explanation """ def plot_histogram(bins): hist2d(physicsOnlyDataFrame['Year'].values,physicsOnlyDataFrame['AgeAwarded'].values, bins=bins) colorbar() #Set a colour legend xlabel('Year Awarded') ylabel('Age Awarded') interact(plot_histogram, bins=[1, 20, 1]) # Creates the slider """ Explanation: Alternatively you can use interact to add a slider to vary the number of bins End of explanation """ physics_counts, xedges, yedges, Image = hist2d( physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded, bins=10, range=[(1900, 2010), (20, 100)] ) colorbar() # Add a colour legend xlabel('Year Awarded') ylabel('Age Awarded') """ Explanation: Playing with the slider will show you the effect of changing the bin size in a 2D histogram. The darker bins in the top right corner show that there does appear to be a trend of Nobel prizes being won at an older age in more recent years. Manipulating 2D histograms This section is advanced and only required for the final section of the main analysis. As the main analysis requires the calculation of an asymmetry, we now provide a contrived example of how to do this using the Nobel prize dataset; we recommend only reading this section after reaching the "Searching for local matter anti-matter differences" section of the main analysis. First calculate the number of entries in each bin of the 2D histogram and store these values in physics_counts as a 2D array. xedges and yedges are 1D arrays containing the values of the bin edges along each axis. 
End of explanation """ # Make the "chemistryOnlyDataFrame" dataset chemistryOnlyDataFrame = data.query("(Category == 'chemistry')") chemistryOnlyDataFrame['AgeAwarded'] = chemistryOnlyDataFrame.Year - chemistryOnlyDataFrame.BirthYear # Plot the histogram chemistry_counts, xedges, yedges, Image = hist2d( chemistryOnlyDataFrame.Year, chemistryOnlyDataFrame.AgeAwarded, bins=10, range=[(1900, 2010), (20, 100)] ) colorbar() # Add a colour legend xlabel('Year Awarded') ylabel('Age Awarded') """ Explanation: Repeat the procedure used for physics to get the 2D histogram of age against year awarded for chemistry Nobel prizes. End of explanation """ counts = (physics_counts - chemistry_counts) / (physics_counts + chemistry_counts) """ Explanation: Subtract the chemistry_counts from the physics_counts and normalise by their sum. This is known as an asymmetry. End of explanation """ counts[np.isnan(counts)] = 0 """ Explanation: Where there are no Nobel prize winners for either subject, counts will contain an error value (nan) as the number was divided by zero. Here we replace these error values with 0. End of explanation """ pcolor(xedges, yedges, counts, cmap='seismic') colorbar() """ Explanation: Finally plot the asymmetry using the pcolor function. As positive and negative values each have a different meaning, we use the seismic colormap, see here for a full list of all available colormaps. End of explanation """
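Condensed, the asymmetry recipe above comes down to a few array operations. This toy version on made-up 2x2 count arrays also shows where the nan values come from (bins that are empty in both histograms) and what replacing them with 0 does:

```python
import numpy as np

physics = np.array([[4.0, 0.0], [3.0, 1.0]])     # made-up bin counts
chemistry = np.array([[2.0, 0.0], [1.0, 3.0]])

with np.errstate(invalid='ignore'):               # silence the 0/0 warning
    asym = (physics - chemistry) / (physics + chemistry)
asym[np.isnan(asym)] = 0.0                        # empty-in-both bins -> 0
print(asym)  # every value lies between -1 and +1
```

The asymmetry is bounded between -1 (chemistry only) and +1 (physics only), which is what makes a diverging colormap like seismic a natural choice.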
UltronAI/Deep-Learning
CS231n/reference/CS231n-master/assignment1/features.ipynb
mit
import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 """ Explanation: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. End of explanation """ from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() """ Explanation: Load data Similar to previous exercises, we will load CIFAR-10 data from disk. 
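The subsampling inside get_CIFAR10_data above is plain index-mask slicing; the same pattern on a tiny fake array:

```python
import numpy as np

X = np.arange(20).reshape(10, 2)   # ten fake "images", two values each
num_training, num_validation = 7, 3
mask = range(num_training, num_training + num_validation)
X_val = X[mask]                    # rows 7, 8, 9
mask = range(num_training)
X_train = X[mask]                  # rows 0 .. 6
print(X_train.shape, X_val.shape)  # (7, 2) (3, 2)
```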
End of explanation """ from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))]) """ Explanation: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. 
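For intuition about the colour half of the feature vector, here is a toy hue-histogram feature written from scratch. This is a simplified, hypothetical stand-in, not the course's actual color_histogram_hsv implementation (which bins the hue channel after a full HSV conversion):

```python
import numpy as np

def toy_hue_histogram(img, nbin=10):
    """Histogram of per-pixel hue for an RGB image with values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    nz = delta > 0                        # grey pixels keep hue 0
    rmax = nz & (mx == r)
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6.0
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2.0
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4.0
    hue /= 6.0                            # scale hue to [0, 1)
    hist, _ = np.histogram(hue, bins=nbin, range=(0.0, 1.0))
    return hist / hist.sum()

red = np.zeros((4, 4, 3)); red[..., 0] = 1.0   # a solid red "image"
print(toy_hue_histogram(red))                  # all mass lands in the first bin
```

A solid-colour image produces a one-hot feature, while a textured multi-coloured image spreads its mass over several bins: a colour summary that ignores where in the image each colour appears.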
The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. End of explanation """ # Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [1e5, 1e6, 1e7] results = {} best_val = -1 best_svm = None pass ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for l in learning_rates: for r in regularization_strengths: svm = LinearSVM() svm.train(X_train_feats, y_train, learning_rate=l, reg=r, num_iters=1500, batch_size=200) y_train_pred = svm.predict(X_train_feats) y_val_pred = svm.predict(X_val_feats) training_accuracy = np.mean(y_train == y_train_pred) validation_accuracy = np.mean(y_val == y_val_pred) results[(l, r)] = (training_accuracy, validation_accuracy) if validation_accuracy > best_val: best_val = validation_accuracy best_svm = svm ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. 
for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print test_accuracy # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". examples_per_class = 8 classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show() """ Explanation: Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels. End of explanation """ print X_train_feats.shape from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 net = TwoLayerNet(input_dim, hidden_dim, num_classes) best_net = None ################################################################################ # TODO: Train a two-layer neural network on image features. You may want to # # cross-validate various parameters as in previous sections. Store your best # # model in the best_net variable. 
# ################################################################################ best_val = -1 best_stats = None learning_rates = np.logspace(-10, 0, 5) # np.logspace(-10, 10, 8) #-10, -9, -8, -7, -6, -5, -4 regularization_strengths = np.logspace(-3, 5, 5) # causes numeric issues: np.logspace(-5, 5, 8) #[-4, -3, -2, -1, 1, 2, 3, 4, 5, 6] results = {} iters = 2000 #100 for lr in learning_rates: for rs in regularization_strengths: net = TwoLayerNet(input_dim, hidden_dim, num_classes) # Train the network stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=iters, batch_size=200, learning_rate=lr, learning_rate_decay=0.95, reg=rs) y_train_pred = net.predict(X_train_feats) acc_train = np.mean(y_train == y_train_pred) y_val_pred = net.predict(X_val_feats) acc_val = np.mean(y_val == y_val_pred) results[(lr, rs)] = (acc_train, acc_val) if best_val < acc_val: best_stats = stats best_val = acc_val best_net = net # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val ################################################################################ # END OF YOUR CODE # ################################################################################ # Run your neural net classifier on the test set. You should be able to # get more than 55% accuracy. 
test_acc = (best_net.predict(X_test_feats) == y_test).mean() print test_acc # Plot the loss function and train / validation accuracies plt.subplot(2, 1, 1) plt.plot(best_stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(best_stats['train_acc_history'], label='train') plt.plot(best_stats['val_acc_history'], label='val') plt.title('Classification accuracy history') plt.xlabel('Epoch') plt.ylabel('Classification accuracy') plt.show() """ Explanation: Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy. End of explanation """
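Both hyper-parameter searches in this exercise (for the SVM and for the two-layer net) reduce to the same keep-the-best-on-validation loop. Stripped of the actual training, the pattern looks like this; toy_val_accuracy is a made-up stand-in for "train with (lr, reg), then score on the validation set":

```python
import numpy as np

def toy_val_accuracy(lr, reg):
    # hypothetical score surface that peaks at lr=1e-7, reg=1e-1
    return 1.0 / (1.0 + (np.log10(lr) + 7.0) ** 2 + (np.log10(reg) + 1.0) ** 2)

learning_rates = [1e-9, 1e-8, 1e-7, 1e-6]
regularization_strengths = [1e-2, 1e-1, 1e0, 1e1]

results, best_val, best_params = {}, -1.0, None
for lr in learning_rates:
    for reg in regularization_strengths:
        acc = toy_val_accuracy(lr, reg)
        results[(lr, reg)] = acc
        if acc > best_val:
            best_val, best_params = acc, (lr, reg)
print(best_params)  # the grid point closest to the peak wins
```

The details that carry over to the real thing: evaluate every grid point, select on validation accuracy only, and keep the fitted model (best_svm or best_net), not just its score.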
turbomanage/training-data-analyst
courses/machine_learning/deepdive/03_tensorflow/e_ai_platform.ipynb
apache-2.0
import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # For Python Code # Model Info MODEL_NAME = 'taxifare' # Model Version MODEL_VERSION = 'v1' # Training Directory name TRAINING_DIR = 'taxi_trained' # For Bash Code os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['MODEL_NAME'] = MODEL_NAME os.environ['MODEL_VERSION'] = MODEL_VERSION os.environ['TRAINING_DIR'] = TRAINING_DIR os.environ['TFVERSION'] = '1.14' # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION """ Explanation: Scaling up ML using Cloud AI Platform In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud AI Platform. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates how to package up a TensorFlow model to run it within Cloud AI Platform. Later in the course, we will look at ways to make a more effective machine learning model. Environment variables for project and bucket Note that: <ol> <li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li> <li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. 
</li> </ol> <b>Change the cell below</b> to reflect your Project ID and bucket name. End of explanation """ %%bash # The bucket needs to exist for the gsutil commands in the next cell to work gsutil mb -p ${PROJECT} gs://${BUCKET} """ Explanation: Create the bucket to store model and training data for deploying to Google Cloud Machine Learning Engine Component End of explanation """ %%bash # This command will fail if the Cloud Machine Learning Engine API is not enabled using the link above. echo "Getting the service account email associated with the Cloud AI Platform API" AUTH_TOKEN=$(gcloud auth print-access-token) SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \ -H "Authorization: Bearer $AUTH_TOKEN" \ https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \ | python -c "import json; import sys; response = json.load(sys.stdin); \ print (response['serviceAccount'])") # If this command fails, the Cloud Machine Learning Engine API has not been enabled above. echo "Authorizing the Cloud AI Platform account $SVC_ACCOUNT to access files in $BUCKET" gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored. gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET """ Explanation: Enable the Cloud Machine Learning Engine API The next command works with the Cloud AI Platform API. In order for the command to work, you must enable the API using the Cloud Console UI. Use this link. Then search the API list for Cloud Machine Learning and enable the API before executing the next cell. Allow the Cloud AI Platform service account to read/write to the bucket containing training data. End of explanation """ %%bash find ${MODEL_NAME} %%bash cat ${MODEL_NAME}/trainer/model.py """ Explanation: Packaging up the code Take your code and put it into a standard Python package structure. 
<a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> containing the Tensorflow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>). End of explanation """ %%bash echo "Working Directory: ${PWD}" echo "Head of taxi-train.csv" head -1 $PWD/taxi-train.csv echo "Head of taxi-valid.csv" head -1 $PWD/taxi-valid.csv """ Explanation: Find absolute paths to your data Note the absolute paths below. End of explanation """ %%bash # This is so that the trained model is started fresh each time. However, this needs to be done before # tensorboard is started rm -rf $PWD/${TRAINING_DIR} %%bash # Setup python so it sees the task module which controls the model.py export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME} # Currently set for python 2. To run with python 3 # 1. Replace 'python' with 'python3' in the following command # 2. Edit trainer/task.py to reflect proper module import method python -m trainer.task \ --train_data_paths="${PWD}/taxi-train*" \ --eval_data_paths=${PWD}/taxi-valid.csv \ --output_dir=${PWD}/${TRAINING_DIR} \ --train_steps=1000 --job-dir=./tmp %%bash ls $PWD/${TRAINING_DIR}/export/exporter/ %%writefile ./test.json {"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2} %%bash # This model dir is the model exported after training and is used for prediction # model_dir=$(ls ${PWD}/${TRAINING_DIR}/export/exporter | tail -1) # predict using the trained model gcloud ai-platform local predict \ --model-dir=${PWD}/${TRAINING_DIR}/export/exporter/${model_dir} \ --json-instances=./test.json """ Explanation: Running the Python module from the command-line Clean model training dir/output dir End of explanation """ %%bash # This is so that the trained model is started fresh each time. 
However, this needs to be done before # tensorboard is started rm -rf $PWD/${TRAINING_DIR} """ Explanation: Monitor training with TensorBoard To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row. TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests. You may close the TensorBoard tab when you are finished exploring. Clean model training dir/output dir End of explanation """ %%bash # Use Cloud Machine Learning Engine to train the model in local file system gcloud ai-platform local train \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ -- \ --train_data_paths=${PWD}/taxi-train.csv \ --eval_data_paths=${PWD}/taxi-valid.csv \ --train_steps=1000 \ --output_dir=${PWD}/${TRAINING_DIR} """ Explanation: Running locally using gcloud End of explanation """ %%bash ls $PWD/${TRAINING_DIR} """ Explanation: Use TensorBoard to examine results. When I ran it (due to random seeds, your results will be different), the average_loss (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13. 
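The quoted RMSE is simply the square root of the reported average_loss, since average_loss is a mean squared error (here, squared dollars of fare):

```python
import numpy as np

average_loss = 187.0         # MSE reported on the evaluation set above
rmse = np.sqrt(average_loss)
print(rmse)                  # about 13.7, i.e. "around 13"
```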
End of explanation """ %%bash # Clear Cloud Storage bucket and copy the CSV files to Cloud Storage bucket echo $BUCKET gsutil -m rm -rf gs://${BUCKET}/${MODEL_NAME}/smallinput/ gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/${MODEL_NAME}/smallinput/ %%bash OUTDIR=gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME # Clear the Cloud Storage Bucket used for the training job gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC \ --runtime-version=$TFVERSION \ -- \ --train_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-train*" \ --eval_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-valid*" \ --output_dir=$OUTDIR \ --train_steps=10000 """ Explanation: Submit training job using gcloud First copy the training data to the cloud. Then, launch a training job. After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress. <b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job. End of explanation """ %%bash gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter """ Explanation: Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. 
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b> Deploy model Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying the model will take up to <b>5 minutes</b>. End of explanation """ %%bash MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1) echo "MODEL_LOCATION = ${MODEL_LOCATION}" gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} """ Explanation: Deploy model: step 1 - remove version info Before an existing cloud model can be removed, it must have any version info removed. If the model does not already exist, this command will generate an error, but that is ok. End of explanation """ %%bash gcloud ai-platform models delete ${MODEL_NAME} """ Explanation: Deploy model: step 2 - remove existing model Now that the version info is removed from an existing model, the actual model can be removed. If an existing model is not deployed, this command will generate an error, but that is ok. It just means the model with the given name is not deployed. 
End of explanation """ %%bash gcloud ai-platform models create ${MODEL_NAME} --regions $REGION """ Explanation: Deploy model: step 3 - deploy new model End of explanation """ %%bash MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR}/export/exporter | tail -1) echo "MODEL_LOCATION = ${MODEL_LOCATION}" gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION """ Explanation: Deploy model: step 4 - add version info to the new model End of explanation """ %%bash gcloud ai-platform predict --model=${MODEL_NAME} --version=${MODEL_VERSION} --json-instances=./test.json from googleapiclient import discovery from oauth2client.client import GoogleCredentials import json credentials = GoogleCredentials.get_application_default() api = discovery.build('ml', 'v1', credentials=credentials, discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json') request_data = {'instances': [ { 'pickuplon': -73.885262, 'pickuplat': 40.773008, 'dropofflon': -73.987232, 'dropofflat': 40.732403, 'passengers': 2, } ] } parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, MODEL_NAME, MODEL_VERSION) response = api.projects().predict(body=request_data, name=parent).execute() print ("response={0}".format(response)) """ Explanation: Prediction End of explanation """ %%bash XXXXX this takes 60 minutes. If you are sure you want to run it, then remove this line. 
OUTDIR=gs://${BUCKET}/${MODEL_NAME}/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) CRS_BUCKET=cloud-training-demos # use the already exported data echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ --runtime-version=$TFVERSION \ -- \ --train_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/train.csv" \ --eval_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/valid.csv" \ --output_dir=$OUTDIR \ --train_steps=100000 """ Explanation: Train on larger dataset I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow. Go to http://bigquery.cloud.google.com/ and type the query: <pre> SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'nokeyindata' AS key FROM [nyc-tlc:yellow.trips] WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 AND ABS(HASH(pickup_datetime)) % 1000 == 1 </pre> Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.): <ol> <li> Click on the "Save As Table" button and note down the name of the dataset and table. 
<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name. <li> Click on "Export Table" <li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu) <li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv) <li> Download the two files, remove the header line and upload them back to GCS. </ol> <p/> <p/> Run Cloud training on 1-million row dataset This took 60 minutes and used 1 million rows as input. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) remained stubbornly at 9.03. So, simply adding more data doesn't help. End of explanation """ %%bash gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} """ Explanation: Challenge Exercise Modify your solution to the challenge exercise in d_trainandevaluate.ipynb appropriately. Make sure that you implement training and deployment. Increase the size of your dataset by 10x since you are running on the cloud. Does your accuracy improve? Clean-up Delete model: step 1 - remove version info Before an existing cloud model can be removed, it must have any version info removed. End of explanation """ %%bash gcloud ai-platform models delete ${MODEL_NAME} """ Explanation: Delete model: step 2 - remove existing model Now that the version info is removed from an existing model, the actual model can be removed. End of explanation """
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/day-by-day/day18-kinematics-terminal-velocity-of-a-skydiver/Day_18_pre_class_notebook.ipynb
agpl-3.0
from IPython.display import YouTubeVideo # WATCH THE VIDEO IN FULL-SCREEN MODE YouTubeVideo("JXJQYpgFAyc",width=640,height=360) # Numerical integration """ Explanation: Day 18 Pre-class assignment Goals for today's pre-class assignment In this pre-class assignment, you are going to learn how to: Numerically integrate a function Numerically differentiate a function Get a sense of how the result depends on the step size you use. Assignment instructions Watch the videos below and complete the assigned programming problems. End of explanation """ # Put your code here # WATCH THE VIDEO IN FULL-SCREEN MODE YouTubeVideo("b0K8LiHyrBg",width=640,height=360) # Numerical differentiation """ Explanation: Question 1: Write a function that uses the rectangle rule to integrate $f(x) = \sin(x)$ from $x_{beg} = 0$ to $x_{end} = \pi$ by taking $N_{step}$ equal-sized steps $\Delta x = \frac{x_{end} - x_{beg}}{N_{step}}$. Allow $N_{step}$ and the beginning and ending of the range to be defined by user-set parameters. For values of $N_{step} = 10, 100$, and $1000$, how close are you to the true answer? (In other words, calculate the error as defined above.) Note 1: $\int_{0}^{\pi} \sin(x) dx = \left. -\cos(x) \right|_0^\pi = 2$ Note 2: The "error" is defined as $\epsilon = |\frac{I - T}{T}|$, where I is the integrated answer, T is the true (i.e., analytic) answer, and the vertical bars denote that you take the absolute value. End of explanation """ # Put your code here """ Explanation: Question 2: Write a function that calculates the derivative of $f(x) = e^{-2x}$ at several points between -3.0 and 3.0, using two points that are a distance $\Delta x$ from the point, x, where we want the value of the derivative. Calculate the difference between this value and the analytic solution, $\frac{df}{dx} = -2 e^{-2x}$, for $\Delta x$ = 0.1, 0.01 and 0.001 (in other words, calculate the error as defined above). 
Hint: use np.linspace() to create a range of values of x that are regularly-spaced, create functions that correspond to $f(x)$ and $\frac{df}{dx}$, and use numpy to calculate the derivatives and the error. Note that if x is a numpy array, a function f(x) that returns a value will also be a numpy array. In other words, the function: def f(x): return np.exp(-2.0*x) will return an array of values corresponding to the function $f(x)$ defined above if given an array of x values. End of explanation """
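For reference, one possible shape of an answer to Question 1 above, using left-endpoint rectangles; the names are illustrative and this is certainly not the only valid approach:

```python
import numpy as np

def rectangle_rule(f, x_beg, x_end, n_step):
    dx = (x_end - x_beg) / n_step
    x = x_beg + dx * np.arange(n_step)   # left edge of each rectangle
    return np.sum(f(x)) * dx

true_value = 2.0                          # integral of sin(x) from 0 to pi
errors = {}
for n_step in (10, 100, 1000):
    approx = rectangle_rule(np.sin, 0.0, np.pi, n_step)
    errors[n_step] = abs((approx - true_value) / true_value)
print(errors)  # the error shrinks as n_step grows
```

For this integrand the error happens to fall off quickly (roughly 100x per factor of 10 in $N_{step}$) because the endpoint contributions of $\sin(x)$ vanish at both ends of $[0, \pi]$.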
ajrader/timeseries
notebooks/Prophet_QuickStart_Example.ipynb
apache-2.0
peyton_dataset_url = 'https://github.com/facebookincubator/prophet/blob/master/examples/example_wp_peyton_manning.csv' peyton_filename = '../datasets/example_wp_peyton_manning.csv' import pandas as pd import numpy as np from fbprophet import Prophet # NB: this didn't work as of 8/22/17 #import io #import requests #s=requests.get(peyton_dataset_url).content #df=pd.read_csv(io.StringIO(s.decode('utf-8')))#df = pd.read_csv(peyton_dataset_url) df = pd.read_csv(peyton_filename) # transform to log scale df['y']=np.log(df['y']) df.head() """ Explanation: Working with FB Prophet Begin with the Quick Start example from the FB page. Look at the time series of daily page views for the Wikipedia page for Peyton Manning. The csv is available here End of explanation """ m = Prophet() m.fit(df); """ Explanation: Fit the model by instantiating a new Prophet object. Any settings required for the forecasting procedure are passed to this object upon construction. You can then call this object's fit method and pass in the historical dataframe. Fitting should take 1-5 seconds. End of explanation """ future = m.make_future_dataframe(periods=365) future.tail() """ Explanation: Predictions are then made on a dataframe with a column ds containing the dates for which a prediction is to be made. You can get a suitable dataframe that extends into the future a specified number of days using the helper method Prophet.make_future_dataframe. By default it will also include the dates from the history, so we will see the model fit as well. End of explanation """ forecast = m.predict(future) forecast[['ds','yhat','yhat_lower','yhat_upper']].tail() """ Explanation: The predict method will assign each row in future a predicted value which it names yhat. If you pass in historical dates, it will provide an in-sample fit. The forecast object here is a new dataframe that includes a column yhat with the forecast, as well as columns for components and uncertainty intervals. 
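One detail to keep in mind: because y was log-transformed before fitting, yhat, yhat_lower and yhat_upper come back on the log scale, and np.exp maps them back to page views. Sketched here on made-up numbers, not the real forecast:

```python
import numpy as np

yhat_log = np.array([8.1, 8.3, 8.0])   # hypothetical log-scale forecasts
yhat = np.exp(yhat_log)                 # back on the original page-view scale
print(yhat)
```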
End of explanation """ m.plot(forecast) """ Explanation: You can plot the forecast by calling the Prophet.plot method and passing in your forecast dataframe End of explanation """ m.plot_components(forecast) """ Explanation: If you want to see the forecast components, you can use the Prophet.plot_components method. By default you’ll see the trend, yearly seasonality, and weekly seasonality of the time series. If you include holidays, you’ll see those here, too. End of explanation """
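Prophet.make_future_dataframe essentially keeps the historical dates and appends N more daily timestamps. A minimal standard-library sketch of that idea (illustrative only, not Prophet's actual implementation; the function name make_future_dates is our own):

```python
from datetime import date, timedelta

def make_future_dates(history_dates, periods):
    # Keep the history (like include_history=True) and append
    # `periods` additional daily dates after the last one.
    last = max(history_dates)
    future = [last + timedelta(days=i) for i in range(1, periods + 1)]
    return list(history_dates) + future

history = [date(2017, 1, 1) + timedelta(days=i) for i in range(3)]
extended = make_future_dates(history, periods=2)
```

In the real API the result is a DataFrame with a ds column and the frequency is configurable; this sketch only shows why the returned frame is longer than the history.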
ClimateTools/Correlation_EPSL
Proctor_NAO_bandwidth.ipynb
mit
%matplotlib inline from scipy import interpolate from scipy import special from scipy.signal import butter, lfilter, filtfilt import matplotlib.pyplot as plt import numpy as np from numpy import genfromtxt from nitime import algorithms as alg from nitime import utils from scipy.stats import t import pandas as pd """ Explanation: Load necessary packages End of explanation """ def butter_lowpass(cutoff, fs, order=3): nyq = 0.5 * fs normal_cutoff = cutoff / nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def filter(x, cutoff, axis, fs=1.0, order=3): b, a = butter_lowpass(cutoff, fs, order=order) y = filtfilt(b, a, x, axis=axis) return y def movingaverage(interval, window_size): window = np.ones(int(window_size))/float(window_size) return np.convolve(interval, window, 'valid') def owncorr(x,y,n): x_ano=np.ma.anomalies(x) x_sd=np.sum(x_ano**2,axis=0) y_ano=np.ma.anomalies(y) y_sd=np.sum(y_ano**2,axis=0) nomi = np.dot(x_ano,y_ano) corr = nomi/np.sqrt(np.dot(x_sd[None],y_sd[None])) # When using AR_est_YW, we should substract mean from # time series first x_coef, x_sigma = alg.AR_est_YW (x_ano, 1) y_coef, y_sigma = alg.AR_est_YW (y_ano, 1) if x_coef > 1: eps = np.spacing(1.0) x_coef = 1.0 - eps**(1/4) elif x_coef < 0: x_coef = 0.0 if y_coef > 1: eps = np.spacing(1.0) y_coef = 1.0 - eps**(1/4) elif y_coef < 0: y_coef = 0.0 neff = n*(1-x_coef*y_coef)/(1+x_coef*y_coef) if neff <3: neff = 3 coef = [] coef.append(x_coef) coef.append(y_coef) tval = corr/np.sqrt(1-corr**2)*np.sqrt(neff-2) pval = t.sf(abs(tval),neff-2)*2 return corr,pval,coef def gaussianize(X): n = X.shape[0] #p = X.shape[1] Xn = np.empty((n,)) Xn[:] = np.NAN nz = np.logical_not(np.isnan(X)) index = np.argsort(X[nz]) rank = np.argsort(index) CDF = 1.*(rank+1)/(1.*n) -1./(2*n) Xn[nz] = np.sqrt(2)*special.erfinv(2*CDF -1) return Xn """ Explanation: Define functions for filtering, moving averages, and normalizing data End of explanation """ data = genfromtxt('data/scotland.csv', 
delimiter=',') bandw = data[0:115,4] # band width (1879-1993), will be correlated with T/P bandwl = data[3:129,4] # band width (1865-1990), will be correlated with winter NAO bandwn = gaussianize(bandw) #normalized band width bandwln = gaussianize(bandwl) #normalized band width rain = genfromtxt('data/Assynt_P.txt') #precipitation temp = genfromtxt('data/Assynt_T.txt') #temperature wnao = genfromtxt('data/wnao.txt') #winter NAO wnao = wnao[::-1] rainn = gaussianize(rain) tempn = gaussianize(temp) #calculate the ratio of temperature over precipitation ratio = temp/rain ration = gaussianize(ratio) """ Explanation: Read bandwidth and rain/temperature data and normalize them End of explanation """ bandw_fil = movingaverage(bandw, 11) bandwn_fil = movingaverage(bandwn, 11) bandwl_fil = movingaverage(bandwl, 11) rain_fil = movingaverage(rain, 11) rainn_fil = movingaverage(rainn, 11) ratio_fil = movingaverage(ratio, 11) wnao_fil = movingaverage(wnao, 11) """ Explanation: Smoothing data (11-year running average) End of explanation """ corr_ratio,pval_ratio,coef = owncorr(bandw_fil,ratio_fil,115) #correlation between smoothed bandwidth and ratio corr_nao,pval_nao,coef_nao = owncorr(bandwl_fil,wnao_fil,126) #correlation between smoothed bandwidth and winter NAO corr_n,pval_n,coef_n = owncorr(bandwn,ration,115) #correlation between normalized bandwidth and ratio corr_naon,pval_naon,coef_naon = owncorr(bandwln,wnao,126) #correlation between normalized bandwidth and winter NAO """ Explanation: Calculate correlations and p-values accounting for autocorrelation, and return the lag-1 autocorrelations (coef) End of explanation """ print(corr_ratio) print(pval_ratio) print(coef) print(corr_nao) print(pval_nao) print(coef_nao) print(corr_n) print(pval_n) print(coef_n) print(corr_naon) print(pval_naon) print(coef_naon) """ Explanation: Check the correlation results End of explanation """
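The core of owncorr is a plain Pearson correlation plus an effective-sample-size shrinkage for serial correlation. A numpy-free sketch of those two formulas, for intuition only (pearson_r and effective_n are our own names, not from the notebook):

```python
import math

def pearson_r(x, y):
    # Standard Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xa = [v - mx for v in x]
    ya = [v - my for v in y]
    num = sum(a * b for a, b in zip(xa, ya))
    den = math.sqrt(sum(a * a for a in xa) * sum(b * b for b in ya))
    return num / den

def effective_n(n, r1, r2):
    # Same adjustment as in owncorr: shrink n when both series have
    # positive lag-1 autocorrelation, with a floor of 3.
    neff = n * (1 - r1 * r2) / (1 + r1 * r2)
    return max(neff, 3)

r = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
neff = effective_n(100, 0.5, 0.5)
```

With two series whose lag-1 autocorrelations are both 0.5, 100 samples are worth only 60 effective samples, which is why the smoothed (and therefore strongly autocorrelated) series above need this correction before the t-test.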
moble/PostNewtonian
PNTerms/AngularMomentum.ipynb
mit
AngularMomentum_NoSpin = PNCollection() AngularMomentum_Spin = PNCollection() """ Explanation: The following PNCollection objects will contain all the terms in the different parts of the binding energy. End of explanation """ AngularMomentum_NoSpin.AddDerivedVariable('L_coeff', M**2*nu/v) """ Explanation: Individual energy terms In this notebook, every term will be multiplied by the following coefficient. End of explanation """ e_5 = 0 # Placeholder for unknown term in energy expression. AngularMomentum_NoSpin.AddDerivedConstant('L_0', ellHat, datatype=ellHat.datatype) # L_1 is 0 AngularMomentum_NoSpin.AddDerivedConstant('L_2', (frac(3,2) + frac(1,6)*nu)*ellHat, datatype=ellHat.datatype) # L_3 is 0 AngularMomentum_NoSpin.AddDerivedConstant('L_4', (frac(27,8) - frac(19,8)*nu + frac(1,24)*nu**2)*ellHat, datatype=ellHat.datatype) # L_5 is 0 AngularMomentum_NoSpin.AddDerivedConstant('L_6', (frac(135,16) + (-frac(6889,144) + frac(41,24)*pi**2)*nu + frac(31,24)*nu**2 + frac(7,1296)*nu**3)*ellHat, datatype=ellHat.datatype) # L_7 is 0 AngularMomentum_NoSpin.AddDerivedConstant('L_8', (frac(2835,128) - frac(5,7)*nu*((-frac(123671,5760)+frac(9037,1536)*pi**2+frac(1792,15)*ln(2)+frac(896,15)*EulerGamma) + (-frac(498449,3456) + frac(3157,576)*pi**2)*nu + frac(301,1728)*nu**2 + frac(77,31104)*nu**3) + frac(64,35)*nu)*ellHat, datatype=ellHat.datatype) AngularMomentum_NoSpin.AddDerivedConstant('L_lnv_8', (-frac(128,3)*nu)*ellHat, datatype=ellHat.datatype) # L_9 is 0 # Below are the incomplete terms AngularMomentum_NoSpin.AddDerivedConstant('L_10', (frac(15309,256) + nu*(-frac(2,3)*e_5 - frac(4988,945) - frac(656,135)*nu))*ellHat, datatype=ellHat.datatype) AngularMomentum_NoSpin.AddDerivedConstant('L_lnv_10', ((frac(9976,105) + frac(1312,15)*nu)*nu*2)*ellHat, datatype=ellHat.datatype) """ Explanation: Note that fractions need to be entered as, e.g., frac(3,4) so that they are not converted to finite-precision decimals. The nonspinning orbital binding energy is known through 4pN. 
The 5pN term has a known relationship to the 5pN binding energy term $e_5(\nu)$, though the latter is still incomplete. We set it to zero here. These expressions come from Eq. (230) and related footnotes of Blanchet (2006). Note that his calculation is for nonspinning systems, so he writes the quantity as $J$, which is assumed to be the component along $\hat{\ell}$. End of explanation """ # Lower-order terms are 0 AngularMomentum_Spin.AddDerivedVariable('L_SO_3', (-(35*S_ell + 15*Sigma_ell*delta)/(6*M**2))*ellHat + ((S_n + Sigma_n*delta)/(2*M**2))*nHat + (-(3*S_lambda + Sigma_lambda*delta)/M**2)*lambdaHat, datatype=ellHat.datatype) # L_SO_4 is 0 AngularMomentum_Spin.AddDerivedVariable('L_SO_5', (7*(61*S_ell*nu - 99*S_ell + 30*Sigma_ell*delta*nu - 27*Sigma_ell*delta)/(72*M**2))*ellHat + ((-19*S_n*nu + 33*S_n - 10*Sigma_n*delta*nu + 33*Sigma_n*delta)/(24*M**2))*nHat + ((18*S_lambda*nu - 21*S_lambda + 8*Sigma_lambda*delta*nu - 3*Sigma_lambda*delta)/(6*M**2))*lambdaHat, datatype=ellHat.datatype) # L_SO_6 is 0 AngularMomentum_Spin.AddDerivedVariable('L_SO_7', ((-29*S_ell*nu**2 + 1101*S_ell*nu - 405*S_ell - 15*Sigma_ell*delta*nu**2 + 468*Sigma_ell*delta*nu - 81*Sigma_ell*delta)/(16*M**2))*ellHat + ((11*S_n*nu**2 - 1331*S_n*nu + 183*S_n + 5*Sigma_n*delta*nu**2 - 734*Sigma_n*delta*nu + 183*Sigma_n*delta)/(48*M**2))*nHat + ((-32*S_lambda*nu**2 + 2*S_lambda*nu - 174*S_lambda - 16*Sigma_lambda*delta*nu**2 - 79*Sigma_lambda*delta*nu - 12*Sigma_lambda*delta)/(24*M**2))*lambdaHat, datatype=ellHat.datatype) """ Explanation: (Look for spin-squared terms.) The spin-orbit terms in the angular momentum are complete to 3.5pN. These terms come from Eq. (4.7) of Bohé et al. 
(2012): End of explanation """ def AngularMomentumExpression(AngularMomentumTerms=[AngularMomentum_NoSpin, AngularMomentum_Spin], PNOrder=frac(7,2)): # We have to play some tricks with the log terms so that `horner` works def logterm(key,val): if 'lnv' in val: return logv else: return 1 return L_coeff*horner(sum([key*(v**n)*logterm(key,val) for Terms in AngularMomentumTerms for n in range(2*PNOrder+1) for key,val in Terms.items() if val.endswith('_{0}'.format(n))])).subs(logv, ln(v)) # display(AngularMomentumExpression(PNOrder=frac(8,2))) """ Explanation: Collected terms End of explanation """
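AngularMomentumExpression relies on sympy's horner to nest the post-Newtonian series in powers of v. As a reminder of what Horner evaluation does numerically, here is a small stdlib sketch (illustrative only, unrelated to sympy's implementation; horner_eval is our own name):

```python
def horner_eval(coeffs, v):
    # coeffs[k] multiplies v**k; evaluate as c0 + v*(c1 + v*(c2 + ...)),
    # which needs one multiply and one add per coefficient.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * v + c
    return acc

# 1 + 2*v + 3*v**2 at v = 2 -> 1 + 4 + 12 = 17
value = horner_eval([1, 2, 3], 2.0)
```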
phanrahan/magmathon
notebooks/intermediate/PopCount.ipynb
mit
import magma as m """ Explanation: PopCount8 In this tutorial, we show how to construct a circuit to compute an 8-bit PopCount (population count). End of explanation """ from mantle import FullAdder """ Explanation: In this example, we are going to use the built-in fulladder from Mantle. End of explanation """ # 2 input def csa2(I0, I1): return m.bits(FullAdder()(I0, I1, 0)) # 3 input def csa3(I0, I1, I2): return m.bits(FullAdder()(I0, I1, I2)) """ Explanation: A common name for a full adder is a carry-sum adder, or csa. Let's define two csa functions to help us construct the popcount. End of explanation """ def popcount8(I): # Dadda dot notation (of the result) # o o csa0_0_21 - row 0, bits 2 and 1 # o o csa0_1_21 - row 1, bits 2 and 1 # o o csa0_2_21 - row 2, bits 2 and 1 csa0_0_21 = csa3(I[0], I[1], I[2]) csa0_1_21 = csa3(I[3], I[4], I[5]) csa0_2_21 = csa2(I[6], I[7]) # o o csa1_0_21 - row 0, bits 2 and 1 # o o csa1_1_42 - row 1, bits 4 and 2 csa1_0_21 = csa3(csa0_0_21[0], csa0_1_21[0], csa0_2_21[0]) csa1_1_42 = csa3(csa0_0_21[1], csa0_1_21[1], csa0_2_21[1]) # o o csa2_0_42 - row 0, bits 4 and 2 csa2_0_42 = csa2(csa1_0_21[1], csa1_1_42[0]) # o o csa3_0_84 - row 0, bits 8 and 4 csa3_0_84 = csa2(csa1_1_42[1], csa2_0_42[1]) return m.bits([csa1_0_21[0], csa2_0_42[0], csa3_0_84[0], csa3_0_84[1]]) """ Explanation: To construct the 8-bit popcount, we first use 3 csa's to sum bits 0 through 2, 3 through 5, and 6 through 7. This forms 3 2-bit results. We can consider the results to be two columns, one for each place. The first column is the 1s and the second column is the 2s. We then use two fulladders to sum these columns. We continue summing 3 bits at a time until we get a single bit in each column. A common way to show these operations is with Dadda dot notation which shows how many bits are in each column.
End of explanation """ import fault class Main(m.Circuit): io = m.IO(I=m.In(m.Bits[8]), O=m.Out(m.Bits[4])) io.O @= popcount8(io.I) tester = fault.PythonTester(Main) assert tester(0xFF) == 8 assert tester(0xF0) == 4 assert tester(0xEE) == 6 m.compile('build/popcount8', Main, inline=True) !cat build/popcount8.v """ Explanation: Test bench Let's test this using fault End of explanation """
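The same CSA tree can be cross-checked with a pure-Python software model: a full adder is sum = a ^ b ^ c with carry = majority(a, b, c), and wiring the identical network must agree with counting set bits directly. This sketch is our own software analogue for verification, not magma code:

```python
def full_adder(a, b, c):
    # One-bit full adder: sum and carry-out.
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def popcount8_model(x):
    bits = [(x >> i) & 1 for i in range(8)]
    # Row 0: three CSAs compress 8 bits into a 1s column and a 2s column.
    s0, c0 = full_adder(bits[0], bits[1], bits[2])
    s1, c1 = full_adder(bits[3], bits[4], bits[5])
    s2, c2 = full_adder(bits[6], bits[7], 0)
    # Row 1: compress each column.
    ones, carry_a = full_adder(s0, s1, s2)       # weight-1 column
    twos_in, carry_b = full_adder(c0, c1, c2)    # weight-2 column
    # Rows 2 and 3: propagate the remaining carries.
    twos, carry_c = full_adder(carry_a, twos_in, 0)
    fours, eights = full_adder(carry_b, carry_c, 0)
    return ones | (twos << 1) | (fours << 2) | (eights << 3)
```

Exhaustively checking all 256 inputs against bin(x).count('1') confirms the wiring of the hardware version.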
EmissionsIndex/Emissions-Index
Code/EIA bulk download - non-facility (distributed PV & state-level).ipynb
gpl-3.0
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import io, time, json import pandas as pd import os import numpy as np import math """ Explanation: National generation and fuel consumption The data in this notebook is generation and consumption by fuel type for the entire US. These values are larger than what would be calculated by summing facility-level data. Note that the fuel types are somewhat aggregated (coal rather than BIT, SUB, LIG, etc). So when we multiply the fuel consumption by an emissions factor there will be some level of error. The code assumes that you have already downloaded an ELEC.txt file from EIA's bulk download website. End of explanation """ path = os.path.join('Raw data', 'Electricity data', '2017-03-15 ELEC.txt') with open(path, 'rb') as f: raw_txt = f.readlines() """ Explanation: Read ELEC.txt file End of explanation """ def line_to_df(line): """ Takes in a line (dictionary), returns a dataframe """ for key in ['latlon', 'source', 'copyright', 'description', 'geoset_id', 'iso3166', 'name', 'state']: line.pop(key, None) # Split the series_id up to extract information # Example: ELEC.PLANT.GEN.388-WAT-ALL.M series_id = line['series_id'] series_id_list = series_id.split('.') # Use the second to last item in list rather than third plant_fuel_mover = series_id_list[-2].split('-') line['type'] = plant_fuel_mover[0] # line['state'] = plant_fuel_mover[1] line['sector'] = plant_fuel_mover[2] temp_df = pd.DataFrame(line) try: temp_df['year'] = temp_df.apply(lambda x: x['data'][0][:4], axis=1).astype(int) temp_df['month'] = temp_df.apply(lambda x: x['data'][0][-2:], axis=1).astype(int) temp_df['value'] = temp_df.apply(lambda x: x['data'][1], axis=1) temp_df.drop('data', axis=1, inplace=True) return temp_df except: exception_list.append(line) pass exception_list = [] gen_rows = [row for row in raw_txt if 'ELEC.GEN' in row and 'series_id' in row and 'US-99.M' in row and 'ALL' not in row] total_fuel_rows = [row for row in
raw_txt if 'ELEC.CONS_TOT_BTU' in row and 'series_id' in row and 'US-99.M' in row and 'ALL' not in row] eg_fuel_rows = [row for row in raw_txt if 'ELEC.CONS_EG_BTU' in row and 'series_id' in row and 'US-99.M' in row and 'ALL' not in row] """ Explanation: Filter lines to only include total generation and fuel use Only want monthly US data for all sectors - US-99.M - ELEC.GEN, ELEC.CONS_TOT_BTU, ELEC.CONS_EG_BTU - not ALL Fuel codes: - WWW, wood and wood derived fuels - WND, wind - STH, solar thermal - WAS, other biomass - TSN, all solar - SUN, utility-scale solar - NUC, nuclear - NG, natural gas - PEL, petroleum liquids - SPV, utility-scale solar photovoltaic - PC, petroluem coke - OTH, other - COW, coal, - DPV, distributed photovoltaic - OOG, other gases - HPS, hydro pumped storage - HYC, conventional hydroelectric - GEO, geothermal - AOR, other renewables (total) End of explanation """ gen_df = pd.concat([line_to_df(json.loads(row)) for row in gen_rows]) #drop gen_df.head() """ Explanation: All generation by fuel End of explanation """ gen_df.loc[:,'value'] *= 1000 gen_df.loc[:,'units'] = 'megawatthours' gen_df['datetime'] = pd.to_datetime(gen_df['year'].astype(str) + '-' + gen_df['month'].astype(str), format='%Y-%m') gen_df['quarter'] = gen_df['datetime'].dt.quarter gen_df.rename_axis({'value':'generation (MWh)'}, axis=1, inplace=True) #drop gen_df.head() #drop gen_df.loc[gen_df['type']=='OOG'].head() """ Explanation: Multiply generation values by 1000 and change the units to MWh End of explanation """ total_fuel_df = pd.concat([line_to_df(json.loads(row)) for row in total_fuel_rows]) #drop total_fuel_df.head() """ Explanation: Total fuel consumption End of explanation """ total_fuel_df.loc[:,'value'] *= 1E6 total_fuel_df.loc[:,'units'] = 'mmbtu' total_fuel_df['datetime'] = pd.to_datetime(total_fuel_df['year'].astype(str) + '-' + total_fuel_df['month'].astype(str), format='%Y-%m') total_fuel_df['quarter'] = total_fuel_df['datetime'].dt.quarter 
total_fuel_df.rename_axis({'value':'total fuel (mmbtu)'}, axis=1, inplace=True) #drop total_fuel_df.head() #drop total_fuel_df.loc[total_fuel_df['type']=='OOG'].head() """ Explanation: Multiply generation values by 1,000,000 and change the units to MMBtu End of explanation """ eg_fuel_df = pd.concat([line_to_df(json.loads(row)) for row in eg_fuel_rows]) #drop eg_fuel_df.head() """ Explanation: Electric generation fuel consumption End of explanation """ eg_fuel_df.loc[:,'value'] *= 1E6 eg_fuel_df.loc[:,'units'] = 'mmbtu' eg_fuel_df['datetime'] = pd.to_datetime(eg_fuel_df['year'].astype(str) + '-' + eg_fuel_df['month'].astype(str), format='%Y-%m') eg_fuel_df['quarter'] = eg_fuel_df['datetime'].dt.quarter eg_fuel_df.rename_axis({'value':'elec fuel (mmbtu)'}, axis=1, inplace=True) #drop eg_fuel_df.head() """ Explanation: Multiply generation values by 1,000,000 and change the units to MMBtu End of explanation """ merge_cols = ['type', 'year', 'month'] fuel_df = total_fuel_df.merge(eg_fuel_df[merge_cols+['elec fuel (mmbtu)']], how='outer', on=merge_cols) fuel_df.head() """ Explanation: Combine three datasets Need to estimate fuel use for OOG, because EIA doesn't include any (this is only ~2% of OOG fuel for electricity in 2015). 
End of explanation """ #drop fuel_df.loc[~(fuel_df['elec fuel (mmbtu)']>=0)] gen_fuel_df = gen_df.merge(fuel_df[merge_cols+['total fuel (mmbtu)', 'elec fuel (mmbtu)']], how='outer', on=merge_cols) gen_fuel_df.head() """ Explanation: Not seeing the issue that shows up with facilities, where some facilities have positive total fuel consumption but no elec fuel consumption End of explanation """ #drop gen_fuel_df.loc[gen_fuel_df['generation (MWh)'].isnull()] """ Explanation: No records with positive fuel use but no generation End of explanation """ path = os.path.join('Clean data', 'Final emission factors.csv') ef = pd.read_csv(path, index_col=0) #drop ef.index #drop gen_fuel_df['type'].unique() """ Explanation: Add CO<sub>2</sub> emissions The difficulty here is that EIA combines all types of coal fuel consumption together in the bulk download and API. Fortunately the emission factors for different coal types aren't too far off on an energy basis (BIT is 93.3 kg/mmbtu, SUB is 97.2 kg/mmbtu). I'm going to average the BIT and SUB factors rather than trying to do something more complicated. In 2015 BIT represented 45% of coal energy for electricity and SUB represented 48%. Same issue with petroleum liquids. Using the average of DFO and RFO, which were the two largest share of petroleum liquids. 
End of explanation """ #drop ef.loc['NG', 'Fossil Factor'] fuel_factors = {'NG' : ef.loc['NG', 'Fossil Factor'], 'PEL': ef.loc[['DFO', 'RFO'], 'Fossil Factor'].mean(), 'PC' : ef.loc['PC', 'Fossil Factor'], 'COW' : ef.loc[['BIT', 'SUB'], 'Fossil Factor'].mean(), 'OOG' : ef.loc['OG', 'Fossil Factor']} #drop fuel_factors # Start with 0 emissions in all rows # For fuels where we have an emission factor, replace the 0 with the calculated value gen_fuel_df['all fuel CO2 (kg)'] = 0 gen_fuel_df['elec fuel CO2 (kg)'] = 0 for fuel in fuel_factors.keys(): gen_fuel_df.loc[gen_fuel_df['type']==fuel,'all fuel CO2 (kg)'] = \ gen_fuel_df.loc[gen_fuel_df['type']==fuel,'total fuel (mmbtu)'] * fuel_factors[fuel] gen_fuel_df.loc[gen_fuel_df['type']==fuel,'elec fuel CO2 (kg)'] = \ gen_fuel_df.loc[gen_fuel_df['type']==fuel,'elec fuel (mmbtu)'] * fuel_factors[fuel] gen_fuel_df.loc[gen_fuel_df['type']=='COW',:].head() #drop gen_fuel_df.loc[gen_fuel_df['type']=='OOG'].head() """ Explanation: Match general types with specific fuel codes Fuel codes: - WWW, wood and wood derived fuels - WND, wind - STH, solar thermal - WAS, other biomass - TSN, all solar - SUN, utility-scale solar - NUC, nuclear - NG, natural gas - PEL, petroleum liquids - SPV, utility-scale solar photovoltaic - PC, petroleum coke - OTH, other - COW, coal - DPV, distributed photovoltaic - OOG, other gases - HPS, hydro pumped storage - HYC, conventional hydroelectric - GEO, geothermal - AOR, other renewables (total) End of explanation """ path = os.path.join('Clean data', 'EIA country-wide gen fuel CO2.csv') gen_fuel_df.to_csv(path, index=False) """ Explanation: Export data End of explanation """
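The series_id parsing inside line_to_df can be exercised on its own. This standalone sketch assumes the non-plant id pattern selected by the filters above (e.g. 'ELEC.GEN.COW-US-99.M'); the example id and the parse_series_id name are ours, and the notebook's line_to_df also handles other shapes:

```python
def parse_series_id(series_id):
    # e.g. 'ELEC.GEN.COW-US-99.M' -> fuel 'COW', region 'US', sector '99',
    # monthly frequency 'M'.
    parts = series_id.split('.')
    fuel, region, sector = parts[-2].split('-')
    return {'type': fuel, 'region': region, 'sector': sector,
            'frequency': parts[-1]}

info = parse_series_id('ELEC.GEN.COW-US-99.M')
```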
weleen/mxnet
example/notebooks/tutorials/char_lstm.ipynb
apache-2.0
import os import urllib import zipfile if not os.path.exists("char_lstm.zip"): urllib.urlretrieve("http://data.mxnet.io/data/char_lstm.zip", "char_lstm.zip") with zipfile.ZipFile("char_lstm.zip","r") as f: f.extractall("./") with open('obama.txt', 'r') as f: print f.read()[0:1000] """ Explanation: Character-level language models This tutorial shows how to train a character-level language model with a multilayer recurrent neural network. In particular, we will train a multilayer LSTM network that is able to generate President Obama's speeches. Prepare data We first download the dataset and show the first few characters. End of explanation """ def read_content(path): with open(path) as ins: return ins.read() # Return a dict which maps each char to a unique int id def build_vocab(path): content = list(read_content(path)) idx = 1 # 0 is left for zero-padding the_vocab = {} for word in content: if len(word) == 0: continue if not word in the_vocab: the_vocab[word] = idx idx += 1 return the_vocab # Encode a sentence with int ids def text2id(sentence, the_vocab): words = list(sentence) return [the_vocab[w] for w in words if len(w) > 0] # build char vocabulary from input vocab = build_vocab("./obama.txt") print('vocab size = %d' %(len(vocab))) """ Explanation: Then we define a few utility functions to pre-process the dataset. End of explanation """ import lstm # Each line contains at most 129 chars. seq_len = 129 # embedding dimension, which maps a character to a 256-dimension vector num_embed = 256 # number of lstm layers num_lstm_layer = 3 # hidden unit in LSTM cell num_hidden = 512 symbol = lstm.lstm_unroll( num_lstm_layer, seq_len, len(vocab) + 1, num_hidden=num_hidden, num_embed=num_embed, num_label=len(vocab) + 1, dropout=0.2) """test_seq_len""" data_file = open("./obama.txt") for line in data_file: assert len(line) <= seq_len + 1, "seq_len is smaller than maximum line length. Current line length is %d. 
Line content is: %s" % (len(line), line) data_file.close() """ Explanation: Create LSTM Model Now we create a multi-layer LSTM model. The definition of the LSTM cell is implemented in lstm.py. End of explanation """ import bucket_io # The batch size for training batch_size = 32 # initialize states for LSTM init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)] init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)] init_states = init_c + init_h # Even though BucketSentenceIter supports various length examples, # we simply use the fixed length version here data_train = bucket_io.BucketSentenceIter( "./obama.txt", vocab, [seq_len], batch_size, init_states, seperate_char='\n', text2id=text2id, read_content=read_content) """ Explanation: Train First, we create a DataIterator End of explanation """ # @@@ AUTOTEST_OUTPUT_IGNORED_CELL import mxnet as mx import numpy as np import logging logging.getLogger().setLevel(logging.DEBUG) # We will show a quick demo with only 1 epoch. In practice, we can set it to be 100 num_epoch = 1 # learning rate learning_rate = 0.01 # Evaluation metric def Perplexity(label, pred): loss = 0. for i in range(pred.shape[0]): loss += -np.log(max(1e-10, pred[i][int(label[i])])) return np.exp(loss / label.size) model = mx.mod.Module(symbol=symbol, data_names=[x[0] for x in data_train.provide_data], label_names=[y[0] for y in data_train.provide_label], context=[mx.gpu(0)]) model.fit(train_data=data_train, num_epoch=num_epoch, optimizer='sgd', optimizer_params={'learning_rate':learning_rate, 'momentum':0, 'wd':0.0001}, initializer=mx.init.Xavier(factor_type="in", magnitude=2.34), eval_metric=mx.metric.np(Perplexity), batch_end_callback=mx.callback.Speedometer(batch_size, 20), epoch_end_callback=mx.callback.do_checkpoint("obama")) """ Explanation: Then we can train with the standard model.fit approach.
End of explanation """ from rnn_model import LSTMInferenceModel import bisect import random # helper structure for prediction def MakeRevertVocab(vocab): dic = {} for k, v in vocab.items(): dic[v] = k return dic # make input from char def MakeInput(char, vocab, arr): idx = vocab[char] tmp = np.zeros((1,)) tmp[0] = idx arr[:] = tmp # helper function for random sample def _cdf(weights): total = sum(weights) result = [] cumsum = 0 for w in weights: cumsum += w result.append(cumsum / total) return result def _choice(population, weights): assert len(population) == len(weights) cdf_vals = _cdf(weights) x = random.random() idx = bisect.bisect(cdf_vals, x) return population[idx] # we can use random output or fixed output by choosing largest probability def MakeOutput(prob, vocab, sample=False, temperature=1.): if sample == False: idx = np.argmax(prob, axis=1)[0] else: fix_dict = [""] + [vocab[i] for i in range(1, len(vocab) + 1)] scale_prob = np.clip(prob, 1e-6, 1 - 1e-6) rescale = np.exp(np.log(scale_prob) / temperature) rescale[:] /= rescale.sum() return _choice(fix_dict, rescale[0, :]) try: char = vocab[idx] except: char = '' return char """ Explanation: Inference We first define some utility functions to help us make inferences: End of explanation """ import rnn_model # load from checkpoint _, arg_params, __ = mx.model.load_checkpoint("obama", 75) # build an inference model model = rnn_model.LSTMInferenceModel( num_lstm_layer, len(vocab) + 1, num_hidden=num_hidden, num_embed=num_embed, num_label=len(vocab) + 1, arg_params=arg_params, ctx=mx.gpu(), dropout=0.2) """ Explanation: Then we create the inference model: End of explanation """ seq_length = 600 input_ndarray = mx.nd.zeros((1,)) revert_vocab = MakeRevertVocab(vocab) # Feel free to change the starter sentence output = 'The United States' random_sample = False new_sentence = True ignore_length = len(output) for i in range(seq_length): if i <= ignore_length - 1: MakeInput(output[i], vocab, input_ndarray) else: MakeInput(output[-1], vocab,
input_ndarray) prob = model.forward(input_ndarray, new_sentence) new_sentence = False next_char = MakeOutput(prob, revert_vocab, random_sample) if next_char == '': new_sentence = True if i >= ignore_length - 1: output += next_char print(output) """ Explanation: Now we can generate a sequence of 600 characters starting with "The United States" End of explanation """
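The _cdf/_choice helpers implement weighted sampling by inverse-CDF lookup, and the temperature term sharpens (T < 1) or flattens (T > 1) the distribution before sampling. A self-contained stdlib version of the same idea, with our own cdf and sample names rather than the notebook's exact code:

```python
import bisect
import math
import random

def cdf(weights):
    # Cumulative distribution normalized to end at 1.0.
    total = float(sum(weights))
    out, cumsum = [], 0.0
    for w in weights:
        cumsum += w
        out.append(cumsum / total)
    return out

def sample(population, weights, temperature=1.0):
    # Temperature-scale the weights, then invert the CDF at a uniform draw.
    scaled = [math.exp(math.log(max(w, 1e-6)) / temperature) for w in weights]
    x = random.random()
    return population[bisect.bisect(cdf(scaled), x)]

random.seed(0)
chars = ['a', 'b', 'c']
picked = sample(chars, [0.1, 0.8, 0.1], temperature=1.0)
```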
lsanomaly/lsanomaly
lsanomaly/notebooks/digits.ipynb
mit
import os from IPython.display import Image import numpy as np from pathlib import Path from sklearn import metrics cwd = os.getcwd() os.chdir(Path(cwd).parents[1]) from lsanomaly import LSAnomaly import lsanomaly.notebooks.digits as demo digits = os.path.join(os.getcwd(), "lsanomaly", "notebooks", "digits.png") """ Explanation: Recognizing a Digit In this example, we try to recognise digits of class 9 given training examples from classes 0-8. End of explanation """ X_train, X_test, y_train, y_test = demo.data_prep(test_size=0.5) anomaly_model = LSAnomaly() anomaly_model.fit(X_train, y_train) predictions = anomaly_model.predict_proba(X_test) fpr, tpr, thresholds = metrics.roc_curve(y_test == 9, predictions[:, -1]) """ Explanation: The $8\times 8$ images of digits are loaded from scikit-learn. Any digit $< 9$ is defined as the inlier class. Thus, the digit 9 is an outlier or anomaly. Note this is similar to the SVC example. Anomaly Model End of explanation """ demo.plot_roc(fpr, tpr, metrics.auc(fpr, tpr)) """ Explanation: ROC Curve End of explanation """ y_pred = anomaly_model.predict(X_test) y_pred = [w if np.isreal(w) else 9 for w in y_pred] demo.plot_confusion_matrix(y_test, y_pred, title='Confusion matrix', normalize=False) """ Explanation: Confusion Matrix End of explanation """
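Under the hood a confusion matrix is just pairwise label counting; demo.plot_confusion_matrix adds the plotting on top. A minimal stdlib sketch of the tallying step (confusion_counts is our own name, not from lsanomaly):

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    # Keys are (true_label, predicted_label) pairs; values are counts.
    return Counter(zip(y_true, y_pred))

# Two 3s classified correctly, one 9 correct, one 9 mistaken for a 3.
counts = confusion_counts([9, 9, 3, 3], [9, 3, 3, 3])
```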
CPernet/LanguageDecision
notebooks/individuals/controls.ipynb
gpl-3.0
# Environment setup %matplotlib inline %cd /lang_dec # Imports import warnings; warnings.filterwarnings('ignore') import hddm import numpy as np import matplotlib.pyplot as plt from utils import model_tools, signal_detection # Import control models controls_data = hddm.load_csv('/lang_dec/data/controls_clean.csv') controls_model = hddm.HDDM(controls_data, depends_on={'v': 'stim'}, bias=True) controls_model.load_db(dbname='language_decision/models/controls', db='txt') controls_model_threshold = hddm.HDDM(controls_data, depends_on={'v': 'stim', 'a': 'stim'}) controls_model_threshold.load_db(dbname='language_decision/models/controls_threshold', db='txt') """ Explanation: Controls: Individual Subject Model Performance Analysis End of explanation """ fig = plt.figure() ax = fig.add_subplot(111, xlabel='RT', ylabel='count', title='RT distributions') for i, subj_data in controls_data.groupby('subj_idx'): subj_data.rt.hist(bins=20, histtype='step', ax=ax) """ Explanation: Reaction Time Distributions RT distributions of each individual subject End of explanation """ controls_model.plot_posterior_predictive(figsize=(15, 10)) """ Explanation: Model Fitness How well do the models fit our data? Individual subject RT distributions are plotted in red on top of the predictive likelihood in blue (see http://ski.clps.brown.edu/hddm_docs/tutorial_python.html) End of explanation """
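HDDM fits a drift-diffusion model: evidence accumulates with drift rate v until it crosses boundary separation a, with the starting point z*a encoding bias. A toy stdlib simulation of a single trial, purely to build intuition about the parameters (this is our own sketch, not HDDM's sampler, and the parameter defaults are arbitrary):

```python
import random

def simulate_ddm_trial(v=0.5, a=2.0, z=0.5, dt=0.001, sigma=1.0):
    # Euler-discretized diffusion between boundaries 0 and a.
    x = z * a          # biased starting point
    t = 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    choice = 1 if x >= a else 0   # upper vs lower boundary response
    return choice, t

random.seed(42)
choice, rt = simulate_ddm_trial()
```

Repeating this many times per condition yields the kind of RT distributions plotted above; a larger drift v shortens RTs and pushes choices toward the upper boundary.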
StefanoAllesina/ISC
regex/solutions/MapOfScience_solution.ipynb
gpl-2.0
import re import csv """ Explanation: Map of Science Solution Read the file pubmed_results.txt, and extract all the US ZIP codes. First, import the modules we'll need. End of explanation """ with open('../data/MapOfScience/pubmed_results.txt') as f: my_text = f.read() len(my_text) """ Explanation: Now read the whole file, and store it into a string. End of explanation """ my_text = re.sub(r'\n\s{6}', ' ', my_text) """ Explanation: Note that the zipcode could be broken over two lines, as in line 43 of pubmed_results.txt AD - Biological and Biomedical Sciences Program, Harvard Medical School, Boston, MA 02115, USA. Department of Genetics, Harvard Medical School, Boston, MA 02115, USA. To avoid problems, replace each newline followed by 6 spaces with a single space. End of explanation """ print(my_text[:2000]) """ Explanation: This should be ok now: End of explanation """ zipcodes = re.findall(r'[A-Z]{2}\s(\d{5}), USA', my_text) """ Explanation: Now write a simple regular expression that creates a list of zipcodes: End of explanation """ len(zipcodes) zipcodes[:10] """ Explanation: The anatomy of the regular expression: [A-Z]{2} -&gt; two capital letters (for the state) \s -&gt; followed by a space \d{5} -&gt; followed by exactly 5 digits , USA -&gt; follwed by the string ", USA" Note that we use a group (\d{5}) to capture exclusively the zipcode proper. 
End of explanation """ unique_zipcodes = list(set(zipcodes)) unique_zipcodes.sort() unique_zipcodes[:10] len(unique_zipcodes) """ Explanation: Extract the unique zipcodes End of explanation """ zip_coordinates = {} with open('../data/MapOfScience/zipcodes_coordinates.txt') as f: csvr = csv.DictReader(f) for row in csvr: zip_coordinates[row['ZIP']] = [float(row['LAT']), float(row['LNG'])] """ Explanation: Now create a dictionary with the latitude and longitude for each zipcode: End of explanation """ zip_code = [] zip_long = [] zip_lat = [] zip_count = [] """ Explanation: Create the lists zip_code, containing the ZIP codes, zip_long, zip_lat, and zip_count, containing the unique ZIP codes, their longitude, latitude, and count (number of occurrences in Science), respectively. End of explanation """ for z in unique_zipcodes: # if we can find the coordinates if z in zip_coordinates.keys(): zip_code.append(z) zip_lat.append(zip_coordinates[z][0]) zip_long.append(zip_coordinates[z][1]) zip_count.append(zipcodes.count(z)) """ Explanation: Populate the lists: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.scatter(zip_long, zip_lat, s = zip_count, c= zip_count) plt.colorbar() # only continental us without Alaska plt.xlim(-125,-65) plt.ylim(23, 50) # add a few cities for reference (optional) ard = dict(arrowstyle="->") plt.annotate('Los Angeles', xy = (-118.25, 34.05), xytext = (-108.25, 34.05), arrowprops = ard) plt.annotate('Palo Alto', xy = (-122.1381, 37.4292), xytext = (-112.1381, 37.4292), arrowprops= ard) plt.annotate('Cambrdige', xy = (-71.1106, 42.3736), xytext = (-73.1106, 48.3736), arrowprops= ard) plt.annotate('Chicago', xy = (-87.6847, 41.8369), xytext = (-87.6847, 46.8369), arrowprops= ard) plt.annotate('Seattle', xy = (-122.33, 47.61), xytext = (-116.33, 47.61), arrowprops= ard) plt.annotate('Miami', xy = (-80.21, 25.7753), xytext = (-80.21, 30.7753), arrowprops= ard) params = plt.gcf() plSize = params.get_size_inches() 
params.set_size_inches( (plSize[0] * 3, plSize[1] * 3) ) plt.show() zip_code.index('60637') zip_count[215] """ Explanation: Plot the results using the following code: End of explanation """
Source: pyconsk/meetup, Bratislava/201508/Ludolph.ipynb (license: cc0-1.0)
pip install ludolph
"""
Explanation: Ludolph
Ludolph is a simple XMPP client written in Python that can reply to messages according to how we program it ;)
XMPP
Extensible Messaging and Presence Protocol (XMPP) (formerly known as Jabber) is a protocol used for network communication, much like AIM, ICQ, MSN or Skype. XMPP is a set of XML-based protocols and technologies that let any two entities on the internet exchange text messages, presence information, and other structured information in (near) real time.
List of public Jabber servers: https://xmpp.net/directory.php
Open-source Jabber servers: https://www.ejabberd.im/, https://prosody.im/
Installing Ludolph
A detailed description of the installation is on the Ludolph wiki: https://github.com/erigones/Ludolph/wiki/How-to-install-and-configure-Ludolph
At the meetup we will install Ludolph into a virtual environment (pyvenv), an isolated environment that lets us install packages of different versions independently of the packages installed in the OS.
End of explanation
"""
wget -O ~/.ludolph.cfg https://raw.github.com/erigones/Ludolph/master/ludolph/ludolph.cfg.example
"""
Explanation: Configuring Ludolph
We save the configuration file to the place where Ludolph looks for it at startup:
End of explanation
"""
import requests

def weather(self, msg):
    apikey = 'xxxxxxxxxxxxxxxxxxxxxxxxx'
    city = 'Bratislava,sk'
    url = 'http://api.openweathermap.org/data/2.5/forecast/city?q=%s&APPID=%s&units=metric'
    res = requests.get(url % (city, apikey)).json()
    # The response contains a "list" attribute with the entries matching the city.
    # For simplicity we take the first entry.
    try:
        data = res['list'][0]
    except IndexError:
        return 'City %s was not found' % city
    else:
        return 'Temperature: %s, Description: %s' % (data['main']['temp'],
                                                     data['weather'][0]['description'])
"""
Explanation: In the [xmpp] section we must set a username and password so that Ludolph has an account to log in with after startup.
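For reference, a minimal [xmpp] section might look like the sketch below. The account name and password are placeholders, and the exact option names should be checked against the ludolph.cfg.example file downloaded above:

```ini
[xmpp]
username = ludolph@example.com
password = secret
```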
We now have a fully functional Jabber bot that logs in after startup; we can add it to our contact list and start chatting with it.
Plugins in Ludolph
A detailed description of how to create a plugin is on the Ludolph wiki: https://github.com/erigones/Ludolph/wiki/How-to-create-a-plugin
Writing your own Ludolph plugin is nothing complicated: Ludolph ships with a ready-made "hello_world" plugin project. The project serves as the simplest possible example of creating a plugin and describes the decorators. The "hello_world" plugin is structured as an installable PyPI package; for a plugin, however, it is enough that the file is on the Python path, i.e. that the class can be imported.
A weather-forecast plugin
We add a new method to the "hello_world" plugin, containing the following code:
End of explanation
"""
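The JSON handling in the weather method can be factored into a small helper that is easy to test without network access. The stub dictionary below is hand-made to mirror the shape the code above expects; it is not real OpenWeatherMap output:

```python
def format_forecast(res, city):
    """Format the first forecast entry from an OpenWeatherMap-style response dict."""
    try:
        data = res['list'][0]
    except (KeyError, IndexError):
        return 'City %s was not found' % city
    return 'Temperature: %s, Description: %s' % (
        data['main']['temp'], data['weather'][0]['description'])

# Hand-made stub mimicking the expected response shape (not real API output)
stub = {'list': [{'main': {'temp': 21.5},
                  'weather': [{'description': 'light rain'}]}]}

print(format_forecast(stub, 'Bratislava,sk'))     # Temperature: 21.5, Description: light rain
print(format_forecast({'list': []}, 'Atlantis'))  # City Atlantis was not found
```

The plugin method then only performs the HTTP request and delegates formatting, which keeps the network-free part unit-testable.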
Source: tensorflow/docs-l10n, site/ja/io/tutorials/prometheus.ipynb (license: apache-2.0)
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow IO Authors. End of explanation """ import os try: %tensorflow_version 2.x except Exception: pass !pip install tensorflow-io from datetime import datetime import tensorflow as tf import tensorflow_io as tfio """ Explanation: Prometheus サーバーからメトリックを読み込む <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/prometheus"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.orgで表示</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示{</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード/a0}</a></td> </table> 注: このノートブックは python パッケージの他、sudo apt-get installを使用してサードパーティのパッケージをインストールします。 概要 このチュートリアルでは、CoreDNS メトリクスを Prometheus サーバーからtf.data.Datasetに読み込み、tf.kerasをトレーニングと推論に使用します。 CoreDNS は、サービスディスカバリに重点を置いた DNS サーバーであり、Kubernetes 
クラスタの一部として広くデプロイされています。そのため、通常、開発オペレーションによって綿密に監視されています。 このチュートリアルでは、開発者向けに機械学習による運用の自動化の例を紹介します。 セットアップと使用法 必要な tensorflow-io パッケージをインストールし、ランタイムを再起動する End of explanation """ !curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz !tar -xzf coredns_1.6.7_linux_amd64.tgz !curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile !cat Corefile # Run `./coredns` as a background process. # IPython doesn't recognize `&` in inline bash cells. get_ipython().system_raw('./coredns &') """ Explanation: CoreDNS と Prometheus のインストールとセットアップ デモ用に、9053番ポートをローカルで開放し、DNS クエリを受信するための CoreDNS サーバーとスクレイピングのメトリックを公開するために9153番ポート (デフォルト) を開放します。以下は CoreDNS の基本的な Corefile 構成であり、ダウンロードできます。 .:9053 { prometheus whoami } インストールの詳細については、CoreDNS のドキュメントを参照してください。 End of explanation """ !curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz !tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1 !curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml !cat prometheus.yml # Run `./prometheus` as a background process. # IPython doesn't recognize `&` in inline bash cells. 
get_ipython().system_raw('./prometheus &') """ Explanation: 次に、Prometheus サーバーをセットアップし、Prometheus を使用して、上記の9153番ポートで公開されている CoreDNS メトリックを取得します。また、構成用のprometheus.ymlファイルはダウンロードできます。 End of explanation """ !sudo apt-get install -y -qq dnsutils !dig @127.0.0.1 -p 9053 demo1.example.org !dig @127.0.0.1 -p 9053 demo2.example.org """ Explanation: アクティビティを表示するためには、digコマンドを使用して、セットアップされている CoreDNS サーバーに対していくつかの DNS クエリを生成できます。 End of explanation """ dataset = tfio.experimental.IODataset.from_prometheus( "coredns_dns_request_count_total", 5, endpoint="http://localhost:9090") print("Dataset Spec:\n{}\n".format(dataset.element_spec)) print("CoreDNS Time Series:") for (time, value) in dataset: # time is milli second, convert to data time: time = datetime.fromtimestamp(time // 1000) print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total'])) """ Explanation: CoreDNS サーバーのメトリックが Prometheus サーバーによりスクレイピングされ、TensorFlow で使用する準備ができました。 CoreDNS メトリックのデータセットを作成し、TensorFlow で使用する PostgreSQL サーバーから利用可能な CoreDNS メトリックのデータセットを作成します。これは、tfio.experimental.IODataset.from_prometheusを使用して実行できます。少なくとも次の 2 つの引数が必要です。 <code>query</code>はメトリックを選択するため Prometheus サーバーに渡され、<code>length</code>は Dataset に読み込む期間です。 "coredns_dns_request_count_total"と"5"(秒)から始めて、以下のデータセットを作成します。チュートリアルの前半で 2 つの DNS クエリが送信されたため、"coredns_dns_request_count_total"のメトリックは時系列の終わりに"2.0"になると予想されます。 End of explanation """ dataset = tfio.experimental.IODataset.from_prometheus( "go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090") print("Time Series CoreDNS/Prometheus Comparision:") for (time, value) in dataset: # time is milli second, convert to data time: time = datetime.fromtimestamp(time // 1000) print("{}: {}/{}".format( time, value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'], value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes'])) """ Explanation: データセットの仕様をさらに見てみましょう。 ``` ( TensorSpec(shape=(), dtype=tf.int64, name=None), { 'coredns': { 
'localhost:9153': { 'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None) } } } ) ``` データセットが(time, values)タプルで構成されていることは明らかです。valuesフィールドは、次のように展開された python dict です。 "job_name": { "instance_name": { "metric_name": value, }, } 上記の例では、'coredns'はジョブ名、'localhost:9153'はインスタンス名、'coredns_dns_request_count_total'はメトリック名です。使用する Prometheus クエリによっては、複数のジョブ/インスタンス/メトリックが返される可能性があることに注意してください。これは、データセットの構造で python dict が使用されている理由でもあります。 別のクエリ"go_memstats_gc_sys_bytes"を例として見てみましょう。CoreDNS と Prometheus はどちらも Golang で記述されているため、"go_memstats_gc_sys_bytes"メトリックは、"coredns"ジョブと"prometheus"ジョブの両方で使用できます。 注: このセルは、最初に実行したときにエラーになる場合があります。もう一度実行すると、パスします。 End of explanation """ n_steps, n_features = 2, 1 simple_lstm_model = tf.keras.models.Sequential([ tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)), tf.keras.layers.Dense(1) ]) simple_lstm_model.compile(optimizer='adam', loss='mae') """ Explanation: 作成されたDatasetは、トレーニングまたは推論のために直接tf.kerasに渡す準備ができています。 モデルトレーニングにデータセットを使用する メトリクスのデータセットを作成すると、モデルのトレーニングや推論のためにデータセットをtf.kerasに直接渡すことができます。 デモのために、このチュートリアルでは、1 つの特徴と 2 つのステップを入力とする非常に単純な LSTM モデルを使用します。 End of explanation """ n_samples = 10 dataset = tfio.experimental.IODataset.from_prometheus( "go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090") # take go_memstats_gc_sys_bytes from coredns job dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes']) # find the max value and scale the value to [0, 1] v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum) dataset = dataset.map(lambda v: (v / v_max)) # expand the dimension by 1 to fit n_features=1 dataset = dataset.map(lambda v: tf.expand_dims(v, -1)) # take a sliding window dataset = dataset.window(n_steps, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda d: d.batch(n_steps)) # the first value is x and the next value is y, only take 10 samples x = dataset.take(n_samples) y = 
dataset.skip(1).take(n_samples) dataset = tf.data.Dataset.zip((x, y)) # pass the final dataset to model.fit for training simple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10) """ Explanation: 使用するデータセットは、10 サンプルの CoreDNS の「go_memstats_sys_bytes」の値です。ただし、window = n_stepsおよびshift = 1のスライディングウィンドウが形成されるため、追加のサンプルが必要です (2 つの連続する要素の場合、最初のサンプルはxで、2 番目はyと見なされます) 。合計は10 + n_steps - 1 + 1 = 12 秒です。 また、データ値は[0、1]にスケーリングされています。 End of explanation """
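To see exactly which (x, y) pairs the `window`/`shift`/`zip` pipeline above feeds to the LSTM, the same pairing logic can be mimicked in plain Python. This is only a sketch of the indexing, not the tf.data pipeline itself:

```python
def make_supervised_pairs(series, n_steps, n_samples):
    # Sliding windows of length n_steps with shift=1,
    # like Dataset.window(n_steps, shift=1, drop_remainder=True)
    windows = [series[i:i + n_steps] for i in range(len(series) - n_steps + 1)]
    # The first window is x and the next (shifted by one) is y,
    # like zip(dataset.take(n), dataset.skip(1).take(n)) above
    xs = windows[:n_samples]
    ys = windows[1:n_samples + 1]
    return list(zip(xs, ys))

series = [0.1, 0.2, 0.3, 0.4, 0.5]
pairs = make_supervised_pairs(series, n_steps=2, n_samples=3)
for x, y in pairs:
    print(x, '->', y)
# [0.1, 0.2] -> [0.2, 0.3], and so on
```

This makes it clear why the pipeline needs `n_samples + n_steps - 1 + 1` points: enough raw values to build `n_samples` x-windows plus the one-step-shifted y-windows.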
Source: shakhova/BananaML, kaggle_flight/Desicion_trees_practise.ipynb (license: gpl-3.0)
from __future__ import division, print_function # отключим всякие предупреждения Anaconda import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd %matplotlib inline import seaborn as sns from matplotlib import pyplot as plt plt.rcParams['figure.figsize'] = (6,4) xx = np.linspace(0,1,50) plt.plot(xx, [2 * x * (1-x) for x in xx], label='gini') plt.plot(xx, [4 * x * (1-x) for x in xx], label='2*gini') plt.plot(xx, [-x * np.log2(x) - (1-x) * np.log2(1 - x) for x in xx], label='entropy') plt.plot(xx, [1 - max(x, 1-x) for x in xx], label='missclass') plt.plot(xx, [2 - 2 * max(x, 1-x) for x in xx], label='2*missclass') plt.xlabel('p+') plt.ylabel('criterion') plt.title('Критерии качества как функции от p+ (бинарная классификация)') plt.legend(); """ Explanation: Как строится дерево решений На прошлом занятии мы затронули понятие энтропии - рассмотрим ее подробнее. Определение Энтропия Шеннона определяется для системы с $N$ возможными состояниями следующим образом: $$\Large S = -\sum_{i=1}^{N}p_ilog_2p_i,$$ где $p_i$ – вероятности нахождения системы в $i$-ом состоянии. Это очень важное понятие, используемое в физике, теории информации и других областях. Опуская предпосылки введения (комбинаторные и теоретико-информационные) этого понятия, отметим, что, интуитивно, энтропия соответствует степени хаоса в системе. Чем выше энтропия, тем менее упорядочена система и наоборот. <h4>Пример</h4> Для иллюстрации того, как энтропия поможет определить хорошие признаки для построения дерева, вспомним пример определения цвета шарика по его координате. Конечно, ничего общего с жизнью это не имеет, но позволяет показать, как энтропия используется для построения дерева решений. <img src="https://habrastorage.org/files/c96/80a/a4b/c9680aa4babc40f4bbc8b3595e203979.png"/><br> Здесь 9 синих шариков и 11 желтых. Если мы наудачу вытащили шарик, то он с вероятностью $p_1=\frac{9}{20}$ будет синим и с вероятностью $p_2=\frac{11}{20}$ – желтым. 
Значит, энтропия состояния $S_0 = -\frac{9}{20}log_2{\frac{9}{20}}-\frac{11}{20}log_2{\frac{11}{20}} \approx 1$. Само это значение пока ни о чем нам не говорит. Теперь посмотрим, как изменится энтропия, если разбить шарики на две группы – с координатой меньше либо равной 12 и больше 12. <img src="https://habrastorage.org/files/186/444/a8b/186444a8bd0e451c8324ca8529f8d4f4.png"/><br> В левой группе оказалось 13 шаров, из которых 8 синих и 5 желтых. Энтропия этой группы равна $S_1 = -\frac{5}{13}log_2{\frac{5}{13}}-\frac{8}{13}log_2{\frac{8}{13}} \approx 0.96$. В правой группе оказалось 7 шаров, из которых 1 синий и 6 желтых. Энтропия правой группы равна $S_2 = -\frac{1}{7}log_2{\frac{1}{7}}-\frac{6}{7}log_2{\frac{6}{7}} \approx 0.6$. Как видим, энтропия уменьшилась в обеих группах по сравнению с начальным состоянием, хоть в левой и не сильно. Поскольку энтропия – по сути степень хаоса (или неопределенности) в системе, уменьшение энтропии называют приростом информации. Формально прирост информации (information gain, IG) при разбиении выборки по признаку $Q$ (в нашем примере это признак "$x \leq 12$") определяется как $$\Large IG(Q) = S_O - \sum_{i=1}^{q}\frac{|N_i|}{N}S_i,$$ где $q$ – число групп после разбиения, $N_i$ – число элементов выборки, у которых признак $Q$ имеет $i$-ое значение. В нашем случае после разделения получилось две группы ($q = 2$) – одна из 13 элементов ($N_1 = 13$), вторая – из 7 ($N_2 = 7$). Прирост информации получился $$\Large IG("x \leq 12") = S_0 - \frac{13}{20}S_1 - \frac{7}{20}S_2 \approx 0.16.$$ Получается, разделив шарики на две группы по признаку "координата меньше либо равна 12", мы уже получили более упорядоченную систему, чем в начале. Продолжим деление шариков на группы до тех пор, пока в каждой группе шарики не будут одного цвета. 
<img src="https://habrastorage.org/files/dae/a88/2b0/daea882b0a8e4ef4b23325c88f0353a1.png"/><br> Для правой группы потребовалось всего одно дополнительное разбиение по признаку "координата меньше либо равна 18", для левой – еще три. Очевидно, энтропия группы с шариками одного цвета равна 0 ($log_2{1} = 0$), что соответствует представлению, что группа шариков одного цвета – упорядоченная. В итоге мы построили дерево решений, предсказывающее цвет шарика по его координате. Отметим, что такое дерево решений может плохо работать для новых объектов (определения цвета новых шариков), поскольку оно идеально подстроилось под обучающую выборку (изначальные 20 шариков). Для классификации новых шариков лучше подойдет дерево с меньшим числом "вопросов", или разделений, пусть даже оно и не идеально разбивает по цветам обучающую выборку. Эту проблему, переобучение, мы еще рассмотрим далее. Алгоритм построения дерева В основе популярных алгоритмов построения дерева решений, таких как ID3 и C4.5, лежит принцип жадной максимизации прироста информации – на каждом шаге выбирается тот признак, при разделении по которому прирост информации оказывается наибольшим. Дальше процедура повторяется рекурсивно, пока энтропия не окажется равной нулю или какой-то малой величине (если дерево не подгоняется идеально под обучающую выборку во избежание переобучения). В разных алгоритмах применяются разные эвристики для "ранней остановки" или "отсечения", чтобы избежать построения переобученного дерева. python def build(L): create node t if the stopping criterion is True: assign a predictive model to t else: Find the best binary split L = L_left + L_right t.left = build(L_left) t.right = build(L_right) return t Другие критерии качества разбиения в задаче классификации Мы разобрались, в том, как понятие энтропии позволяет формализовать представление о качестве разбиения в дереве. Но это всего-лишь эвристика, существуют и другие: Неопределенность Джини (Gini impurity): $G = 1 - \sum\limits_k (p_k)^2$. 
Максимизацию этого критерия можно интерпретировать как максимизацию числа пар объектов одного класса, оказавшихся в одном поддереве. Подробнее об этом (как и обо многом другом) можно узнать из репозитория Евгения Соколова. Не путать с индексом Джини! Подробнее об этой путанице – в блогпосте Александра Дьяконова Ошибка классификации (misclassification error): $E = 1 - \max\limits_k p_k$ На практике ошибка классификации почти не используется, а неопределенность Джини и прирост информации работают почти одинаково. В случае задачи бинарной классификации ($p_+$ – вероятность объекта иметь метку +) энтропия и неопределенность Джини примут следующий вид:<br><br> $$ S = -p_+ \log_2{p_+} -p_- \log_2{p_-} = -p_+ \log_2{p_+} -(1 - p_{+}) \log_2{(1 - p_{+})};$$ $$ G = 1 - p_+^2 - p_-^2 = 1 - p_+^2 - (1 - p_+)^2 = 2p_+(1-p_+).$$ Когда мы построим графики этух двух функций от аргумента $p_+$, то увидим, что график энтропии очень близок к графику удвоенной неопределенности Джини, и поэтому на практике эти два критерия "работают" почти одинаково. End of explanation """ # первый класс np.random.seed(7) train_data = np.random.normal(size=(100, 2)) train_labels = np.zeros(100) # добавляем второй класс train_data = np.r_[train_data, np.random.normal(size=(100, 2), loc=2)] train_labels = np.r_[train_labels, np.ones(100)] """ Explanation: А теперь практический пример Рассмотрим пример применения дерева решений из библиотеки Scikit-learn для синтетических данных. Сгенерируем данные. Два класса будут сгенерированы из двух нормальных распределений с разными средними. End of explanation """ def get_grid(data, eps=0.01): x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 return np.meshgrid(np.arange(x_min, x_max, eps), np.arange(y_min, y_max, eps)) """ Explanation: Напишем вспомогательную функцию, которая будет возвращать решетку для дальнейшей красивой визуализации. 
End of explanation """ plt.rcParams['figure.figsize'] = (10,8) plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100, cmap='autumn', edgecolors='black', linewidth=1.5) plt.plot(range(-2,5), range(4,-3,-1)); """ Explanation: Отобразим данные. Неформально, задача классификации в этом случае – построить какую-то "хорошую" границу, разделяющую 2 класса (красные точки от желтых). Интуиция подсказывает, что хорошо на новых данных будет работать какая-то гладкая граница, разделяющая 2 класса, или хотя бы просто прямая (в $n$-мерном случае - гиперплоскость). End of explanation """ from sklearn.tree import DecisionTreeClassifier # параметр min_samples_leaf указывает, при каком минимальном количестве # элементов в узле он будет дальше разделяться rs = 17 clf_tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=rs) # обучаем дерево clf_tree.fit(train_data, train_labels) # немного кода для отображения разделяющей поверхности xx, yy = get_grid(train_data) predicted = clf_tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.pcolormesh(xx, yy, predicted, cmap='autumn') plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels, s=100, cmap='autumn', edgecolors='black', linewidth=1.5); """ Explanation: Попробуем разделить эти два класса, обучив дерево решений. В дереве будем использовать параметр max_depth, ограничивающий глубину дерева. Визуализируем полученную границу разделения класссов. End of explanation """ # используем .dot формат для визуализации дерева from sklearn.tree import export_graphviz export_graphviz(clf_tree, feature_names=['x1', 'x2'], out_file='small_tree.dot', filled=True) !dot -Tpng small_tree.dot -o small_tree.png !rm small_tree.dot """ Explanation: А как выглядит само построенное дерево? Видим, что дерево "нарезает" пространство на 7 прямоугольников (в дереве 7 листьев). В каждом таком прямоугольнике прогноз дерева будет константным, по превалированию объектов того или иного класса. 
End of explanation """ data = pd.DataFrame({'Возраст пилота': [19,64,18,20,38,49,55,25,29,31,33], 'Задержка рейса': [1,0,1,0,1,0,0,1,1,0,1]}) data """ Explanation: <img src='small_tree.png'> Как "читается" такое дерево? В начале было 200 объектов, 100 – одного класса и 100 – другого. Энтропия начального состояния была максимальной – 1. Затем было сделано разбиение объектов на 2 группы в зависимости от сравнения признака $x_1$ со значением $1.1034$ (найдите этот участок границы на рисунке выше, до дерева). При этом энтропия и в левой, и в правой группе объектов уменьшилась. И так далее, дерево строится до глубины 3. При такой визуализации чем больше объектов одного класса, тем цвет вершины ближе к темно-оранжевому и, наоборот, чем больше объектов второго класса, тем ближе цвет к темно-синему. В начале объектов одного лкасса поровну, поэтому корневая вершина дерева – белого цвета. Как дерево решений работает с количественными признаками Допустим, в выборке имеется количественный признак "Возраст", имеющий много уникальных значений. Дерево решений будет искать лучшее (по критерию типа прироста информации) разбиение выборки, проверяя бинарные признаки типа "Возраст < 17", "Возраст < 22.87" и т.д. Для решения этой проблемы применяют эвристики для ограничения числа порогов, с которыми мы сравниваем количественный признак. Рассмотрим это на игрушечном примере. Пусть в нашем датасете на kaggle появился новый признак: End of explanation """ data.sort_values('Возраст пилота') """ Explanation: Отсортируем ее по возрастанию возраста. End of explanation """ age_tree = DecisionTreeClassifier(random_state=17) age_tree.fit(data['Возраст пилота'].values.reshape(-1, 1), data['Задержка рейса'].values) """ Explanation: Обучим на этих данных дерево решений (без ограничения глубины) и посмотрим на него. 
End of explanation """ export_graphviz(age_tree, feature_names=['Возраст пилота'], out_file='age_tree.dot', filled=True) !dot -Tpng age_tree.dot -o age_tree.png """ Explanation: Видим, что дерево задействовало 5 значений, с которыми сравнивается возраст: 43.5, 19, 22.5, 30 и 32 года. Если приглядеться, то это аккурат средние значения между возрастами, при которых целевой класс "меняется" с 1 на 0 или наоборот. То есть в качестве порогов для "нарезания" количественного признака, дерево "смотрит" на те значения, при которых целевой класс меняет свое значение. Подумайте, почему не имеет смысла в данном случае рассматривать признак "Возраст пилота < 18". End of explanation """ data2 = pd.DataFrame({'Возраст пилота': [19,64,18,20,38,49,55,25,29,31,33], 'Зарплата пилота': [25,80,22,36,37,59,74,70,33,102,88], 'Задержка рейса': [1,0,1,0,1,0,0,1,1,0,1]}) data2 """ Explanation: <img src='age_tree.png'> Рассмотрим пример посложнее: добавим признак "Зарплата пилота" (тыс. рублей/месяц). End of explanation """ data2.sort_values('Возраст пилота') data2.sort_values('Зарплата пилота') age_sal_tree = DecisionTreeClassifier(random_state=17) age_sal_tree.fit(data2[['Возраст пилота', 'Зарплата пилота']].values, data2['Задержка рейса'].values); export_graphviz(age_sal_tree, feature_names=['Возраст пилота', 'Зарплата пилота'], out_file='age_sal_tree.dot', filled=True) !dot -Tpng age_sal_tree.dot -o age_sal_tree.png """ Explanation: Если отсортировать по возрасту, то целевой класс ("Задержка рейса") меняется (с 1 на 0 или наоборот) 5 раз. А если отсортировать по зарплате – то 7 раз. Как теперь дерево будет выбирать признаки? Посмотрим. 
End of explanation """ from sklearn.utils import shuffle from sklearn.model_selection import train_test_split df_k = pd.read_csv('/Users/Nonna/Desktop/BananaML/BananaML/kaggle_flight/train_dataset.csv') df_k = shuffle(df_k) df_k = df_k.head(250) train_df = df_k[['Month', 'DayofMonth', 'DayOfWeek', 'UniqueCarrier', 'target']] train_df = train_df.fillna(train_df.mean()) train_df = pd.get_dummies(train_df, columns = ['Month', 'DayofMonth', 'DayOfWeek', 'UniqueCarrier']) x_train, x_test, y_train, y_test = train_test_split(train_df.drop('target', axis = 1), train_df.target, test_size=0.3, random_state=42) print(x_train.shape, x_test.shape) """ Explanation: <img src='age_sal_tree.png'> Видим, что в дереве задействованы как разбиения по возрасту, так и по зарплате. Причем пороги, с которыми сравниваются признаки: 43.5 и 22.5 года – для возраста и 95 и 30.5 тыс. руб/мес – для зарплаты. И опять можно заметить, что 95 тыс. – это среднее между 88 и 102, при этом человек с зарплатой 88 оказался "плохим", а с 102 – "хорошим". То же самое для 30.5 тыс. То есть перебирались сравнения зарплаты и возраста не со всеми возможными значениями, а только с несколькими. А почему в дереве оказались именно эти признаки? Потому что по ним разбиения оказались лучше (по критерию неопределенности Джини). Вывод: самая простая эвристика для обработки количественных признаков в дереве решений: количественный признак сортируется по возрастанию, и в дереве проверяются только те пороги, при которых целевой признак меняет значение. Дополнительно, когда в данных много количественных признаков, и у каждого много уникальных значений, могут отбираться не все пороги, описанные выше, а только топ-N, дающих максимальный прирост все того же критерия. То есть, по сути, для каждого порога строится дерево глубины 1, считается насколько снизилась энтропия (или неопределенность Джини) и выбираются только лучшие пороги, с которыми стоит сравнивать количественный признак. 
Основные способы борьбы с переобучением в случае деревьев решений искусственное ограничение глубины или минимального числа объектов в листе: построение дерева просто в какой-то момент прекращается; стрижка дерева (pruning). При таком подходе дерево сначала строится до максимальной глубины, потом постепенно, снизу вверх, некоторые вершины дерева убираются за счет сравнения по качеству дерева с данным разбиением и без него (сравнение проводится с помощью кросс-валидации, о которой чуть ниже). Подробнее можно почитать в материалах репозитория Евгения Соколова. Класс DecisionTreeClassifier в Scikit-learn Основные параметры класса sklearn.tree.DecisionTreeClassifier: max_depth – максимальная глубина дерева max_features - максимальное число признаков, по которым ищется лучшее разбиение в дереве (это нужно потому, что при большом количестве признаков будет "дорого" искать лучшее (по критерию типа прироста информации) разбиение среди всех признаков) min_samples_leaf – минимальное число объектов в листе. У этого параметра есть понятная интерпретация: скажем, если он равен 5, то дерево будет порождать только те классифицирующие правила, которые верны как мимимум для 5 объектов Параметры дерева надо настраивать в зависимости от входных данных, делается это обычно с помощью кросс-валидации. Попробуем сделать это на нашем любимом датасете. End of explanation """ from sklearn.model_selection import GridSearchCV, cross_val_score from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import roc_auc_score tree = DecisionTreeClassifier(max_depth=5, random_state=17) tree_params = {'max_depth': range(1,11), 'max_features': range(4,19)} tree_grid = GridSearchCV(tree, tree_params, cv=5, n_jobs=-1, verbose=True, scoring='roc_auc') tree_grid.fit(x_train, y_train) """ Explanation: Теперь настроим параметры дерева на кросс-валидации. Настраивать будем максимальную глубину и максимальное используемое на каждом разбиении число признаков. 
Суть того, как работает GridSearchCV: для каждой уникальной пары значений параметров max_depth и max_features будет проведена 5-кратная кросс-валидация и выберется лучшее сочетание параметров. End of explanation """ tree_grid.best_params_ tree_grid.best_score_ roc_auc_score(y_test, tree_grid.predict(x_test)) from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=17) print(np.mean(cross_val_score(forest, x_train, y_train, cv=5))) forest_params = {'max_depth': range(1,11), 'max_features': range(4,19)} forest_grid = GridSearchCV(forest, forest_params, cv=5, n_jobs=-1, verbose=True, scoring='roc_auc') forest_grid.fit(x_train, y_train) forest_grid.best_params_, forest_grid.best_score_ roc_auc_score(y_test, forest_grid.predict(x_test)) """ Explanation: Лучшее сочетание параметров и соответствующая средняя доля правильных ответов на кросс-валидации: End of explanation """ from sklearn.tree import export_graphviz export_graphviz(tree_grid.best_estimator_, feature_names=train_df.columns[:-1], out_file='flight_tree.dot', filled=True) !dot -Tpng flight_tree.dot -o flight_tree.png """ Explanation: Нарисуем получившееся дерево: End of explanation """ from sklearn.datasets import load_digits """ Explanation: <img src='flight_tree.png'> Деревья решений в задаче распознавания рукописных цифр MNIST Теперь посмотрим на описанные 2 алгоритма в реальной задаче. Используемый "встроенные" в sklearn данные по рукописным цифрам. Эта задача будет примером, когда метод ближайших соседей работает на удивление хорошо. End of explanation """ data = load_digits() X, y = data.data, data.target """ Explanation: Загружаем данные. End of explanation """ X[0,:].reshape([8,8]) """ Explanation: Картинки здесь представляются матрицей 8 x 8 (интенсивности белого цвета для каждого пикселя). Далее эта матрица "разворачивается" в вектор длины 64, получается признаковое описание объекта. 
End of explanation """ f, axes = plt.subplots(1, 4, sharey=True, figsize=(16,6)) for i in range(4): axes[i].imshow(X[i,:].reshape([8,8])); """ Explanation: Нарисуем несколько рукописных цифр, видим, что они угадываются. End of explanation """ np.bincount(y) """ Explanation: Посмотрим на соотношение классов в выборке, видим, что примерно поровну нулей, единиц, ..., девяток. End of explanation """ X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3, random_state=17) """ Explanation: Выделим 70% выборки (X_train, y_train) под обучение и 30% будут отложенной выборкой (X_holdout, y_holdout). отложенная выборка никак не будет участвовать в настройке параметров моделей, на ней мы в конце, после этой настройки, оценим качество полученной модели. End of explanation """ tree = DecisionTreeClassifier(max_depth=5, random_state=17) %%time tree.fit(X_train, y_train) """ Explanation: Обучим дерево решений, опять параметры пока наугад берем. End of explanation """ from sklearn.metrics import accuracy_score tree_pred = tree.predict(X_holdout) accuracy_score(y_holdout, tree_pred) """ Explanation: Сделаем прогнозы для отложенной выборки. Видим, что метод ближайших соседей справился намного лучше. Но это мы пока выбирали параметры наугад. 
End of explanation
"""

tree_params = {'max_depth': [1, 2, 3, 5, 10, 20, 25, 30, 40, 50, 64],
               'max_features': [1, 2, 3, 5, 10, 20, 30, 50, 64]}

tree_grid = GridSearchCV(tree, tree_params, cv=5, n_jobs=-1, verbose=True,
                         scoring='accuracy')
tree_grid.fit(X_train, y_train)

"""
Explanation: Now, just as before, let's tune the model parameters with cross-validation.
End of explanation
"""

tree_grid.best_params_, tree_grid.best_score_

accuracy_score(y_holdout, tree_grid.predict(X_holdout))

"""
Explanation: The best combination of parameters and the corresponding mean share of correct answers on cross-validation:
End of explanation
"""

np.mean(cross_val_score(RandomForestClassifier(random_state=17), X_train, y_train, cv=5))

rf = RandomForestClassifier(random_state=17, n_jobs=-1).fit(X_train, y_train)
accuracy_score(y_holdout, rf.predict(X_holdout))

"""
Explanation: That is no longer 66%, but not 97% either. Let's train a random forest on the same data; on most datasets it works better than plain trees. But here we have an exception.
End of explanation
"""

def form_linearly_separable_data(n=500, x1_min=0, x1_max=30,
                                 x2_min=0, x2_max=30):
    data, target = [], []
    for i in range(n):
        x1 = np.random.randint(x1_min, x1_max)
        x2 = np.random.randint(x2_min, x2_max)
        if np.abs(x1 - x2) > 0.5:
            data.append([x1, x2])
            target.append(np.sign(x1 - x2))
    return np.array(data), np.array(target)

X, y = form_linearly_separable_data()

plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn', edgecolors='black');

"""
Explanation: Experiment results:

| | CV | Holdout |
|-----|:-----:|:-------:|
| DT | 0.844 | 0.838 |
| RF | 0.935 | 0.941 |

Notation: CV and Holdout are the mean shares of correct answers of the model on cross-validation and on the hold-out set, respectively. DT is the decision tree, RF is the random forest.

Pros and cons of decision trees

Pros:
- Generation of clear classification rules that a human can understand, e.g. "if age < 25 and interested in motorcycles, refuse the loan". This property is called model interpretability;
- Decision trees are easy to visualize, i.e. both the model itself (the tree) and the prediction for a particular test object (the path in the tree) can be "interpreted" (I have not seen a strict definition);
- Fast training and prediction;
- A small number of model parameters;
- Support for both numerical and categorical features.

Cons:
- The generation of clear classification rules has a flip side: trees are very sensitive to noise in the input data, and the whole model can change dramatically if the training set is slightly modified (e.g. if one feature is removed or a few objects are added), so the classification rules can also change substantially, which hurts the interpretability of the model;
- The separating boundary built by a decision tree has its limitations (it consists of hyperplanes perpendicular to one of the coordinate axes), and in practice a decision tree is inferior in classification quality to some other methods;
- The need to prune tree branches, or to set a minimum number of objects per leaf or a maximum tree depth, in order to fight overfitting. That said, overfitting is a problem for all machine learning methods;
- Instability. Small changes in the data can substantially change the resulting decision tree. This problem is addressed with ensembles of decision trees (discussed later);
- The problem of finding the optimal decision tree (minimal in size and able to classify the sample without errors) is NP-complete, so in practice heuristics are used, such as greedily picking the feature with the maximum information gain; these do not guarantee finding the globally optimal tree;
- Missing values in the data are hard to support. Friedman estimated that supporting missing data took about 50% of the code of CART (a classic algorithm for building classification and regression trees, Classification And Regression Trees; sklearn implements an improved version of exactly this algorithm);
- The model can only interpolate, not extrapolate (the same is true for forests and boosting on trees). That is, a decision tree makes a constant prediction for objects located in feature space outside the parallelepiped that encloses all the objects of the training set. In our example with the yellow and blue balls this means that the model gives the same prediction for all balls with coordinate > 19 or < 0.

A hard case for trees

Continuing the discussion of pros and cons, here is a very simple classification problem that a tree can handle, but in a way that is more complicated than we would like. Let's create a set of points on a plane (2 features); each point will belong to one of the classes (+1, red, or -1, yellow). If you look at this as a classification problem, everything seems very simple: the classes are separated by a straight line.
End of explanation
"""

tree = DecisionTreeClassifier(random_state=17).fit(X, y)

xx, yy = get_grid(X, eps=.05)
predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, predicted, cmap='autumn')
plt.scatter(X[:, 0], X[:, 1], c=y, s=100,
            cmap='autumn', edgecolors='black', linewidth=1.5)
plt.title('Easy task. Decision tree complexifies everything');

"""
Explanation: However, the decision tree builds an overly complicated boundary and turns out to be deep itself. Also, imagine how poorly the tree will generalize to the space outside the $30 \times 30$ square that frames the training set.
End of explanation
"""

export_graphviz(tree, feature_names=['x1', 'x2'],
                out_file='deep_toy_tree.dot', filled=True)
!dot -Tpng deep_toy_tree.dot -o deep_toy_tree.png

"""
Explanation: Such a complicated construction, even though the solution (a good separating surface) is just the straight line $x_1 = x_2$.
End of explanation
"""

! jupyter nbconvert Desicion_trees_practise.ipynb --to html

"""
Explanation: <img src='deep_toy_tree.png'>
End of explanation
"""
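The "interpolation only" limitation listed in the cons above can be made concrete with a tiny sketch. The helper below is a hand-rolled depth-1 regression stump (a hypothetical illustration, not the sklearn implementation): outside the training range the prediction can only repeat the constant of the nearest leaf, no matter how far the query point is.

```python
import numpy as np

def stump_predict(x, threshold, left_value, right_value):
    """Depth-1 regression stump: a constant prediction on each side of the threshold."""
    x = np.asarray(x, dtype=float)
    return np.where(x < threshold, left_value, right_value)

# Toy training data: y = x**3 on [-2, 2]
X = np.linspace(-2, 2, 7)
y = X ** 3

# "Fit" the single split at t = 0 by averaging each side, as a leaf would
left_value = y[X < 0].mean()
right_value = y[X >= 0].mean()

# Inside the training range the stump gives a piecewise-constant fit ...
inside = stump_predict([-1.0, 1.0], 0.0, left_value, right_value)
# ... but far outside the range it just repeats the edge leaf values
outside = stump_predict([100.0, -100.0], 0.0, left_value, right_value)
```

The same effect holds for deep trees, forests, and boosted trees: queries beyond the bounding box of the training data all land in an edge leaf.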
Diyago/Machine-Learning-scripts
classification/ods_session3_decision_trees.ipynb
apache-2.0
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_graphviz

"""
Explanation: <center> Decision trees for classification and regression
In this assignment we will work out how a decision tree operates in a regression problem, and we will also build (and tune) classification decision trees for the problem of predicting cardiovascular disease. Fill in the code in the cells (where it says "Your code here") and answer the questions in the web form.
End of explanation
"""

X = np.linspace(-2, 2, 7)
y = X ** 3

plt.scatter(X, y)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');

"""
Explanation: 1. A simple regression example with a decision tree
Consider the following one-dimensional regression problem. Informally, we need to build a function $a(x)$ approximating the target dependence $y = f(x)$ in terms of mean squared error: $\min \sum_i {(a(x_i) - f(x_i))}^2$. We will study this problem in detail next time (the 4th article of the course); for now let's talk about how to solve it with a decision tree. First, read the short section "Decision tree in a regression problem" of the 3rd article of the course.
End of explanation
"""

X = np.linspace(-2, 2, 7)
y = X ** 3
plt.scatter(X, y)
plt.scatter(np.linspace(-2, 2, 7), np.linspace(-2, 2, 7) * 0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');

"""
Explanation: Let's take a few steps in building the decision tree. Out of symmetry considerations, we choose the split thresholds to be 0, 1.5 and -1.5, respectively. Recall that in a regression problem a leaf node returns the mean value of the answer over all training objects that fall into that leaf. So, let's begin. A tree of depth 0 consists of a single root, which contains the entire training set. What will the predictions of this tree look like for $x \in [-2, 2]$? Draw the corresponding plot. No sklearn here: we work it out with pen, paper and Python, if needed.
End of explanation
"""

xx = np.linspace(-2, 2, 200)
predictions = [np.mean(y[X < 0]) if x < 0 else np.mean(y[X >= 0]) for x in xx]

X = np.linspace(-2, 2, 7)
y = X ** 3
plt.scatter(X, y);
plt.plot(xx, predictions, c='red');

"""
Explanation: Let's make the first split of the sample by the predicate $[x < 0]$. We get a tree of depth 1 with two leaves. Draw an analogous plot of this tree's predictions.
End of explanation
"""

def regression_var_criterion(X, y, t):
    X_left, X_right = X[X < t], X[X >= t]
    y_left, y_right = y[X < t], y[X >= t]
    return np.var(y) - X_left.shape[0] / X.shape[0] * np.var(y_left) \
                     - X_right.shape[0] / X.shape[0] * np.var(y_right)

thresholds = np.linspace(-1.9, 1.9, 100)
crit_by_thres = [regression_var_criterion(X, y, thres) for thres in thresholds]

plt.plot(thresholds, crit_by_thres)
plt.xlabel('threshold')
plt.ylabel('Regression criterion');

X = np.linspace(-2, -1, 3)
y = X ** 3
print(np.sum((y - 3.5555) ** 2) / len(y))

"""
Explanation: In the decision tree building algorithm, the feature and the threshold value by which the sample is split are chosen according to some criterion. For regression the variance criterion is usually used:
$$Q(X, j, t) = D(X) - \dfrac{|X_l|}{|X|} D(X_l) - \dfrac{|X_r|}{|X|} D(X_r),$$
where $X$ is the sample in the current node, $X_l$ and $X_r$ are the split of the sample $X$ into two parts by the predicate $[x_j < t]$ (that is, by the $j$-th feature and threshold $t$), $|X|$, $|X_l|$, $|X_r|$ are the sizes of the corresponding samples, and $D(X)$ is the variance of the answers on the sample $X$:
$$D(X) = \dfrac{1}{|X|} \sum_{x_i \in X}\Big(y_i - \dfrac{1}{|X|}\sum_{x_j \in X}y_j\Big)^2,$$
where $y_i = y(x_i)$ is the answer on object $x_i$. At each node split, the feature $j$ and the threshold value $t$ maximizing the value of the functional $Q(X, j, t)$ are chosen. In our case there is only one feature, so $Q$ depends only on the threshold value $t$ (and on the answers of the sample in the given node). Plot the function $Q(X, t)$ at the root as a function of the threshold value $t$ on the interval $[-1.9, 1.9]$.
End of explanation
"""

t = np.array([0, 1.5, -1.5])

X = np.linspace(-2, 2, 7)
y = X ** 3

def tree_fit(X, y, t):
    Xl = np.array([])
    Xll = np.array([])
    Xlr = np.array([])
    Xr = np.array([])
    Xrl = np.array([])
    Xrr = np.array([])
    yl = np.array([])
    yll = np.array([])
    ylr = np.array([])
    yr = np.array([])
    yrl = np.array([])
    yrr = np.array([])
    # First split at t[0], then split the left child at t[2] and the right at t[1]
    for i in range(len(X)):
        if X[i] < t[0]:
            Xl = np.append(Xl, X[i])
            yl = np.append(yl, y[i])
        else:
            Xr = np.append(Xr, X[i])
            yr = np.append(yr, y[i])
    for i in range(len(Xl)):
        if Xl[i] < t[2]:
            Xll = np.append(Xll, Xl[i])
            yll = np.append(yll, yl[i])
        else:
            Xlr = np.append(Xlr, Xl[i])
            ylr = np.append(ylr, yl[i])
    for i in range(len(Xr)):
        if Xr[i] < t[1]:
            Xrl = np.append(Xrl, Xr[i])
            yrl = np.append(yrl, yr[i])
        else:
            Xrr = np.append(Xrr, Xr[i])
            yrr = np.append(yrr, yr[i])
    return yll, ylr, yrl, yrr, t

def tree_predict(X_):
    ll, lr, rl, rr, t_ = tree_fit(X, y, t)
    y_ = np.array([])
    for i in range(len(X_)):
        result = np.array([])
        if X_[i] < t_[0]:
            if X_[i] < t_[2]:
                result = np.append(result, ll)
            else:
                result = np.append(result, lr)
        else:
            if X_[i] < t_[1]:
                result = np.append(result, rl)
            else:
                result = np.append(result, rr)
        result = np.sum(result) / len(result)
        y_ = np.append(y_, result)
    return y_

X_test = np.arange(-2, 2, 0.01)
y_test = tree_predict(X_test)
plt.plot(X_test, y_test)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');

"""
Explanation: <font color='red'>Question 1.</font> Is the threshold value $t = 0$ that we chose optimal from the point of view of the variance criterion?
- Yes
- No

Now let's split each of the leaf nodes: the left one (corresponding to the branch $x < 0$) by the predicate $[x < -1.5]$, and the right one (corresponding to the branch $x \geqslant 0$) by the predicate $[x < 1.5]$. We get a tree of depth 2 with 7 nodes and 4 leaves. Draw the plot of this tree's predictions for $x \in [-2, 2]$.
End of explanation
"""

df = pd.read_csv('mlbootcamp5_train.csv', index_col='id', sep=';')
df.head()

"""
Explanation: <font color='red'>Question 2.</font> How many segments does the plot depicting the built tree's predictions on the interval [-2, 2] consist of (both horizontal and vertical straight lines must be counted)?
- 5
- 6
- 7
- 8

2. Building a decision tree to predict cardiovascular disease
Let's read the familiar cardiovascular-disease dataset into a DataFrame.
End of explanation
"""

import math

df = pd.concat([df,
                pd.get_dummies(df['cholesterol'], prefix="cholesterol"),
                pd.get_dummies(df['gluc'], prefix="gluc")], axis=1)
df.drop(['cholesterol', 'gluc'], axis=1, inplace=True)
df['age_years'] = (df.age / 365.25).astype('int')
df.head()

"""
Explanation: Make some small feature transformations: build an "age in years" feature (full years), and also build 3 binary features each from cholesterol and gluc, marking where they equal 1, 2 or 3, respectively. This technique is called dummy encoding or One Hot Encoding (OHE); the most convenient tool here is pandas.get_dummies. The original cholesterol and gluc features must not be used after encoding.
End of explanation
"""

y = df.cardio
df.drop(['cardio'], axis=1, inplace=True)
X = df

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3,
                                                      random_state=17)

"""
Explanation: Split the sample into training and hold-out parts in the proportion 7/3. Use the sklearn.model_selection.train_test_split method for this, fixing its random_state=17.
End of explanation
"""

import pydot
import os
os.environ["PATH"] += os.pathsep + r'C:\Users\i.ashrapov\AppData\Local\Continuum\Anaconda3\pkgs\graphviz-2.38.0-4\Library\bin\graphviz'

tree = DecisionTreeClassifier(max_depth=3, random_state=17)
tree.fit(X_train, y_train)

dot_data = export_graphviz(tree, out_file="hw2.dot", feature_names=X_train.columns)
(graph,) = pydot.graph_from_dot_file("hw2.dot")
graph

"""
Explanation: Train a decision tree on the (X_train, y_train) sample with the maximum depth limited to 3. Fix the tree's random_state=17. Visualize the tree with sklearn.tree.export_graphviz, dot and pydot. An example is given in the article under the spoiler "Code for drawing a tree". Write the file names without quotes so that it works in a Jupyter notebook. Note that commands in a Jupyter notebook starting with an exclamation mark are terminal commands (the ones we usually run in a terminal/command line).
End of explanation
"""

y_predict = tree.predict(X_valid)
accuracy_score(y_valid, y_predict)

"""
Explanation: <font color='red'>Question 3.</font> Which 3 features are used for prediction in the built decision tree? (that is, these three features "can be found in the tree")
- weight, height, gluc=3
- smoke, age, gluc=3
- age, weight, chol=3
- age, ap_hi, chol=3

Make predictions for the hold-out sample (X_valid, y_valid) with the trained tree. Compute the share of correct answers (accuracy).
End of explanation
"""

tree_params = {'max_depth': list(range(2, 11))}

tree_grid = GridSearchCV(tree, tree_params, cv=5, n_jobs=-1, verbose=True)
tree_grid.fit(X_train, y_train)

"""
Explanation: Now tune the tree depth with cross-validation over the (X_train, y_train) sample to improve the model's quality. Use GridSearchCV with 5-fold cross-validation. Fix the tree's random_state=17. Try max_depth values from 2 to 10.
End of explanation
"""

scores = [x for x in tree_grid.cv_results_['mean_test_score']]
plt.plot(tree_params['max_depth'], scores)
plt.xlabel('max_depth')
plt.ylabel('Mean score')

"""
Explanation: Plot how the mean share of correct answers on cross-validation changes with the value of max_depth.
End of explanation
"""

print(tree_grid.best_params_)

y_predict = tree_grid.predict(X_valid)
accuracy_score(y_valid, y_predict)

"""
Explanation: Print the best value of max_depth, i.e. the one at which the mean value of the quality metric on cross-validation is maximal. Also compute what the share of correct answers on the hold-out sample is now. All of this can be done with the trained instance of the GridSearchCV class.
End of explanation
"""

df.head()

sub_df = pd.DataFrame(df.smoke.copy())
sub_df['male'] = df.gender - 1
sub_df['age_45_50'] = ((df.age_years >= 45) & (df.age_years < 50)).astype('int')
sub_df['age_50_55'] = ((df.age_years >= 50) & (df.age_years < 55)).astype('int')
sub_df['age_55_60'] = ((df.age_years >= 55) & (df.age_years < 60)).astype('int')
sub_df['age_60_65'] = ((df.age_years >= 60) & (df.age_years < 65)).astype('int')
sub_df['ap_hi_120_140'] = ((df.ap_hi >= 120) & (df.ap_hi < 140)).astype('int')
sub_df['ap_hi_140_160'] = ((df.ap_hi >= 140) & (df.ap_hi < 160)).astype('int')
sub_df['ap_hi_160_180'] = ((df.ap_hi >= 160) & (df.ap_hi < 180)).astype('int')
# The get_dummies columns are already 0/1 indicators, so compare each of them with 1
sub_df['chol=1'] = (df.cholesterol_1 == 1).astype('int')
sub_df['chol=2'] = (df.cholesterol_2 == 1).astype('int')
sub_df['chol=3'] = (df.cholesterol_3 == 1).astype('int')

tree = DecisionTreeClassifier(max_depth=3, random_state=17).fit(sub_df, y)

dot_data = export_graphviz(tree, out_file="hw3.dot", feature_names=sub_df.columns)
(graph,) = pydot.graph_from_dot_file("hw3.dot")
graph.write_png("hw3.png")

"""
Explanation: <font color='red'>Question 4.</font> Is there a peak of accuracy on the validation curve over the maximum tree depth when trying max_depth from 2 to 10? Did tuning the tree depth improve classification quality (accuracy) by more than 1% on the hold-out sample (look at the expression (acc2 - acc1) / acc1 * 100%, where acc1 and acc2 are the shares of correct answers on the hold-out sample before and after tuning max_depth, respectively)?
- yes, yes
- yes, no
- no, yes
- no, no

Let's turn again (as in homework 1) to the picture demonstrating the SCORE scale for estimating the risk of death from cardiovascular disease within the next 10 years.
<img src='../../img/SCORE2007.png' width=70%>
Create binary features roughly corresponding to this picture:
- $age \in [45,50), \ldots age \in [60,65) $ (4 features)
- upper (systolic) blood pressure: $ap_hi \in [120,140), ap_hi \in [140,160), ap_hi \in [160,180),$ (3 features)

If the value of age or blood pressure does not fall into any of the intervals, all the binary features equal zero. Next we will build a decision tree with these features, as well as the smoke, cholesterol and gender features. From the cholesterol feature we need to make 3 binary ones corresponding to its unique values (cholesterol=1, cholesterol=2 and cholesterol=3); this technique is called dummy encoding or One Hot Encoding (OHE). The gender feature must be recoded: map the values 1 and 2 to 0 and 1. It is better to rename the feature to male (0 for female, 1 for male). In the general case value encoding is done by sklearn.preprocessing.LabelEncoder, but here it is easy to do without it.

So, the decision tree is built on 12 binary features (we do not take the original features). Build a decision tree with the maximum depth limited to 3 and train it on the whole original training set. Use DecisionTreeClassifier, fixing random_state=17 just in case, and leave the other arguments (besides max_depth and random_state) at their defaults.
<font color='red'>Question 5.</font> Which of the 12 listed binary features turned out to be the most important for detecting CVD, i.e. ended up in the root node of the built decision tree?
- Upper blood pressure from 160 to 180 (mmHg)
- Male / female
- Upper blood pressure from 140 to 160 (mmHg)
- Age from 50 to 55 (years)
- Smokes / does not smoke
- Age from 60 to 65 (years)
End of explanation
"""
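The dummy-encoding step above (pandas.get_dummies producing one 0/1 column per category) can be sketched without pandas at all. The helper name `one_hot` below is hypothetical; it is a minimal numpy stand-in for what get_dummies does to the cholesterol and gluc features.

```python
import numpy as np

def one_hot(values, categories):
    """Toy one-hot encoder: one binary indicator column per category."""
    values = np.asarray(values)
    return np.stack([(values == c).astype(int) for c in categories], axis=1)

cholesterol = np.array([1, 2, 3, 1, 3])
encoded = one_hot(cholesterol, categories=[1, 2, 3])
# Each row has exactly one 1; column k marks cholesterol == k + 1
```

Note that the produced columns are already 0/1 indicators, which is exactly why the `chol=k` features in the homework must be built by comparing the dummy columns with 1, not with the original category value.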
jlema/Udacity-Self-Driving-Car-Engineer-Nanodegree
Term 1- Computer Vision and Deep Learning/Project 1 - Finding Lane Lines in a Video Stream/P1.ipynb
apache-2.0
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline

#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)  #call as plt.imshow(gray, cmap='gray') to show a grayscaled image

"""
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
 <img src="line-segments-example.jpg" width="380" alt="Combined Image" />
 <figcaption>
 <p></p>
 <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
 </figcaption>
</figure>
<p></p>
<figure>
 <img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
 <figcaption>
 <p></p>
 <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
 </figcaption>
</figure>
End of explanation
"""

import math

def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale you should call
    plt.imshow(gray, cmap='gray')"""
    # mpimg.imread returns RGB images, so convert from RGB
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    #defining a blank mask to start with
    mask = np.zeros_like(img)

    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255

    #filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)

    #returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=[255, 0, 0], thickness=6):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    # Setup variables
    from math import floor
    left_line_count = 0
    right_line_count = 0
    left_average_slope = 0
    right_average_slope = 0
    left_average_x = 0
    left_average_y = 0
    right_average_x = 0
    right_average_y = 0
    # y extents start at the bottom of the image (its height)
    left_min_y = right_min_y = left_max_y = right_max_y = img.shape[0]
    for line in lines:
        for x1, y1, x2, y2 in line:
            # Calculate each line slope
            slope = (y2 - y1) / (x2 - x1)
            if (not math.isinf(slope) and slope != 0):
                # Classify lines by slope (lines at the right have positive slope)
                # Calculate total slope, x and y to then find average
                # Calculate min y
                if (slope >= 0.5 and slope <= 0.85):
                    right_line_count += 1
                    right_average_slope += slope
                    right_average_x += x1 + x2
                    right_average_y += y1 + y2
                    if right_min_y > y1:
                        right_min_y = y1
                    if right_min_y > y2:
                        right_min_y = y2
                elif (slope <= -0.5 and slope >= -0.85):
                    left_line_count += 1
                    left_average_slope += slope
                    left_average_x += x1 + x2
                    left_average_y += y1 + y2
                    if left_min_y > y1:
                        left_min_y = y1
                    if left_min_y > y2:
                        left_min_y = y2
    if ((left_line_count != 0) and (right_line_count != 0)):
        # Find average slope for each side
        left_average_slope = left_average_slope / left_line_count
        right_average_slope = right_average_slope / right_line_count
        # Find average x and y for each side
        left_average_x = left_average_x / (left_line_count * 2)
        left_average_y = left_average_y / (left_line_count * 2)
        right_average_x = right_average_x / (right_line_count * 2)
        right_average_y = right_average_y / (right_line_count * 2)
        # Find y intercept for each side
        # b = y - mx
        left_y_intercept = left_average_y - left_average_slope * left_average_x
        right_y_intercept = right_average_y - right_average_slope * right_average_x
        # Find max x values for each side
        # x = ( y - b ) / m
        left_max_x = floor((left_max_y - left_y_intercept) / left_average_slope)
        right_max_x = floor((right_max_y - right_y_intercept) / right_average_slope)
        # Find min x values for each side
        left_min_x = floor((left_min_y - left_y_intercept) / left_average_slope)
        right_min_x = floor((right_min_y - right_y_intercept) / right_average_slope)
        # Draw left line
        cv2.line(img, (left_min_x, left_min_y), (left_max_x, left_max_y), color, thickness)
        # Draw right line
        cv2.line(img, (right_min_x, right_min_y), (right_max_x, right_max_y), color, thickness)

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img

# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of hough_lines(): an image with lines drawn on it,
    a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)

"""
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""

import os

# Remove any previously processed images
filelist = [f for f in os.listdir('test_images/') if f.find('processed') != -1]
for f in filelist:
    print('Removing image:', 'test_images/' + f)
    os.remove('test_images/' + f)

test_images = os.listdir('test_images/')

for fname in test_images:
    # Get image path and name details
    basedir, basename = os.path.split(fname)
    root, ext = os.path.splitext(basename)
    # Read in an image
    image = mpimg.imread('test_images/' + basename)
    imshape = image.shape
    # Print out some stats
    print('This image is:', type(image), 'with dimensions:', image.shape, 'and name', basename)
    # Make a grayscale copy of the image for processing
    gray = grayscale(image)
    # Define kernel size for Gaussian smoothing / blurring
    kernel_size = 5
    blur_gray = gaussian_blur(gray, kernel_size)
    # Define Canny transform parameters
    low_threshold = 50
    high_threshold = 150
    edges = canny(blur_gray, low_threshold, high_threshold)
    vertices = np.array([[(100, imshape[0]), (450, 325), (550, 325),
                          (imshape[1] - 100, imshape[0])]], dtype=np.int32)
    masked_edges = region_of_interest(edges, vertices)
    # Define the Hough transform parameters
    # Make a blank the same size as our image to draw on
    rho = 2  # distance resolution in pixels of the Hough grid
    theta = np.pi / 180  # angular resolution in radians of the Hough grid
    threshold = 45  # minimum number of votes (intersections in Hough grid cell)
    min_line_len = 20  # minimum number of pixels making up a line
    max_line_gap = 60  # maximum gap in pixels between connectable line segments
    lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
    lines_edges = weighted_img(lines, image, α=0.8, β=1., λ=0.)
    print('Saving image:', root + '_processed.jpg')
    mpimg.imsave('test_images/' + root + '_processed.jpg', lines_edges)

"""
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images". You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""

# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the final output (image with lines drawn on lanes)
    imshape = image.shape
    gray = grayscale(image)
    # Define kernel size for Gaussian smoothing / blurring
    kernel_size = 5
    blur_gray = gaussian_blur(gray, kernel_size)
    # Define Canny transform parameters
    low_threshold = 50
    high_threshold = 150
    edges = canny(blur_gray, low_threshold, high_threshold)
    vertices = np.array([[(100, imshape[0]), (450, 325), (550, 325),
                          (imshape[1] - 100, imshape[0])]], dtype=np.int32)
    masked_edges = region_of_interest(edges, vertices)
    # Define the Hough transform parameters
    rho = 2  # distance resolution in pixels of the Hough grid
    theta = np.pi / 180  # angular resolution in radians of the Hough grid
    threshold = 45  # minimum number of votes (intersections in Hough grid cell)
    min_line_len = 20  # minimum number of pixels making up a line
    max_line_gap = 60  # maximum gap in pixels between connectable line segments
    lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
    result = weighted_img(lines, image, α=0.8, β=1., λ=0.)
    return result

"""
Explanation: Run your solution on all test_images and make copies into the test_images directory.
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos: solidWhiteRight.mp4 solidYellowLeft.mp4 End of explanation """ white_output = 'white.mp4' clip1 = VideoFileClip("solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) """ Explanation: Let's try the one with the solid white lane on the right first ... End of explanation """ HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) """ Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. End of explanation """ yellow_output = 'yellow.mp4' clip2 = VideoFileClip('solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) """ Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline. Now for the one with the solid yellow lane on the left. This one's more tricky! End of explanation """ challenge_output = 'extra.mp4' clip2 = VideoFileClip('challenge.mp4') challenge_clip = clip2.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) """ Explanation: Reflections Congratulations on finding the lane lines! 
As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail? Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below! The current algorithm is likely to fail with: 1. Curved lane lines 2. Different lighting conditions on the road 3. Vertical lane lines (infinite slope) 4. Lane lines that slope in the same direction I can imagine making my algorithm better or more robust by 1. Instead of interpolating into a line, interpolate into a curve, maybe a bezier with several control points. 2. Analyze the contrast/brightness in the area of interest and even (average?) it out so darker areas become lighter. 3. Treat vertical lines as a separate scenario and either ignore them or assign some default values 4. Separate the left and the right side of the image and analyze the lines on each side independently Submission If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review. Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! End of explanation """
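Improvement (4) above, analyzing the left and right sides independently, can be sketched by splitting Hough segments by slope sign and least-squares fitting one line per side. This is a standalone illustration with made-up segment endpoints, not the project's actual `draw_lines`; a real version would also clip to the image bounds and average over video frames.

```python
import numpy as np

# Hypothetical Hough segments as (x1, y1, x2, y2); the image origin is top-left,
# so left-lane segments have negative pixel slope and right-lane segments positive.
segments = [(120, 540, 300, 400), (140, 530, 310, 410),   # left lane
            (600, 400, 850, 540), (620, 410, 840, 530)]   # right lane

def slope(seg):
    x1, y1, x2, y2 = seg
    return (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")

def fit_lane(segs):
    """Least-squares fit x = m*y + b, so we can evaluate x at chosen y values."""
    ys = np.array([y for s in segs for y in (s[1], s[3])], dtype=float)
    xs = np.array([x for s in segs for x in (s[0], s[2])], dtype=float)
    m, b = np.polyfit(ys, xs, 1)
    return m, b

left = [s for s in segments if slope(s) < 0]
right = [s for s in segments if slope(s) > 0]

# Extend each fitted line from the bottom of the frame up to a chosen horizon
y_bottom, y_horizon = 540, 330
for name, group in (("left", left), ("right", right)):
    m, b = fit_lane(group)
    print(f"{name}: ({m * y_bottom + b:.0f}, {y_bottom}) -> ({m * y_horizon + b:.0f}, {y_horizon})")
```

Fitting x as a function of y avoids the infinite-slope problem for near-vertical lines (failure mode 3 above) and gives endpoints that can be passed straight to `cv2.line`.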
ANTsX/ANTsPy
tutorials/motionCorrectionExample.ipynb
apache-2.0
import ants
import numpy as np
"""
Explanation: Motion correction in ANTsPy
We rely on ants.registration to do motion correction, which provides the user with full access to parameters and outputs. The key steps, then, are to:
* split the N dimensional (e.g. N=4) image into a list of N-1 dimensional images
* run registration to a selected fixed image for each image in the list
* merge the results back into an N dimensional image.
End of explanation
"""
image = ants.image_read(ants.get_ants_data('r16'))
image2 = ants.image_read(ants.get_ants_data('r64'))
ants.set_spacing( image, (2,2) )
ants.set_spacing( image2, (2,2) )
imageTar = ants.make_image( ( *image2.shape, 2 ) )
ants.set_spacing( imageTar, (2,2,2) )
fmri = ants.list_to_ndimage( imageTar, [image,image2] )
"""
Explanation: We illustrate the steps below by building a 3D "functional" image and then "motion correcting" it, just as we would with functional MRI or any other dynamic modality.
End of explanation
"""
ants.set_direction( fmri, np.eye( 3 ) * 2 )
images_unmerged = ants.ndimage_to_list( fmri )
motion_corrected = list()
for i in range( len( images_unmerged ) ):
    areg = ants.registration( images_unmerged[0], images_unmerged[i], "SyN" )
    motion_corrected.append( areg[ 'warpedmovout' ] )
"""
Explanation: Now we motion correct this image, using just the first slice as the target.
End of explanation
"""
motCorr = ants.list_to_ndimage( fmri, motion_corrected )
# ants.image_write( motCorr, '/tmp/temp.nii.gz' )
"""
Explanation: Merge the resulting list back into a 3D image.
End of explanation
"""
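The register-each-frame-to-a-reference pattern used above can be shown in miniature with a brute-force integer-translation search in plain NumPy. This is only a toy rigid example for intuition; ants.registration with "SyN" estimates a deformable transform, and the frame and shift below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))                       # reference "frame"
true_shift = (3, -2)
moving = np.roll(fixed, true_shift, axis=(0, 1))   # "motion-corrupted" frame

# Search integer shifts, keeping the one that best matches the reference
best, best_score = None, -np.inf
for dy in range(-5, 6):
    for dx in range(-5, 6):
        candidate = np.roll(moving, (dy, dx), axis=(0, 1))
        score = -np.sum((candidate - fixed) ** 2)  # negative sum of squared differences
        if score > best_score:
            best, best_score = (dy, dx), score

print("estimated correction:", best)  # prints: estimated correction: (-3, 2)
```

The estimated correction exactly undoes the simulated shift; real motion correction works the same way conceptually, but optimizes a richer transform against a smoother similarity metric.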
certik/chess
examples_manual/Convergence3.ipynb
mit
%pylab inline
! grep "multipv 1" log4.txt | grep -v lowerbound | grep -v upperbound > log4_g.txt

def parse_info(l):
    D = {}
    k = l.split()
    i = 0
    assert k[i] == "info"
    i += 1
    while i < len(k):
        if k[i] == "depth":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "seldepth":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "multipv":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "score":
            if k[i+1] == "cp":
                D["score_p"] = int(k[i+2]) / 100.  # score in pawns
            i += 3  # "score cp N" and "score mate N" are both three tokens
        elif k[i] == "nodes":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "nps":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "hashfull":
            D[k[i]] = int(k[i+1]) / 1000.  # between 0 and 1
            i += 2
        elif k[i] == "tbhits":
            D[k[i]] = int(k[i+1])
            i += 2
        elif k[i] == "time":
            D[k[i]] = int(k[i+1]) / 1000.  # elapsed time in [s]
            i += 2
        elif k[i] == "pv":
            D[k[i]] = k[i+1:]
            return D
        else:
            raise Exception("Unknown keyword")

# Convert to an array of lists
D = []
for l in open("log4_g.txt").readlines():
    D.append(parse_info(l))

# Convert to a list of arrays
data = {}
for key in D[-1].keys():
    d = []
    for x in D:
        if key in x:
            d.append(x[key])
        else:
            d.append(-1)
    if key != "pv":
        d = array(d)
    data[key] = d
"""
Explanation: Convergence
Description of the UCI protocol: https://ucichessengine.wordpress.com/2011/03/16/description-of-uci-protocol/
Let us parse the logs first:
End of explanation
"""
title("Number of nodes searched in time")
plot(data["time"] / 60., data["nodes"], "o")
xlabel("Time [min]")
ylabel("Nodes")
grid()
show()
"""
Explanation: The Speed of Search
The number of nodes searched depends linearly on time:
End of explanation
"""
title("Positions per second in time")
plot(data["time"] / 60., data["nps"], "o")
xlabel("Time [min]")
ylabel("Positions / s")
grid()
show()
"""
Explanation: So nodes per second is roughly constant:
End of explanation
"""
title("Hashtable usage")
hashfull = data["hashfull"]
hashfull[hashfull == -1] = 0
plot(data["time"] / 60., hashfull * 100, "o")
xlabel("Time [min]")
ylabel("Hashtable filled [%]")
grid()
show()
"""
Explanation: The
hashtable usage is at full capacity:
End of explanation
"""
title("Number of nodes vs. depth")
semilogy(data["depth"], data["nodes"], "o")
x = data["depth"]
y = exp(x/2.2)
y = y / y[-1] * data["nodes"][-1]
semilogy(x, y, "-")
xlabel("Depth [half moves]")
ylabel("Nodes")
grid()
show()

title("Time vs. depth")
semilogy(data["depth"], data["time"]/60., "o")
xlabel("Depth [half moves]")
ylabel("Time [min]")
grid()
show()
"""
Explanation: Number of nodes needed for the given depth grows exponentially, except for moves that are forced, which require very few nodes to search (those show as a horizontal plateau):
End of explanation
"""
title("Score")
plot(data["depth"], data["score_p"], "o")
xlabel("Depth [half moves]")
ylabel("Score [pawns]")
grid()
show()
"""
Explanation: Convergence wrt. Depth
End of explanation
"""
for i in range(len(data["depth"])):
    print("%2i %s" % (data["depth"][i], " ".join(data["pv"][i])[:100]))
"""
Explanation: Convergence of the variations:
End of explanation
"""
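The tokenizing logic in parse_info can be illustrated on a single engine line. The line below is a made-up example in the shape Stockfish emits; this compact standalone sketch keeps the raw integer fields instead of the notebook's unit conversions for time and hashfull.

```python
# Minimal standalone sketch: turn one UCI "info" line into a dict.
line = ("info depth 20 seldepth 28 multipv 1 score cp 35 "
        "nodes 1200000 nps 600000 hashfull 250 time 2000 pv e2e4 e7e5")

def parse_uci_info(line):
    tokens = line.split()
    info, i = {}, 1  # skip the leading "info" token
    int_keys = {"depth", "seldepth", "multipv", "nodes", "nps", "hashfull", "time"}
    while i < len(tokens):
        key = tokens[i]
        if key in int_keys:
            info[key] = int(tokens[i + 1]); i += 2
        elif key == "score" and tokens[i + 1] == "cp":
            info["score_p"] = int(tokens[i + 2]) / 100.0; i += 3  # centipawns -> pawns
        elif key == "pv":
            info["pv"] = tokens[i + 1:]  # principal variation runs to end of line
            break
        else:
            i += 1  # skip tokens this sketch does not model
    return info

print(parse_uci_info(line))
```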
mne-tools/mne-tools.github.io
0.23/_downloads/51cca4c9f4bd40623cb6bfa890e2eb4b/20_erp_stats.ipynb
bsd-3-clause
import numpy as np import matplotlib.pyplot as plt from scipy.stats import ttest_ind import mne from mne.channels import find_ch_adjacency, make_1020_channel_selections from mne.stats import spatio_temporal_cluster_test np.random.seed(0) # Load the data path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif' epochs = mne.read_epochs(path) name = "NumberOfLetters" # Split up the data by the median length in letters via the attached metadata median_value = str(epochs.metadata[name].median()) long_words = epochs[name + " > " + median_value] short_words = epochs[name + " < " + median_value] """ Explanation: Visualising statistical significance thresholds on EEG data MNE-Python provides a range of tools for statistical hypothesis testing and the visualisation of the results. Here, we show a few options for exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based permutation approaches (here with Threshold-Free Cluster Enhancement); and how to visualise the results. The underlying data comes from :footcite:DufauEtAl2015; we contrast long vs. short words. TFCE is described in :footcite:SmithNichols2009. 
End of explanation """ time_windows = ((.2, .25), (.35, .45)) elecs = ["Fz", "Cz", "Pz"] index = ['condition', 'epoch', 'time'] # display the EEG data in Pandas format (first 5 rows) print(epochs.to_data_frame(index=index)[elecs].head()) report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}" print("\nTargeted statistical test results:") for (tmin, tmax) in time_windows: long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index) short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index) for elec in elecs: # extract data A = long_df[elec].groupby("condition").mean() B = short_df[elec].groupby("condition").mean() # conduct t test t, p = ttest_ind(A, B) # display results format_dict = dict(elec=elec, tmin=tmin, tmax=tmax, df=len(epochs.events) - 2, t_val=t, p=p) print(report.format(**format_dict)) """ Explanation: If we have a specific point in space and time we wish to test, it can be convenient to convert the data into Pandas Dataframe format. In this case, the :class:mne.Epochs object has a convenient :meth:mne.Epochs.to_data_frame method, which returns a dataframe. This dataframe can then be queried for specific time windows and sensors. The extracted data can be submitted to standard statistical tests. Here, we conduct t-tests on the difference between long and short words. End of explanation """ # Calculate adjacency matrix between sensors from their locations adjacency, _ = find_ch_adjacency(epochs.info, "eeg") # Extract data: transpose because the cluster test requires channels to be last # In this case, inference is done over items. In the same manner, we could # also conduct the test over, e.g., subjects. 
X = [long_words.get_data().transpose(0, 2, 1),
     short_words.get_data().transpose(0, 2, 1)]
tfce = dict(start=.2, step=.2)

# Calculate statistical thresholds
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
    X, tfce, adjacency=adjacency,
    n_permutations=100)  # a more standard number would be 1000+

significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
"""
Explanation: Absent specific hypotheses, we can also conduct an exploratory mass-univariate analysis at all sensors and time points. This requires correcting for multiple tests. MNE offers various methods for this; amongst them, cluster-based permutation methods allow deriving power from the spatio-temporal correlation structure of the data. Here, we use TFCE.
End of explanation
"""
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), short_words.average()],
                            weights=[1, -1])  # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
                  topomap_args=time_unit)  # show difference wave

# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")

# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
                  mask=significant_points, show_names="all", titles=None,
                  **time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
             label="µV")
plt.show()
"""
Explanation: The results of these mass univariate analyses can be visualised by plotting :class:mne.Evoked objects as images (via :meth:mne.Evoked.plot_image) and masking points for significance. Here, we group channels by Regions of Interest to facilitate localising effects on the head.
End of explanation
"""
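The label-shuffling idea behind spatio_temporal_cluster_test can be sketched in miniature with plain NumPy: a toy two-sample permutation test at a single hypothetical sensor/time point, with none of MNE's spatio-temporal clustering or TFCE machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.5, 1.0, size=40)  # toy "long words" amplitudes
b = rng.normal(0.0, 1.0, size=40)  # toy "short words" amplitudes

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

# Build a null distribution by repeatedly shuffling the condition labels
n_perm = 2000
null = np.empty(n_perm)
for k in range(n_perm):
    rng.shuffle(pooled)
    null[k] = pooled[:40].mean() - pooled[40:].mean()

# Two-sided permutation p-value (with the +1 small-sample correction)
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
print(f"observed diff={observed:.3f}, permutation p={p:.4f}")
```

Cluster-level tests apply the same shuffling, but compute the statistic over clusters of adjacent sensors and time points, which is what buys sensitivity to spatially and temporally extended effects.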
kaleoyster/ProjectNBI
nbi-utilities/data_gen/decisionFlowChart/Untitled.ipynb
gpl-2.0
category = list(Counter(df['category']).keys())
values = list(Counter(df['category']).values())
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()
"""
Explanation: Number of bridges with respect to baseline difference score
End of explanation
"""
category = list(Counter(df['intervention']).keys())
values = list(Counter(df['intervention']).values())
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()
"""
Explanation: Number of bridges with respect to interventions identified by the NDOT flow chart
End of explanation
"""
category = list(Counter(df['rfIntervention']).keys())
values = list(Counter(df['rfIntervention']).values())
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()

Counter(df[df['rfIntervention'] == 'yes']['category'])
df[df['rfIntervention'] == 'yes']
# Counter(df[df['rfIntervention'] == 'no']['category'])
df[df['rfIntervention'] == 'no'].head(10)
"""
Explanation: Number of bridges with respect to 'Yes' or 'No' intervention identified by random forest
End of explanation
"""
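The Counter-then-bar pattern above can also be written with pandas' built-in value_counts. A standalone sketch on a made-up frame, since the notebook's df is loaded elsewhere:

```python
import pandas as pd

# Hypothetical stand-in for the notebook's bridge DataFrame
df = pd.DataFrame({"category": ["repair", "rehab", "repair", "none", "repair"]})

counts = df["category"].value_counts()  # Series: category -> frequency, sorted descending
print(counts.to_dict())
```

The same vertical-tick chart then comes from `counts.plot(kind="bar", rot=90)`, with no intermediate key/value lists to keep aligned.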
ES-DOC/esdoc-jupyterhub
notebooks/inpe/cmip6/models/sandbox-3/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: INPE Source ID: SANDBOX-3 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:07 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. 
Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. 
Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. 
Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. 
Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
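Every ENUM cell in this template follows the same convention: choose from the listed Valid Choices, or fall back to the "Other: [Please specify]" free-text escape; for multi-valued properties (Cardinality 0.N / 1.N) `DOC.set_value` is called once per selected choice. The sketch below illustrates that convention with a hypothetical `StubDoc` standing in for the real pyesdoc `DOC` object and a hypothetical `checked_set_value` helper — only the `set_value` call shape is taken from the template.

```python
class StubDoc:
    """Hypothetical stand-in for this notebook's DOC object."""
    def __init__(self):
        self.values = []

    def set_value(self, value):
        self.values.append(value)


def checked_set_value(doc, value, valid_choices):
    """Accept only values from the Valid Choices list, or the
    'Other: ...' free-text escape that every ENUM offers."""
    if value not in valid_choices and not value.startswith("Other: "):
        raise ValueError("%r is not a valid choice" % (value,))
    doc.set_value(value)


# Valid Choices copied from the general_interactions ENUMs above.
GENERAL_INTERACTIONS = ["scattering", "emission/absorption",
                        "Other: [Please specify]"]

doc = StubDoc()
# Cardinality 1.N: one set_value call per selected choice.
checked_set_value(doc, "scattering", GENERAL_INTERACTIONS)
checked_set_value(doc, "emission/absorption", GENERAL_INTERACTIONS)
```

A value outside the list that does not start with "Other: " would raise `ValueError`, which is the point of the guard.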
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. 
General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
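Structurally, every cell in this notebook is a `set_id`/`set_value` pair: `DOC.set_id` selects the property, then one or more `DOC.set_value` calls supply its value(s). If many properties are being filled programmatically rather than cell by cell, that pairing can be batched. The sketch below is purely illustrative: `StubDoc` and the `ANSWERS` mapping are hypothetical (the example values are drawn from the Valid Choices of sections 22.3 and 22.4 above), and the real notebook still expects one cell per property.

```python
class StubDoc:
    """Hypothetical stand-in recording (property id, value) pairs."""
    def __init__(self):
        self.records = []
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.records.append((self._current_id, value))


# Hypothetical answer sheet; each property maps to a list of choices.
ANSWERS = {
    'cmip6.atmos.radiation.longwave_radiation.spectral_integration':
        ["correlated-k"],
    'cmip6.atmos.radiation.longwave_radiation.transport_calculation':
        ["two-stream"],
}

doc = StubDoc()
for prop_id, values in sorted(ANSWERS.items()):
    doc.set_id(prop_id)          # select the property first
    for v in values:             # Cardinality 1.N: one set_value per choice
        doc.set_value(v)
```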
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Whether the boundary layer turbulence scheme uses a counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
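Note the quoting convention in the cells above: STRING and ENUM properties are set as `DOC.set_value("value")` with a quoted string, while INTEGER properties (e.g. 30.3 Closure Order) and BOOLEAN properties (30.4 Counter Gradient) are set as `DOC.set_value(value)` with a bare Python value. A minimal sketch of that convention, with a hypothetical stub in place of the real pyesdoc `DOC` object (the closure order `2` is an illustrative number, not a recommendation; the other values come from the Valid Choices above):

```python
class StubDoc:
    """Hypothetical stand-in for this notebook's DOC object."""
    def __init__(self):
        self.values = []

    def set_value(self, value):
        self.values.append(value)


demo = StubDoc()
demo.set_value("Mellor-Yamada")  # ENUM/STRING: quoted string (30.1)
demo.set_value(2)                # INTEGER: bare number (30.3 closure order)
demo.set_value(True)             # BOOLEAN: bare True/False (30.4)
```

Passing `"True"` or `"2"` as strings would record the wrong type for BOOLEAN and INTEGER properties, which is why the template drops the quotes in those cells.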
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. 
Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. 
Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1.
Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. 
Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5.
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. 
Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
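Every ENUM cell above follows the same pattern: a `DOC.set_id(...)` call, a list of valid choices, and a cardinality annotation (`1.1`, `1.N`, `0.N`, ...). As a stand-alone illustration of the rule those annotations encode — not part of the ES-DOC API — the check could be sketched like this (`validate_enum` is hypothetical; the choice list is reused from property 47.2):

```python
# Stand-alone sketch of the "Valid Choices" / Cardinality rule used in
# the cells above; validate_enum is illustrative, not an ES-DOC function.

def validate_enum(values, choices, min_cardinality=1):
    """True if at least `min_cardinality` values are given and each one
    is either a listed choice or a free-text "Other: ..." entry."""
    if len(values) < min_cardinality:
        return False
    return all(v in choices or v.startswith("Other:") for v in values)

# Choices copied from property 47.2 (non-orographic gravity wave sources).
CHOICES = {"convection", "precipitation", "background spectrum"}

print(validate_enum(["convection", "precipitation"], CHOICES))  # True
print(validate_enum([], CHOICES))       # False: cardinality 1.N needs >= 1 value
print(validate_enum(["turbulence"], CHOICES))  # False: not a valid choice
```

Running such a check before calling `DOC.set_value` would catch misspelled choices early, rather than at document validation time.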
repo_name: FRESNA/atlite
path: examples/plotting_with_atlite.ipynb
license: gpl-3.0
import os import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec import seaborn as sns import geopandas as gpd import pandas as pd from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() import cartopy.crs as ccrs from cartopy.crs import PlateCarree as plate import cartopy.io.shapereader as shpreader import xarray as xr import atlite import logging import warnings warnings.simplefilter('ignore') logging.captureWarnings(False) logging.basicConfig(level=logging.INFO) """ Explanation: Plotting with Atlite This little notebook creates all the plots given in the introduction section. Geographical plotting with Atlite can be done efficiently and straightforwardly by relying on a few well-maintained Python packages. In particular, a good rule of thumb is the following: for projections and transformations &rightarrow; ask Cartopy; for plotting shapes &rightarrow; ask GeoPandas; for plotting data on geographical grids or time series &rightarrow; ask xarray. Since they interact well together, one only has to learn a few essential commands. So, let's dive into the code! First of all, import all relevant packages. End of explanation """ shpfilename = shpreader.natural_earth(resolution='10m', category='cultural', name='admin_0_countries') reader = shpreader.Reader(shpfilename) UkIr = gpd.GeoSeries({r.attributes['NAME_EN']: r.geometry for r in reader.records()}, crs={'init': 'epsg:4326'} ).reindex(['United Kingdom', 'Ireland']) """ Explanation: Note: geopandas will also require the descartes package to be installed.
Create shapes for United Kingdom and Ireland use the shapereader of Cartopy to retrieve high-resolution shapes make a GeoSeries with the shapes End of explanation """ # Define the cutout; this will not yet trigger any major operations cutout = atlite.Cutout(path="uk-2011-01", module="era5", bounds=UkIr.unary_union.bounds, time="2011-01") # This is where all the work happens (this can take some time, for us it took ~15 minutes). cutout.prepare() """ Explanation: Create the cutout create a cutout with geographical bounds of the shapes Here we use the ERA5 data for the UK and Ireland in January of 2011. End of explanation """ projection = ccrs.Orthographic(-10, 35) """ Explanation: Define an overall projection This projection will be used throughout the following plots. It has to be assigned to every axis that should be based on this projection. End of explanation """ cells = cutout.grid df = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) country_bound = gpd.GeoSeries(cells.unary_union) projection = ccrs.Orthographic(-10, 35) fig, ax = plt.subplots(subplot_kw={'projection': projection}, figsize=(6, 6)) df.plot(ax=ax, transform=plate()) country_bound.plot(ax=ax, edgecolor='orange', facecolor='None', transform=plate()) fig.tight_layout() """ Explanation: Plotting Plot Earth with cutout bound create GeoSeries with cell relevant data plot 'naturalearth_lowres' (country shapes) with unary union of cells on top End of explanation """ fig = plt.figure(figsize=(12, 7)) gs = GridSpec(3, 3, figure=fig) ax = fig.add_subplot(gs[:, 0:2], projection=projection) plot_grid_dict = dict(alpha=0.1, edgecolor='k', zorder=4, aspect='equal', facecolor='None', transform=plate()) UkIr.plot(ax=ax, zorder=1, transform=plate()) cells.plot(ax=ax, **plot_grid_dict) country_bound.plot(ax=ax, edgecolor='orange', facecolor='None', transform=plate()) ax.outline_patch.set_edgecolor('white') ax1 = fig.add_subplot(gs[0, 2]) cutout.data.wnd100m.mean(['x', 'y']).plot(ax=ax1)
ax1.set_frame_on(False) ax1.xaxis.set_visible(False) ax2 = fig.add_subplot(gs[1, 2], sharex=ax1) cutout.data.influx_direct.mean(['x', 'y']).plot(ax=ax2) ax2.set_frame_on(False) ax2.xaxis.set_visible(False) ax3 = fig.add_subplot(gs[2, 2], sharex=ax1) cutout.data.runoff.mean(['x', 'y']).plot(ax=ax3) ax3.set_frame_on(False) ax3.set_xlabel(None) fig.tight_layout() """ Explanation: Plot the cutout's raw data create matplotlib GridSpec country shapes and cells on left hand side time series for wind100m, influx_direct, runoff on right hand side End of explanation """ cap_factors = cutout.wind(turbine='Vestas_V112_3MW', capacity_factor=True) fig, ax = plt.subplots(subplot_kw={'projection': projection}, figsize=(9, 7)) cap_factors.name = 'Capacity Factor' cap_factors.plot(ax=ax, transform=plate(), alpha=0.8) cells.plot(ax=ax, **plot_grid_dict) ax.outline_patch.set_edgecolor('white') fig.tight_layout(); """ Explanation: Plot capacity factors calculate the mean capacity factors for each cell for a selected turbine (e.g. 
Vestas V112 3MW) use xarray plotting function to directly plot data plot cells GeoSeries on top End of explanation """ sites = gpd.GeoDataFrame([['london', 0.7, 51.3, 20], ['dublin', -6.16, 53.21, 30], ['edinburgh', -3.13, 55.5, 10]], columns=['name', 'x', 'y', 'capacity'] ).set_index('name') nearest = cutout.data.sel( {'x': sites.x.values, 'y': sites.y.values}, 'nearest').coords sites['x'] = nearest.get('x').values sites['y'] = nearest.get('y').values cells_generation = sites.merge( cells, how='inner').rename(pd.Series(sites.index)) layout = xr.DataArray(cells_generation.set_index(['y', 'x']).capacity.unstack())\ .reindex_like(cap_factors).rename('Installed Capacity [MW]') fig, ax = plt.subplots(subplot_kw={'projection': projection}, figsize=(9, 7)) UkIr.plot(ax=ax, zorder=1, transform=plate(), alpha=0.3) cells.plot(ax=ax, **plot_grid_dict) layout.plot(ax=ax, transform=plate(), cmap='Reds', vmin=0, label='Installed Capacity [MW]') ax.outline_patch.set_edgecolor('white') fig.tight_layout() fig, axes = plt.subplots(len(sites), sharex=True, figsize=(9, 4)) power_generation = cutout.wind('Vestas_V112_3MW', layout=layout, shapes=cells_generation.geometry) power_generation.to_pandas().plot(subplots=True, ax=axes) axes[2].set_xlabel('date') axes[1].set_ylabel('Generation [MW]') fig.tight_layout() """ Explanation: Plot power generation for selected areas First define a capacity layout, defining on which sites to install how much turbine capacity Generate the power generation time series for the selected sites End of explanation """ from shapely.geometry import Point fig = plt.figure(figsize=(12, 7)) gs = GridSpec(3, 3, figure=fig) ax = fig.add_subplot(gs[:, 0:2], projection=projection) df = gpd.GeoDataFrame(UkIr, columns=['geometry']).assign(color=['1', '2']) df.plot(column='color', ax=ax, zorder=1, transform=plate(), alpha=0.6) sites.assign(geometry=sites.apply(lambda ds: Point(ds.x, ds.y), axis=1) ).plot(ax=ax, zorder=2, transform=plate(), color='indianred') 
ax.outline_patch.set_edgecolor('white') power_generation = cutout.wind('Vestas_V112_3MW', layout=layout.fillna(0), shapes=UkIr ).to_pandas().rename_axis(index='', columns='shapes') ax1 = fig.add_subplot(gs[1, 2]) power_generation['Ireland'].plot.area( ax=ax1, title='Ireland', color='indianred') ax2 = fig.add_subplot(gs[2, 2]) power_generation['United Kingdom'].plot.area( ax=ax2, title='United Kingdom', color='darkgreen') ax2.set_xlabel('date') [ax.set_ylabel('Generation [MW]') for ax in [ax1,ax2]] fig.tight_layout() """ Explanation: Aggregate power generation per country shape End of explanation """ fig, ax = plt.subplots(figsize=(7, 7)) indicator_matrix_ir = cutout.indicatormatrix(UkIr)[0] indicator_matrix_ir = xr.DataArray(indicator_matrix_ir.toarray().reshape(cutout.shape), dims=['lat','lon'], coords=[cutout.coords['lat'], cutout.coords['lon']]) indicator_matrix_ir.plot(cmap="Greens", ax=ax) """ Explanation: Plot indicator matrix use seaborn heatmap for plotting the indicator matrix of the United Kingdom shape This indicator matrix is used to tell Atlite, which cells in the cutout represent the land area of the UK. End of explanation """
mtasende/Machine-Learning-Nanodegree-Capstone
notebooks/prod/n03_day14_model_choosing_close_feat_all_syms_equal.ipynb
mit
# Basic imports import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import datetime as dt import scipy.optimize as spo import sys from time import time from sklearn.metrics import r2_score, median_absolute_error %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') import predictor.feature_extraction as fe import utils.preprocessing as pp import utils.misc as misc AHEAD_DAYS = 14 """ Explanation: In this notebook we will search for the best models and input parameters. The problem at hand is predicting the price of any stock symbol 14 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined using the MRE metric (and/or the R^2 metric). The step for the rolling validation will be chosen as a compromise between having enough points (I consider about 1000 different target days to be good enough) and the time needed to compute the validation. End of explanation """ datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl') print(datasets_params_list_df.shape) datasets_params_list_df.head() train_days_arr = 252 * np.array([1, 2, 3]) params_list_df = pd.DataFrame() for train_days in train_days_arr: temp_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == AHEAD_DAYS].copy() temp_df['train_days'] = train_days params_list_df = params_list_df.append(temp_df, ignore_index=True) print(params_list_df.shape) params_list_df.head() """ Explanation: Let's get the data.
End of explanation """ from predictor.dummy_mean_predictor import DummyPredictor PREDICTOR_NAME = 'dummy' # Global variables eval_predictor = DummyPredictor() step_eval_days = 60 # The step to move between training/validation pairs params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days} results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params) results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1) results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1) # Pickle that! results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME)) results_df['mre'].plot() print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])])) print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])])) """ Explanation: Let's find the best params set for some different models - Dummy Predictor (mean) End of explanation """ from predictor.linear_predictor import LinearPredictor PREDICTOR_NAME = 'linear' # Global variables eval_predictor = LinearPredictor() step_eval_days = 60 # The step to move between training/validation pairs params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days} results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params) results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1) results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1) # Pickle that! 
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME)) results_df['mre'].plot() print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])])) print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])])) """ Explanation: - Linear Predictor End of explanation """ from predictor.random_forest_predictor import RandomForestPredictor PREDICTOR_NAME = 'random_forest' # Global variables eval_predictor = RandomForestPredictor() step_eval_days = 60 # The step to move between training/validation pairs params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days} results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params) results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1) results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1) # Pickle that! results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME)) results_df['mre'].plot() print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])])) print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])])) """ Explanation: - Random Forest model End of explanation """
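The `r2` and `mre` entries unpacked from the scores tuple above come from this project's own helpers, so their exact definitions live in `utils.misc`. As a hedged sketch only, here is one common reading of the two metrics; `mre` is assumed here to mean mean relative error, but since the notebook imports `median_absolute_error`, the project's real definition may differ.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mean_relative_error(y_true, y_pred):
    """One plausible reading of 'MRE': mean of |error| / |true value|."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

# Toy prices and predictions
y_true = [100.0, 102.0, 98.0, 101.0]
y_pred = [101.0, 101.0, 99.0, 100.0]
print(r_squared(y_true, y_pred))            # about 0.543
print(mean_relative_error(y_true, y_pred))  # about 0.00998
```

For price prediction a relative metric like this is attractive because it is comparable across symbols trading at very different price levels, which matters when one model serves all symbols.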
esa-as/2016-ml-contest
MandMs/Facies_classification-M&Ms_SVM_rbf_kernel.ipynb
apache-2.0
%matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.metrics import f1_score, accuracy_score, make_scorer from sklearn.model_selection import LeaveOneGroupOut, validation_curve import pandas as pd from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data """ Explanation: Facies classification using an SVM classifier with RBF kernel Contest entry by: <a href="https://github.com/mycarta">Matteo Niccoli</a> and <a href="https://github.com/dahlmb">Mark Dahl</a> Original contest notebook by Brendon Hall, Enthought <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas in this notebook,</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl,</span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. 
We will use this log data to train a support vector machine to classify facies types. The plan After a quick exploration of the dataset, we will: - run cross-validated grid search (with stratified k-fold) for parameter tuning - look at learning curves to get an idea of bias vs. variance, and under fitting vs. over fitting - train a new classifier with tuned parameters using leave-one-well-out as a method of testing Exploring the dataset First, we will examine the data set we will use to train the classifier. End of explanation """ training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() """ Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. The seven predictor variables are: * Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE. * Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS) The nine discrete facies (classes of rocks) are: 1. Nonmarine sandstone 2. 
Nonmarine coarse siltstone 3. Nonmarine fine siltstone 4. Marine siltstone and shale 5. Mudstone (limestone) 6. Wackestone (limestone) 7. Dolomite 8. Packstone-grainstone (limestone) 9. Phylloid-algal bafflestone (limestone) These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors. Facies |Label| Adjacent Facies :---: | :---: |:--: 1 |SS| 2 2 |CSiS| 1,3 3 |FSiS| 2 4 |SiSh| 5 5 |MS| 4,6 6 |WS| 5,7 7 |D| 6,8 8 |PS| 6,7,9 9 |BS| 7,8 Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type. End of explanation """ # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale #5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD', '#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) training_data.describe() """ Explanation: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. 
End of explanation """ PE_mask = training_data['PE'].notnull().values training_data = training_data[PE_mask] training_data.describe() """ Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, most values have 4149 valid values except for PE, which has 3232. We will drop the feature vectors that don't have a valid PE entry. End of explanation """ y = training_data['Facies'].values print y[25:40] print np.shape(y) X = training_data.drop(['Formation', 'Well Name','Facies','FaciesLabels'], axis=1) print np.shape(X) X.describe(percentiles=[.05, .25, .50, .75, .95]) """ Explanation: Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. We also get a vector of the facies labels that correspond to each feature vector. End of explanation """ from sklearn.model_selection import GridSearchCV """ Explanation: Stratified K-fold validation to evaluate model performance One of the key steps in machine learning is to estimate a model's performance on data that it has not seen before. Scikit-learn provides a simple utility utility (train_test_split) to partition the data into a training and a test set, but the disadvantage with that is that we ignore a portion of our dataset during training. An additional disadvantage of simple spit, inherent to log data, is that there's a depth dependence. A possible strategy to avoid this is cross-validation. With k-fold cross-validation we randomly split the data into k-folds without replacement, where k-1 folds are used for training and one fold for testing. The process is repeated k times, and the performance is obtained by taking the average of the k individual performances. 
Stratified k-fold is an improvement over standard k-fold in that the class proportions are preserved in each fold to ensure that each fold is representative of the class proportions in the data. Grid search for parameter tuning Another important aspect of machine learning is the search for the optimal model parameters (i.e. those that will yield the best performance). This tuning is done using grid search. The above short summary is based on Sebastian Raschka's <a href="https://github.com/rasbt/python-machine-learning-book"> Python Machine Learning</a> book. End of explanation """ Fscorer = make_scorer(f1_score, average = 'micro') Ascorer = make_scorer(accuracy_score) """ Explanation: Two birds with a stone Below we will perform grid search with stratified K-fold: http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_digits.html#sphx-glr-auto-examples-model-selection-grid-search-digits-py. This will give us reasonable values for the more critical (for performance) classifier's parameters. Make performance scorers Used to evaluate training, testing, and validation performance. 
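The stratification idea can be sketched in a few lines of pure Python. This is only a toy illustration of how class proportions are preserved per fold; scikit-learn's StratifiedKFold (which GridSearchCV uses automatically for an integer cv with a multiclass target) additionally handles shuffling and edge cases.

```python
from collections import defaultdict

def stratified_folds(labels, n_splits):
    """Assign each sample index to a fold, preserving class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(n_splits)]
    for indices in by_class.values():
        # Deal each class's indices out round-robin across the folds
        for pos, idx in enumerate(indices):
            folds[pos % n_splits].append(idx)
    return folds

# Toy facies labels: 6 SS, 9 CSiS, 3 FSiS samples
labels = ['SS'] * 6 + ['CSiS'] * 9 + ['FSiS'] * 3
folds = stratified_folds(labels, 3)
for fold in folds:
    counts = {c: sum(labels[i] == c for i in fold) for c in sorted(set(labels))}
    print(counts)  # each fold gets 2 SS, 3 CSiS, 1 FSiS
```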
End of explanation """ from sklearn import svm SVC_classifier = svm.SVC(cache_size = 800, random_state=1) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values parm_grid={'kernel': ['linear', 'rbf'], 'C': [0.5, 1, 5, 10, 15], 'gamma':[0.0001, 0.001, 0.01, 0.1, 1, 10]} grid_search = GridSearchCV(SVC_classifier, param_grid=parm_grid, scoring = Fscorer, cv=10) # Stratified K-fold with n_splits=10 # For integer inputs, if the estimator is a # classifier and y is either binary or multiclass, # as in our case, StratifiedKFold is used grid_search.fit(X, y) print('Best score: {}'.format(grid_search.best_score_)) print('Best parameters: {}'.format(grid_search.best_params_)) grid_search.best_estimator_ """ Explanation: SVM classifier SImilar to the classifier in the article (but, as you will see, it uses a different kernel). We will re-import the data so as to pre-process it as in the tutorial. End of explanation """ from sklearn.model_selection import learning_curve from sklearn.model_selection import ShuffleSplit def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(0.1, 1., 5)): """ Generate a simple plot of the test and training learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. 
cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 3-fold cross-validation, - integer, to specify the number of folds. - An object to be used as a cross-validation generator. - An iterable yielding train/test splits. For integer/None inputs, if ``y`` is binary or multiclass, :class:`StratifiedKFold` used. If the estimator is not a classifier or if ``y`` is neither binary nor multiclass, :class:`KFold` is used. Refer :ref:`User Guide <cross_validation>` for the various cross-validators that can be used here. n_jobs : integer, optional Number of jobs to run in parallel (default 1). """ plt.figure() plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, scoring = Fscorer) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training F1") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation F1") plt.legend(loc="best") return plt """ Explanation: Learning curves The idea from this point forward is to use the parameters, as tuned above, but to create a brand new classifier for the learning curves exercise. This classifier will therefore be well tuned but would not have seen the training data. 
We will look at learning curves of training and (cross-validated) testing error versus number of samples, hoping to gain some insight into whether: - since we will be testing eventually using a leave one-well-out, would we have enough samples? - is there a good bias-variance trade-off? In other words, is the classifier under-fitting, over-fitting, or just right? The plots are adapted from: http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html End of explanation """ training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] print well_name, 'out: ', np.shape(train)[0], 'training samples - ', np.shape(test)[0], 'test samples' """ Explanation: First things first, how many samples do we have for each leave-one-well-out split? End of explanation """ from sklearn import svm SVC_classifier_learn = svm.SVC(C=5, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values title = "Learning Curves (SVC)" # Learning curves with 50 iterations to get smoother mean test and train # score curves; each time we hold 15% of the data randomly as a validation set. 
# This is equivalent to leaving about 1 well out, on average (3232 minus ~2800 samples) cv = ShuffleSplit(n_splits=50, test_size=0.15, random_state=1) plot_learning_curve(SVC_classifier_learn, title, X, y, cv=cv, ylim=(0.45, 0.75), n_jobs=4) plt.show() """ Explanation: On average, we'll have about 2830 samples for training curves and 400 for testing curves. End of explanation """ SVC_classifier_learn_2 = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) title = "Learning Curves (SVC)" # Learning curves with 50 iterations to get smoother mean test and train # score curves; each time we hold 15% of the data randomly as a validation set. # This is equivalent to leaving about 1 well out, on average (3232 minus ~2800 samples) cv = ShuffleSplit(n_splits=50, test_size=0.15, random_state=1) plot_learning_curve(SVC_classifier_learn_2, title, X, y, cv=cv, ylim=(0.45, 0.75), n_jobs=4) plt.show() """ Explanation: Observations Neither training nor cross-validation scores are very high. The scores start to converge at just about the number of samples (on average) we intend to use for training of our final classifier with leave-one-well-out well cross-validation. But there's still a bit of a gap, which may indicate slight over-fitting (variance a bit high). Since we cannot address the overfitting by increasing the number of samples (without sacrificing the leave-one-well-out strategy), we can increase regularization a bit by slightly decreasing the parameter C. 
End of explanation """ from sklearn import svm SVC_classifier_conf = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) svc_pred = SVC_classifier_conf.fit(X,y) svc_pred = SVC_classifier_conf.predict(X) from sklearn.metrics import confusion_matrix from classification_utilities import display_cm, display_adj_cm conf = confusion_matrix(svc_pred, y) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) """ Explanation: Confusion matrix Let's see how we do with predicting the actual facies, by looking at a confusion matrix. We do this by keeping the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. End of explanation """ from sklearn import svm SVC_classifier_LOWO = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() f1_SVC = [] for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] SVC_classifier_LOWO.fit(X[train], y[train]) pred = SVC_classifier_LOWO.predict(X[test]) sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro') print("{:>20s} {:.3f}".format(well_name, sc)) f1_SVC.append(sc) print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_SVC)/(1.0*(len(f1_SVC)))) """ Explanation: Final classifier We now train our final classifier with leave-one-well-out validation. 
Again, we keep the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. End of explanation """ from sklearn import svm SVC_classifier_LOWO_C5 = svm.SVC(C=5, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() f1_SVC = [] for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] SVC_classifier_LOWO_C5.fit(X[train], y[train]) pred = SVC_classifier_LOWO_C5.predict(X[test]) sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro') print("{:>20s} {:.3f}".format(well_name, sc)) f1_SVC.append(sc) print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_SVC)/(1.0*(len(f1_SVC)))) """ Explanation: NB: the final classifier above resulted in a validated F1 score of 0.536 with the blind facies in the STUART and CRAWFORD wells. This compares favourably with the previous SVM implementations. However, had we used a parameter C equal to 5: End of explanation """
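The leave-one-well-out scheme used above can be sketched in plain Python. sklearn's LeaveOneGroupOut does the same index bookkeeping with wells as groups; the well names below are illustrative stand-ins, except 'Recruit F9', which does appear in the dataset.

```python
def leave_one_group_out(groups):
    """Yield (held_out_group, train_indices, test_indices), one per group."""
    for held_out in sorted(set(groups)):
        train = [i for i, g in enumerate(groups) if g != held_out]
        test = [i for i, g in enumerate(groups) if g == held_out]
        yield held_out, train, test

# Hypothetical per-sample well labels
wells = ['WELL_A', 'WELL_A', 'WELL_B', 'WELL_B', 'WELL_B', 'Recruit F9']
for well, train, test in leave_one_group_out(wells):
    # The held-out well never leaks into its own training split
    assert all(wells[i] != well for i in train)
    print(well, len(train), 'training samples -', len(test), 'test samples')
```

Keeping whole wells out of training is what makes the score an honest estimate here: plain k-fold would put depth-adjacent samples from the same well on both sides of the split.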
statsmodels/statsmodels.github.io
v0.13.0/examples/notebooks/generated/regression_plots.ipynb
bsd-3-clause
%matplotlib inline from statsmodels.compat import lzip import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.formula.api import ols plt.rc("figure", figsize=(16, 8)) plt.rc("font", size=14) """ Explanation: Regression Plots End of explanation """ prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data prestige.head() prestige_model = ols("prestige ~ income + education", data=prestige).fit() print(prestige_model.summary()) """ Explanation: Duncan's Prestige Dataset Load the Data We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>. End of explanation """ fig = sm.graphics.influence_plot(prestige_model, criterion="cooks") fig.tight_layout(pad=1.0) """ Explanation: Influence plots Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix. Externally studentized residuals are residuals that are scaled by their standard deviation where $$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$ with $$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j=1,\; j \neq i}^{n}\hat{\epsilon}_j^2$$ $n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix $$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$ The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence. End of explanation """ fig = sm.graphics.plot_partregress( "prestige", "income", ["income", "education"], data=prestige ) fig.tight_layout(pad=1.0) fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige) fig.tight_layout(pad=1.0) """ Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br /> RR.engineer has small residual and large leverage.
Conductor and minister have both high leverage and large residuals, and, <br /> therefore, large influence. Partial Regression Plots (Duncan) Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br /> Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br /> independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br /> In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br /> the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br /> $X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br /> of the former versus the latter residuals. <br /> The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br /> are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br /> individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br /> with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br /> linearity. End of explanation """ subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"]) prestige_model2 = ols( "prestige ~ income + education", data=prestige, subset=subset ).fit() print(prestige_model2.summary()) """ Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this. 
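The two-stage residual construction behind the partial regression plot can be checked numerically. The sketch below uses synthetic data and plain numpy rather than the prestige data, so all variable names are illustrative; it verifies the Frisch-Waugh-Lovell property that the slope of the residual-on-residual regression equals the full-model coefficient.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
x1 = rng.randn(n)
x2 = 0.5 * x1 + rng.randn(n)              # correlated regressors
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.randn(n)

# Full model: regress y on [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Partial regression for x2: residualize y and x2 on the remaining columns
X_rest = np.column_stack([np.ones(n), x1])
resid = lambda v: v - X_rest @ np.linalg.lstsq(X_rest, v, rcond=None)[0]
ey, ex2 = resid(y), resid(x2)

# Slope of ey on ex2 equals the full-model coefficient on x2
slope = (ex2 @ ey) / (ex2 @ ex2)
print(np.allclose(slope, beta_full[2]))  # True
```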
End of explanation """ fig = sm.graphics.plot_partregress_grid(prestige_model) fig.tight_layout(pad=1.0) """ Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br /> points, but you can use them to identify problems and then use plot_partregress to get more information. End of explanation """ fig = sm.graphics.plot_ccpr(prestige_model, "education") fig.tight_layout(pad=1.0) """ Explanation: Component-Component plus Residual (CCPR) Plots The CCPR plot provides a way to judge the effect of one regressor on the <br /> response variable by taking into account the effects of the other <br /> independent variables. The partial residuals plot is defined as <br /> $\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br /> $X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br /> is highly correlated with any of the other independent variables. If this <br /> is the case, the variance evident in the plot will be an underestimate of <br /> the true variance. End of explanation """ fig = sm.graphics.plot_ccpr_grid(prestige_model) fig.tight_layout(pad=1.0) """ Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid. End of explanation """ fig = sm.graphics.plot_regress_exog(prestige_model, "education") fig.tight_layout(pad=1.0) """ Explanation: Single Variable Regression Diagnostics The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. 
This function can be used for quickly checking modeling assumptions with respect to a single regressor. End of explanation """ fig = sm.graphics.plot_fit(prestige_model, "education") fig.tight_layout(pad=1.0) """ Explanation: Fit Plot The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable. End of explanation """ # dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv") # dta = dta.set_index("State", inplace=True).dropna() # dta.rename(columns={"VR" : "crime", # "MR" : "murder", # "M" : "pctmetro", # "W" : "pctwhite", # "H" : "pcths", # "P" : "poverty", # "S" : "single" # }, inplace=True) # # crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit() dta = sm.datasets.statecrime.load_pandas().data crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit() print(crime_model.summary()) """ Explanation: Statewide Crime 2009 Dataset Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below. End of explanation """ fig = sm.graphics.plot_partregress_grid(crime_model) fig.tight_layout(pad=1.0) fig = sm.graphics.plot_partregress( "murder", "hs_grad", ["urban", "poverty", "single"], data=dta ) fig.tight_layout(pad=1.0) """ Explanation: Partial Regression Plots (Crime Data) End of explanation """ fig = sm.graphics.plot_leverage_resid2(crime_model) fig.tight_layout(pad=1.0) """ Explanation: Leverage-Resid<sup>2</sup> Plot Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot. 
End of explanation """ fig = sm.graphics.influence_plot(crime_model) fig.tight_layout(pad=1.0) """ Explanation: Influence Plot End of explanation """ from statsmodels.formula.api import rlm rob_crime_model = rlm( "murder ~ urban + poverty + hs_grad + single", data=dta, M=sm.robust.norms.TukeyBiweight(3), ).fit(conv="weights") print(rob_crime_model.summary()) # rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights") # print(rob_crime_model.summary()) """ Explanation: Using robust regression to correct for outliers. Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example. End of explanation """ weights = rob_crime_model.weights idx = weights > 0 X = rob_crime_model.model.exog[idx.values] ww = weights[idx] / weights[idx].mean() hat_matrix_diag = ww * (X * np.linalg.pinv(X).T).sum(1) resid = rob_crime_model.resid resid2 = resid ** 2 resid2 /= resid2.sum() nobs = int(idx.sum()) hm = hat_matrix_diag.mean() rm = resid2.mean() from statsmodels.graphics import utils fig, ax = plt.subplots(figsize=(16, 8)) ax.plot(resid2[idx], hat_matrix_diag, "o") ax = utils.annotate_axes( range(nobs), labels=rob_crime_model.model.data.row_labels[idx], points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5, 5)] * nobs, size="large", ax=ax, ) ax.set_xlabel("resid2") ax.set_ylabel("leverage") ylim = ax.get_ylim() ax.vlines(rm, *ylim) xlim = ax.get_xlim() ax.hlines(hm, *xlim) ax.margins(0, 0) """ Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888) End of explanation """
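The hat_matrix_diag computation above relies on the identity that the diagonal of $H = X(X'X)^{-1}X'$ can be obtained row-wise as `(X * pinv(X).T).sum(1)`, without ever forming the full $n \times n$ matrix. A quick numpy check of that identity on random data (not the crime data):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 3)

# Full hat matrix: H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Row-wise shortcut used above: diag(H) without the 50x50 intermediate
h_diag = (X * np.linalg.pinv(X).T).sum(1)

print(np.allclose(np.diag(H), h_diag))  # True
```

This works because, for full-column-rank X, `pinv(X)` equals $(X'X)^{-1}X'$, so the elementwise product and row sum pick out exactly the diagonal entries of H.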
openai/openai-python
examples/embeddings/Zero-shot_classification.ipynb
mit
import pandas as pd import numpy as np from sklearn.metrics import classification_report df = pd.read_csv('output/embedded_1k_reviews.csv') df['babbage_similarity'] = df.babbage_similarity.apply(eval).apply(np.array) df['babbage_search'] = df.babbage_search.apply(eval).apply(np.array) df = df[df.Score != 3] df['sentiment'] = df.Score.replace({1:'negative', 2:'negative', 4:'positive', 5:'positive'}) """ Explanation: Zero-shot classification using the embeddings In this notebook we will classify the sentiment of reviews using embeddings and zero labeled data! The dataset is created in the Obtain_dataset Notebook. We'll define positive sentiment to be 4 and 5-star reviews, and negative sentiment to be 1 and 2-star reviews. 3-star reviews are considered neutral and we won't use them for this example. We will perform zero-shot classification by embedding descriptions of each class and then comparing new samples to those class embeddings. End of explanation """ from openai.embeddings_utils import cosine_similarity, get_embedding from sklearn.metrics import PrecisionRecallDisplay def evaluate_emeddings_approach( labels = ['negative', 'positive'], engine = 'text-similarity-babbage-001', ): label_embeddings = [get_embedding(label, engine=engine) for label in labels] def label_score(review_embedding, label_embeddings): return cosine_similarity(review_embedding, label_embeddings[1]) - cosine_similarity(review_embedding, label_embeddings[0]) # map the engine name onto the embedding column loaded above engine_col_name = 'babbage_similarity' if 'similarity' in engine else 'babbage_search' probas = df[engine_col_name].apply(lambda x: label_score(x, label_embeddings)) preds = probas.apply(lambda x: 'positive' if x>0 else 'negative') report = classification_report(df.sentiment, preds) print(report) display = PrecisionRecallDisplay.from_predictions(df.sentiment, probas, pos_label='positive') _ = display.ax_.set_title("2-class Precision-Recall curve") evaluate_emeddings_approach(labels=['negative', 'positive'], engine='text-similarity-babbage-001') """ Explanation: 
Zero-Shot Classification To perform zero-shot classification, we want to predict labels for our samples without any training. To do this, we can simply embed short descriptions of each label, such as positive and negative, and then compare the cosine distance between embeddings of samples and label descriptions. The highest similarity label to the sample input is the predicted label. We can also define a prediction score to be the difference between the cosine distance to the positive and to the negative label. This score can be used for plotting a precision-recall curve, which can be used to select a different tradeoff between precision and recall, by selecting a different threshold. End of explanation """ evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='text-similarity-babbage-001') """ Explanation: We can see that this classifier already performs extremely well. We used similarity embeddings, and the simplest possible label name. Let's try to improve on this by using more descriptive label names, and search embeddings. End of explanation """ evaluate_emeddings_approach(labels=['An Amazon review with a negative sentiment.', 'An Amazon review with a positive sentiment.'], engine='text-search-babbage-query-001') """ Explanation: Using the search embeddings and descriptive names leads to an additional improvement in performance. End of explanation """
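The scoring rule used throughout — cosine similarity to each label embedding, with the difference as the prediction score — can be illustrated on toy vectors. Everything here (the 3-dimensional "embeddings") is made up for illustration; only the scoring logic mirrors the function above.

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy "embeddings": 3-d stand-ins for real model outputs
label_embeddings = {
    'negative': np.array([1.0, 0.0, 0.2]),
    'positive': np.array([0.0, 1.0, 0.2]),
}

def label_score(review_embedding):
    # positive minus negative similarity: > 0 means predicted positive
    return (cosine_similarity(review_embedding, label_embeddings['positive'])
            - cosine_similarity(review_embedding, label_embeddings['negative']))

happy = np.array([0.1, 0.9, 0.3])
angry = np.array([0.9, 0.1, 0.3])
print(label_score(happy) > 0, label_score(angry) > 0)  # True False
```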
eric-haibin-lin/mxnet
example/multi-task/multi-task-learning.ipynb
apache-2.0
import logging import random import time import matplotlib.pyplot as plt import mxnet as mx from mxnet import gluon, nd, autograd import numpy as np """ Explanation: Multi-Task Learning Example This is a simple example to show how to use mxnet for multi-task learning. The network is jointly going to learn whether a number is odd or even and to actually recognize the digit. For example 1 : 1 and odd 2 : 2 and even 3 : 3 and odd etc In this example we don't expect the tasks to contribute to each other much, but for example multi-task learning has been successfully applied to the domain of image captioning. In A Multi-task Learning Approach for Image Captioning by Wei Zhao, Benyou Wang, Jianbo Ye, Min Yang, Zhou Zhao, Ruotian Luo, Yu Qiao, they train a network to jointly classify images and generate text captions. End of explanation """ batch_size = 128 epochs = 5 ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu() lr = 0.01 """ Explanation: Parameters End of explanation """ train_dataset = gluon.data.vision.MNIST(train=True) test_dataset = gluon.data.vision.MNIST(train=False) def transform(x,y): x = x.transpose((2,0,1)).astype('float32')/255. y1 = y y2 = y % 2 #odd or even return x, np.float32(y1), np.float32(y2) """ Explanation: Data We get the traditional MNIST dataset and add a new label to the existing one. 
For each digit we return a new label that stands for Odd or Even End of explanation """ train_dataset_t = train_dataset.transform(transform) test_dataset_t = test_dataset.transform(transform) """ Explanation: We assign the transform to the original dataset End of explanation """ train_data = gluon.data.DataLoader(train_dataset_t, shuffle=True, last_batch='rollover', batch_size=batch_size, num_workers=5) test_data = gluon.data.DataLoader(test_dataset_t, shuffle=False, last_batch='rollover', batch_size=batch_size, num_workers=5) print("Input shape: {}, Target Labels: {}".format(train_dataset[0][0].shape, train_dataset_t[0][1:])) """ Explanation: We load the datasets into DataLoaders End of explanation """ class MultiTaskNetwork(gluon.HybridBlock): def __init__(self): super(MultiTaskNetwork, self).__init__() self.shared = gluon.nn.HybridSequential() with self.shared.name_scope(): self.shared.add( gluon.nn.Dense(128, activation='relu'), gluon.nn.Dense(64, activation='relu'), gluon.nn.Dense(10, activation='relu') ) self.output1 = gluon.nn.Dense(10) # Digit recognition self.output2 = gluon.nn.Dense(1) # odd or even def hybrid_forward(self, F, x): y = self.shared(x) output1 = self.output1(y) output2 = self.output2(y) return output1, output2 """ Explanation: Multi-task Network The output of the featurization is passed to two different output layers End of explanation """ loss_digits = gluon.loss.SoftmaxCELoss() loss_odd_even = gluon.loss.SigmoidBCELoss() """ Explanation: We can use two different losses, one for each output End of explanation """ mx.random.seed(42) random.seed(42) net = MultiTaskNetwork() net.initialize(mx.init.Xavier(), ctx=ctx) net.hybridize() # hybridize for speed trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':lr}) """ Explanation: We create and initialize the network End of explanation """ def evaluate_accuracy(net, data_iterator): acc_digits = mx.metric.Accuracy(name='digits') acc_odd_even = mx.metric.Accuracy(name='odd_even') for 
i, (data, label_digit, label_odd_even) in enumerate(data_iterator): data = data.as_in_context(ctx) label_digit = label_digit.as_in_context(ctx) label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1) output_digit, output_odd_even = net(data) acc_digits.update(label_digit, output_digit.softmax()) acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5) return acc_digits.get(), acc_odd_even.get() """ Explanation: Evaluate Accuracy We need to evaluate the accuracy of each task separately End of explanation """ alpha = 0.5 # Combine losses factor for e in range(epochs): # Accuracies for each task acc_digits = mx.metric.Accuracy(name='digits') acc_odd_even = mx.metric.Accuracy(name='odd_even') # Accumulative losses l_digits_ = 0. l_odd_even_ = 0. for i, (data, label_digit, label_odd_even) in enumerate(train_data): data = data.as_in_context(ctx) label_digit = label_digit.as_in_context(ctx) label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1) with autograd.record(): output_digit, output_odd_even = net(data) l_digits = loss_digits(output_digit, label_digit) l_odd_even = loss_odd_even(output_odd_even, label_odd_even) # Combine the loss of each task l_combined = (1-alpha)*l_digits + alpha*l_odd_even l_combined.backward() trainer.step(data.shape[0]) l_digits_ += l_digits.mean() l_odd_even_ += l_odd_even.mean() acc_digits.update(label_digit, output_digit.softmax()) acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5) print("Epoch [{}], Acc Digits {:.4f} Loss Digits {:.4f}".format( e, acc_digits.get()[1], l_digits_.asscalar()/(i+1))) print("Epoch [{}], Acc Odd/Even {:.4f} Loss Odd/Even {:.4f}".format( e, acc_odd_even.get()[1], l_odd_even_.asscalar()/(i+1))) print("Epoch [{}], Testing Accuracies {}".format(e, evaluate_accuracy(net, test_data))) """ Explanation: Training Loop We need to balance the contribution of each loss to the overall training and do so by tuning this alpha parameter within [0,1]. 
End of explanation """ def get_random_data(): idx = random.randint(0, len(test_dataset) - 1) # randint is inclusive on both ends img = test_dataset[idx][0] data, _, _ = test_dataset_t[idx] data = data.as_in_context(ctx).expand_dims(axis=0) plt.imshow(img.squeeze().asnumpy(), cmap='gray') return data data = get_random_data() digit, odd_even = net(data) digit = digit.argmax(axis=1)[0].asnumpy() odd_even = (odd_even.sigmoid()[0] > 0.5).asnumpy() print("Predicted digit: {}, odd: {}".format(digit, odd_even)) """ Explanation: Testing End of explanation """
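The single backward pass through l_combined in the training loop is equivalent to scaling each task's gradient by its weight. A numpy check of that equivalence on a shared scalar parameter — the two losses here are toy functions, not the gluon model:

```python
import numpy as np

alpha = 0.5
w = 2.0  # a shared parameter

# Two toy task losses and their analytic gradients w.r.t. w
l1 = lambda w: (w - 1.0) ** 2      # dl1/dw = 2(w - 1)
l2 = lambda w: 3.0 * w             # dl2/dw = 3

# Weighted sum of the per-task gradients
grad_combined = (1 - alpha) * 2 * (w - 1.0) + alpha * 3.0

# Numerical gradient of the combined loss
eps = 1e-6
combined = lambda w: (1 - alpha) * l1(w) + alpha * l2(w)
grad_numeric = (combined(w + eps) - combined(w - eps)) / (2 * eps)

print(np.isclose(grad_combined, grad_numeric))  # True
```

Because gradients are linear, weighting the losses before a single backward pass and weighting the per-task gradients afterwards give the same parameter update.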
liganega/Gongsu-DataSci
ref_materials/excs/Lab-08.ipynb
gpl-3.0
Celsius = [36.2, 36.7, 47.3, 17.8] """ Explanation: Exercises List comprehensions Example We have the following list of Celsius temperatures. End of explanation """ Fahrenheit = [1.8 * C + 32 for C in Celsius] Fahrenheit """ Explanation: Using the list above, the corresponding list of Fahrenheit temperatures can be built as follows. End of explanation """ colors = ["red", "purple", "yellow", "blue", "green"] things = [ "triangle", "rectangle", "pentagon" ] """ Explanation: Example We have five colors and three kinds of shapes — triangle, rectangle, and pentagon — as follows. End of explanation """ all_combination = [(x, y) for x in things for y in colors] all_combination """ Explanation: To build the list of all possible (shape, color) tuples, we can do the following. There are 15 possible combinations in total. End of explanation """ import urllib url = "http://weather.noaa.gov/pub/data" +\ "/observations/metar/decoded/" def NOAA_string(s): noaa_data_string = urllib.urlopen(url + s + '.TXT').read() return noaa_data_string def NOAA_temperature(s): if s.find("Temperature") == -1: return "Info NA" else: L = s.split('\n') for index, line in enumerate(L): if line.find("Temperature") == -1: pass else: break temp_line = L[index].split() return int(float(temp_line[-2][1:])) def city_temperature(s): return NOAA_temperature(NOAA_string(s)) print(city_temperature('A302')) print(city_temperature('RKSG')) """ Explanation: Using hash tables The site below stores the weather information that the US National Oceanic and Atmospheric Administration (NOAA) collects for some nine thousand major cities around the world, one text file per weather station. http://weather.noaa.gov/pub/data/observations/metar/decoded/ For example, the weather information of the Pyeongtaek weather station is stored in the file RKSG.TXT. That is, clicking the link below shows the current weather in Pyeongtaek. http://weather.noaa.gov/pub/data/observations/metar/decoded/RKSG.TXT Exercise 1 of Lab-07 covered how to extract the Celsius temperature from the weather information collected at one particular station; now we want to implement a function city_temperature that easily looks up the Celsius temperature of any city the NOAA site provides. That is, it should work as follows. city_temperature('RKSG') = 22 city_temperature('RKSI') = 20 For reference, the codes of the NOAA weather stations located in Korea can be found at the site below; for example, the URL shows that the station at Incheon Airport has the code RKSI. http://weather.noaa.gov/weather/KR_cc.html Lab-07, Exercise 1 revisited First, Exercise 1 of Lab-07 implemented a way to look up the temperature in Pyeongtaek; one sample solution is shown below. 
def NOAA_string(): url = "http://weather.noaa.gov/pub/data" +\ "/observations/metar/decoded/RKSG.TXT" noaa_data_string = urllib.urlopen(url).read() return noaa_data_string def NOAA_temperature(s): L = s.split('\n') Line7 = L[6].split() print(str(int(Line7[-2][1:])) + " C") Now running the command below shows the Celsius temperature in Pyeongtaek. NOAA_temperature(NOAA_string()) Problem The NOAA_string function above only fetches the weather information of the station with code RKSG, that is, the station located in Pyeongtaek. Modify NOAA_string so that it takes a station code as an argument and fetches the weather information collected at that station. For example, NOAA_string('RKSG') should return the weather information of Pyeongtaek, and NOAA_string('RKSI') that of Incheon Airport. Problem The NOAA_temperature function used in the sample solution above assumes, when the variable Line7 is defined, that the temperature information sits on line 7 of the file. That assumption is quite dangerous. In fact, in the weather information of the station with code CWHP, the temperature appears on line 5. http://weather.noaa.gov/pub/data/observations/metar/decoded/CWHP.TXT Moreover, the information of the station with code A302 contains no Temperature entry at all. http://weather.noaa.gov/pub/data/observations/metar/decoded/A302.TXT To implement the city_temperature function, NOAA_temperature must therefore not depend on a fixed line number. Implement NOAA_temperature so that it first checks whether Temperature information exists and, if it does, finds out which line it is on and uses that line. If no temperature information exists, it should return 'Info NA'. NA is short for Not Available. For example, the following results should be obtained. In [2]: NOAA_temperature(NOAA_string('RKSG')) Out[2]: 22 In [3]: NOAA_temperature(NOAA_string('RKSI')) Out[3]: 20 In [3]: NOAA_temperature(NOAA_string('A302')) Out[3]: 'Info NA' That is, the city_temperature function can be defined as follows. def city_temperature(s): return NOAA_temperature(NOAA_string(s)) Hint: to find out on which line a particular word appears, you can use the enumerate function together with the find method. The enumerate function is used as follows. In [1]: for index, line in enumerate(['a', 'b', 'c', 'd']): print("{} {}".format(index, line)) 0 a 1 b 2 c 3 d Sample solution End of explanation """ import requests NOAA = 'http://weather.noaa.gov/pub/data/observations/metar/decoded/' city_name_info = str(requests.get(NOAA).text) type(city_name_info) """ Explanation: Exercise In the exercise above we implemented a function that fetches a document stored on the NOAA site one at a time, whenever it is needed. 
This time we learn how to fetch the temperature information of all the weather stations the NOAA site provides in one go and build a sequence datatype of (station code, Celsius temperature) pairs. First, visit the site below once more. You will see thousands of files whose names have the form ****.TXT. http://weather.noaa.gov/pub/data/observations/metar/decoded/ It contains roughly nine thousand files. (How can we tell?) Naturally, one then wonders whether a list of the names of all the files stored in that online folder can be built. Earlier we used the urlopen function of the urllib module to read a text file stored at a particular internet address; now we need a function that fetches the names of the files stored in a particular online folder. The get function of the requests module plays a role similar to urlopen and fetches information about a particular online folder. Running the code below fetches the information about the folder at the site above in one piece, that is, as one big string. End of explanation """ city_name_info_line = city_name_info.split('\n') type(city_name_info_line) len(city_name_info_line) """ Explanation: Running the command below print(city_name_info) shows very long content (some nine thousand lines). For example, it begins as follows. ====== &lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Index of /pub/data/observations/metar/decoded&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Index of /pub/data/observations/metar/decoded&lt;/h1&gt; &lt;pre&gt;&lt;img src="/icons/blank.gif" alt="Icon "&gt; &lt;a href="?C=N;O=D"&gt;Name&lt;/a&gt; &lt;a href="?C=M;O=A"&gt;Last modified&lt;/a&gt; &lt;a href="?C=S;O=A"&gt;Size&lt;/a&gt; &lt;a href="?C=D;O=A"&gt;Description&lt;/a&gt;&lt;hr&gt;&lt;img src="/icons/back.gif" alt="[DIR]"&gt; &lt;a href="/pub/data/observations/metar/"&gt;Parent Directory&lt;/a&gt; - &lt;img src="/icons/text.gif" alt="[TXT]"&gt; &lt;a href="A302.TXT"&gt;A302.TXT&lt;/a&gt; 09-Sep-2011 16:09 329 &lt;img src="/icons/text.gif" alt="[TXT]"&gt; &lt;a href="AABS.TXT"&gt;AABS.TXT&lt;/a&gt; 16-Nov-2008 19:09 248 &lt;img src="/icons/text.gif" alt="[TXT]"&gt; &lt;a href="AAXX.TXT"&gt;AAXX.TXT&lt;/a&gt; 12-Oct-2015 21:02 334 ====== If you have worked with HTML before you will recognize what the content above means, but that is not important here; it suffices to see where the information we need is located. Looking closely, the bottom three lines shown all have the same shape. Only the ****.TXT file names differ, and those file names are exactly the information we want. As already mentioned, there are some nine thousand such lines. 
Now let's split city_name_info into lines. End of explanation """ city_name_info_line[:10] """ Explanation: Let's check just the first few lines. End of explanation """ city_name_info_line[8:18] """ Explanation: We can see that the entries for the city files start at line 9 (index 8). Let's check the first 10 city entries. End of explanation """ city_name_info_line[-10:] """ Explanation: We can confirm that they match the code-name files of the first 10 cities shown at the site below. http://weather.noaa.gov/pub/data/observations/metar/decoded/ Checking the last few lines tells the same story. End of explanation """ city_name_info_line[-10:-4] """ Explanation: The last 4 lines are again HTML-related information and have nothing to do with weather stations. Therefore the entries of city_name_info_line from index 8 up to the 5th line from the end are the files containing city information. city_name_info_line[8:-4] End of explanation """ A302_line = city_name_info_line[8] num = A302_line.find('.TXT') print(A302_line[num-4 : num]) """ Explanation: Now it is easy to extract each city's code from each item of city_name_info_line. For example, the code A302 of the first weather station can be extracted from the entry at index 8 as follows. End of explanation """ city_codes = [] for line in city_name_info_line[8:-4]: num = line.find('.TXT') city_codes.append(line[num-4 : num]) """ Explanation: Wrapping the code above in a for loop, we can build the list of all station codes. End of explanation """ city_codes = [line[line.find('.TXT')-4 : line.find('.TXT')] for line in city_name_info_line[8:-4]] city_codes[-10:] len(city_codes) """ Explanation: The code above can also be written as a list comprehension. End of explanation """ city_codes[:10] """ Explanation: We can see that 8947 station codes are currently stored. Note: the exact number of stations may change over time. The first 10 station codes are shown below. End of explanation """ def city_temp_list(num): L = [] for city in city_codes[:num]: temp = city_temperature(city) L.append([city, temp]) return L """ Explanation: The next step is to build a sequence datatype holding (station code, temperature) pairs. Here we cover a list-based approach and a hash-table-based approach. Problem Implement code that builds a list of length-2 lists, each consisting of a station code and the Celsius temperature measured at that station. It should be a list of the following form. [['A302', 'Info NA'], ['AABS', 24], ['AABS', 24], .... ] For example, a function that builds the code/temperature list for the first num stations can be written as follows. The maximum value that can be passed as num is len(city_codes). Note: pass a small number as the num argument; otherwise it can take a very long time. 
Sample solution End of explanation """ List_sample30 = city_temp_list(30) """ Explanation: Now let's store the information of the first 30 weather stations. End of explanation """ city_codes[:30] """ Explanation: Next, let's look up the temperature measured at a particular station. First, the list of the first 30 station codes is shown below. Note that checking the information of the first 30 stations takes roughly 10 seconds; checking all 9000 or so would therefore take about 4000 seconds, that is, more than an hour. End of explanation """ def list_search(x, xs): for y in xs: if x == y[0]: return y[1] else: pass """ Explanation: Code that looks up the temperature of a particular station code in the list of pairs can be written, for example, as follows. End of explanation """ %time list_search('BDAB', List_sample30) %time list_search('ABLC', List_sample30) %time list_search('ABBN', List_sample30) %time for code in city_codes[:30]: list_search(code, List_sample30) """ Explanation: To measure the running time of the list_search function, run the following. End of explanation """ def cities(xs): L = [] for x in xs: L.append(x[x.find('.TXT') - 4: x.find('.TXT') + 4]) return L """ Explanation: Checking all 30 entries takes about 60 microseconds in total. For reference, 1 microsecond is 10 ** -6 seconds. Note: the timings can differ greatly depending on the performance and environment of the computer used. Problem Implement, using a hash table, code that stores the station codes and the Celsius temperatures measured at the corresponding stations. It should be a dictionary of the following form. {'A302':'Info NA', 'AABS':24, 'AABS':24, .... } Sample solution End of explanation """ def city_temp_hash(num): H = {} for city in city_codes[:num]: temp = city_temperature(city) H[city] = temp return H Hash_sample30 = city_temp_hash(30) Hash_sample30 %time Hash_sample30['BDAB'] %time Hash_sample30['AVLC'] %time Hash_sample30['ABBN'] %time for code in city_codes[:30]: Hash_sample30[code] """ Explanation: With the hash table, checking all 30 sample entries takes 12 microseconds — several times faster than the list-based approach above. The longer the list becomes, the wider this speed gap will grow; you can verify this yourself. Problem The code below converts the temperature information to Fahrenheit and builds the list of (station code, Fahrenheit temperature) pairs. Celsius is converted to Fahrenheit by the following formula. 
F = 1.8 * C + 32 Sample solution End of explanation """ def city_temp_list_F(num): F = [] for center in city_temp_list(num): if isinstance(center[1], int): F.append([center[0], float(1.8 * center[1] + 32)]) else: F.append([center[0], 'Info NA']) return F city_temp_list_F(10) """ Explanation: Sample solution End of explanation """
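The timing gap observed above — linear scan versus hash lookup — can be reproduced offline, without any NOAA requests. The station codes and temperatures below are fabricated; only the two lookup strategies match the code above. A list scan is O(n) per lookup, while a dict lookup is O(1) on average, which is why the gap widens as the data grows.

```python
def list_search(x, xs):
    # Linear scan over [code, temp] pairs: O(n) per lookup
    for y in xs:
        if x == y[0]:
            return y[1]

codes = ['S{:04d}'.format(i) for i in range(1000)]   # fabricated station codes
pairs = [[c, i % 40] for i, c in enumerate(codes)]   # list of [code, temp]
table = {c: i % 40 for i, c in enumerate(codes)}     # hash table (dict)

# Both strategies agree on every key; the dict answers in O(1) on average
print(all(list_search(c, pairs) == table[c] for c in codes))  # True
```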
arturops/deep-learning
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
mit
import pandas as pd import numpy as np import tensorflow as tf import tflearn from tflearn.data_utils import to_categorical """ Explanation: Sentiment analysis with TFLearn In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you. We'll start off by importing all the modules we'll need, then load and prepare the data. End of explanation """ reviews = pd.read_csv('reviews.txt', header=None) labels = pd.read_csv('labels.txt', header=None) """ Explanation: Preparing the data Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the data Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. 
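If the raw text were not already lower-cased, the normalization step would be a single pass before counting; a small sketch on toy strings (not the review file):

```python
from collections import Counter

raw = ["The movie was great", "THE ending was the best"]
counts = Counter()
for line in raw:
    for word in line.lower().split(' '):   # fold The/the/THE together
        counts[word] += 1

print(counts['the'])  # 3
```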
End of explanation """ from collections import Counter total_counts = Counter() for index,review in reviews.iterrows(): for word in review[0].split(' '): total_counts[word] += 1 print("Total words in data set: ", len(total_counts)) """ Explanation: Counting word frequency To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class. Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours. End of explanation """ vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000] print(vocab[:60]) """ Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words. End of explanation """ print(vocab[-1], ': ', total_counts[vocab[-1]]) """ Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words. End of explanation """ word2idx = {key:idx for (idx,key) in enumerate(vocab)} ## create the word-to-index dictionary """ Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. 
We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on. End of explanation """ def text_to_vector(text): word_vector = np.zeros(len(word2idx), dtype=int) for word in text.split(' '): if word2idx.get(word, None) is not None: word_vector[word2idx[word]] += 1 return word_vector """ Explanation: Text to vector function Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element in the index associated with that word, which you get from word2idx. Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary. 
End of explanation """ text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] """ Explanation: If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ``` End of explanation """ word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :23] """ Explanation: Now, run through our entire review data set and convert each review to a word vector. End of explanation """ Y = (labels=='positive').astype(np.int_) records = len(labels) shuffle = np.arange(records) np.random.shuffle(shuffle) test_fraction = 0.9 train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):] trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2) testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2) trainY """ Explanation: Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later. 
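As a rough illustration (this is not TFLearn's actual implementation), to_categorical essentially builds a one-hot matrix with one row per label and one column per class; a minimal sketch:

```python
import numpy as np

def to_categorical_sketch(labels, n_classes):
    """Return a one-hot matrix: row i has a 1 in column labels[i]."""
    one_hot = np.zeros((len(labels), n_classes), dtype=np.int_)
    one_hot[np.arange(len(labels)), labels] = 1
    return one_hot

# Labels 0/1 become two-column indicator rows, matching the two softmax output units.
print(to_categorical_sketch(np.array([0, 1, 1, 0]), 2))
```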
End of explanation """ # Network building def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() # Start the network graph #Input net = tflearn.input_data([None, 10000]) #Hidden layers 250, 10 net = tflearn.fully_connected(net, 400, activation='ReLU') net = tflearn.fully_connected(net, 10, activation='ReLU') #output net = tflearn.fully_connected(net, 2, activation='softmax') #Training specifications net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') # End of network graph model = tflearn.DNN(net) return model """ Explanation: Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. 
In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10])                          # Input
net = tflearn.fully_connected(net, 5, activation='ReLU')      # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax')   # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Below is the code to fit the network to our word vectors. You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:, 0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:, 0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')

sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)

sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
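The thresholding used in the Testing cell can be seen in isolation with toy numbers (the probabilities and labels below are made up; column 0 plays the role of the class-0 indicator, just like testY[:, 0]):

```python
import numpy as np

# Toy stand-in for model.predict output: one [P(class 0), P(class 1)] row per example.
probs = np.array([[0.8, 0.2],
                  [0.1, 0.9],
                  [0.4, 0.6]])
# Hypothetical ground truth, as the class-0 indicator column of a one-hot matrix.
true_col0 = np.array([1, 0, 1])

# Same idea as the Testing cell: predict class 0 when its probability is >= 0.5,
# then compare against the class-0 indicator to get accuracy.
predictions = (probs[:, 0] >= 0.5).astype(np.int_)
accuracy = np.mean(predictions == true_col0)
print(accuracy)  # 2 of the 3 toy examples are classified correctly
```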
robotcator/gensim
docs/notebooks/word2vec.ipynb
lgpl-2.1
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
"""
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input.
Each sentence is a list of words (utf8 strings): End of explanation """ # create some toy data to use with the following example import smart_open, os if not os.path.exists('./data/'): os.makedirs('./data/') filenames = ['./data/f1.txt', './data/f2.txt'] for i, fname in enumerate(filenames): with smart_open.smart_open(fname, 'w') as fout: for line in sentences[i]: fout.write(line + '\n') class MySentences(object): def __init__(self, dirname): self.dirname = dirname def __iter__(self): for fname in os.listdir(self.dirname): for line in open(os.path.join(self.dirname, fname)): yield line.split() sentences = MySentences('./data/') # a memory-friendly iterator print(list(sentences)) # generate the Word2Vec model model = gensim.models.Word2Vec(sentences, min_count=1) print(model) print(model.wv.vocab) """ Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large. Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence… For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line: End of explanation """ # build the same model, making the 2 steps explicit new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter) # can be a non-repeatable, 1-pass generator print(new_model) print(model.wv.vocab) """ Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. 
All that is required is that the input yields one sentence (list of utf8 words) after another. Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language. 1. The first pass collects words and their frequencies to build an internal dictionary tree structure. 2. The second pass trains the neural model. These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way: End of explanation """ # Set file names for train and test data test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep lee_train_file = test_data_dir + 'lee_background.cor' class MyText(object): def __iter__(self): for line in open(lee_train_file): # assume there's one document per line, tokens separated by whitespace yield line.lower().split() sentences = MyText() print(sentences) """ Explanation: More data would be nice For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim): End of explanation """ # default value of min_count=5 model = gensim.models.Word2Vec(sentences, min_count=10) """ Explanation: Training Word2Vec accepts several parameters that affect both training speed and quality. min_count min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them: End of explanation """ # default value of size=100 model = gensim.models.Word2Vec(sentences, size=200) """ Explanation: size size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto. 
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
End of explanation
"""
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
"""
Explanation: workers
workers, the last of the major parameters (full list here), is for training parallelization, to speed up training:
End of explanation
"""
model.accuracy('./datasets/questions-words.txt')
"""
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by the min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task; there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example, a syntactic analogy of comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, like plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?). Gensim supports the same evaluation set, in exactly the same format: End of explanation """ model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv') """ Explanation: This accuracy takes an optional parameter restrict_vocab which limits which test examples are to be considered. In the December 2016 release of Gensim we added a better way to evaluate semantic similarity. By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable. End of explanation """ from tempfile import mkstemp fs, temp_path = mkstemp("gensim_temp") # creates a temp file model.save(temp_path) # save the model new_model = gensim.models.Word2Vec.load(temp_path) # open the model """ Explanation: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial. 
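The “A is to B as C is to D” scoring described above boils down to vector arithmetic plus a nearest-neighbor lookup by cosine similarity. A toy sketch of that idea with made-up 2D vectors (not trained embeddings, and not gensim's actual implementation):

```python
import numpy as np

# Hypothetical embedding table, just to show the arithmetic behind bad:worse ; good:?
vecs = {
    "bad":    np.array([1.0, 0.0]),
    "worse":  np.array([1.0, 1.0]),
    "good":   np.array([-1.0, 0.0]),
    "better": np.array([-1.0, 1.0]),
    "table":  np.array([0.0, -1.0]),
}

def analogy(a, b, c):
    """Return the word closest (cosine) to vec(b) - vec(a) + vec(c)."""
    target = vecs[b] - vecs[a] + vecs[c]
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = [w for w in vecs if w not in (a, b, c)]  # exclude the query words
    return max(candidates, key=lambda w: cos(vecs[w], target))

print(analogy("bad", "worse", "good"))  # -> "better" for these toy vectors
```

With a real model, gensim exposes the same kind of query through most_similar with positive and negative word lists.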
Storing and loading models You can store/load models using the standard gensim methods: End of explanation """ model = gensim.models.Word2Vec.load(temp_path) more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']] model.build_vocab(more_sentences, update=True) model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter) # cleaning up temp os.close(fs) os.remove(temp_path) """ Explanation: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing. In addition, you can load models created by the original C tool, both using its text and binary formats: model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False) # using gzipped/bz2 input works too, no need to unzip: model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True) Online training / Resuming training Advanced users can load a model and continue training it with more sentences and new vocabulary words: End of explanation """ model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1) model.doesnt_match("input is lunch he sentence cat".split()) print(model.similarity('human', 'party')) print(model.similarity('tree', 'murder')) """ Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate. Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there. 
Using the model Word2Vec supports several word similarity tasks out of the box: End of explanation """ print(model.predict_output_word(['emergency', 'beacon', 'received'])) """ Explanation: You can get the probability distribution for the center word given the context words as input: End of explanation """ model['tree'] # raw NumPy vector of a word """ Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words. If you need the raw output vectors in your application, you can access these either on a word-by-word basis: End of explanation """
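Under the hood, model.similarity(w1, w2) is, in essence, the cosine similarity between the two raw word vectors (the same arrays you get from model['tree']). A standalone sketch with plain NumPy:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: dot product over norms."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(a, a))    # ~1.0: identical direction
print(cosine_similarity(a, -a))   # ~-1.0: opposite direction
print(cosine_similarity(a, np.array([-2.0, 1.0, 0.0])))  # ~0.0: orthogonal
```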
gaufung/PythonStandardLibrary
mathematic/math.ipynb
mit
import math print('pi', math.pi) print('e', math.e) print('nan', math.nan) print('inf', math.inf) """ Explanation: The math module implements many of the IEEE functions that would normally be found in the native platform C libraries for complex mathematical operations using floating point values, including logarithms and trigonometric operations. Special Constants End of explanation """ import math print('{:^3} {:6} {:6} {:6}'.format( 'e', 'x', 'x**2', 'isinf')) print('{:-^3} {:-^6} {:-^6} {:-^6}'.format( '', '', '', '')) for e in range(0, 201, 20): x = 10.0 ** e y = x * x print('{:3d} {:<6g} {:<6g} {!s:6}'.format( e, x, y, math.isinf(y), )) x = 10.0 ** 200 print('x=',x) print('x*x=', x*x) print('x**2=', end='') try: print(x**2) except OverflowError as err: print(err) """ Explanation: Exceptional Value End of explanation """ import math INPUTS = [ (1000, 900, 0.1), (100, 90, 0.1), (10, 9, 0.1), (1, 0.9, 0.1), (0.1, 0.09, 0.1), ] print('{:^8} {:^8} {:^8} {:^8} {:^8} {:^8}'.format( 'a', 'b', 'rel_tol', 'abs(a-b)', 'tolerance', 'close') ) print('{:-^8} {:-^8} {:-^8} {:-^8} {:-^8} {:-^8}'.format( '-', '-', '-', '-', '-', '-'), ) fmt = '{:8.2f} {:8.2f} {:8.2f} {:8.2f} {:8.2f} {!s:>8}' for a, b, rel_tol in INPUTS: close = math.isclose(a, b, rel_tol=rel_tol) tolerance = rel_tol * max(abs(a), abs(b)) abs_diff = abs(a - b) print(fmt.format(a, b, rel_tol, abs_diff, tolerance, close)) import math INPUTS = [ (1.0, 1.0 + 1e-07, 1e-08), (1.0, 1.0 + 1e-08, 1e-08), (1.0, 1.0 + 1e-09, 1e-08), ] print('{:^8} {:^11} {:^8} {:^10} {:^8}'.format( 'a', 'b', 'abs_tol', 'abs(a-b)', 'close') ) print('{:-^8} {:-^11} {:-^8} {:-^10} {:-^8}'.format( '-', '-', '-', '-', '-'), ) for a, b, abs_tol in INPUTS: close = math.isclose(a, b, abs_tol=abs_tol) abs_diff = abs(a - b) print('{:8.2f} {:11} {:8} {:0.9f} {!s:>8}'.format( a, b, abs_tol, abs_diff, close)) """ Explanation: Comparing abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) End of explanation """ import math HEADINGS = ('i', 'int', 
'trunc', 'floor', 'ceil')
print('{:^5} {:^5} {:^5} {:^5} {:^5}'.format(*HEADINGS))
print('{:-^5} {:-^5} {:-^5} {:-^5} {:-^5}'.format(
    '', '', '', '', '',
))
fmt = '{:5.1f} {:5.1f} {:5.1f} {:5.1f} {:5.1f}'
TEST_VALUES = [
    -1.5,
    -0.8,
    -0.5,
    -0.2,
    0,
    0.2,
    0.5,
    0.8,
    1,
]
for i in TEST_VALUES:
    print(fmt.format(
        i,
        int(i),
        math.trunc(i),
        math.floor(i),
        math.ceil(i),
    ))
"""
Explanation: Converting Floating Point values to Integers
End of explanation
"""
import math

values = [0.1] * 10

print('Input values:', values)
print('sum()       : {:.20f}'.format(sum(values)))

s = 0.0
for i in values:
    s += i
print('for-loop    : {:.20f}'.format(s))

print('math.fsum() : {:.20f}'.format(math.fsum(values)))
"""
Explanation: Commonly Used Calculations
End of explanation
"""
import math

print('{:^7} {:^7} {:^7}'.format(
    'Degrees', 'Radians', 'Expected'))
print('{:-^7} {:-^7} {:-^7}'.format(
    '', '', ''))

INPUTS = [
    (0, 0),
    (30, math.pi / 6),
    (45, math.pi / 4),
    (60, math.pi / 3),
    (90, math.pi / 2),
    (180, math.pi),
    (270, 3 / 2.0 * math.pi),
    (360, 2 * math.pi),
]

for deg, expected in INPUTS:
    print('{:7d} {:7.2f} {:7.2f}'.format(
        deg,
        math.radians(deg),
        expected,
    ))

import math

INPUTS = [
    (0, 0),
    (math.pi / 6, 30),
    (math.pi / 4, 45),
    (math.pi / 3, 60),
    (math.pi / 2, 90),
    (math.pi, 180),
    (3 * math.pi / 2, 270),
    (2 * math.pi, 360),
]

print('{:^8} {:^8} {:^8}'.format(
    'Radians', 'Degrees', 'Expected'))
print('{:-^8} {:-^8} {:-^8}'.format('', '', ''))

for rad, expected in INPUTS:
    print('{:8.2f} {:8.2f} {:8.2f}'.format(
        rad,
        math.degrees(rad),
        expected,
    ))
"""
Explanation: Angle
End of explanation
"""
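The comparison rule quoted in the Comparing section above can be written out directly; this sketch mirrors the documented formula for math.isclose (not CPython's actual C implementation) and checks itself against the real function:

```python
import math

def isclose_sketch(a, b, rel_tol=1e-09, abs_tol=0.0):
    # Documented rule: the gap must fit within the larger of the relative
    # tolerance (scaled by the bigger magnitude) and the absolute tolerance.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Agrees with math.isclose on the tutorial's inputs.
for a, b, rel_tol in [(1000, 900, 0.1), (100, 90, 0.1), (1, 0.9, 0.1)]:
    assert isclose_sketch(a, b, rel_tol=rel_tol) == math.isclose(a, b, rel_tol=rel_tol)
print("matches math.isclose on these inputs")
```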
survey-methods/samplics
docs/source/tutorial/estimation.ipynb
mit
from IPython.core.display import Image, display
import numpy as np
import pandas as pd

import samplics
from samplics.datasets import Nhanes2, Nhanes2brr, Nhanes2jk, Nmihs
from samplics.estimation import TaylorEstimator, ReplicateEstimator
"""
Explanation: Estimation of population parameters
The objective of this tutorial is to illustrate the use of the samplics estimation APIs. There are two main classes: TaylorEstimator and ReplicateEstimator. The former class uses linearization methods to estimate the variance of population parameters, while the latter uses replicate-based methods (bootstrap, BRR/Fay, and jackknife) to estimate the variance.
End of explanation
"""
# Load Nhanes sample data
nhanes2_cls = Nhanes2()
nhanes2_cls.load_data()
nhanes2 = nhanes2_cls.data

nhanes2.head(15)
"""
Explanation: Taylor approximation <a name="section1"></a>
End of explanation
"""
zinc_mean_str = TaylorEstimator("mean")
zinc_mean_str.estimate(
    y=nhanes2["zinc"],
    samp_weight=nhanes2["finalwgt"],
    stratum=nhanes2["stratid"],
    psu=nhanes2["psuid"],
    remove_nan=True,
)

print(zinc_mean_str)
"""
Explanation: We calculate the survey mean of the level of zinc using Stata and we get the following
Using samplics, the same estimate can be obtained using the snippet of code below.
End of explanation
"""
zinc_mean_nostr = TaylorEstimator("mean")
zinc_mean_nostr.estimate(
    y=nhanes2["zinc"], samp_weight=nhanes2["finalwgt"], psu=nhanes2["psuid"], remove_nan=True
)

print(zinc_mean_nostr)
"""
Explanation: If we remove the stratum parameter, then with Stata we get the following
and with samplics, we get ...
The other parameters currently implemented in TaylorEstimator are TOTAL, PROPORTION and RATIO.
TOTAL and PROPORTION have the same function call as the MEAN parameter. For the RATIO parameter, it is necessary to provide the parameter x. End of explanation """ # Load NMIHS sample data nmihs_cls = Nmihs() nmihs_cls.load_data() nmihs = nmihs_cls.data nmihs.head(15) """ Explanation: Replicate-based variance estimation <a name="section2"></a> Bootstrap <a name="section21"></a> End of explanation """ # rep_wgt_boot = nmihsboot.loc[:, "bsrw1":"bsrw50"] birthwgt = ReplicateEstimator("bootstrap", "mean").estimate( y=nmihs["birth_weight"], samp_weight=nmihs["finalwgt"], rep_weights=nmihs.loc[:, "bsrw1":"bsrw50"], remove_nan=True, ) print(birthwgt) """ Explanation: Let's estimate the average birth weight using the bootstrap weights. End of explanation """ # Load NMIHS sample data nhanes2brr_cls = Nhanes2brr() nhanes2brr_cls.load_data() nhanes2brr = nhanes2brr_cls.data nhanes2brr.head(15) """ Explanation: Balanced repeated replication (BRR) <a name="section22"></a> End of explanation """ brr = ReplicateEstimator("brr", "ratio") ratio_wgt_hgt = brr.estimate( y=nhanes2brr["weight"], samp_weight=nhanes2brr["finalwgt"], x=nhanes2brr["height"], rep_weights=nhanes2brr.loc[:, "brr_1":"brr_32"], remove_nan=True, ) print(ratio_wgt_hgt) """ Explanation: Let's estimate the average birth weight using the BRR weights. End of explanation """ # Load NMIHS sample data nhanes2jk_cls = Nhanes2jk() nhanes2jk_cls.load_data() nhanes2jk = nhanes2jk_cls.data nhanes2jk.head(15) """ Explanation: Jackknife <a name="section23"></a> End of explanation """ jackknife = ReplicateEstimator("jackknife", "ratio") ratio_wgt_hgt2 = jackknife.estimate( y=nhanes2jk["weight"], samp_weight=nhanes2jk["finalwgt"], x=nhanes2jk["height"], rep_weights=nhanes2jk.loc[:, "jkw_1":"jkw_62"], rep_coefs=0.5, remove_nan=True, ) print(ratio_wgt_hgt2) """ Explanation: In this case, stratification was used to calculate the jackknife weights. The stratum variable is not indicated in the dataset or survey design description. 
However, it says that the number of strata is 31 and the number of replicates is 62. Hence, the jackknife replicate coefficient is $(n_h - 1) / n_h = (2-1) / 2 = 0.5$. Now we can call estimate() and specify rep_coefs=0.5.
End of explanation
"""
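The coefficient arithmetic above generalizes to any number of PSUs per stratum; a small sketch (the helper name is ours, not part of samplics):

```python
def jackknife_rep_coef(n_psus_per_stratum):
    """Delete-one jackknife coefficient (n_h - 1) / n_h for a stratum with n_h PSUs."""
    n_h = n_psus_per_stratum
    if n_h < 2:
        raise ValueError("need at least 2 PSUs per stratum")
    return (n_h - 1) / n_h

print(jackknife_rep_coef(2))  # -> 0.5, matching rep_coefs=0.5 above
```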
empet/Plotly-plots
Les-miserables-network.ipynb
gpl-3.0
import igraph as ig """ Explanation: A 3D graph representing the network of coappearances of characters in Victor Hugo's novel Les Miserables ## We define our graph as an igraph.Graph object. Python igraph is a library for high-performance graph generation and analysis. End of explanation """ import json data = [] with open('miserables.json') as f: for line in f: data.append(json.loads(line)) data=data[0] data print data.keys() """ Explanation: Read graph data from a json file: End of explanation """ N=len(data['nodes']) N """ Explanation: Get the number of nodes: End of explanation """ L=len(data['links']) Edges=[(data['links'][k]['source'], data['links'][k]['target']) for k in range(L)] """ Explanation: Define the list of edges: End of explanation """ G=ig.Graph(Edges, directed=False) """ Explanation: Define the Graph object from Edges: End of explanation """ data['nodes'][0] labels=[] group=[] for node in data['nodes']: labels.append(node['name']) group.append(node['group']) """ Explanation: Extract the node attributes, 'group', and 'name': End of explanation """ layt=G.layout('kk', dim=3) """ Explanation: Get the node positions, set by the Kamada-Kawai layout for 3D graphs: End of explanation """ layt[5] """ Explanation: layt is a list of three elements lists (the coordinates of nodes): End of explanation """ Xn=[layt[k][0] for k in range(N)]# x-coordinates of nodes Yn=[layt[k][1] for k in range(N)]# y-coordinates Zn=[layt[k][2] for k in range(N)]# z-coordinates Xe=[] Ye=[] Ze=[] for e in Edges: Xe+=[layt[e[0]][0],layt[e[1]][0], None]# x-coordinates of edge ends Ye+=[layt[e[0]][1],layt[e[1]][1], None] Ze+=[layt[e[0]][2],layt[e[1]][2], None] import plotly.plotly as py from plotly.graph_objs import * trace1=Scatter3d(x=Xe, y=Ye, z=Ze, mode='lines', line=Line(color='rgb(125,125,125)', width=1), hoverinfo='none' ) trace2=Scatter3d(x=Xn, y=Yn, z=Zn, mode='markers', name='actors', marker=Marker(symbol='dot', size=6, color=group, colorscale='Viridis', 
line=Line(color='rgb(50,50,50)', width=0.5) ), text=labels, hoverinfo='text' ) axis=dict(showbackground=False, showline=False, zeroline=False, showgrid=False, showticklabels=False, title='' ) layout = Layout( title="Network of coappearances of characters in Victor Hugo's novel<br> Les Miserables (3D visualization)", width=1000, height=1000, showlegend=False, scene=Scene( xaxis=XAxis(axis), yaxis=YAxis(axis), zaxis=ZAxis(axis), ), margin=Margin( t=100 ), hovermode='closest', annotations=Annotations([ Annotation( showarrow=False, text="Data source: <a href='http://bost.ocks.org/mike/miserables/miserables.json'>[1]</a>", xref='paper', yref='paper', x=0, y=0.1, xanchor='left', yanchor='bottom', font=Font( size=14 ) ) ]), ) data=Data([trace1, trace2]) py.sign_in('empet', 'jkxft90od0') fig=Figure(data=data, layout=layout) py.plot(fig, filename='Les-Miserables') """ Explanation: Set data for the Plotly plot of the graph: End of explanation """ from IPython.core.display import HTML def css_styling(): styles = open("./custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: <div> <a href="https://plot.ly/~empet/9059/" target="_blank" title="Network of coappearances of characters in Victor Hugo&#39;s novel&lt;br&gt; Les Miserables (3D visualization)" style="display: block; text-align: center;"><img src="https://plot.ly/~empet/9059.png" alt="Network of coappearances of characters in Victor Hugo&#39;s novel&lt;br&gt; Les Miserables (3D visualization)" style="max-width: 100%;width: 1000px;" width="1000" onerror="this.onerror=null;this.src='https://plot.ly/404.png';" /></a> <script data-plotly="empet:9059" src="https://plot.ly/embed.js" async></script> </div> End of explanation """
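The Xe/Ye/Ze bookkeeping above relies on Plotly's convention that a None entry breaks a line trace, so a single Scatter3d can draw every edge as a disconnected segment. A toy 2D sketch of that pattern (hypothetical node positions):

```python
# Each edge contributes its two endpoint coordinates plus a None separator,
# so one trace renders many disconnected segments.
pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}  # toy node positions
edges = [(0, 1), (1, 2)]

Xe, Ye = [], []
for s, t in edges:
    Xe += [pos[s][0], pos[t][0], None]
    Ye += [pos[s][1], pos[t][1], None]

print(len(Xe))  # 3 entries per edge -> 6
```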
wbbeyourself/cn-deep-learning
dog-project/dog_app.ipynb
mit
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob

# define a function to load the train, test and validation datasets
def load_dataset(path):
    data = load_files(path)
    dog_files = np.array(data['filenames'])
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets

# load the train, test and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')

# load the list of dog breeds
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]

# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
"""
Explanation: Convolutional Neural Networks (CNN)
Project: Write a Dog Breed Identification Algorithm for an App
In this notebook, some template code has already been provided for you, but you will need to implement additional functionality to complete the project. Unless explicitly required, you need not modify any of the code provided. Headings that begin with '(Exercise)' indicate that the code section that follows contains functionality you must implement. These sections come with detailed guidance, and the parts to implement are marked with 'TODO' in the comments. Please read all of the hints carefully.
Besides implementing code, you will also need to answer some questions related to the project and the code. Every question that needs answering is marked with 'Question X'. Please read each question carefully and write a complete answer in the 'Answer' section that follows the question. We will grade your submission based on your answers to the questions and the functionality implemented in your code.
Tip: Code and Markdown cells can be run with the Shift + Enter shortcut. In addition, a Markdown cell can be edited by double-clicking it.
Parts of the project marked as _Optional_ can help your project stand out, rather than merely meeting the minimum passing requirements. If you decide to pursue the extra challenge, please complete the _Optional_ code in this notebook.
Let's get started
In this notebook, you will take the first steps toward developing an algorithm that could be used as part of a mobile or web application. At the end of this project, your program will be able to take any user-supplied image as input. If a dog can be detected in the image, it will output a prediction of the dog's breed. If the image contains a human face, it will predict the dog breed that face most resembles. The image below shows possible outputs of the finished project. (... in fact, we hope every student's output is different!)
In the real world, you will need to piece together a series of models to perform different tasks; for example, the algorithm used to predict dog breeds is different from the one used to detect humans. While working on the project, you may encounter quite a few failed predictions, because perfect algorithms and models do not exist. The imperfect solution you ultimately submit is sure to give you an interesting learning experience!
Project Contents
We break this notebook into separate steps. You can use the links below to navigate the notebook.
Step 0: Import Datasets
Step 1: Detect Humans
Step 2: Detect Dogs
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Step 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)
Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
Step 6: Write Your Algorithm
Step 7: Test Your Algorithm
The project contains the following questions:
Question 1
Question 2
Question 3
Question 4
Question 5
Question 6
Question 7
Question 8
Question 9
Question 10
Question 11
<a id='step0'></a>
Step 0: Import Datasets
Import the Dog Dataset
In the code cell below, we import a dataset of dog images. We use the load_files function from the scikit-learn library to populate a few variables:
- train_files, valid_files, test_files - numpy arrays containing file paths to images
- train_targets, valid_targets, test_targets - numpy arrays containing one-hot-encoded classification labels
- dog_names - a list of string-valued dog breed names corresponding to the labels
End of explanation
"""
import random
random.seed(8675309)

# load the filenames of the shuffled human face dataset
human_files = np.array(glob("lfw/*/*"))
random.shuffle(human_files)

# print dataset statistics
print('There are %d total human images.' % len(human_files))
"""
Explanation: Import the Human Dataset
In the code cell below, we import the human face image dataset; the file paths are stored in a numpy array named human_files.
End of explanation
"""
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

# extract the pre-trained face detection model
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load a color (BGR channel order) image
img = cv2.imread(human_files[3])

# convert the BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in the image
faces = face_cascade.detectMultiScale(gray)

# print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# get the bounding box of each detected face
for (x,y,w,h) in faces:
    # draw the bounding box on the face image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

# convert the BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image with the bounding boxes
plt.imshow(cv_rgb)
plt.show()
"""
Explanation: <a id='step1'></a>
Step 1: Detect Humans
We will use Haar feature-based cascade classifiers from OpenCV to detect human faces in images. OpenCV provides many pre-trained face detection models, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory.
In the code cell below, we demonstrate how to use this detector to find human faces in a sample image.
End of explanation
"""
# returns "True" if a face is detected in the image at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
"""
Explanation: Before using any of the detection models, it is standard procedure to convert the images to grayscale. The detectMultiScale function runs the classifier stored in face_cascade on the input grayscale image.
In the code above, faces is a numpy array holding the detected face information. Each row represents one detected face and contains four values: the first two entries, x and y, are the x and y coordinates of the top-left corner of the bounding box (see the figure above, and note that the y axis points in the opposite direction of the usual convention); the last two entries, w and h, are the extents of the bounding box along the x and y axes.
Write a Face Detector
We can wrap this procedure into a function. The function takes the path to a face image as input, returns True when the image contains a human face, and returns False otherwise. The function is defined below.
End of explanation
"""
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
## Please do not modify the code above

## TODO: Test the performance of face_detector on the images
## in human_files_short and dog_files_short
"""
Explanation: (Exercise) Assess the Face Detection Model
<a id='question1'></a>
Question 1:
In the code block below, use the face_detector function to compute:
- What percentage of the first 100 images in human_files have a detected human face?
- What percentage of the first 100 images in dog_files have a detected human face?
Ideally, the probability of detecting a face in a human image should be 100%, and the probability of detecting a face in a dog image should be 0%. You will find that our algorithm is not perfect, but the results are still acceptable. We extract the file paths of the first 100 images from each dataset and store them in human_files_short and dog_files_short.
End of explanation
"""
## (Optional) TODO: Report the performance of another face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
"""
Explanation: <a id='question2'></a>
Question 2:
As far as the algorithm is concerned, the key to its success is whether the user can provide face images with clear facial features. Do you think such a requirement is reasonable for users in practice? If you think it is unreasonable, can you come up with a method that can detect human faces even when the image does not contain clear facial features?
Answer:
<a id='Selection1'></a>
Optional:
We suggest using OpenCV's face detection model in your algorithm to detect human images, but you are free to explore other approaches, especially trying to solve it with deep learning :). Please use the code cell below to design and test your face detection algorithm. If you decide to complete this optional task, you need to report the performance of your algorithm on each dataset.
End of explanation
"""
from keras.applications.resnet50 import ResNet50

# define the ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
"""
Explanation: <a id='step2'></a>
Step 2: Detect Dogs
In this section, we use a pre-trained ResNet-50 model to detect dogs in images. The first line of code below downloads the network architecture of the ResNet-50 model, along with weights pre-trained on the ImageNet dataset.
ImageNet is currently a very popular dataset, often used to benchmark algorithms for image classification and other computer vision tasks. It contains over 10 million URLs, each linking to an image of an object from one of 1000 categories. Given an input image, this ResNet-50 model returns a prediction for the object contained in the image.
End of explanation
"""
from keras.preprocessing import image
from tqdm import tqdm

def path_to_tensor(img_path):
    # load an RGB image as a PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert the PIL.Image.Image type to a 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert the 3D tensor to a 4D tensor with shape (1, 224, 224, 3) and return it
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
"""
Explanation: Data Preprocessing
When using TensorFlow as the backend, Keras CNNs require a 4D array (also referred to as a 4D tensor) as input, with shape (nb_samples, rows, columns, channels), where nb_samples is the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels of each image, respectively.
The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable as input to a Keras CNN. Because our input images are color images, they have three channels (channels is 3).
The function first reads an image and resizes it to a 224×224 image. Next, the image is converted to a tensor with four dimensions. For any input image, the returned tensor has shape (1, 224, 224, 3).
The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape (nb_samples, 224, 224, 3). Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. You can also think of nb_samples as the number of 3D tensors in the dataset (each 3D tensor corresponds to a different image).
End of explanation
"""
from keras.applications.resnet50 import preprocess_input, decode_predictions

def ResNet50_predict_labels(img_path):
    # return the prediction vector for the image located at img_path
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))
"""
Explanation: Making Predictions with ResNet-50
Before the 4D tensors obtained through the steps above can be supplied to ResNet-50, or any other similar pre-trained model in Keras, some additional processing is required:
1. First, these images have their channels in RGB order, and we need to reorder the channels to BGR.
2. Second, the input to a pre-trained model undergoes an additional normalization step. We therefore also normalize these tensors by subtracting the mean pixel [103.939, 116.779, 123.68] (expressed in RGB order and computed from all ImageNet images) from every pixel of every image.
The imported preprocess_input function implements these steps. If you are curious, you can check the code for preprocess_input here.
Now that we have a way to process the images, we can use the model to make predictions. This is accomplished with the predict method, which returns a vector whose i-th entry is the probability that the image belongs to the i-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.
By taking the argmax of the predicted vector (i.e., finding the index of the maximum probability value), we obtain an integer corresponding to the object class predicted by the model. Using this dictionary, we can then identify the specific dog breed.
End of explanation
"""
def dog_detector(img_path):
    prediction = ResNet50_predict_labels(img_path)
    return ((prediction <= 268) & (prediction >= 151))
"""
Explanation: Write a Dog Detector
While examining the dictionary, you will notice that the categories corresponding to dogs have indices 151-268. Thus, to check whether the pre-trained model predicts that an image contains a dog, we need only check whether the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).
We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image and False otherwise.
End of explanation
"""
### TODO: Test the performance of dog_detector on human_files_short and dog_files_short
"""
Explanation: (Assignment) Assess the Dog Detector
<a id='question3'></a>
Question 3:
In the code block below, use the dog_detector function to compute:
- What percentage of the images in human_files_short have a detected dog?
- What percentage of the images in dog_files_short have a detected dog?
End of explanation
"""
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
"""
Explanation: <a id='step3'></a>
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to go a step further and identify the breed of a dog. In this step, you will implement a convolutional neural network that classifies dog breeds. You must build your CNN from scratch (at this stage, you may not use transfer learning), and you must attain a test accuracy of at least 1%. In Step 5 of this project, you will have the opportunity to use transfer learning to create a model with greatly improved accuracy.
Be careful not to add too many (trainable) layers. More parameters mean longer training time, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time each epoch is likely to take; you can extrapolate this estimate to figure out how long your algorithm will take to train.
It is worth noting that classifying dog images is an exceptionally challenging task, since even a human would have great difficulty distinguishing, say, a Brittany from a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, Labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these differently colored dogs as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents a very low bar: setting aside the slight class imbalance, a random guess will pick the correct breed with probability 1/133, which corresponds to an accuracy of less than 1%.
Remember that in the field of deep learning, practice matters far more than theory. Try out many different architectures and trust your intuition. And, of course, have fun!
Data Preprocessing
We rescale the images by dividing every pixel in every image by 255.
End of explanation
"""
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential

model = Sequential()

### TODO: Define your architecture.

model.summary()

## compile the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: (Exercise) Model Architecture
Create a CNN to classify dog breeds. At the end of your code block, execute model.summary() to print a summary of your model.
We have imported some Python libraries that you may find useful; feel free to import more if needed. If you get stuck, here is a small hint: the model below can attain more than 1% test accuracy within 5 epochs and trains quickly on a CPU.
<a id='question4'></a>
Question 4:
In the code block below, try to build the architecture of a convolutional network with Keras, and answer the related questions.
- You can try to build a convolutional network of your own design; in that case, describe the specific steps you took to build it (which layers you used) and why you built it that way.
- Alternatively, you can build the network following the steps suggested in the figure above; in that case, explain why the architecture above can perform well on this problem.
Answer:
End of explanation
"""
from keras.callbacks import ModelCheckpoint

### TODO: specify the number of epochs to train the model
epochs = None

### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
                               verbose=1, save_best_only=True)

model.fit(train_tensors, train_targets,
          validation_data=(valid_tensors, valid_targets),
          epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)

## load the model with the best validation loss
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
"""
Explanation: (Exercise) Train the Model
<a id='question5'></a>
Question 5:
Train your model in the code cell below. Use model checkpointing to save the model that attains the lowest validation loss.
Optional: you may also augment the training data to optimize the model's performance.
End of explanation
"""
# get the index of the predicted dog breed for each image in the test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]

# report the test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
"""
Explanation: Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
End of explanation
"""
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
"""
Explanation: <a id='step4'></a>
Step 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)
Using transfer learning can help us greatly reduce training time without sacrificing accuracy. In the following steps, you can try to use transfer learning to train your own CNN.
Obtain Bottleneck Features
End of explanation
"""
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))

VGG16_model.summary()

## compile the model
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

## train the model
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
                               verbose=1, save_best_only=True)

VGG16_model.fit(train_VGG16, train_targets,
                validation_data=(valid_VGG16, valid_targets),
                epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)

## load the model with the best validation loss
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
"""
Explanation: Model Architecture
This model uses the pre-trained VGG-16 model as a fixed image feature extractor, where the output of the last convolutional layer of VGG-16 is fed directly into our model. We only add a global average pooling layer and a fully connected layer, where the latter uses a softmax activation function and contains one node for each dog breed.
End of explanation
"""
# get the index of the predicted dog breed for each image in the test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]

# report the test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
"""
Explanation: Test the Model
Now we can test how well this CNN identifies breeds within our test dataset of dog images. We print the test accuracy below.
End of explanation
"""
from extract_bottleneck_features import *

def VGG16_predict_breed(img_path):
    # extract the bottleneck features
    bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
    # obtain the predicted vector
    predicted_vector = VGG16_model.predict(bottleneck_feature)
    # return the dog breed that is predicted by the model
    return dog_names[np.argmax(predicted_vector)]
"""
Explanation: Predict Dog Breeds with the Model
End of explanation
"""
### TODO: Obtain bottleneck features from another pre-trained CNN.
"""
Explanation: <a id='step5'></a>
Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breeds from images. Your CNN must attain at least 60% accuracy on the test set.
In Step 4, we used transfer learning to create a CNN using bottleneck features extracted with VGG-16. In this section, you must use the bottleneck features of a different pre-trained model. To make this task easier, we have pre-computed the features for several of the networks that are currently available in Keras:
- VGG-19 bottleneck features
- ResNet-50 bottleneck features
- Inception bottleneck features
- Xception bottleneck features
The files are named:
Dog{network}Data.npz
where {network} can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the bottleneck_features/ directory.
(Exercise) Obtain the Model's Bottleneck Features
In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following code:
bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
End of explanation
"""
### TODO: Define your architecture.

### TODO: Compile the model.
"""
Explanation: (Exercise) Model Architecture
Create a CNN to classify dog breeds. At the end of your code block, print a summary of the network structure by running:
&lt;your model's name&gt;.summary()
<a id='question6'></a>
Question 6:
In the code block below, try to build the final network architecture with Keras, describe the steps you took to arrive at your final CNN architecture and the role of each step, and explain why you chose this architecture for transfer learning.
Answer:
End of explanation
"""
### TODO: Train the model.

### TODO: Load the model weights with the best validation loss.
"""
Explanation: (Exercise) Train the Model
<a id='question7'></a>
Question 7:
Train your model in the code cell below. Use model checkpointing to save the model that attains the lowest validation loss.
Of course, you may also augment the training data to optimize the model's performance, though this is not a required step.
End of explanation
"""
### TODO: Calculate classification accuracy on the test set.
"""
Explanation: (Exercise) Test the Model
<a id='question8'></a>
Question 8:
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
End of explanation
"""
### TODO: Write a function that takes the path to an image as input
### and returns the dog breed that is predicted by the model.
"""
Explanation: (Exercise) Predict Dog Breeds with the Model
Implement a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc.) that is predicted by your model.
Similar to the analogous function in Step 5, your function should have three steps:
1. Extract the bottleneck features corresponding to the chosen CNN model.
2. Supply the bottleneck features as input to your model and return the predicted vector. Note that taking the argmax of this vector returns the index of the dog breed.
3. Use the dog_names array defined in Step 0 to return the corresponding breed name.
The functions used to extract the bottleneck features can be found in extract_bottleneck_features.py, and they should already have been imported in an earlier code block. Based on the CNN network you chose, you can use the extract_{network} function to obtain the corresponding image features, where {network} is one of VGG19, Resnet50, InceptionV3, or Xception.
<a id='question9'></a>
Question 9:
End of explanation
"""
### TODO: Design your algorithm.
### Feel free to use as many code cells as needed.
"""
Explanation: <a id='step6'></a>
Step 6: Write Your Algorithm
Write an algorithm that accepts a file path to an image and determines whether the image contains a human, a dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed;
- if a human is detected in the image, return the most resembling dog breed;
- if neither is detected in the image, output an error message.
You are welcome to write your own functions for detecting humans and dogs in images, and you are free to use the face_detector and dog_detector functions completed above. You are required to use your CNN from Step 5 to predict the dog breed.
Sample output of the algorithm is provided below, but feel free to design your own model!
<a id='question10'></a>
Question 10:
Complete your code in the code block below.
End of explanation
"""
## TODO: On your computer, run your algorithm from Step 6 on at least 6 images.
## Feel free to use as many code cells as needed.
"""
Explanation: <a id='step7'></a>
Step 7: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, can it predict your dog's breed accurately? If you have a cat, will it mistakenly classify your cat as a dog?
<a id='question11'></a>
Question 11:
Write code below to test your algorithm on at least 6 real-world images. You may use any photos you like, but please use at least two human images (with the permission of the people involved) and two dog images.
Please also answer the following questions:
- Is the output better than you expected :) ? Or worse :( ?
- Propose at least three possible improvements to your model.
End of explanation
"""
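The decision logic described in Step 6 can be sketched as plain control flow. This is only a sketch: the detectors and breed predictor are passed in as parameters (stubbed with lambdas here), standing in for the face_detector, dog_detector, and transfer-learning predictor defined in the notebook.

```python
# Sketch of the Step 6 decision logic. The three callables stand in for
# the real face_detector / dog_detector / breed predictor from above.
def classify(img_path, dog_detector, face_detector, predict_breed):
    if dog_detector(img_path):
        # a dog was detected: report the predicted breed
        return "dog: predicted breed is {}".format(predict_breed(img_path))
    if face_detector(img_path):
        # a human was detected: report the most resembling breed
        return "human: most resembles a {}".format(predict_breed(img_path))
    # neither detected: report an error
    return "error: neither a dog nor a human face was detected"

msg = classify("example.jpg",
               dog_detector=lambda p: True,
               face_detector=lambda p: False,
               predict_breed=lambda p: "Beagle")
```

In the notebook itself the lambdas would be replaced by the trained detectors, so the same function covers all three required cases.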
rabernat/xgcm
doc/example_mitgcm.ipynb
mit
import xarray as xr
import numpy as np
import xgcm
from matplotlib import pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (10,6)
"""
Explanation: MITgcm Example
xgcm is developed in close coordination with the xmitgcm package. The metadata in datasets constructed by xmitgcm should always be compatible with xgcm's expectations. xmitgcm is necessary for reading MITgcm's binary MDS file format. However, for this example, the MDS files have already been converted and saved as netCDF.
Below are some examples of how to make calculations on mitgcm-style datasets using xgcm.
First we import xarray and xgcm:
End of explanation
"""
Divergence
The divergence of the horizontal flow is expressed as
$$ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} $$
In discrete form, using MITgcm notation, the formula for the C-grid is
$$ ( \delta_i \Delta y_g \Delta r_f h_w u + \delta_j \Delta x_g \Delta r_f h_s v ) / A_c$$
First we calculate the volume transports in each direction:
End of explanation
"""
display(u_transport.dims)
display(v_transport.dims)
"""
Explanation: The u_transport DataArray is on the left point of the X axis, while the v_transport DataArray is on the left point of the Y axis.
End of explanation
"""
div_uv = (grid.diff(u_transport, 'X') + grid.diff(v_transport, 'Y')) / ds.rA
div_uv.dims
"""
Explanation: Now comes the xgcm magic: we take the diff along both axes and divide by the cell area element to find the divergence of the horizontal flow. Note how this new variable is at the cell center point.
End of explanation
"""
div_uv[0, 0].plot()
"""
Explanation: We plot this near the surface and observe the expected pattern of divergence at the equator and in the subpolar regions, and convergence in the subtropical gyres.
End of explanation
"""
zeta = (-grid.diff(ds.UVEL * ds.dxC, 'Y') + grid.diff(ds.VVEL * ds.dyC, 'X'))/ds.rAz
zeta
"""
Explanation: Vorticity
The vertical component of the vorticity is a fundamental quantity of interest in ocean circulation theory. It is defined as
$$ \zeta = - \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \ . $$
On the c-grid, a finite-volume representation is given by
$$ \zeta = (- \delta_j \Delta x_c u + \delta_i \Delta y_c v ) / A_\zeta \ . $$
In xgcm, we calculate this quantity as
End of explanation
"""
zeta_bt = (zeta * ds.drF).sum(dim='Z')
zeta_bt.plot(vmax=2e-4)
"""
Explanation: ...which we can see is located at the YG, XG horizontal position (also commonly called the vorticity point). We plot the vertical integral of this quantity, i.e.
the barotropic vorticity:
End of explanation
"""
u_bt = (ds.UVEL * ds.hFacW * ds.drF).sum(dim='Z')
v_bt = (ds.VVEL * ds.hFacS * ds.drF).sum(dim='Z')
zeta_bt_alt = (-grid.diff(u_bt * ds.dxC, 'Y') + grid.diff(v_bt * ds.dyC, 'X'))/ds.rAz
zeta_bt_alt.plot(vmax=2e-4)
"""
Explanation: A different way to calculate the barotropic vorticity is to take the curl of the vertically integrated velocity. This formulation also allows us to incorporate the $h$ factors representing partial cell thickness.
End of explanation
"""
strain = (grid.diff(ds.UVEL * ds.dyG, 'X') - grid.diff(ds.VVEL * ds.dxG, 'Y')) / ds.rA
strain[0,0].plot()
"""
Explanation: Strain
Another interesting quantity is the horizontal strain, defined as
$$ s = \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \ . $$
On the c-grid, a finite-volume representation is given by
$$ s = (\delta_i \Delta y_g u - \delta_j \Delta x_g v ) / A_c \ . $$
End of explanation
"""
psi = grid.cumsum(-u_bt * ds.dyG, 'Y', boundary='fill')
psi
"""
Explanation: Barotropic Transport Streamfunction
We can use the barotropic velocity to calculate the barotropic transport streamfunction, defined via
$$ u_{bt} = - \frac{\partial \Psi}{\partial y} \ , \ \ v_{bt} = \frac{\partial \Psi}{\partial x} \ .$$
We calculate this by integrating $u_{bt}$ along the Y axis using the grid object's cumsum method:
End of explanation
"""
(psi[0] / 1e6).plot.contourf(levels=np.arange(-160, 40, 5))
"""
Explanation: We see that xgcm automatically shifted the Y-axis position from center (YC) to left (YG) during the cumsum operation. We convert to sverdrups and plot with a contour plot.
End of explanation
"""
maskZ = grid.interp(ds.hFacS, 'X')
(psi[0] / 1e6).where(maskZ[0]).plot.contourf(levels=np.arange(-160, 40, 5))
"""
Explanation: This doesn't look nice because it lacks a suitable land mask. The dataset has no mask provided for the vorticity point. But we can build one with xgcm!
End of explanation
"""
ke = 0.5*(grid.interp((ds.UVEL*ds.hFacW)**2, 'X') + grid.interp((ds.VVEL*ds.hFacS)**2, 'Y'))
ke[0,0].where(ds.maskC[0]).plot()
"""
Explanation: Kinetic Energy
Finally, we plot the kinetic energy $1/2 (u^2 + v^2)$ by interpolating both quantities to the cell center point.
End of explanation
"""
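On a single periodic axis, the grid.diff and grid.interp operations used throughout this example reduce to finite differences and midpoint averages with wraparound. A dependency-free sketch of that behaviour (plain Python lists, not using xgcm itself, and only intended to mimic the left-aligned periodic case):

```python
# Finite difference and midpoint interpolation on a periodic axis,
# mimicking the wraparound behaviour of grid.diff / grid.interp along 'X'.
def periodic_diff(a):
    # value minus its left neighbour; index -1 wraps to the last element
    return [a[i] - a[i - 1] for i in range(len(a))]

def periodic_interp(a):
    # midpoint between each value and its left neighbour, with wraparound
    return [0.5 * (a[i] + a[i - 1]) for i in range(len(a))]

a = [1.0, 3.0, 6.0, 10.0]
d = periodic_diff(a)    # first entry wraps: 1.0 - 10.0 = -9.0
m = periodic_interp(a)
```

A useful sanity check is that the differences around a closed (periodic) axis sum to zero, which is the discrete analogue of the divergence of a periodic flow integrating to zero over the domain.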
trangel/Data-Science
bayesian_modeling/TRG_finding_suspect.ipynb
gpl-3.0
try:
    import google.colab
    IN_COLAB = True
except:
    IN_COLAB = False
if IN_COLAB:
    print("Downloading Colab files")
    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/bayesian-methods-for-ml/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    setup_google_colab.load_data_final_project()
    ! pip install GPy gpyopt

import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output
import tensorflow as tf
import GPy
import GPyOpt
import keras
from keras.layers import Input, Dense, Lambda, InputLayer, concatenate, Activation, Flatten, Reshape
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, Deconv2D
from keras.losses import MSE
from keras.models import Model, Sequential
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.utils import np_utils
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import utils
import os
%matplotlib inline
"""
Explanation: First things first
Click File -> Save a copy in Drive and click Open in new tab in the pop-up window to save your progress in Google Drive.
Click Runtime -> Change runtime type and select GPU in Hardware accelerator box to enable faster GPU training.
Final project: Finding the suspect
<a href="https://en.wikipedia.org/wiki/Facial_composite">Facial composites</a> are widely used in forensics to generate images of suspects. Since the victim or witness usually isn't good at drawing, computer-aided generation is applied to reconstruct the face of the attacker. One of the most commonly used techniques is evolutionary systems that compose the final face from many predefined parts.
In this project, we will try to implement an app for creating a facial composite that will be able to construct desired faces without explicitly providing databases of templates.
We will apply Variational Autoencoders and Gaussian processes for this task.
The final project is developed in a way that lets you apply the learned techniques to a real project yourself. We will include the main guidelines and hints, but a great part of the project will need your creativity and experience from previous assignments.
Setup
Load auxiliary files and then install and import the necessary libraries.
End of explanation
"""
sess = tf.InteractiveSession()
K.set_session(sess)

latent_size = 8

! shred -u CelebA_VAE_small_8.h5; wget https://github.com/hse-aml/bayesian-methods-for-ml/releases/download/v0.1/CelebA_VAE_small_8.h5 -O CelebA_VAE_small_8.h5

vae, encoder, decoder = utils.create_vae(batch_size=128, latent=latent_size)
sess.run(tf.global_variables_initializer())
vae.load_weights('CelebA_VAE_small_8.h5')
K.set_learning_phase(False)

latent_placeholder = tf.placeholder(tf.float32, (1, latent_size))
decode = decoder(latent_placeholder)
"""
Explanation: Grading
As some of the final project tasks can be graded only visually, the final assignment is graded using the peer-review procedure. You will be asked to upload your Jupyter notebook on the web and attach a link to it in the submission form. Detailed submission instructions and grading criteria are written at the end of this notebook.
Model description
We will first train a variational autoencoder on face images to compress them to a low dimension. One important feature of a VAE is that the constructed latent space is dense. That means that we can traverse the latent space and reconstruct any point along our path into a valid face.
Using this continuous latent space we can use Bayesian optimization to maximize some similarity function between a person's face in the victim's or witness's memory and a face reconstructed from the current point of the latent space. Bayesian optimization is an appropriate choice here since people start to forget details about the attacker after they were shown many similar photos.
Because of this, we want to reconstruct the photo with the smallest possible number of trials. Generating faces For this task, you will need to use some database of face images. There are multiple datasets available on the web that you can use: for example, <a href="http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html">CelebA</a> or <a href="http://vis-www.cs.umass.edu/lfw/">Labeled Faces in the Wild</a>. We used Aligned & Cropped version of CelebA that you can find <a href="https://www.dropbox.com/sh/8oqt9vytwxb3s4r/AADSNUu0bseoCKuxuI5ZeTl1a/Img?dl=0&preview=img_align_celeba.zip">here</a> to pretrain VAE model for you. See optional part of the final project if you wish to train VAE on your own. <b>Task 1:</b> Train VAE on faces dataset and draw some samples from it. (You can use code from previous assignments. You may also want to use convolutional encoders and decoders as well as tuning hyperparameters) End of explanation """ ### TODO: Draw 25 samples from VAE here plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i+1) z = np.random.normal(size=(1, latent_size)) image = sess.run(decode, feed_dict={latent_placeholder : z})[0] ### YOUR CODE HERE plt.imshow(np.clip(image, 0, 1)) plt.axis('off') """ Explanation: GRADED 1 (3 points): Draw 25 samples from trained VAE model As the first part of the assignment, you need to become familiar with the trained model. For all tasks, you will only need a decoder to reconstruct samples from a latent space. To decode the latent variable, you need to run decode operation defined above with random samples from a standard normal distribution. 
End of explanation """ class FacialComposit: def __init__(self, decoder, latent_size): self.latent_size = latent_size self.latent_placeholder = tf.placeholder(tf.float32, (1, latent_size)) self.decode = decoder(self.latent_placeholder) self.samples = None self.images = None self.rating = None def _get_image(self, latent): img = sess.run(self.decode, feed_dict={self.latent_placeholder: latent[None, :]})[0] img = np.clip(img, 0, 1) return img @staticmethod def _show_images(images, titles): assert len(images) == len(titles) clear_output() plt.figure(figsize=(3*len(images), 3)) n = len(titles) for i in range(n): plt.subplot(1, n, i+1) plt.imshow(images[i]) plt.title(str(titles[i])) plt.axis('off') plt.show() @staticmethod def _draw_border(image, w=2): bordred_image = image.copy() bordred_image[:, :w] = [1, 0, 0] bordred_image[:, -w:] = [1, 0, 0] bordred_image[:w, :] = [1, 0, 0] bordred_image[-w:, :] = [1, 0, 0] return bordred_image def query_initial(self, n_start=10, select_top=10): ''' Creates initial points for Bayesian optimization Generate *n_start* random images and asks user to rank them. Gives maximum score to the best image and minimum to the worst. :param n_start: number of images to rank initialy. 
:param select_top: number of images to keep ''' samples = np.random.normal(size=(n_start, latent_size)) images = np.array([self._get_image(samples[i]) for i in range(n_start)]) self._show_images(images, range(n_start)) rating = [] idx = [] print('Please rank images from best to worst: 0 = worst') for i in range(select_top): idx.append(i) msg = 'Ranking for image {0}: '.format(i) rank = int(input(msg)) rating.append(rank) self.samples = np.array(samples[idx]) ### YOUR CODE HERE (size: select_top x 64 x 64 x 3) self.images = np.array(images[idx]) ### YOUR CODE HERE (size: select_top x 64 x 64 x 3) self.rating = np.array(rating) ### YOUR CODE HERE (size: select_top) print(self.samples.shape, self.images.shape, self.rating.shape ) # Check that tensor sizes are correct np.testing.assert_equal(self.rating.shape, [select_top]) np.testing.assert_equal(self.images.shape, [select_top, 64, 64, 3]) np.testing.assert_equal(self.samples.shape, [select_top, self.latent_size]) def evaluate(self, candidate): ''' Queries candidate vs known image set. Adds candidate into images pool. :param candidate: latent vector of size 1xlatent_size ''' initial_size = len(self.images) ### YOUR CODE HERE ## Show user an image and ask to assign score to it. 
## You may want to show some images to user along with their scores ## You should also save candidate, corresponding image and rating candidate_image = self._get_image(candidate[0]).reshape((1, 64, 64, 3)) worst_image = self.images[np.argmin(self.rating)] best_image = self.images[np.argmax(self.rating)] images = [candidate_image[0], worst_image, best_image] titles = ['Candidate', 'Worst image', 'Best image'] self._show_images(images, titles) candidate_rating = int(input('Ranking for candidate image')) ### YOUR CODE HERE print(candidate_rating) self.images = np.append(self.images, candidate_image, axis=0) self.rating = np.append(self.rating, candidate_rating) self.samples = np.append(self.samples, candidate, axis=0) assert len(self.images) == initial_size + 1 assert len(self.rating) == initial_size + 1 assert len(self.samples) == initial_size + 1 return candidate_rating def optimize(self, n_iter=10, w=4, acquisition_type='MPI', acquisition_par=0.3): if self.samples is None: self.query_initial() bounds = [{'name': 'z_{0:03d}'.format(i), 'type': 'continuous', 'domain': (-w, w)} for i in range(self.latent_size)] optimizer = GPyOpt.methods.BayesianOptimization(f=self.evaluate, domain=bounds, acquisition_type = acquisition_type, acquisition_par = acquisition_par, exact_eval=False, # Since we are not sure model_type='GP', X=self.samples, Y=self.rating[:, None], maximize=True) optimizer.run_optimization(max_iter=n_iter, eps=-1) def get_best(self): index_best = np.argmax(self.rating) return self.images[index_best] def draw_best(self, title=''): index_best = np.argmax(self.rating) image = self.images[index_best] plt.imshow(image) plt.title(title) plt.axis('off') plt.show() """ Explanation: Search procedure Now that we have a way to reconstruct images, we need to set up an optimization procedure to find a person that will be the most similar to the one we are thinking about. To do so, we need to set up some scoring utility. 
Imagine that you want to generate an image of Brad Pitt. You start with a small number of random samples, say 5, and rank them according to their similarity to your vision of Brad Pitt: 1 for the worst, 5 for the best. You then rate image by image using GPyOpt that works in a latent space of VAE. For the new image, you need to somehow assign a real number that will show how good this image is. The simple idea is to ask a user to compare a new image with previous images (along with their scores). A user then enters score to a current image. The proposed scoring has a lot of drawbacks, and you may feel free to come up with new ones: e.g. showing user 9 different images and asking a user which image looks the "best". Note that the goal of this task is for you to implement a new algorithm by yourself. You may try different techniques for your task and select one that works the best. <b>Task 2:</b> Implement person search using Bayesian optimization. (You can use code from the assignment on Gaussian Processes) Note: try varying acquisition_type and acquisition_par parameters. End of explanation """ composit = FacialComposit(decoder, 8) composit.optimize() composit.draw_best('Darkest hair') """ Explanation: GRADED 2 (3 points): Describe your approach below: How do you assign a score to a new image? How do you select reference images to help user assign a new score? What are the limitations of your approach? How do you assing a score to a new image? This is done manually by asking the user to rank the image How do you select reference images to help user assign a new score? I show 3 images the worst image, best image and candidate image so that the user can situate the new image within the previously processeed images. What are the limitations of your approach? 
The whole approach is not deterministic, so it will randomly generate images.
The scoring process is subject to the user's judgment.
Testing your algorithm
In these sections, we will apply the implemented app to search for different people. Each task will ask you to generate images that will have some property like "dark hair" or "mustache". You will need to run your search algorithm and provide the best discovered image.
Task 3.1: Finding person with darkest hair (3 points)
End of explanation
"""
composit = FacialComposit(decoder, 8)
composit.optimize()
composit.draw_best('Widest smile')
"""
Explanation: Task 3.2. Finding person with the widest smile (3 points)
End of explanation
"""
composit = FacialComposit(decoder, 8)
composit.optimize()
composit.draw_best('Lecturer')
"""
Explanation: Task 3.3. Finding Daniil Polykovskiy or Alexander Novikov — lecturers of this course (3 points)
Note: this task highly depends on the quality of a VAE and a search algorithm. You may need to restart your search algorithm a few times and start with a larger initial set.
End of explanation
"""
### Your code here
"""
Explanation: <small>Don't forget to post resulting image of lecturers on the forum ;)</small>
Task 3.4. Finding specific person (optional, but very cool)
Now that you have a good sense of what your algorithm can do, here is an optional assignment for you. Think of a famous person and take a look at his/her picture for a minute. Then use your app to create an image of the person you thought of. You can post it in the forum <a href="https://www.coursera.org/learn/bayesian-methods-in-machine-learning/discussions/forums/SE06u3rLEeeh0gq4yYKIVA">Final project: guess who!</a>
End of explanation
"""
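Because the optimization loop above needs a live user typing ratings, it is hard to test automatically. One way to sanity-check the machinery offline is to replace the user with a synthetic rater and the GPyOpt search with plain random search — both are simplifications for illustration only, not what the assignment asks for:

```python
import random

# Synthetic "user": rates a candidate by closeness to a hidden target latent.
def make_rater(target):
    def rate(candidate):
        dist = sum((c - t) ** 2 for c, t in zip(candidate, target)) ** 0.5
        return -dist  # higher score means more similar
    return rate

# Random search standing in for the Bayesian optimizer, over the same
# bounded latent box (-w, w) used in FacialComposit.optimize above.
def search(rate, latent_size=8, n_iter=200, w=4, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        cand = [rng.uniform(-w, w) for _ in range(latent_size)]
        score = rate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

target = [0.5] * 8          # hypothetical "face in memory"
best, score = search(make_rater(target))
```

Swapping the random proposals for a GP-based acquisition function is exactly what GPyOpt contributes in the real pipeline: it should reach a comparable score in far fewer (human-rated) trials.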
saashimi/code_guild
interactive-coding-challenges/graphs_trees/bst/bst_challenge.ipynb
mit
class Node(object): def __init__(self, data): # TODO: Implement me pass def insert(root, data): # TODO: Implement me pass """ Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small> Challenge Notebook Problem: Implement a binary search tree with an insert method. Constraints Test Cases Algorithm Code Unit Test Constraints Can we assume we are working with valid integers? Yes Can we assume all left descendants <= n < all right descendants? Yes For simplicity, can we use just a Node class without a wrapper Tree class? Yes Do we have to keep track of the parent nodes? This is optional Test Cases Insert Insert will be tested through the following traversal: In-Order Traversal (Provided) 5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5 Algorithm Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code End of explanation """ %run dfs.py %run ../utils/results.py # %load test_bst.py from nose.tools import assert_equal class TestTree(object): def __init__(self): self.results = Results() def test_tree(self): node = Node(5) insert(node, 2) insert(node, 8) insert(node, 1) insert(node, 3) in_order_traversal(node, self.results.add_result) assert_equal(str(self.results), '[1, 2, 3, 5, 8]') self.results.clear_results() node = Node(1) insert(node, 2) insert(node, 3) insert(node, 4) insert(node, 5) in_order_traversal(node, self.results.add_result) assert_equal(str(self.results), '[1, 2, 3, 4, 5]') print('Success: test_tree') def main(): test = TestTree() test.test_tree() if __name__ == '__main__': main() """ Explanation: Unit Test The following unit test is expected to fail until you solve the challenge. End of explanation """
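Try the challenge yourself first. If you want to check your work before opening the solution notebook, one possible shape of the answer, following the constraint left <= n < right, looks like this (the in_order helper is a stand-in for the in_order_traversal provided by dfs.py):

```python
class Node(object):
    """Minimal BST node matching the challenge's interface."""

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None


def insert(root, data):
    """Insert data below root, keeping left <= n < right."""
    if data <= root.data:
        if root.left is None:
            root.left = Node(data)
        else:
            insert(root.left, data)
    else:
        if root.right is None:
            root.right = Node(data)
        else:
            insert(root.right, data)


def in_order(node, visit):
    """In-order traversal, calling visit on each value (stand-in for dfs.py)."""
    if node is not None:
        in_order(node.left, visit)
        visit(node.data)
        in_order(node.right, visit)
```

Inserting 5, 2, 8, 1, 3 and traversing in order yields 1, 2, 3, 5, 8, exactly the provided test case.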
matousc89/Python-Adaptive-Signal-Processing-Handbook
notebooks/adaptive_filters_realtime.ipynb
mit
import numpy as np import matplotlib.pylab as plt import padasip as pa %matplotlib inline plt.style.use('ggplot') # nicer plots np.random.seed(52102) # always use the same random seed to make results comparable """ Explanation: Adaptive Filters Real-time Use with Padasip Module This tutorial shows how to use the Padasip module for filtering and prediction with adaptive filters in real time. Let's start with importing padasip. In the following examples we will also use numpy and matplotlib. End of explanation """ def measure_x(): # input vector of size 3 x = np.random.random(3) return x def measure_d(x): # measure system output d = 2*x[0] + 1*x[1] - 1.5*x[2] return d """ Explanation: One Sample Ahead Prediction Example with the NLMS Filter Consider measurement of a variable $\textbf{d}$ at time $k$. The inputs of the system which produces this variable are also measured at every sample $\textbf{x}(k)$. We will simulate the measurement via the following functions End of explanation """ filt = pa.filters.FilterNLMS(3, mu=1.) """ Explanation: For prediction of the variable $d(k)$ it is possible to use any implemented filter (LMS, RLS, NLMS). In this case the NLMS filter is used. The filter (of size 3 in this example) can be created as follows End of explanation """ N = 100 log_d = np.zeros(N) log_y = np.zeros(N) for k in range(N): # measure input x = measure_x() # predict new value y = filt.predict(x) # do the important stuff with prediction output pass # measure output d = measure_d(x) # update filter filt.adapt(d, x) # log values log_d[k] = d log_y[k] = y """ Explanation: Now, using the logged values, we can display the learning process of the filter. 
End of explanation """
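Under the hood, each adapt call performs a normalized LMS update: predict, compute the error, then take a gradient step scaled by the input power. A minimal NumPy sketch of that rule follows; it is a conceptual illustration, not padasip's exact implementation (the regularization constant eps is an assumption):

```python
import numpy as np

def nlms_step(w, x, d, mu=1.0, eps=1e-3):
    """One normalized-LMS update: predict, compute error, adjust weights."""
    y = np.dot(w, x)                            # prediction
    e = d - y                                   # prediction error
    w = w + mu * e * x / (eps + np.dot(x, x))   # normalized gradient step
    return w, y, e

# Identify the same toy system d = 2*x0 + 1*x1 - 1.5*x2
rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(500):
    x = rng.random(3)
    d = 2*x[0] + 1*x[1] - 1.5*x[2]
    w, y, e = nlms_step(w, x, d)
print(w)  # approaches [2, 1, -1.5]
```

Because the toy system is noiseless and linear, the weights converge to the true coefficients after a few hundred samples.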
mkliegl/custom-sklearn
heavytail.ipynb
mit
from __future__ import print_function import numpy as np from sklearn.linear_model import Ridge from flexible_linear import FlexibleLinearRegression import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('ggplot') %matplotlib inline np.random.seed(1) """ Explanation: Cost function for heavy-tailed noise End of explanation """ N = 500 A = 50 B = 3 x = np.linspace(0, 100, N, dtype=float) X = x.reshape(-1, 1) # scikit-learn wants 2d arrays y = A + B * x plt.plot(x, y, '-') """ Explanation: Generate some data Our "ground truth" data is a simple linear function. End of explanation """ y_gauss = y + 20*np.random.randn(N) plt.plot(x, y, '-', x, y_gauss, '-') plt.legend(['True', 'Noisy'], loc='upper left') """ Explanation: Let's try adding Gaussian (normal) noise... End of explanation """ y_cauchy = y + 20*np.random.standard_cauchy(N) plt.plot(x, y, '-', x, y_cauchy, '.') plt.legend(['True', 'Noisy'], loc='lower right') """ Explanation: ... or some Cauchy (heavy-tailed) noise End of explanation """ clf = Ridge() clf.fit(X, y_gauss) pred = clf.predict(X) plt.plot(x, y, '-', x, pred, '-') plt.legend(['True', 'Recovered'], loc='upper left') print(" True: %.3f + %.3f * x" % (A, B)) print("Recovered: %.3f + %.3f * x" % (clf.intercept_, clf.coef_[0])) """ Explanation: Trying to recover the linear function using Ridge regression Gaussian noise For Gaussian noise the recovery works fairly well. End of explanation """ clf = Ridge() clf.fit(X, y_cauchy) pred = clf.predict(X) plt.plot(x, y, '-', x, pred, '-') plt.legend(['True', 'Recovered'], loc='upper left') print(" True: %.3f + %.3f * x" % (A, B)) print("Recovered: %.3f + %.3f * x" % (clf.intercept_, clf.coef_[0])) """ Explanation: Cauchy noise Let's try the same with the Cauchy noise. 
End of explanation """ clf = FlexibleLinearRegression(cost_func='l2', C=0.0) clf.fit(X, y_cauchy) pred = clf.predict(X) plt.plot(x, y, '-', x, pred, '-') plt.legend(['True', 'Recovered'], loc='upper left') print(" True: %.3f + %.3f * x" % (A, B)) print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1])) """ Explanation: That looks much less impressive. The problem is that the squared-$\ell^2$-norm minimized by Ridge regression is too sensitive to outliers. Using a less sensitive cost function Let $$z_i = (X \cdot W - y)_i$$ denote the residuals. Instead of the usual least-squares approach of minimizing $$\min_W \frac12 ||z||_{\ell^2}^2 = \frac12 \sum_i |z_i|^2 \,, $$ let's try to minimize the $\ell^1$ norm $$\min_W ||z||_{\ell^1} = \sum_i |z_i|$$ instead. Alternatively, since that is not smooth, we could try to minimize an objective like $$ \min_W \eta^2 \sum_i \left( \sqrt{ 1 + \left( \frac{|z_i|}{\eta} \right)^2 } - 1 \right) \,, $$ where $\eta$ is a positive scale parameter. This is a smooth convex cost function based component-wise on the Japanese bracket: $$ \langle z \rangle := \sqrt{1 + |z|^2} \,. $$ It behaves like the least-squares cost function for $|z| \ll \eta$, but grows like $O(|z|)$ for $|z| \gg \eta$. So it gives us the best of $\ell^1$ and squared-$\ell^2$: The smoothness of squared-$\ell^2$ for small values and the insensitivity to outliers of $\ell^1$ for large values. We use a custom FlexibleLinearRegression scikit-learn estimator that allows us to try these different cost functions. Note: For simplicity, we omit regularization in the above (and below by setting C=0.0). It really does not matter for this toy problem. squared $\ell^2$ cost This is pretty much the same as Ridge above (minus the regularization). It performs horribly. 
End of explanation """ clf = FlexibleLinearRegression(cost_func='l1', C=0.0) clf.fit(X, y_cauchy) pred = clf.predict(X) plt.plot(x, y, '-', x, pred, '-') plt.legend(['True', 'Recovered'], loc='upper left') print(" True: %.3f + %.3f * x" % (A, B)) print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1])) """ Explanation: $\ell^1$ cost If we switch to the $\ell^1$ cost, we start to get very good results! End of explanation """ clf = FlexibleLinearRegression(cost_func='japanese', C=0.0, cost_opts={'eta': 10.0}) clf.fit(X, y_cauchy) pred = clf.predict(X) plt.plot(x, y, '-', x, pred, '-') plt.legend(['True', 'Recovered'], loc='upper left') print(" True: %.3f + %.3f * x" % (A, B)) print("Recovered: %.3f + %.3f * x" % (clf.coef_[0], clf.coef_[1])) """ Explanation: Japanese cost If we switch to the Japanese bracket cost, we get results just as good. For this simple example, the $\ell^1$ cost already gave us good answers, but for more complex problems the smoothness of the Japanese bracket may make it a numerically safer choice. End of explanation """
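To see the two regimes of the Japanese bracket cost concretely, here is a direct transcription of the per-residual formula (a sketch for intuition, not the estimator's internals):

```python
import numpy as np

def japanese_cost(z, eta):
    """eta^2 * (sqrt(1 + (|z|/eta)^2) - 1): quadratic near 0, linear in the tails."""
    return eta**2 * (np.sqrt(1.0 + (np.abs(z) / eta)**2) - 1.0)

eta = 10.0
print(japanese_cost(0.01, eta))  # ~ 0.01**2 / 2: least-squares regime for |z| << eta
print(japanese_cost(1e4, eta))   # ~ eta * |z|: l1-like regime for |z| >> eta
```

Because large residuals are only penalized linearly, a single Cauchy outlier cannot dominate the fit the way it does under the squared-$\ell^2$ cost, while small residuals still see a smooth quadratic bowl.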
AEW2015/PYNQ_PR_Overlay
docs/source/6a_base_overlay_iop.ipynb
bsd-3-clause
from pynq import Overlay from pynq.iop import Pmod_OLED from pynq.iop import PMODB ol = Overlay("base.bit") ol.download() oled = Pmod_OLED(PMODB) """ Explanation: Using Peripherals with the Base overlay Base overlay The PYNQ-Z1 has 2 Pmod connectors. PMODA and PMODB as indicated below are connected to the FPGA fabric. Using Pmods with an overlay To use a peripheral, two software components are required: a driver application written in C for the IOP, and a Python module. These components are provided as part of the Pynq package for supported peripherals. See the IO Processors: Writing your own software section of the documentation for writing drivers for your own peripherals. The Python module instantiates the peripheral, and loads the driver application to the appropriate IOP. The IOP will also be reset and start executing the new application. The Python module will send commands which the IOP will interpret and execute. The Python module may also send the data if necessary. The IOP will read from and write data into the shared memory area. Example: Using the OLED and the Ambient Light Sensor (ALS) This example requires the PmodOLED (OLED) and the PmodALS (Ambient Light Sensor). Plug the PmodALS into PMODA, and the PmodOLED into the top row of PMODB. (Currently, the PmodALS can only be used in the top row of a Pmod port.) OLED displaying light reading from ambient light sensor: Execute the next cell to load the FPGA fabric with the desired overlay, and then import the OLED module and instantiate it on PMODB: End of explanation """ oled.write("Hello World") oled.clear() """ Explanation: Try writing a message to the OLED. End of explanation """ from pynq.iop import Pmod_ALS from pynq.iop import PMODA als = Pmod_ALS(PMODA) als.read() """ Explanation: Import the ALS library, create an instance of the ALS Pmod, and read the value from the sensor. 
End of explanation """ oled.write("Light value : " + str(als.read())) import time from pynq.iop import Pmod_ALS from pynq.iop import PMODA als = Pmod_ALS(PMODA) als.set_log_interval_ms(100) als.start_log() time.sleep(1) als.stop_log() als.get_log() """ Explanation: Write the value from the ALS to the OLED. The ALS sensor returns an 8-bit value. 0 : Darkest 255 : Brightest End of explanation """
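Since oled.write takes a plain string, the 8-bit reading can also be rendered as a text bar. The helper below is hypothetical (the function name and the 16-character line width are assumptions, not part of the Pynq API); only the 0-255 range comes from the sensor description above:

```python
def light_bar(value, width=16):
    """Render an 8-bit ALS reading (0-255) as a fixed-width text bar."""
    value = max(0, min(255, value))        # clamp to the sensor's range
    filled = round(value * width / 255)    # proportion of the bar to fill
    return '#' * filled + '-' * (width - filled)

# On hardware this would be: oled.write(light_bar(als.read()))
print(light_bar(0))    # '----------------'
print(light_bar(255))  # '################'
```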
atulsingh0/MachineLearning
python_DC/ST_Python_02b.ipynb
gpl-3.0
# import import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.set() %run ST_Python_02a.py #import ipynb.fs.full.ST_Python_02a import io from nbformat import current def execute_notebook(nbfile): with io.open(nbfile) as f: nb = current.read(f, 'json') ip = get_ipython() for cell in nb.worksheets[0].cells: if cell.cell_type != 'code': continue ip.run_cell(cell.input) #execute_notebook("ST_Python_02a.ipynb") #%run 'ST_Python_02a.ipynb' """ Explanation: Statistical Thinking in Python (Part 2) End of explanation """ def permutation_sample(data1, data2): """Generate a permutation sample from two data sets.""" # Concatenate the data sets: data data = np.concatenate((data1, data2)) # Permute the concatenated array: permuted_data permuted_data = np.random.permutation(data) # Split the permuted array into two: perm_sample_1, perm_sample_2 perm_sample_1 = permuted_data[:len(data1)] perm_sample_2 = permuted_data[len(data1):] return perm_sample_1, perm_sample_2 rain_july = np.array([ 66.2, 39.7, 76.4, 26.5, 11.2, 61.8, 6.1, 48.4, 89.2, 104. , 34. , 60.6, 57.1, 79.1, 90.9, 32.3, 63.8, 78.2, 27.5, 43.4, 30.1, 17.3, 77.5, 44.9, 92.2, 39.6, 79.4, 66.1, 53.5, 98.5, 20.8, 55.5, 39.6, 56. , 65.1, 14.8, 13.2, 88.1, 8.4, 32.1, 19.6, 40.4, 2.2, 77.5, 105.4, 77.2, 38. , 27.1, 111.8, 17.2, 26.7, 23.3, 77.2, 87.2, 27.7, 50.6, 60.3, 15.1, 6. , 29.4, 39.3, 56.3, 80.4, 85.3, 68.4, 72.5, 13.3, 28.4, 14.7, 37.4, 49.5, 57.2, 85.9, 82.1, 31.8, 126.6, 30.7, 41.4, 33.9, 13.5, 99.1, 70.2, 91.8, 61.3, 13.7, 54.9, 62.5, 24.2, 69.4, 83.1, 44. , 48.5, 11.9, 16.6, 66.4, 90. , 34.9, 132.8, 33.4, 225. , 7.6, 40.9, 76.5, 48. , 140. , 55.9, 54.1, 46.4, 68.6, 52.2, 108.3, 14.6, 11.3, 29.8, 130.9, 152.4, 61. , 46.6, 43.9, 30.9, 111.1, 68.5, 42.2, 9.8, 285.6, 56.7, 168.2, 41.2, 47.8, 166.6, 37.8, 45.4, 43.2]) rain_november = np.array([ 83.6, 30.9, 62.2, 37. , 41. , 160.2, 18.2, 122.4, 71.3, 44.2, 49.1, 37.6, 114.5, 28.8, 82.5, 71.9, 50.7, 67.7, 112. 
, 63.6, 42.8, 57.2, 99.1, 86.4, 84.4, 38.1, 17.7, 102.2, 101.3, 58. , 82. , 101.4, 81.4, 100.1, 54.6, 39.6, 57.5, 29.2, 48.8, 37.3, 115.4, 55.6, 62. , 95. , 84.2, 118.1, 153.2, 83.4, 104.7, 59. , 46.4, 50. , 147.6, 76.8, 59.9, 101.8, 136.6, 173. , 92.5, 37. , 59.8, 142.1, 9.9, 158.2, 72.6, 28. , 112.9, 119.3, 199.2, 50.7, 44. , 170.7, 67.2, 21.4, 61.3, 15.6, 106. , 116.2, 42.3, 38.5, 132.5, 40.8, 147.5, 93.9, 71.4, 87.3, 163.7, 141.4, 62.6, 84.9, 28.8, 121.1, 28.6, 32.4, 112. , 50. , 96.9, 81.8, 70.4, 117.5, 41.2, 124.9, 78.2, 93. , 53.5, 50.5, 42.6, 47.9, 73.1, 129.1, 56.9, 103.3, 60.5, 134.3, 93.1, 49.5, 48.2, 167.9, 27. , 111.1, 55.4, 36.2, 57.4, 66.8, 58.3, 60. , 161.6, 112.7, 37.4, 110.6, 56.6, 95.8, 126.8]) for i in range(50): # Generate permutation samples perm_sample_1, perm_sample_2 = permutation_sample(rain_july, rain_november) # Compute ECDFs x_1, y_1 = ecdf(perm_sample_1) x_2, y_2 = ecdf(perm_sample_2) # Plot ECDFs of permutation sample _ = plt.plot(x_1, y_1, marker='.', linestyle='none', color='red', alpha=0.02) _ = plt.plot(x_2, y_2 , marker='.', linestyle='none', color='blue', alpha=0.02) # Create and plot ECDFs from original data x_1, y_1 = ecdf(rain_july) x_2, y_2 = ecdf(rain_november) _ = plt.plot(x_1, y_1, marker='.', linestyle='none', color='red') _ = plt.plot(x_2, y_2, marker='.', linestyle='none', color='blue') # Label axes, set margin, and show plot plt.margins(0.02) _ = plt.xlabel('monthly rainfall (mm)') _ = plt.ylabel('ECDF') plt.show() """ Explanation: Permutation Sampling End of explanation """ def draw_perm_reps(data_1, data_2, func, size=1): """Generate multiple permutation replicates.""" # Initialize array of replicates: perm_replicates perm_replicates = np.empty(size) for i in range(size): # Generate permutation sample perm_sample_1, perm_sample_2 = permutation_sample(data_1, data_2) # Compute the test statistic perm_replicates[i] = func(perm_sample_1, perm_sample_2) return perm_replicates force_a =np.array([ 1.612, 0.605, 0.327, 
0.946, 0.541, 1.539, 0.529, 0.628, 1.453, 0.297, 0.703, 0.269, 0.751, 0.245, 1.182, 0.515, 0.435, 0.383, 0.457, 0.73 ]) force_b = np.array([ 0.172, 0.142, 0.037, 0.453, 0.355, 0.022, 0.502, 0.273, 0.72 , 0.582, 0.198, 0.198, 0.597, 0.516, 0.815, 0.402, 0.605, 0.711, 0.614, 0.468]) def diff_of_means(data_1, data_2): """Difference in means of two arrays.""" # The difference of means of data_1, data_2: diff diff = np.mean(data_1) - np.mean(data_2) return diff # Compute difference of mean impact force from experiment: empirical_diff_means empirical_diff_means = diff_of_means(force_a, force_b) # Draw 10,000 permutation replicates: perm_replicates perm_replicates = draw_perm_reps(force_a, force_b, diff_of_means, size=10000) # Compute p-value: p p = np.sum(perm_replicates >= empirical_diff_means) / len(perm_replicates) # Print the result print('p-value =', p) """ Explanation: Pipeline for hypothesis testing ● Clearly state the null hypothesis ● Define your test statistic ● Generate many sets of simulated data assuming the null hypothesis is true ● Compute the test statistic for each simulated data set ● The p-value is the fraction of your simulated data sets for which the test statistic is at least as extreme as for the real data End of explanation """ # Make an array of translated impact forces: translated_force_b translated_force_b = force_b - np.mean(force_b) + 0.55 # Take bootstrap replicates of Frog B's translated impact forces: bs_replicates bs_replicates = draw_bs_reps(translated_force_b, np.mean, 10000) # Compute fraction of replicates that are less than the observed Frog B force: p p = np.sum(bs_replicates <= np.mean(force_b)) / 10000 # Print the p-value print('p = ', p) """ Explanation: A one-sample bootstrap hypothesis test Another juvenile frog was studied, Frog C, and you want to see if Frog B and Frog C have similar impact forces. Unfortunately, you do not have Frog C's impact forces available, but you know they have a mean of 0.55 N. 
Because you don't have the original data, you cannot do a permutation test, and you cannot assess the hypothesis that the forces from Frog B and Frog C come from the same distribution. You will therefore test another, less restrictive hypothesis: The mean strike force of Frog B is equal to that of Frog C. To set up the bootstrap hypothesis test, you will take the mean as your test statistic. Remember, your goal is to calculate the probability of getting a mean impact force less than or equal to what was observed for Frog B if the hypothesis that the true mean of Frog B's impact forces is equal to that of Frog C is true. You first translate all of the data of Frog B such that the mean is 0.55 N. This involves adding the mean force of Frog C and subtracting the mean force of Frog B from each measurement of Frog B. This leaves other properties of Frog B's distribution, such as the variance, unchanged. End of explanation """ # Compute difference of mean impact force from experiment: empirical_diff_means empirical_diff_means = diff_of_means(force_a, force_b) # Concatenate forces: forces_concat forces_concat = np.concatenate((force_a, force_b)) # Initialize bootstrap replicates: bs_replicates bs_replicates = np.empty(10000) for i in range(10000): # Generate bootstrap sample bs_sample = np.random.choice(forces_concat, size=len(forces_concat)) # Compute replicate bs_replicates[i] = diff_of_means(bs_sample[:len(force_a)], bs_sample[len(force_a):]) # Compute and print p-value: p p = np.sum(bs_replicates >= empirical_diff_means) / len(bs_replicates) print('p-value =', p) """ Explanation: A bootstrap test for identical distributions In the video, we looked at a one-sample test, but we can do two-sample tests. We can even test the same hypothesis that we tested with a permutation test: that Frog A and Frog B have identically distributed impact forces. 
To do this test on two arrays with n1 and n2 entries, we do a very similar procedure to a permutation test. We concatenate the arrays, generate a bootstrap sample from it, and take the first n1 entries of the bootstrap sample as belonging to the first data set and the last n2 as belonging to the second. We then compute the test statistic, e.g., the difference of means, to get a bootstrap replicate. The p-value is the fraction of bootstrap replicates for which the test statistic is at least as extreme as what was observed. Now, you will perform a bootstrap test of the hypothesis that Frog A and Frog B have identical distributions of impact forces using the difference of means test statistic. The two arrays are available to you as force_a and force_b. 
This, too, is impossible with a permutation test. To do the two-sample bootstrap test, we shift both arrays to have the same mean, since we are simulating the hypothesis that their means are, in fact, equal. We then draw bootstrap samples out of the shifted arrays and compute the difference in means. This constitutes a bootstrap replicate, and we generate many of them. The p-value is the fraction of replicates with a difference in means greater than or equal to what was observed. The objects forces_concat and empirical_diff_means are already in your namespace. End of explanation """
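The hypothesis-testing pipeline bullets earlier in this section (state the null, pick a statistic, simulate, count extreme replicates) can be packaged into one reusable function. This is a sketch in plain NumPy; the function name and signature are our own, not from the course code:

```python
import numpy as np

def permutation_test(data_1, data_2, stat, size=10000, rng=None):
    """p-value of `stat` under the null that both samples share a distribution."""
    rng = np.random.default_rng(rng)
    observed = stat(data_1, data_2)
    pooled = np.concatenate((data_1, data_2))
    n1 = len(data_1)
    replicates = np.empty(size)
    for i in range(size):
        perm = rng.permutation(pooled)           # shuffle the pooled data
        replicates[i] = stat(perm[:n1], perm[n1:])
    # fraction of replicates at least as extreme as the observation
    return np.sum(replicates >= observed) / size

# Two clearly separated samples should give a tiny p-value
p = permutation_test(np.zeros(20), np.ones(20),
                     lambda a, b: np.mean(b) - np.mean(a), size=1000, rng=0)
print(p)  # close to 0
```

Any test statistic can be dropped in, which is exactly the versatility of the permutation approach discussed above.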
DawesLab/LabNotebooks
Jones Calculus for EIT Setup.ipynb
mit
qwp = np.matrix([[1, 0],[0, -1j]]) R(-np.pi/4)*qwp*R(np.pi/4) qwp45 = wp(np.pi/2, np.pi/4) qwp45 wp(np.pi/2, 0) vpol = np.matrix([[0,0],[0,1]]) vpol np.exp(1j*np.pi/4) horiz = np.matrix([[1],[0]]) output = qwp*horiz intensity(output) before_cell = wp(np.pi/2,np.pi/4)*wp(np.pi,np.pi/10)*horiz output = vpol*wp(np.pi/2,-np.pi/4)*before_cell intensity(output) """ Explanation: Begin testing here End of explanation """ ivals = [] thetas = np.linspace(0,np.pi) for theta in thetas: # vpol quarter quarter half input output = vpol*wp(np.pi/2-0.5,theta)*horiz ivals.append(intensity(output)) plt.plot(thetas,ivals) plt.title("rotating qwp w/error between crossed pols") ivals = [] thetas = np.linspace(0,np.pi/2) for phi in [0.3,0.4,0.5,0.6,0.7]: # try a range of phase errors to compare ivals = [] for theta in thetas: # vpol quarter quarter half input output = vpol*wp(np.pi/2 - phi,-theta)*wp(np.pi/2 - phi,theta)*wp(np.pi - phi,np.pi/19)*horiz ivals.append(intensity(output)) plt.plot(thetas,ivals,label=phi) plt.legend() plt.ylabel("I output") plt.xlabel("qwp1 angle (rad)") # Rotating 780 QWP between crossed pols # at 795 nm ivals = [] thetas = np.linspace(0,np.pi) for phi in [0.0,0.3,0.4,0.5,0.6,0.7]: ivals = [] for theta in thetas: output = vpol*wp(np.pi/2 - phi,theta)*horiz ivals.append(intensity(output)) plt.plot(thetas,ivals,label=phi) plt.legend() plt.ylabel("I output") plt.xlabel("qwp angle (rad)") plt.title("QWP w/ phase error") """ Explanation: As given by newport (https://www.newport.com/f/quartz-zero-order-waveplates), the wave error at 795 (vs 780) corresponds to a normalized wavelength of 1.02, giving wave error of -0.08 waves. That corresponds to a phase error of 2pi*(-0.08) = 0.5 radians. 
We'll explore the effect of this phase error below: End of explanation """ # try to plot vectors for the polarization components fig = plt.figure() ax = fig.add_axes([0.1, 0.1, 0.9, 0.9], polar=True) r = np.arange(0, 3.0, 0.01) theta = 2*np.pi*r ax.set_rmax(1.2) #plt.grid(True) # arrow at 0 arr1 = plt.arrow(0, 0, 0, 1, alpha = 0.5, width=0.03, length_includes_head=True, edgecolor = 'black', facecolor = 'red', zorder = 5) # arrow at 45 degree arr2 = plt.arrow(np.pi/4, 0, 0, 1, alpha = 0.5, width=0.03, length_includes_head=True, edgecolor = 'black', facecolor = 'blue', zorder = 5) plt.show() from qutip import * %matplotlib notebook # Start horizontal pol, propagate through system: phi = 0.03 theta = pi/4 out = wp(np.pi/2 - phi,theta)*wp(np.pi,pi/19)*horiz state = Qobj(out) b = Bloch() b.set_label_convention("polarization jones") b.add_states(state) b.show() 2*pi*0.25 - 2*pi*0.245 """ Explanation: So we should expect about 0.25 maximum intensity through our 780 waveplate. tinkering below here End of explanation """
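The cells above call R, wp and intensity without defining them in this excerpt. A plausible reconstruction, consistent with qwp = diag(1, -1j) and the identity R(-pi/4)*qwp*R(pi/4) == wp(pi/2, pi/4) used above, is sketched below (the rotation sign convention is an assumption; the notebook's originals may differ by a global phase):

```python
import numpy as np

def R(theta):
    """Rotation of the Jones basis by angle theta."""
    return np.matrix([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])

def wp(phi, theta):
    """Waveplate with retardance phi and fast axis at angle theta."""
    retarder = np.matrix([[1, 0], [0, np.exp(-1j * phi)]])
    return R(-theta) * retarder * R(theta)

def intensity(jones_vector):
    """Total intensity |Ex|^2 + |Ey|^2 of a Jones vector."""
    return float(np.sum(np.abs(np.asarray(jones_vector))**2))

# A lossless waveplate preserves intensity
print(intensity(wp(np.pi/2, np.pi/4) * np.matrix([[1], [0]])))  # stays ~1.0
```

With these definitions, wp(np.pi/2, 0) reproduces the quarter-wave matrix at the start of the section, and a half-wave plate at 45 degrees flips horizontal to vertical polarization, as the crossed-polarizer scans assume.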
ellisonbg/leafletwidget
examples/DrawControl.ipynb
mit
dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}}, rectangle={'shapeOptions': {'color': '#0000FF'}}, circle={'shapeOptions': {'color': '#0000FF'}}, circlemarker={}, ) def handle_draw(target, action, geo_json): print(action) print(geo_json) dc.on_draw(handle_draw) m.add_control(dc) """ Explanation: Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style. End of explanation """ dc.last_action dc.last_draw """ Explanation: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives. End of explanation """ dc.clear_circles() dc.clear_polylines() dc.clear_rectangles() dc.clear_markers() dc.clear_polygons() dc.clear() """ Explanation: It's possible to remove all drawings from the map. End of explanation """ m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px')) m2 """ Explanation: Let's draw a second map and try to import this GeoJSON data into it. End of explanation """ map_center_link = link((m, 'center'), (m2, 'center')) map_zoom_link = link((m, 'zoom'), (m2, 'zoom')) new_poly = GeoJSON(data=dc.last_draw) m2.add_layer(new_poly) """ Explanation: We can use link to synchronize traitlets of the two maps: End of explanation """ dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={}, circle={'shapeOptions': {'color': '#0000FF'}}) m2.add_control(dc2) """ Explanation: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details. Now let's add a DrawControl to this second map. 
For fun we will disable lines and enable circles as well and change the style a bit. End of explanation """
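Instead of just printing inside the draw handler, you can keep a running list of drawn features. The sketch below uses the same (target, action, geo_json) handler signature shown earlier; the store itself and the exact action strings ('created'/'deleted', inferred from the created/edited/deleted actions mentioned above) are assumptions, not part of the widget API:

```python
def make_feature_store():
    """Return (features, handler): the handler keeps `features` in sync
    with create/delete draw events."""
    features = []

    def handler(target, action, geo_json):
        if action == 'created':
            features.append(geo_json)
        elif action == 'deleted' and geo_json in features:
            features.remove(geo_json)

    return features, handler

# On a live map: features, on_draw = make_feature_store(); dc.on_draw(on_draw)
features, on_draw = make_feature_store()
on_draw(None, 'created', {'type': 'Feature', 'id': 1})
on_draw(None, 'created', {'type': 'Feature', 'id': 2})
on_draw(None, 'deleted', {'type': 'Feature', 'id': 1})
print(len(features))  # 1
```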
cdt15/lingam
examples/BottomUpParceLiNGAM.ipynb
mit
import numpy as np import pandas as pd import graphviz import lingam from lingam.utils import print_causal_directions, print_dagc, make_dot import warnings warnings.filterwarnings('ignore') print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__]) np.set_printoptions(precision=3, suppress=True) """ Explanation: BottomUpParceLiNGAM Import and settings In this example, we need to import numpy, pandas, and graphviz in addition to lingam. End of explanation """ np.random.seed(1000) x6 = np.random.uniform(size=1000) x3 = 2.0*x6 + np.random.uniform(size=1000) x0 = 0.5*x3 + np.random.uniform(size=1000) x2 = 2.0*x6 + np.random.uniform(size=1000) x1 = 0.5*x0 + 0.5*x2 + np.random.uniform(size=1000) x5 = 0.5*x0 + np.random.uniform(size=1000) x4 = 0.5*x0 - 0.5*x2 + np.random.uniform(size=1000) # The latent variable x6 is not included. X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T, columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X.head() m = np.array([[0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0], [0.5, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0], [0.5, 0.0,-0.5, 0.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) dot = make_dot(m) # Save pdf dot.render('dag') # Save png dot.format = 'png' dot.render('dag') dot """ Explanation: Test data First, we generate a causal structure with 7 variables. Then we create a dataset with 6 variables from x0 to x5, with x6 being the latent variable for x2 and x3. End of explanation """ model = lingam.BottomUpParceLiNGAM() model.fit(X) """ Explanation: Causal Discovery To run causal discovery, we create a BottomUpParceLiNGAM object and call the fit method. End of explanation """ model.causal_order_ """ Explanation: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery. x2 and x3, which have latent confounders as parents, are stored in a list without causal ordering. 
End of explanation """ model.adjacency_matrix_ """ Explanation: Also, using the adjacency_matrix_ property, we can see the adjacency matrix as a result of the causal discovery. The coefficients between variables with latent confounders are np.nan. End of explanation """ make_dot(model.adjacency_matrix_) """ Explanation: We can draw a causal graph with a utility function. End of explanation """ p_values = model.get_error_independence_p_values(X) print(p_values) """ Explanation: Independence between error variables To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$. End of explanation """ import warnings warnings.filterwarnings('ignore', category=UserWarning) model = lingam.BottomUpParceLiNGAM() result = model.bootstrap(X, n_sampling=100) """ Explanation: Bootstrapping We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samplings. End of explanation """ cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True) """ Explanation: Causal Directions Since a BootstrapResult object is returned, we can get the ranking of the causal directions extracted by the get_causal_direction_counts() method. In the following sample code, the n_directions option is limited to the causal directions of the top 8 rankings, and the min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more. End of explanation """ print_causal_directions(cdc, 100) """ Explanation: We can check the result with a utility function. 
End of explanation """ dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True) """ Explanation: Directed Acyclic Graphs Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more. End of explanation """ print_dagc(dagc, 100) """ Explanation: We can check the result by utility function. End of explanation """ prob = result.get_probabilities(min_causal_effect=0.01) print(prob) """ Explanation: Probability Using the get_probabilities() method, we can get the probability of bootstrapping. End of explanation """ causal_effects = result.get_total_causal_effects(min_causal_effect=0.01) # Assign to pandas.DataFrame for pretty display df = pd.DataFrame(causal_effects) labels = [f'x{i}' for i in range(X.shape[1])] df['from'] = df['from'].apply(lambda x : labels[x]) df['to'] = df['to'].apply(lambda x : labels[x]) df """ Explanation: Total Causal Effects Using the get_total_causal_effects() method, we can get the list of total causal effect. The total causal effects we can get are dictionary type variable. We can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below. End of explanation """ df.sort_values('effect', ascending=False).head() df.sort_values('probability', ascending=True).head() """ Explanation: We can easily perform sorting operations with pandas.DataFrame. End of explanation """ df[df['to']=='x1'].head() """ Explanation: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1. 
End of explanation """ import matplotlib.pyplot as plt import seaborn as sns sns.set() %matplotlib inline from_index = 0 # index of x0 to_index = 5 # index of x5 plt.hist(result.total_effects_[:, to_index, from_index]) """ Explanation: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below. End of explanation """ from_index = 3 # index of x3 to_index = 1 # index of x0 pd.DataFrame(result.get_paths(from_index, to_index)) """ Explanation: Bootstrap Probability of Path Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [3, 0, 1] shows the path from variable X3 through variable X0 to variable X1. End of explanation """
metpy/MetPy
v0.11/_downloads/8532b75251585046a16f04a9afaef079/Advanced_Sounding.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import pandas as pd import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, SkewT from metpy.units import units """ Explanation: Advanced Sounding Plot a sounding using MetPy with more advanced features. Beyond just plotting data, this uses calculations from metpy.calc to find the lifted condensation level (LCL) and the profile of a surface-based parcel. The area between the ambient profile and the parcel profile is colored as well. End of explanation """ col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed'] df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False), skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names) # Drop any rows with all NaN values for T, Td, winds df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all' ).reset_index(drop=True) """ Explanation: Upper air data can be obtained using the siphon package, but for this example we will use some of MetPy's sample data. End of explanation """ p = df['pressure'].values * units.hPa T = df['temperature'].values * units.degC Td = df['dewpoint'].values * units.degC wind_speed = df['speed'].values * units.knots wind_dir = df['direction'].values * units.degrees u, v = mpcalc.wind_components(wind_speed, wind_dir) """ Explanation: We will pull the data out of the example dataset into individual variables and assign units. End of explanation """ fig = plt.figure(figsize=(9, 9)) add_metpy_logo(fig, 115, 100) skew = SkewT(fig, rotation=45) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot. skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) skew.ax.set_xlim(-40, 60) # Calculate LCL height and plot as black dot. 
Because `p`'s first value is # ~1000 mb and its last value is ~250 mb, the `0` index is selected for # `p`, `T`, and `Td` to lift the parcel from the surface. If `p` was inverted, # i.e. start from low value, 250 mb, to a high value, 1000 mb, the `-1` index # should be selected. lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0]) skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black') # Calculate full parcel profile and add to plot as black line prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC') skew.plot(p, prof, 'k', linewidth=2) # Shade areas of CAPE and CIN skew.shade_cin(p, T, prof) skew.shade_cape(p, T, prof) # An example of a slanted line at constant T -- in this case the 0 # isotherm skew.ax.axvline(0, color='c', linestyle='--', linewidth=2) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Show the plot plt.show() """ Explanation: Create a new figure. The dimensions here give a good aspect ratio. End of explanation """
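For reference, mpcalc.wind_components above follows the standard meteorological convention, where direction is the bearing the wind blows from. A plain-numpy sketch of that convention (without pint units, and not MetPy's actual implementation) looks like:

```python
import numpy as np

def wind_components(speed, direction_deg):
    # u, v from speed and meteorological direction (degrees FROM which
    # the wind blows); the minus signs convert "from" to "toward".
    theta = np.deg2rad(direction_deg)
    u = -speed * np.sin(theta)  # eastward component
    v = -speed * np.cos(theta)  # northward component
    return u, v

# a westerly (270 deg) wind blows toward the east, so u > 0 and v ~ 0
u, v = wind_components(np.array([10.0]), np.array([270.0]))
print(u, v)
```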
sevamoo/SOMPY
sompy/examples/.ipynb_checkpoints/AirFlights_hexagonal_grid-checkpoint.ipynb
apache-2.0
%matplotlib inline import math import glob import matplotlib.pyplot as plt import numpy as np import pandas as pd import urllib3 import joblib import random import matplotlib from sompy.sompy import SOMFactory from sompy.visualization.plot_tools import plot_hex_map import logging """ Explanation: Study of flight delay causes with Self Organizing Maps - Example of hexagonal lattice This notebook is intended to be a brief guide on how to use Self Organizing Maps with the SOMPY library in Python. We are going to use a hexagonal lattice in this example in order to understand the main causes of flight cancellations. Data description The U.S. Department of Transportation's (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled and diverted flights appears in DOT's monthly Air Travel Consumer Report, published about 30 days after the month's end, as well as in summary tables posted on its website. BTS began collecting details on the causes of flight delays in June 2003. Summary statistics and raw data are made available to the public at the time the Air Travel Consumer Report is released.
This version of the dataset was compiled from the Statistical Computing Statistical Graphics 2009 Data Expo and is also available here, here and here Fields description Year: 2008 Month: 1-12 DayofMonth: 1-31 DayOfWeek: 1 (Monday) - 7 (Sunday) DepTime: actual departure time (local, hhmm) CRSDepTime: scheduled departure time (local, hhmm) ArrTime: actual arrival time (local, hhmm) CRSArrTime: scheduled arrival time (local, hhmm) UniqueCarrier: unique carrier code FlightNum: flight number TailNum: plane tail number ActualElapsedTime: in minutes CRSElapsedTime: in minutes AirTime: in minutes ArrDelay: arrival delay, in minutes DepDelay: departure delay, in minutes Origin: origin IATA airport code Dest: destination IATA airport code Distance: in miles TaxiIn: taxi in time, in minutes TaxiOut: taxi out time in minutes Cancelled: was the flight cancelled? CancellationCode: reason for cancellation (A = carrier, B = weather, C = NAS, D = security) Diverted: 1 = yes, 0 = no CarrierDelay: in minutes WeatherDelay: in minutes NASDelay: National Air System delay in minutes SecurityDelay in minutes LateAircraftDelay in minutes End of explanation """ df = pd.read_csv("./DelayedFlights.csv") df = df[["Month","DayofMonth", "DayOfWeek","DepTime", "AirTime", "Distance", "SecurityDelay","WeatherDelay", "NASDelay", "CarrierDelay", "ArrDelay", "DepDelay", "LateAircraftDelay", "Cancelled"]] clustering_vars = ["Month", "DayofMonth", "DepTime", "AirTime", "LateAircraftDelay", "DepDelay", "ArrDelay", "CarrierDelay"] df = df.fillna(0) data = df[clustering_vars].values names = clustering_vars df.describe() """ Explanation: Data Processing End of explanation """ %time # Train the model with different parameters. The more, the better. 
Each iteration is stored on disk for further study for i in range(1000): sm = SOMFactory().build(data, mapsize=[random.choice(list(range(15, 25))), random.choice(list(range(10, 15)))], normalization = 'var', initialization='random', component_names=names, lattice="hexa") sm.train(n_job=4, verbose=False, train_rough_len=30, train_finetune_len=100) joblib.dump(sm, "model_{}.joblib".format(i)) # Study the trained models and plot the errors obtained in order to select the best one models_pool = glob.glob("./model*") errors=[] for model_filepath in models_pool: sm = joblib.load(model_filepath) topographic_error = sm.calculate_topographic_error() quantization_error = sm.calculate_quantization_error() errors.append((topographic_error, quantization_error)) e_top, e_q = zip(*errors) plt.scatter(e_top, e_q) plt.xlabel("Topographic error") plt.ylabel("Quantization error") plt.show() # Manually select the model with the best error profile. In this case, the #3 model has been selected because # the quantization error is distributed across 34-40u while the topographic error varies much more, # so the model with the lower topographic error has been selected. It is very important to keep the topographic # error as low as possible to assure correct prototyping. selected_model = 3 sm = joblib.load(models_pool[selected_model]) topographic_error = sm.calculate_topographic_error() quantization_error = sm.calculate_quantization_error() print ("Topographic error = %s\n Quantization error = %s" % (topographic_error, quantization_error)) """ Explanation: Model Training As the amount of data is relatively large, the model takes some time to train. We didn't finetune the hyperparameters of the algorithm, so this is a potential area for improvement.
End of explanation """ from sompy.visualization.mapview import View2D view2D = View2D(10,10,"", text_size=7) view2D.show(sm, col_sz=5, which_dim="all", denormalize=True) plt.show() """ Explanation: Results Components plane The components map shows the values of the variables for each prototype and allows us to extract conclusions consisting of non-linear patterns between variables. We have represented 2 types of components maps. - The prototypes visualization: it shows the patterns learned by the neural network which are used to determine de winning neuron of each training instance - The real visualization with exogeneous variables: it shows the real average value of the components of each lattice element. This visualization should be used with 2 purposes: (i) compare it with the prototypes visualization to assess how good is the prototypes modeling and (ii) to add other exogeneous variables (those which have not been used to build the self organizing map) in order to study their relation with the endogeneous variables. If the quantization error is not very high and a proper visual assessment has been done assuring that the prototupes and real visualizations look very alike, the prototypes visualization can be used as a final product, since it is much visual appealing. 
Prototypes component plane End of explanation """ # Addition of some exogenous variables to the map exogeneous_vars = [c for c in df.columns if c not in clustering_vars+["Cancelled", "bmus"]] df["bmus"] = sm.project_data(data) df = df[clustering_vars + exogeneous_vars + ["Cancelled"] + ["bmus"]] empirical_codebook=df.groupby("bmus").mean().values matplotlib.rcParams.update({'font.size': 10}) plot_hex_map(empirical_codebook.reshape(sm.codebook.mapsize + [empirical_codebook.shape[-1]]), titles=df.columns[:-1], shape=[4, 5], colormap=None) plt.show() """ Explanation: Real values component plane End of explanation """ from sompy.visualization.bmuhits import BmuHitsView #sm.codebook.lattice="rect" vhts = BmuHitsView(12,12,"Hits Map",text_size=7) vhts.show(sm, anotate=True, onlyzeros=False, labelsize=7, cmap="autumn", logaritmic=False) plt.show() """ Explanation: Hits-map This visualization is very important because it shows how the instances are spread across the hexagonal lattice. The more instances fall into a cell, the more instances that cell represents and hence the more weight we should give it. End of explanation """ from sompy.visualization.hitmap import HitMapView sm.cluster(4) hits = HitMapView(12, 12,"Clustering",text_size=10, cmap=plt.cm.jet) a=hits.show(sm, anotate=True, onlyzeros=False, labelsize=7, cmap="Pastel1") plt.show() """ Explanation: Clustering This visualization helps us to focus on the groups which share similar characteristics End of explanation """
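The quantization error used earlier for model selection can be sketched as the mean distance from each training sample to its best-matching unit (BMU). This is the standard definition of the metric, assumed here rather than taken from SOMPY's internal calculate_quantization_error() code:

```python
import numpy as np

def quantization_error(data, codebook):
    # Pairwise Euclidean distances: (n_samples, n_units),
    # then the mean over each sample's nearest prototype.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
data = np.array([[1.0, 0.0], [10.0, 9.0]])
print(quantization_error(data, codebook))  # each sample is 1.0 from its BMU -> 1.0
```

A low quantization error means the prototypes sit close to the data, which is why the real and prototype component planes should look alike before the prototype view is used on its own.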
LFPy/LFPy
examples/LFPy-example-08.ipynb
gpl-3.0
# importing some modules, setting some matplotlib values for plotting. import LFPy import numpy as np import scipy.stats import matplotlib.pyplot as plt plt.rcParams.update({'font.size' : 12, 'figure.facecolor' : '1', 'figure.subplot.wspace' : 0.5, 'figure.subplot.hspace' : 0.5}) #seed for random generation np.random.seed(1234) """ Explanation: Example plot for LFPy: Passive cell model adapted from Mainen and Sejnowski (1996) This is an example script using LFPy with a passive cell model adapted from Mainen and Sejnowski, Nature 1996, for the original files, see http://senselab.med.yale.edu/modeldb/ShowModel.asp?model=2488 Here, excitatory and inhibitory neurons are distributed on different parts of the morphology, with stochastic spike times produced by the LFPy.inputgenerators.stationary_gamma() function. Same as LFPy-example-7.ipynb, just without the active conductances. Copyright (C) 2017 Computational Neuroscience Group, NMBU. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
End of explanation """ def insert_synapses(synparams, section, n, spTimesFun, args): '''find n compartments to insert synapses onto''' idx = cell.get_rand_idx_area_norm(section=section, nidx=n) #Insert synapses in an iterative fashion for i in idx: synparams.update({'idx' : int(i)}) # Some input spike train using the function call [spiketimes] = spTimesFun(**args) # Create synapse(s) and setting times using the Synapse class in LFPy s = LFPy.Synapse(cell, **synparams) s.set_spike_times(spiketimes) """ Explanation: Function declarations: End of explanation """ # define cell parameters used as input to cell-class cellParameters = { 'morphology' : 'morphologies/L5_Mainen96_wAxon_LFPy.hoc', 'cm' : 1.0, # membrane capacitance 'Ra' : 150, # axial resistance 'v_init' : -65, # initial crossmembrane potential 'passive' : True, # switch on passive mechs 'passive_parameters' : {'g_pas' : 1./30000, 'e_pas' : -65}, # passive params 'nsegs_method' : 'lambda_f',# method for setting number of segments, 'lambda_f' : 100, # segments are isopotential at this frequency 'dt' : 2**-4, # dt of LFP and NEURON simulation. 
'tstart' : -100, #start time, recorders start at t=0 'tstop' : 200, #stop time of simulation #'custom_code' : ['active_declarations_example3.hoc'], # will run this file } # Synaptic parameters taken from Hendrickson et al 2011 # Excitatory synapse parameters: synapseParameters_AMPA = { 'e' : 0, #reversal potential 'syntype' : 'Exp2Syn', #conductance based exponential synapse 'tau1' : 1., #Time constant, rise 'tau2' : 3., #Time constant, decay 'weight' : 0.005, #Synaptic weight 'record_current' : True, #record synaptic currents } # Excitatory synapse parameters synapseParameters_NMDA = { 'e' : 0, 'syntype' : 'Exp2Syn', 'tau1' : 10., 'tau2' : 30., 'weight' : 0.005, 'record_current' : True, } # Inhibitory synapse parameters synapseParameters_GABA_A = { 'e' : -80, 'syntype' : 'Exp2Syn', 'tau1' : 1., 'tau2' : 12., 'weight' : 0.005, 'record_current' : True } # where to insert, how many, and which input statistics insert_synapses_AMPA_args = { 'section' : 'apic', 'n' : 100, 'spTimesFun' : LFPy.inputgenerators.get_activation_times_from_distribution, 'args' : dict(n=1, tstart=0, tstop=cellParameters['tstop'], distribution=scipy.stats.gamma, rvs_args=dict(a=0.5, loc=0., scale=40) ) } insert_synapses_NMDA_args = { 'section' : ['dend', 'apic'], 'n' : 15, 'spTimesFun' : LFPy.inputgenerators.get_activation_times_from_distribution, 'args' : dict(n=1, tstart=0, tstop=cellParameters['tstop'], distribution=scipy.stats.gamma, rvs_args=dict(a=2, loc=0, scale=50) ) } insert_synapses_GABA_A_args = { 'section' : 'dend', 'n' : 100, 'spTimesFun' : LFPy.inputgenerators.get_activation_times_from_distribution, 'args' : dict(n=1, tstart=0, tstop=cellParameters['tstop'], distribution=scipy.stats.gamma, rvs_args=dict(a=0.5, loc=0., scale=40) ) } # Define electrode geometry corresponding to a laminar electrode, where contact # points have a radius r, surface normal vectors N, and LFP calculated as the # average LFP in n random points on each contact: N = np.empty((16, 3)) for i in 
range(N.shape[0]): N[i,] = [1, 0, 0] #normal unit vec. to contacts # put parameters in dictionary electrodeParameters = { 'sigma' : 0.3, # Extracellular potential 'x' : np.zeros(16) + 25, # x,y,z-coordinates of electrode contacts 'y' : np.zeros(16), 'z' : np.linspace(-500, 1000, 16), 'n' : 20, 'r' : 10, 'N' : N, } # Parameters for the cell.simulate() call, recording membrane- and syn.-currents simulationParameters = { 'rec_imem' : True, # Record Membrane currents during simulation } """ Explanation: Parameters etc.: Define parameters, using dictionaries. It is possible to set a few more parameters for each class or functions, but we chose to show only the most important ones here. End of explanation """ # Initialize cell instance, using the LFPy.Cell class cell = LFPy.Cell(**cellParameters) # Align apical dendrite with z-axis cell.set_rotation(x=4.98919, y=-4.33261, z=0.) # Insert synapses using the function defined earlier insert_synapses(synapseParameters_AMPA, **insert_synapses_AMPA_args) insert_synapses(synapseParameters_NMDA, **insert_synapses_NMDA_args) insert_synapses(synapseParameters_GABA_A, **insert_synapses_GABA_A_args) # perform NEURON simulation, results saved as attributes in the cell instance cell.simulate(**simulationParameters) # Initialize electrode geometry, then calculate the LFP, using the # LFPy.RecExtElectrode class. Note that now cell is given as input to electrode # and created after the NEURON simulations are finished electrode = LFPy.RecExtElectrode(cell, **electrodeParameters) print('simulating LFPs....') electrode.data = electrode.get_transformation_matrix() @ cell.imem print('done') """ Explanation: Main simulation procedure: End of explanation """ from example_suppl import plot_ex3 fig = plot_ex3(cell, electrode) # fig.savefig('LFPy-example-08.pdf', dpi=300) """ Explanation: Plot: End of explanation """
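The forward model behind electrode.get_transformation_matrix() can be sketched with the point-source approximation, phi = sum_i I_i / (4*pi*sigma*r_i), for transmembrane currents in an infinite homogeneous volume conductor. LFPy's RecExtElectrode actually uses refined line-source formulas and averages over n random points per contact, so the helper below is only an illustrative assumption; with LFPy's customary units (nA for currents, um for distances, S/m for sigma) the result comes out in mV:

```python
import numpy as np

def point_source_lfp(imem, sigma, source_pos, electrode_pos):
    # phi = sum_i I_i / (4*pi*sigma*r_i) -- point sources in an
    # infinite homogeneous medium of conductivity sigma.
    r = np.linalg.norm(source_pos - electrode_pos, axis=1)
    return np.sum(imem / (4 * np.pi * sigma * r))

# one 1 nA source 100 um above a contact at the origin, sigma = 0.3 S/m
phi = point_source_lfp(np.array([1.0]), 0.3,
                       np.array([[0.0, 0.0, 100.0]]),
                       np.array([0.0, 0.0, 0.0]))
print(phi)
```

Applying this per time step to every compartment's imem reproduces the structure of the transformation-matrix approach used above: the potential is a fixed linear map from compartment currents to contact potentials.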
PyLCARS/PythonUberHDL
myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/FirstInFirstOutMemory-checkpoint.ipynb
bsd-3-clause
from myhdl import * from myhdlpeek import Peeker import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from sympy import * init_printing() import random #https://github.com/jrjohansson/version_information %load_ext version_information %version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random #helper functions to read in the .v and .vhd generated files into python def VerilogTextReader(loc, printresult=True): with open(f'{loc}.v', 'r') as vText: VerilogText=vText.read() if printresult: print(f'***Verilog module from {loc}.v***\n\n', VerilogText) return VerilogText def VHDLTextReader(loc, printresult=True): with open(f'{loc}.vhd', 'r') as vText: VerilogText=vText.read() if printresult: print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText) return VerilogText """ Explanation: \title{First in First out (FIFO) memory in myHDL} \author{Steven K Armour} \maketitle The FIFO memory, also called queued (as in an English queue) memory, is a common write-read scheme employed with sequential memory such as time measurements. The fundamental scheme is that the first data to be written into the memory storage (RAM, etc.) is the first to be read out, followed by the second, and so on.
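Before building the hardware, the queue discipline itself can be illustrated in plain Python with collections.deque (a software analogy only, not the hardware design developed below):

```python
from collections import deque

# First-in-first-out: the first value written is the first value read,
# mirroring what the write and read pointers will do in hardware.
fifo = deque()
for sample in [0xA1, 0xB2, 0xC3]:            # writes (wptr side)
    fifo.append(sample)
reads = [fifo.popleft() for _ in range(3)]   # reads (rptr side)
print([hex(r) for r in reads])  # ['0xa1', '0xb2', '0xc3'] -- same order out as in
```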
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#References" data-toc-modified-id="References-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>References</a></span></li><li><span><a href="#Libarys-and-Helper-functions" data-toc-modified-id="Libarys-and-Helper-functions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Libarys and Helper functions</a></span></li><li><span><a href="#Writer-Pointer" data-toc-modified-id="Writer-Pointer-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Writer Pointer</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#Read-Pointer" data-toc-modified-id="Read-Pointer-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Read Pointer</a></span><ul class="toc-item"><li><span><a href="#myHDL-testing" data-toc-modified-id="myHDL-testing-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>myHDL testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#Memory-Array" data-toc-modified-id="Memory-Array-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Memory Array</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-5.1"><span 
class="toc-item-num">5.1&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-5.3"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class="toc-item"><li><span><a href="#Conversion-Issue-:" data-toc-modified-id="Conversion-Issue-:-5.3.1"><span class="toc-item-num">5.3.1&nbsp;&nbsp;</span>Conversion Issue :</a></span></li></ul></li></ul></li><li><span><a href="#Status-Signal" data-toc-modified-id="Status-Signal-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Status Signal</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6.1"><span class="toc-item-num">6.1&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-6.2"><span class="toc-item-num">6.2&nbsp;&nbsp;</span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-6.3"><span class="toc-item-num">6.3&nbsp;&nbsp;</span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#FIFO" data-toc-modified-id="FIFO-7"><span class="toc-item-num">7&nbsp;&nbsp;</span>FIFO</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-7.1"><span class="toc-item-num">7.1&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-7.2"><span class="toc-item-num">7.2&nbsp;&nbsp;</span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-7.3"><span class="toc-item-num">7.3&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class="toc-item"><li><span><a href="#Conversion-Issue-:" data-toc-modified-id="Conversion-Issue-:-7.3.1"><span 
class="toc-item-num">7.3.1&nbsp;&nbsp;</span>Conversion Issue :</a></span></li></ul></li></ul></li></ul></div> References @misc{loi le_2017, title={Verilog code for FIFO memory}, url={http://www.fpga4student.com/2017/01/verilog-code-for-fifo-memory.html}, journal={Fpga4student.com}, author={Loi Le, Van}, year={2017} } Libarys and Helper functions End of explanation """ @block def write_pointer(wr, fifo_full, wptr, fifo_we, clk, rst_n): """ Input: wr(bool):write signal fifo_full(bool): fifo full signal clk(bool): clock rst_n(bool): negtive reset signal Ouput: wptr(5bit): the write in memory pointer fifo_we(bool): the write enable indication signal """ fifo_we_i=Signal(bool(0)) @always_comb def enableLogic(): fifo_we_i.next= not fifo_full and wr #counter wptr_i=Signal(intbv(0)[5:0]) @always(clk.posedge, rst_n.negedge) def pointerUpdate(): if rst_n: wptr_i.next=0 elif fifo_we_i: wptr_i.next=wptr_i+1 else: wptr_i.next=wptr_i @always_comb def OuputBuffer(): fifo_we.next=fifo_we_i wptr.next=wptr_i return instances() """ Explanation: Writer Pointer In order to use RAM memory, we must employ memory pointers which are values stored in the FIFO that tell the FIFO where the memory is stored. The write pointer (wptr) is an incremental counter that is increased for each data entry that is added to the memory. 
Thus the write_pointer is simply a counter with some extra controls. \begin{figure} \centerline{\includegraphics[width=10cm]{WriterPointer.png}} \caption{\label{fig:WP} Writer Pointer Functional Diagram } \end{figure} End of explanation """ Peeker.clear() wr=Signal(bool(0)); Peeker(wr, 'wr') fifo_full=Signal(bool(0)); Peeker(fifo_full, 'fifo_full') wptr=Signal(intbv(0)[5:]); Peeker(wptr, 'wptr') fifo_we=Signal(bool(0)); Peeker(fifo_we, 'fifo_we') clk=Signal(bool(0)); Peeker(clk, 'clk') rst_n=Signal(bool(0)); Peeker(rst_n, 'rst_n') DUT=write_pointer(wr, fifo_full, wptr, fifo_we, clk, rst_n) def write_pointerTB(): """ myHDL only Testbench for `write_pointer` module """ @always(delay(1)) def ClkGen(): clk.next=not clk @instance def stimules(): i=0 while True: if i==0: wr.next=1 elif i==10: wr.next=0 elif i==12: wr.next=1 elif i==14: fifo_full.next=1 elif i==16: rst_n.next=1 elif i==18: rst_n.next=0 elif i==20: raise StopSimulation() i+=1 yield clk.posedge return instances() sim=Simulation(DUT, write_pointerTB(), *Peeker.instances()).run() Peeker.to_wavedrom() write_pointerData=Peeker.to_dataframe() write_pointerData=write_pointerData[write_pointerData['clk']==1] write_pointerData.drop('clk', axis=1, inplace=True) write_pointerData.reset_index(drop=True, inplace=True) write_pointerData """ Explanation: myHDL Testing End of explanation """ DUT.convert() VerilogTextReader('write_pointer'); """ Explanation: Verilog Code End of explanation """ @block def write_pointerTBV(): """ myHDL->Verilog Testbench for `write_pointer` module """ wr=Signal(bool(0)) fifo_full=Signal(bool(0)) wptr=Signal(intbv(0)[5:]) fifo_we=Signal(bool(0)) clk=Signal(bool(0)) rst_n=Signal(bool(0)) @always_comb def print_data(): print(wr, fifo_full, wptr, fifo_we, clk, rst_n) DUT=write_pointer(wr, fifo_full, wptr, fifo_we, clk, rst_n) @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if i==0: wr.next=1 elif i==10: wr.next=0 elif i==12:
wr.next=1 elif i==14: fifo_full.next=1 elif i==16: rst_n.next=1 elif i==18: rst_n.next=0 elif i==20: raise StopSimulation() else: pass i+=1 yield clk.posedge return instances() TB=write_pointerTBV() TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('write_pointerTBV'); """ Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{write_pointerRTL.png}} \caption{\label{fig:WPRTL} write_pointer RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{write_pointerSyn.png}} \caption{\label{fig:WPSYN} write_pointer Synthesized schematic; Xilinx Vivado 2017.4} \end{figure} Verilog Testbench End of explanation """ @block def read_pointer(rd, fifo_empty, rptr, fifo_rd, clk, rst_n): """ Input: rd(bool): read signal fifo_empty(bool): fifo empty signal clk(bool): clock rst_n(bool): negative reset signal Output: rptr(5bit): the read out memory pointer fifo_rd(bool): the read enable indication signal """ fifo_rd_i=Signal(bool(0)) @always_comb def enableLogic(): fifo_rd_i.next=not fifo_empty and rd rptr_i=Signal(intbv(0)[5:0]) @always(clk.posedge, rst_n.negedge) def pointerUpdate(): if rst_n: rptr_i.next=0 elif fifo_rd_i: rptr_i.next=rptr_i+1 else: rptr_i.next=rptr_i @always_comb def output(): fifo_rd.next=fifo_rd_i rptr.next=rptr_i return instances() """ Explanation: Read Pointer The Read pointer serves the same function as the write_pointer but increments the read pointer, which sequentially selects the memory location to read from.
\begin{figure} \centerline{\includegraphics[width=10cm]{ReadPointer.png}} \caption{\label{fig:RP} Read Pointer Functional Diagram } \end{figure} End of explanation """ Peeker.clear() rd=Signal(bool(0)); Peeker(rd, 'rd') fifo_empty=Signal(bool(0)); Peeker(fifo_empty, 'fifo_empty') rptr=Signal(intbv(0)[5:]); Peeker(rptr, 'rptr') fifo_rd=Signal(bool(0)); Peeker(fifo_rd, 'fifo_rd') clk=Signal(bool(0)); Peeker(clk, 'clk') rst_n=Signal(bool(0)); Peeker(rst_n, 'rst_n') DUT=read_pointer(rd, fifo_empty, rptr, fifo_rd, clk, rst_n) def read_pointerTB(): """ myHDL only Testbench for `read_pointer` module """ @always(delay(1)) def ClkGen(): clk.next=not clk @instance def stimules(): i=0 while True: if i==0: rd.next=1 elif i==10: rd.next=0 elif i==12: rd.next=1 elif i==14: fifo_empty.next=1 elif i==16: rst_n.next=1 elif i==18: rst_n.next=0 elif i==20: raise StopSimulation() i+=1 yield clk.posedge return instances() sim=Simulation(DUT, read_pointerTB(), *Peeker.instances()).run() Peeker.to_wavedrom() read_pointerData=Peeker.to_dataframe() read_pointerData=read_pointerData[read_pointerData['clk']==1] read_pointerData.drop('clk', axis=1, inplace=True) read_pointerData.reset_index(drop=True, inplace=True) read_pointerData """ Explanation: myHDL testing End of explanation """ DUT.convert() VerilogTextReader('read_pointer'); """ Explanation: Verilog Code End of explanation """ @block def read_pointerTBV(): """ myHDL -> Verilog Testbench for `read_pointer` module """ rd=Signal(bool(0)) fifo_empty=Signal(bool(0)) rptr=Signal(intbv(0)[5:]) fifo_rd=Signal(bool(0)) clk=Signal(bool(0)) rst_n=Signal(bool(0)) @always_comb def print_data(): print(rd, fifo_empty, rptr, fifo_rd, clk, rst_n) DUT=read_pointer(rd, fifo_empty, rptr, fifo_rd, clk, rst_n) @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if i==0: rd.next=1 elif i==10: rd.next=0 elif i==12: rd.next=1 elif i==14: fifo_empty.next=1 elif i==16: rst_n.next=1 elif i==18:
rst_n.next=0 elif i==20: raise StopSimulation() else: pass i+=1 yield clk.posedge return instances() TB=read_pointerTBV() TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('read_pointerTBV'); """ Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{read_pointerRTL.png}} \caption{\label{fig:RPRTL} read_pointer RTL Schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{read_pointerSYN.png}} \caption{\label{fig:RPSYN} read_pointer Synthesized schematic; Xilinx Vivado 2017.4} \end{figure} Verilog Testbench End of explanation """ @block def memory_array(data_in, fifo_we, wptr, rptr, data_out, clk, clear): """ Input: data_in(8bit): data to be written fifo_we(bool): write enable wptr(5bit): write memory address pointer rptr(5bit): read memory address pointer clk(bool): clock clear(bool): signal to clear the memory to 0 Output: data_out(8bit): data to be read out based on `rptr` """ data_out_i=[Signal(intbv(0)[8:]) for _ in range(16)] @always(clk.posedge) def uptake(): if fifo_we: data_out_i[wptr[4:]].next=data_in @always_comb def output(): data_out.next=data_out_i[rptr[4:]] @always(clear.negedge) def clearMem(): for i in range(16): data_out_i[i].next=0 return instances() """ Explanation: Memory Array The memory array is a simple RAM that uses the wptr to assign the data_in location in the RAM and the rptr to select the memory location driven to data_out. \begin{figure} \centerline{\includegraphics[width=10cm]{Memory_array.png}} \caption{\label{fig:MA} Memory Array Functional Diagram} \end{figure} End of explanation """ Peeker.clear() data_in=Signal(intbv(0)[8:]); Peeker(data_in, 'data_in') fifo_we=Signal(bool(0)); Peeker(fifo_we, 'fifo_we') wptr=Signal(intbv(0)[5:]); Peeker(wptr, 'wptr') rptr=Signal(intbv(0)[5:]); Peeker(rptr, 'rptr') data_out=Signal(intbv(0)[8:]); Peeker(data_out, 'data_out') clk=Signal(bool(0)); Peeker(clk, 'clk') clear=Signal(bool(0)); Peeker(clear, 'clear')
TestData=np.random.randint(low=data_in.min, high=data_in.max, size=16) TestData=TestData.astype(int) DUT=memory_array(data_in, fifo_we, wptr, rptr, data_out, clk, clear) def memory_arrayTB(): """ myHDL only testbench for `memory_array` module """ @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if i==0: fifo_we.next=0 elif i==2: fifo_we.next=1 elif i==13: clear.next=1 elif i==14: clear.next=0 elif i==16: raise StopSimulation() data_in.next=int(TestData[wptr]) wptr.next=wptr+1 if i!=0: rptr.next=rptr+1 i+=1 yield clk.posedge return instances() sim=Simulation(DUT, memory_arrayTB(), *Peeker.instances()).run() Peeker.to_wavedrom() memoryData=Peeker.to_dataframe() memoryData=memoryData[memoryData['clk']==1] memoryData.drop('clk', axis=1, inplace=True) memoryData.reset_index(drop=True, inplace=True) memoryData memoryData.drop([0, 1], axis=0, inplace=True) memoryData.drop(['fifo_we', 'rptr', 'wptr'], axis=1, inplace=True) memoryData.reset_index(inplace=True, drop=True) memoryData['data_out_shift-1']=np.array(memoryData.data_out.shift(-1)).astype(int) memoryData.drop(12, axis=0, inplace=True) memoryData (memoryData['data_in']==memoryData['data_out_shift-1']).all() """ Explanation: myHDL Testing End of explanation """ DUT.convert() VerilogTextReader('memory_array'); """ Explanation: Verilog Code End of explanation """ @block def memory_arrayTBV(): """ myHDL -> verilog testbench for `memory_array` module """ data_in=Signal(intbv(0)[8:]) fifo_we=Signal(bool(0)) wptr=Signal(intbv(0)[5:]) rptr=Signal(intbv(0)[5:]) data_out=Signal(intbv(0)[8:]) clk=Signal(bool(0)) clear=Signal(bool(0)) TestData_i=[Signal(intbv(int(i))[8:]) for i in TestData] @always_comb def print_data(): print(data_in, fifo_we, wptr, rptr, data_out, clk, clear) DUT=memory_array(data_in, fifo_we, wptr, rptr, data_out, clk, clear) @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if 
i==0: fifo_we.next=0 elif i==2: fifo_we.next=1 elif i==13: clear.next=1 elif i==14: clear.next=0 elif i==16: raise StopSimulation() else: pass data_in.next=TestData_i[wptr] wptr.next=wptr+1 if i!=0: rptr.next=rptr+1 i+=1 yield clk.posedge return instances() TB=memory_arrayTBV() TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('memory_arrayTBV'); """ Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{memory_arrayRTL.png}} \caption{\label{fig:MARTL} memory_array RTL Schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{memory_arraySYN.png}} \caption{\label{fig:MASYN} memory_array Synthesized schematic; Xilinx Vivado 2017.4} \end{figure} Verilog Testbench Conversion Issue: At present I cannot get the values stored in the TestData numpy array transcribed into the output Verilog code for memory_arrayTBV. If someone can figure out how to do this, or can improve the myHDL converter, the fix would be greatly appreciated by myself and the rest of the myHDL user base. End of explanation """ @block def fifoStatus(wr, rd, fifo_we, fifo_rd, wptr, rptr, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, clk, rst_n): """ Input: wr(bool): write signal rd(bool): read signal fifo_we(bool): write enable signal fifo_rd(bool): read enable signal wptr(5bit): write pointer rptr(5bit): read pointer clk(bool): clock rst_n(bool): reset Output: fifo_full(bool): signal indicating the fifo memory is full fifo_empty(bool): signal indicating the fifo memory is empty fifo_threshold(bool): signal indicating that the fifo is about to overflow fifo_overflow(bool): signal indicating that the fifo wptr has overflowed fifo_underflow(bool): signal indicating that the fifo rptr has underflowed """ #internal stores fifo_full_i=Signal(bool(0)) fifo_empty_i=Signal(bool(0)) fifo_threshold_i=Signal(bool(0)) fifo_overflow_i=Signal(bool(0)) fifo_underflow_i=Signal(bool(0)) #internal wires fbit_comp=Signal(bool(0))
pointer_equal=Signal(bool(0)) pointer_result=Signal(intbv(0)[5:].signed()) overflow_set=Signal(bool(0)) underflow_set=Signal(bool(0)) @always_comb def logic1(): fbit_comp.next=wptr[4]^rptr[4] if wptr[3:0]-rptr[3:0]: pointer_equal.next=0 else: pointer_equal.next=1 pointer_result.next=wptr[4:0]-rptr[4:0] overflow_set.next=fifo_full_i & wr underflow_set.next=fifo_empty_i & rd @always_comb def logic2(): fifo_full_i.next=fbit_comp & pointer_equal fifo_empty_i.next=(not fbit_comp) & pointer_equal if pointer_result[4] or pointer_result[3]: fifo_threshold_i.next=1 else: fifo_threshold_i.next=0 @always(clk.posedge, rst_n.negedge) def overflowControl(): if rst_n: fifo_overflow_i.next=0 elif overflow_set==1 and fifo_rd==0: fifo_overflow_i.next=1 elif fifo_rd: fifo_overflow_i.next=0 else: fifo_overflow_i.next=fifo_overflow_i @always(clk.posedge, rst_n.negedge) def underflowControl(): if rst_n: fifo_underflow_i.next=0 elif underflow_set==1 and fifo_we==0: fifo_underflow_i.next=1 elif fifo_we: fifo_underflow_i.next=0 else: fifo_underflow_i.next=fifo_underflow_i @always_comb def outputBuffer(): fifo_full.next=fifo_full_i fifo_empty.next=fifo_empty_i fifo_threshold.next=fifo_threshold_i fifo_overflow.next=fifo_overflow_i fifo_underflow.next=fifo_underflow_i return instances() """ Explanation: Status Signal The status signal module is an internal check module that checks for impending overflow, overflow, and underflow of the FIFO memory \begin{figure} \centerline{\includegraphics[width=10cm]{fifoStatus.png}} \caption{\label{fig:FS} fifoStatus Functional Diagram} \end{figure} End of explanation """ Peeker.clear() wr=Signal(bool(0)); Peeker(wr, 'wr') rd=Signal(bool(0)); Peeker(rd, 'rd') fifo_we=Signal(bool(0)); Peeker(fifo_we, 'fifo_we') fifo_rd=Signal(bool(0)); Peeker(fifo_rd, 'fifo_rd') wptr=Signal(intbv(0)[5:]); Peeker(wptr, 'wptr') rptr=Signal(intbv(0)[5:]); Peeker(rptr, 'rptr') fifo_full=Signal(bool(0)); Peeker(fifo_full, 'fifo_full') fifo_empty=Signal(bool(0));
Peeker(fifo_empty, 'fifo_empty') fifo_threshold=Signal(bool(0)); Peeker(fifo_threshold, 'fifo_threshold') fifo_overflow=Signal(bool(0)); Peeker(fifo_overflow, 'fifo_overflow') fifo_underflow=Signal(bool(0)); Peeker(fifo_underflow, 'fifo_underflow') clk=Signal(bool(0)); Peeker(clk, 'clk') rst_n=Signal(bool(0)); Peeker(rst_n, 'rst_n') DUT=fifoStatus(wr, rd, fifo_we, fifo_rd, wptr, rptr, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, clk, rst_n) def fifoStatusTB(): """ myHDL-only testbench for the `fifoStatus` module Note: Not a complete testbench, could be better """ @always(delay(1)) def ClkGen(): clk.next=not clk @instance def stimules(): i=0 while True: if i==0: wr.next=1; rd.next=1 fifo_we.next=0; fifo_rd.next=0 elif i==2: wr.next=0; rd.next=0 fifo_we.next=1; fifo_rd.next=1 elif i==4: wr.next=1; rd.next=1 fifo_we.next=1; fifo_rd.next=1 if i>=6 and i<=20: wptr.next=wptr+1 if i>=7 and i<=20: rptr.next=rptr+1 if i==20: rst_n.next=1 elif i==21: rst_n.next=0 elif i==23: raise StopSimulation() i+=1 yield clk.posedge return instances() sim=Simulation(DUT, fifoStatusTB(), *Peeker.instances()).run() Peeker.to_wavedrom() Peeker.to_dataframe() """ Explanation: myHDL Testing End of explanation """ DUT.convert() VerilogTextReader('fifoStatus'); """ Explanation: Verilog Code End of explanation """ @block def fifoStatusTBV(): """ myHDL -> Verilog testbench for the `fifoStatus` module Note: Not a complete testbench, could be better """ wr=Signal(bool(0)); Peeker(wr, 'wr') rd=Signal(bool(0)); Peeker(rd, 'rd') fifo_we=Signal(bool(0)); Peeker(fifo_we, 'fifo_we') fifo_rd=Signal(bool(0)); Peeker(fifo_rd, 'fifo_rd') wptr=Signal(intbv(0)[5:]); Peeker(wptr, 'wptr') rptr=Signal(intbv(0)[5:]); Peeker(rptr, 'rptr') fifo_full=Signal(bool(0)); Peeker(fifo_full, 'fifo_full') fifo_empty=Signal(bool(0)); Peeker(fifo_empty, 'fifo_empty') fifo_threshold=Signal(bool(0)); Peeker(fifo_threshold, 'fifo_threshold') fifo_overflow=Signal(bool(0)); Peeker(fifo_overflow, 'fifo_overflow')
fifo_underflow=Signal(bool(0)); Peeker(fifo_underflow, 'fifo_underflow') clk=Signal(bool(0)); Peeker(clk, 'clk') rst_n=Signal(bool(0)); Peeker(rst_n, 'rst_n') @always_comb def print_data(): print(wr, rd, fifo_we, fifo_rd, wptr, rptr, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow) DUT=fifoStatus(wr, rd, fifo_we, fifo_rd, wptr, rptr, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, clk, rst_n) @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if i==0: wr.next=1; rd.next=1 fifo_we.next=0; fifo_rd.next=0 elif i==2: wr.next=0; rd.next=0 fifo_we.next=1; fifo_rd.next=1 elif i==4: wr.next=1; rd.next=1 fifo_we.next=1; fifo_rd.next=1 else: pass if i>=6 and i<=20: wptr.next=wptr+1 if i>=7 and i<=20: rptr.next=rptr+1 if i==20: rst_n.next=1 elif i==21: rst_n.next=0 elif i==23: raise StopSimulation() else: pass i+=1 yield clk.posedge return instances() TB=fifoStatusTBV() TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('fifoStatusTBV'); """ Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{fifoStatusRTL.png}} \caption{\label{fig:FIFOStatusRTL} fifoStatus RTL Schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{fifoStatusSYN.png}} \caption{\label{fig:MASYN} fifoStatus Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure} Verilog Testbench End of explanation """ @block def fifo_mem(wr, rd, data_in, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, data_out, clk, rst_n, clear): """ Input: wr(bool):write signal rd(bool):write signal data_in(8bit): data to be writen clk(bool): clock rst_n(bool): negtive reset signal clear(bool): signal to clear clear memeory to 0 Output: fifo_full(bool): signal indicating the fifo memory is full fifo_empty(bool):signal indicating the fifo memory is empty fifo_threshold(bool): signal indicating that the fifo is about to overflow 
fifo_overflow(bool): signal indicating that the fifo has overflowed fifo_underflow(bool): signal indicating that the fifo has underflowed data_out(8bit): data to be read out """ wptr=Signal(intbv(0)[5:]); rptr=Signal(intbv(0)[5:]) fifo_we=Signal(bool(0)); fifo_rd=Signal(bool(0)) WPointerAcum=write_pointer(wr, fifo_full, wptr, fifo_we, clk, rst_n) RPointerAcum=read_pointer(rd, fifo_empty, rptr, fifo_rd, clk, rst_n) InternalMem=memory_array(data_in, fifo_we, wptr, rptr, data_out, clk, clear) FIFOControl=fifoStatus(wr, rd, fifo_we, fifo_rd, wptr, rptr, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, clk, rst_n) return instances() """ Explanation: FIFO \begin{figure} \centerline{\includegraphics[width=10cm]{FIFO.png}} \caption{\label{fig:WP} FIFO Functional Diagram} \end{figure} End of explanation """ Peeker.clear() wr=Signal(bool(0)); Peeker(wr, 'wr') rd=Signal(bool(0)); Peeker(rd, 'rd') data_in=Signal(intbv(0)[8:]); Peeker(data_in, 'data_in') fifo_full=Signal(bool(0)); Peeker(fifo_full, 'fifo_full') fifo_empty=Signal(bool(0)); Peeker(fifo_empty, 'fifo_empty') fifo_threshold=Signal(bool(0)); Peeker(fifo_threshold, 'fifo_threshold') fifo_overflow=Signal(bool(0)); Peeker(fifo_overflow, 'fifo_overflow') fifo_underflow=Signal(bool(0)); Peeker(fifo_underflow, 'fifo_underflow') data_out=Signal(intbv(0)[8:]); Peeker(data_out, 'data_out') clk=Signal(bool(0)); Peeker(clk, 'clk') rst_n=Signal(bool(0)); Peeker(rst_n, 'rst_n') clear=Signal(bool(0)); Peeker(clear, 'clear') DUT=fifo_mem(wr, rd, data_in, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, data_out, clk, rst_n, clear) def fifo_memTB(): """ myHDL only test bench for `fifo_mem` module Note: Not a complet testbench, could be better """ @always(delay(1)) def ClkGen(): clk.next=not clk @instance def stimules(): i=0 while True: if i==0: wr.next=1; rd.next=1 elif i==16: wr.next=0; rd.next=1 elif i==32: wr.next=0; rd.next=1 elif i==48: rst_n.next=1 elif i==49: rst_n.next=0 elif i==50: 
wr.next=1; rd.next=1 if i<16: data_in.next=int(TestData[i]) elif i>=16 and i<32: data_in.next=int(TestData[i-16]) elif i>=32 and i<48: data_in.next=int(TestData[i-32]) elif i==48 or i==49: pass else: data_in.next=int(TestData[i-51]) if i==66: raise StopSimulation() i+=1 yield clk.posedge return instances() sim=Simulation(DUT, fifo_memTB(), *Peeker.instances()).run() Peeker.to_wavedrom() fifoData=Peeker.to_dataframe(); fifoData fifoData=fifoData[fifoData['clk']==1] fifoData.drop('clk', axis=1, inplace=True) fifoData.reset_index(drop=True, inplace=True) fifoData fifoData.tail(20) """ Explanation: myHDL Testing End of explanation """ DUT.convert() VerilogTextReader('fifo_mem'); """ Explanation: Verilog Code End of explanation """ @block def fifo_memTBV(): """ myHDL ->Verilog test bench for `fifo_mem` module Note: Not a complet testbench, could be better """ wr=Signal(bool(0)) rd=Signal(bool(0)) data_in=Signal(intbv(0)[8:]) TestData_i=[Signal(intbv(int(i))[8:]) for i in TestData] fifo_full=Signal(bool(0)) fifo_empty=Signal(bool(0)) fifo_threshold=Signal(bool(0)) fifo_overflow=Signal(bool(0)) fifo_underflow=Signal(bool(0)) data_out=Signal(intbv(0)[8:]) clk=Signal(bool(0)) rst_n=Signal(bool(0)) clear=Signal(bool(0)) @always_comb def print_data(): print(wr, rd, data_in, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, data_out, clk, rst_n, clear) DUT=fifo_mem(wr, rd, data_in, fifo_full, fifo_empty, fifo_threshold, fifo_overflow, fifo_underflow, data_out, clk, rst_n, clear) @instance def clk_signal(): while True: clk.next = not clk yield delay(1) @instance def stimules(): i=0 while True: if i==0: wr.next=1; rd.next=1 elif i==16: wr.next=0; rd.next=1 elif i==32: wr.next=0; rd.next=1 elif i==48: rst_n.next=1 elif i==49: rst_n.next=0 elif i==50: wr.next=1; rd.next=1 else: pass if i<16: data_in.next=int(TestData_i[i]) elif i>=16 and i<32: data_in.next=int(TestData_i[i-16]) elif i>=32 and i<48: data_in.next=int(TestData_i[i-32]) elif i==48 or i==49: pass 
else: data_in.next=int(TestData_i[i-51]) if i==66: raise StopSimulation() i+=1 yield clk.posedge return instances() TB=fifo_memTBV() TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('fifo_memTBV'); """ Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{fifo_memRTL.png}} \caption{\label{fig:FIFORTL} fifo_mem RTL Schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{fifo_memSYN.png}} \caption{\label{fig:fifo_memSYN} fifo_mem Synthesized schematic; Xilinx Vivado 2017.4} \end{figure} Verilog Testbench Conversion Issue: At present I cannot get the values stored in the TestData numpy array transcribed into the output Verilog code for fifo_memTBV. If someone can figure out how to do this, or can improve the myHDL converter, the fix would be greatly appreciated by myself and the rest of the myHDL user base. End of explanation """
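One possible workaround for the conversion issue above is to sidestep the converter for the test vectors entirely: render them as a hex file and load them in the generated Verilog testbench with `$readmemh`. A minimal sketch, assuming 8-bit vectors; `to_verilog_hex` is a hypothetical helper written for this note, not part of myHDL or the notebook's toolchain:

```python
# Hypothetical helper (an assumption, not myHDL API): render integer
# test vectors as $readmemh-style hex lines for a Verilog testbench.
def to_verilog_hex(values, width=8):
    digits = (width + 3) // 4   # hex digits needed to cover `width` bits
    mask = (1 << width) - 1     # truncate each value to the declared width
    return [format(int(v) & mask, '0{}x'.format(digits)) for v in values]

hex_lines = to_verilog_hex([3, 255, 0, 171])
# Save these lines to test_vec.hex, then in the Verilog testbench:
#   $readmemh("test_vec.hex", mem);
print('\n'.join(hex_lines))
```

Alternatively, converting `TestData` to a plain tuple of Python ints before elaboration may let myHDL infer a ROM-style case statement during conversion, though that path is untested here.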
yl565/statsmodels
examples/notebooks/predict.ipynb
bsd-3-clause
%matplotlib inline from __future__ import print_function import numpy as np import statsmodels.api as sm """ Explanation: Prediction (out of sample) End of explanation """ nsample = 50 sig = 0.25 x1 = np.linspace(0, 20, nsample) X = np.column_stack((x1, np.sin(x1), (x1-5)**2)) X = sm.add_constant(X) beta = [5., 0.5, 0.5, -0.02] y_true = np.dot(X, beta) y = y_true + sig * np.random.normal(size=nsample) """ Explanation: Artificial data End of explanation """ olsmod = sm.OLS(y, X) olsres = olsmod.fit() print(olsres.summary()) """ Explanation: Estimation End of explanation """ ypred = olsres.predict(X) print(ypred) """ Explanation: In-sample prediction End of explanation """ x1n = np.linspace(20.5,25, 10) Xnew = np.column_stack((x1n, np.sin(x1n), (x1n-5)**2)) Xnew = sm.add_constant(Xnew) ynewpred = olsres.predict(Xnew) # predict out of sample print(ynewpred) """ Explanation: Create a new sample of explanatory variables Xnew, predict and plot End of explanation """ import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(x1, y, 'o', label="Data") ax.plot(x1, y_true, 'b-', label="True") ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), 'r', label="OLS prediction") ax.legend(loc="best"); """ Explanation: Plot comparison End of explanation """ from statsmodels.formula.api import ols data = {"x1" : x1, "y" : y} res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit() """ Explanation: Predicting with Formulas Using formulas can make both estimation and prediction a lot easier End of explanation """ res.params """ Explanation: We use the I to indicate use of the Identity transform. Ie., we don't want any expansion magic from using **2 End of explanation """ res.predict(exog=dict(x1=x1n)) """ Explanation: Now we only have to pass the single variable and we get the transformed right-hand side variables automatically End of explanation """
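For intuition about the estimation step above, the coefficients returned by `sm.OLS(y, X).fit()` solve an ordinary least-squares problem. A minimal numpy sketch of the same computation on a rebuilt design matrix (illustrative only — statsmodels' actual implementation uses a pseudoinverse and also produces standard errors, t-statistics, and the rest of the summary):

```python
import numpy as np

# Rebuild a design matrix like the one above and recover the coefficients
# with numpy's least-squares solver: beta_hat = argmin ||y - X beta||^2.
rng = np.random.RandomState(0)
x1 = np.linspace(0, 20, 50)
X = np.column_stack((np.ones_like(x1), x1, np.sin(x1), (x1 - 5) ** 2))
beta = np.array([5.0, 0.5, 0.5, -0.02])
y = X.dot(beta) + 0.25 * rng.normal(size=50)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat.round(3))
```

With the small noise level used here, `beta_hat` lands close to the true `beta`, which is why the fitted summary above recovers the generating coefficients.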
science-of-imagination/nengo-buffer
Project/trained_mental_rotation_ens_inhibition.ipynb
gpl-3.0
import nengo import numpy as np import cPickle from nengo_extras.data import load_mnist from nengo_extras.vision import Gabor, Mask from matplotlib import pylab import matplotlib.pyplot as plt import matplotlib.animation as animation import scipy.ndimage from scipy.ndimage.interpolation import rotate """ Explanation: Using the trained weights in an ensemble of neurons On the function points branch of nengo On the vision branch of nengo_extras End of explanation """ # --- load the data img_rows, img_cols = 28, 28 (X_train, y_train), (X_test, y_test) = load_mnist() X_train = 2 * X_train - 1 # normalize to -1 to 1 X_test = 2 * X_test - 1 # normalize to -1 to 1 """ Explanation: Load the MNIST database End of explanation """ temp = np.diag([1]*10) ZERO = temp[0] ONE = temp[1] TWO = temp[2] THREE= temp[3] FOUR = temp[4] FIVE = temp[5] SIX = temp[6] SEVEN =temp[7] EIGHT= temp[8] NINE = temp[9] labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE] dim =28 """ Explanation: Each digit is represented by a one hot vector where the index of the 1 represents the number End of explanation """ label_weights = cPickle.load(open("label_weights_choose_enc1000.p", "rb")) activity_to_img_weights = cPickle.load(open("activity_to_img_weights_choose_enc1000.p", "rb")) #rotated_clockwise_after_encoder_weights = cPickle.load(open("rotated_clockwise_after_encoder_weights_rot_enc1000.p", "r")) rotated_counter_after_encoder_weights = cPickle.load(open("rotated_counter_after_encoder_weights_choose_enc1000.p", "r")) #identity_after_encoder_weights = cPickle.load(open("identity_after_encoder_weights1000.p","r")) #rotation_clockwise_weights = cPickle.load(open("rotation_clockwise_weights1000.p","rb")) #rotation_counter_weights = cPickle.load(open("rotation_weights1000.p","rb")) #Training with filters used on train images #low_pass_weights = cPickle.load(open("low_pass_weights1000.p", "rb")) #rotated_counter_after_encoder_weights_noise = 
cPickle.load(open("rotated_after_encoder_weights_counter_filter_noise5000.p", "r")) #rotated_counter_after_encoder_weights_filter = cPickle.load(open("rotated_after_encoder_weights_counter_filter5000.p", "r")) """ Explanation: Load the saved weight matrices that were created by training the model End of explanation """ #A value of zero gives no inhibition def inhibit_rotate_clockwise(t): if t < 0.5: return dim**2 else: return 0 def inhibit_rotate_counter(t): if t < 0.5: return 0 else: return dim**2 def inhibit_identity(t): if t < 0.3: return dim**2 else: return dim**2 def intense(img): newImg = img.copy() newImg[newImg < 0] = -1 newImg[newImg > 0] = 1 return newImg def node_func(t,x): #clean = scipy.ndimage.gaussian_filter(x, sigma=1) #clean = scipy.ndimage.median_filter(x, 3) clean = intense(x) return clean #Create stimulus at horizontal weight = np.dot(label_weights,activity_to_img_weights) img = np.dot(THREE,weight) img = scipy.ndimage.rotate(img.reshape(28,28),90).ravel() pylab.imshow(img.reshape(28,28),cmap="gray") plt.show() """ Explanation: Functions to perform the inhibition of each ensemble End of explanation """ rng = np.random.RandomState(9) n_hid = 1000 model = nengo.Network(seed=3) with model: #Stimulus only shows for brief period of time stim = nengo.Node(lambda t: THREE if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))# For cycling through input #Starting the image at horizontal #stim = nengo.Node(lambda t:img if t< 0.1 else 0) ens_params = dict( eval_points=X_train, neuron_type=nengo.LIF(), #Why not use LIF? 
intercepts=nengo.dists.Choice([-0.5]), max_rates=nengo.dists.Choice([100]), ) # linear filter used for edge detection as encoders, more plausible for human visual system #encoders = Gabor().generate(n_hid, (11, 11), rng=rng) #encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True) ''' degrees = 6 #must have same number of excoders as neurons (Want each random encoder to have same encoder at every angle) encoders = Gabor().generate(n_hid/(360/degrees), (11, 11), rng=rng) encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True) rotated_encoders = encoders.copy() #For each randomly generated encoder, create the same encoder at every angle (increments chosen by degree) for encoder in encoders: rotated_encoders = np.append(rotated_encoders, [encoder],axis =0) for i in range(1,59): #new_gabor = rotate(encoder.reshape(28,28),degrees*i,reshape = False).ravel() rotated_encoders = np.append(rotated_encoders, [rotate(encoder.reshape(28,28),degrees*i,reshape = False).ravel()],axis =0) #rotated_encoders = np.append(rotated_encoders, [encoder],axis =0) ''' rotated_encoders = cPickle.load(open("encoders.p", "r")) #Num of neurons does not divide evenly with 6 degree increments, so add random encoders extra_encoders = Gabor().generate(n_hid - len(rotated_encoders), (11, 11), rng=rng) extra_encoders = Mask((28, 28)).populate(extra_encoders, rng=rng, flatten=True) all_encoders = np.append(rotated_encoders, extra_encoders, axis =0) encoders = all_encoders #Ensemble that represents the image with different transformations applied to it ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params) #Connect stimulus to ensemble, transform using learned weight matrices nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T) #nengo.Connection(stim, ens) #for rotated stim #Recurrent connection on the neurons of the ensemble to perform the rotation nengo.Connection(ens.neurons, ens.neurons, transform = 
rotated_counter_after_encoder_weights.T, synapse=0.1) #nengo.Connection(ens.neurons, ens.neurons, transform = low_pass_weights.T, synapse=0.1) #Identity ensemble #ens_iden = nengo.Ensemble(n_hid,dim**2, seed=3, encoders=encoders, **ens_params) #Rotation ensembles #ens_clock_rot = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params) ens_counter_rot = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params) #Inhibition nodes #inhib_iden = nengo.Node(inhibit_identity) #inhib_clock_rot = nengo.Node(inhibit_rotate_clockwise) #inhib_counter_rot = nengo.Node(inhibit_rotate_counter) #Connect the main ensemble to each manipulation ensemble and back with appropriate transformation #Identity #nengo.Connection(ens.neurons, ens_iden.neurons,transform=identity_after_encoder_weights.T,synapse=0.1) #nengo.Connection(ens_iden.neurons, ens.neurons,transform=identity_after_encoder_weights.T,synapse=0.1) #Clockwise #nengo.Connection(ens.neurons, ens_clock_rot.neurons, transform = rotated_clockwise_after_encoder_weights.T,synapse=0.1) #nengo.Connection(ens_clock_rot.neurons, ens.neurons, transform = rotated_clockwise_after_encoder_weights.T,synapse = 0.1) #Counter-clockwise #nengo.Connection(ens.neurons, ens_counter_rot.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1) #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1) #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights_filter.T, synapse=0.1) #nengo.Connection(ens.neurons, ens_counter_rot.neurons, transform = low_pass_weights.T, synapse=0.1) #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = low_pass_weights.T, synapse=0.1) #Clean up by a node #n = nengo.Node(node_func, size_in=dim**2) #nengo.Connection(ens.neurons,n,transform=activity_to_img_weights.T, synapse=0.1) #nengo.Connection(n,ens_counter_rot,synapse=0.1) #Connect the inhibition nodes to 
each manipulation ensemble #nengo.Connection(inhib_iden, ens_iden.neurons, transform=[[-1]] * n_hid) #nengo.Connection(inhib_clock_rot, ens_clock_rot.neurons, transform=[[-1]] * n_hid) #nengo.Connection(inhib_counter_rot, ens_counter_rot.neurons, transform=[[-1]] * n_hid) #Collect output, use synapse for smoothing probe = nengo.Probe(ens.neurons,synapse=0.1) sim = nengo.Simulator(model) sim.run(5) """ Explanation: The network where the mental imagery and rotation occurs The state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work The number of neurons (n_hid) must be the same as was used for training The input must be shown for a short period of time to be able to view the rotation The recurrent connection must be from the neurons because the weight matices were trained on the neuron activities End of explanation """ '''Animation for Probe output''' fig = plt.figure() output_acts = [] for act in sim.data[probe]: output_acts.append(np.dot(act,activity_to_img_weights)) def updatefig(i): im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True) return im, ani = animation.FuncAnimation(fig, updatefig, interval=0.1, blit=True) plt.show() #ouput_acts = sim.data[probe] plt.subplot(261) plt.title("100") pylab.imshow(np.reshape(output_acts[100],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(262) plt.title("500") pylab.imshow(np.reshape(output_acts[500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(263) plt.title("1000") pylab.imshow(np.reshape(output_acts[1000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(264) plt.title("1500") pylab.imshow(np.reshape(output_acts[1500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(265) plt.title("2000") pylab.imshow(np.reshape(output_acts[2000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(266) plt.title("2500") 
pylab.imshow(np.reshape(output_acts[2500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(267) plt.title("3000") pylab.imshow(np.reshape(output_acts[3000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(268) plt.title("3500") pylab.imshow(np.reshape(output_acts[3500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(269) plt.title("4000") pylab.imshow(np.reshape(output_acts[4000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(2,6,10) plt.title("4500") pylab.imshow(np.reshape(output_acts[4500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(2,6,11) plt.title("5000") pylab.imshow(np.reshape(output_acts[4999],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.show() """ Explanation: The following is not part of the brain model, it is used to view the output for the ensemble Since it's probing the neurons themselves, the output must be transformed from neuron activity to visual image End of explanation """ #The filename includes the number of neurons and which digit is being rotated filename = "mental_rotation_output_ONE_" + str(n_hid) + ".p" cPickle.dump(sim.data[probe], open( filename , "wb" ) ) """ Explanation: Pickle the probe's output if it takes a long time to run End of explanation """ testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights)) testing = output_acts[300] plt.subplot(131) pylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) #Get image #testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights)) #noise = np.random.random([28,28]).ravel() testing = node_func(0,testing) plt.subplot(132) pylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) #Get activity of image _, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing) #Get encoder outputs testing_filter = np.dot(testing_act,rotated_counter_after_encoder_weights_filter) #Get activities testing_filter = 
ens.neuron_type.rates(testing_filter, sim.data[ens].gain, sim.data[ens].bias) for i in range(5): testing_filter = np.dot(testing_filter,rotated_counter_after_encoder_weights_filter) testing_filter = ens.neuron_type.rates(testing_filter, sim.data[ens].gain, sim.data[ens].bias) testing_filter = np.dot(testing_filter,activity_to_img_weights) testing_filter = node_func(0,testing_filter) _, testing_filter = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing_filter) #testing_rotate = np.dot(testing_rotate,rotation_weights) testing_filter = np.dot(testing_filter,activity_to_img_weights) plt.subplot(133) pylab.imshow(np.reshape(testing_filter,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.show() plt.subplot(121) pylab.imshow(np.reshape(X_train[0],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) #Get activity of image _, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=X_train[0]) testing_rotate = np.dot(testing_act,activity_to_img_weights) plt.subplot(122) pylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.show() """ Explanation: Testing End of explanation """ letterO = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights)) plt.subplot(161) pylab.imshow(np.reshape(letterO,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) letterL = np.dot(SEVEN,label_weights) for _ in range(30): letterL = np.dot(letterL,rotation_weights) letterL = np.dot(letterL,activity_to_img_weights) plt.subplot(162) pylab.imshow(np.reshape(letterL,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) letterI = np.dot(ONE,np.dot(label_weights,activity_to_img_weights)) plt.subplot(163) pylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.subplot(165) pylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) letterV = np.dot(SEVEN,label_weights) for _ in range(40): letterV = np.dot(letterV,rotation_weights) letterV = np.dot(letterV,activity_to_img_weights) plt.subplot(164) 
pylab.imshow(np.reshape(letterV,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) letterA = np.dot(SEVEN,label_weights) for _ in range(10): letterA = np.dot(letterA,rotation_weights) letterA = np.dot(letterA,activity_to_img_weights) plt.subplot(166) pylab.imshow(np.reshape(letterA,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.show() """ Explanation: Just for fun End of explanation """
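The rotate-in-small-increments idea behind the 6-degree encoder steps can be sketched with plain 2x2 rotation matrices — many small rotations compose into one large rotation (illustrative geometry only; the model itself rotates neuron activities through learned weights, not coordinates):

```python
import numpy as np

def rot(deg):
    """2x2 rotation matrix for an angle given in degrees."""
    t = np.deg2rad(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Fifteen 6-degree steps compose into a single 90-degree rotation,
# mirroring the 6-degree increments used for the rotated encoders.
step = rot(6)
composed = np.linalg.matrix_power(step, 15)
print(np.allclose(composed, rot(90)))
```

This is why chaining the small-angle recurrent transform many times sweeps the represented image through a large rotation.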
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/gapic/automl/showcase_automl_video_classification_batch.ipynb
apache-2.0
import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = '--user' else: USER_FLAG = '' ! pip3 install -U google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex client library: AutoML video classification model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_video_classification_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_video_classification_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to create video classification models and do batch prediction using Google Cloud's AutoML. Dataset The dataset used for this tutorial is the Human Motion dataset from MIT. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. Objective In this tutorial, you create an AutoML video classification model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Make a batch prediction. There is one key difference between using batch prediction and using online prediction: Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time. 
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG """ Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation """ if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" #@param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. 
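As a sanity check after the kernel restart, you can confirm that the freshly installed client libraries are now visible to the kernel. This is not part of the tutorial — it is a small standard-library sketch, and the helper name installed_version is my own:

```python
from importlib import metadata

def installed_version(package: str):
    """Return the installed version of `package`, or None if it is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# After the restart, both packages should report a version string:
for pkg in ("google-cloud-aiplatform", "google-cloud-storage"):
    print(pkg, "->", installed_version(pkg))
```

If either package prints None, re-run the pip install cells above and restart the kernel again.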
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. End of explanation """ REGION = 'us-central1' #@param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. 
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! 
gsutil ls -al $BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. End of explanation """ # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION """ Explanation: Vertex constants Setup up the following constants for Vertex: API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. End of explanation """ # Video Dataset type DATA_SCHEMA = 'gs://google-cloud-aiplatform/schema/dataset/metadata/video_1.0.0.yaml' # Video Labeling type LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/video_classification_io_format_1.0.0.yaml" # Video Training task TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_classification_1.0.0.yaml" """ Explanation: AutoML constants Set constants unique to AutoML datasets and training: Dataset Schemas: Tells the Dataset resource service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated). Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for. 
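The API_ENDPOINT and PARENT values above are plain string templates, so it can help to see them assembled in isolation. A minimal sketch — the helper name vertex_paths is illustrative, not part of the client library:

```python
def vertex_paths(project_id: str, region: str):
    """Assemble the regional API endpoint and the location root path
    used throughout this tutorial."""
    api_endpoint = "{}-aiplatform.googleapis.com".format(region)
    parent = "projects/{}/locations/{}".format(project_id, region)
    return api_endpoint, parent

endpoint, parent = vertex_paths("my-project", "us-central1")
print(endpoint)  # us-central1-aiplatform.googleapis.com
print(parent)    # projects/my-project/locations/us-central1
```

Every Dataset, Model, and Job resource identifier in this notebook is rooted under that PARENT path.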
End of explanation """ if os.getenv("IS_TESTING_DEPLOY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPLOY_GPU"))) else: DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) """ Explanation: Hardware Accelerators Set the hardware accelerators (e.g., GPU), if any, for prediction. Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100 Otherwise specify (None, None) to use a container image to run on a CPU. End of explanation """ if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = 'n1-standard' VCPU = '4' DEPLOY_COMPUTE = MACHINE_TYPE + '-' + VCPU print('Deploy machine type', DEPLOY_COMPUTE) """ Explanation: Container (Docker) image For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine Type Next, set the machine type to use for prediction. Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction. machine type n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs End of explanation """ # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_dataset_client(): client = aip.DatasetServiceClient( client_options=client_options ) return client def create_model_client(): client = aip.ModelServiceClient( client_options=client_options ) return client def create_pipeline_client(): client = aip.PipelineServiceClient( client_options=client_options ) return client def create_job_client(): client = aip.JobServiceClient( client_options=client_options ) return client clients = {} clients['dataset'] = create_dataset_client() clients['model'] = create_model_client() clients['pipeline'] = create_pipeline_client() clients['job'] = create_job_client() for client in clients.items(): print(client) """ Explanation: Tutorial Now you are ready to start creating your own AutoML video classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. Job Service for batch prediction and custom training. 
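Since the four create_*_client helpers above differ only in the client class they instantiate, one design alternative is a single factory loop. This sketch uses a stand-in class in place of the real aip service clients so it runs without GCP credentials — the names build_clients and FakeClient are my own:

```python
def build_clients(client_classes, client_options):
    """Instantiate one service client per entry in a name -> class mapping."""
    return {name: cls(client_options=client_options)
            for name, cls in client_classes.items()}

class FakeClient:
    """Stand-in for aip.DatasetServiceClient etc., so the sketch runs offline."""
    def __init__(self, client_options=None):
        self.client_options = client_options

clients = build_clients(
    {"dataset": FakeClient, "model": FakeClient,
     "pipeline": FakeClient, "job": FakeClient},
    {"api_endpoint": "us-central1-aiplatform.googleapis.com"},
)
print(sorted(clients))  # ['dataset', 'job', 'model', 'pipeline']
```

With the real library you would map each name to its aip client class instead of FakeClient; every client shares the same client_options.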
End of explanation """ TIMEOUT = 90 def create_dataset(name, schema, labels=None, timeout=TIMEOUT): start_time = time.time() try: dataset = aip.Dataset(display_name=name, metadata_schema_uri=schema, labels=labels) operation = clients['dataset'].create_dataset(parent=PARENT, dataset=dataset) print("Long running operation:", operation.operation.name) result = operation.result(timeout=TIMEOUT) print("time:", time.time() - start_time) print("response") print(" name:", result.name) print(" display_name:", result.display_name) print(" metadata_schema_uri:", result.metadata_schema_uri) print(" metadata:", dict(result.metadata)) print(" create_time:", result.create_time) print(" update_time:", result.update_time) print(" etag:", result.etag) print(" labels:", dict(result.labels)) return result except Exception as e: print("exception:", e) return None result = create_dataset("hmdb,tst-" + TIMESTAMP, DATA_SCHEMA) """ Explanation: Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following: Uses the dataset client service. Creates an Vertex Dataset resource (aip.Dataset), with the following parameters: display_name: The human-readable name you choose to give it. metadata_schema_uri: The schema for the dataset type. Calls the client dataset service method create_dataset, with the following parameters: parent: The Vertex location root path for your Database, Model and Endpoint resources. dataset: The Vertex dataset object instance you created. The method returns an operation object. An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning. 
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method: | Method | Description | | ----------- | ----------- | | result() | Waits for the operation to complete and returns a result object in JSON format. | | running() | Returns True/False on whether the operation is still running. | | done() | Returns True/False on whether the operation is completed. | | canceled() | Returns True/False on whether the operation was canceled. | | cancel() | Cancels the operation (this may take up to 30 seconds). | End of explanation """ # The full unique ID for the dataset dataset_id = result.name # The short numeric ID for the dataset dataset_short_id = dataset_id.split('/')[-1] print(dataset_id) """ Explanation: Now save the unique dataset identifier for the Dataset resource instance you created. End of explanation """ IMPORT_FILE = 'gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv' """ Explanation: Data preparation The Vertex Dataset resource for video has some requirements for your data. Videos must be stored in a Cloud Storage bucket. Each video file must be in a video format (MPG, AVI, ...). There must be an index file stored in your Cloud Storage bucket that contains the path and label for each video. The index file must be either CSV or JSONL. CSV For video classification, the CSV index file has a few requirements: No heading. First column is the Cloud Storage path to the video. Second column is the label. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation """ if 'IMPORT_FILES' in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! 
gsutil cat $FILE | head """ Explanation: Quick peek at your data You will use a version of the MIT Human Motion dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. End of explanation """ def import_data(dataset, gcs_sources, schema): config = [{ 'gcs_source': {'uris': gcs_sources}, 'import_schema_uri': schema }] print("dataset:", dataset_id) start_time = time.time() try: operation = clients['dataset'].import_data(name=dataset_id, import_configs=config) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print("after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled()) return operation except Exception as e: print("exception:", e) return None import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA) """ Explanation: Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following: Uses the Dataset client. Calls the client method import_data, with the following parameters: name: The human readable name you give to the Dataset resource (e.g., hmdb,tst). import_configs: The import configuration. import_configs: A Python list containing a dictionary, with the key/value entries: gcs_sources: A list of URIs to the paths of the one or more index files. import_schema_uri: The schema identifying the labeling type. The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break. 
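The index-file requirements described earlier (no header row, a gs:// path in the first column, a label in the second) are easy to check locally before importing. A hypothetical validator sketch using only the standard library — the sample rows are made up for illustration, and real index files may carry extra columns such as time offsets, so the check only requires at least two:

```python
import csv
import io

def validate_index(csv_text: str):
    """Return a list of (row_number, problem) pairs for a CSV index."""
    problems = []
    for i, row in enumerate(csv.reader(io.StringIO(csv_text))):
        if len(row) < 2:
            problems.append((i, "expected at least 2 columns"))
        elif not row[0].startswith("gs://"):
            problems.append((i, "first column is not a Cloud Storage path"))
    return problems

sample = ("gs://bucket/videos/cartwheel_001.avi,cartwheel\n"
          "gs://bucket/videos/kick_002.avi,kick\n")
print(validate_index(sample))  # [] -> sample rows are well-formed
```

An empty result means every row satisfies the two structural requirements; anything else points at the offending row.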
End of explanation """ def create_pipeline(pipeline_name, model_name, dataset, schema, task): dataset_id = dataset.split('/')[-1] input_config = {'dataset_id': dataset_id, 'fraction_split': { 'training_fraction': 0.8, 'test_fraction': 0.2 }} training_pipeline = { "display_name": pipeline_name, "training_task_definition": schema, "training_task_inputs": task, "input_data_config": input_config, "model_to_upload": {"display_name": model_name}, } try: pipeline = clients['pipeline'].create_training_pipeline(parent=PARENT, training_pipeline=training_pipeline) print(pipeline) except Exception as e: print("exception:", e) return None return pipeline """ Explanation: Train the model Now train an AutoML video classification model using your Vertex Dataset resource. To train the model, do the following steps: Create a Vertex training pipeline for the Dataset resource. Execute the pipeline to start the training. Create a training pipeline You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of: Being reusable for subsequent training jobs. Can be containerized and run as a batch job. Can be distributed. All the steps are associated with the same pipeline job for tracking progress. Use this helper function create_pipeline, which takes the following parameters: pipeline_name: A human readable name for the pipeline job. model_name: A human readable name for the model. dataset: The Vertex fully qualified dataset identifier. schema: The dataset labeling (annotation) training schema. task: A dictionary describing the requirements for the training job. The helper function calls the Pipeline client service's method create_training_pipeline, which takes the following parameters: parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job. Let's look now deeper into the minimal requirements for constructing a training_pipeline specification: display_name: A human readable name for the pipeline job. training_task_definition: The dataset labeling (annotation) training schema. training_task_inputs: A dictionary describing the requirements for the training job. model_to_upload: A human readable name for the model. input_data_config: The dataset specification. dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML. Note for video, validation split is not supported -- only training and test. End of explanation """ PIPE_NAME = "hmdb,tst_pipe-" + TIMESTAMP MODEL_NAME = "hmdb,tst_model-" + TIMESTAMP task = json_format.ParseDict({}, Value()) response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task) """ Explanation: Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. For video classification, there are no required minimal fields to specify. Finally, you create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object. End of explanation """ # The full unique ID for the pipeline pipeline_id = response.name # The short numeric ID for the pipeline pipeline_short_id = pipeline_id.split('/')[-1] print(pipeline_id) """ Explanation: Now save the unique identifier of the training pipeline you created. 
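The fraction_split passed into create_pipeline is easy to get subtly wrong — for video, a validation fraction is not accepted. A small illustrative check of the constraints described above (check_fraction_split is not a library API, just a sketch):

```python
def check_fraction_split(split: dict) -> bool:
    """Sanity-check a fraction_split for video training: only training and
    test fractions are supported, each in (0, 1], summing to at most 1.0."""
    allowed = {"training_fraction", "test_fraction"}
    if set(split) - allowed:  # e.g. a validation_fraction is rejected
        return False
    values = list(split.values())
    # Small tolerance guards against floating-point sums like 0.7 + 0.3.
    return all(0 < v <= 1 for v in values) and sum(values) <= 1.0 + 1e-9

print(check_fraction_split({"training_fraction": 0.8, "test_fraction": 0.2}))        # True
print(check_fraction_split({"training_fraction": 0.8, "validation_fraction": 0.2}))  # False
```

If you omit fraction_split entirely, the percentages are selected automatically by AutoML, as noted above.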
End of explanation """ def get_training_pipeline(name, silent=False): response = clients['pipeline'].get_training_pipeline(name=name) if silent: return response print("pipeline") print(" name:", response.name) print(" display_name:", response.display_name) print(" state:", response.state) print(" training_task_definition:", response.training_task_definition) print(" training_task_inputs:", dict(response.training_task_inputs)) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", dict(response.labels)) return response response = get_training_pipeline(pipeline_id) """ Explanation: Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter: name: The Vertex fully qualified pipeline identifier. When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED. End of explanation """ while True: response = get_training_pipeline(pipeline_id, True) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) model_to_deploy_id = None if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: raise Exception("Training Job Failed") else: model_to_deploy = response.model_to_upload model_to_deploy_id = model_to_deploy.name print("Training Time:", response.end_time - response.start_time) break time.sleep(60) print("model to deploy:", model_to_deploy_id) """ Explanation: Deployment Training the above model may take upwards of 20 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time.
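The while True / time.sleep(60) pattern above can be factored into a reusable polling helper. Here is a sketch with a simulated job so it runs offline — poll_until and the string states are illustrative, not Vertex API names:

```python
import time

def poll_until(get_state, done_states, failed_states, interval=0.0, max_polls=100):
    """Call get_state() until a terminal state appears, sleeping between polls.
    A cap on the number of polls avoids waiting forever on a stuck job."""
    for _ in range(max_polls):
        state = get_state()
        if state in failed_states:
            raise RuntimeError("job failed: %s" % state)
        if state in done_states:
            return state
        time.sleep(interval)
    raise TimeoutError("gave up after %d polls" % max_polls)

# Simulated pipeline that succeeds on the third poll:
states = iter(["RUNNING", "RUNNING", "SUCCEEDED"])
print(poll_until(lambda: next(states), {"SUCCEEDED"}, {"FAILED"}))  # SUCCEEDED
```

Against the real service, get_state would wrap get_training_pipeline(pipeline_id, True).state with interval=60.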
For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name. End of explanation """ def list_model_evaluations(name): response = clients['model'].list_model_evaluations(parent=name) for evaluation in response: print("model_evaluation") print(" name:", evaluation.name) print(" metrics_schema_uri:", evaluation.metrics_schema_uri) metrics = json_format.MessageToDict(evaluation._pb.metrics) for metric in metrics.keys(): print(metric) print('auPrc', metrics['auPrc']) return evaluation.name last_evaluation = list_model_evaluations(model_to_deploy_id) """ Explanation: Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter: name: The Vertex fully qualified model identifier for the Model resource. This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric. For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (auPrc) you will print the result. End of explanation """ test_items = ! 
gsutil cat $IMPORT_FILE | head -n2 if len(str(test_items[0]).split(',')) == 5: _, test_item_1, test_label_1, _, _ = str(test_items[0]).split(',') _, test_item_2, test_label_2, _, _ = str(test_items[1]).split(',') else: test_item_1, test_label_1, _, _ = str(test_items[0]).split(',') test_item_2, test_label_2, _, _ = str(test_items[1]).split(',') print(test_item_1, test_label_1) print(test_item_2, test_label_2) """ Explanation: Model deployment for batch prediction Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction. For online prediction, you: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. Make online prediction requests to the Endpoint resource. For batch prediction, you: Create a batch prediction job. The job service will provision resources for the batch prediction request. The results of the batch prediction request are returned to the caller. The job service will unprovision the resources for the batch prediction request. Make a batch prediction request Now do a batch prediction to your deployed model. Get test item(s) Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction. End of explanation """ import json import tensorflow as tf gcs_input_uri = BUCKET_NAME + '/test.jsonl' with tf.io.gfile.GFile(gcs_input_uri, 'w') as f: data = { "content": test_item_1, "mimeType": "video/avi", "timeSegmentStart": "0.0s", 'timeSegmentEnd': '5.0s' } f.write(json.dumps(data) + '\n') data = { "content": test_item_2, "mimeType": "video/avi", "timeSegmentStart": "0.0s", 'timeSegmentEnd': '5.0s' } f.write(json.dumps(data) + '\n') print(gcs_input_uri) !
gsutil cat $gcs_input_uri """ Explanation: Make a batch input file Now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs: content: The Cloud Storage path to the video. mimeType: The content type. In our example, it is an avi file. timeSegmentStart: The start timestamp in the video to do prediction on. Note, the timestamp must be specified as a string and followed by s (second), m (minute) or h (hour). timeSegmentEnd: The end timestamp in the video to do prediction on. End of explanation """ MIN_NODES = 1 MAX_NODES = 1 """ Explanation: Compute instance scaling You have several choices on scaling the compute instances for handling your batch prediction requests: Single Instance: The batch prediction requests are processed on a single compute instance. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one. Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them. Auto Scaling: The batch prediction requests are split across a scaleable number of compute instances. Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions. 
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request. End of explanation """ BATCH_MODEL = "hmdb,tst_batch-" + TIMESTAMP def create_batch_prediction_job(display_name, model_name, gcs_source_uri, gcs_destination_output_uri_prefix, parameters=None): if DEPLOY_GPU: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_type": DEPLOY_GPU, "accelerator_count": DEPLOY_NGPU, } else: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_count": 0, } batch_prediction_job = { "display_name": display_name, # Format: 'projects/{project}/locations/{location}/models/{model_id}' "model": model_name, "model_parameters": json_format.ParseDict(parameters, Value()), "input_config": { "instances_format": IN_FORMAT, "gcs_source": {"uris": [gcs_source_uri]}, }, "output_config": { "predictions_format": OUT_FORMAT, "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix}, }, "dedicated_resources": { "machine_spec": machine_spec, "starting_replica_count": MIN_NODES, "max_replica_count": MAX_NODES } } response = clients['job'].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job ) print("response") print(" name:", response.name) print(" display_name:", response.display_name) print(" model:", response.model) try: print(" generate_explanation:", response.generate_explanation) except: pass print(" state:", response.state) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", response.labels) return response IN_FORMAT = 'jsonl' OUT_FORMAT = 'jsonl' # [jsonl] response = create_batch_prediction_job(BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None) """ Explanation: Make batch prediction request Now that your batch of two test 
items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters: display_name: The human readable name for the prediction job. model_name: The Vertex fully qualified identifier for the Model resource. gcs_source_uri: The Cloud Storage path to the input file -- which you created above. gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to. parameters: Additional filtering parameters for serving prediction results. The helper function calls the job client service's create_batch_prediction_job method, with the following parameters: parent: The Vertex location root path for Dataset, Model and Pipeline resources. batch_prediction_job: The specification for the batch prediction job. Let's now dive into the specification for the batch_prediction_job: display_name: The human readable name for the prediction batch job. model: The Vertex fully qualified identifier for the Model resource. dedicated_resources: The compute resources to provision for the batch prediction job. machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated. starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES. max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES. model_parameters: Additional filtering parameters for serving prediction results. confidenceThreshold: The minimum confidence threshold on doing a prediction. maxPredictions: The maximum number of predictions to return per classification, sorted by confidence. oneSecIntervalClassification: If True, predictions are made at one-second intervals. shotClassification: If True, predictions are made on each camera shot boundary.
segmentClassification: If True, predictions are made on each time segment; otherwise prediction is made for the entire time segment. input_config: The input source and format type for the instances to predict. instances_format: The format of the batch prediction request file: csv or jsonl. gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests. output_config: The output destination and format for the predictions. predictions_format: The format of the batch prediction response file: jsonl only. gcs_destination: The output destination for the predictions. You might ask, how does confidence_threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision. - Precision: The higher the precision, the more likely the returned predictions are correct, but fewer predictions are returned. Increasing the confidence threshold increases precision. - Recall: The higher the recall, the more likely a correct prediction appears in the results, but more incorrect predictions are returned as well. Decreasing the confidence threshold increases recall. In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for an action to two. Since all the confidence values across the classes must add up to one, there are only two possible outcomes: 1. There is a tie, both 0.5, and two predictions are returned. 2. One value is above 0.5 and the rest are below 0.5, and one prediction is returned. This call is an asynchronous operation. You will print from the response object a few select fields, including: name: The Vertex fully qualified identifier assigned to the batch prediction job. display_name: The human readable name for the prediction batch job. model: The Vertex fully qualified identifier for the Model resource. generate_explanations: Whether True/False explanations were provided with the predictions (explainability).
state: The state of the prediction job (pending, running, etc). Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state. End of explanation """ # The full unique ID for the batch job batch_job_id = response.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split('/')[-1] print(batch_job_id) """ Explanation: Now get the unique identifier for the batch prediction job you created. End of explanation """ def get_batch_prediction_job(job_name, silent=False): response = clients['job'].get_batch_prediction_job(name=job_name) if silent: return response.output_config.gcs_destination.output_uri_prefix, response.state print("response") print(" name:", response.name) print(" display_name:", response.display_name) print(" model:", response.model) try: # not all data types support explanations print(" generate_explanation:", response.generate_explanation) except: pass print(" state:", response.state) print(" error:", response.error) gcs_destination = response.output_config.gcs_destination print(" gcs_destination") print(" output_uri_prefix:", gcs_destination.output_uri_prefix) return gcs_destination.output_uri_prefix, response.state predictions, state = get_batch_prediction_job(batch_job_id) """ Explanation: Get information on a batch prediction job Use this helper function get_batch_prediction_job, with the following parameter: job_name: The Vertex fully qualified identifier for the batch prediction job. The helper function calls the job client service's get_batch_prediction_job method, with the following parameter: name: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id. The helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.
End of explanation """ def get_latest_predictions(gcs_out_dir): ''' Get the latest prediction subfolder using the timestamp in the subfolder name''' folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split('/')[-2] if subfolder.startswith('prediction-'): if subfolder > latest: latest = folder[:-1] return latest while True: predictions, state = get_batch_prediction_job(batch_job_id, True) if state != aip.JobState.JOB_STATE_SUCCEEDED: print("The job has not completed:", state) if state == aip.JobState.JOB_STATE_FAILED: raise Exception("Batch Job Failed") else: folder = get_latest_predictions(predictions) ! gsutil ls $folder/prediction*.jsonl ! gsutil cat $folder/prediction*.jsonl break time.sleep(60) """ Explanation: Get the predictions When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED. Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.jsonl. Now display (cat) the contents. You will see multiple JSON objects, one for each prediction. For each prediction: content: The video that was input for the prediction request. displayName: The prediction action. confidence: The confidence in the prediction between 0 and 1. timeSegmentStart/timeSegmentEnd: The time offset of the start and end of the predicted action. 
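The JSONL fields listed above can also be consumed programmatically. Below is a minimal sketch of parsing one response record; the sample line is made up to match the described fields, not actual service output:

```python
import json

# A hypothetical prediction record shaped like the fields described above;
# real service output may differ in detail.
sample_line = json.dumps({
    "instance": {"content": "gs://my-bucket/videos/clip.mp4"},
    "prediction": [{
        "displayName": "cartwheel",
        "confidence": 0.83,
        "timeSegmentStart": "0.0s",
        "timeSegmentEnd": "5.0s",
    }],
})

def parse_prediction_line(line):
    """Extract (video, action, confidence) tuples from one JSONL record."""
    record = json.loads(line)
    video = record["instance"]["content"]
    return [(video, p["displayName"], p["confidence"]) for p in record["prediction"]]

results = parse_prediction_line(sample_line)
print(results)
```

In practice you would apply such a parser to each line of the `predictions*.jsonl` file displayed above.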
End of explanation """ delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and 'dataset_id' in globals(): clients['dataset'].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and 'pipeline_id' in globals(): clients['pipeline'].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and 'model_to_deploy_id' in globals(): clients['model'].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and 'endpoint_id' in globals(): clients['endpoint'].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and 'batch_job_id' in globals(): clients['job'].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and 'job_id' in globals(): clients['job'].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and 'hpt_job_id' in globals(): clients['job'].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and 'BUCKET_NAME' in globals(): ! 
gsutil rm -r $BUCKET_NAME """ Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation """
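As a follow-up to the confidence-threshold discussion earlier in this tutorial, the precision/recall trade-off can be sketched with a toy threshold sweep; the scores and labels below are invented for illustration:

```python
# Hypothetical (confidence, is_correct) pairs for a set of returned predictions.
scored = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.2, False)]
total_correct = sum(1 for _, ok in scored if ok)

def precision_recall(threshold):
    kept = [ok for conf, ok in scored if conf >= threshold]
    tp = sum(kept)
    precision = tp / len(kept) if kept else 1.0
    recall = tp / total_correct
    return precision, recall

low = precision_recall(0.3)   # keeps four predictions
high = precision_recall(0.7)  # keeps only the two most confident
print(low, high)  # raising the threshold trades recall for precision
```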
jasonpcasey/ipeds-peers
.ipynb_checkpoints/peer_examples-checkpoint.ipynb
mit
nx.degree(g, 3) nx.degree(g, 4) """ Explanation: A node's degree is the number of connections it has. End of explanation """ nx.clustering(g, 0) nx.clustering(g, 4) nx.clustering(g, 1) """ Explanation: The local clustering coefficient is the fraction of pairs of a node's neighbors that are themselves connected. End of explanation """ df = pd.DataFrame() df["node"] = g.nodes() df["centrality"] = pd.Series(nx.degree_centrality(g)) df["eigenvector"] = pd.Series(nx.eigenvector_centrality(g)) df["betweenness"] = pd.Series(nx.betweenness_centrality(g, normalized = True, endpoints = True)) df nx.betweenness_centrality(g, normalized = True, endpoints = True) nx.betweenness_centrality(g, normalized = True, endpoints = False) """ Explanation: Betweenness centrality estimates which nodes are important for connecting different parts of a network. End of explanation """
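The graph g used above is built earlier in the notebook and is not shown in this excerpt. As a self-contained illustration of the same definitions, degree and the local clustering coefficient can be computed by hand on a small adjacency map:

```python
# A tiny undirected graph as an adjacency map (a hypothetical stand-in for g).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}

def degree(node):
    return len(adj[node])

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return links / (k * (k - 1) / 2)

print(degree(0), clustering(0))  # 3 and 1/3: only the pair (1, 2) is linked
```

networkx computes the same quantities for us; the hand-rolled version just makes the definitions concrete.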
CUBoulder-ASTR2600/lectures
lecture_02_basics.ipynb
isc
10 / 3 # We provide integers # What will the output be? """ Explanation: Saving your iPython notebook File -> Save and Checkpoint Can change the name also in that menu. But also possible via clicking the name above. Talk about command mode and edit mode of cells. And the help window. Data Types: Integers vs. Floats End of explanation """ 1 / 10 + 2.0 # all fine here as well 4 / 2 # even so, mathematically not required, Python returns a float here as well. 4 // 2 # But if you need an integer to be returned, force it with // """ Explanation: A float. Note, that Python just automatically converts the result of division to floats, to be more correct. Those kind of automatic data type changes were a problem in the old times, which is why older systems would rather insist on returning the same kind of data type as the user provided. These days, the focus has shifted on rather doing the math correct and let the system deal with the overhead for this implicit data type change. End of explanation """ a = 5 a a = 'astring' a """ Explanation: The reason why this automatic type conversion is even possible within Python is because it is a so called "dynamically typed" programming languages. As opposed to "statically typed" ones like C(++) and Java. Meaning, in Python this is possible: End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('b23wrRfy7SM') """ Explanation: I just changed the datatype of a without deleting it first. It was just changed to whatever I need it to be. But remember: End of explanation """ x = 10 y = 2 * x x = 25 y # What is the value of y? If you are surprised, please discuss it. """ Explanation: (read here, if you are interested in all the multi-media display capabilities of the Jupyter notebook.) 
A note about names and values End of explanation """ x = 10 y = 2 * x x = 25 y # What is the value of y? If you are surprised, please discuss it. """ Explanation: Nice (lengthy / thorough) discussion of this: http://nedbatchelder.com/text/names.html We haven't yet covered some of the concepts that appear in this blog post so don't panic if something looks unfamiliar. Today: More practice with IPython & a simple formula Recall that to start a Jupyter notebook, simply type (in your Linux shell): $> jupyter notebook or to open a specific file and keep the terminal session free: $> jupyter notebook filename.ipynb & Note: Discuss cell types Code vs Markdown vs raw NB convert briefly Law of gravitation equation $F(r) = G \frac{m_1 m_2}{r^2}$ $G = 6.67 \times 10^{-11} \frac{\text{m}^3}{\text{kg} \cdot \text{s}^2}$ (the gravitational constant) $m_1$ is the mass of the first body in kilograms (kg) $m_2$ is the mass of the second body in kilograms (kg) $r$ is the distance between the centers of the two bodies in meters (m) Example 1 - Find the force of a person standing on earth For a person of mass 70 kg standing on the surface of the Earth (mass $5.97 \times 10^{24}$ kg, radius 6370 km (Earth fact sheet)) the force will be (in units of Newtons, 1 N = 0.225 lbs): $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 70}{(6.37 \times 10^{6})^2}$$ End of explanation """ 6.67e-11 * 5.97e24 * 70 / (6.37e6)**2 # remember: the return of the last line in any cell will be automatically printed """ Explanation: Notice that I put spaces on either side of each mathematical operator. This isn't required, but enhances clarity.
Consider the alternative: End of explanation """ 6.67e-11*5.97e24*70/(6.37e6)**2 """ Explanation: Example 2 - Find the acceleration due to Earth's gravity (the g in F = mg) Using the gravitation equation above, set $m_2 = 1$ kg $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 1}{(6.37 \times 10^{6})^2}$$ End of explanation """ 6.67e-11 * 5.97e24 * 1 / (6.37e6)**2 """ Explanation: Q. Why would the above $F(r)$ implementation be inconvenient if we had to do this computation many times, say for different masses? Q. How could we improve this? End of explanation """ G = 55 G = 6.67e-11 m1 = 5.97e24 m2 = 70 r = 6.37e6 F = G * m1 * m2 / r**2 # white-space for clarity! F # remember: no print needed for the last item of a cell. """ Explanation: Q. What do the "x = y" statements do? End of explanation """ G = 6.67e-11 mass_earth = 5.97e24 mass_object = 70 radius = 6.37e6 force = G * mass_earth * mass_object / radius**2 force """ Explanation: Q. Can you imagine a downside to descriptive variable names? Dealing with long lines of code Split long lines with a backslash (with no space after it, just carriage return): End of explanation """ force2 = G * mass_earth * \ mass_object / radius**2 force2 """ Explanation: Reserved Words Using "reserved words" will lead to an error: End of explanation """ lambda = 5000 # Some wavelength in Angstroms """ Explanation: See p.10 of the textbook for a list of Python's reserved words. Some really common ones are: and, break, class, continue, def, del, if, elif, else, except, False, for, from, import, in, is, lambda, None, not, or, pass, return, True, try, while Comments End of explanation """ # Comments are specified with the pound symbol # # Everything after a # in a line is ignored by Python """ Explanation: Q. What will the line below do? End of explanation """ print('this') # but not 'that' """ Explanation: As an approximate value, it's good practice to comment about 50% of your code! But one can reduce that reasonably, by choosing intelligible variable names. There is another way to specify "block comments": using two sets of 3 quotation marks ''' '''. End of explanation """ # Comments without ''' ''' or # create an error: This is a comment that takes several lines.
# However, in this form it does not, even for multiple lines: # ''' This is a really, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, long comment (not really). ''' # # We will use block comments to document modules later! """ Explanation: As an approx value, it's good practice to comment about 50\% of your code! But one can reduce that reasonbly, by choosing intelligle variable names. There is another way to specify "block comments": using two sets of 3 quotation marks ''' '''. End of explanation """ from math import pi # more in today's tutorial # With old style formatting "pi = %.6f" % pi # With new style formatting. # It's longer in this example, but is much more powerful in general. # You decide, which one you want to use. "pi = {:.6f}".format(pi) myPi = 3.92834234 print("The Earth's mass is %.0f kilograms." % myPi) # note the rounding that happens! print("This is myPi: {} is awesome".format(str(int(myPi)))) # converting to int cuts off decimals """ Explanation: Notice that that comment was actually printed. That's because it's not technically a comment that is totally ignored, but just a multi-line string object. It is being used in source code for documenting your code. Why does that work? Because that long multi-line string is not being assigned to a variable, so the Python interpreter just throws it away for not being used. But it's very useful for creating documented code! Formatting text and numbers End of explanation """ print(radius, force) # still alive from far above! """ Explanation: Hard to read!! (And, note the junk at the end.) Consider %x.yz % inside the quotes - means a "format statement" follows x is the number of characters in the resulting string - Not required y is the number of digits after the decimal point - Not required z is the format (e.g. 
f (float), e (scientific), s (string)) - Required % outside and to the right of the quotes - Separates text from variables -- more on this later - Uses parentheses if there is more than one variable There is a list of print format specifications on p. 12 in the textbook %s string (of ascii characters) %d integer %0xd integer padded with x leading zeros %f decimal notation with six decimals %e or %E compact scientific notation %g or %G compact decimal or scientific notation %xz format z right-justified in a field of width x %-xy same, left-justified %.yz format z with y decimals %x.yz format z with y decimals in a field of width x %% percentage sign The power of the new formatting If you don't care about length of the print: The type is being chosen correctly for you. Some more examples End of explanation """ # If we use triple quotes we don't have to # use \ for multiple lines print('''At the Earth's radius of %.2e meters, the force is %6.0f Newtons.''' % (radius, force)) # Justification print("At the Earth's radius of %.2e meters, \ the force is %-20f Newtons." % (radius, force)) """ Explanation: Q. What will the next statement print? End of explanation """ print("At the Earth's radius of %.2e meters, the force is %.0f Newtons." % (radius, force)) print("At the Earth's radius of %.2e meters, the force is %i Newtons." % (radius, force)) """ Explanation: Note when block comments are used, the text appears on 2 lines versus when using the \, the text appears all on 1 line. End of explanation """ print("At the Earth's radius of {:.2e} meters, the force is {:.0f} Newtons.".format(radius, force)) print("At the Earth's radius of {:.2e} meters, the force is {:i} Newtons.".format(radius, force)) # Line breaks can also be implemented with \n print('At the Earth radius of %.2e meters,\nthe force is\n%0.0f Newtons.' % (radius, force)) """ Explanation: Note the difference between %.0f (float) and %i (integer) (rounding vs. 
truncating) Also note, that the new formatting system actually warns you when you do something that would lose precision: End of explanation """
Microno95/DESolver
docs/examples/numpy/Example 3 - NumPy - N-Body Systems.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import desolver as de import desolver.backend as D D.set_float_fmt('float64') """ Explanation: N-Body Gravitationally Interacting System Let's try doing something more complicated: N-body dynamics. As the name implies, we have $N$ interacting bodies where the interactions are mediated by some forces. For this notebook, we'll look at gravitational interactions. It should be mentioned that N-body interactions are generally slow to compute because every body interacts with every other body, and thus you have to compute the force $N(N-1)$ times, or $\frac{N(N-1)}{2}$ times if the force is symmetric as in our case, which scales really poorly. There are methods to overcome this, but they are beyond the scope of this notebook as we'll be looking at $N\sim10$. First we import the libraries we'll need. I import all the matplotlib machinery using the magic command %matplotlib, but this is only for notebook/ipython environments. Then I import desolver and the desolver backend as well (this will be useful for specifying our problem), and set the default datatype to float64. End of explanation """ def Fij(ri, rj, G): rel_r = rj - ri return G*(1/D.norm(rel_r, ord=2)**3)*rel_r def rhs(t, state, masses, G): total_acc = D.zeros_like(state) for idx, (ri, mi) in enumerate(zip(state, masses)): for jdx, (rj, mj) in enumerate(zip(state[idx+1:], masses[idx+1:])): partial_force = Fij(ri[:3], rj[:3], G) total_acc[idx, 3:] += partial_force * mj total_acc[idx+jdx+1, 3:] -= partial_force * mi total_acc[:, :3] = state[:, 3:] return total_acc """ Explanation: Specifying the Dynamical System Writing out the dynamical equations will take a bit more work, but we'll start from Newton's Law of Gravitation and build from there.
So, Newton's Law of Gravitation states the gravitational force exerted by one body on another is $$ \vec F_{ij} = G\frac{m_i m_j}{|\vec r_j - \vec r_i|^3} (\vec r_j - \vec r_i) $$ where $G$ is the gravitational constant, $m_i$ and $m_j$ are the masses of the two bodies, and $\vec r_i$ and $\vec r_j$ are the position vectors of the two bodies. Note that this is symmetric in terms of magnitude and merely the direction is flipped when we swap $i$ and $j$, i.e. $\vec F_{ij} = -\vec F_{ji}$. This is where the factor of $\frac{1}{2}$ in the number of computations comes from. This is not sufficient to compute the actual dynamics; we must find the force on a single body from all the other bodies. To do this, we simply add the forces of every other body in the simulation, thus the force is $$ \vec F_i = \sum_{j\neq i}F_{ij} $$ You'll notice that we sum over $j\neq i$, thus there are $(N-1)$ $F_{ij}$ terms to add. This is where the factor of $(N-1)$ in the number of computations comes from, and since we do it for every single body, we do this computation $N$ times, thus the factor of $N$ is explained as well. This is all well and good, but how do we write this? Well, desolver makes that part much simpler. Unlike solve_ivp, my equation representation is not required to be a single vector at any stage, thus I can easily have an array of shape (N, 6) to represent the state vectors of each body. This will also allow me to easily take advantage of numpy broadcasting semantics.
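The pairwise accumulation described above -- evaluating each $F_{ij}$ once and applying it with opposite signs to both bodies -- can be sketched in plain NumPy, independent of desolver (the positions and masses are arbitrary test values):

```python
import numpy as np

def pairwise_accel(pos, masses, G=1.0):
    """Accumulate accelerations, evaluating each pair force only once."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rel = pos[j] - pos[i]
            f = G * rel / np.linalg.norm(rel) ** 3  # G * r_ij / |r_ij|^3
            acc[i] += f * masses[j]  # contribution of body j on body i
            acc[j] -= f * masses[i]  # equal and opposite contribution
    return acc

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
masses = np.array([1.0, 2.0, 3.0])
acc = pairwise_accel(pos, masses)
# Newton's third law: the net force (mass-weighted acceleration) sums to zero.
net_force = (acc * masses[:, None]).sum(axis=0)
print(net_force)
```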
End of explanation """ Msun = 1.98847*10**30 ## Mass of the Sun, kg AU = 149597871e3 ## 1 Astronomical Unit, m year = 365.25*24*3600 ## 1 year, s G = 4*D.pi**2 ## in solar masses, AU, years V = D.sqrt(G) ## Speed scale corresponding to the orbital speed required for a circular orbit at 1AU with a period of 1yr """ Explanation: NOTE: The line total_acc[:, :3] -= (D.sum(total_acc[:, :3]*masses[:, None], axis=0) / D.sum(masses)) is useful when we want to look at how the objects behave relative to the centre of mass. Generally this would be relative to the Sun if the other bodies are the planets in our Solar System given that most of the mass in our Solar System is concentrated at the Sun. Hénon Units Now we cannot use the equations in this form because they encompass a very large scale of values ranging from $10^{-11}$ for the gravitational constant to $10^{30}$ for the mass of the sun. Thus we will non-dimensionalise the system by measuring all the lengths by $1AU = 149597871km$, the masses by $M_\bigodot = 1.989\times 10^{30}kg$, and the time in $1yr$. In these units, the gravitational constant becomes $G = 4\pi^2$. Using the constants defined in the next cell, units in SI can be converted to units in our system. 
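As a quick sanity check (not part of the original notebook), the value $G = 4\pi^2$ follows from rescaling the SI gravitational constant with the unit choices above:

```python
import math

G_SI = 6.674e-11              # m^3 / (kg s^2), approximate CODATA value
Msun_SI = 1.98847e30          # kg
AU_SI = 149597871e3           # m
year_SI = 365.25 * 24 * 3600  # s

# Rescale G into (solar mass, AU, year) units: multiply by Msun * yr^2 / AU^3.
G_solar = G_SI * Msun_SI * year_SI**2 / AU_SI**3
print(G_solar, 4 * math.pi**2)  # both close to 39.48
```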
End of explanation """ initial_state = D.array([ [0.0, 0.0, 1.0, 0.0, -1.0, 0.0], [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], [0.25, 0.9682458365518543, 0.0, 0.9682458365518543*0, -0.25*0, 0.0], [-0.5, -0.8660254037844386, 0.0, -0.8660254037844386*0, 0.5*0, 0.0], ]) masses = D.array([ 1, 1, 1, 1, ]) rhs(0.0, initial_state, masses, G) """ Explanation: I've added 3 massive bodies at the ends of a scalene triangle End of explanation """ a = de.OdeSystem(rhs, y0=initial_state, dense_output=True, t=(0, 2.0), dt=0.00001, rtol=1e-14, atol=1e-14, constants=dict(G=G, masses=masses)) a.method = "RK1412" a.integrate() fig = plt.figure(figsize=(16,16)) com_motion = D.sum(a.y[:, :, :] * masses[None, :, None], axis=1) / D.sum(masses) fig = plt.figure(figsize=(16,16)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") for i in range(a.y.shape[1]): ax1.plot(a.y[:, i, 0], a.y[:, i, 1], color=f"C{i}") ax2.plot(a.y[:, i, 1], a.y[:, i, 2], color=f"C{i}") ax3.plot(a.y[:, i, 2], a.y[:, i, 0], color=f"C{i}") ax1.scatter(com_motion[:, 0], com_motion[:, 1], color='k') ax2.scatter(com_motion[:, 1], com_motion[:, 2], color='k') ax3.scatter(com_motion[:, 2], com_motion[:, 0], color='k') plt.tight_layout() """ Explanation: The Numerical Integration We will use the 14th order Runge-Kutta integrator to integrate our system with tolerances of $10^{-14}$ which should give fairly accurate results despite the very complicated trajectories of each body. Although symplectic integrators are great for conserving the energy, we prefer an adaptive integrator due to the fact that the bodies have close encounters which require much smaller steps to resolve. If we were to use small step sizes at all times, we'd be wasting computation whenever the bodies are far from each other. 
This leads us to pick an adaptive Runge-Kutta method. End of explanation """ def close_encounter(t, state, masses, G): distances_between_bodies = [] total_mass = D.sum(masses) center_of_mass = D.sum(state[:, :3] * masses[:, None], axis=1) / total_mass com_distances = D.norm(state[:, :3] - center_of_mass[:, None], axis=1) hill_radii = com_distances * D.pow(masses/(3*total_mass), 1/3) for idx,ri in enumerate(state[:, :3]): for jdx, rj in enumerate(state[idx+1:, :3]): distances_between_bodies.append(D.norm(ri - rj) - D.min([hill_radii[idx], hill_radii[jdx]])/2.0) return D.min(distances_between_bodies) a.reset() a.integrate(events=close_encounter) fig = plt.figure(figsize=(16,16)) com_motion = D.sum(a.y[:, :, :] * masses[None, :, None], axis=1) / D.sum(masses) fig = plt.figure(figsize=(16,16)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") for i in range(a.y.shape[1]): ax1.plot(a.y[:, i, 0], a.y[:, i, 1], color=f"C{i}", alpha=0.33) ax2.plot(a.y[:, i, 1], a.y[:, i, 2], color=f"C{i}", alpha=0.33) ax3.plot(a.y[:, i, 2], a.y[:, i, 0], color=f"C{i}", alpha=0.33) for j in a.events: ax1.scatter(j.y[i, 0], j.y[i, 1], c=f"C{i}", marker='x', alpha=1.0) ax2.scatter(j.y[i, 1], j.y[i, 2], c=f"C{i}", marker='x', alpha=1.0) ax3.scatter(j.y[i, 2], j.y[i, 0], c=f"C{i}", marker='x', alpha=1.0) ax1.scatter(com_motion[:, 0], com_motion[:, 1], color='k') ax2.scatter(com_motion[:, 1], com_motion[:, 2], color='k') ax3.scatter(com_motion[:, 2], com_motion[:, 0], color='k') plt.tight_layout() """ Explanation: Close Encounters and Event Detection Suppose we were interested in finding when two bodies are near each other, how would we do it? 
We could integrate the system, compare the distances of each body with one another and whenever they were below some threshold we can mark that time. But this is inherently inexact because maybe the actual closest point occurs between two timesteps and our adaptive integrator, being extra intelligent, did not require stepping through that exact point to find the closest encounter. So what can we do? We can use Event Detection! Event detection is where we find when some event occurs during a numerical integration and localise on that event. We need to first create a function that will tell us when two objects are nearby. We'll define nearby as whenever the two bodies are within half of each other's Hill sphere. The Hill radius is computed as $a\sqrt[3]{\frac{m}{3M_{total}}}$ where we assume that the eccentricity is unimportant and $a$ will be the distance from the centre of mass. End of explanation """ from matplotlib import animation, rc # set to location of ffmpeg to get animations working # For Linux or Mac # plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg' # For Windows plt.rcParams['animation.ffmpeg_path'] = 'C:\\ProgramData\\chocolatey\\bin\\ffmpeg.exe' from IPython.display import HTML %%capture # This magic command prevents the creation of a static figure image so that we can view the animation in the next cell t = a.t all_states = a.y planets = [all_states[:, i, :] for i in range(all_states.shape[1])] com_motion = D.sum(all_states * masses[None, :, None], axis=1) / D.sum(masses) plt.ioff() fig = plt.figure(figsize=(16,8)) ax1 = fig.add_subplot(131, aspect=1) ax2 = fig.add_subplot(132, aspect=1) ax3 = fig.add_subplot(133, aspect=1) ax1.set_xlabel("x (AU)") ax1.set_ylabel("y (AU)") ax2.set_xlabel("y (AU)") ax2.set_ylabel("z (AU)") ax3.set_xlabel("z (AU)") ax3.set_ylabel("x (AU)") xlims = D.abs(a.y[:, :, 0]).max() ylims = D.abs(a.y[:, :, 1]).max() zlims = D.abs(a.y[:, :, 2]).max() ax1.set_xlim(-xlims-0.25, xlims+0.25) ax2.set_xlim(-ylims-0.25, 
ylims+0.25) ax3.set_xlim(-zlims-0.25, zlims+0.25) ax1.set_ylim(-ylims-0.25, ylims+0.25) ax2.set_ylim(-zlims-0.25, zlims+0.25) ax3.set_ylim(-xlims-0.25, xlims+0.25) planets_pos_xy = [] planets_pos_yz = [] planets_pos_zx = [] planets_xy = [] planets_yz = [] planets_zx = [] com_xy, = ax1.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) com_yz, = ax2.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) com_zx, = ax3.plot([], [], color='k', linestyle='', marker='o', markersize=5.0, zorder=10) event_counter = 0 close_encounter_xy = [] close_encounter_yz = [] close_encounter_zx = [] for i in range(len(planets)): close_encounter_xy.append(ax1.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) close_encounter_yz.append(ax2.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) close_encounter_zx.append(ax3.plot([], [], color=f"k", marker='x', markersize=3.0, linestyle='', zorder=9)[0]) for i in range(a.y.shape[1]): planets_xy.append(ax1.plot([], [], color=f"C{i}", zorder=8)[0]) planets_yz.append(ax2.plot([], [], color=f"C{i}", zorder=8)[0]) planets_zx.append(ax3.plot([], [], color=f"C{i}", zorder=8)[0]) planets_pos_xy.append(ax1.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) planets_pos_yz.append(ax2.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) planets_pos_zx.append(ax3.plot([], [], color=f"C{i}", linestyle='', marker='.', zorder=8)[0]) def init(): global event_counter for i in range(len(planets)): planets_xy[i].set_data([], []) planets_yz[i].set_data([], []) planets_zx[i].set_data([], []) planets_pos_xy[i].set_data([], []) planets_pos_yz[i].set_data([], []) planets_pos_zx[i].set_data([], []) com_xy.set_data([], []) com_yz.set_data([], []) com_zx.set_data([], []) for i in range(len(planets)): close_encounter_xy[i].set_data(a.events[event_counter].y[i, 0], a.events[event_counter].y[i, 1]) 
close_encounter_yz[i].set_data(a.events[event_counter].y[i, 1], a.events[event_counter].y[i, 2]) close_encounter_zx[i].set_data(a.events[event_counter].y[i, 2], a.events[event_counter].y[i, 0]) return tuple(planets_xy + planets_yz + planets_zx + planets_pos_xy + planets_pos_yz + planets_pos_zx + [com_xy, com_yz, com_zx] + [close_encounter_xy, close_encounter_yz, close_encounter_zx]) def animate(frame_num): global event_counter for i in range(len(planets)): planets_xy[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 0], planets[i][max(frame_num-5, 0):frame_num, 1]) planets_yz[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 1], planets[i][max(frame_num-5, 0):frame_num, 2]) planets_zx[i].set_data(planets[i][max(frame_num-5, 0):frame_num, 2], planets[i][max(frame_num-5, 0):frame_num, 0]) planets_pos_xy[i].set_data(planets[i][frame_num:frame_num+1, 0], planets[i][frame_num:frame_num+1, 1]) planets_pos_yz[i].set_data(planets[i][frame_num:frame_num+1, 1], planets[i][frame_num:frame_num+1, 2]) planets_pos_zx[i].set_data(planets[i][frame_num:frame_num+1, 2], planets[i][frame_num:frame_num+1, 0]) com_xy.set_data(com_motion[frame_num:frame_num+1, 0], com_motion[frame_num:frame_num+1, 1]) com_yz.set_data(com_motion[frame_num:frame_num+1, 1], com_motion[frame_num:frame_num+1, 2]) com_zx.set_data(com_motion[frame_num:frame_num+1, 2], com_motion[frame_num:frame_num+1, 0]) if t[frame_num] >= a.events[event_counter].t and event_counter + 1 < len(a.events): event_counter += 1 for i in range(len(planets)): close_encounter_xy[i].set_data(a.events[event_counter].y[i, 0], a.events[event_counter].y[i, 1]) close_encounter_yz[i].set_data(a.events[event_counter].y[i, 1], a.events[event_counter].y[i, 2]) close_encounter_zx[i].set_data(a.events[event_counter].y[i, 2], a.events[event_counter].y[i, 0]) return tuple(planets_xy + planets_yz + planets_zx + planets_pos_xy + planets_pos_yz + planets_pos_zx + [com_xy, com_yz, com_zx] + [close_encounter_xy, close_encounter_yz, 
close_encounter_zx]) ani = animation.FuncAnimation(fig, animate, list(range(1, len(t))), interval=1500./60., blit=False, init_func=init) rc('animation', html='html5') # Uncomment to save an mp4 video of the animation # ani.save('Nbodies.mp4', fps=60) """ Explanation: We see that there are many close encounters and furthermore, the encounters are not restricted to any particular pairs of bodies, but sometimes happen with three bodies simultaneously. We will see this better in the next section where we look at an animation of the bodies. An Animated View of the Bodies The following code, although long, shows how to set up a matplotlib animation showing the bodies in all three planes and the close encounters they experience. End of explanation """ display(ani) """ Explanation: Here we see that the animation slows down whenever the bodies come close to each other and this is due to the adaptive timestepping of the numerical integration which takes more steps whenever there is a close encounter. Each "x" marks the point of all the bodies whenever there is a close encounter. Additionally, it's interesting to see that the center of mass (the large black dot in the center) does not move at all. Since we initialised the system with zero momentum, this is to be expected given that we are not doing anything to violate momentum conservation, but it is good to see that our numerical integration also respects this behaviour. End of explanation """
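The momentum-conservation observation can also be checked directly from the initial conditions; with the equal unit masses used above, the initial velocities (restated here so the check stands alone) sum to zero:

```python
import numpy as np

# Velocity components (vx, vy, vz) of the four bodies, read from initial_state.
velocities = np.array([
    [0.0, -1.0, 0.0],
    [0.0,  1.0, 0.0],
    [0.0,  0.0, 0.0],
    [0.0,  0.0, 0.0],
])
masses = np.array([1.0, 1.0, 1.0, 1.0])

total_momentum = (velocities * masses[:, None]).sum(axis=0)
print(total_momentum)  # [0. 0. 0.]
```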
SParadiso18/juliasets
juliaplots.ipynb
mit
from juliaset import JuliaSet """ Explanation: Julia Set Plotting Extension Load module for a JuliaSet that conforms to the specified interface. It is wise to run the test suite in test_juliaset.py with nosetests prior to attempting to plot here. End of explanation """ # Math libraries import numpy as np from math import sqrt # Matplotlib plotting libraries import matplotlib.pyplot as plt %matplotlib inline # Bokeh plotting libraries import bokeh.plotting as blt blt.output_notebook() """ Explanation: Load additional libraries needed for plotting and profiling. End of explanation """ class JuliaSetPlot(JuliaSet): """Extend JuliaSet to add plotting functionality""" def __init__(self, *args, **kwargs): # Invoke constructor for JuliaSet first, unaltered JuliaSet.__init__(self, *args, **kwargs) # Add another attribute: a rendered image array self.img = np.array([]) def get_dim(self): """Return linear number of points in axis""" return int(4.0 / self._d) def render(self): """Render image as square array of ints""" if not self.set: self.generate() # Convert inefficient list to efficient numpy array self.img = np.array(self.set) # Reshape array into a 2d complex plane dim = int(sqrt(len(self.img))) self.img = np.reshape(self.img, (dim,dim)).T def show(self): """Use matplotlib to plot image as an efficient mesh""" if not self.img.size: self.render() plt.figure(1, figsize=(12,9)) xy = np.linspace(-2,2,self.get_dim()) plt.pcolormesh(xy, xy, self.img, cmap=plt.cm.hot) plt.colorbar() plt.show() def interact(self): """Use bokeh to plot an interactive image""" from matplotlib.colors import rgb2hex if not self.img.size: self.render() # Mimic matplotlib "hot" color palette colormap = plt.cm.get_cmap("hot") bokehpalette = [rgb2hex(m) for m in colormap(np.arange(colormap.N))] # Create bokeh figure f = blt.figure(x_range=(-2,2), y_range=(-2,2), plot_width=600, plot_height=600) f.image(image=[self.img], x=[-2], y=[-2], dw=[4], dh=[4], palette=bokehpalette, dilate=True) blt.show(f) """ 
Explanation: Extend JuliaSet class with additional functionality. End of explanation """ j = JuliaSetPlot(-1.037 + 0.17j) %time j.set_spacing(0.006) %time j.generate() %time j.show() """ Explanation: Visualize a Julia set using matplotlib. End of explanation """ j = JuliaSetPlot(-0.624 + 0.435j) %time j.set_spacing(0.006) %time j.generate() %time j.interact() %prun j.generate() %load_ext line_profiler %lprun -f j.generate j.generate() """ Explanation: Visualize a different Julia set using Bokeh as an interactive Javascript plot. End of explanation """
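The core trick in render() above — turning the flat list of per-point results into a square image array — can be checked in isolation. The 4x4 grid of fake iteration counts below is a made-up stand-in for a real `JuliaSet.set` list:

```python
import numpy as np

# Fake iteration counts for a 4x4 grid, flattened row-major --
# a stand-in for the JuliaSet.set list consumed by render() above.
flat = list(range(16))

dim = int(np.sqrt(len(flat)))                    # infer the square dimension
img = np.reshape(np.array(flat), (dim, dim)).T   # reshape and transpose, as in render()

print(img.shape)   # (4, 4)
```

The transpose matters: it swaps row and column order so the rendered array indexes as (x, y) rather than (y, x).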
phobson/statsmodels
examples/notebooks/tsa_arma_1.ipynb
bsd-3-clause
%matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import pandas as pd
from statsmodels.tsa.arima_process import arma_generate_sample
np.random.seed(12345)
"""
Explanation: Autoregressive Moving Average (ARMA): Artificial data
End of explanation
"""
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
"""
Explanation: Generate some data from an ARMA process:
End of explanation
"""
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
"""
Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated.
End of explanation
"""
dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = pd.Series(y, index=dates)  # pd.TimeSeries was removed from pandas; use pd.Series
arma_mod = sm.tsa.ARMA(y, order=(2,2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
y.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,8))
fig = arma_res.plot_predict(start='1999m6', end='2001m5', ax=ax)
legend = ax.legend(loc='upper left')
"""
Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
End of explanation
"""
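The sign convention described above is easy to verify by hand: `arma_generate_sample` expects the coefficient arrays of the lag polynomials, which is why a leading 1 is prepended and the AR parameters are negated.

```python
import numpy as np

arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])

# AR lag polynomial 1 - 0.75L + 0.25L^2 -> coefficients [1, -0.75, 0.25]
ar = np.r_[1, -arparams]
# MA lag polynomial 1 + 0.65L + 0.35L^2 -> coefficients [1, 0.65, 0.35]
ma = np.r_[1, maparams]

print(ar)
print(ma)
```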
ttsuchi/ttsuchi.github.io
notebooks/PCA.ipynb
mit
from numpy.random import standard_normal # Gaussian variables
N = 1000; P = 5
X = standard_normal((N, P))
W = X - X.mean(axis=0,keepdims=True)
print(dot(W[:,0], W[:,1]))
"""
Explanation: Principal Component Analysis and EigenFaces
In this notebook, I will go through the basic concepts behind the principal component analysis (PCA). I will then apply PCA to a face dataset to find the characteristic faces ("eigenfaces").
What is PCA?
PCA is a linear transformation. Suppose I have an $N \times P$ data matrix ${\bf X}$, where $N$ is the number of samples and $P$ is the dimension of each sample. Then PCA will find you a $K \times P$ matrix ${\bf V}$ such that
$$ \underbrace{{\bf X}}_{N \times P} = \underbrace{{\bf S}}_{N \times K} \underbrace{{\bf V}}_{K \times P}. $$
Here, $K$ is the number of principal components with $K \le P$.
But what does the V matrix do?
${\bf V}$ can be thought of in many different ways. The first way is to think of it as a de-correlating transformation: originally, each variable (or dimension) in ${\bf X}$ - there are $P$ of them - may be correlated. That is, if I take any two column vectors of ${\bf X}$, say ${\bf x}_0$ and ${\bf x}_1$, their covariance is not going to be zero. Let's try this on randomly generated data:
End of explanation
"""
from sklearn.decomposition import PCA
S=PCA(whiten=True).fit_transform(X)
print(dot(S[:,0], S[:,1]))
"""
Explanation: I'll skip ahead and use a pre-canned PCA routine from scikit-learn (but I'll dig into it a bit later!) 
Let's see what happens to the transformed variables, ${\bf S}$:
End of explanation
"""
from numpy.random import standard_normal
from matplotlib.patches import Ellipse
from numpy.linalg import svd

@interact
def plot_2d_pca(mu_x=FloatSlider(min=-3.0, max=3.0, value=0),
                mu_y=FloatSlider(min=-3.0, max=3.0, value=0),
                sigma_x=FloatSlider(min=0.2, max=1.8, value=1.8),
                sigma_y=FloatSlider(min=0.2, max=1.8, value=0.3),
                theta=FloatSlider(min=0.0, max=pi, value=pi/6),
                center=False):
    mu=array([mu_x, mu_y])
    sigma=array([sigma_x, sigma_y])
    R=array([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]])
    X=dot(standard_normal((1000, 2)) * sigma[newaxis,:],R.T) + mu[newaxis,:]

    # Plot the points and the ellipse
    fig, ax = plt.subplots(figsize=(8,8))
    ax.scatter(X[:200,0], X[:200,1], marker='.')
    ax.grid()
    M=8.0
    ax.set_xlim([-M,M])
    ax.set_ylim([-M,M])
    e=Ellipse(xy=array([mu_x, mu_y]), width=sigma_x*3, height=sigma_y*3, angle=theta/pi*180, facecolor=[1.0,0,0], alpha=0.3)
    ax.add_artist(e)

    # Perform PCA and plot the vectors
    if center:
        X_mean=X.mean(axis=0,keepdims=True)
    else:
        X_mean=zeros((1,2))
    # Doing PCA here... I'm using svd instead of scikit-learn PCA, I'll come back to this.
    U,s,V = svd(X-X_mean, full_matrices=False)
    for v in dot(diag(s/sqrt(X.shape[0])),V):
        # Each eigenvector
        ax.arrow(X_mean[0,0],X_mean[0,1],-v[0],-v[1], head_width=0.5, head_length=0.5, fc='b', ec='b')
    Ustd=U.std(axis=0)
    ax.set_title('std(U*s) [%f,%f]' % (Ustd[0]*s[0],Ustd[1]*s[1]))
"""
Explanation: Another way to look at ${\bf V}$ is to think of them as projections. Since the row vectors of ${\bf V}$ are orthogonal to each other, the projected data ${\bf S}$ lies in a new "coordinate system" specified by ${\bf V}$. Furthermore, the new coordinate system is sorted in decreasing order of variance in the original data. So, PCA can be thought of as calculating a new coordinate system where the basis vectors point toward the directions of largest variance first. 
<img src="files/images/PCA/pca.png" style="margin:auto; width: 483px;"/>
Exercise 1. Let's get a feel for this in the following interactive example. Try moving the sliders around to generate the data, and see how the principal component vectors change. In this demo, mu_x and mu_y specify the center of the data, sigma_x and sigma_y the standard deviations, and everything is rotated by the angle theta. The two blue arrows are the rows of ${\bf V}$ that get calculated. When you click on center, the data is first centered (the mean is subtracted from the data). (Question: why is it necessary to "center" data when mu_x and mu_y are not zero?)
End of explanation
"""
import pickle
dataset=pickle.load(open('data/cafe.pkl','r'))
disp('dataset.images shape is %s' % str(dataset.images.shape))
disp('dataset.data shape is %s' % str(dataset.data.shape))
@interact
def plot_face(image_id=(0, dataset.images.shape[0]-1)):
    plt.imshow(dataset.images[image_id],cmap='gray')
    plt.title('Image Id = %d, Gender = %d' % (dataset.target[image_id], dataset.gender[image_id]))
    plt.axis('off')
"""
Explanation: Yet another use for ${\bf V}$ is to perform a dimensionality reduction. In many scenarios you encounter in image manipulation (as I'll see soon), I might want to have a more concise representation of the data ${\bf X}$. PCA with $K < P$ is one way to reduce the dimensionality: because PCA picks the directions with the highest data variances, a small number of top $K$ rows is often sufficient to approximate (reconstruct) ${\bf X}$.
How do I actually perform PCA?
Well, we can use from sklearn.decomposition import PCA. But for learning, let's dig just one step into what it actually does.
One of the easiest ways to perform PCA is to use the singular value decomposition (SVD).
SVD decomposes a matrix ${\bf X}$ into a unitary matrix ${\bf U}$, a rectangular diagonal matrix ${\bf \Sigma}$ (the "singular values"), and another unitary matrix ${\bf W}$ such that
$$ {\bf X} = {\bf U} {\bf \Sigma} {\bf W}$$
So how can I use that to do PCA? Well, it turns out the SVD gives exactly what I need to calculate the ${\bf V}$ matrix for the PCA: the rows of ${\bf W}$ are the principal directions, so I just have to run SVD and take ${\bf V} = {\bf W}$ (the scores are then ${\bf S} = {\bf U}{\bf \Sigma}$). (Note: svd of numpy returns only the diagonal elements of ${\bf \Sigma}$.)
Exercise 2. Generate 1000 samples of 10-dimensional data and perform PCA this way. Plot the squares of the singular values.
To reduce the $P$-dimensional data ${\bf X}$ to $K$ dimensions, I just need to pick the top $K$ row vectors of ${\bf V}$ - call that submatrix ${\bf V}_K$ - then calculate ${\bf T} = {\bf X} {\bf V}_K^\intercal$. ${\bf T}$ then has the dimension $N \times K$. If I want to reconstruct the data from ${\bf T}$, I simply do ${\hat {\bf X}} = {\bf T} {\bf V}_K$ (and re-add the means for ${\bf X}$, if necessary).
Exercise 3. Reduce the same data to 5 dimensions, then based on the projected data ${\bf T}$, reconstruct ${\bf X}$. What's the mean squared error of the reconstruction?
Performing PCA on a face dataset
Now that I have a handle on the PCA method, let's try applying it to a dataset consisting of face data. I will use the CAlifornia FAcial expressions dataset (CAFE) from http://cseweb.ucsd.edu/~gary/CAFE/ . The following code loads the dataset into the dataset variable:
End of explanation
"""
X=dataset.data.copy() # So that I won't mess up the data in the dataset
X_mean=X.mean(axis=0,keepdims=True) # Mean for each dimension across samples (centering)
X_std=X.std(axis=0,keepdims=True)
X-=X_mean
disp(all(abs(X.mean(axis=0))<1e-12)) # Are means for all dimensions very close to zero?
"""
Explanation: Preprocessing
I'll center the data by subtracting the mean. The first axis (axis=0) is the n_samples dimension.
End of explanation
"""
from numpy.linalg import svd
U,s,V=svd(X,compute_uv=True, full_matrices=False)
disp(str(U.shape))
disp(str(s.shape))
disp(str(V.shape))
"""
Explanation: Then I perform SVD to calculate the projection matrix $V$. By default, U,s,V=svd(...) returns full matrices: an $n \times n$ matrix U, an $n$-dimensional vector of singular values s, and a $d \times d$ matrix V. But here, I don't really need the full $d \times d$ matrix V; with full_matrices=False, svd only returns an $n \times d$ matrix for V.
End of explanation
"""
variance_ratio=s**2/(s**2).sum() # Normalized so that they add to one.
@interact
def plot_variance_ratio(n_components=(1, len(variance_ratio))):
    n=n_components-1
    fig, axs = plt.subplots(1, 2, figsize=(12, 5))
    axs[0].plot(variance_ratio)
    axs[0].set_title('Explained Variance Ratio')
    axs[0].set_xlabel('n_components')
    axs[0].axvline(n, color='r', linestyle='--')
    axs[0].axhline(variance_ratio[n], color='r', linestyle='--')
    axs[1].plot(cumsum(variance_ratio))
    axs[1].set_xlabel('n_components')
    axs[1].set_title('Cumulative Sum')
    captured=cumsum(variance_ratio)[n]
    axs[1].axvline(n, color='r', linestyle='--')
    axs[1].axhline(captured, color='r', linestyle='--')
    axs[1].annotate(s='%f%% with %d components' % (captured * 100, n_components), xy=(n, captured), xytext=(10, 0.5), arrowprops=dict(arrowstyle="->"))
"""
Explanation: I can also plot how much each eigenvector in V contributes to the overall variance by plotting variance_ratio = $\frac{s^2}{\sum s^2}$. (Notice that s is already in decreasing order.) The cumsum (cumulative sum) of variance_ratio then shows how much of the variance is explained by components up to n_components. 
End of explanation
"""
image_shape=dataset.images.shape[1:] # (H x W)
@interact
def plot_eigenface(eigenface=(0, V.shape[0]-1)):
    v=V[eigenface]*X_std
    plt.imshow(v.reshape(image_shape), cmap='gray')
    plt.title('Eigenface %d (%f to %f)' % (eigenface, v.min(), v.max()))
    plt.axis('off')
"""
Explanation: Since I'm dealing with face data, each row vector of ${\bf V}$ is called an "eigenface". The first "eigenface" is the one that explains the most variance in the data, whereas the last one explains the least.
End of explanation
"""
@interact
def plot_reconstruction(image_id=(0,dataset.images.shape[0]-1),
                        n_components=(0, V.shape[0]-1),
                        pc1_multiplier=FloatSlider(min=-2,max=2, value=1)):
    # This is where I perform the projection and un-projection
    Vn=V[:n_components]
    M=ones(n_components)
    if n_components > 0:
        M[0]=pc1_multiplier
    X_hat=dot(multiply(dot(X[image_id], Vn.T), M), Vn)
    # Un-center
    I=X[image_id] + X_mean
    I_hat = X_hat + X_mean
    D=multiply(I-I_hat,I-I_hat) / multiply(X_std, X_std)
    # And plot
    fig, axs = plt.subplots(1, 3, figsize=(10, 10))
    axs[0].imshow(I.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
    axs[0].axis('off')
    axs[0].set_title('Original')
    axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
    axs[1].axis('off')
    axs[1].set_title('Reconstruction')
    axs[2].imshow(1-D.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
    axs[2].axis('off')
    axs[2].set_title('Difference^2 (mean = %f)' % sqrt(D.mean()))
    plt.tight_layout()
"""
Explanation: Now I'll try reconstructing faces with different numbers of principal components (PCs)! The transformed X is reconstructed by re-adding the sample mean for each dimension, so even for zero components you get a face-like image! The rightmost plot is the "relative" reconstruction error (image minus the reconstruction, squared, divided by the data standard deviations). 
White is where the error is close to zero, and black is where the relative error is large (1 or more). As you increase the number of PCs, you should see the error mostly going to zero (white). End of explanation """ def plot_morph(left=0, right=1, mix=0.5): # Projected images x_lft=dot(X[left], V.T) x_rgt=dot(X[right], V.T) # Mix x_avg = x_lft * (1.0-mix) + x_rgt * (mix) # Un-project X_hat = dot(x_avg[newaxis,:], V) I_hat = X_hat + X_mean # And plot fig, axs = plt.subplots(1, 3, figsize=(10, 10)) axs[0].imshow(dataset.images[left], cmap='gray', vmin=0, vmax=1) axs[0].axis('off') axs[0].set_title('Left') axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1) axs[1].axis('off') axs[1].set_title('Morphed (%.2f %% right)' % (mix * 100)) axs[2].imshow(dataset.images[right], cmap='gray', vmin=0, vmax=1) axs[2].axis('off') axs[2].set_title('Right') plt.tight_layout() interact(plot_morph, left=IntSlider(max=dataset.images.shape[0]-1), right=IntSlider(max=dataset.images.shape[0]-1,value=1), mix=FloatSlider(value=0.5, min=0, max=1.0)) """ Explanation: Image morphing As a fun exercise, I'll morph two images by taking averages of the two images within the transformed data space. How is it different than simply morphing them in the pixel space? End of explanation """
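The reduce/reconstruct pipeline used throughout this notebook fits in a few lines of plain numpy. This is a sketch on synthetic Gaussian data, not the CAFE faces:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 10)                       # synthetic stand-in data
X_mean = X.mean(axis=0, keepdims=True)
Xc = X - X_mean                              # center, as in the preprocessing step

U, s, V = np.linalg.svd(Xc, full_matrices=False)   # rows of V = principal directions

K = 5
Vk = V[:K]                                   # keep the top-K components
T = Xc.dot(Vk.T)                             # project: N x K scores
X_hat = T.dot(Vk) + X_mean                   # reconstruct and un-center

mse = np.mean((X - X_hat) ** 2)              # reconstruction error with K=5

# With all components kept, the reconstruction is numerically exact
X_full = Xc.dot(V.T).dot(V) + X_mean
print(mse, np.allclose(X_full, X))
```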
ucsd-ccbb/jupyter-genomics
notebooks/networkAnalysis/drug_gene_networks/drug_gene_networks.ipynb
mit
# import some useful packages import numpy as np import matplotlib.pyplot as plt import seaborn import networkx as nx import pandas as pd import random import json # latex rendering of text in graphs import matplotlib as mpl mpl.rc('text', usetex = False) mpl.rc('font', family = 'serif') % matplotlib inline """ Explanation: Initial investigation of drug-gene networks Brin Rosenthal (sbrosenthal@ucsd.edu) April 15, 2016 Prototype for tool to be added to Search Goals: Input a gene list Use the DrugBank database to suggest drugs related to genes in input list Note: data files and code for this notebook may be found in the 'data' and 'source' directories End of explanation """ # load the module import sys sys.path.append('../source') import drug_gene_heatprop import imp imp.reload(drug_gene_heatprop) path_to_DB_file = '../drugbank.0.json.new' # set path to drug bank file path_to_cluster_file = 'sample_matrix.csv' # set path to cluster file seed_genes = ['LETM1','RPL3','GRK4','RWDD4A'] # set seed genes (must be in cluster) gene_drug_df = drug_gene_heatprop.drug_gene_heatprop(seed_genes,path_to_DB_file,path_to_cluster_file, plot_flag=True) gene_drug_df.head(25) """ Explanation: Test drug_gene_heatprop module This section runs the inferred drug heat propagation module from a list of seed genes, and returns a list of genes ranked by their 'heat', or proximity to the seed gene set. These are the genes which we think will be most related to the seed genes. For a more detailed, step by step description of the process, continue reading past this section. 
End of explanation """ def load_DB_data(fname): ''' Load and process the drug bank data ''' with open(fname, 'r') as f: read_data = f.read() f.closed si = read_data.find('\'\n{\n\t"source":') sf = read_data.find('\ncurl') DBdict = dict() # fill in DBdict while si > 0: db_temp = json.loads(read_data[si+2:sf-2]) DBdict[db_temp['drugbank_id']]=db_temp # update read_data read_data = read_data[sf+10:] si = read_data.find('\'\n{\n\t"source":') sf = read_data.find('\ncurl') return DBdict DBdict = load_DB_data('/Users/brin/Documents/DrugBank/drugbank.0.json.new') # make a network out of drug-gene interactions DB_el = [] for d in DBdict.keys(): node_list = DBdict[d]['node_list'] for n in node_list: DB_el.append((DBdict[d]['drugbank_id'],n['name'])) G_DB = nx.Graph() G_DB.add_edges_from(DB_el) gene_nodes,drug_nodes = nx.bipartite.sets(G_DB) gene_nodes = list(gene_nodes) drug_nodes = list(drug_nodes) """ Explanation: More detailed description of methods below... Load the drug bank database, and create a network out of it Network is bipartite with types: Drugs Genes which are acted on by each drug End of explanation """ print('--> there are '+str(len(gene_nodes)) + ' genes with ' + str(len(drug_nodes)) + ' corresponding drugs') DB_degree = pd.Series(G_DB.degree()) DB_degree.sort(ascending=False) plt.figure(figsize=(18,5)) plt.bar(np.arange(70),DB_degree.head(70),width=.5) tmp = plt.xticks(np.arange(70)+.4,list(DB_degree.head(70).index),rotation=90,fontsize=11) plt.xlim(1,71) plt.ylim(0,200) plt.grid('off') plt.ylabel('number of connections (degree)',fontsize=16) """ Explanation: What is this drug-gene graph like? how sparse is it? Are there genes/drugs that have many connections? 
End of explanation """ # load a sample cluster for network visualization sample_genes = pd.read_csv('/Users/brin/Documents/DrugBank/sample_cluster.csv',header=None) sample_genes = list(sample_genes[0]) # also include neighbor genes neighbor_genes = [nx.neighbors(G_DB,x) for x in sample_genes if x in G_DB.nodes()] neighbor_genes = [val for sublist in neighbor_genes for val in sublist] sub_genes = [] sub_genes.extend(sample_genes) sub_genes.extend(neighbor_genes) G_DB_sample = nx.subgraph(G_DB,sub_genes) drug_nodes = list(np.intersect1d(neighbor_genes,G_DB.nodes())) gene_nodes = list(np.intersect1d(sample_genes,G_DB.nodes())) # return label positions offset by dx def calc_pos_labels(pos,dx=.03): # input node positions from nx.spring_layout() pos_labels = dict() for key in pos.keys(): pos_labels[key] = np.array([pos[key][0]+dx,pos[key][1]+dx]) return pos_labels pos = nx.spring_layout(G_DB_sample,k=.27) pos_labels = calc_pos_labels(pos) plt.figure(figsize=(14,14)) nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = drug_nodes,node_shape='s',node_size=80,alpha=.7,label='drugs') nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = gene_nodes,node_shape='o',node_size=80,node_color='blue',alpha=.7,label='genes') nx.draw_networkx_edges(G_DB_sample,pos=pos,alpha=.5) nx.draw_networkx_labels(G_DB_sample,pos=pos_labels,font_size=10) plt.grid('off') plt.legend(fontsize=12) plt.title('Adrenocortical carcinoma cluster 250250',fontsize=16) """ Explanation: But we probably want to focus on within-cluster interactions, instead of the whole graph Download a sample cluster from geneli.st (Adrenocortical carcinoma cluster 250250) Extract a subnetwork from total drug-gene network containing only the genes from this cluster, and associated drugs Plot this subnetwork End of explanation """ sample_mat = pd.read_csv('/Users/brin/Documents/DrugBank/sample_matrix.csv',index_col=0) print(sample_mat.head()) idx_to_node = dict(zip(range(len(sample_mat)),list(sample_mat.index))) sample_mat = 
np.array(sample_mat) sample_mat = sample_mat[::-1,0:-1] # reverse the indices for use in graph creation """ Explanation: Above, we plot the drug-gene interaction network for our sample cluster We're showing only the genes that have associated drugs, in the cluster This is one option for exploring drug-gene interaction space, as the sparseness of drugs/genes per cluster allows for easy visualization Another option... heat propagation Run heat propagation from seed nodes on a sample cluster, to prioritize genes (and their associated drugs) similar to seed node set Some questions to resolve: - ### How should we handle negative edge weights? - ### Should we return drugs associated with individual genes, or drugs most associated with total input gene list? End of explanation """ plt.figure(figsize=(7,7)) plt.matshow(sample_mat,cmap='bwr',vmin=-1,vmax=1,fignum=False) plt.grid('off') plt.title('Adrenocortical carcinoma cluster 250250',fontsize='16') """ Explanation: First let's plot the focal cluster of interest (Adrenocortical carcinoma cluster 250250) End of explanation """ G_cluster = nx.Graph() G_cluster = nx.from_numpy_matrix(np.abs(sample_mat)) G_cluster = nx.relabel_nodes(G_cluster,idx_to_node) pos = nx.spring_layout(G_cluster,k=.4) seed_genes = ['STIM2','USP46','FRYL','COQ2'] #['STIM2','USP46'] # input gene list here plt.figure(figsize=(10,10)) nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.5,node_color='blue') nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2) nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03) plt.grid('off') plt.title('Sample subnetwork: pre-heat propagation',fontsize=16) Wprime = network_prop.normalized_adj_matrix(G_cluster,weighted=True) Fnew = network_prop.network_propagation(G_cluster,Wprime,seed_genes) plt.figure(figsize=(10,10)) nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03) 
nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.8,node_color=Fnew[G_cluster.nodes()],cmap='jet', vmin=0,vmax=.005) nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2) plt.grid('off') plt.title('Sample subnetwork: post-heat propagation',fontsize=16) N = 50 Fnew.sort(ascending=False) print('Top N hot genes: ') Fnew.head(N) # plot the hot subgraph in gene-gene space G_cluster_sub = nx.subgraph(G_cluster,list(Fnew.head(N).index)) pos = nx.spring_layout(G_cluster_sub,k=.5) plt.figure(figsize=(10,10)) nx.draw_networkx_nodes(G_cluster_sub,pos=pos,node_size=100,node_color=Fnew[G_cluster_sub.nodes()],cmap='jet', vmin=0,vmax=.005) nx.draw_networkx_edges(G_cluster_sub,pos=pos,alpha=.05) pos_labels = calc_pos_labels(pos,dx=.05) nx.draw_networkx_labels(G_cluster_sub,pos=pos_labels) plt.grid('off') plt.title('Sample cluster: hot subnetwork \n (genes most related to input list)', fontsize=16) """ Explanation: Now we will convert the cluster correlation matrix back to network form End of explanation """ top_N_genes = list(Fnew.head(N).index) top_N_genes = list(np.setdiff1d(top_N_genes,seed_genes)) # only keep non-seed genes top_N_genes = Fnew[top_N_genes] top_N_genes.sort(ascending=False) top_N_genes = list(top_N_genes.index) drug_candidates_list = seed_genes # build up a list of genes and drugs that may be related to input list for g in top_N_genes: if g in G_DB.nodes(): # check if g is in drugbank graph drug_candidates_list.append(g) drug_neighs_temp = list(nx.neighbors(G_DB,g)) drug_candidates_list.extend(drug_neighs_temp) # make a subgraph of these drug/gene candidates G_DB_sub = nx.subgraph(G_DB,drug_candidates_list) # define drug_nodes and gene_nodes from the subgraph drug_nodes = list(np.intersect1d(neighbor_genes,G_DB_sub.nodes())) gene_nodes = list(np.intersect1d(sample_genes,G_DB_sub.nodes())) plt.figure(figsize=(12,12)) pos = nx.spring_layout(G_DB_sub) pos_labels = calc_pos_labels(pos,dx=.05) 
nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=gene_nodes,node_size=100,alpha=.7,node_color='blue',label='genes') nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=drug_nodes,node_size=100,alpha=.7,node_color='red',node_shape='s',label='drugs') nx.draw_networkx_edges(G_DB_sub,pos=pos,alpha=.5) nx.draw_networkx_labels(G_DB_sub,pos=pos_labels,font_color='black') plt.grid('off') ax_min = np.min(pos.values())-.3 ax_max = np.max(pos.values())+.3 plt.xlim(ax_min,ax_max) plt.ylim(ax_min,ax_max) plt.legend(fontsize=14) #plt.axes().set_aspect('equal') plt.title('Genes in hot subnetwork with associated drugs', fontsize=16) """ Explanation: Now let's look up the drugs associated with these genes to see if there are any good candidates End of explanation """
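The propagation step above relies on the external network_prop module, but the underlying iteration is small enough to sketch directly. Here it is on a made-up 4-node path graph with node 0 as the only seed; the value of alpha and the column normalization are illustrative assumptions, not necessarily the settings network_prop uses:

```python
import numpy as np

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
W = A / A.sum(axis=0, keepdims=True)   # column-normalized adjacency

F0 = np.array([1., 0., 0., 0.])        # all heat starts on the seed (node 0)
F = F0.copy()
alpha = 0.5                            # diffusion vs. restart trade-off
for _ in range(50):                    # iterate to (approximate) convergence
    F = alpha * W.dot(F) + (1 - alpha) * F0

print(F)   # heat decreases with distance from the seed
```

Ranking the non-seed nodes by F is exactly the "hot gene" prioritization performed above, just without the plotting.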
stevetjoa/stanford-mir
audio_representation.ipynb
mit
x, sr = librosa.load('audio/c_strum.wav') ipd.Audio(x, rate=sr) """ Explanation: &larr; Back to Index Audio Representation In performance, musicians convert sheet music representations into sound which is transmitted through the air as air pressure oscillations. In essence, sound is simply air vibrating (Wikipedia). Sound vibrates through the air as longitudinal waves, i.e. the oscillations are parallel to the direction of propagation. Audio refers to the production, transmission, or reception of sounds that are audible by humans. An audio signal is a representation of sound that represents the fluctuation in air pressure caused by the vibration as a function of time. Unlike sheet music or symbolic representations, audio representations encode everything that is necessary to reproduce an acoustic realization of a piece of music. However, note parameters such as onsets, durations, and pitches are not encoded explicitly. This makes converting from an audio representation to a symbolic representation a difficult and ill-defined task. Waveforms and the Time Domain The basic representation of an audio signal is in the time domain. Let's listen to a file: End of explanation """ plt.figure(figsize=(15, 5)) librosa.display.waveplot(x, sr, alpha=0.8) """ Explanation: (If you get an error using librosa.load, you may need to install ffmpeg.) The change in air pressure at a certain time is graphically represented by a pressure-time plot, or simply waveform. To plot a waveform, use librosa.display.waveplot: End of explanation """ ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/ADSR_parameter.svg/640px-ADSR_parameter.svg.png") """ Explanation: Digital computers can only capture this data at discrete moments in time. The rate at which a computer captures audio data is called the sampling frequency (often abbreviated fs) or sampling rate (often abbreviated sr). 
For this workshop, we will mostly work with a sampling frequency of 44100 Hz, the sampling rate of CD recordings. Timbre: Temporal Indicators Timbre is the quality of sound that distinguishes the tone of different instruments and voices even if the sounds have the same pitch and loudness. One characteristic of timbre is its temporal evolution. The envelope of a signal is a smooth curve that approximates the amplitude extremes of a waveform over time. Envelopes are often modeled by the ADSR model (Wikipedia) which describes four phases of a sound: attack, decay, sustain, release. During the attack phase, the sound builds up, usually with noise-like components over a broad frequency range. Such a noise-like short-duration sound at the start of a sound is often called a transient. During the decay phase, the sound stabilizes and reaches a steady periodic pattern. During the sustain phase, the energy remains fairly constant. During the release phase, the sound fades away. The ADSR model is a simplification and does not necessarily model the amplitude envelopes of all sounds. End of explanation """ T = 2.0 # seconds f0 = 1047.0 sr = 22050 t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.1*numpy.sin(2*numpy.pi*f0*t) ipd.Audio(x, rate=sr) """ Explanation: Timbre: Spectral Indicators Another property used to characterize timbre is the existence of partials and their relative strengths. Partials are the dominant frequencies in a musical tone with the lowest partial being the fundamental frequency. The partials of a sound are visualized with a spectrogram. A spectrogram shows the intensity of frequency components over time. (See Fourier Transform and Short-Time Fourier Transform for more.) 
Pure Tone Let's synthesize a pure tone at 1047 Hz, concert C6: End of explanation """ X = scipy.fft(x[:4096]) X_mag = numpy.absolute(X) # spectral magnitude f = numpy.linspace(0, sr, 4096) # frequency variable plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)') """ Explanation: Display the spectrum of the pure tone: End of explanation """ x, sr = librosa.load('audio/oboe_c6.wav') ipd.Audio(x, rate=sr) print(x.shape) """ Explanation: Oboe Let's listen to an oboe playing a C6: End of explanation """ X = scipy.fft(x[10000:14096]) X_mag = numpy.absolute(X) plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)') """ Explanation: Display the spectrum of the oboe: End of explanation """ x, sr = librosa.load('audio/clarinet_c6.wav') ipd.Audio(x, rate=sr) print(x.shape) X = scipy.fft(x[10000:14096]) X_mag = numpy.absolute(X) plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)') """ Explanation: Clarinet Let's listen to a clarinet playing a concert C6: End of explanation """
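The spectrum cells above build the frequency axis by hand with scipy.fft; numpy's real-FFT helpers do the same job. A quick self-check on a synthetic tone, with the sampling rate and frequency copied from the pure-tone example:

```python
import numpy as np

sr = 22050
f0 = 1047.0                             # concert C6
t = np.arange(4096) / float(sr)
x = 0.1 * np.sin(2 * np.pi * f0 * t)

X_mag = np.abs(np.fft.rfft(x))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0/sr)

peak = freqs[np.argmax(X_mag)]
print(peak)                             # within one bin (sr/4096, about 5.4 Hz) of f0
```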
kraemerd17/kraemerd17.github.io
courses/python/material/ipynbs/NumPy Basics.ipynb
mit
import numpy as np
"""
Explanation: NumPy: Vectorized Array Processing in Python
NumPy, short for Numerical Python, is the fundamental package required for high performance scientific computing and data analysis. It is the foundation on which nearly all of the higher-level tools we will use are built. Here are some of the things it provides:

ndarray, a fast and space-efficient multidimensional array providing vectorized arithmetic operations and sophisticated broadcasting capabilities. We use "array" to refer to ndarrays colloquially.
Standard mathematical functions for fast operations on entire arrays of data without having to write loops
Tools for reading / writing array data to disk and working with memory-mapped files
Linear algebra, random number generation, and Fourier transform capabilities
Tools for integrating code written in C, C++, and Fortran

The standard import line for NumPy is the following:
End of explanation
"""
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9] # equivalently, range(1,10)
my_array = np.array(my_list)
"""
Explanation: Arrays
NumPy arrays work fundamentally differently than the standard Python list, though they can be initialized by wrapping around a Python list, as below.
End of explanation
"""
print my_array[0] # returns the first element of my_array
print my_array[3:5] # returns the fourth and fifth elements of my_array
print my_array[-2] # returns the second-to-last element of my_array
"""
Explanation: Arrays are indexed just like in standard Python. For example,
End of explanation
"""
print my_list + my_list
print my_array + my_array
print my_array * 8
"""
Explanation: However, there are some immediate differences between Python lists and NumPy arrays. For example, consider these lines of code
End of explanation
"""
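The contrast just demonstrated — + concatenates lists but adds arrays elementwise — can be stated as assertions:

```python
import numpy as np

my_list = [1, 2, 3]
my_array = np.array(my_list)

assert my_list + my_list == [1, 2, 3, 1, 2, 3]       # list +: concatenation
assert (my_array + my_array).tolist() == [2, 4, 6]   # array +: elementwise
assert (my_array * 8).tolist() == [8, 16, 24]        # array *: broadcast scalar
```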
Initializing NumPy arrays There are four main ways to initialize an array in NumPy. The first is by doing what was shown above, wrapping an NumPy array around an existing Python list. A second common initialization is by using arange, which takes a starting point, an ending point (which is excluded), and an increment value, and generates the array. For example: End of explanation """ np.zeros((2,4)) # note that np.zeros takes a tuple (m,n) """ Explanation: This is similar to Python's range function for lists. Another common way to initialize arrays is to use the zeros function, which generates an array of zeros for a given size. For example: End of explanation """ np.ones((3,1)) # don't forget the tuple! """ Explanation: Finally, sometimes you want to initialize an array to all 1s, so NumPy provides the ones function: End of explanation """ fidentity = np.eye (5, dtype='float16') bidentity = np.eye (5, dtype='bool') sidentity = np.eye (5, dtype='string_') print fidentity, "\n\n" print bidentity, "\n\n" print sidentity """ Explanation: The full list of array creation functions are given in the following table: Array creation functions array Convert input data (list, tuple, array, or other sequence type) to an ndarray either by inferring a dtype or explicitly specifying a dtype. Copies the input data by default asarray Convert input to ndarray, but do not copy if the input is already an ndarray arange Like the built-in range but returns an ndarray instead of a list ones, ones_like Produce an array of all 1's with the given shape and dtype.ones_like takes another array and produces a ones array of the same shape and dtype. 
zeros, zeros_like Like ones and ones_like but producing arrays of 0's instead
empty, empty_like Create new arrays by allocating new memory, but do not populate with any values like ones and zeros
eye, identity Create a square NxN identity matrix (1's on the diagonal and 0's elsewhere)
Be careful using empty, because the returned ndarray is filled with whatever values happen to already be in the allocated memory. It's almost always preferable to initialize an "empty" array with zeros, because then the values are guaranteed.
Data types
If necessary, you can specify the data type in the ndarray by including a dtype= parameter. The acceptable dtypes are:
int8, uint8 (i1, u1) Signed and unsigned 8-bit (1 byte) integer types
int16, uint16 (i2, u2) Signed and unsigned 16-bit integer types
int32, uint32 (i4, u4) Signed and unsigned 32-bit integer types
int64, uint64 (i8, u8) Signed and unsigned 64-bit integer types
float16 (f2) Half-precision floating point
float32 (f4 or f) Standard single-precision floating point. Compatible with C float
float64 (f8 or d) Standard double-precision floating point. Compatible with C double and Python float object
float128 (f16 or g) Extended floating point
complex64, complex128, complex256 (c8, c16, c32) Complex numbers represented by two 32, 64, or 128 bit floats, respectively
bool (?) Boolean type storing True and False values
object (O) Python object type
string_ (S) Fixed-length string type (1 byte per character). For example, to create a string dtype with length 10, use S10
unicode_ (U) Fixed-length unicode type (number of bytes platform specific). Same specification semantics as string_ (e.g. U10).
End of explanation
"""
a = np.array([1., 2., 3.])
"""
Explanation: Accessing array elements
NumPy arrays can be accessed in similar fashion to Python list types.
For example, given an array End of explanation """ print a[0] print a[:1] print a[::-1] """ Explanation: we can access elements in the natural sense: End of explanation """ a = np.array([[1., 2.], [3., 4.]]) a[0] """ Explanation: There are two ways to access elements for multidimensional arrays. The most natural accessing syntax originates from the following observation: End of explanation """ print a[0][0], a[0][1] """ Explanation: In a 2x2 matrix, accessing the first element returns the first row vector of the matrix. Thus, we can treat a[0] as its own array, so we can access any of its elements by using the format: End of explanation """ print a[0,0], a[0,1] """ Explanation: This is called the recursive approach. There is another approach, which is unique to NumPy arrays, which lets you overload the arguments in the square brackets: End of explanation """ myarr = np.arange(1.,11.,1.) print myarr * 2. print myarr - 6. print myarr * myarr print np.log(myarr) """ Explanation: Exercise Take a large array, and use %timeit to determine which accessor approach is more efficient. The moral of the story is, use the NumPy specific approach! Operations between ndarrays and scalars Arrays are important because they enable you to express batch operations on data without writing any for loops. This is usually called vectorization. Any arithmetic operations between equal-size arrays applies the operation elementwise. Try it! Run the following code to get a sense of how ndarray operations with scalars work. End of explanation """ print my_array print my_array[::2] # the sub-array starting at 0 to the end with even index """ Explanation: Exercise Write a function that takes a tuple $(m,n)$ and a number $x$ generates an array of containing the value $x$. Note that you can extract a sub-array from a main array not only by slicing sections but by choosing iterations. 
For example, End of explanation """ def checkerboard(n): checks = np.zeros((n,n)) checks[0::2, 0::2] = 1 checks[1::2, 1::2] = 1 return checks checkerboard(5) """ Explanation: Write a function that declares an $n\times n$ "checkerboard" array of 0s and 1s. End of explanation """ np.random.rand() """ Explanation: Lab: Invertible Matrices NumPy provides powerful methods for randomly-generated arrays using the np.random class. The simplest random-number generation task is to produce a random number $x$ that is a member of the set $[0,1)$; that is, $0 \leq x < 1$. In NumPy, this is achieved by: End of explanation """ np.random.rand(5) """ Explanation: Notice that if you run this cell many times, you get a different result. np.random.rand() takes $n$ arguments which determine the shape of the output. For example, to get a 5-dimensional random vector, we can write End of explanation """ Z = np.array([[0,0,0,0,0,0], [0,0,0,1,0,0], [0,1,0,1,0,0], [0,0,1,1,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0]],dtype='int64') """ Explanation: Each additional argument specifies another "axis" of the array output. So if you give two arguments, it produces a matrix; three arguments produces a "cube" matrix; and $n$ arguments produces an $n$-tensor. The class np.random has many more random array capabilities which you can find here. We will just use np.random.rand for this lab. Exercise Generate a three-dimensional random vector whose entries range between 4 and 8. Generalize this process into a function that produces an $n$-dimensional random vector whose components are elments of $[a,b)$. Main lab Consider the following problem: given an $n\times n$-dimension matrix $A$, what is the probability that $A$ is invertible? One reasonable interpretation of this problem is to ask how frequently a randomly-generated matrix is invertible. 
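One building block worth sketching before diving in (a hedged aside, not part of the original lab text): np.random.rand only produces values in $[0,1)$, so an affine shift maps them onto any $[a,b)$ — including the $[-1,1)$ needed below.

```python
import numpy as np

def uniform_vector(n, a, b):
    """Random n-dimensional vector with components drawn from [a, b)."""
    # rand(n) is in [0, 1); scaling by (b - a) and shifting by a lands in [a, b)
    return a + (b - a) * np.random.rand(n)

v = uniform_vector(3, 4.0, 8.0)
print(v)  # three values, each satisfying 4 <= v_i < 8
```

The same trick with a = -1, b = 1 generates the matrix entries for the lab below.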
Your task is to:
write a function that generates a random $n\times n$ matrix whose values fall in the set $[-1,1)$,
write a function to determine if such a matrix is invertible,
write a function to approximate the probability of such an event by considering $N$ matrices and by recording the number of these that are invertible.
Recall that a matrix $A$ is invertible if and only if $\det(A)\ne 0$. To this end, you might want to check out linalg, NumPy's linear algebra class. Also, because we are dealing with numerical precision, it probably is best not to expect that the determinant will ever actually be 0. Instead, we suggest including an error tolerance.
It turns out that the overwhelming majority of random matrices are invertible. This simulation supports the theory, which you can read about on this StackExchange post. Hopefully, you're beginning to see the power of programming in problem-solving.
Lab: Cellular Automata
(From Nicolas P. Rougier's Numpy tutorial)
We will construct a simulation of John Conway's Game of Life using the skills we have learned about NumPy. The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is the best-known example of a cellular automaton. The "game" is actually a zero-player game, meaning that its evolution is determined by its initial state, needing no input from human players. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.
The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbours, which are the cells that are directly horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
Any live cell with more than three live neighbours dies, as if by overcrowding.
Any live cell with two or three live neighbours lives, unchanged, to the next generation.
Any dead cell with exactly three live neighbours becomes a live cell.
The initial pattern constitutes the 'seed' of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed – births and deaths happen simultaneously, and the discrete moment at which this happens is sometimes called a tick. (In other words, each generation is a pure function of the one before.) The rules continue to be applied repeatedly to create further generations.
Getting started
The first thing to do is to create the proper numpy array to hold the cells. This can be done very easily:
End of explanation
"""
Z = np.array([[0,0,0,0,0,0],
              [0,0,0,1,0,0],
              [0,1,0,1,0,0],
              [0,0,1,1,0,0],
              [0,0,0,0,0,0],
              [0,0,0,0,0,0]], dtype='int64')
"""
Explanation: You don't have to specify the data type, as NumPy will interpret an array of integers as an int64 data type anyway. Sometimes, though, it's good practice to be specific. You can always check what datatype an array is by running
End of explanation
"""
Z.dtype
"""
Explanation: We can also check the shape of the array to make sure it is 6x6:
End of explanation
"""
Z.shape
"""
Explanation: You already know how to access elements of Z. Write a statement to obtain the element in the 3rd row and the 4th column. Better yet, we can access a subpart of the array using the slice notation. Obtain the subarray of Z containing the rows 2-5 and columns 2-5, and set it equal to an ndarray labelled A.
End of explanation
"""
A = Z[1:5,1:5]
"""
Explanation: Be mindful of array pointers! Look at what happens when you run the following code:
End of explanation
"""
A[0, 0] = 9
print(A, "\n\n")
print(Z)
"""
Explanation: We set the value of A[0,0] to 9 and we see an immediate change in Z[1,1] because A[0,0] actually corresponds to Z[1,1].
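When this aliasing is not what you want, an explicit copy decouples the two arrays (a small illustrative sketch, separate from the Z and A defined in the lab):

```python
import numpy as np

Z = np.zeros((6, 6), dtype='int64')
A = Z[1:5, 1:5].copy()   # .copy() allocates new memory instead of a view

A[0, 0] = 9
print(Z[1, 1])           # stays 0: Z is unaffected by changes to A
print(A.base is None)    # True: A owns its data, it is not a view of Z
```

Slicing without .copy() gives a view; slicing with .copy() gives independent storage.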
This may seem trivial with such simple arrays, but things can become much more complex (we'll see that later). If in doubt, you can use ndarray.base to check easily if an array is part of another one:
End of explanation
"""
print(Z.base is None)
print(A.base is Z)
A[0,0] = 0  # put A (and Z) back to normal.
"""
Explanation: Counting neighbors
We now need a function to count the neighbours. We could do it the same way as for the python version, but this would make things very slow because of the nested loops. We would prefer to act on the whole array whenever possible; this is called vectorization. First, you need to know that you can manipulate Z as if (and only as if) it was a regular scalar:
End of explanation
"""
1 + (2*Z + 3)
"""
Explanation: If you look carefully at the output, you may realize that the output corresponds to the formula above applied individually to each element. Said differently, we have (1+(2*Z+3))[i,j] == (1+(2*Z[i,j]+3)) for any i,j. Ok, so far, so good. Now what happens if we add Z to one of its subparts, let's say Z[-1:1,-1:1]?
End of explanation
"""
Z + Z[-1:1, -1:1]
"""
Explanation: This raises a ValueError but, more interestingly, numpy complains about the impossibility of broadcasting the two arrays together. Broadcasting is a very powerful feature of numpy and most of the time, it saves you a lot of hassle. Let's consider for example the following code:
End of explanation
"""
Z + 1
"""
Explanation: How can a matrix and a scalar be added together? Well, they can't. But NumPy is smart enough to guess that you actually want to add 1 to each element of Z. This concept of broadcasting is quite powerful and it will take you some time before mastering it fully (if ever possible). However, in the present case (counting neighbours if you remember), we won't use broadcasting (uh?).
But we will use vectorized computation, using the following code:
End of explanation
"""
N = np.zeros(Z.shape, dtype=int)
N[1:-1,1:-1] += (Z[ :-2, :-2] + Z[ :-2,1:-1] + Z[ :-2,2:] +
                 Z[1:-1, :-2]                + Z[1:-1,2:] +
                 Z[2:  , :-2] + Z[2:  ,1:-1] + Z[2:  ,2:])
N
"""
Explanation: To understand this code, have a look at the figure below:
What we actually did with the above code is to add all the darker blue squares together. Since they have been chosen carefully, the result will be exactly what we expected. If you want to convince yourself, consider a cell in the lighter blue area of the central sub-figure and check what the result will be for a given cell.
Cell Iteration
In a first approach, we can write the iterate function using the argwhere method, which will give us the indices where a given condition is True.
End of explanation
"""
def iterate(Z):
    # Iterate the game of life : naive version
    # Count neighbours
    N = np.zeros(Z.shape, int)
    N[1:-1,1:-1] += (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] +
                     Z[1:-1,0:-2]                + Z[1:-1,2:] +
                     Z[2:  ,0:-2] + Z[2:  ,1:-1] + Z[2:  ,2:])
    N_ = N.ravel()
    Z_ = Z.ravel()
    # Apply rules
    R1 = np.argwhere( (Z_==1) & (N_ < 2) )
    R2 = np.argwhere( (Z_==1) & (N_ > 3) )
    R3 = np.argwhere( (Z_==1) & ((N_==2) | (N_==3)) )
    R4 = np.argwhere( (Z_==0) & (N_==3) )
    # Set new values
    Z_[R1] = 0
    Z_[R2] = 0
    Z_[R3] = Z_[R3]
    Z_[R4] = 1
    # Make sure borders stay null
    Z[0,:] = Z[-1,:] = Z[:,0] = Z[:,-1] = 0
"""
Explanation: Even if this first version does not use nested loops, it is far from optimal because of the use of the 4 argwhere calls that may be quite slow. We can instead take advantage of NumPy features in the following way.
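The vectorized rewrite leans on two NumPy features worth isolating first (a minimal standalone sketch, separate from the Game of Life arrays): elementwise boolean conditions combined with & and |, and boolean-mask assignment.

```python
import numpy as np

N = np.array([1, 3, 2, 4, 3])   # pretend neighbour counts
Z = np.array([0, 1, 1, 1, 0])   # pretend cell states

birth = (N == 3) & (Z == 0)     # conditions combine elementwise with & and |
print(birth)                    # [False False False False  True]

Z[birth] = 1                    # assignment touches only the True positions
print(Z)                        # [0 1 1 1 1]
```

No argwhere, no index lists — the boolean arrays themselves do the selection.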
End of explanation """ import matplotlib.pyplot as plt from matplotlib import animation from IPython.display import Image %matplotlib inline """ Explanation: Now, let's throw together a simple visualization for our Game of Life! Don't worry about the matplotlib code, we'll deal with it later. End of explanation """ Z = np.random.randint(0,2,(256,256)) fig = plt.figure() ax = plt.axes() im = plt.imshow(Z, cmap='gray', interpolation='bicubic') def updatefig(*args): im.set_array(iterate_2(Z)) return im, ani = animation.FuncAnimation(fig, updatefig, interval=200, blit=True) ani.save('demonstration.gif', writer='imagemagick', fps=10) """ Explanation: The cell below saves our animated Game of Life as a .gif format on your computer. End of explanation """ Image(url='demonstration.gif') """ Explanation: And now, load up the .gif file with: End of explanation """
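A quick sanity check of the implementation (a sketch that restates iterate_2 from above so it is self-contained): a glider should translate one cell diagonally every four generations.

```python
import numpy as np

def iterate_2(Z):
    N = (Z[0:-2, 0:-2] + Z[0:-2, 1:-1] + Z[0:-2, 2:] +
         Z[1:-1, 0:-2]                 + Z[1:-1, 2:] +
         Z[2:,   0:-2] + Z[2:,   1:-1] + Z[2:,   2:])
    birth = (N == 3) & (Z[1:-1, 1:-1] == 0)
    survive = ((N == 2) | (N == 3)) & (Z[1:-1, 1:-1] == 1)
    Z[...] = 0
    Z[1:-1, 1:-1][birth | survive] = 1
    return Z

Z = np.zeros((10, 10), dtype=int)
Z[2:5, 2:5] = [[0, 1, 0],      # the classic glider pattern
               [0, 0, 1],
               [1, 1, 1]]
before = np.argwhere(Z == 1).mean(axis=0)
for _ in range(4):
    iterate_2(Z)
after = np.argwhere(Z == 1).mean(axis=0)
print(after - before)          # [1. 1.]: the glider moved one step down-right
```

Five live cells before, five after, shifted by (1, 1) — exactly the known period-4 behaviour of a glider.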
angelmtenor/data-science-keras
property_maintenance_fines.ipynb
mit
import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import helper import keras helper.info_gpu() #sns.set_palette("GnBu_d") #helper.reproducible(seed=0) # Setup reproducible results from run to run using Keras %matplotlib inline """ Explanation: Property Maintenance Fines Predicting the probability that a set of blight tickets will be paid on time Supervised Learning. Classification Source: Applied Machine Learning in Python | Coursera. Solved with classical machine learning classifiers here Data provided by Michigan Data Science Team (MDST), the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) and the City of Detroit Detroit Open Data Portal. Each row of the dataset corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing data, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. 
Features ticket_id - unique identifier for tickets agency_name - Agency that issued the ticket inspector_name - Name of inspector that issued the ticket violator_name - Name of the person/organization that the ticket was issued to violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator ticket_issued_date - Date and time the ticket was issued hearing_date - Date and time the violator's hearing was scheduled violation_code, violation_description - Type of violation disposition - Judgment and judgement type fine_amount - Violation fine amount, excluding fees admin_fee - $20 fee assigned to responsible judgments state_fee - $10 fee assigned to responsible judgments late_fee - 10% fee assigned to responsible judgments discount_amount - discount applied, if any clean_up_cost - DPW clean-up or graffiti removal cost judgment_amount - Sum of all fines and fees grafitti_status - Flag for graffiti violations Labels payment_amount - Amount paid, if any payment_date - Date payment was made, if it was received payment_status - Current payment status as of Feb 1 2017 balance_due - Fines and fees still owed collection_status - Flag for payments in collections compliance [target variable for prediction] Null = Not responsible 0 = Responsible, non-compliant 1 = Responsible, compliant compliance_detail - More information on why each ticket was marked compliant or non-compliant End of explanation """ data_path = 'data/property_maintenance_fines_data.csv' target = ['compliance'] df_original = pd.read_csv(data_path, encoding='iso-8859-1', dtype='unicode') print("{} rows \n{} columns \ntarget: {}".format(*df_original.shape, target)) """ Explanation: 1. 
Data Processing End of explanation """ print(df_original[target].squeeze().value_counts(dropna=False)) # Remove rows with NULL targets df_original = df_original.dropna(subset=target) print(df_original[target].squeeze().value_counts()) print(df_original.shape) """ Explanation: Explore and Clean the target End of explanation """ from sklearn.model_selection import train_test_split df, df_test = train_test_split( df_original, test_size=0.2, stratify=df_original[target], random_state=0) """ Explanation: Imbalanced target: the evaluation metric used in this problem is the Area Under the ROC Curve Split original data into training and validation test set End of explanation """ df.head(2) """ Explanation: To avoid data leakage, only the training dataframe, df, will be explored and processed here Show the training data End of explanation """ helper.missing(df) """ Explanation: Missing values End of explanation """ def remove_features(df): relevant_col = ['agency_name', 'violation_street_name', 'city', 'state', 'violator_name', 'violation_code', 'late_fee', 'discount_amount', 'judgment_amount', 'disposition', 'fine_amount', 'compliance'] df = df[relevant_col] return df df = remove_features(df) print(df.shape) """ Explanation: Transform Data Remove irrelevant features End of explanation """ num = ['late_fee', 'discount_amount', 'judgment_amount', 'fine_amount'] df = helper.classify_data(df, target, numerical=num) pd.DataFrame(dict(df.dtypes), index=["Type"])[df.columns].head() # show data types """ Explanation: Classify variables End of explanation """ df, dict_categories = helper.remove_categories(df, target=target, ratio=0.001, show=False) """ Explanation: Remove low-frequency categorical values End of explanation """ df = helper.fill_simple(df, target, missing_categorical='Other') helper.missing(df); """ Explanation: Fill missing values Missing categorical values filled by 'Other' There are no numerical missing values End of explanation """ for i in ['state', 
'disposition']: helper.show_categorical(df[[i]]) """ Explanation: Visualize the data Categorical features End of explanation """ for i in ['state', 'disposition']: helper.show_target_vs_categorical(df[[i, target[0]]], target) """ Explanation: Target vs Categorical features End of explanation """ helper.show_numerical(df, kde=True) """ Explanation: Numerical features End of explanation """ helper.show_target_vs_numerical(df, target, point_size=10 ,jitter=0.3, fit_reg=True) plt.ylim(ymin=-0.2, ymax=1.2) """ Explanation: Target vs Numerical features End of explanation """ helper.show_correlation(df, target, figsize=(6,3)) """ Explanation: Correlation between numerical features and target End of explanation """ droplist = [] # features to drop # For the model 'data' instead of 'df' data = df.copy() # del(df) data.drop(droplist, axis='columns', inplace=True) data.head(2) """ Explanation: 2. Neural Network Model Select the features End of explanation """ data, scale_param = helper.scale(data) """ Explanation: Scale numerical variables Shift and scale numerical variables to a standard normal distribution. The scaling factors are saved to be used for predictions. 
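The idea behind saving scale_param can be sketched in plain NumPy (a hypothetical stand-in for whatever helper.scale does internally): fit the shift and scale on the training data only, then reuse those exact parameters on any future data.

```python
import numpy as np

def fit_scale(x):
    """Learn standardization parameters from the training data only."""
    return x.mean(axis=0), x.std(axis=0)

def apply_scale(x, mean, std):
    return (x - mean) / std

x_train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
mean, std = fit_scale(x_train)         # saved, like scale_param above
x_new = np.array([[2.0, 20.0]])
print(apply_scale(x_new, mean, std))   # scaled with *training* statistics
```

Computing the statistics on the full dataset (or on the test set) would be exactly the kind of data leakage this notebook is avoiding.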
End of explanation """ data, dict_dummies = helper.replace_by_dummies(data, target) model_features = [f for f in data if f not in target] # sorted neural network inputs data.head(3) """ Explanation: Create dummy features Replace categorical features (no target) with dummy features End of explanation """ val_size = 0.2 random_state = 0 def validation_split(data, val_size=0.25): train, test = train_test_split( data, test_size=val_size, random_state=random_state, stratify=data[target]) # Separate the data into features and target (x=features, y=target) x_train, y_train = train.drop(target, axis=1).values, train[target].values x_val, y_val = test.drop(target, axis=1).values, test[target].values # _nc: non-categorical yet (needs one-hot encoding) return x_train, y_train, x_val, y_val x_train, y_train, x_val, y_val = validation_split(data, val_size=val_size) # x_train = x_train.astype(np.float16) y_train = y_train.astype(np.float16) # X_val = x_val.astype(np.float16) y_val = y_val.astype(np.float16) """ Explanation: Split the data into training and validation sets End of explanation """ def one_hot_output(y_train, y_val): num_classes = len(np.unique(y_train)) y_train = keras.utils.to_categorical(y_train, num_classes) y_val = keras.utils.to_categorical(y_val, num_classes) return y_train, y_val y_train, y_val = one_hot_output(y_train, y_val) print("train size \t X:{} \t Y:{}".format(x_train.shape, y_train.shape)) print("val size \t X:{} \t Y:{}".format(x_val.shape, y_val.shape)) """ Explanation: Encode the output End of explanation """ from sklearn.dummy import DummyClassifier clf = DummyClassifier(strategy='most_frequent').fit(x_train, np.ravel(y_train)) # The dummy 'most_frequent' classifier always predicts class 0 y_pred = clf.predict(x_val).reshape([-1, 1]) helper.binary_classification_scores(y_val[:, 1], y_pred); """ Explanation: Build a dummy classifier End of explanation """ from sklearn.ensemble import RandomForestClassifier %time clf_random_forest_opt = 
RandomForestClassifier(n_estimators = 30, max_features=150, \ max_depth=13, class_weight='balanced', n_jobs=-1, \ random_state=0).fit(x_train, np.ravel(y_train[:,1])) y_pred = clf_random_forest_opt.predict(x_val).reshape([-1, 1]) helper.binary_classification_scores(y_val[:, 1], y_pred); """ Explanation: Build a random forest classifier (best of grid search) End of explanation """ cw = helper.get_class_weight(y_train[:, 1]) # class weight (imbalanced target) import keras from keras.models import Sequential from keras.layers.core import Dense, Dropout def build_nn(input_size, output_size, summary=False): input_nodes = input_size // 8 model = Sequential() model.add(Dense(input_nodes, input_dim=input_size, activation='relu')) model.add(Dense(output_size, activation='softmax')) if summary: model.summary() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model model = build_nn(x_train.shape[1], y_train.shape[1], summary=True) """ Explanation: Build the Neural Network for Binary Classification End of explanation """ import os from time import time model_path = os.path.join("models", "detroit.h5") def train_nn(model, x_train, y_train, validation_data=None, path=False, show=True): """ Train the neural network model. 
If no validation_datais provided, a split for validation will be used """ if show: print('Training ....') callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=1)] t0 = time() history = model.fit( x_train, y_train, epochs=100, batch_size=2048, class_weight = cw, verbose=1, validation_split=0.3, validation_data = validation_data, callbacks=callbacks) if show: print("time: \t {:.1f} s".format(time() - t0)) helper.show_training(history) if path: model.save(path) print("\nModel saved at", path) return history model = None model = build_nn(x_train.shape[1], y_train.shape[1], summary=False) train_nn(model, x_train, y_train, path=None); from sklearn.metrics import roc_auc_score y_pred_train = model.predict(x_train, verbose=1) print('\n\n ROC_AUC train:\t{:.2f} \n'.format(roc_auc_score(y_train, y_pred_train))) y_pred_val = model.predict(x_val, verbose=1) print('\n\n ROC_AUC val:\t{:.2f}'.format(roc_auc_score(y_val, y_pred_val))) """ Explanation: Train the Neural Network End of explanation """ helper.binary_classification_scores(y_val[:, 1], y_pred_val[:, 1]); """ Explanation: Validate the model (validation set) End of explanation """ df_test.head(2) """ Explanation: Evaluate the final model (test set) End of explanation """ df_test = remove_features(df_test) df_test = helper.classify_data(df_test, target, numerical=num) df_test, _ = helper.remove_categories( df_test, target=target, show=False, dict_categories=dict_categories) df_test = helper.fill_simple(df_test, target, missing_categorical='Other') df_test, _ = helper.scale(df_test, scale_param) df_test, _ = helper.replace_by_dummies(df_test, target, dict_dummies) df_test = df_test[model_features+target] # sort columns to match training features order def separate_x_y(data): """ Separate the data into features and target (x=features, y=target) """ x, y = data.drop(target, axis=1).values, data[target].values x = x.astype(np.float16) y = y.astype(np.float16) return x, y x_test, y_test = 
separate_x_y(df_test) y_test = keras.utils.to_categorical(y_test, 2) """ Explanation: Process test data with training set parameters (no data leakage) End of explanation """ y_pred = clf_random_forest_opt.predict_proba(x_test)[:,1] helper.binary_classification_scores(y_test[:,1], y_pred); helper.show_feature_importances(model_features, clf_random_forest_opt) """ Explanation: Random Forest model End of explanation """ y_pred = model.predict(x_test, verbose=1)[:,1] helper.binary_classification_scores(y_test[:,1], y_pred); """ Explanation: Neural Network model End of explanation """ helper.ml_classification(x_train, y_train[:, 1], x_test, y_test[:, 1]) """ Explanation: Compare with other non-neural ML models End of explanation """
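Since ROC AUC is the evaluation metric used throughout, here is a minimal pure-NumPy sketch of what it measures — the rank-based view: the probability that a randomly chosen positive example gets a higher score than a randomly chosen negative one (scikit-learn's roc_auc_score computes the same quantity).

```python
import numpy as np

def roc_auc(y_true, y_score):
    """Probability that a random positive outranks a random negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # all positive/negative pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])   # predicted class-1 probabilities
print(roc_auc(y_true, y_score))             # 0.75
```

This also makes clear why ROC AUC is a sensible choice for the imbalanced compliance target: it depends only on the ranking of scores, not on the class proportions.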
sinamoeini/mapp4py
examples/fracture-gcmc-tutorial/dislocation.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
import mapp4py
from mapp4py import md
from lib.elasticity import rot, cubic, resize, displace, HirthEdge, HirthScrew
"""
Explanation: Introduction
This tutorial describes how to create edge and screw dislocations in BCC iron, starting with one unit cell containing two atoms.
Background
The elastic solution for the displacement field of dislocations is provided in the paper Dislocation Displacement Fields in Anisotropic Media.
Theoretical
The paper mentioned in the background subsection deals with only one dislocation. Here we describe how to extend the solution to a periodic array of dislocations. Since we are dealing with linear elasticity, we can superpose (sum up) the displacement fields of all the individual dislocations. Looking at Eqs. (2-8) of the abovementioned reference, this boils down to finding a closed form solution for
$$\sum_{m=-\infty}^{\infty} \log\left(z-ma \right).$$
Here $z= x+yi$ and $a$ is a real number, equivalent to $\mathbf{H}_{00}$, which defines the periodicity of the dislocations in the x direction. Let us simplify the problem a bit further. Since this is a component of the displacement field, we can add or subtract a constant term, so from each $\log\left(z-ma \right)$ we subtract a factor of $\log\left(a \right)$, leading to
$$\sum_{m=-\infty}^{\infty} \log\left(\frac{z}{a}-m \right).$$
Let's change $z/a$ to $z$, and when we arrive at the solution we will change it back:
$$\sum_{m=-\infty}^{\infty} \log\left(z-m \right).$$
The objective is to find a closed form solution for
$$f\left(z\right)=\sum_{m=-\infty}^{\infty} \log\left(z-m \right).
$$ This leads to $$ \frac{1}{z-m}+\frac{1}{z+m}=-\frac{2}{z}\sum_{n=1}^{\infty}\left(\frac{z}{m}\right)^{2n}, $$ and subsequently $$ f'\left(z\right)=\frac{1}{z}-\frac{2}{z}\sum_{n=1}^{\infty}\left(z\right)^{2n}\sum_{m=1}^{\infty}m^{-2n}, $$ $$ =\frac{1}{z}-\frac{2}{z}\sum_{n=1}^{\infty}\left(z\right)^{2n}\zeta\left(2n\right). $$ Where $\zeta$ is Riemann zeta function. Since $\zeta\left(0\right)=-1/2$, it simplifies to: $$ f'\left(z\right)=-\frac{2}{z}\sum_{n=0}^{\infty}\left(z\right)^{2n}\zeta\left(2n\right) $$ Note that $$ -\frac{\pi z\cot\left(\pi z\right)}{2}=\sum_{n=0}^{\infty}z^{2n} \zeta\left(2n\right) $$ I have no idea how I figured this out but it is true. Therefore, $$ f'\left(z\right)=\pi\cot\left(\pi z\right). $$ At this point one can naively assume that the problem is solved (like I did) and the answer is something like: $$ f\left(z\right)=\log\left[\sin\left(\pi z\right)\right]+C, $$ Where $C$ is a constant. However, after checking this against numerical vlaues you will see that this is completely wrong. The issue here is that startegy was wrong at the very begining. The sum of the displacelment of infinte dislocations will not converge since we have infinite discountinuity in displacement field. In other words they do not cancel each other they feed each other. But there is still a way to salvage this. Luckily, displacement is relative quantity and we are dealing with crystals. We can easily add a discontinuity in form an integer number burger vectors to a displacement field and nothing will be affected. So here is the trick: We will focus only on the displacement field of one unit cell dislocation (number 0). At each iteration we add two dislocation to its left and right. At $n$th iterations we add a discontinuity of the form $$ -\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] \pi i $$ and a constant of the form: $$ -2\log n. 
$$
In other words, we need to evaluate
$$ \lim_{m\to\infty}\sum_{n=-m}^{m} \biggl\{ \log\left(z-n\right) -\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] \pi i -2\log\left(n \right) \biggr\} + \pi, $$
which simplifies to
$$ \lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-n\right) -\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i -2\log\left(\frac{m!!}{\sqrt{\pi}} \right). $$
Note that we added an extra $\pi$ to the displacement field for aesthetic reasons. After a lot of manipulations and tricks (meaning I don't remember how I got here) we arrive at the following relation:
$$ \lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-n\right) -\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i -2\log\left(\frac{m!!}{\sqrt{\pi}} \right)=\log\left[\sin\left(\pi z\right)\right] $$
However, this is only valid when
$$-1/2 \le\mathrm{Re}\left(z\right)\lt 1/2.$$
If one exceeds this domain the answer is
$$ \boxed{ \log\left[\sin\left(\pi z\right)\right]-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right]\left \lceil{\mathrm{Re}\left(\frac{z}{2}\right)}-\frac{3}{4}\right \rceil 2 \pi i } $$
where $\lceil . \rceil$ is the ceiling function. Of course there is probably a nicer form; feel free to derive it.
Final formulation
To account for the periodicity of dislocations in the $x$ direction, the expression $\log\left(z\right)$ in Eqs. (2-7) of the paper should be replaced by
$$\lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-na\right) -\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i -2\log\left(\frac{m!!}{\sqrt{\pi}} \right),$$
which has the closed form
$$ \boxed{ \log\left[\sin\left(\pi\frac{z}{a}\right)\right]-\mathrm{Sign}\left[\mathrm{Im}\left(\frac{z}{a}\right)\right]\left \lceil{\mathrm{Re}\left(\frac{z}{2a}\right)}-\frac{3}{4}\right \rceil 2 \pi i.
} $$
Preparation
Import packages
End of explanation
"""
import os
import sys
from mapp4py import mpi
if mpi().rank!=0:
    with open(os.devnull, 'w') as f:
        sys.stdout = f;
"""
Explanation: Block the output of all cores except for one
End of explanation
"""
xprt = md.export_cfg("");
"""
Explanation: Define an md.export_cfg object
md.export_cfg has a call method that we can use to create quick snapshots of our simulation box
End of explanation
"""
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
nlyrs_fxd=2
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
"""
Explanation: Screw dislocation
End of explanation
"""
sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])
"""
Explanation: Create a $\langle110\rangle\times\langle112\rangle\times\frac{1}{2}\langle111\rangle$ cell
create a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell
Since mapp4py.md.atoms.cell_change() only accepts integer values, start by creating a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell
End of explanation
"""
H=np.array(sim.H);
def _(x):
    if x[2] > 0.5*H[2, 2] - 1.0e-8:
        return False;
    else:
        x[2]*=2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[2, 2] = - 0.5
sim.strain(_)
"""
Explanation: Remove half of the atoms and readjust the positions of the remaining atoms
Now one needs to cut the cell in half in the $[111]$ direction.
We can achieve this in three steps:
Remove the atoms that are located above $\frac{1}{2}[111]$
Double the position of the remaining atoms in the said direction
Shrink the box affinely to half in that direction
End of explanation
"""

displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))
"""
Explanation: Readjust the positions
End of explanation
"""

max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0,0] * H[1,1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
    np.around(_ / sim.H[0][0]),
    np.around(_ / sim.H[1][1]),
    1], dtype=np.int32)
sim *= N0;

H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
resize(sim, H_new, np.full((3),0.5) @ H)

C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])
hirth = HirthScrew(rot(C_Fe,Q), rot(b*0.5*a,Q))

ctr = np.full((3),0.5) @ H_new;
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
def _(x,x_d,x_dof):
    sy=(x[1]-ctr[1])/H[1, 1];
    x0=(x-ctr)/H[0, 0];
    if sy>s_fxd or sy<=-s_fxd:
        x_dof[1]=x_dof[2]=False;
        x+=b_norm*hirth.ave_disp(x0)
    else:
        x+=b_norm*hirth.disp(x0)
sim.do(_)

H = np.array(sim.H);
H_inv = np.array(sim.B);
H_new = np.array(sim.H);
H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)
H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]
H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)
F = np.transpose(H_inv @ H_new);
sim.strain(F - np.identity(3))
xprt(sim, "dumps/screw.cfg")
"""
Explanation: Replicating the unit cell
End of explanation
"""

def make_scrw(nlyrs_fxd,nlyrs_vel,vel):
    #this is for 0K
    #c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);
    #this is for 300K
    c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
    #N0=np.array([80,46,5],dtype=np.int32)
    sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
    a=sim.H[0][0];
    b_norm=0.5*a*np.sqrt(3.0);
    b=np.array([1.0,1.0,1.0])
    s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)]) c0=rot(c_Fe,Q) hirth = HirthScrew(rot(c_Fe,Q),np.dot(Q,b)*0.5*a) sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]]) displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0])) max_natms=1000000 n_per_vol=sim.natms/sim.vol; _=np.power(max_natms/n_per_vol,1.0/3.0); N1=np.full((3),0,dtype=np.int32); for i in range(0,3): N1[i]=int(np.around(_/sim.H[i][i])); N0=np.array([N1[0],N1[1],1],dtype=np.int32); sim*=N0; sim.kB=8.617330350e-5 sim.create_temp(300.0,8569643); H=np.array(sim.H); H_new=np.array(sim.H); H_new[1][1]+=50.0 resize(sim, H_new, np.full((3),0.5) @ H) ctr=np.dot(np.full((3),0.5),H_new); s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1]) s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1]) def _(x,x_d,x_dof): sy=(x[1]-ctr[1])/H[1][1]; x0=(x-ctr)/H[0][0]; if sy>s_fxd or sy<=-s_fxd: x_d[1]=0.0; x_dof[1]=x_dof[2]=False; x+=b_norm*hirth.ave_disp(x0) else: x+=b_norm*hirth.disp(x0) if sy<=-s_vel or sy>s_vel: x_d[2]=2.0*sy*vel; sim.do(_) H = np.array(sim.H); H_inv = np.array(sim.B); H_new = np.array(sim.H); H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2) H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0] H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2) F = np.transpose(H_inv @ H_new); sim.strain(F - np.identity(3)) return N1[2],sim; """ Explanation: putting it all together End of explanation """ sim=md.atoms.import_cfg('configs/Fe_300K.cfg'); nlyrs_fxd=2 a=sim.H[0][0]; b_norm=0.5*a*np.sqrt(3.0); b=np.array([1.0,1.0,1.0]) s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0) sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]]) H=np.array(sim.H); def _(x): if x[0] > 0.5*H[0, 0] - 1.0e-8: return False; else: x[0]*=2.0; sim.do(_); _ = np.full((3,3), 0.0) _[0,0] = - 0.5 sim.strain(_) displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0])) max_natms=100000 H=np.array(sim.H); n_per_area=sim.natms/(H[0, 0] * H[1, 1]); _ =np.sqrt(max_natms/n_per_area); N0 = np.array([ np.around(_ / sim.H[0, 0]), np.around(_ / sim.H[1, 1]), 1], dtype=np.int32) 
sim *= N0; # remove one layer along ... direction H=np.array(sim.H); frac=H[0,0] /N0[0] def _(x): if x[0] < H[0, 0] /N0[0] and x[1] >0.5*H[1, 1]: return False; sim.do(_) H = np.array(sim.H); H_new = np.array(sim.H); H_new[1][1] += 50.0 resize(sim, H_new, np.full((3),0.5) @ H) C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241); _ = np.cross(b,s) Q = np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)]) hirth = HirthEdge(rot(C_Fe,Q), rot(b*0.5*a,Q)) _ = (1.0+0.5*(N0[0]-1.0))/N0[0]; ctr = np.array([_,0.5,0.5]) @ H_new; frac = H[0][0]/N0[0] s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1]) def _(x,x_d,x_dof): sy=(x[1]-ctr[1])/H[1, 1]; x0=(x-ctr); if(x0[1]>0.0): x0/=(H[0, 0]-frac) else: x0/= H[0, 0] if sy>s_fxd or sy<=-s_fxd: x+=b_norm*hirth.ave_disp(x0); x_dof[0]=x_dof[1]=False; else: x+=b_norm*hirth.disp(x0); x[0]-=0.25*b_norm; sim.do(_) H = np.array(sim.H) H_new = np.array(sim.H); H_new[0, 0] -= 0.5*b_norm; resize(sim, H_new, np.full((3),0.5) @ H) xprt(sim, "dumps/edge.cfg") """ Explanation: Edge dislocation End of explanation """ def make_edge(nlyrs_fxd,nlyrs_vel,vel): #this is for 0K #c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608); #this is for 300K c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241); #N0=np.array([80,46,5],dtype=np.int32) sim=md.atoms.import_cfg('configs/Fe_300K.cfg'); a=sim.H[0][0]; b_norm=0.5*a*np.sqrt(3.0); b=np.array([1.0,1.0,1.0]) s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0) # create rotation matrix _ = np.cross(b,s) Q=np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)]) hirth = HirthEdge(rot(c_Fe,Q),np.dot(Q,b)*0.5*a) # create a unit cell sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]]) H=np.array(sim.H); def f0(x): if x[0]>0.5*H[0][0]-1.0e-8: return False; else: x[0]*=2.0; sim.do(f0); _ = np.full((3,3), 0.0) _[0,0] = - 0.5 sim.strain(_) displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0])) max_natms=1000000 n_per_vol=sim.natms/sim.vol; 
    _=np.power(max_natms/n_per_vol,1.0/3.0);
    N1=np.full((3),0,dtype=np.int32);
    for i in range(0,3):
        N1[i]=int(np.around(_/sim.H[i][i]));
    N0=np.array([N1[0],N1[1],1],dtype=np.int32);
    N0[0]+=1;
    sim*=N0;

    # remove one layer along ... direction
    H=np.array(sim.H);
    frac=H[0][0]/N0[0]
    def _(x):
        if x[0] < H[0][0]/N0[0] and x[1]>0.5*H[1][1]:
            return False;
    sim.do(_)

    sim.kB=8.617330350e-5
    sim.create_temp(300.0,8569643);

    H = np.array(sim.H);
    H_new = np.array(sim.H);
    H_new[1][1] += 50.0
    ctr=np.dot(np.full((3),0.5),H);
    resize(sim,H_new, np.full((3),0.5) @ H)

    l=(1.0+0.5*(N0[0]-1.0))/N0[0];
    ctr=np.dot(np.array([l,0.5,0.5]),H_new);
    frac=H[0][0]/N0[0]
    s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
    s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])

    def f(x,x_d,x_dof):
        sy=(x[1]-ctr[1])/H[1][1];
        x0=(x-ctr);
        if(x0[1]>0.0):
            x0/=(H[0][0]-frac)
        else:
            x0/= H[0][0]
        if sy>s_fxd or sy<=-s_fxd:
            x_d[1]=0.0;
            x_dof[0]=x_dof[1]=False;
            x+=b_norm*hirth.ave_disp(x0);
        else:
            x+=b_norm*hirth.disp(x0);
        if sy<=-s_vel or sy>s_vel:
            x_d[0]=2.0*sy*vel;
        x[0]-=0.25*b_norm;
    sim.do(f)

    H = np.array(sim.H)
    H_new = np.array(sim.H);
    H_new[0, 0] -= 0.5*b_norm;
    resize(sim, H_new, np.full((3),0.5) @ H)
    return N1[2], sim;

nlyrs_fxd=2
nlyrs_vel=7;
vel=-0.004;
N,sim=make_edge(nlyrs_fxd,nlyrs_vel,vel)
xprt(sim, "dumps/edge.cfg")

_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
B = np.linalg.inv(
    np.array([
        [C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
        [C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
        [C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]
    ]
))
_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])
mu = np.array([_[0],0.0]);
if np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:
    mu[1] = _[1];
else:
    mu[1] = _[2]

alpha = np.real(mu);
beta = np.imag(mu);
p = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]
q = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu
K = np.stack([p, q]) * np.array([mu[1], mu[0]]) /(mu[1] - mu[0])
K_r = np.real(K)
K_i = np.imag(K)
Tr = np.stack([
    np.array([[1.0, alpha[0]], [0.0, beta[0]]]),
    np.array([[1.0, alpha[1]], [0.0, beta[1]]])
], axis=1)

def u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])
def u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1])

def disp(x):
    _ = Tr @ x
    return K_r @ u_f0(_) + K_i @ u_f1(_)
"""
Explanation: putting it all together
End of explanation
"""

_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
disp = crack(C)

n = 300;
r = 10;
disp_scale = 0.3;
n0 = int(np.round(n / (1 + np.pi)))
n1 = n - n0
xs = np.concatenate((
    np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),
    r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]),
    np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),
), axis =1)
xs_def = xs + disp_scale * disp(xs)

fig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)
ax[0].plot(xs[0], xs[1], "b-", label="non-deformed");
ax[1].plot(xs_def[0], xs_def[1], "r-.", label="deformed");
"""
Explanation: Putting it all together
End of explanation
"""
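Throughout both the screw and edge setups above, the rotation matrix Q is assembled from the slip-plane normal, the slip direction s, and the Burgers direction b. Because those rows form an orthonormal, right-handed triad, Q is a proper rotation. A small standalone check with the same b and s vectors (plain NumPy only; none of the mapp4py objects are needed):

```python
import numpy as np

# same vectors as in the screw-dislocation setup above
b = np.array([1.0, 1.0, 1.0])                   # Burgers direction [111]
s = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # slip direction

n = np.cross(s, b)
n /= np.linalg.norm(n)                          # slip-plane normal

Q = np.array([n, s / np.linalg.norm(s), b / np.linalg.norm(b)])

# rows are orthonormal, so Q @ Q.T is the identity and det(Q) == +1;
# rotating the elastic tensor with Q therefore introduces no distortion
assert np.allclose(Q @ Q.T, np.eye(3))
assert np.isclose(np.linalg.det(Q), 1.0)
```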
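Both cells above rely on the same halving trick: delete the atoms in the upper half, double the surviving fractional coordinates, then shrink the box affinely to half. The net effect leaves the kept atoms' Cartesian positions untouched, which a toy one-dimensional NumPy sketch makes explicit (illustrative numbers only, not the simulation data):

```python
import numpy as np

c = 10.0                                         # cell length along the cut direction
frac = np.array([0.10, 0.30, 0.45, 0.60, 0.90])  # fractional coordinates of toy atoms
cart = frac * c                                  # Cartesian positions before the cut

keep = frac < 0.5 - 1.0e-8   # step 1: drop atoms located above the half plane
frac2 = frac[keep] * 2.0     # step 2: double the remaining fractional coordinates
c2 = 0.5 * c                 # step 3: shrink the cell affinely to half

# kept atoms end up exactly where they started
assert np.allclose(frac2 * c2, cart[keep])
```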
from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y) xlabel('Time (s)') ylabel('Voltage (mV)') title('The simplest one, buddies') grid(True) show() """ Explanation: 20141230_2DPlotsonPythonP2.ipynb Two-dimensional plots on Python [Part II] Support material for the blog post "Two-dimensional plots on Python [Part II]", on Programming Science. Author: Alexandre 'Jaguar' Fioravante de Siqueira Contact: http://programmingscience.org/?page_id=26 Support material: http://www.github.com/programmingscience/code In order to cite this material, please use the reference below (this is a Chicago-like style): de Siqueira, Alexandre Fioravante. "Two-dimensional plots on Python [Part II]". Programming Science. 2014, Dec 30. Available at http://www.programmingscience.org/?p=33. Access date: (please put your access date here). Copyright (C) Alexandre Fioravante de Siqueira This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. Custom 2D plots. Generating a simple 2D plot. End of explanation """ from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y, color='red') xlabel('Time (s)') ylabel('Voltage (mV)') title('The simplest one, buddies') grid(True) show() """ Explanation: Custom plot line: color='red'. 
End of explanation """ from pylab import * t = arange(0.0, 2.0,0.01) y = sin(2*pi*t) plot(t, y, color='green', linestyle='-.', linewidth=3) xlabel('Time (s)', fontweight='bold', fontsize=14) ylabel('Voltage (mV)', fontweight='bold', fontsize=14) title('The simplest one, buddies') grid(True) show() """ Explanation: A custom 2D plot, based on our first example. End of explanation """
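The same customized figure can also be produced with matplotlib's object-oriented interface, which scales better than the pylab-style commands once plots get more elaborate. A minimal sketch (the Agg backend is forced here only so the snippet runs headless; it is not needed in an interactive session):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend; omit this in an interactive session
import matplotlib.pyplot as plt

t = np.arange(0.0, 2.0, 0.01)
y = np.sin(2 * np.pi * t)

fig, ax = plt.subplots()
ax.plot(t, y, color='green', linestyle='-.', linewidth=3)
ax.set_xlabel('Time (s)', fontweight='bold', fontsize=14)
ax.set_ylabel('Voltage (mV)', fontweight='bold', fontsize=14)
ax.set_title('The simplest one, buddies')
ax.grid(True)
```

Every command that was a bare function call above becomes a method on the Axes object, which makes multi-panel figures much easier to manage.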
count = 1
for elem in range(1, 3 + 1):
    count *= elem
print(count)
"""
Explanation: Wk1.0
Warm-up: I got 32767 problems and overflow is one of them.
1. Swap the values of two variables, a and b without using a temporary variable.
2. Suppose I had six different sodas. In how many different combinations could I drink the sodas? Write a program that calculates the number of unique combinations for 6 objects. Assume that I finish a whole soda before moving on to another one.
End of explanation
"""

from math import factorial as f
f(3)
"""
Explanation: 3. Extend your program to n objects. How many different combinations do I have for 5 objects? How about 15? What is the max number of objects I could calculate for if I was storing the result in a 32 bit integer? What happens if the combinations exceed 32 bits?
End of explanation
"""

.1*10**20
"""
Explanation: 4. What will the following code yield? Was it what you expected? What's going on here?
.1 + .1 + .1 == .3
End of explanation
"""

def n_max():
    inpt = eval(input("Please enter some values: "))
    maximum = max_val(inpt)
    print("The largest value is", maximum)

def max_val(ints):
    """Input: collection of ints.
    Returns: maximum of the collection
    int - the max integer."""
    max = ints[0]
    for x in ints:
        if x > max:
            max = x
    return max

assert max_val([1, 2, 3]) == 3
assert max_val([1, 1, 1]) == 1
assert max_val([1, 2, 2]) == 2

n_max()

inpt = eval(input("Please enter three values: "))
list(inpt)
"""
Explanation: 5. Try typing in the command below and read this page
format(.1, '.100g')
Data structure of the day: tuples
Switching variables: a second look
How do we make a single tuple?
slicing, indexing
mutability
tuple packing and unpacking
using tuples in loops
using tuples to unpack
enumerate(lst)
tuples as return values
comparing tuples
(0, 1, 2000000) < (0, 3, 4)
Design pattern: DSU
Decorate
Sort
Undecorate
Ex.
```
txt = 'but soft what light in yonder window breaks'
words = txt.split()
t = list()
for word in words:
    t.append((len(word), word))

t.sort(reverse=True)

res = list()
for length, word in t:
    res.append(word)

print(res)
```
Why would words.sort() not work?
We can use tuples as a way to store related data
addr = 'monty@python.org'
uname, domain = addr.split('@')
Advanced: tuples as argument parameters
t = (a, b, c)
func(*t)
Tuples: exercises
Exercise 1
Revise a previous program as follows: Read and parse the "From" lines and pull out the addresses from the line. Count the number of messages from each person using a dictionary.
After all the data has been read, print the person with the most messages by creating a list of (count, email) tuples from the dictionary, then sorting the list in reverse order and printing out the person who has the most messages.
```
Sample Line:
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008

Enter a file name: mbox-short.txt
cwen@iupui.edu 5

Enter a file name: mbox.txt
zqian@umich.edu 195
```
Exercise 2
This program counts the distribution of the hour of the day for each of the messages. You can pull the hour from the "From" line by finding the time string and then splitting that string into parts using the colon character. Once you have accumulated the counts for each hour, print out the counts, one per line, sorted by hour as shown below.
Sample Execution:
python timeofday.py
Enter a file name: mbox-short.txt
04 3
06 1
07 1
09 2
10 3
11 6
14 1
15 2
16 4
17 2
18 1
19 1
Exercise 3
Write a program that reads a file and prints the letters in decreasing order of frequency.
Your program should convert all the input to lower case and only count the letters a-z. Your program should not count spaces, digits, punctuation or anything other than the letters a-z. Find text samples from several different languages and see how letter frequency varies between languages. Compare your results with the tables at wikipedia.org/wiki/Letter_frequencies.
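The DSU example above predates Python's key-based sorting; sorted with a key tuple performs the decorate and undecorate steps implicitly and yields the same ordering. A short sketch (not part of the original lesson):

```python
txt = 'but soft what light in yonder window breaks'
words = txt.split()

# explicit decorate-sort-undecorate, as in the lesson
decorated = sorted(((len(w), w) for w in words), reverse=True)
dsu = [w for _, w in decorated]

# key-based equivalent: the (len, word) tuple is built and discarded internally
keyed = sorted(words, key=lambda w: (len(w), w), reverse=True)

assert dsu == keyed
print(keyed)  # -> ['yonder', 'window', 'breaks', 'light', 'what', 'soft', 'but', 'in']
```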
Afternoon warm-up Write a function that takes three numbers $x_1, x_2, x_3$ from a user and returns the max value. Don't use the built in max function. Would your function work on more than three values? End of explanation """ assert compress('AAAADDBBBBBCCEAA') == 'A4D2B5C2E1A2' # %load ../scripts/compress/compressor.py def groupby_char(lst): """Returns a list of strings containing identical characters. Takes a list of characters produced by running split on a string. Groups runs (in order sequences) of identical characters into string elements in the list. Parameters --------- Input: lst: list A list of single character strings. Output: grouped: list A list of strings containing grouped characters.""" new_lst = [] count = 1 for i in range(len(lst) - 1): # we range to the second to last index since we're checking if lst[i] == lst[i + 1]. if lst[i] == lst[i + 1]: count += 1 else: new_lst.append([lst[i],count]) # Create a lst of lists. Each list contains a character and the count of adjacent identical characters. count = 1 new_lst.append((lst[-1],count)) # Return the last character (we didn't reach it with our for loop since indexing until second to last). grouped = [char*count for [char, count] in new_lst] return grouped def compress_group(string): """Returns a compressed two character string containing a character and a number. Takes in a string of identical characters and returns the compressed string consisting of the character and the length of the original string. Example ------- "AAA"-->"A3" Parameters: ----------- Input: string: str A string of identical characters. Output: ------ compressed_str: str A compressed string of length two containing a character and a number. """ return str(string[0]) + str(len(string)) def compress(string): """Returns a compressed representation of a string. Compresses the string by mapping each run of identical characters to a single character and a count. Ex. -- compress('AAABBCDDD')--> 'A3B2C1D3'. 
Only compresses string if the compression is shorter than the original string. Ex. -- compress('A')--> 'A' # not 'A1'. Parameters ---------- Input: string: str The string to compress Output: compressed: str The compressed representation of the string. """ try: split_str = [char for char in string] # Create list of single characters. grouped = groupby_char(split_str) # Group characters if characters are identical. compressed = ''.join( # Compress each element of the grouped list and join to a string. [compress_group(elem) for elem in grouped]) if len(compressed) < len(string): # Only return compressed if compressed is actually shorter. return compressed else: return string except IndexError: # If our input string is empty, return an empty string. return "" except TypeError: # If we get something that's not compressible (including NoneType) return None. return None # %load ../scripts/compress/compress_tests.py # This will fail to run because in wrong directory from compress.compressor import * def compress_test(): assert compress('AAABBCDDD') == 'A3B2C1D3' assert compress('A') == 'A' assert compress('') == '' assert compress('AABBCC') == 'AABBCC' # compressing doesn't shorten string so just return string. assert compress(None) == None def groupby_char_test(): assert groupby_char(["A", "A", "A", "B", "B"]) == ["AAA", "BB"] def compress_group_test(): assert compress_group("AAA") == "A3" assert compress_group("A") == "A1" """ Explanation: Strategy 1: Compare each to all (brute force) Strategy 2: Decision Tree Strategy 3: Sequential processing Strategy 4: Use python The development process A Problem Solving Algorithm See Polya's How to Solve it 1. Understand the problem 2. Brainstorm on paper 3. Plan out program 4. Refine design 5. Create function 6. Create function docstring 7. Create function tests 8. Check that tests fail 9. If function is trivial, then solve it (i.e. get function tests to pass). Else, create sub-function (aka divide and conquer) and repeat steps 5-8. 
Example: Compress End of explanation """
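Circling back to warm-up question 3: because Python integers are arbitrary precision, nothing actually overflows, so the 32-bit limit can be probed by direct comparison against 2**31 - 1. A quick sketch:

```python
from math import factorial

INT32_MAX = 2**31 - 1          # largest signed 32-bit integer, 2147483647

n = 1
while factorial(n + 1) <= INT32_MAX:
    n += 1

print(n, factorial(n))         # -> 12 479001600
# 13! = 6227020800 already exceeds the 32-bit range; a language with fixed-width
# integers would wrap around or trap here, while Python just keeps going.
```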
from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() """ Explanation: Python example using Spark SQL over Cloudant as a source This sample notebook is written in Python and expects the Python 3.5 or higher runtime. Make sure the kernel is started and you are connected to it when executing this notebook. The data source for this example can be found at: http://examples.cloudant.com/crimes/. Replicate the database into your own Cloudant account before you execute this script. This Python notebook showcases how to use the SQL-Cloudant connector. This code reads Cloudant data, creates a DataFrame from the Cloudant data, filters that data down to only crime incidents with the nature code for a public disturbance, and then writes those 7 documents to another Cloudant database. 1. Work with SparkSession Import and initialize SparkSession. End of explanation """ cloudantdata = spark.read.format("org.apache.bahir.cloudant")\ .option("cloudant.host","xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com")\ .option("cloudant.username", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix")\ .option("cloudant.password","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")\ .load("crimes") """ Explanation: 2. Work with a Cloudant database A Dataframe object can be created directly from a Cloudant database. To configure the database as source, pass these options: 1 - package name that provides the classes (like CloudantDataSource) implemented in the connector to extend BaseRelation. 
For the SQL-Cloudant connector this will be org.apache.bahir.cloudant
2 - cloudant.host parameter to pass the Cloudant account name
3 - cloudant.username parameter to pass the Cloudant user name
4 - cloudant.password parameter to pass the Cloudant account password
5 - the database to load
End of explanation
"""

# This code prints the schema and a record count
cloudantdata.printSchema()
cloudantdata.count()

# This code displays the values of the naturecode field
cloudantdata.select("properties.naturecode").show()

# This code filters the data to just those records with a naturecode of "DISTRB", and then displays that data
disturbDF = cloudantdata.filter("properties.naturecode = 'DISTRB'")
disturbDF.show()

# This code writes the filtered data to a Cloudant database called crimes_filtered. If the Cloudant database exists, the documents will be added to the database.
# If the database does not exist, set the createDBOnSave option to 'true'.
disturbDF.select("properties").write.format("org.apache.bahir.cloudant")\
    .option("cloudant.host","xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com")\
    .option("cloudant.username", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix")\
    .option("cloudant.password","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")\
    .option("createDBOnSave", "true")\
    .save("crimes_filtered")

# Next, you'll see how to create a visualization of the crimes data.
# First, this line creates a DataFrame containing all of the naturecodes and a count of the crime incidents for each code.
reducedValue = cloudantdata.groupBy("properties.naturecode").count()
reducedValue.printSchema()
"""
Explanation: 3. Work with a Dataframe
At this point, all transformations and functions should behave as specified with Spark SQL. (http://spark.apache.org/sql/)
End of explanation
"""

# This line imports two Python modules.
The pprint module helps to produce pretty representations of data structures, # and the counter subclass from the collections module helps to count hashable objects. import pprint from collections import Counter # This line imports PySpark classes for Spark SQL and DataFrames. from pyspark.sql import * from pyspark.sql.functions import udf, asc, desc from pyspark import SparkContext, SparkConf from pyspark.sql.types import IntegerType # This line converts an Apache Spark DataFrame to a Panda DataFrame, and also sorts the DataFrame by count first, # and then by naturecode second in order to produce a sorted graph later. import pandas as pd pandaDF = reducedValue.orderBy(desc("count"), asc("naturecode")).toPandas() print(pandaDF) # This is needed to actually see the plots %matplotlib inline # This line imports matplotlib.pyplot which is a collection of command style functions that make matplotlib work like MATLAB import matplotlib.pyplot as plt # These lines assign the data to the values and labels objects. values = pandaDF['count'] labels = pandaDF['naturecode'] # These lines provide the format for the plot. plt.gcf().set_size_inches(16, 12, forward=True) plt.title('Number of crimes by type') # These lines specify that the plot should display as a horizontal bar chart with values being for the x axis # and labels for the y axis plt.barh(range(len(values)), values) plt.yticks(range(len(values)), labels) # This last line displays the plot plt.show() """ Explanation: 4. Generate visualizations End of explanation """
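The groupBy/count plus orderBy(desc("count"), asc("naturecode")) pipeline above amounts to a count-then-sort. The same shaping in plain Python, with made-up naturecodes rather than the crimes data, looks like:

```python
from collections import Counter

naturecodes = ['DISTRB', 'LARCENY', 'DISTRB', 'ASSLT', 'LARCENY', 'LARCENY']
counts = Counter(naturecodes)

# count descending, then naturecode ascending, mirroring desc("count"), asc("naturecode")
ordered = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
print(ordered)  # -> [('LARCENY', 3), ('DISTRB', 2), ('ASSLT', 1)]
```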
import numpy as np
import matplotlib.pyplot as plt

# Initialize random number generator
np.random.seed(123)

# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]

# Size of dataset
size = 100

# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2

# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
"""
Explanation: Getting started with PyMC3
Authors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck
Note: This text is taken from the PeerJ CS publication on PyMC3.
Abstract
Probabilistic Programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other Probabilistic Programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
Introduction
Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987).
This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC. Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis. While most of PyMC3's user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile them to machine code, thereby boosting performance. Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as NumPy arrays do. Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration. Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. 
We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends. Installation Running PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO. PyMC3 can be installed using pip (https://pip.pypa.io/en/latest/installing.html): pip install git+https://github.com/pymc-devs/pymc3 PyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed. These are not automatically installed, but can be installed by: pip install patsy pandas The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage. A Motivating Example: Linear Regression To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes $Y$ as normally-distributed observations with an expected value $\mu$ that is a linear function of two predictor variables, $X_1$ and $X_2$. 
$$\begin{aligned}
Y &\sim \mathcal{N}(\mu, \sigma^2) \\
\mu &= \alpha + \beta_1 X_1 + \beta_2 X_2
\end{aligned}$$
where $\alpha$ is the intercept, and $\beta_i$ is the coefficient for covariate $X_i$, while $\sigma$ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. We choose zero-mean normal priors with variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (normal distribution bounded at zero) as the prior for $\sigma$.
$$\begin{aligned}
\alpha &\sim \mathcal{N}(0, 100) \\
\beta_i &\sim \mathcal{N}(0, 100) \\
\sigma &\sim \lvert\mathcal{N}(0, 1)\rvert
\end{aligned}$$
Generating data
We can simulate some artificial data from this model using only NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond to the PyMC3 model structure.
End of explanation
"""

%matplotlib inline
fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y');
axes[0].set_xlabel('X1');
axes[1].set_xlabel('X2');
"""
Explanation: Here is what the simulated data look like. We use the pyplot module from the plotting library matplotlib.
End of explanation
"""

from pymc3 import Model, Normal, HalfNormal
"""
Explanation: Model Specification
Specifying this model in PyMC3 is straightforward because the syntax is very close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import the components we will need from PyMC.
End of explanation
"""

basic_model = Model()

with basic_model:
    # Priors for unknown model parameters
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)

    # Expected value of outcome
    mu = alpha + beta[0]*X1 + beta[1]*X2

    # Likelihood (sampling distribution) of observations
    Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
"""
Explanation: Now we build our model, which we will present in full first, then explain each part line-by-line.
End of explanation
"""

help(Normal) #try help(Model), help(Uniform) or help(basic_model)
"""
Explanation: The first line,
python
basic_model = Model()
creates a new Model object which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement:
python
with basic_model:
This creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to.
The first three statements in the context manager:
python
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
create stochastic random variables: normal prior distributions with a mean of 0 and a standard deviation of 10 for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, $\sigma$.
These are stochastic because their values are partly determined by their parents in the dependency graph of random variables, which for priors are simple constants, and partly random (or stochastic). We call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it is sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sd, the standard deviation, to which we assign hyperparameter values for the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3.
The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable, but is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5,7) makes a random variable that takes on 5-by-7 matrix values).
Detailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function.
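As a plain-NumPy analogy (not PyMC3 itself), the shape argument behaves much like the size argument of NumPy's random generators:

```python
import numpy as np

rng = np.random.default_rng(0)

scalar_draw = rng.normal(0, 10)               # like the default scalar shape
vector_draw = rng.normal(0, 10, size=2)       # like shape=2, e.g. the beta vector
matrix_draw = rng.normal(0, 10, size=(5, 7))  # like shape=(5, 7)
```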
End of explanation
"""

from pymc3 import find_MAP

map_estimate = find_MAP(model=basic_model)

print(map_estimate)
"""
Explanation: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:
python
mu = alpha + beta[0]*X1 + beta[1]*X2
This creates a deterministic random variable, which implies that its value is completely determined by its parents' values. That is, there is no uncertainty beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their values may be.
PyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed-into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided.
The final line of the model defines Y_obs, the sampling distribution of the outcomes in the dataset.
python
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
This is a special case of a stochastic variable that we call an observed stochastic, and represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object.
Notice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables.
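The deterministic relationship for mu can be checked with plain NumPy: once the parent values are fixed, nothing random is left. The parameter values here are hypothetical, chosen only for illustration:

```python
import numpy as np

# Fixed (hypothetical) parent values
alpha_val = 1.0
beta_val = np.array([1.0, 2.5])
X1 = np.array([0.0, 1.0, 2.0])
X2 = np.array([0.5, 0.5, 0.5])

# Same expression as in the model: completely determined by its parents
mu_val = alpha_val + beta_val[0] * X1 + beta_val[1] * X2
```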
Model fitting
Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior estimates analytically, but for most non-trivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods.
Maximum a posteriori methods
The maximum a posteriori (MAP) estimate for a model is the mode of the posterior distribution and is generally found using numerical optimization methods. This is often fast and easy to do, but only gives a point estimate for the parameters and can be biased if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find_MAP function.
Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values.
End of explanation
"""

from scipy import optimize

map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)

print(map_estimate)
"""
Explanation: By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
End of explanation
"""

from pymc3 import NUTS, sample
from scipy import optimize

with basic_model:
    # draw 500 posterior samples
    trace = sample()
"""
Explanation: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme.
This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis. These step methods can be assigned manually, or assigned automatically by PyMC3. Auto-assignment is based on the attributes of each variable in the model. In general:
Binary variables will be assigned to BinaryMetropolis
Discrete variables will be assigned to Metropolis
Continuous variables will be assigned to NUTS
Auto-assignment can be overridden for any subset of variables by specifying them manually prior to sampling.
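To make the Metropolis idea concrete, here is a toy random-walk Metropolis sampler in plain NumPy — an illustration of the algorithm, not PyMC3's implementation:

```python
import numpy as np

def metropolis(logp, start, n_samples=5000, step=1.0, seed=0):
    """Toy random-walk Metropolis sampler (illustration only)."""
    rng = np.random.default_rng(seed)
    x = start
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + rng.normal(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x))
        if np.log(rng.uniform()) < logp(proposal) - logp(x):
            x = proposal
        samples[i] = x
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to a constant
samples = metropolis(lambda x: -0.5 * x**2, start=0.0)
```

The chain's empirical mean and standard deviation should approach those of the target distribution as the number of samples grows.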
Gradient-based sampling methods PyMC3 has the standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler. NUTS is especially useful on models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables. NUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often. Fortunately PyMC3 automatically initializes NUTS using another inference algorithm called ADVI (auto-diff variational inference). 
Moreover, PyMC3 will automatically assign an appropriate sampler if we don't supply it via the step keyword argument (see below for an example of how to explicitly assign step methods). End of explanation """ trace['alpha'][-5:] """ Explanation: The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows: End of explanation """ from pymc3 import Slice with basic_model: # obtain starting values via MAP start = find_MAP(fmin=optimize.fmin_powell) # instantiate sampler step = Slice(vars=[sigma]) # draw 5000 posterior samples trace = sample(5000, step=step, start=start) """ Explanation: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample. End of explanation """ from pymc3 import traceplot traceplot(trace); """ Explanation: Posterior analysis PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot. End of explanation """ from pymc3 import summary summary(trace) """ Explanation: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients. 
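The smoothed histograms in the left column come from kernel density estimation; the idea in minimal NumPy form:

```python
import numpy as np

def gaussian_kde_1d(samples, grid, bandwidth=0.2):
    """Average of Gaussian bumps centered at each sample (a basic KDE)."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
draws = rng.normal(0.0, 1.0, size=1000)   # stand-in for posterior samples
grid = np.linspace(-4.0, 4.0, 161)
density = gaussian_kde_1d(draws, grid)
```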
In addition, the summary function provides a text-based output of common posterior statistics:
End of explanation
"""

from pymc3 import summary
summary(trace)
"""
Explanation: Case study 1: Stochastic volatility
We present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3's use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult.
This example has 400+ parameters so using common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.
The Model
Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).
$$\begin{aligned}
\sigma &\sim \text{exp}(50) \\
\nu &\sim \text{exp}(.1) \\
s_i &\sim \mathcal{N}(s_{i-1}, \sigma^{-2}) \\
\log(y_i) &\sim t(\nu, 0, \text{exp}(-2 s_i))
\end{aligned}$$
Here, $y$ is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process $s$. The individual $s_i$ are the individual daily log volatilities in the latent log volatility process.
The Data
Our data consist of daily returns of the S&P 500 during the 2008 financial crisis.
Here, we use pandas-datareader to obtain the price data from Yahoo!-Finance; it can be installed with pip install pandas-datareader. End of explanation """ from pymc3 import Exponential, StudentT, Deterministic from pymc3.math import exp from pymc3.distributions.timeseries import GaussianRandomWalk with Model() as sp500_model: nu = Exponential('nu', 1./10, testval=5.) sigma = Exponential('sigma', 1./.02, testval=.1) s = GaussianRandomWalk('s', sigma**-2, shape=len(returns)) volatility_process = Deterministic('volatility_process', exp(-2*s)) r = StudentT('r', nu, lam=1/volatility_process, observed=returns) """ Explanation: Model Specification As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the $\nu$ and $\sigma$ priors, the Student-T (StudentT) distribution for distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities. In PyMC3, variables with purely positive priors like Exponential are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named "variableName_log") is added to the model for sampling. In this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that constrain them on two sides, like Beta or Uniform, are also transformed to be unconstrained but with a log odds transform. Although, unlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage, we can also provide an initial value for any distribution (called a "test value") using the testval argument. 
This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are illegal and we want to ensure we select a legal one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overridden.
The vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests, GaussianRandomWalk is a vector valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector.
End of explanation
"""

from pymc3 import Exponential, StudentT, Deterministic
from pymc3.math import exp
from pymc3.distributions.timeseries import GaussianRandomWalk

with Model() as sp500_model:
    nu = Exponential('nu', 1./10, testval=5.)
    sigma = Exponential('sigma', 1./.02, testval=.1)

    s = GaussianRandomWalk('s', sigma**-2, shape=len(returns))

    volatility_process = Deterministic('volatility_process', exp(-2*s))

    r = StudentT('r', nu, lam=1/volatility_process, observed=returns)
"""
Explanation: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.
Fitting
End of explanation
"""

from pymc3 import variational
import scipy

with sp500_model:
    trace = sample(2000)
"""
Explanation: We can check our samples by looking at the traceplot for nu and sigma.
End of explanation
"""

traceplot(trace[200:], [nu, sigma]);
"""
Explanation: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph.
Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
End of explanation
"""

fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1/np.exp(trace['s',::5].T), 'r', alpha=.03);
ax.set(title='volatility_process', xlabel='time', ylabel='volatility');
ax.legend(['S&P500', 'stochastic volatility process'])
"""
Explanation: As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.
Case study 2: Coal mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
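The -999 marker handling rests on NumPy masked arrays; a small sketch of the mechanics:

```python
import numpy as np

data = np.ma.masked_values([4, 5, -999, 3, -999, 1], value=-999)

n_missing = int(data.mask.sum())   # how many entries are masked out
total = int(data.sum())            # reductions skip masked entries: 4 + 5 + 3 + 1
```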
End of explanation
"""

disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
                                     3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
                                     2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
                                     1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
                                     0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
                                     3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
                                     0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999)
year = np.arange(1851, 1962)

plt.plot(year, disaster_data, 'o', markersize=8);
plt.ylabel("Disaster count")
plt.xlabel("Year")
"""
Explanation: Occurrences of disasters in the time series are thought to follow a Poisson process with a large rate parameter in the early part of the time series, and one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.
In our model,
$$\begin{aligned}
D_t &\sim \text{Pois}(r_t), \quad
r_t =
\begin{cases}
e, & \text{if } t \le s \\
l, & \text{if } t > s
\end{cases} \\
s &\sim \text{Unif}(t_l, t_h) \\
e &\sim \text{exp}(1) \\
l &\sim \text{exp}(1)
\end{aligned}$$
the parameters are defined as follows:
* $D_t$: The number of disasters in year $t$
* $r_t$: The rate parameter of the Poisson distribution of disasters in year $t$.
* $s$: The year in which the rate parameter changes (the switchpoint).
* $e$: The rate parameter before the switchpoint $s$.
* $l$: The rate parameter after the switchpoint $s$.
* $t_l$, $t_h$: The lower and upper boundaries of year $t$.
This model is built much like our previous models. The major differences are the introduction of discrete variables with the Poisson and discrete-uniform priors and the novel form of the deterministic random variable rate.
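The piecewise rate can be mimicked elementwise in plain NumPy, which is essentially what the Theano switch function does inside the model (the switch year and rates below are hypothetical):

```python
import numpy as np

year = np.arange(1851, 1861)         # a small illustrative span of years
switchpoint = 1855                   # hypothetical switch year
early_rate, late_rate = 3.0, 1.0     # hypothetical rates

# Elementwise analogue of switch(switchpoint >= year, early_rate, late_rate)
rate = np.where(switchpoint >= year, early_rate, late_rate)
```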
End of explanation
"""

from pymc3 import DiscreteUniform, Poisson
from pymc3.math import switch

with Model() as disaster_model:

    switchpoint = DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900)

    # Priors for pre- and post-switch rates of disasters
    early_rate = Exponential('early_rate', 1)
    late_rate = Exponential('late_rate', 1)

    # Allocate appropriate Poisson rates to years before and after current
    rate = switch(switchpoint >= year, early_rate, late_rate)

    disasters = Poisson('disasters', rate, observed=disaster_data)
"""
Explanation: The logic for the rate random variable,
python
rate = switch(switchpoint >= year, early_rate, late_rate)
is implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.
Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values is created to model the missing values. All we need to do to handle the missing values is ensure we sample this random variable as well.
Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values. PyMC3 automatically assigns the correct sampling algorithms.
End of explanation
"""

from pymc3 import Metropolis

with disaster_model:
    trace = sample(10000)
"""
Explanation: In the trace plot below we can see that there's about a 10 year span that's plausible for a significant change in safety, but a 5 year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood and not due to sampling error.
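That jumpy switchpoint likelihood can be seen directly by brute force on synthetic data. With the two rates held fixed at known values — an assumption made only for this sketch, the full model treats them as unknown — the log-likelihood of every candidate switchpoint can be evaluated on a grid:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1851, 1901)
true_s, early, late = 1870, 3.0, 1.0

# Synthetic disaster counts with a known change point
counts = np.where(years < true_s,
                  rng.poisson(early, years.size),
                  rng.poisson(late, years.size))

def poisson_loglik(k, r):
    return k * math.log(r) - r - math.lgamma(k + 1)

# Log-likelihood for each candidate switchpoint (rates assumed known)
loglik = np.array([
    sum(poisson_loglik(k, early if y < s else late) for y, k in zip(years, counts))
    for s in years
])
best_s = years[np.argmax(loglik)]
```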
End of explanation
"""

traceplot(trace);
"""
Explanation: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive; therefore, Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
End of explanation
"""

import theano.tensor as T
from theano.compile.ops import as_op

@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3

with Model() as model_deterministic:
    a = Poisson('a', 1)
    b = crazy_modulo3(a)
"""
Explanation: An important drawback of this approach is that it is not possible for Theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
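Because as_op merely wraps an ordinary Python function, its branching logic can be unit-tested outside of Theano first:

```python
def crazy_modulo3_plain(value):
    """Same branching logic as the wrapped Op above, in plain Python."""
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3
```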
Arbitrary distributions Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014). ```python import theano.tensor as T from pymc3 import DensityDist, Uniform with Model() as model: alpha = Uniform('intercept', -100, 100) # Create custom densities beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0) eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1) # Create likelihood like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y) ``` For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op. Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function. End of explanation """ # Convert X and Y to a pandas DataFrame import pandas df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y}) """ Explanation: If your logp can not be expressed in Theano, you can decorate the function with as_op as follows: @as_op(itypes=[T.dscalar], otypes=[T.dscalar]). 
Note that this will create a blackbox Python function that will be much slower and not provide the gradients necessary for e.g. NUTS.
Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:
End of explanation
"""

# Convert X and Y to a pandas DataFrame
import pandas

df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})
"""
Explanation: The model can then be very concisely specified in one line of code.
End of explanation
"""

from pymc3.glm import GLM

with Model() as model_glm:
    GLM.from_formula('y ~ x1 + x2', df)
    trace = sample()
"""
Explanation: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
End of explanation
"""

from pymc3.glm.families import Binomial

df_logistic = pandas.DataFrame({'x1': X1, 'y': Y > np.median(Y)})

with Model() as model_glm_logistic:
    GLM.from_formula('y ~ x1', df_logistic, family=Binomial())
"""
Explanation: For a more complete and flexible formula interface, including hierarchical GLMs, see Bambi.
Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite.
These can be found in pymc.backends:
By default, an in-memory ndarray is used, but if the samples would get too large to be held in memory we could use a disk-backed store such as the SQLite backend:
End of explanation
"""

import pymc3 as pm
from pymc3.backends import SQLite

with Model() as model_glm_logistic:
    GLM.from_formula('y ~ x1', df_logistic, family=Binomial())

    backend = SQLite('trace.sqlite')
    trace = sample(trace=backend)

summary(trace, varnames=['x1'])
"""
Explanation: The stored trace can then later be loaded using the load command:
End of explanation
"""

from pymc3.backends.sqlite import load

with basic_model:
    trace_loaded = load('trace.sqlite')
"""
Explanation: 
mne-tools/mne-tools.github.io
0.24/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD-3-Clause import mne from mne import io from mne.datasets import sample from mne.cov import compute_covariance print(__doc__) """ Explanation: Whitening evoked data with a noise covariance Evoked data are loaded and then whitened using a given noise covariance matrix. It's an excellent quality check to see if baseline signals match the assumption of Gaussian white noise during the baseline period. Covariance estimation and diagnostic plots are based on :footcite:EngemannGramfort2015. References .. footbibliography:: End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 40, n_jobs=1, fir_design='firwin') raw.info['bads'] += ['MEG 2443'] # bads + 1 more events = mne.read_events(event_fname) # let's look at rare events, button presses event_id, tmin, tmax = 2, -0.2, 0.5 reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'), baseline=None, reject=reject, preload=True) # Uncomment next line to use fewer samples and study regularization effects # epochs = epochs[:20] # For your data, use as many samples as you can! """ Explanation: Set parameters End of explanation """ method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01)) noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto', return_estimators=True, verbose=True, n_jobs=1, projs=None, rank=None, method_params=method_params) # With "return_estimator=True" all estimated covariances sorted # by log-likelihood are returned. 
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
    print("%s : %s" % (c['method'], c['loglik']))
"""
Explanation: Compute covariance using automated regularization
End of explanation
"""

evoked = epochs.average()
evoked.plot(time_unit='s')  # plot evoked response
"""
Explanation: Show the evoked data:
End of explanation
"""

evoked.plot_white(noise_covs, time_unit='s')
"""
Explanation: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the assumption of Gaussian white noise. We expect values centered at 0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1.
End of explanation
"""
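The whitening operation itself is linear algebra: multiply by $C^{-1/2}$. A library-independent NumPy sketch showing that data whitened with their own covariance estimate end up with identity covariance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Correlated "sensor noise" with a known mixing matrix
L = np.array([[2.0, 0.0],
              [1.5, 1.0]])
noise = rng.standard_normal((10000, 2)) @ L.T   # covariance approx. L @ L.T

C = np.cov(noise, rowvar=False)

# Whitening matrix C^{-1/2} via the eigendecomposition of C
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

white = noise @ W.T
C_white = np.cov(white, rowvar=False)   # numerically the identity matrix
```

Because W is built from the sample covariance itself, the whitened covariance is the identity up to floating-point error; in practice (as in the notebook) the covariance is estimated from baseline data and applied to evoked data, so the match is only as good as the noise model.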
kmunve/APS
aps/notebooks/new_snow_problem.ipynb
mit
# -*- coding: utf-8 -*-
%matplotlib inline
from __future__ import print_function

import pylab as plt
import datetime
import numpy as np

plt.rcParams['figure.figsize'] = (14, 6)
plt.rcParams.update({'font.size': 22})
plt.xkcd()
"""
Explanation: APS - new snow
Imports
End of explanation
"""

from matplotlib.patches import Rectangle
from scipy.optimize import curve_fit

def score_func(x, a, b):#, c):
    return (1 / (1+np.exp(-x+a))) * b
    #return a * (1. / np.log(b)) * np.log(x) + c

def decay_func(x, a, b):
    return a * b ** x
"""
Explanation: Parameters, categories and scores
Hourly score- and decay-functions
The score function for the new snow problem is a scaled logistic of type
$$ s = \frac{b}{1 + e^{-(x - a)}} $$
(an earlier logarithmic variant, $s = a \cdot \log_b{x}$, is kept commented out in the code).
The decay function for the new snow problem is of type
$$ d = a \cdot b^x $$
End of explanation
"""

# Hourly new snow amount in mm water equivalent
control_points = np.array([
    [-0.5, -2.],
    [0.0, 0.],
    [2., 3.],
    [10., 40.]
])

new_snow_1h_cat = control_points[:, 0]
new_snow_1h_score = control_points[:, 1]

params = curve_fit(score_func, new_snow_1h_cat, new_snow_1h_score)
[sa, sb] = params[0]
print("Score function with parameters {0:.2f} and {1:.2f} results in daily increase of {2:.2f} points with 2 mm hourly precipitation.".format(sa, sb, score_func(2, sa, sb)*24))

x = np.arange(0, 20.0, 0.1)
res = score_func(x, sa, sb)

plt.scatter(new_snow_1h_cat, new_snow_1h_score)
plt.plot(x, res)
plt.xlabel('Hourly new snow amount (mm w.e.)')
plt.ylabel('Score')
plt.xlim(0, 20); plt.ylim(0, 100)
#plt.gca().add_patch(Rectangle((0, 0), 40, 100, edgecolor="lightgrey", facecolor="lightgrey"))
"""
Explanation: Score function
End of explanation
"""

# Hourly air temperature (ideally snow surface temperature, but that is not available at the moment)
control_points = np.array([
    [-40., 0.001],
    [-20.0, 0.01],
    [-5, 0.3],
    [0.0, 1.5],
    #[1., 0.],
    #[4., -10.],
    [5., 4.]
]) new_snow_1h_decay_cat = control_points[:, 0] new_snow_1h_decay_score = control_points[:, 1] params = curve_fit(decay_func, new_snow_1h_decay_cat, new_snow_1h_decay_score) [da, db] = params[0] print("Decay function with parameters {0:.2f} and {1:.2f} results in daily reduction of {2:.2f} points at zero degrees Celsius.".format(da, db, decay_func(0, da, db)*24)) print("Decay function with parameters {0:.2f} and {1:.2f} results in daily reduction of {2:.2f} points at -10 degrees Celsius.".format(da, db, decay_func(-10, da, db)*24)) print("Decay function with parameters {0:.2f} and {1:.2f} results in daily reduction of {2:.2f} points at -20 degrees Celsius.".format(da, db, decay_func(-20, da, db)*24)) x = np.arange(-40, 10.0, 0.1) res = decay_func(x, da, db) plt.scatter(new_snow_1h_decay_cat, new_snow_1h_decay_score) plt.plot(x, res) plt.xlabel('Hourly air temperature (C)') plt.ylabel('Decay') plt.xlim(-42, 12); plt.ylim(-10, 10) #plt.gca().add_patch(Rectangle((0, 0), 40, 100, edgecolor="lightgrey", facecolor="lightgrey")) """ Explanation: Decay function End of explanation """ import sqlite3 import pandas as pd db_name = 'filefjell.db' conn = sqlite3.connect(db_name) cur = conn.cursor() sql = "SELECT * from FILEFJELL" df = pd.read_sql(sql, conn, index_col='index', parse_dates=['index']) # #conn.close() df.head() df.columns df.plot(subplots='True', figsize=(14, 9)) """ Explanation: Working with real data Load data from filefjell.db containing two weeks of met-data from the station. The database was generated by the notebook "xgeo_chartserver". 
End of explanation """ #df['24h_precip'] = df['FILEFJELL - KYRKJESTØLANE (54710), Nedbør (mm)'].rolling(window=24).sum() #df['72h_precip'] = df['FILEFJELL - KYRKJESTØLANE (54710), Nedbør (mm)'].rolling(window=72).sum() """ Explanation: Derive required input for new snow problem: - new snow amount last 0-24 h - new snow amount last 24-72 h - temperature gradient last 6 h (relate temperature to settling rate of previous snow falls) End of explanation """ df['score'] = score_func(df['FILEFJELL - KYRKJESTØLANE (54710), Nedbør (mm)'], sa, sb) df['decay'] = decay_func(df['FILEFJELL - KYRKJESTØLANE (54710), Lufttemperatur (°C)'], da, db) df['new_snow_score'] = np.clip(df['score'] - df['decay'], 0, 120) # using 120 to see how often we exceed 100! df.plot(subplots='True', figsize=(14, 9)) plt.gcf().savefig('real.png', dpi=300) """ Explanation: TODO: Find real data with higher precip... End of explanation """ df.loc['20160201']['new_snow_score'].plot() """ Explanation: Select certain days using .loc. Works before or after the ['column_name']. 
See http://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-label End of explanation """ df['new_snow_score'].loc['20160201T06:00:00':'20160202T06:00:00'].plot() sday = df.loc['20160201'] sday['new_snow_score'].describe() sday['new_snow_score'].plot.box() """ Explanation: Or meteorological day End of explanation """ def score_sum(new_snow_score, new_snow_decay, wind_speed_score): _sum = np.zeros_like(new_snow_score) _sum[0] = np.clip((new_snow_score[0] * wind_speed_score[0] - new_snow_decay[0]), 0, 100) for i in np.arange(1, len(new_snow_score)): _sum[i] = np.clip(_sum[i-1] + (new_snow_score[i] * wind_speed_score[i] - new_snow_decay[i]), 0, 100) return _sum df['wind_score'] = score_wind_speed(df['FILEFJELL - KYRKJESTØLANE (54710), Vindhastighet 10m (m/s)']) df['snow_score'] = score_new_snow_1h(df['FILEFJELL - KYRKJESTØLANE (54710), Nedbør (mm)']) df['snow_decay'] = decay_func(df['FILEFJELL - KYRKJESTØLANE (54710), Nedbør (mm)'], a, b) df['new_snow_score'] = score_sum(df['snow_score'], df['snow_decay'], df['wind_score']) #TODO: add a wind_speed_decay function; should we need to regard wind_direction? df.plot(subplots='True', figsize=(14, 23)) """ Explanation: Summing up the scores and decays End of explanation """ # Hourly air temperature (ideally snow surface temperature, but that is not available at the moment) control_points = np.array([ [-40., 0.01], [-20.0, 0.05], [-5, 1.], [0.0, 3.], #[1., 0.], #[4., -10.], [5., 4.] 
])
new_snow_1h_decay_cat = control_points[:, 0]
new_snow_1h_decay_score = control_points[:, 1]

# NOTE: decay_func above takes only two parameters; this outdated three-parameter
# variant is reconstructed here with an assumed additive offset so the cell runs.
def decay_func3(x, a, b, c):
    return a * b ** x + c

params = curve_fit(decay_func3, new_snow_1h_decay_cat, new_snow_1h_decay_score)
[a, b, c] = params[0]
print("Decay function with parameters {0:.2f}, {1:.2f} and {2:.2f} results in daily reduction of {3:.2f} points at zero degrees Celsius.".format(a, b, c, decay_func3(0, a, b, c)*24))

x = np.arange(-40, 10.0, 0.1)
res = decay_func3(x, a, b, c)
print(decay_func3(0, a, b, c))

plt.scatter(new_snow_1h_decay_cat, new_snow_1h_decay_score)
plt.plot(x, res)
plt.xlabel('Hourly air temperature (C)')
plt.ylabel('Decay')
plt.xlim(-42, 12); plt.ylim(-10, 10)
#plt.gca().add_patch(Rectangle((0, 0), 40, 100, edgecolor="lightgrey", facecolor="lightgrey"))

# Hourly new snow amount in mm water equivalent
control_points = np.array([
    [-2., 10.],
    [0.0, 2.0],
    [0.2, 0.5]#,
    #[0.5, 2.],
    #[1., 0.],
    #[4., -10.],
    #[10., -50.]
])
new_snow_1h_decay_cat = control_points[:, 0]
new_snow_1h_decay_score = control_points[:, 1]

params = curve_fit(decay_func, new_snow_1h_decay_cat, new_snow_1h_decay_score)
[a, b] = params[0]
print("Decay function with parameters {0:.2f} and {1:.2f} results in daily reduction of {2:.2f} points with zero precipitation.".format(a, b, decay_func(0, a, b)*24))

x = np.arange(0, 20.0, 0.1)
res = decay_func(x, a, b)

plt.scatter(new_snow_1h_decay_cat, new_snow_1h_decay_score)
plt.plot(x, res)
plt.xlabel('Hourly new snow amount (mm w.e.)')
plt.ylabel('Decay')
plt.xlim(0, 20); plt.ylim(0, 10)
plt.gca().add_patch(Rectangle((0, 0), 40, 100, edgecolor="lightgrey", facecolor="lightgrey"))
"""
Explanation: Outdated stuff
Using three parameters a,b,c
End of explanation
"""
# New snow amount last 24 h 0-60 cm [10 cm intervals]
new_snow_24h_cat = np.array([0, 20, 40, 60, 80, 100, 120])
new_snow_24h_score = np.array([0.5, 8.0, 15.0, 19., 21.0, 27.0, 33.3])

# Wind speed 0-100 km/h [0,10,20,30,40,50,60,80,100]
wind_speed_cat = np.array([-1.5, 0, 2.5, 5, 7.5, 10, 15, 20, 25, 30, 40]) # m/s
wind_speed_score = np.array([-1.0, 0.8, 2.0, 2.9, 3.2, 3.0, 1.1, 0.6, 0.4, 0.2, 0.0])
"""
Explanation: Main control factors
End of explanation
"""
# New snow amount last 24-72h 0-100 cm [0,10,20,30,40,50,60,80,100]
new_snow_24_72h_cat = np.array([0, 10, 20, 30, 40, 50, 60, 80, 100])
new_snow_24_72h_weight = np.array([0.8, 0.83, 0.86, 0.89, 0.92, 0.95, 0.98, 0.99, 1.0]) # a weight for new_snow_24h

# Evolution of temperature
evolution_temperature_cat = ["constant very cold", "constant cold", "constant warm", "rise towards 0 deg after snowfall", "substantial cooling after snowfall"]

# Bonding to existing snowpack
bonding_existing_snowpack_cat = ["favorable", "moderate", "poor"]

# Type of new snow
type_new_snow_cat = ["loose-powder", "soft", "packed", "packed and moist"]
"""
Explanation: Weighting
Weights are added if they are independent of the value of the core factor or multiplied if they are related to the core factor.
End of explanation
"""
new_snow_24h_fit = np.polyfit(new_snow_24h_cat, new_snow_24h_score, 2)
score_new_snow_24h = np.poly1d(new_snow_24h_fit)
x = np.arange(0, 120.0)
res = score_new_snow_24h(x)

LABELSIZE = 22
#plt.scatter(new_snow_24h_cat, new_snow_24h_score)
plt.plot(x, res)
plt.xlabel('New snow amount', fontsize=LABELSIZE)
plt.ylabel('Score', fontsize=LABELSIZE)
ax = plt.gca()
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
#plt.axhline(33.3, color='grey', ls='--')
plt.savefig('score_snow.png', dpi=150)
"""
Explanation: The new_snow_24_72h_weight is used to weight the new_snow_24h_scores prior to multiplying them with wind_speed_score. In order to achieve a smooth fit within the range of interest I added some control points just right outside the normal range for the higher order polynomials. The temperature evolution during a snowfall can be fitted to a curve which can then be compared to predefined curves/scenarios. The scenario with the best correlation is chosen to define the category. 
Temperature or change in snow depth will be used if the precipitation event is rain or snow when applied to data from a weather station. The AROME model generally supplies that separation.
The type_new_snow_cat can be inferred from evolution_temperature and wind_speed. In the first place the categories can be set manually.
Score functions
New snow 24 h
End of explanation
"""
new_snow_24_72h_fit = np.polyfit(new_snow_24_72h_cat, new_snow_24_72h_weight, 2)
score_new_snow_24_72h = np.poly1d(new_snow_24_72h_fit)
x = np.arange(0, 100.0)
res = score_new_snow_24_72h(x)

#plt.scatter(new_snow_24_72h_cat, new_snow_24_72h_weight)
plt.plot(x, res)
plt.xlabel('New snow last 24-72 h (cm)')
plt.ylabel('Weight')
#plt.axhline(1.0, color='grey', ls='--')
"""
Explanation: New snow 24-72 h
End of explanation
"""
wind_speed_fit = np.polyfit(wind_speed_cat, wind_speed_score, 5)
score_wind_speed = np.poly1d(wind_speed_fit)
x = np.arange(-5, 45.0)
res = score_wind_speed(x)

#plt.scatter(wind_speed_cat, wind_speed_score)
plt.plot(x, res)
plt.xlabel('Wind speed', fontsize=LABELSIZE)
plt.ylabel('Score', fontsize=LABELSIZE)
plt.xlim(0, 30)
plt.ylim(0, 4)
ax = plt.gca()
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
#plt.axvspan(0.0, 36.0, facecolor='grey', alpha=0.5) # model validity range
#plt.axhline(3.0, color='grey', ls='--')
plt.savefig('score_wind.png', dpi=150)
"""
Explanation: Wind speed
End of explanation
"""
new_snow = np.matrix(np.arange(0, 125.0))
sns = score_new_snow_24h(new_snow)

# weighted by new snow amount of the previous two days
new_snow_72 = 40
ns_weight = score_new_snow_24_72h(new_snow_72)
sns *= ns_weight

wind_speed = np.matrix(np.arange(0, 40.0))
swp = score_wind_speed(wind_speed)

M = np.multiply(sns, swp.T)
#print(M)
plt.contourf(M)#np.flipud(M.T))
print("Min {0}; Max {1}".format(np.amin(M), np.amax(M)))
#plt.colorbar()
plt.xlabel("New snow amount", fontsize=LABELSIZE)
plt.ylabel("Wind speed", fontsize=LABELSIZE)
ax = plt.gca()
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.savefig('score_im.png', dpi=150)
"""
Explanation: New snow vs. wind speed
End of explanation
"""
new_snow_cat = ["0-5", "5-10", "10-15", "15-20"]
new_snow_thres = {(0, 5): 0.2, (5, 10): 0.5, (10, 15): 1, (15, 20): 3}
wind_cat = ["0-3", "4-7", "8-10", "10-15", "16-30"]
wind_thres = {(0, 3): 0.2, (3, 7): 1, (7, 10): 2, (10, 15): 0.2, (15, 30): 0.01}

new_snow_region = np.array([[0, 4, 6, 18], [0, 4, 6, 18], [0, 4, 6, 18]])
wind_region = np.array([[0, 4, 12, 18], [4, 0, 18, 6], [18, 12, 6, 0]])

def get_score(a, score_dict):
    for key, value in score_dict.items():
        if key[0] <= a < key[1]:
        # if a < key:
            return value
    return None
"""
Explanation: ToDo
calculate new_snow_score for some weeks
compare to chosen AP in regional forecast
maybe extend to a larger grid
...continue with hemsedal_jan2016.py in Test
Random scripting testing
End of explanation
"""
new_snow_region_score = [get_score(a, new_snow_thres) for a in new_snow_region.flatten()]
new_snow_region_score = np.array(new_snow_region_score).reshape(new_snow_region.shape)
print(new_snow_region_score)

wind_region_score = [get_score(a, wind_thres) for a in wind_region.flatten()]
wind_region_score = np.array(wind_region_score).reshape(wind_region.shape)
print(wind_region_score)

print(wind_region_score * new_snow_region_score)

X = np.matrix(np.arange(0, 11.0))
Y = np.matrix(np.arange(10.0, 21.0))
Z = np.multiply(X, Y.T)
print(X)
print(Y.T)
print(Z)
plt.imshow(Z)
print("Min {0}; Max {1}".format(np.amin(Z), np.amax(Z)))
plt.colorbar()

arr = np.random.geometric(0.3, (20, 20))
plt.pcolor(arr)

window = arr[1:11,1:11]
arr[1:11,1:11] = 1
plt.pcolor(arr)

print(np.median(window))
print(window.flatten())
print(np.bincount(window.flatten()))
print(np.sort(window, axis=None))
"""
Explanation: the dict is not sorted and the comparison less than is random...
End of explanation
"""
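One way around the unordered-dict concern noted just above is to keep the interval thresholds in an explicitly sorted list and binary-search the lower bounds. A sketch using the stdlib bisect module (the helper names are made up; the semantics match get_score's half-open intervals):

```python
import bisect

def make_interval_lookup(thresholds):
    """thresholds: {(low, high): value} with non-overlapping intervals."""
    items = sorted(thresholds.items())              # sorted by (low, high)
    lows = [low for (low, _high), _value in items]  # parallel list of lower bounds
    return lows, items

def get_score_sorted(a, lows, items):
    i = bisect.bisect_right(lows, a) - 1  # rightmost interval with low <= a
    if i >= 0:
        (low, high), value = items[i]
        if low <= a < high:
            return value
    return None  # outside all intervals

wind_thres = {(0, 3): 0.2, (3, 7): 1, (7, 10): 2, (10, 15): 0.2, (15, 30): 0.01}
lows, items = make_interval_lookup(wind_thres)
```

Unlike iterating over an unordered dict, the lookup here is deterministic regardless of insertion order, and it stays O(log n) as the number of intervals grows.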
p0licat/university
Experiments/Crawling/Jupyter Notebooks/Maria-Iuliana Bocicor.ipynb
mit
class HelperMethods: @staticmethod def IsDate(text): # print("text") # print(text) for c in text.lstrip(): if c not in "1234567890 ": return False return True import pandas import requests page = requests.get('https://sites.google.com/view/iuliana-bocicor/research/publications') data = page.text from bs4 import BeautifulSoup soup = BeautifulSoup(data) def GetPublicationData_Number(text): title = text.split(',')[0].split('.')[1] try: date = [k.lstrip() for k in text.split(',') if HelperMethods.IsDate(k.lstrip())][0] except: date = "" return title, "", date import re def GetCoAuthorData(text): # print(text) val = re.search('\"[a-zA-Z ]+\"', text) title = val.group(0) val = re.search('Authors: [a-zA-Z,-. ]+ (?=Pages)', text) authors = val.group(0) # print(authors) return title, authors, "" def GetPublicationData_A(text): print(text) print() text = text.replace("M. ", "") authors = text.split('.')[0] print("authors: ", authors) title = text.split('.')[1].lstrip(' \"') print("title: ", title) try: val = re.search('(19|20)[0-9]{2}\.', text) date = val.group(0).rstrip('.') except: date = "" print() return title, authors, date pubs = [] # print(soup.find_all('div')) for e in soup.find_all('div'): if "class" in e.attrs: if e.attrs["class"] == ["tyJCtd", "mGzaTb", "baZpAe"]: # for every pub entry for c in e.find_all("p", attrs={"class": "zfr3Q"}): if c.text == "": continue if "co-author" in c.text: rval = GetCoAuthorData(c.text) else: features = c.text.split('.') if features[0].isdecimal(): rval = GetPublicationData_Number(c.text) else: rval = GetPublicationData_A(c.text) pubs.append(rval) for pub in pubs: print(pub) print("Count: ", len(pubs)) """ Explanation: Manual publication DB insertion from raw text using syntax features Publications and conferences of Dr. 
BOCICOR Maria Iuliana, Profesor Universitar http://www.cs.ubbcluj.ro/~iuliana End of explanation """ import mariadb import json with open('../credentials.json', 'r') as crd_json_fd: json_text = crd_json_fd.read() json_obj = json.loads(json_text) credentials = json_obj["Credentials"] username = credentials["username"] password = credentials["password"] table_name = "publications_cache" db_name = "ubbcluj" print(table_name) mariadb_connection = mariadb.connect(user=username, password=password, database=db_name) mariadb_cursor = mariadb_connection.cursor() for paper in pubs: title = "" pub_date = "" authors = "" try: pub_date = paper[2].lstrip() pub_date = str(pub_date) + "-01-01" if len(pub_date) != 10: pub_date = "" except: pass try: title = paper[0].lstrip() except: pass try: authors = paper[1].lstrip() except AttributeError: pass insert_string = "INSERT INTO {0} SET ".format(table_name) insert_string += "Title=\'{0}\', ".format(title) insert_string += "ProfessorId=\'{0}\', ".format(7) if pub_date != "": insert_string += "PublicationDate=\'{0}\', ".format(str(pub_date)) insert_string += "Authors=\'{0}\', ".format(authors) insert_string += "Affiliations=\'{0}\' ".format("") print(insert_string) try: mariadb_cursor.execute(insert_string) except mariadb.ProgrammingError as pe: print("Error") raise pe except mariadb.IntegrityError: continue mariadb_connection.close() """ Explanation: DB Storage (TODO) Time to store the entries in the papers DB table. End of explanation """
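One caveat with the insertion cell above: building the INSERT string with format() breaks as soon as a title contains a quote character, and the surrounding except clauses would hide it. Driver-side parameter binding avoids both problems. A sketch with the stdlib sqlite3 module standing in for mariadb (the mariadb connector uses %s-style placeholders instead of ?, but the idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE publications_cache "
            "(Title TEXT, ProfessorId INTEGER, PublicationDate TEXT, Authors TEXT)")

# a title that would break naive string formatting
paper = ('A "quoted" title, with commas', 7, "2016-01-01", "M. Bocicor")

# placeholders let the driver handle quoting/escaping
cur.execute("INSERT INTO publications_cache "
            "(Title, ProfessorId, PublicationDate, Authors) VALUES (?, ?, ?, ?)",
            paper)

row = cur.execute("SELECT Title, Authors FROM publications_cache").fetchone()
conn.close()
```

The same pattern also protects against SQL injection when titles come from scraped web pages, as they do here.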
NuSTAR/nustar_lunar_pointing
notebooks/Convert_Example.ipynb
mit
import sys from os.path import * import os # For loading the NuSTAR data from astropy.io import fits # Load the NuSTAR python libraries import nustar_pysolar as nustar """ Explanation: Code for converting an observation to solar coordinates Step 1: Run the pipeline on the data to get mode06 files with the correct status bit setting. Note that as of nustardas verion 1.6.0 you can now set the "runsplitsc" keyword to automatically split the CHU combinations for mode06 into separate data files. These files will be stored in the event_cl output directory and have filenames like: nu20201001001A06_chu2_N_cl.evt Optional: Check and see how much exposure is in each file. Use the Observation Report Notebook example to see how to do this. Step 2: Convert the data to heliocentric coordinates. Below uses the nustar.convert methods to change the image to heliocentric coordinates from RA/dec coordinates. Load the python libraries that we're going to use: End of explanation """ infile = '/Users/bwgref/science/solar/july_2016/data/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt' hdulist = fits.open(infile) evtdata = hdulist[1].data hdr = hdulist[1].header hdulist.close() """ Explanation: Get the data from the FITS file. Here we loop over the header keywords to get the correct columns for the X/Y coordinates. We also parse the FITS header to get the data we need to project the X/Y values (which are integers from 0-->1000) into RA/dec coordinates. End of explanation """ (newdata, newhdr) = nustar.convert.to_solar(evtdata, hdr, maxEvt=100) """ Explanation: Rotate to solar coordinates: Variation on what we did to setup the pointing. Note that this can take a little bit of time to run (~a minute or two). The important optin here is how frequently one wants to recompute the position of the Sun. The default is once every 5 seconds. Note that this can take a while (~minutes), so I recommend saving the output as a new FITS file (below). 
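To first order, the pixel-to-world projection mentioned above is a linear WCS transform around a reference pixel. Real NuSTAR headers would be handled by astropy's WCS machinery, but the core arithmetic is just the following sketch (the header values below are made up):

```python
def pix_to_world(pix, crpix, crval, cdelt):
    """First-order linear WCS: world = crval + (pix - crpix) * cdelt.

    pix/crpix are pixel coordinates, crval is the reference value in
    degrees, and cdelt is the degrees-per-pixel scale.
    """
    return crval + (pix - crpix) * cdelt

# hypothetical header: reference pixel 500.5 maps to RA = 180 deg,
# with -0.001 deg per pixel (RA decreases with increasing X)
ra = pix_to_world(600.5, crpix=500.5, crval=180.0, cdelt=-0.001)
```

This ignores spherical projection terms (e.g. the cos(dec) factor in RA), which is why production code should rely on the header's full WCS instead.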
End of explanation """ # # Make the new filename: (sfile, ext)=splitext(infile) outfile=sfile+'_sunpos.evt' # Remove output file if necessary if isfile(outfile): print(outfile, 'exists! Removing old version...') os.remove(outfile) fits.writeto(outfile, newdata, newhdr) """ Explanation: Write the output to a new FITS file. Below keeps the RAWX, RAWY, DET_ID, GRADE, and PI columns from the original file. It repalces the X/Y columns with the new sun_x, sun_y columns. End of explanation """
ajrichards/phylogenetic-models
lda/herve-vertebrates-example.ipynb
bsd-3-clause
import os
import numpy as np
from vertebratesLib import *

split = "SPLIT1"
summaryTree,summarySpecies,splitPositions = get_split_data(split)
print summaryTree.shape
"""
Explanation: LDA on vertebrates
Notes on the data
In this example the tree is constrained
In this example we have to extract position, transition, and branch.
The total positions are broken into N 'splits'
Each split contains 30,000 files (except the last file, which has fewer)
So that means that there are 30,000 * N total positions
Browsing the data
End of explanation
"""
def get_sentence(position,splitPositions,summary,ignore=False):
    splitIndex = np.where(splitPositions==position)[0]
    nonZero = np.where(summary[splitIndex,:] != 0)[1]
    sentence = []
    for nz in nonZero:
        if ignore and TRANSITIONS[nz].count(TRANSITIONS[nz][0]) == 2:
            continue
        count = int(summary[splitIndex,nz][0])
        sentence.extend([TRANSITIONS[nz]] * count)
    return sentence

position = '8500'
sentence1 = get_sentence(position,splitPositions,summaryTree,ignore=False)
sentence2 = get_sentence(position,splitPositions,summaryTree,ignore=True)
print("with same AA transition")
print(sentence1)
print("without same AA transition")
print(sentence2)
"""
Explanation: a sentence of words is represented as the transitions for a given position
End of explanation
"""
import lda

## the data matrix is sentences by vocabulary
vocab = TRANSITIONS
#inPlaceTransitions = []
#for t in TRANSITIONS:
"""
Explanation: Simple test run with lda package
End of explanation
"""
from IPython.display import Image

dataDir = None
for ddir in [os.path.join("..","data","herve-vertebrates"),\
             os.path.join("/","media","ganda","mojo","phylogenetic-models","herve-vertebrates")]:
    if os.path.isdir(ddir):
        dataDir = ddir

split = "SPLIT1"
position = "0"
treeList = get_trees(split,position,dataDir)
countMatrix = np.zeros((len(treeList),len(TRANSITIONS)),)
t = 0
for t,pbTree in enumerate(treeList):
    fixedTree,treeSummary = fix_tree(pbTree)
    tlist = []
    for item in treeSummary.itervalues():
        tlist.extend(item['pairs'])
    counts = transitions_to_counts(tlist)
    countMatrix[t,:] = counts

figName1 = os.path.join("figures","lda-bplot-check.png")
profile_box_plot(countMatrix,figName1,figTitle='position - %s'%position)
Image(filename=figName1)
"""
Explanation: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions: each draw from a DP is itself a distribution, and, importantly for clustering applications, it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$.
$\alpha$ is a hyperprior for the DP over per-document topic distributions
$\beta$ is the hyperprior for the DP over per-topic word distributions
$\theta_{m}$ is the topic distribution for document $m$
$\phi_{k}$ is the word distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th word in document $m$
$w_{m,n}$ is the specific word
The generative story for phylogenetics
We are still modeling topics. However, documents become sites and words become transitions. Transitions may be defined in nucleotide, amino acid or codon space. Perhaps more logically, though, a document would be all of the sites for a given gene. 
$\alpha$ is a hyperprior for the DP over per-site topic distributions
$\beta$ is the hyperprior for the DP over per-topic transition distributions
$\theta_{m}$ is the topic distribution for gene $m$
$\phi_{k}$ is the nucleotide transition distribution for topic $k$
$z_{m,n}$ is the topic for the $n$th nucleotide transition in gene $m$
$w_{m,n}$ is the specific transition
The generative process
Choose $\theta_{m} \sim \textrm{Dir}(\alpha)$, where $m \in \{1,...M\}$ and $\textrm{Dir}(\alpha)$ is the Dirichlet distribution for $\alpha$
Choose $\phi_{k} \sim \textrm{Dir}(\beta)$, where $k \in \{1,...K\}$
For each of the transition positions ($m$,$n$), where $n \in \{1,...N\}$, and $m \in \{1,...M\}$

Choose a topic $z_{m,n} \sim \textrm{Multinomial}(\theta_{m})$
Choose a transition $w_{m,n} \sim \textrm{Multinomial}(\phi_{z_{m,n}})$

$\phi$ is a $K \times V$ Markov matrix, each row of which denotes the transition distribution of a topic.
The type of data to expect
Here I am borrowing from the package LDA, which uses a collapsed version of Gibbs sampling. In this example:
Positions are documents
First 1000 positions
Positions are documents First 1000 positions We consider in place transitions 20 topics 1500 MCMC iterations 7 words in a topic (transitions) topics Topic 0: EE ED EQ EK EG DE EA Topic 1: YY YF FY FF YH YC YS Topic 2: RR KK RK QQ RQ HH KR Topic 3: AA AS AT AV SS VA SA Topic 4: II MM IV IL IM MI ML Topic 5: SS ST SA SN SP TT SG Topic 6: WW WL YW WS SW WV WG Topic 7: KK KR RR KQ RK KE KN Topic 8: HH HY HQ HR HN QH YH Topic 9: CC CS SC CY SS CF LC Topic 10: VV VI IV VA VL II VM Topic 11: TT TA TS TV TI ST TM Topic 12: DD DE DN ED EE DG ND Topic 13: QQ QK QE QH QR QL QP Topic 14: FF FL LF FY FI FV FC Topic 15: PP PS SP PA PL PT PQ Topic 16: NN NS SS ND NK SN NT Topic 17: LL LI LV LM LF MM LQ Topic 18: RR RK RQ RT RW RY RV Topic 19: GG GS GA GE GN SG GD top topics position - 0 (top topic: 3) position - 1 (top topic: 19) position - 10 (top topic: 3) position - 100 (top topic: 18) position - 1000 (top topic: 7) position - 10000 (top topic: 7) position - 10001 (top topic: 7) position - 10002 (top topic: 19) position - 10003 (top topic: 7) position - 10004 (top topic: 5) End of explanation """
cfjhallgren/shogun
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
gpl-3.0
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from shogun import RealFeatures, MulticlassLabels, Math

# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))

Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1

# 4000 examples for training
Xtrain = RealFeatures(Xall[:,0:4000])
Ytrain = MulticlassLabels(Yall[0:4000])

# the rest for testing
Xtest = RealFeatures(Xall[:,4000:-1])
Ytest = MulticlassLabels(Yall[4000:-1])

# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
"""
Explanation: Deep Autoencoders
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
This notebook illustrates how to train and evaluate a deep autoencoder using Shogun. We'll look at both regular fully-connected autoencoders and convolutional autoencoders.
Introduction
A (single layer) autoencoder is a neural network that has three layers: an input layer, a hidden (encoding) layer, and a decoding layer. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs.
In order to encourage the hidden layer to learn good input representations, certain variations on the simple autoencoder exist. Shogun currently supports two of them: Denoising Autoencoders [1] and Contractive Autoencoders [2]. In this notebook we'll focus on denoising autoencoders.
For denoising autoencoders, each time a new training example is introduced to the network, it's randomly corrupted in some manner, and the target is set to the original example.
The autoencoder will try to recover the original data from its noisy version, which is why it's called a denoising autoencoder. This process will force the hidden layer to learn a good representation of the input, one which is not affected by the corruption process.
A deep autoencoder is an autoencoder with multiple hidden layers. Training such autoencoders directly is usually difficult, however, they can be pre-trained as a stack of single layer autoencoders. That is, we train the first hidden layer to reconstruct the input data, and then train the second hidden layer to reconstruct the states of the first hidden layer, and so on. After pre-training, we can train the entire deep autoencoder to fine-tune all the parameters together. We can also use the autoencoder to initialize a regular neural network and train it in a supervised manner.
In this notebook we'll apply deep autoencoders to the USPS dataset for handwritten digits. We'll start by loading the data and dividing it into a training set and a test set:
End of explanation
"""
from shogun import NeuralLayers, DeepAutoencoder

layers = NeuralLayers()
layers = layers.input(256).rectified_linear(512).rectified_linear(128).rectified_linear(512).linear(256).done()

ae = DeepAutoencoder(layers)
"""
Explanation: Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers. However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric, that is, the first layer has to have the same size as the last layer, the second layer has to have the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work. More details on that can be found in the following section.
We'll create a 5-layer deep autoencoder with the following layer sizes: 256->512->128->512->256. 
We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer.
End of explanation
"""
from shogun import AENT_DROPOUT, NNOM_GRADIENT_DESCENT

ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
ae.pt_noise_parameter.set_const(0.5) # each input has a 50% chance of being set to zero

ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
ae.pt_gd_learning_rate.set_const(0.01)
ae.pt_gd_mini_batch_size.set_const(128)

ae.pt_max_num_epochs.set_const(50)
ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing

# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; ae.io.set_loglevel(MSG_INFO)

# start pre-training. this might take some time
ae.pre_train(Xtrain)
"""
Explanation: Pre-training
Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels: L1 for the input layer, L2 for the first hidden layer, and so on up to L5 for the output layer.
In pre-training, an autoencoder will be formed for each encoding layer (layers up to the middle layer in the network). So here we'll have two autoencoders: L1->L2->L5, and L2->L3->L4. The first autoencoder will be trained on the raw data and used to initialize the weights and biases of layers L2 and L5 in the deep autoencoder. After the first autoencoder is trained, we use it to transform the raw data into the states of L2. These states will then be used to train the second autoencoder, which will be used to initialize the weights and biases of layers L3 and L4 in the deep autoencoder.
The operations described above are performed by the pre_train() function. Pre-training parameters for each autoencoder can be controlled using the pt_* public attributes of DeepAutoencoder. Each of those attributes is an SGVector whose length is the number of autoencoders in the deep autoencoder (2 in our case). 
It can be used to set the parameters for each autoencoder individually. SGVector's set_const() method can also be used to assign the same parameter value for all autoencoders.
Different noise types can be used to corrupt the inputs in a denoising autoencoder. Shogun currently supports 2 noise types: dropout noise, where a random portion of the inputs is set to zero at each iteration in training, and Gaussian noise, where the inputs are corrupted with random Gaussian noise. The noise type and strength can be controlled using pt_noise_type and pt_noise_parameter. Here, we'll use dropout noise.
End of explanation
"""
ae.set_noise_type(AENT_DROPOUT) # same noise type we used for pre-training
ae.set_noise_parameter(0.5)

ae.set_max_num_epochs(50)
ae.set_optimization_method(NNOM_GRADIENT_DESCENT)
ae.set_gd_mini_batch_size(128)
ae.set_gd_learning_rate(0.0001)
ae.set_epsilon(0.0)

# start fine-tuning. this might take some time
_ = ae.train(Xtrain)
"""
Explanation: Fine-tuning
After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network. 
End of explanation """ # get a 50-example subset of the test set subset = Xtest[:,0:50].copy() # corrupt the first 25 examples with multiplicative noise subset[:,0:25] *= (random.random((256,25))>0.5) # corrupt the other 25 examples with additive noise subset[:,25:50] += random.random((256,25)) # obtain the reconstructions reconstructed_subset = ae.reconstruct(RealFeatures(subset)) # plot the corrupted data and the reconstructions figure(figsize=(10,10)) for i in range(50): ax1=subplot(10,10,i*2+1) ax1.imshow(subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax1.set_xticks([]) ax1.set_yticks([]) ax2=subplot(10,10,i*2+2) ax2.imshow(reconstructed_subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax2.set_xticks([]) ax2.set_yticks([]) """ Explanation: Evaluation Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions: End of explanation """ # obtain the weights matrix of the first hidden layer # the 512 is the number of biases in the layer (512 neurons) # the transpose is because numpy stores matrices in row-major format, and Shogun stores # them in column major format w1 = ae.get_layer_parameters(1)[512:].reshape(256,512).T # visualize the weights between the first 100 neurons in the hidden layer # and the neurons in the input layer figure(figsize=(10,10)) for i in range(100): ax1=subplot(10,10,i+1) ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax1.set_xticks([]) ax1.set_yticks([]) """ Explanation: The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise. 
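Beyond eyeballing the plots, reconstruction quality can be summarized numerically, e.g. as a per-example mean squared error between inputs and reconstructions. A NumPy sketch (examples stored column-wise, as in this notebook):

```python
import numpy as np

def reconstruction_mse(original, reconstructed):
    """Per-example MSE for data stored as (features x examples)."""
    err = np.asarray(original) - np.asarray(reconstructed)
    return np.mean(err ** 2, axis=0)

orig = np.array([[0.0, 1.0],
                 [2.0, 3.0]])   # 2 features x 2 examples
recon = np.array([[0.0, 1.0],
                  [1.0, 3.0]])  # example 0 is off by 1 in its second feature
mse = reconstruction_mse(orig, recon)
```

Applied to the subset above, this would give one error value per corrupted digit, making it easy to compare the multiplicative- and additive-noise halves quantitatively.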
Next, we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format.
End of explanation
"""
from shogun import NeuralSoftmaxLayer

nn = ae.convert_to_neural_network(NeuralSoftmaxLayer(10))

nn.set_max_num_epochs(50)

nn.set_labels(Ytrain)
_ = nn.train(Xtrain)

"""
Explanation: Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layers of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer. So, the network will look like: L1->L2->L3->Softmax. The network is obtained by calling convert_to_neural_network():
End of explanation
"""
from shogun import MulticlassAccuracy

predictions = nn.apply_multiclass(Xtest)

accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100

print("Classification accuracy on the test set =", accuracy, "%")

"""
Explanation: Next, we'll evaluate the accuracy on the test set:
End of explanation
"""
from shogun import DynamicObjectArray, NeuralInputLayer, NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR

conv_layers = DynamicObjectArray()
# 16x16 single channel images
conv_layers.append_element(NeuralInputLayer(16,16,1))

# the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 5 8x8 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2, 2, 2))

# the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))

# the first decoding layer: same structure as the first encoding layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2))

# the second decoding layer: same structure as the input layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 1, 2, 2))

conv_ae = DeepAutoencoder(conv_layers)

"""
Explanation: Convolutional Autoencoders
Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification.
In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using CNeuralConvolutionalLayer objects:
End of explanation
"""
conv_ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
conv_ae.pt_noise_parameter.set_const(0.3) # each input has a 30% chance of being set to zero

conv_ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
conv_ae.pt_gd_learning_rate.set_const(0.002)
conv_ae.pt_gd_mini_batch_size.set_const(100)

conv_ae.pt_max_num_epochs[0] = 30 # max number of epochs for pre-training the first encoding layer
conv_ae.pt_max_num_epochs[1] = 10 # max number of epochs for pre-training the second encoding layer
conv_ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing

# start pre-training.
# this might take some time
conv_ae.pre_train(Xtrain)

"""
Explanation: Now we'll pre-train the autoencoder:
End of explanation
"""
conv_nn = conv_ae.convert_to_neural_network(NeuralSoftmaxLayer(10))

# train the network
conv_nn.set_epsilon(0.0)
conv_nn.set_max_num_epochs(50)
conv_nn.set_labels(Ytrain)

# start training. this might take some time
_ = conv_nn.train(Xtrain)

"""
Explanation: And then convert the autoencoder to a regular neural network for classification:
End of explanation
"""
predictions = conv_nn.apply_multiclass(Xtest)

accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100

print("Classification accuracy on the test set =", accuracy, "%")

"""
Explanation: And evaluate it on the test set:
End of explanation
"""
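Before leaving the convolutional example, the feature-map sizes can be double-checked with a little arithmetic: the convolutions here preserve the spatial dimensions, and non-overlapping 2x2 max-pooling halves each side. A small sketch of that calculation (plain illustrative Python, not a Shogun API):

```python
def pooled_output_side(input_side, pool_side):
    """Side length of a feature map after non-overlapping pooling."""
    return input_side // pool_side

side = 16                # 16x16 input images
shapes = []
for n_maps in (5, 15):   # the two encoding layers used above
    side = pooled_output_side(side, 2)
    shapes.append((n_maps, side))
print(shapes)  # → [(5, 8), (15, 4)]
```

The decoding layers then mirror this path back up to the original 16x16, single-channel output.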
CrowdTruth/CrowdTruth-core
tutorial/notebooks/Sparse Multiple Choice Task - Person Annotation in Video.ipynb
apache-2.0
import pandas as pd test_data = pd.read_csv("../data/person-video-sparse-multiple-choice.csv") test_data.head() """ Explanation: CrowdTruth for Sparse Multiple Choice Tasks: Person Annotation in Video In this tutorial, we will apply CrowdTruth metrics to a multiple choice crowdsourcing task for Person Annotation from video fragments. The workers were asked to watch a video of about 3-5 seconds and then pick from a multiple choice list which are the tags that are relevant for the people that appear in the video fragment. The options available in the multiple choice list change with the input video fragment. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here. To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript. This is a screenshot of the task as it appeared to workers: A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data: End of explanation """ import crowdtruth from crowdtruth.configuration import DefaultConfig """ Explanation: Declaring a pre-processing configuration The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. 
First, we import the default CrowdTruth configuration class:
End of explanation
"""
class TestConfig(DefaultConfig):
    inputColumns = ["videolocation", "subtitles", "imagetags", "subtitletags"]
    outputColumns = ["selected_answer"]

    # the candidate answers vary with each input unit, so the task is processed as open-ended
    open_ended_task = True

    def processJudgments(self, judgments):
        # normalize the crowd answers into a canonical, comparable form
        for col in self.outputColumns:
            # transform to lowercase
            judgments[col] = judgments[col].apply(lambda x: str(x).lower())
            # remove square brackets from annotations
            judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))
            judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))
            # remove the quotes around the annotations
            judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
        return judgments

"""
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type/Role Annotation in Video task:

inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e.
the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, the candidate answers change with every video fragment, so even though workers pick from a list, the task is treated as open-ended and this variable is set to True
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; since this task is treated as open-ended, no annotation_vector needs to be declared
processJudgments: method that defines processing of the raw crowd data; for this task, we normalize the crowd answers (lowercased, with brackets and quotes stripped)

The complete configuration class is declared below:
End of explanation
"""
data, config = crowdtruth.load(
    file = "../data/person-video-sparse-multiple-choice.csv",
    config = TestConfig()
)

data['judgments'].head()

"""
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
"""
results = crowdtruth.run(data, config)

"""
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
"""
results["units"].head()

"""
Explanation: results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers. The video fragment metrics are stored in results["units"]:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline

plt.hist(results["units"]["uqs"])
plt.xlabel("Video Fragment Quality Score")
plt.ylabel("Video Fragment")

"""
Explanation: The uqs column in results["units"] contains the video fragment quality scores, capturing the overall worker agreement over each video fragment. Here we plot its histogram:
End of explanation
"""
results["units"]["unit_annotation_score"].head()

"""
Explanation: The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment.
For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
End of explanation
"""
results["workers"].head()

"""
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
"""
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")

"""
Explanation: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
"""
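As a closing intuition for these worker scores, agreement can be pictured by representing each worker's judgments on a unit as a binary vector over the candidate tags and comparing it to the aggregate of the other workers with cosine similarity. This is a deliberately simplified toy sketch; the real CrowdTruth metrics, computed by the library above, additionally weight workers, units, and annotations by each other's quality:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two annotation vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy annotation vectors over 4 candidate tags for a single video fragment
workers = np.array([
    [1, 1, 0, 0],   # worker A
    [1, 1, 0, 0],   # worker B agrees with A
    [0, 0, 1, 1],   # worker C picked entirely different tags
])

scores = []
for i in range(len(workers)):
    others = workers[np.arange(len(workers)) != i].sum(axis=0)
    scores.append(cosine(workers[i], others))
print([round(s, 2) for s in scores])  # → [0.71, 0.71, 0.0]
```

Workers A and B, who agree with each other, score well; the disagreeing worker C scores zero, mirroring how a low wqs flags low-agreement workers.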
tpin3694/tpin3694.github.io
machine-learning/ridge_regression.ipynb
mit
# Load libraries
from sklearn.linear_model import Ridge
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler

"""
Explanation: Title: Ridge Regression
Slug: ridge_regression
Summary: How to conduct ridge regression in scikit-learn for machine learning in Python.
Date: 2017-09-18 12:00
Category: Machine Learning
Tags: Linear Regression
Authors: Chris Albon
<a alt="Ridge Regression" href="https://machinelearningflashcards.com">
    <img src="ridge_regression/Ridge_Regression_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
"""
# Load data
boston = load_boston()
X = boston.data
y = boston.target

"""
Explanation: Load Boston Housing Dataset
End of explanation
"""
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)

"""
Explanation: Standardize Features
End of explanation
"""
# Create ridge regression with an alpha value
regr = Ridge(alpha=0.5)

# Fit the ridge regression
model = regr.fit(X_std, y)

"""
Explanation: Fit Ridge Regression
The hyperparameter, $\alpha$, lets us control how much we penalize the coefficients, with higher values of $\alpha$ creating simpler models. The ideal value of $\alpha$ should be tuned like any other hyperparameter. In scikit-learn, $\alpha$ is set using the alpha parameter.
End of explanation
"""
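Under the hood, the penalty has a closed-form solution: $\hat{w} = (X^T X + \alpha I)^{-1} X^T y$. A minimal NumPy sketch of that math (illustrative only; scikit-learn's Ridge uses optimized solvers and fits an unpenalized intercept separately, which this sketch omits):

```python
import numpy as np

def ridge_coefs(X, y, alpha):
    """Closed-form ridge solution (no intercept handling)."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.randn(100)

w_ols = ridge_coefs(X, y, alpha=0.0)     # alpha=0 reduces to ordinary least squares
w_ridge = ridge_coefs(X, y, alpha=10.0)  # a larger alpha shrinks the coefficients

print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # → True
```

The shrinkage effect is exactly why higher $\alpha$ values produce simpler models.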
urgedata/pythondata
fbprophet/.ipynb_checkpoints/fbprophet_part_one-checkpoint.ipynb
mit
import pandas as pd import numpy as np from fbprophet import Prophet import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize']=(20,10) plt.style.use('ggplot') """ Explanation: Import necessary libraries End of explanation """ sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True) sales_df.head() """ Explanation: Read in the data Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file. End of explanation """ df = sales_df.reset_index() df.head() """ Explanation: Prepare for Prophet For prophet to work, we need to change the names of these columns to 'ds' and 'y', so lets just create a new dataframe and keep our old one handy (you'll see why later). The new dataframe will initially be created with an integer index so we can rename the columns End of explanation """ df=df.rename(columns={'date':'ds', 'sales':'y'}) df.head() """ Explanation: Let's rename the columns as required by fbprophet. Additioinally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differnetly than the integer index. End of explanation """ df.set_index('ds').y.plot() """ Explanation: Now's a good time to take a look at your data. Plot the data using pandas' plot function End of explanation """ df['y'] = np.log(df['y']) df.tail() df.set_index('ds').y.plot() """ Explanation: When working with time-series data, its good to take a look at the data to determine if trends exist, whether it is stationary, has any outliers and/or any other anamolies. Facebook prophet's example uses the log-transform as a way to remove some of these anomolies but it isn't the absolute 'best' way to do this...but given that its the example and a simple data series, I'll follow their lead for now. Taking the log of a number is easily reversible to be able to see your original data. 
To log-transform your data, you can use numpy's log() function End of explanation """ model = Prophet() model.fit(df); """ Explanation: As you can see in the above chart, the plot looks the same as the first one but just at a different scale. Running Prophet Now, let's set prophet up to begin modeling our data. Note: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. This is OK since we are workign with monthly data but you can disable it by using weekly_seasonality=True in the instantiation of Prophet. End of explanation """ future = model.make_future_dataframe(periods=24, freq = 'm') future.tail() """ Explanation: Forecasting is fairly useless unless you can look into the future, so we need to add some future dates to our dataframe. For this example, I want to forecast 2 years into the future, so I'll built a future dataframe with 24 periods since we are working with monthly data. Note the freq='m' inclusion to ensure we are adding 24 months of data. This can be done with the following code: End of explanation """ forecast = model.predict(future) """ Explanation: To forecast this future data, we need to run it through Prophet's model. End of explanation """ forecast.tail() """ Explanation: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe: End of explanation """ forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail() """ Explanation: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with: End of explanation """ model.plot(forecast); """ Explanation: Plotting Prophet results Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area). 
End of explanation """ df.set_index('ds', inplace=True) forecast.set_index('ds', inplace=True) """ Explanation: Personally, I'm not a fan of this visualization so I like to break the data up and build a chart myself. The next section describes how I build my own visualization for Prophet modeling Visualizing Prophet models In order to build a useful dataframe to visualize our model versus our original data, we need to combine the output of the Prophet model with our original data set, then we'll build a new chart manually using pandas and matplotlib. First, let's set our dataframes to have the same index of ds End of explanation """ viz_df = sales_df.join(forecast[['yhat', 'yhat_lower','yhat_upper']], how = 'outer') """ Explanation: Now, we'll combine the original data and our forecast model data End of explanation """ viz_df.head() viz_df['yhat_rescaled'] = np.exp(viz_df['yhat']) viz_df.head() """ Explanation: If we look at the head(), we see the data has been joined correctly but the scales of our original data (sales) and our model (yhat) are different. We need to rescale the yhat colums(s) to get the same scale, so we'll use numpy's exp function to do that. End of explanation """ viz_df[['sales', 'yhat_rescaled']].plot() """ Explanation: Let's take a look at the sales and yhat_rescaled data together in a chart. End of explanation """ sales_df.index = pd.to_datetime(sales_df.index) #make sure our index as a datetime object connect_date = sales_df.index[-2] #select the 2nd to last date """ Explanation: You can see from the chart that the model (blue) is pretty good when plotted against the actual signal (orange) but I like to make my vizualization's a little better to understand. To build my 'better' visualization, we'll need to go back to our original sales_df and forecast dataframes. First things first - we need to find the 2nd to last date of the original sales data in sales_df in order to ensure the original sales data and model data charts are connected. 
End of explanation """ mask = (forecast.index > connect_date) predict_df = forecast.loc[mask] predict_df.head() """ Explanation: Using the connect_date we can now grab only the model data that after that date (you'll see why in a minute). To do this, we'll mask the forecast data. End of explanation """ viz_df = sales_df.join(predict_df[['yhat', 'yhat_lower','yhat_upper']], how = 'outer') viz_df['yhat_scaled']=np.exp(viz_df['yhat']) """ Explanation: Now, let's build a dataframe to use in our new visualization. We'll follow the same steps we did before. End of explanation """ viz_df.head() """ Explanation: Now, if we take a look at the head() of viz_df we'll see 'NaN's everywhere except for our original data rows. End of explanation """ viz_df.tail() """ Explanation: If we take a look at the tail() of the viz_df you'll see we have data for the forecasted data and NaN's for the original data series. End of explanation """ fig, ax1 = plt.subplots() ax1.plot(viz_df.sales) ax1.plot(viz_df.yhat_scaled, color='black', linestyle=':') ax1.fill_between(viz_df.index, np.exp(viz_df['yhat_upper']), np.exp(viz_df['yhat_lower']), alpha=0.5, color='darkgray') ax1.set_title('Sales (Orange) vs Sales Forecast (Black)') ax1.set_ylabel('Dollar Sales') ax1.set_xlabel('Date') L=ax1.legend() #get the legend L.get_texts()[0].set_text('Actual Sales') #change the legend text for 1st plot L.get_texts()[1].set_text('Forecasted Sales') #change the legend text for 2nd plot """ Explanation: time to plot Now, let's plot everything to get the 'final' visualization of our sales data and forecast with errors. End of explanation """
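Two mechanics carried this whole walkthrough: np.exp exactly undoes the earlier np.log transform, and an outer join lines up series with non-overlapping date ranges, filling the gaps with NaN so the actual and forecast lines can meet cleanly. A small self-contained sketch on toy data (not the retail sales file):

```python
import numpy as np
import pandas as pd

# the log/exp round trip is exactly reversible
sales = pd.Series([100.0, 120.0, 90.0])
assert np.allclose(np.exp(np.log(sales)), sales)

# an outer join on partially overlapping date indexes keeps all dates,
# leaving NaN where only one side has data
idx_a = pd.date_range('2020-01-01', periods=3, freq='D')
idx_b = pd.date_range('2020-01-03', periods=3, freq='D')
actual = pd.DataFrame({'sales': [1.0, 2.0, 3.0]}, index=idx_a)
model_out = pd.DataFrame({'yhat': [0.9, 2.1, 3.2]}, index=idx_b)

joined = actual.join(model_out, how='outer')
print(joined.shape)                  # → (5, 2)
print(int(joined['yhat'].isna().sum()))   # → 2
```

The same pattern, applied to sales_df and predict_df above, is what produces the NaN-padded viz_df used for the final chart.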
tritemio/multispot_paper
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-17d.ipynb
mit
ph_sel_name = "all-ph"
data_id = "17d"

# ph_sel_name = "all-ph"
# data_id = "7d"

"""
Explanation: Executed: Mon Mar 27 11:34:28 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis. For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *

init_notebook()
from IPython.display import display

"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'

import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir

"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)

## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict

ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'), 'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]

data_id, ph_sel_name

"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])

"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t

"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)

"""
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitations:
End of explanation
"""
plot_alternation_hist(d)

"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)

"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d

"""
Explanation: Measurements infos
All the
measurement data is in the d variable. We can print it: End of explanation """ d.time_max """ Explanation: Or check the measurements duration: End of explanation """ d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7) dplot(d, timetrace_bg) d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa """ Explanation: Compute background Compute the background using automatic threshold: End of explanation """ bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel) d.burst_search(**bs_kws) th1 = 30 ds = d.select_bursts(select_bursts.size, th1=30) bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True) .round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4})) bursts.head() burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv' .format(sample=data_id, th=th1, **bs_kws)) burst_fname bursts.to_csv(burst_fname) assert d.dir_ex == 0 assert d.leakage == 0 print(d.ph_sel) dplot(d, hist_fret); # if data_id in ['7d', '27d']: # ds = d.select_bursts(select_bursts.size, th1=20) # else: # ds = d.select_bursts(select_bursts.size, th1=30) ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30) n_bursts_all = ds.num_bursts[0] def select_and_plot_ES(fret_sel, do_sel): ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel) ds_do = ds.select_bursts(select_bursts.ES, **do_sel) bpl.plot_ES_selection(ax, **fret_sel) bpl.plot_ES_selection(ax, **do_sel) return ds_fret, ds_do ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1) if data_id == '7d': fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False) do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '12d': fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '17d': fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, 
rect=False)
    do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)

elif data_id == '22d':
    fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
    do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)

elif data_id == '27d':
    fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
    do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)

n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]

n_bursts_do, n_bursts_fret

d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)

dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);

dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);

"""
Explanation: Burst search and selection
End of explanation
"""
def hsm_mode(s):
    """
    Half-sample mode (HSM) estimator of `s`.
    `s` is a sample from a continuous distribution with a single peak.

    Reference:
        Bickel, Fruehwirth (2005). arXiv:math/0505419
    """
    s = memoryview(np.sort(s))
    i1 = 0
    i2 = len(s)

    while i2 - i1 > 3:
        n = (i2 - i1) // 2
        w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
        i1 = w.index(min(w)) + i1
        i2 = i1 + n

    if i2 - i1 == 3:
        if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
            i2 -= 1
        elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
            i1 += 1
        else:
            i1 = i2 = i1 + 1

    return 0.5*(s[i1] + s[i2])

E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))

"""
Explanation: Donor Leakage fit
Half-Sample Mode
Fit peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation """ E_fitter = bext.bursts_fitter(ds_do, weights=None) E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03)) E_fitter.fit_histogram(model=mfit.factory_gaussian()) E_fitter.params res = E_fitter.fit_res[0] res.params.pretty_print() E_pr_do_gauss = res.best_values['center'] E_pr_do_gauss """ Explanation: Gaussian Fit Fit the histogram with a gaussian: End of explanation """ bandwidth = 0.03 E_range_do = (-0.1, 0.15) E_ax = np.r_[-0.2:0.401:0.0002] E_fitter.calc_kde(bandwidth=bandwidth) E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1]) E_pr_do_kde = E_fitter.kde_max_pos[0] E_pr_do_kde """ Explanation: KDE maximum End of explanation """ mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False) plt.axvline(E_pr_do_hsm, color='m', label='HSM') plt.axvline(E_pr_do_gauss, color='k', label='Gauss') plt.axvline(E_pr_do_kde, color='r', label='KDE') plt.xlim(0, 0.3) plt.legend() print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' % (E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100)) """ Explanation: Leakage summary End of explanation """ nt_th1 = 50 dplot(ds_fret, hist_size, which='all', add_naa=False) xlim(-0, 250) plt.axvline(nt_th1) Th_nt = np.arange(35, 120) nt_th = np.zeros(Th_nt.size) for i, th in enumerate(Th_nt): ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th) nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th plt.figure() plot(Th_nt, nt_th) plt.axvline(nt_th1) nt_mean = nt_th[np.where(Th_nt == nt_th1)][0] nt_mean """ Explanation: Burst size distribution End of explanation """ E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size') E_fitter = ds_fret.E_fitter E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5)) E_fitter.fit_res[0].params.pretty_print() fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(E_fitter, ax=ax[0]) mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % 
(ds_fret.ph_sel, E_pr_fret_kde*100)) display(E_fitter.params*100) """ Explanation: Fret fit Max position of the Kernel Density Estimation (KDE): End of explanation """ ds_fret.fit_E_m(weights='size') """ Explanation: Weighted mean of $E$ of each burst: End of explanation """ ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None) """ Explanation: Gaussian fit (no weights): End of explanation """ ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size') E_kde_w = E_fitter.kde_max_pos[0] E_gauss_w = E_fitter.params.loc[0, 'center'] E_gauss_w_sig = E_fitter.params.loc[0, 'sigma'] E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0])) E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr """ Explanation: Gaussian fit (using burst size as weights): End of explanation """ S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True) S_fitter = ds_fret.S_fitter S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(S_fitter, ax=ax[0]) mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100)) display(S_fitter.params*100) S_kde = S_fitter.kde_max_pos[0] S_gauss = S_fitter.params.loc[0, 'center'] S_gauss_sig = S_fitter.params.loc[0, 'sigma'] S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0])) S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr """ Explanation: Stoichiometry fit Max position of the Kernel Density Estimation (KDE): End of explanation """ S = ds_fret.S[0] S_ml_fit = (S.mean(), S.std()) S_ml_fit """ Explanation: The Maximum likelihood fit for a Gaussian population is the mean: End of 
explanation """ weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.) S_mean = np.dot(weights, S)/weights.sum() S_std_dev = np.sqrt( np.dot(weights, (S - S_mean)**2)/weights.sum()) S_wmean_fit = [S_mean, S_std_dev] S_wmean_fit """ Explanation: Computing the weighted mean and weighted standard deviation we get: End of explanation """ sample = data_id """ Explanation: Save data to file End of explanation """ variables = ('sample n_bursts_all n_bursts_do n_bursts_fret ' 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr ' 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr ' 'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n') """ Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. End of explanation """ variables_csv = variables.replace(' ', ',') fmt_float = '{%s:.6f}' fmt_int = '{%s:d}' fmt_str = '{%s}' fmt_dict = {**{'sample': fmt_str}, **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}} var_dict = {name: eval(name) for name in variables.split()} var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n' data_str = var_fmt.format(**var_dict) print(variables_csv) print(data_str) # NOTE: The file name should be the notebook name but with .csv extension with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f: f.seek(0, 2) if f.tell() == 0: f.write(variables_csv) f.write(data_str) """ Explanation: This is just a trick to format the different variables: End of explanation """
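The variable-saving "trick" above, a dict of per-variable format strings joined into one row template, can be seen in isolation on toy variables (hypothetical names; same pattern as the notebook's):

```python
# hypothetical variable list mirroring the pattern used above
variables = 'sample n_bursts E_kde'
fmt_dict = {'sample': '{sample}', 'n_bursts': '{n_bursts:d}'}  # strings and ints get special formats

# any variable not listed in fmt_dict falls back to a 6-decimal float format
var_fmt = ', '.join(fmt_dict.get(name, '{%s:.6f}' % name) for name in variables.split()) + '\n'

row = var_fmt.format(sample='17d', n_bursts=42, E_kde=0.123456789)
# row == '17d, 42, 0.123457\n'
```

Because the template is built from the same ordered variable list as the CSV header, the columns and values always line up.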
google/applied-machine-learning-intensive
content/03_regression/03_regression_quality/colab.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 Google LLC. End of explanation """ import numpy as np np.random.seed(0xFACADE) """ Explanation: Regression Quality So far in this course, we have spent some time building and testing regression models. But how can we measure how good these models are? In this Colab, we will examine a few of the ways that we can measure and graph the results of a regression model in order to better understand the quality of the model. Building a Dataset In order to discuss regression quality, we should probably start by building a regression model. We will start by creating an artificial dataset to model. Start by importing NumPy and setting a random seed so that we get reproducible results. Remember: Do not set a random seed in production code! End of explanation """ X = np.random.uniform(low=0, high=200, size=50) print(f'min: {np.min(X)}') print(f'max: {np.max(X)}') print(f'mean: {np.mean(X)}') print(f'count: {np.size(X)}') """ Explanation: Recall that linear regression is about trying to fit a straight line through a set of data points. The equation for a straight line is: $y = m*x + b$ where: - $x$ is the feature - $y$ is the outcome - $m$ is the slope of the line - $b$ is the intercept of the line on the $y$-axis But at this point we don't even have $x$-values! We will use NumPy's random.uniform function to generate 50 random numbers between 0 and 200 as $x$-values. 
End of explanation """ import matplotlib.pyplot as plt plt.plot(X, X, 'g.') plt.show() """ Explanation: You should see a: minimum value near, but not below 0 maximum value near, but not above 200 mean value somewhere near 100 count value of exactly 50 Let's visualize the $x$-values, just to get some idea of the distribution of the values in the range of 0-200. How do we plot a one-dimensional list of values in two-dimensional space? We can plot it against itself! End of explanation """ SLOPE = 4 INTERCEPT = 10 Y_PRED = (SLOPE * X) + INTERCEPT plt.plot(X, Y_PRED, 'b.') plt.plot(X, Y_PRED, 'r-') plt.show() """ Explanation: As you can see, we have a straight line of $x$-values that span from roughly 0 to 200. Let's now create some $y$-values via the equation $y=4x+10$ (i.e. the slope is 4 and the intercept is 10). We'll call the new variable Y_PRED since it is the predicted variable. End of explanation """ Y_PRED = (SLOPE * X) + INTERCEPT randomness = np.random.uniform(low=-200, high=200, size=50) Y = SLOPE * X + randomness + INTERCEPT plt.plot(X, Y, 'b.') plt.plot(X, Y_PRED, 'r-') plt.show() """ Explanation: This regression line fits amazingly well! If only we could have this perfect of a fit in the real world. Unfortunately, this is almost never the case. There is always some variability. Let's add some randomness into our $y$-values to get a more realistic dataset. We will keep our original $y$-values in order to remember our base regression line. We'll recreate our original $y$-values and store them in Y_PRED. Then, we'll create Y with the same equation but with added randomness. End of explanation """ ss_res = ((Y - Y_PRED) ** 2).sum(axis=0, dtype=np.float64) print(ss_res) """ Explanation: We now have the line that was used to generate the data plotted in red, and the randomly displaced data points in blue. The dots, though definitely not close to the line, at least follow the linear trend in general. This seems like a reasonable dataset for a linear regression. 
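A quick aside on how a best-fit line would actually be recovered from noisy points like these: simple linear regression has a closed-form least-squares solution, $m = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}$ and $b = \bar{y} - m\bar{x}$. Below is a plain-Python sketch of that formula (not a cell from the original Colab; the toy data is made up):

```python
# Closed-form simple linear regression: slope = cov(x, y) / var(x),
# intercept = mean(y) - slope * mean(x). Plain Python, for illustration only.
def fit_line(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    var = sum((x - x_mean) ** 2 for x in xs)
    slope = cov / var
    intercept = y_mean - slope * x_mean
    return slope, intercept

# Noise-free data generated from y = 4x + 10 recovers the parameters exactly.
xs = [0, 1, 2, 3, 4]
ys = [4 * x + 10 for x in xs]
print(fit_line(xs, ys))  # (4.0, 10.0)
```

With noisy $y$-values like the ones above, the recovered slope and intercept would land near, but not exactly on, 4 and 10.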
Let's remind ourselves of the key variables in the model: X: the $x$-values that we used to "train" the model Y: the $y$-values that represent the actual values that correlate to X Y_PRED: the $y$-values that the model would predict for each $x$-value Coefficient of Determination The coefficient of determination, denoted $R^2$, is one of the most important metrics in regression. It tells us how much of the data is "explained" by the model. Before we can define the metric itself, we need to define a few other key terms. A residual is the difference between the target value $y_i$ and the predicted value $\hat{y_i}$. The residual sum of squares is the summation of the square of every residual in the prediction set. $$ SS_{res} = \sum_{i}(y_i - \hat{y_i})^2$$ End of explanation """ y_mean = np.average(Y, axis=0) print(y_mean) ss_tot = ((Y - y_mean)**2).sum(axis=0, dtype=np.float64) print(ss_tot) """ Explanation: The total sum of squares is the sum of the squares of the difference between each value $y_i$ and their mean. $$\bar{y} = \frac{1}{n}\sum_{i=1}^{n}y_{i}$$ $$SS_{tot} = \sum_{i}(y_{i}-\bar{y})^2$$ End of explanation """ r2 = 1 - (ss_res/ss_tot) print(r2) """ Explanation: Given the total sum of squares and the residual sum of squares, we can calculate the coefficient of determination $R^2$. $$R^{2} = 1 - \frac{SS_{res}}{SS_{tot}}$$ End of explanation """ from sklearn.metrics import r2_score print(r2_score(Y, Y_PRED)) """ Explanation: If you just ran the cells in this Colab from top to bottom you probably got a score of 0.846. Is this good? Bad? Mediocre? The $R^2$ score measures how well the actual variance from $x$-values to $y$-values is represented in the variance between the $x$-values and the predicted $y$-values. Typically, this score ranges from 0 to 1, where 0 is bad and 1 is a perfect mapping. However, the score can also be negative. Can you guess why? 
If a line drawn horizontally through the data points performs better than your regression, then the $R^2$ score would be negative. If you see this, try again. Your model really isn't working. For values in the range 0-1, interpreting the $R^2$ is more subjective. The closer to 0, the worse your model is at fitting the data. And generally, the closer to 1, the better (but you also don't want to overfit). This is where testing, observation, and experience come into play. It turns out that scikit-learn can calculate $R^2$ for us: End of explanation """ print(r2_score(Y, Y)) print(r2_score(Y_PRED, Y_PRED)) """ Explanation: Knowing that we don't have to manually do all of the math again, let's now see the perfect and a very imperfect case of a regression fitting a dataset. To begin with, we'll show a perfect fit. What happens if our predictions and our actual values are identical? End of explanation """ Y_PRED_BAD = -Y_PRED plt.plot(X, Y, 'y.') plt.plot(X, Y_PRED_BAD, 'r-') """ Explanation: 1.0: just what we thought! A perfect fit. Now let's see if we can make a regression so poor that $R^2$ is negative. In this case, we need to make our predicted data look different than our actuals. To do this, we'll negate our predictions and save them into a new variable called Y_PRED_BAD. End of explanation """ print(r2_score(Y, Y_PRED_BAD)) """ Explanation: That prediction line looks horrible! Indeed, a horizontal line would fit this data better. Let's check the $R^2$. End of explanation """ plt.plot(Y_PRED, Y, 'b.') plt.plot([Y_PRED.min(), Y_PRED.max()], [Y_PRED.min(), Y_PRED.max()], 'r-') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() """ Explanation: A negative $R^2$ is rare in practice. But if you do ever see one, it means the model has gone quite wrong. Predicted vs. Actual Plots We have now seen a quantitative way to measure the goodness-of-fit of our regressions: the coefficient of determination. 
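To make those boundary cases concrete, the $R^2$ formula can be checked by hand in a few lines of plain Python (a sketch, not part of the original notebook; the toy targets are made up): a perfect fit gives exactly 1, predicting the mean everywhere gives exactly 0, and anything worse than the mean goes negative.

```python
# Hand-rolled R^2 = 1 - SS_res / SS_tot, to illustrate the boundary cases.
def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))                     # perfect fit -> 1.0
print(r_squared(y, [2.5] * 4))             # mean-only predictor -> 0.0
print(r_squared(y, [4.0, 3.0, 2.0, 1.0]))  # worse than the mean -> -3.0
```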
We know that if we see negative numbers that our model is very broken and if we see numbers approaching 1, the model is decent (or overfitted). But what about the in-between? This is where qualitative observations based on expert opinion need to come into play. There are numerous ways to visualize regression predictions, but one of the most basic is the "predicted vs. actual" plot. To create this plot, we scatter-plot the actual $y$-values used to train our model against the predicted $y$-values generated from the training features. We then draw a line from the lowest prediction to the largest. End of explanation """ plt.plot(Y_PRED, Y, 'b.') plt.plot([Y_PRED.min(), Y_PRED.max()], [Y_PRED.min(), Y_PRED.max()], 'r-') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() """ Explanation: In this case, the data points scatter pretty evenly around the prediction-to-actual line. So what does a bad plot look like? Let's first negate all of our predictions, making them exactly the opposite of what they should be. This creates the exact opposite of a good actual-vs-predicted line. End of explanation """ Y_BAD = -Y_PRED plt.plot(Y_BAD, Y, 'b.') plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() """ Explanation: In this case we made a very contrived example where the predictions are exact opposites of the actual values. When you see this case, you have a model predicting roughly the opposite of what it should be predicting. Let's look at another case, where we add a large positive bias to every prediction. End of explanation """ Y_BAD = Y_PRED + 200 plt.plot(Y_BAD, Y, 'b.') plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() """ Explanation: Now we have a situation where there is an obvious bias.
All predictions are higher than the actual values, so the model needs to be adjusted to make smaller predictions. Most cases aren't quite so obvious. In the chart below, you can see that the predictions are okay for low values, but tend to underpredict for larger target values. End of explanation """ Y_BAD = Y_PRED - Y_PRED / 4 plt.plot(Y_BAD, Y, 'b.') plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() """ Explanation: Predicted vs. actual charts are a useful tool for giving you a visual indication of how your model is performing. While single measures like $R^2$ give you an aggregated metric, charts allow you to see if there is a trend or outlier where your model isn't performing well. If you identify problem areas, you can work on retraining your model. Residual Plots Another helpful visualization tool is to plot the regression residuals. As a reminder, residuals are the difference between the actual values and the predicted values. We plot residuals on the $y$-axis against the predicted values on the $x$-axis, and draw a horizontal line through $y=0$. Cases where our predictions were too low are above the line. Cases where our predictions were too high are below the line. End of explanation """ RESIDUALS = Y - Y_PRED plt.plot(Y_PRED, RESIDUALS, 'b.') plt.plot([0, Y_PRED.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() """ Explanation: In the "Predicted vs. Actual" section above, we plotted a case where there was a large positive bias in our predictions. Plotting the same biased data on a residual plot shows all of the residuals below the zero line. End of explanation """ RESIDUALS = Y - (Y_PRED + 200) plt.plot(Y_PRED, RESIDUALS, 'b.') plt.plot([0, Y_PRED.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() """ Explanation: The other example in the "Predicted vs.
Actual" section reduced our predictions by an amount proportional to the scale of the predictions. Below is the residual plot for that scenario. End of explanation """ import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.linspace(-10.0, 10.0, 100) y = np.linspace(-10.0, 10.0, 100) f = x**2 + y**2 + np.random.uniform(low=-14, high=14, size=100) plt.plot(x, f, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() """ Explanation: Resources Coefficient of Determination Interpreting Residual Plots Exercises The Interpreting Residual Plots resource gives examples of patterns in different residual plots and what those patterns might mean for your model. Each exercise below contains code that generates an image. Run the code to view the image, and then find the corresponding pattern name in Interpreting Residual Plots. Note the name of the pattern in the answer cell, and provide a one or two sentence explanation of what this could signal about your model's predictions. Exercise 1 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. End of explanation """ import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.linspace(0.0, 10.0, 100) y = np.concatenate([ np.random.uniform(low=-5, high=5, size=90), np.random.uniform(low=50, high=55, size=10) ]) plt.plot(x, y, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() """ Explanation: Student Solution Which plot pattern does this residual plot follow? And what might it mean about the model? Exercise 2 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. 
End of explanation """ import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.concatenate([ np.random.uniform(low=0, high=2, size=90), np.random.uniform(low=4, high=10, size=10) ]) y = np.random.uniform(low=-5, high=5, size=100) plt.plot(x, y, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() """ Explanation: Student Solution Which plot pattern does this residual plot follow? And what might it mean about the model? Exercise 3 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. End of explanation """
drabastomek/learningPySpark
Chapter09/LearningPySpark_Chapter09.ipynb
gpl-3.0
import blaze as bl """ Explanation: Hybrid data representation using Blaze Import Blaze. End of explanation """ import numpy as np simpleArray = np.array([ [1,2,3], [4,5,6] ]) """ Explanation: Abstract data Working with NumPy array Let's create a simple NumPy array: we first load NumPy and then create a matrix with two rows and three columns. End of explanation """ simpleData_np = bl.Data(simpleArray) """ Explanation: Now that we have an array we can abstract it with Blaze's Data structure. End of explanation """ simpleData_np.peek() """ Explanation: In order to peek inside the structure you can either use the .peek() method End of explanation """ simpleData_np.head(1) """ Explanation: or use (familiar to those of you versed in pandas' syntax) the .head(...) method End of explanation """ simpleData_np[0] """ Explanation: If you want to retrieve the first row you can use indexing. End of explanation """ simpleData_np.T[0] """ Explanation: If you want to retrieve columns you have to transpose your DataShape. End of explanation """ simpleData_np = bl.Data(simpleArray, fields=['a', 'b', 'c']) """ Explanation: Let's specify the names of our fields. End of explanation """ simpleData_np['b'] """ Explanation: You can now retrieve the data simply by calling the column by its name; let's retrieve column 'b' End of explanation """ import pandas as pd """ Explanation: Working with pandas DataFrame Start by importing pandas. End of explanation """ simpleDf = pd.DataFrame([ [1,2,3], [4,5,6] ], columns=['a','b','c']) """ Explanation: Next, we create a DataFrame. End of explanation """ simpleData_df = bl.Data(simpleDf) """ Explanation: and transform it into a DataShape. End of explanation """ simpleData_df['a'] """ Explanation: You can retrieve data in the same manner as with the DataShape created from the NumPy array.
End of explanation """ import odo traffic = bl.Data('../Data/TrafficViolations.csv') """ Explanation: Working with files DataShapes can be created directly from a CSV file. End of explanation """ for year in traffic.Stop_year.distinct().sort(): odo.odo(traffic[traffic.Stop_year == year], '../Data/Years/TrafficViolations_{0}.csv.gz'\ .format(year)) """ Explanation: To save the data into multiple archives (one for each year of traffic violations) use this. End of explanation """ print(traffic.fields) """ Explanation: If you do not know the names of columns in any dataset, you can get these from the DataShape. End of explanation """ traffic_gz = bl.Data('../Data/TrafficViolations.csv.gz') """ Explanation: Blaze can also read directly from GZipped archives. End of explanation """ traffic.head(2) traffic_gz.head(2) """ Explanation: To confirm that we get exactly the same data, let's retrieve the first two records from each structure. End of explanation """ traffic_multiple = bl.Data( '../Data/Years/TrafficViolations_*.csv.gz') traffic_multiple.head(2) """ Explanation: To read from multiple files you can use the asterisk *. End of explanation """ odo.odo(traffic[traffic.Stop_year == 2013], '../Data/Years/TrafficViolations_2013.csv.gz') """ Explanation: In order to save traffic data for year 2013 you can call odo like this End of explanation """ traffic_psql = bl.Data( 'postgresql://{0}:{1}@localhost:5432/drabast::traffic'\ .format('drabast', 'pAck7!B0ok') ) traffic_psql.head(2) """ Explanation: Working with databases Interacting with relational databases Let's read the data from the PostgreSQL database now. The URI for accessing a PostgreSQL database has the following syntax: postgresql://<user_name>:<password>@<server>:<port>/<database>::<table>.
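For instance, such a URI can be assembled from its parts with str.format (a hypothetical sketch — the user name, password, and table below are placeholders, not the credentials used in this chapter):

```python
# Assemble a Blaze/odo-style PostgreSQL URI from its parts.
# All credential values here are illustrative placeholders.
uri = 'postgresql://{user}:{password}@{host}:{port}/{database}::{table}'.format(
    user='analyst',       # placeholder user name
    password='secret',    # placeholder password
    host='localhost',
    port=5432,
    database='drabast',
    table='traffic',
)
print(uri)  # postgresql://analyst:secret@localhost:5432/drabast::traffic
```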
End of explanation """ traffic_2016 = traffic_psql[traffic_psql['Year'] == 2016] # odo.drop('sqlite:///traffic_local.sqlite::traffic2016') # odo.drop('postgresql://drabast:pAck7!B0ok@localhost:5432/drabast::traffic_2016') odo.odo(traffic_2016, 'sqlite:///traffic_local.sqlite::traffic2016') odo.odo(traffic_2016, 'postgresql://drabast:pAck7!B0ok@localhost:5432/drabast::traffic_2016') """ Explanation: We will output traffic violations that involved cars manufactured in 2016 to both PostgreSQL and SQLite databases. We will use odo to manage the transfers. End of explanation """ traffic_sqlt = bl.Data('sqlite:///traffic_local.sqlite::traffic2016') traffic_sqlt.head(2) """ Explanation: Reading data from the SQLite database should be trivial by now. End of explanation """ traffic_mongo = bl.Data('mongodb://localhost:27017/packt::traffic') traffic_mongo.head(2) """ Explanation: Interacting with MongoDB database Reading from MongoDB is very similar to reading from PostgreSQL or SQLite databases. End of explanation """ traffic.Year.head(2) """ Explanation: Data operations Accessing columns There are two ways of accessing columns: you can get a single column at a time by accessing them as if they were a DataShape attribute End of explanation """ (traffic[['Location', 'Year', 'Accident', 'Fatal', 'Alcohol']] .head(2)) """ Explanation: or indexing; this allows you to select more than one column at a time End of explanation """ schema_example = bl.symbol('schema_example', '{id: int, name: string}') """ Explanation: Symbolic transformations If we could not reflect the schema from an already existing object, we would have to specify the schema manually.
End of explanation """ traffic_s = bl.symbol('traffic', traffic.dshape) traffic_2013 = traffic_s[traffic_s['Stop_year'] == 2013][ ['Stop_year', 'Arrest_Type','Color', 'Charge'] ] """ Explanation: Since we already have an existing dataset traffic, we can reuse the schema by calling traffic.dshape and specify our transformations directly on it. End of explanation """ traffic_pd = pd.read_csv('../Data/TrafficViolations.csv') """ Explanation: To present how this works, let's read our dataset into pandas' DataFrame. End of explanation """ bl.compute(traffic_2013, traffic_pd).head(2) """ Explanation: You can now pass the dataset directly to the traffic_2013 object and perform the computation using the .compute(...) method of Blaze. End of explanation """ bl.compute(traffic_2013, traffic_pd.values)[0:2] """ Explanation: You can also pass a list of lists or a list of NumPy arrays. End of explanation """ traffic['Stop_year'].distinct().sort() """ Explanation: Operations on columns You can check that by getting all the distinct values for the Stop_year column. End of explanation """ traffic['Stop_year'].head(2) - 2000 """ Explanation: We can subtract 2000 from the Stop_year column as we do not need that level of detail. End of explanation """ bl.log(traffic['Stop_year']).head(2) """ Explanation: If you wanted to log-transform the Stop_year you need to End of explanation """ traffic['Stop_year'].max() """ Explanation: Reducing data Some reduction methods are also available, like .mean() (that calculates the average), .std() (that calculates standard deviation) or .max() (that returns the maximum from the list).
End of explanation """ traffic = bl.transform(traffic, Age_of_car = traffic.Stop_year - traffic.Year) traffic.head(2)[['Stop_year', 'Year', 'Age_of_car']] """ Explanation: You can calculate the age of the car (in years) at the time when the violation occurred End of explanation """ bl.by(traffic['Fatal'], Fatal_AvgAge=traffic.Age_of_car.mean(), Fatal_Count=traffic.Age_of_car.count() ) """ Explanation: and to calculate the average age of the car involved in a fatal traffic violation and count the number of occurrences you can use the by operation. End of explanation """ violation = traffic[ ['Stop_month','Stop_day','Stop_year', 'Stop_hr','Stop_min','Stop_sec','Violation_Type']] belts = traffic[ ['Stop_month','Stop_day','Stop_year', 'Stop_hr','Stop_min','Stop_sec','Belts']] """ Explanation: Joins We first select all the traffic violations by violation type (the violation object) and the traffic violations involving belts (the belts object). End of explanation """ violation_belts = bl.join(violation, belts, ['Stop_month','Stop_day','Stop_year', 'Stop_hr','Stop_min','Stop_sec']) """ Explanation: Now, we join the two objects on the six date and time columns. End of explanation """ bl.by(violation_belts[['Violation_Type', 'Belts']], Violation_count=violation_belts.Belts.count() ).sort('Violation_count', ascending=False) """ Explanation: Once we have the full dataset in place, let's check how many traffic violations involved belts and what sort of punishment was issued to the driver. End of explanation """
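As a side note, a by expression like the one above is conceptually just a group-by aggregation. In plain Python, the same count-per-group shape of computation might be sketched like this, over made-up (violation type, belts) rows rather than the real dataset:

```python
# A group-by count, the same shape of computation bl.by performs,
# sketched with collections.Counter over toy (violation_type, belts) rows.
from collections import Counter

rows = [
    ('Citation', False), ('Citation', True), ('Warning', False),
    ('Citation', False), ('Warning', True), ('Warning', False),
]
counts = Counter(rows)
# most_common() sorts groups by descending count, like the sorted bl.by result.
for key, count in counts.most_common():
    print(key, count)
```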
aanishn/aanishn.github.io
artifacts/Kaggle_Dogs_Vs_Cats_Using_LeNet_on_Google_Colab_TPU.ipynb
mit
!pip install kaggle api_token = {"username":"xxxxx","key":"xxxxxxxxxxxxxxxxxxxxxxxx"} import json import zipfile import os os.mkdir('/root/.kaggle') with open('/root/.kaggle/kaggle.json', 'w') as file: json.dump(api_token, file) !chmod 600 /root/.kaggle/kaggle.json # !kaggle config path -p /root !kaggle competitions download -c dogs-vs-cats zip_ref = zipfile.ZipFile('/content/train.zip', 'r') zip_ref.extractall() zip_ref.close() """ Explanation: Kaggle Dogs Vs. Cats Using LeNet on Google Colab TPU Required setup Update api_token with your Kaggle API key for downloading the dataset - Login to Kaggle - My Profile > Edit Profile > Create new API Token - Update **api_token** dict below with the values Change Notebook runtime to TPU - In the Colab notebook menu, Runtime > Change runtime type - Select TPU in the list Install the kaggle package, download and extract the zip file End of explanation """ !mkdir train/cat train/dog !mv train/*cat*.jpg train/cat !mv train/*dog*.jpg train/dog """ Explanation: Re-arrange classes to 2 separate directories End of explanation """ BATCH_SIZE = 64 IMG_DIM = (256, 256, 3) NUM_EPOCHS = 1 """ Explanation: Training configs End of explanation """ import tensorflow as tf from tensorflow import keras print(keras.__version__) print(tf.__version__) datagen = keras.preprocessing.image.ImageDataGenerator( rescale=1./255, validation_split=0.2) traingen = datagen.flow_from_directory( 'train', batch_size = BATCH_SIZE, target_size = IMG_DIM[:-1], class_mode = 'categorical', subset='training') valgen = datagen.flow_from_directory( 'train', batch_size = BATCH_SIZE, target_size = IMG_DIM[:-1], class_mode = 'categorical', subset='validation') """ Explanation: Set up generators to provide train and validation batches End of explanation """ input = keras.layers.Input(IMG_DIM, name="input") conv1 = keras.layers.Conv2D(20, kernel_size=(5, 5), padding='same')(input) pool1 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv1) conv2 =
keras.layers.Conv2D(50, kernel_size=(5,5), padding='same')(pool1) pool2 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv2) flatten1 = keras.layers.Flatten()(pool2) fc1 = keras.layers.Dense(500, activation='relu')(flatten1) fc2 = keras.layers.Dense(2, activation='softmax')(fc1) model = keras.models.Model(inputs=input, outputs=fc2) model.compile( loss='categorical_crossentropy', optimizer=keras.optimizers.SGD(lr=0.01), metrics=['accuracy']) print(model.summary()) """ Explanation: Define LeNet model architecture End of explanation """ import os try: device_name = os.environ['COLAB_TPU_ADDR'] TPU_ADDRESS = 'grpc://' + device_name print('Found TPU at: {}'.format(TPU_ADDRESS)) except KeyError: print('TPU not found') """ Explanation: Check for TPU availability End of explanation """ tpu_model = tf.contrib.tpu.keras_to_tpu_model( model, strategy=tf.contrib.tpu.TPUDistributionStrategy( tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS))) """ Explanation: Convert keras model to TPU model End of explanation """ tpu_model.fit_generator( traingen, steps_per_epoch=traingen.n//traingen.batch_size, epochs=NUM_EPOCHS, validation_data=valgen, validation_steps=valgen.n//valgen.batch_size) """ Explanation: Run training End of explanation """ tpu_model.save_weights('./lenet-catdog.h5', overwrite=True) """ Explanation: Save the model weights End of explanation """ from google.colab import files files.download("lenet-catdog.h5") """ Explanation: Download model weights locally End of explanation """
herruzojm/udacity-deep-learning
batch-norm/Batch_Normalization_Lesson.ipynb
mit
# Import necessary packages import tensorflow as tf import tqdm import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Import MNIST data so we have something for our experiments from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) """ Explanation: Batch Normalization – Lesson What is it? What are its benefits? How do we add it to a network? Let's see it work! What are you hiding? What is Batch Normalization?<a id='theory'></a> Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch. Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network. For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3. When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning, a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models. Benefits of Batch Normalization<a id="benefits"></a> Batch normalization optimizes network training. It has been shown to have several benefits: 1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional parameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. 2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. 3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights. 4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them.
Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again. 5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great. 6. Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. 7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization. Batch Normalization in TensorFlow<a id="implementation_1"></a> This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization. End of explanation """
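Before looking at the TensorFlow version, it helps to see how small the core calculation is. For a mini-batch $x$, batch normalization computes $\hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$ and outputs $\gamma\hat{x} + \beta$, where $\mu_B$ and $\sigma_B^2$ are the batch mean and variance and $\gamma$, $\beta$ are learned. Here is a plain-Python sketch of that forward pass, with $\gamma=1$ and $\beta=0$ for simplicity (an illustration, not the TensorFlow implementation used below):

```python
import math

def batch_norm_forward(batch, gamma=1.0, beta=0.0, epsilon=1e-5):
    """Normalize a 1-D mini-batch to zero mean / unit variance, then scale and shift."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + epsilon) + beta for x in batch]

out = batch_norm_forward([1.0, 2.0, 3.0, 4.0])
print(out)  # roughly [-1.34, -0.45, 0.45, 1.34]: zero mean, unit variance
```

The epsilon term only guards against division by zero for near-constant batches; tf.layers.batch_normalization handles the same calculation (plus the training/inference distinction) for us.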
:param initial_weights: list of NumPy arrays or Tensors Initial values for the weights for every layer in the network. We pass these in so we can create multiple networks with the same starting weights to eliminate training differences caused by random initialization differences. The number of items in the list defines the number of layers in the network, and the shapes of the items in the list define the number of nodes in each layer. e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would create a network with 784 inputs going into a hidden layer with 256 nodes, followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activate function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param use_batch_norm: bool Pass True to create a network that uses batch normalization; False otherwise Note: this network will not use batch normalization on layers that do not have an activation function. """ # Keep track of whether or not this network uses batch normalization. self.use_batch_norm = use_batch_norm self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm" # Batch normalization needs to do different calculations during training and inference, # so we use this placeholder to tell the graph which behavior to use. self.is_training = tf.placeholder(tf.bool, name="is_training") # This list is just for keeping track of data we want to plot later. # It doesn't actually have anything to do with neural nets or batch normalization. self.training_accuracies = [] # Create the network graph, but it will not actually have any real values until after you # call train or test self.build_network(initial_weights, activation_fn) def build_network(self, initial_weights, activation_fn): """ Build the graph. 
        The graph still needs to be trained via the `train` method.

        :param initial_weights: list of NumPy arrays or Tensors
            See __init__ for description.
        :param activation_fn: Callable
            See __init__ for description.
        """
        self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
        layer_in = self.input_layer
        for weights in initial_weights[:-1]:
            layer_in = self.fully_connected(layer_in, weights, activation_fn)
        self.output_layer = self.fully_connected(layer_in, initial_weights[-1])

    def fully_connected(self, layer_in, initial_weights, activation_fn=None):
        """
        Creates a standard, fully connected layer. Its number of inputs and outputs will be
        defined by the shape of `initial_weights`, and its starting weight values will be
        taken directly from that same parameter. If `self.use_batch_norm` is True, this
        layer will include batch normalization, otherwise it will not.

        :param layer_in: Tensor
            The Tensor that feeds into this layer. It's either the input to the network or the output
            of a previous layer.
        :param initial_weights: NumPy array or Tensor
            Initial values for this layer's weights. The shape defines the number of nodes in the layer.
            e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
            outputs.
        :param activation_fn: Callable or None (default None)
            The non-linearity used for the output of the layer. If None, this layer will not include
            batch normalization, regardless of the value of `self.use_batch_norm`.
            e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
        """
        # Since this class supports both options, only use batch normalization when
        # requested. However, do not use it on the final layer, which we identify
        # by its lack of an activation function.
        if self.use_batch_norm and activation_fn:
            # Batch normalization uses weights as usual, but does NOT add a bias term. This is because
            # its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.) weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) # Apply batch normalization to the linear combination of the inputs and weights batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) # Now apply the activation function, *after* the normalization. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None): """ Trains the model on the MNIST training dataset. :param session: Session Used to run training graph operations. :param learning_rate: float Learning rate used during gradient descent. :param training_batches: int Number of batches to train. :param batches_per_sample: int How many batches to train before sampling the validation accuracy. :param save_model_as: string or None (default None) Name to use if you want to save the trained model. 
""" # This placeholder will store the target labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define loss and optimizer cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer)) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) if self.use_batch_norm: # If we don't include the update ops as dependencies on the train step, the # tf.layers.batch_normalization layers won't update their population statistics, # which will cause the model to fail at inference time with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) # Train for the appropriate number of batches. (tqdm is only for a nice timing display) for i in tqdm.tqdm(range(training_batches)): # We use batches of 60 just because the original paper did. You can use any size batch you like. batch_xs, batch_ys = mnist.train.next_batch(60) session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) # Periodically test accuracy against the 5k validation images and store it for plotting later. 
            if i % batches_per_sample == 0:
                test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
                                                                 labels: mnist.validation.labels,
                                                                 self.is_training: False})
                self.training_accuracies.append(test_accuracy)

        # After training, report accuracy against test data
        test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
                                                         labels: mnist.validation.labels,
                                                         self.is_training: False})
        print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))

        # If you want to use this model later for inference instead of having to retrain it,
        # just construct it with the same parameters and then pass this file to the 'test' function
        if save_model_as:
            tf.train.Saver().save(session, save_model_as)

    def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
        """
        Tests a trained model on the MNIST testing dataset.

        :param session: Session
            Used to run the testing graph operations.
        :param test_training_accuracy: bool (default False)
            If True, perform inference with batch normalization using batch mean and variance;
            if False, perform inference with batch normalization using estimated population mean and variance.
            Note: in real life, *always* perform inference using the population mean and variance.
                  This parameter exists just to support demonstrating what happens if you don't.
        :param include_individual_predictions: bool (default False)
            This function always performs an accuracy test against the entire test set. But if this parameter
            is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
            and accuracy.
        :param restore_from: string or None (default None)
            Name of a saved model if you want to test with previously saved weights.
""" # This placeholder will store the true labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # If provided, restore from a previously saved model if restore_from: tf.train.Saver().restore(session, restore_from) # Test against all of the MNIST test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images, labels: mnist.test.labels, self.is_training: test_training_accuracy}) print('-'*75) print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy)) # If requested, perform tests predicting individual values rather than batches if include_individual_predictions: predictions = [] correct = 0 # Do 200 predictions, 1 at a time for i in range(200): # This is a normal prediction using an individual test case. However, notice # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`. # Remember that will tell it whether it should use the batch mean & variance or # the population estimates that were calucated while training the model. pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy], feed_dict={self.input_layer: [mnist.test.images[i]], labels: [mnist.test.labels[i]], self.is_training: test_training_accuracy}) correct += corr predictions.append(pred[0]) print("200 Predictions:", predictions) print("Accuracy on 200 samples:", correct/200) """ Explanation: Neural network classes for testing The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions. 
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
"""

def plot_training_accuracies(*args, **kwargs):
    """
    Displays a plot of the accuracies calculated during training to demonstrate
    how many iterations it took for the model(s) to converge.

    :param args: One or more NeuralNet objects
        You can supply any number of NeuralNet objects as unnamed arguments and this will display
        their training accuracies. Be sure to call `train` on the NeuralNets before calling this function.
    :param kwargs:
        You can supply any named parameters here, but `batches_per_sample` is the only one we look for.
        It should match the `batches_per_sample` value you passed to the `train` function.
    """
    fig, ax = plt.subplots()

    batches_per_sample = kwargs['batches_per_sample']
    for nn in args:
        ax.plot(range(0, len(nn.training_accuracies)*batches_per_sample, batches_per_sample),
                nn.training_accuracies, label=nn.name)
    ax.set_xlabel('Training steps')
    ax.set_ylabel('Accuracy')
    ax.set_title('Validation Accuracy During Training')
    ax.legend(loc=4)
    ax.set_ylim([0, 1])
    plt.yticks(np.arange(0, 1.1, 0.1))
    plt.grid(True)
    plt.show()

def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
    """
    Creates two networks, one with and one without batch normalization, then trains them
    with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
    :param use_bad_weights: bool
        If True, initialize the weights of both networks to wildly inappropriate weights;
        if False, use reasonable starting weights.
    :param learning_rate: float
        Learning rate used during gradient descent.
    :param activation_fn: Callable
        The function used for the output of each hidden layer. The network will use the same
        activation function on every hidden layer and no activation function on the output layer.
        e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
    :param training_batches: (default 50000)
        Number of batches to train.
    :param batches_per_sample: (default 500)
        How many batches to train before sampling the validation accuracy.
    """
    # Use identical starting weights for each network to eliminate differences in
    # weight initialization as a cause for differences seen in training performance
    #
    # Note: The networks will use these weights to define the number of and shapes of
    #       its layers. The original batch normalization paper used 3 hidden layers
    #       with 100 nodes in each, followed by a 10 node output layer. These values
    #       build such a network, but feel free to experiment with different choices.
    #       However, the input size should always be 784 and the final output should be 10.
    if use_bad_weights:
        # These weights should be horrible because they have such a large standard deviation
        weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
                   np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
                   np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
                   np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
                  ]
    else:
        # These weights should be good because they have such a small standard deviation
        weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
                   np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
                   np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
                   np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
                  ]

    # Just to make sure TensorFlow's default graph is empty before we start another
    # test, because we don't bother using different graphs or scoping and naming
    # elements carefully in this sample code.
    tf.reset_default_graph()

    # build two versions of same network, 1 without and 1 with batch normalization
    nn = NeuralNet(weights, activation_fn, False)
    bn = NeuralNet(weights, activation_fn, True)

    # train and test the two models
    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        nn.train(sess, learning_rate, training_batches, batches_per_sample)
        bn.train(sess, learning_rate, training_batches, batches_per_sample)

        nn.test(sess)
        bn.test(sess)

    # Display a graph of how validation accuracies changed during training
    # so we can compare how the models trained and when they converged
    plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
"""
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2.
We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.) 3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later. 4. We add the normalization before calling the activation function. In addition to that code, the training step is wrapped in the following with statement: python with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference. Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization. Batch Normalization Demos<a id='demos'></a> This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights. Code to support testing The following two functions support the demos we run in the notebook. The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. 
If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training.
The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu)
"""
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
"""
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.sigmoid)
"""
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy.
Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid)
"""
Explanation: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
"""
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
"""
train_and_test(False, 2, tf.nn.relu)
"""
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700.
When training larger networks, these sorts of differences become more pronounced. The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights. End of explanation """ train_and_test(False, 2, tf.nn.sigmoid) """ Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all. The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights. End of explanation """ train_and_test(False, 2, tf.nn.sigmoid, 2000, 50) """ Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization. However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster. End of explanation """ train_and_test(True, 0.01, tf.nn.relu) """ Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights. End of explanation """ train_and_test(True, 0.01, tf.nn.sigmoid) """ Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. 
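To see why weights with a standard deviation of 5 are so damaging, consider what they do to a single layer's pre-activations. The following sketch (our addition, in plain NumPy rather than the notebook's TensorFlow code) feeds one batch of 60 inputs through such a layer: without normalization almost every sigmoid unit saturates, while the batch-normalized values land back in the sigmoid's sensitive range.

```python
import numpy as np

rng = np.random.RandomState(42)

# One mini-batch of 60 inputs with 784 features, and "bad" first-layer
# weights drawn with a standard deviation of 5, as in these examples.
x = rng.normal(size=(60, 784))
bad_weights = rng.normal(scale=5.0, size=(784, 100))
linear_output = x @ bad_weights

# Numerically stable sigmoid (identical to 1 / (1 + exp(-z))).
def sigmoid(z):
    return 0.5 * (1.0 + np.tanh(z / 2.0))

# Fraction of units whose sigmoid output is pinned near 0 or 1,
# where the gradient is essentially zero.
def fraction_saturated(z):
    s = sigmoid(z)
    return np.mean((s < 0.01) | (s > 0.99))

# The raw pre-activations have a standard deviation in the hundreds,
# so nearly every unit is saturated and learning stalls.
print("std of raw pre-activations:", linear_output.std())
print("saturated without batch norm:", fraction_saturated(linear_output))

# Batch normalization rescales each feature to zero mean and unit variance.
mu = linear_output.mean(axis=0)
var = linear_output.var(axis=0)
normalized = (linear_output - mu) / np.sqrt(var + 1e-3)
print("saturated with batch norm:", fraction_saturated(normalized))
```

Gradient descent can then still learn useful values of gamma and beta from this starting point, which is why the batch-normalized networks above recover from starting weights that the plain networks never overcome.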
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
"""
train_and_test(True, 1, tf.nn.sigmoid)
"""
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
"""
train_and_test(True, 2, tf.nn.sigmoid)
"""
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy.
Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck. The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights. End of explanation """ train_and_test(True, 1, tf.nn.relu) """ Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%. Full Disclosure: Batch Normalization Doesn't Fix Everything Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run. This section includes two examples that show runs when batch normalization did not help at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights. End of explanation """ train_and_test(True, 2, tf.nn.relu) """ Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.) The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights. End of explanation """ def fully_connected(self, layer_in, initial_weights, activation_fn=None): """ Creates a standard, fully connected layer. 
    Its number of inputs and outputs will be defined by the shape of `initial_weights`,
    and its starting weight values will be taken directly from that same parameter.
    If `self.use_batch_norm` is True, this layer will include batch normalization,
    otherwise it will not.

    :param layer_in: Tensor
        The Tensor that feeds into this layer. It's either the input to the network or the output
        of a previous layer.
    :param initial_weights: NumPy array or Tensor
        Initial values for this layer's weights. The shape defines the number of nodes in the layer.
        e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
        outputs.
    :param activation_fn: Callable or None (default None)
        The non-linearity used for the output of the layer. If None, this layer will not include
        batch normalization, regardless of the value of `self.use_batch_norm`.
        e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
    """
    if self.use_batch_norm and activation_fn:
        # Batch normalization uses weights as usual, but does NOT add a bias term. This is because
        # its calculations include gamma and beta variables that make the bias term unnecessary.
        weights = tf.Variable(initial_weights)
        linear_output = tf.matmul(layer_in, weights)

        num_out_nodes = initial_weights.shape[-1]

        # Batch normalization adds additional trainable variables:
        # gamma (for scaling) and beta (for shifting).
        gamma = tf.Variable(tf.ones([num_out_nodes]))
        beta = tf.Variable(tf.zeros([num_out_nodes]))

        # These variables will store the mean and variance for this layer over the entire training set,
        # which we assume represents the general population distribution.
        # By setting `trainable=False`, we tell TensorFlow not to modify these variables during
        # back propagation. Instead, we will assign values to these variables ourselves.
        pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
        pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)

        # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
        # This is the default value TensorFlow uses.
        epsilon = 1e-3

        def batch_norm_training():
            # Calculate the mean and variance for the data coming out of this layer's linear-combination step.
            # The [0] defines an array of axes to calculate over.
            batch_mean, batch_variance = tf.nn.moments(linear_output, [0])

            # Calculate a moving average of the training data's mean and variance while training.
            # These will be used during inference.
            # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
            # "momentum" to accomplish this and defaults it to 0.99
            decay = 0.99
            train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
            train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))

            # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
            # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
            # This is necessary because those two operations are not actually in the graph
            # connecting the linear_output and batch_normalization layers,
            # so TensorFlow would otherwise just skip them.
            with tf.control_dependencies([train_mean, train_variance]):
                return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)

        def batch_norm_inference():
            # During inference, use our estimated population mean and variance to normalize the layer
            return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)

        # Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
        # the operation returned from `batch_norm_training`; otherwise it will execute the graph
        # operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference) # Pass the batch-normalized layer output through the activation function. # The literature states there may be cases where you want to perform the batch normalization *after* # the activation function, but it is difficult to find any uses of that in practice. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) """ Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures. Batch Normalization: A Detailed Look<a id='implementation_2'></a> The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. 
However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer. We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ $$ \mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i $$ We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation. $$ \sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2 $$ Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.) $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ Above, we said "(almost) standard deviation". 
That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice. 
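To make these equations concrete, here is a small NumPy sketch, kept separate from the notebook's TensorFlow code (the batch shape and input values are invented for illustration), that applies the four equations exactly as written:

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(64, 100) * 3 + 2  # pretend linear outputs: a batch of 64 values per node
epsilon = 1e-3                        # the same small constant the notebook uses

mu_B = x.mean(axis=0)                             # batch mean, one value per node
sigma2_B = ((x - mu_B) ** 2).mean(axis=0)         # mean squared deviation (the variance)
x_hat = (x - mu_B) / np.sqrt(sigma2_B + epsilon)  # normalize by the "almost" standard deviation

gamma = np.ones(100)   # scale, initialized to 1 and learned during training
beta = np.zeros(100)   # shift, initialized to 0 and learned during training
y = gamma * x_hat + beta

# With gamma=1 and beta=0, each node's batch of outputs now has ~zero mean and ~unit variance.
print(round(float(abs(y.mean())), 6), round(float(y.std(axis=0).mean()), 3))
```

These are the same quantities that tf.nn.moments and tf.nn.batch_normalization compute inside the TensorFlow graph in the code above.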
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) The next section shows you how to implement the math directly. Batch normalization without the tf.layers package Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package. However, if you would like to implement batch normalization at a lower level, the following code shows you how. It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package. 1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before. End of explanation """ def batch_norm_test(test_training_accuracy): """ :param test_training_accuracy: bool If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. """ weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] tf.reset_default_graph() # Train the model bn = NeuralNet(weights, tf.nn.relu, True) # First train the network with tf.Session() as sess: tf.global_variables_initializer().run() bn.train(sess, 0.01, 2000, 2000) bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True) """ Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. 
Here are some important points: It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function. It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights. Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly. TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization. tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation. We use the tf.nn.moments function to calculate the batch mean and variance. 2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. 
However, it uses these lines to ensure population statistics are updated when using batch normalization: python if self.use_batch_norm: with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line: python train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) 3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training: python return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon) return gamma * normalized_linear_output + beta And replace this line in batch_norm_inference: python return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon) return gamma * normalized_linear_output + beta As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. 
The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$: $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ And the second line is a direct translation of the following equation: $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. Why the difference between training and inference? In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) If you looked at the low-level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that? First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, one input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training). 
End of explanation """ batch_norm_test(True) """ Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training. End of explanation """ batch_norm_test(False) """ Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions. To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. So in the following example, we pass False for test_training_accuracy, which tells the network that we want to perform inference with the population statistics it calculated during training. End of explanation """ 
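The single-input effect described above is easy to reproduce with plain NumPy (no TensorFlow required; the input values below are arbitrary): when the "batch" holds one input, every value equals the batch mean, so training-mode normalization maps any input to zeros.

```python
import numpy as np

epsilon = 1e-3

def normalize_with_batch_stats(batch):
    # Training-mode batch normalization: use the batch's own mean and variance
    mean = batch.mean(axis=0)
    variance = batch.var(axis=0)
    return (batch - mean) / np.sqrt(variance + epsilon)

one_input = np.array([[3.0, -7.0, 42.0]])       # a "batch" containing a single input
print(normalize_with_batch_stats(one_input))    # [[0. 0. 0.]]

another_input = np.array([[1000.0, 0.5, -9.0]])
print(normalize_with_batch_stats(another_input))  # [[0. 0. 0.]] - same output, whatever the input
```

This is exactly why inference has to fall back on the stored population statistics instead of the batch's own statistics.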
tensorflow/tfx
docs/tutorials/tfx/template.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import sys # Use the latest version of pip. !pip install --upgrade pip # Install tfx and kfp Python packages. !pip install --upgrade "tfx[kfp]<2" """ Explanation: Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud Vertex AI Workbench. Launch this notebook on Vertex AI Workbench. <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/template"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/template.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/template.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/template.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td> </table></div> Introduction This document will provide instructions to create a TensorFlow Extended (TFX) pipeline using templates which are 
provided with the TFX Python package. Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using ! are provided. You will build a pipeline using the Taxi Trips dataset released by the City of Chicago. We strongly encourage you to try building your own pipeline with your own dataset, using this pipeline as a baseline. Step 1. Set up your environment. AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline. NOTE: To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks. Install the tfx Python package with the kfp extra requirement. End of explanation """ !python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))" """ Explanation: Let's check the version of TFX. End of explanation """ # Read GCP project id from env. shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null GOOGLE_CLOUD_PROJECT=shell_output[0] %env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT} print("GCP project ID:" + GOOGLE_CLOUD_PROJECT) """ Explanation: In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines. Let's set some environment variables to use Kubeflow Pipelines. First, get your GCP project ID. End of explanation """ # This refers to the KFP cluster endpoint ENDPOINT='' # Enter your ENDPOINT here. if not ENDPOINT: from absl import logging logging.error('Set your ENDPOINT in this cell.') """ Explanation: We also need to access your KFP cluster. You can access it in your Google Cloud Console under the "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. 
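If you would rather not copy the hostname out of the dashboard URL by hand, a standard-library snippet like the following can extract it (the URL here is just the illustrative example used in this tutorial):

```python
from urllib.parse import urlparse

url = 'https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start'
ENDPOINT = urlparse(url).netloc  # keep only the hostname part of the URL
print(ENDPOINT)
```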
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint. ENDPOINT should contain only the hostname part of the URL. For example, if the URL of the KFP dashboard is https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start, the ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com. NOTE: You MUST set your ENDPOINT value below. End of explanation """ # Docker image name for the pipeline image. CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline' """ Explanation: Set the image name as tfx-pipeline under the current GCP project. End of explanation """ PIPELINE_NAME="my_pipeline" import os PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME) """ Explanation: And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory. In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template. You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be put. End of explanation """ !tfx template copy \ --pipeline-name={PIPELINE_NAME} \ --destination-path={PROJECT_DIR} \ --model=taxi """ Explanation: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The tfx template copy CLI command copies predefined template files into your project directory. End of explanation """ %cd {PROJECT_DIR} """ Explanation: Change the working directory context in this notebook to the project directory. 
End of explanation """ !{sys.executable} -m models.features_test !{sys.executable} -m models.keras.model_test """ Explanation: NOTE: Don't forget to change directory in File Browser on the left by clicking into the project directory once it is created. Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial. Here is a brief introduction to each of the Python files. - pipeline - This directory contains the definition of the pipeline - configs.py — defines common constants for pipeline runners - pipeline.py — defines TFX components and a pipeline - models - This directory contains ML model definitions. - features.py, features_test.py — defines features for the model - preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf.Transform - estimator - This directory contains an Estimator-based model. - constants.py — defines constants of the model - model.py, model_test.py — defines a DNN model using TF Estimator - keras - This directory contains a Keras-based model. - constants.py — defines constants of the model - model.py, model_test.py — defines a DNN model using Keras - local_runner.py, kubeflow_runner.py — define runners for each orchestration engine You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines. You can run unit tests by supplying the module name of test files with the -m flag. You can usually get a module name by deleting the .py extension and replacing / with .. For example: End of explanation """ !gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv """ Explanation: Step 4. 
Run your first TFX pipeline Components in the TFX pipeline will generate outputs for each run as ML Metadata Artifacts, and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be <your-project-id>-kubeflowpipelines-default. Let's upload our sample data to the GCS bucket so that we can use it in our pipeline later. End of explanation """ !tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \ --build-image """ Explanation: Let's create a TFX pipeline using the tfx pipeline create command. Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline, and skaffold will build the image for us. Because skaffold pulls base images from Docker Hub, it will take 5~10 minutes to build the image the first time, but subsequent builds will take much less time. End of explanation """ !tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT} """ Explanation: While creating a pipeline, a Dockerfile will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with the other source files. NOTE: kubeflow will be automatically selected as the orchestration engine if airflow is not installed and --engine is not specified. Now start an execution run with the newly created pipeline using the tfx run create command. End of explanation """ 
Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline. For example, you can find your runs under the Experiments menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under the Artifacts menu. Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission-related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured when you create a KFP cluster in GCP, or see the Troubleshooting document in GCP. Step 5. Add components for data validation. In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation. Double-click to change directory to pipeline and double-click again to open pipeline.py. Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it. You now need to update the existing pipeline with the modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline. 
End of explanation """ !tfx pipeline update \ --pipeline-path=kubeflow_runner.py \ --endpoint={ENDPOINT} !tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT} """ Explanation: Check pipeline outputs Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training. In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher. Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, Resolver, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):) As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create. End of explanation """ !tfx pipeline update \ --pipeline-path=kubeflow_runner.py \ --endpoint={ENDPOINT} !tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT} """ Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! NOTE: If we changed anything in the model code, we have to rebuild the container image, too. We can trigger a rebuild using the --build-image flag in the pipeline update command. NOTE: You might have noticed that every time we create a pipeline run, every component runs again and again even though the inputs and parameters were not changed. This is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying enable_cache=True for the Pipeline object in pipeline.py. Step 7. 
(Optional) Try BigQueryExampleGen BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline. Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function. We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline. Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct value for your GCP project. Note: You MUST set your GCP region in the configs.py file before proceeding. Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is my_pipeline if you didn't change it. Double-click to open kubeflow_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function. Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6. End of explanation """ !tfx pipeline update \ --pipeline-path=kubeflow_runner.py \ --endpoint={ENDPOINT} !tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT} """ Explanation: Step 8. (Optional) Try Dataflow with KFP Several TFX components use Apache Beam to implement data-parallel pipelines, which means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam. Double-click pipeline to change directory, and double-click to open configs.py. 
Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS. Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is my_pipeline if you didn't change it. Double-click to open kubeflow_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out the current beam_pipeline_args that you added in Step 7.) Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6. End of explanation """ !tfx pipeline update \ --pipeline-path=kubeflow_runner.py \ --endpoint={ENDPOINT} !tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT} """ Explanation: You can find your Dataflow jobs in Dataflow in Cloud Console. Step 9. (Optional) Try Cloud AI Platform Training and Prediction with KFP TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher components to use Cloud AI Platform services. Before editing files, you might first have to enable the AI Platform Training & Prediction API. Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom-built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above. Change directory one level up, and double-click to open kubeflow_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args. 
Update the pipeline and create an execution run as we did in steps 5 and 6. End of explanation """
sthuggins/phys202-2015-work
assignments/assignment04/MatplotlibEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 1 Imports End of explanation """ import os assert os.path.isfile('yearssn.dat') """ Explanation: Line plot of sunspot data Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook. End of explanation """ data = np.loadtxt("yearssn.dat") # np.loadtxt already returns a NumPy array years = data[:,0] # first column holds the years ssc = data[:,1] # second column holds the sunspot counts assert len(years)==315 assert years.dtype==np.dtype(float) assert len(ssc)==315 assert ssc.dtype==np.dtype(float) """ Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts. End of explanation """ plt.figure(figsize=(10,8)) # set the figure size before plotting plt.plot(years, ssc) plt.xlim(1700,2015) #plot is scaled from 1700 to 2015 so that the data fill the graph. assert True # leave for grading """ Explanation: Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """ fig, axes = plt.subplots(2, 2, figsize=(12,8)) for ax, start in zip(axes.flat, [1700, 1800, 1900, 2000]): # one subplot per century mask = (years >= start) & (years < start + 100) ax.plot(years[mask], ssc[mask]) ax.set_title("%d-%d" % (start, start + 99)) plt.tight_layout() assert True # leave for grading """ Explanation: Describe the choices you have made in building this visualization and how they make it effective. YOUR ANSWER HERE Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above: Customize your plot to follow Tufte's principles of visualizations. 
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """
IST256/learn-python
content/lessons/02-Variables/LAB-Variables.ipynb
mit
a = "4"
type(a) # should be str
a = 4
type(a) # should be int
""" Explanation: Class Coding Lab: Variables And Types The goals of this lab are to help you to understand: Python data types Getting input as different types Formatting output as different types Basic arithmetic operators How to create a program from an idea. Variable Types Every Python variable has a type. The type determines how the data is stored in the computer's memory: End of explanation """
a = 4
b = 5
a + b # the plus + in this case means add, so 9
a = "4"
b = "5"
a + b # the plus + in this case means concatenation, so '45'
""" Explanation: Types Matter Python's built-in functions and operators work differently depending on the type of the variable: End of explanation """
x = "45"     # x is a str
y = int(x)   # y is now an int
z = float(x) # z is a float
print(x, y, z)
""" Explanation: Switching Types There are built-in Python functions for switching types. For example: End of explanation """
age = input("Enter your age: ")
type(age)
""" Explanation: Inputs are type str When you use the input() function the result is of type str: End of explanation """
age = input("Enter your age: ")
age = int(age)
type(age)
""" Explanation: We can use a built-in Python function to convert the type from str to our desired type: End of explanation """
age = int(input("Enter your age: "))
type(age)
""" Explanation: We typically combine the first two lines into one expression like this: End of explanation """
# TODO: Debug this code (it is intentionally broken for you to fix)
age = input("Enter your age: ")
nextage = age + 1
print("Today you are age next year you will be {nextage}")
""" Explanation: 1.1 You Code: Debugging The following program has errors in it. Your task is to fix the errors so that: your age can be input and converted to an integer. the program outputs your age and your age next year. For example: Enter your age: 45 Today you are 45 next year you will be 46 End of explanation """
name = "Mike"
age = 45
gpa = 3.4
print("%s is %d years old.\nHis gpa is %.3f" % (name, age, gpa))
""" Explanation: Format Codes Python has some string format codes which allow us to control the output of our variables. %s = format variable as str %d = format variable as int %f = format variable as float You can also include the number of spaces to use, for example %5.2f prints a float in a field 5 characters wide with 2 digits to the right of the decimal point. End of explanation """
name = "Mike"
wage = 15
print(f"{name} makes ${wage:.2f} per hour")
""" Explanation: Formatting with F-Strings The other method of formatting data in Python is F-strings. As we saw in the last lab, F-strings use interpolation to specify the variables we would like to print in-line with the print string. You can format an f-string: {var:d} formats var as integer {var:f} formats var as float {var:.3f} formats var as float to 3 decimal places. Example: End of explanation """
# TODO: Write code here
""" Explanation: 1.2 You Code Re-write the program from (1.1 You Code) so that the print statement uses format codes. Remember: do not copy code; as practice, re-write it. End of explanation """
# TODO: Write code here
""" Explanation: 1.3 You Code Use F-strings or format codes to print the PI variable out 3 times. Once as a string, once as an int, and once as a float to 4 decimal places. End of explanation """
# TODO: Write your code here
""" Explanation: Putting it all together: Fred's Fence Estimator Fred's Fence has hired you to write a program to estimate the cost of their fencing projects. For a given length and width you will calculate the number of 6 foot fence sections, and the total cost of the project. Each fence section costs $23.95. Assume the posts and labor are free. Program Inputs: Length of yard in feet Width of yard in feet Program Outputs: Perimeter of yard ( 2 x (Length + Width) ) Number of fence sections required ( perimeter divided by 6 ) Total cost for fence ( fence sections multiplied by $23.95 ) NOTE: All outputs should be formatted to 2 decimal places: e.g.
123.05 ``` TODO: 1. Input length of yard as float, assign to a variable 2. Input width of yard as float, assign to a variable 3. Calculate perimeter of yard, assign to a variable 4. Calculate number of fence sections, assign to a variable 5. Calculate total cost, assign to a variable 6. Print perimeter of yard 7. Print number of fence sections 8. Print total cost for fence. ``` 1.4 You Code Based on the provided TODO, write the program in Python in the cell below. Your solution should have 8 lines of code, one for each TODO. HINT: Don't try to write the program in one sitting. Instead, write a line of code, run it, verify it works, and fix any issues with it before writing the next line of code. End of explanation """ # TODO: Write your code here """ Explanation: Metacognition Rate your comfort level with this week's material so far. 1 ==> I don't understand this at all yet and need extra help. If you choose this, please try to articulate what you do not understand to the best of your ability in the questions and comments section below. 2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below. 3 ==> I can do this on my own without any help. 4 ==> I can do this on my own and can explain/teach how to do it to others. --== Double-Click Here then Enter a Number 1 through 4 Below This Line ==-- Questions And Comments Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- End of explanation """
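As a sanity check on the fence arithmetic described in exercise 1.4 — the helper name and the sample yard dimensions here are hypothetical, not part of the lab:

```python
def fence_estimate(length, width, section_len=6, section_price=23.95):
    """Return (perimeter, sections, cost) for a rectangular yard."""
    perimeter = 2 * (length + width)    # 2 x (Length + Width)
    sections = perimeter / section_len  # perimeter divided by 6
    cost = sections * section_price     # sections multiplied by $23.95
    return perimeter, sections, cost

perimeter, sections, cost = fence_estimate(30, 15)
print("Perimeter: %.2f Sections: %.2f Cost: $%.2f" % (perimeter, sections, cost))
# Perimeter: 90.00 Sections: 15.00 Cost: $359.25
```

Working one step at a time like this, then printing with `%.2f`, satisfies the two-decimal-place output requirement.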
google/starthinker
colabs/barnacle_dv360.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker """ Explanation: DV360 User Audit Gives DV clients ability to see which users have access to which parts of an account. Loads DV user profile mappings using the API into BigQuery and connects to a DataStudio dashboard. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation """ from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) """ Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. 
If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation """ FIELDS = { 'auth_read':'user', # Credentials used for reading data. 'auth_write':'service', # Credentials used for writing data. 'partner':'', # Partner ID to run user audit on. 'recipe_slug':'', # Name of Google BigQuery dataset to create. } print("Parameters Set To: %s" % FIELDS) """ Explanation: 3. Enter DV360 User Audit Recipe Parameters DV360 only permits SERVICE accounts to access the user list API endpoint, so be sure to provide and permission one. Wait for BigQuery->->->DV_... to be created. Wait for BigQuery->->->Barnacle_... to be created, then copy and connect the following data sources. Join the StarThinker Assets Group to access the following assets. Copy the Barnacle DV Report. Click Edit->Resource->Manage added data sources, then edit each connection to connect to your new tables above. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times. Then click play. 
End of explanation """ from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'dataset':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}} } }, { 'google_api':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}}, 'api':'doubleclickbidmanager', 'version':'v1.1', 'function':'queries.listqueries', 'alias':'list', 'results':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'table':'DV_Reports' } } } }, { 'google_api':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}}, 'api':'displayvideo', 'version':'v1', 'function':'partners.list', 'kwargs':{ 'fields':'partners.displayName,partners.partnerId,nextPageToken' }, 'results':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'table':'DV_Partners' } } } }, { 'google_api':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for writing data.'}}, 'api':'displayvideo', 'version':'v1', 'function':'advertisers.list', 'kwargs':{ 
'partnerId':{'field':{'name':'partner','kind':'integer','order':2,'default':'','description':'Partner ID to run user audit on.'}}, 'fields':'advertisers.displayName,advertisers.advertiserId,nextPageToken' }, 'results':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'table':'DV_Advertisers' } } } }, { 'google_api':{ 'auth':'service', 'api':'displayvideo', 'version':'v1', 'function':'users.list', 'kwargs':{ }, 'results':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'table':'DV_Users' } } } }, { 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'from':{ 'query':"SELECT U.userId, U.name, U.email, U.displayName, REGEXP_EXTRACT(U.email, r'@(.+)') AS Domain, IF (ENDS_WITH(U.email, '.gserviceaccount.com'), 'Service', 'User') AS Authentication, IF((Select COUNT(advertiserId) from UNNEST(U.assignedUserRoles)) = 0, 'Partner', 'Advertiser') AS Scope, STRUCT( AUR.partnerId, P.displayName AS partnerName, AUR.userRole, AUR.advertiserId, A.displayName AS advertiserName, AUR.assignedUserRoleId ) AS assignedUserRoles, FROM `{dataset}.DV_Users` AS U, UNNEST(assignedUserRoles) AS AUR LEFT JOIN `{dataset}.DV_Partners` AS P ON AUR.partnerId=P.partnerId LEFT JOIN `{dataset}.DV_Advertisers` AS A ON AUR.advertiserId=A.advertiserId ", 'parameters':{ 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to 
create.'}} }, 'legacy':False }, 'to':{ 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'view':'Barnacle_User_Roles' } } }, { 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'from':{ 'query':"SELECT R.*, P.displayName AS partnerName, A.displayName AS advertiserName, FROM ( SELECT queryId, (SELECT CAST(value AS INT64) FROM UNNEST(R.params.filters) WHERE type = 'FILTER_PARTNER' LIMIT 1) AS partnerId, (SELECT CAST(value AS INT64) FROM UNNEST(R.params.filters) WHERE type = 'FILTER_ADVERTISER' LIMIT 1) AS advertiserId, R.schedule.frequency, R.params.metrics, R.params.type, R.metadata.dataRange, R.metadata.sendNotification, DATE(TIMESTAMP_MILLIS(R.metadata.latestReportRunTimeMS)) AS latestReportRunTime, FROM `{dataset}.DV_Reports` AS R) AS R LEFT JOIN `{dataset}.DV_Partners` AS P ON R.partnerId=P.partnerId LEFT JOIN `{dataset}.DV_Advertisers` AS A ON R.advertiserId=A.advertiserId ", 'parameters':{ 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}} }, 'legacy':False }, 'to':{ 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}, 'view':'Barnacle_Reports' } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) """ Explanation: 4. Execute DV360 User Audit This does NOT need to be modified unless you are changing the recipe, click play. End of explanation """
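The json_set_fields call above walks the TASKS structure and replaces each {'field': {...}} placeholder with the matching value from FIELDS, falling back to the placeholder's default. A simplified, hypothetical sketch of that substitution — not the actual StarThinker implementation — might look like:

```python
def set_recipe_fields(node, values):
    """Recursively replace {'field': {...}} placeholders with user-supplied values."""
    if isinstance(node, dict):
        for key, child in node.items():
            if isinstance(child, dict) and set(child.keys()) == {'field'}:
                spec = child['field']
                # Use the user's value if present, otherwise the declared default.
                node[key] = values.get(spec['name'], spec.get('default'))
            else:
                set_recipe_fields(child, values)
    elif isinstance(node, list):
        for item in node:
            set_recipe_fields(item, values)

task = {'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'default': ''}}}
set_recipe_fields(task, {'recipe_slug': 'dv360_audit'})
print(task)  # {'dataset': 'dv360_audit'}
```

This is why the same 'recipe_slug' value flows into every dataset, table, and view reference in the recipe: each occurrence is a separate placeholder resolved from the one FIELDS entry.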