jinzishuai/learn2deeplearn
deeplearning.ai/C4.CNN/week4_SpecialApps/hw/Face Recognition/Face Recognition for the Happy House - v1.ipynb
gpl-3.0
from keras.models import Sequential from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate from keras.models import Model from keras.layers.normalization import BatchNormalization from keras.layers.pooling import MaxPooling2D, AveragePooling2D from keras.layers.merge import Concatenate from keras.layers.core import Lambda, Flatten, Dense from keras.initializers import glorot_uniform from keras.engine.topology import Layer from keras import backend as K K.set_image_data_format('channels_first') import cv2 import os import numpy as np from numpy import genfromtxt import pandas as pd import tensorflow as tf from fr_utils import * from inception_blocks import * %matplotlib inline %load_ext autoreload %autoreload 2 np.set_printoptions(threshold=np.nan) """ Explanation: Face Recognition for the Happy House Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace. Face recognition problems commonly fall into two categories: Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. 
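The encoding comparison FaceNet relies on can be sketched in plain NumPy, using two hypothetical 128-dimensional encodings and the 0.7 threshold used later in this assignment:

```python
import numpy as np

def same_person(enc_a, enc_b, threshold=0.7):
    """Return True when the L2 distance between two encodings is below the threshold."""
    return np.linalg.norm(enc_a - enc_b) < threshold

# Hypothetical encodings: two nearly identical vectors (same person) and one far away.
enc_anna_1 = np.zeros(128)
enc_anna_2 = np.full(128, 0.01)   # distance ~ 0.113
enc_other = np.full(128, 1.0)     # distance ~ 11.3
print(same_person(enc_anna_1, enc_anna_2))  # True
print(same_person(enc_anna_1, enc_other))   # False
```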
In this assignment, you will: - Implement the triplet loss function - Use a pretrained model to map face images into 128-dimensional encodings - Use these encodings to perform face verification and face recognition In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Let's load the required packages. End of explanation """ FRmodel = faceRecoModel(input_shape=(3, 96, 96)) print("Total Params:", FRmodel.count_params()) """ Explanation: 0 - Naive Face Verification In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person! <img src="images/pixel_comparison.png" style="width:380px;height:150px;"> <caption><center> <u> <font color='purple'> Figure 1 </u></center></caption> Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgements as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using a ConvNet to compute encodings The FaceNet model takes a lot of data and a long time to train. 
So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook). The key things you need to know are: This network uses 96x96 dimensional RGB images as its input. Specifically, it takes as input a face image (or a batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector Run the cell below to create the model for face images. End of explanation """ # GRADED FUNCTION: triplet_loss def triplet_loss(y_true, y_pred, alpha = 0.2): """ Implementation of the triplet loss as defined by formula (3) Arguments: y_true -- true labels, required when you define a loss in Keras, you don't need it in this function. y_pred -- python list containing three objects: anchor -- the encodings for the anchor images, of shape (None, 128) positive -- the encodings for the positive images, of shape (None, 128) negative -- the encodings for the negative images, of shape (None, 128) Returns: loss -- real number, value of the loss """ anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2] ### START CODE HERE ### (≈ 4 lines) # Step 1: Compute the (encoding) distance between the anchor and the positive pos_dist = None # Step 2: Compute the (encoding) distance between the anchor and the negative neg_dist = None # Step 3: subtract the two previous distances and add alpha. basic_loss = None # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples. 
loss = None ### END CODE HERE ### return loss with tf.Session() as test: tf.set_random_seed(1) y_true = (None, None, None) y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1), tf.random_normal([3, 128], mean=1, stddev=1, seed = 1), tf.random_normal([3, 128], mean=3, stddev=4, seed = 1)) loss = triplet_loss(y_true, y_pred) print("loss = " + str(loss.eval())) """ Explanation: Expected Output <table> <center> Total Params: 3743280 </center> </table> By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: <img src="images/distance_kiank.png" style="width:680px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption> So, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other - The encodings of two images of different persons are very different The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. <img src="images/triplet_comparison.png" style="width:280px;height:150px;"> <br> <caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption> 1.2 - The Triplet Loss For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network. 
<img src="images/f_x.png" style="width:380px;height:150px;"> <!-- We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1). --> Training will use triplets of images $(A, P, N)$: A is an "Anchor" image--a picture of a person. P is a "Positive" image--a picture of the same person as the Anchor image. N is a "Negative" image--a picture of a different person than the Anchor image. These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$: $$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$ You would thus like to minimize the following "triplet cost": $$\mathcal{J} = \sum_{i=1}^{N} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ]_+ \tag{3}$$ Here, we are using the notation "$[z]_+$" to denote $\max(z,0)$. Notes: - The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it. - $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$. Most implementations also normalize the encoding vectors to have norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2 = 1$); you won't have to worry about that here. Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps: 1. 
Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ 2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ 3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$ 4. Compute the full formula by taking the max with zero and summing over the training examples: $$\mathcal{J} = \sum_{i=1}^{N} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ]_+ \tag{3}$$ Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.reduce_mean(), tf.maximum(). End of explanation """ FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy']) load_weights_from_FaceNet(FRmodel) """ Explanation: Expected Output: <table> <tr> <td> **loss** </td> <td> 350.026 </td> </tr> </table> 2 - Loading the trained model FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run. 
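For intuition, the four steps of formula (3) map onto array operations as in this NumPy analogue (a sketch only, with toy 2-D "encodings"; it is not the graded TensorFlow solution):

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, alpha=0.2):
    """NumPy analogue of the triplet loss in formula (3)."""
    # Step 1: squared distance between anchor and positive, summed over the components.
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)
    # Step 2: squared distance between anchor and negative.
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)
    # Step 3: subtract the two distances and add the margin alpha.
    basic_loss = pos_dist - neg_dist + alpha
    # Step 4: take max(z, 0) per example, then sum over the training examples.
    return np.sum(np.maximum(basic_loss, 0.0))

# Toy 2-D "encodings" for two training examples; the negatives sit closer to the
# anchors than the positives do, so the margin is violated and the loss is positive.
A = np.array([[0.0, 0.0], [1.0, 1.0]])
P = np.array([[0.1, 0.0], [1.0, 1.1]])
N = np.array([[0.05, 0.0], [1.0, 1.05]])
print(triplet_loss_np(A, P, N))  # ≈ 0.415
```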
End of explanation """ database = {} database["danielle"] = img_to_encoding("images/danielle.png", FRmodel) database["younes"] = img_to_encoding("images/younes.jpg", FRmodel) database["tian"] = img_to_encoding("images/tian.jpg", FRmodel) database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel) database["kian"] = img_to_encoding("images/kian.jpg", FRmodel) database["dan"] = img_to_encoding("images/dan.jpg", FRmodel) database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel) database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel) database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel) database["felix"] = img_to_encoding("images/felix.jpg", FRmodel) database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel) database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel) """ Explanation: Here are some examples of distances between the encodings of three individuals: <img src="images/distance_matrix.png" style="width:380px;height:200px;"> <br> <caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption> Let's now use this model to perform face verification and face recognition! 3 - Applying the model Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. 
To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. 3.1 - Face Verification Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face. End of explanation """ # GRADED FUNCTION: verify def verify(image_path, identity, database, model): """ Function that verifies if the person on the "image_path" image is "identity". Arguments: image_path -- path to an image identity -- string, name of the person whose identity you'd like to verify. Has to be a resident of the Happy house. database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors). model -- your Inception model instance in Keras Returns: dist -- distance between the image_path and the image of "identity" in the database. door_open -- True, if the door should open. False otherwise. """ ### START CODE HERE ### # Step 1: Compute the encoding for the image. Use img_to_encoding() (see example above). 
(≈ 1 line) encoding = None # Step 2: Compute distance with identity's image (≈ 1 line) dist = None # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines) if None: print("It's " + str(identity) + ", welcome home!") door_open = None else: print("It's not " + str(identity) + ", please go away") door_open = None ### END CODE HERE ### return dist, door_open """ Explanation: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID. Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps: 1. Compute the encoding of the image from image_path 2. Compute the distance between this encoding and the encoding of the identity image stored in the database 3. Open the door if the distance is less than 0.7, else do not open. As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) End of explanation """ verify("images/camera_0.jpg", "younes", database, FRmodel) """ Explanation: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture: <img src="images/camera_0.jpg" style="width:100px;height:100px;"> End of explanation """ verify("images/camera_2.jpg", "kian", database, FRmodel) """ Explanation: Expected Output: <table> <tr> <td> **It's younes, welcome home!** </td> <td> (0.65939283, True) </td> </tr> </table> Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. 
The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter. <img src="images/camera_2.jpg" style="width:100px;height:100px;"> End of explanation """ # GRADED FUNCTION: who_is_it def who_is_it(image_path, database, model): """ Implements face recognition for the happy house by finding who is the person on the image_path image. Arguments: image_path -- path to an image database -- database containing image encodings along with the name of the person on the image model -- your Inception model instance in Keras Returns: min_dist -- the minimum distance between image_path encoding and the encodings from the database identity -- string, the name prediction for the person on image_path """ ### START CODE HERE ### ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() (see example above). ## (≈ 1 line) encoding = None ## Step 2: Find the closest encoding ## # Initialize "min_dist" to a large value, say 100 (≈1 line) min_dist = None # Loop over the database dictionary's names and encodings. for (name, db_enc) in None: # Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line) dist = None # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines) if None: min_dist = None identity = None ### END CODE HERE ### if min_dist > 0.7: print("Not in the database.") else: print ("it's " + str(identity) + ", the distance is " + str(min_dist)) return min_dist, identity """ Explanation: Expected Output: <table> <tr> <td> **It's not kian, please go away** </td> <td> (0.86224014, False) </td> </tr> </table> 3.2 - Face Recognition Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. 
This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input. Exercise: Implement who_is_it(). You will have to go through the following steps: 1. Compute the target encoding of the image from image_path 2. Find the encoding from the database that has the smallest distance from the target encoding. - Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items(). - Compute L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set min_dist to dist, and identity to name. End of explanation """ who_is_it("images/camera_0.jpg", database, FRmodel) """ Explanation: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes. End of explanation """
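The recognition step above is just a nearest-neighbour search over the database. A plain-NumPy sketch of that logic, using hypothetical encodings (not the graded solution, which works on image paths and the Keras model):

```python
import numpy as np

def who_is_it_np(target_encoding, database, threshold=0.7):
    """Return (min_dist, identity); identity is None when no one is close enough."""
    min_dist, identity = 100.0, None  # initialize min_dist to a large value
    for name, db_enc in database.items():
        dist = np.linalg.norm(target_encoding - db_enc)
        if dist < min_dist:
            min_dist, identity = dist, name
    if min_dist > threshold:
        return min_dist, None  # "Not in the database."
    return min_dist, identity

# Hypothetical database of two 128-d encodings.
db = {"younes": np.zeros(128), "kian": np.full(128, 1.0)}
dist, identity = who_is_it_np(np.full(128, 0.02), db)
print(identity)  # younes
```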
jalexvig/tensorflow
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
apache-2.0
# Install TensorFlow; note that Colab notebooks run remotely, on virtual # instances provided by Google. !pip install -U -q tf-nightly import os import time import tensorflow as tf from tensorflow.contrib import autograph import matplotlib.pyplot as plt import numpy as np import six from google.colab import widgets """ Explanation: Experimental: TF AutoGraph TensorFlow Dev Summit, 2018. This interactive notebook demonstrates AutoGraph, an experimental source-code transformation library to automatically convert Python, TensorFlow and NumPy code to TensorFlow graphs. Note: this is pre-alpha software! The notebook works best with Python 2, for now. Table of Contents Write Eager code that is fast and scalable. Case study: complex control flow. Case study: training MNIST with Keras. Case study: building an RNN. End of explanation """ def g(x): if x > 0: x = x * x else: x = 0 return x """ Explanation: 1. Write Eager code that is fast and scalable TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations. AutoGraph gives you the best of both worlds: you can write your code in an Eager style, and we will automatically transform it into the equivalent TF graph code. The graph code can be executed eagerly (as a single op), included as part of a larger graph, or exported. For example, AutoGraph can convert a function like this: End of explanation """ print(autograph.to_code(g)) """ Explanation: ... 
into a TF graph-building function: End of explanation """ tf_g = autograph.to_graph(g) with tf.Graph().as_default(): g_ops = tf_g(tf.constant(9)) with tf.Session() as sess: tf_g_result = sess.run(g_ops) print('g(9) = %s' % g(9)) print('tf_g(9) = %s' % tf_g_result) """ Explanation: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors: End of explanation """ def sum_even(numbers): s = 0 for n in numbers: if n % 2 > 0: continue s += n return s tf_sum_even = autograph.to_graph(sum_even) with tf.Graph().as_default(): with tf.Session() as sess: result = sess.run(tf_sum_even(tf.constant([10, 12, 15, 20]))) print('Sum of even numbers: %s' % result) # Uncomment the line below to print the generated graph code # print(autograph.to_code(sum_even)) """ Explanation: 2. Case study: complex control flow Autograph can convert a large subset of the Python language into graph-equivalent code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph. AutoGraph will automatically convert most Python control flow statements into their graph equivalent. We support common statements like while, for, if, break, return and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand: End of explanation """ def f(x): assert x != 0, 'Do not pass zero!' return x * x tf_f = autograph.to_graph(f) with tf.Graph().as_default(): with tf.Session() as sess: try: print(sess.run(tf_f(tf.constant(0)))) except tf.errors.InvalidArgumentError as e: print('Got error message: %s' % e.message) # Uncomment the line below to print the generated graph code # print(autograph.to_code(f)) """ Explanation: Try replacing the continue in the above code with break -- Autograph supports that as well! The Python code above is much more readable than the matching graph code. 
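The eager (plain-Python) behaviour of sum_even can be checked directly; AutoGraph's converted graph version is expected to produce the same result:

```python
def sum_even(numbers):
    """Sum the even entries of a sequence, skipping odd ones (plain Python)."""
    s = 0
    for n in numbers:
        if n % 2 > 0:
            continue
        s += n
    return s

print(sum_even([10, 12, 15, 20]))  # 42
```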
Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs. Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code. End of explanation """ def print_sign(n): if n >= 0: print(n, 'is positive!') else: print(n, 'is negative!') return n tf_print_sign = autograph.to_graph(print_sign) with tf.Graph().as_default(): with tf.Session() as sess: sess.run(tf_print_sign(tf.constant(1))) # Uncomment the line below to print the generated graph code # print(autograph.to_code(print_sign)) """ Explanation: You can also use print functions in-graph: End of explanation """ def f(n): numbers = [] # We ask you to tell us about the element dtype. autograph.set_element_type(numbers, tf.int32) for i in range(n): numbers.append(i) return autograph.stack(numbers) # Stack the list so that it can be used as a Tensor tf_f = autograph.to_graph(f) with tf.Graph().as_default(): with tf.Session() as sess: print(sess.run(tf_f(tf.constant(5)))) # Uncomment the line below to print the generated graph code # print(autograph.to_code(f)) """ Explanation: Appending to lists also works, with a few modifications: End of explanation """ def print_primes(n): """Returns all the prime numbers less than n.""" assert n > 0 primes = [] autograph.set_element_type(primes, tf.int32) for i in range(2, n): is_prime = True for k in range(2, i): if i % k == 0: is_prime = False break if not is_prime: continue primes.append(i) all_primes = autograph.stack(primes) print('The prime numbers less than', n, 'are:') print(all_primes) return tf.no_op() tf_print_primes = autograph.to_graph(print_primes) with tf.Graph().as_default(): with tf.Session() as sess: n = tf.constant(50) sess.run(tf_print_primes(n)) # Uncomment the line below to print the 
generated graph code # print(autograph.to_code(print_primes)) """ Explanation: And all of these functionalities, and more, can be composed into more complicated code: End of explanation """ import gzip import shutil from six.moves import urllib def download(directory, filename): filepath = os.path.join(directory, filename) if tf.gfile.Exists(filepath): return filepath if not tf.gfile.Exists(directory): tf.gfile.MakeDirs(directory) url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz' zipped_filepath = filepath + '.gz' print('Downloading %s to %s' % (url, zipped_filepath)) urllib.request.urlretrieve(url, zipped_filepath) with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out: shutil.copyfileobj(f_in, f_out) os.remove(zipped_filepath) return filepath def dataset(directory, images_file, labels_file): images_file = download(directory, images_file) labels_file = download(directory, labels_file) def decode_image(image): # Normalize from [0, 255] to [0.0, 1.0] image = tf.decode_raw(image, tf.uint8) image = tf.cast(image, tf.float32) image = tf.reshape(image, [784]) return image / 255.0 def decode_label(label): label = tf.decode_raw(label, tf.uint8) label = tf.reshape(label, []) return tf.to_int32(label) images = tf.data.FixedLengthRecordDataset( images_file, 28 * 28, header_bytes=16).map(decode_image) labels = tf.data.FixedLengthRecordDataset( labels_file, 1, header_bytes=8).map(decode_label) return tf.data.Dataset.zip((images, labels)) def mnist_train(directory): return dataset(directory, 'train-images-idx3-ubyte', 'train-labels-idx1-ubyte') def mnist_test(directory): return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte') """ Explanation: 3. Case study: training MNIST with Keras As we've seen, writing control flow in AutoGraph is easy. So running a training loop in graph should be easy as well! Here, we show an example of such a training loop for a simple Keras model that trains on MNIST. 
End of explanation """ def mlp_model(input_shape): model = tf.keras.Sequential(( tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape), tf.keras.layers.Dense(100, activation='relu'), tf.keras.layers.Dense(10, activation='softmax'), )) model.build() return model """ Explanation: First, we'll define a small three-layer neural network using the Keras API End of explanation """ def predict(m, x, y): y_p = m(x) losses = tf.keras.losses.categorical_crossentropy(y, y_p) l = tf.reduce_mean(losses) accuracies = tf.keras.metrics.categorical_accuracy(y, y_p) accuracy = tf.reduce_mean(accuracies) return l, accuracy """ Explanation: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model. End of explanation """ def fit(m, x, y, opt): l, accuracy = predict(m, x, y) opt.minimize(l) return l, accuracy """ Explanation: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient. 
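The loss/accuracy pair computed by predict can be sketched in NumPy for intuition (an analogue assuming one-hot labels and softmax outputs; not the Keras graph version):

```python
import numpy as np

def predict_np(y_true, y_prob):
    """Mean categorical cross-entropy and accuracy, mirroring predict() above."""
    eps = 1e-12  # avoid log(0)
    losses = -np.sum(y_true * np.log(y_prob + eps), axis=-1)
    accuracy = np.mean(y_prob.argmax(axis=-1) == y_true.argmax(axis=-1))
    return losses.mean(), accuracy

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot labels
y_prob = np.array([[0.9, 0.1], [0.2, 0.8]])   # model outputs (probabilities)
loss, acc = predict_np(y_true, y_prob)
print(round(loss, 4), acc)  # 0.1643 1.0
```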
End of explanation
"""

def setup_mnist_data(is_training, hp, batch_size):
    if is_training:
        ds = mnist_train('/tmp/autograph_mnist_data')
        ds = ds.shuffle(batch_size * 10)
    else:
        ds = mnist_test('/tmp/autograph_mnist_data')
    ds = ds.repeat()
    ds = ds.batch(batch_size)
    return ds

def get_next_batch(ds):
    itr = ds.make_one_shot_iterator()
    image, label = itr.get_next()
    x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))
    y = tf.one_hot(tf.squeeze(label), 10)
    return x, y

"""
Explanation: These are some utility functions to download data and generate batches for training
End of explanation
"""

def train(train_ds, test_ds, hp):
    m = mlp_model((28 * 28,))
    opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)

    train_losses = []
    autograph.set_element_type(train_losses, tf.float32)
    test_losses = []
    autograph.set_element_type(test_losses, tf.float32)
    train_accuracies = []
    autograph.set_element_type(train_accuracies, tf.float32)
    test_accuracies = []
    autograph.set_element_type(test_accuracies, tf.float32)

    i = 0
    while i < hp.max_steps:
        train_x, train_y = get_next_batch(train_ds)
        test_x, test_y = get_next_batch(test_ds)
        step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
        step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
        if i % (hp.max_steps // 10) == 0:
            print('Step', i, 'train loss:', step_train_loss,
                  'test loss:', step_test_loss,
                  'train accuracy:', step_train_accuracy,
                  'test accuracy:', step_test_accuracy)
        train_losses.append(step_train_loss)
        test_losses.append(step_test_loss)
        train_accuracies.append(step_train_accuracy)
        test_accuracies.append(step_test_accuracy)
        i += 1

    return (autograph.stack(train_losses), autograph.stack(test_losses),
            autograph.stack(train_accuracies), autograph.stack(test_accuracies))

"""
Explanation: This function specifies the main training loop. We instantiate the model (using the code above) and an optimizer (here we'll use SGD with momentum, nothing too fancy), and set up some lists to keep track of training and test loss and accuracy over time. In the loop inside this function, we grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
End of explanation
"""

def plot(train, test, label):
    plt.title('MNIST model %s' % label)
    plt.plot(train, label='train %s' % label)
    plt.plot(test, label='test %s' % label)
    plt.legend()
    plt.xlabel('Training step')
    plt.ylabel(label.capitalize())
    plt.show()

with tf.Graph().as_default():
    hp = tf.contrib.training.HParams(
        learning_rate=0.05,
        max_steps=tf.constant(500),
    )
    train_ds = setup_mnist_data(True, hp, 50)
    test_ds = setup_mnist_data(False, hp, 1000)
    tf_train = autograph.to_graph(train)
    all_losses = tf_train(train_ds, test_ds, hp)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        (train_losses, test_losses, train_accuracies,
         test_accuracies) = sess.run(all_losses)
        plot(train_losses, test_losses, 'loss')
        plot(train_accuracies, test_accuracies, 'accuracy')

"""
Explanation: Everything is ready to go, let's train the model and plot its performance!
End of explanation
"""

def parse(line):
    """Parses a line from the colors dataset.

    Args:
        line: A comma-separated string containing four items: color_name, red,
            green, and blue, representing the name and respectively the RGB
            value of the color, as an integer between 0 and 255.

    Returns:
        A tuple of three tensors (rgb, chars, length), of shapes:
        (batch_size, 3), (batch_size, max_sequence_length, 256) and
        respectively (batch_size).
    """
    items = tf.string_split(tf.expand_dims(line, 0), ",").values
    rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
    color_name = items[0]
    chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
    length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
    return rgb, chars, length

def maybe_download(filename, work_directory, source_url):
    """Downloads the data from source url."""
    if not tf.gfile.Exists(work_directory):
        tf.gfile.MakeDirs(work_directory)
    filepath = os.path.join(work_directory, filename)
    if not tf.gfile.Exists(filepath):
        temp_file_name, _ = six.moves.urllib.request.urlretrieve(source_url)
        tf.gfile.Copy(temp_file_name, filepath)
        with tf.gfile.GFile(filepath) as f:
            size = f.size()
        print('Successfully downloaded', filename, size, 'bytes.')
    return filepath

def load_dataset(data_dir, url, batch_size, training=True):
    """Loads the colors data at path into a tf.PaddedDataset."""
    path = maybe_download(os.path.basename(url), data_dir, url)
    dataset = tf.data.TextLineDataset(path)
    dataset = dataset.skip(1)
    dataset = dataset.map(parse)
    dataset = dataset.cache()
    dataset = dataset.repeat()
    if training:
        dataset = dataset.shuffle(buffer_size=3000)
    dataset = dataset.padded_batch(batch_size, padded_shapes=((None,), (None, None), ()))
    return dataset

train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"

"""
Explanation: 4. Case study: building an RNN
In this exercise we build and train a model similar to the RNNColorbot model that was used in the main Eager notebook. The model is adapted for converting and training in graph mode.
To get started, we load the colorbot dataset. The code is identical to that used in the other exercise and its details are unimportant.
End of explanation
"""

def model_components():
    lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
    lower_cell.build(tf.TensorShape((None, 256)))
    upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
    upper_cell.build(tf.TensorShape((None, 256)))
    relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
    relu_layer.build(tf.TensorShape((None, 128)))
    return lower_cell, upper_cell, relu_layer

def rnn_layer(chars, cell, batch_size, training):
    """A simple RNN layer.

    Args:
        chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
        cell: An object of type tf.contrib.rnn.LSTMBlockCell
        batch_size: Int, the batch size to use
        training: Boolean, whether the layer is used for training

    Returns:
        A Tensor of shape (max_sequence_length, batch_size, output_size).
    """
    hidden_outputs = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    state, output = cell.zero_state(batch_size, tf.float32)
    initial_state_shape = state.shape
    initial_output_shape = output.shape
    n = tf.shape(chars)[0]
    i = 0
    while i < n:
        ch = chars[i]
        cell_output, (state, output) = cell.call(ch, (state, output))
        hidden_outputs.append(cell_output)
        i += 1
    hidden_outputs = autograph.stack(hidden_outputs)
    if training:
        hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
    return hidden_outputs

def model(inputs, lower_cell, upper_cell, relu_layer, batch_size, training):
    """RNNColorbot model.

    The model consists of two RNN layers (made by lower_cell and upper_cell),
    followed by a fully connected layer with ReLU activation.

    Args:
        inputs: A tuple (chars, length)
        lower_cell: An object of type tf.contrib.rnn.LSTMBlockCell
        upper_cell: An object of type tf.contrib.rnn.LSTMBlockCell
        relu_layer: An object of type tf.layers.Dense
        batch_size: Int, the batch size to use
        training: Boolean, whether the layer is used for training

    Returns:
        A Tensor of shape (batch_size, 3) - the model predictions.
    """
    (chars, length) = inputs
    chars_time_major = tf.transpose(chars, (1, 0, 2))
    chars_time_major.set_shape((None, batch_size, 256))
    hidden_outputs = rnn_layer(chars_time_major, lower_cell, batch_size, training)
    final_outputs = rnn_layer(hidden_outputs, upper_cell, batch_size, training)
    # Grab just the end-of-sequence from each output.
    indices = tf.stack((length - 1, range(batch_size)), axis=1)
    sequence_ends = tf.gather_nd(final_outputs, indices)
    sequence_ends.set_shape((batch_size, 128))
    return relu_layer(sequence_ends)

def loss_fn(labels, predictions):
    return tf.reduce_mean((predictions - labels) ** 2)

"""
Explanation: Next, we set up the RNNColorbot model, which is very similar to the one we used in the main exercise. AutoGraph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
End of explanation
"""

def train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
    iterator = train_data.make_one_shot_iterator()
    step = 0
    while step < num_steps:
        labels, chars, sequence_length = iterator.get_next()
        predictions = model((chars, sequence_length), lower_cell, upper_cell,
                            relu_layer, batch_size, training=True)
        loss = loss_fn(labels, predictions)
        optimizer.minimize(loss)
        if step % (num_steps // 10) == 0:
            print('Step', step, 'train loss', loss)
        step += 1
    return step

def test(eval_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
    total_loss = 0.0
    iterator = eval_data.make_one_shot_iterator()
    step = 0
    while step < num_steps:
        labels, chars, sequence_length = iterator.get_next()
        predictions = model((chars, sequence_length), lower_cell, upper_cell,
                            relu_layer, batch_size, training=False)
        total_loss += loss_fn(labels, predictions)
        step += 1
    print('Test loss', total_loss)
    return total_loss

def train_model(train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps):
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
    train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size,
          num_steps=tf.constant(train_steps))
    test(eval_data, lower_cell, upper_cell, relu_layer, 50, num_steps=tf.constant(2))

    print('Colorbot is ready to generate colors!\n\n')

    # In graph mode, every op needs to be a dependent of another op.
    # Here, we create a no_op that will drive the execution of all other code in
    # this function. AutoGraph will add the necessary control dependencies.
    return tf.no_op()

"""
Explanation: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
End of explanation
"""

@autograph.do_not_convert(run_as=autograph.RunMode.PY_FUNC)
def draw_prediction(color_name, pred):
    pred = pred * 255
    pred = pred.astype(np.uint8)
    plt.axis('off')
    plt.imshow(pred)
    plt.title(color_name)
    plt.show()

def inference(color_name, lower_cell, upper_cell, relu_layer):
    _, chars, sequence_length = parse(color_name)
    chars = tf.expand_dims(chars, 0)
    sequence_length = tf.expand_dims(sequence_length, 0)
    pred = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, 1, training=False)
    pred = tf.minimum(pred, 1.0)
    pred = tf.expand_dims(pred, 0)
    draw_prediction(color_name, pred)
    # Create an op that will drive the entire function.
    return tf.no_op()

"""
Explanation: Finally, we add code to run inference on a single input, which we'll read from user input. Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
End of explanation
"""

def run_input_loop(sess, inference_ops, color_name_placeholder):
    """Helper function that reads from input and calls the inference ops in a loop."""
    tb = widgets.TabBar(["RNN Colorbot"])
    while True:
        with tb.output_to(0):
            try:
                color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
            except (EOFError, KeyboardInterrupt):
                break
        if not color_name:
            break
        with tb.output_to(0):
            tb.clear_tab()
            sess.run(inference_ops, {color_name_placeholder: color_name})
            plt.show()

with tf.Graph().as_default():
    # Read the data.
    batch_size = 64
    train_data = load_dataset(data_dir, train_url, batch_size)
    eval_data = load_dataset(data_dir, test_url, 50, training=False)

    # Create the model components.
    lower_cell, upper_cell, relu_layer = model_components()

    # Create the helper placeholder for inference.
    color_name_placeholder = tf.placeholder(tf.string, shape=())

    # Compile the train / test code.
    tf_train_model = autograph.to_graph(train_model)
    train_model_ops = tf_train_model(
        train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer,
        train_steps=100)

    # Compile the inference code.
    tf_inference = autograph.to_graph(inference)
    inference_ops = tf_inference(color_name_placeholder, lower_cell, upper_cell, relu_layer)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Run training and testing.
        sess.run(train_model_ops)

        # Run the inference loop.
        run_input_loop(sess, inference_ops, color_name_placeholder)

"""
Explanation: Finally, we put everything together. Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input.
End of explanation
"""
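The MNIST loop above drives its weight updates with tf.train.MomentumOptimizer(hp.learning_rate, 0.9). Below is a plain-Python sketch of the update rule that optimizer applies, minimizing a toy quadratic; the helper name momentum_step and the objective are invented for illustration, and TF's real implementation additionally manages variables and accumulator slots internally.

```python
# Sketch of the momentum update rule (illustration only):
#   v <- momentum * v + gradient
#   w <- w - learning_rate * v

def momentum_step(w, v, grad, learning_rate=0.05, momentum=0.9):
    v = momentum * v + grad
    w = w - learning_rate * v
    return w, v

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    w, v = momentum_step(w, v, grad)

print(w)  # converges toward the minimum at w = 3
```

With momentum set to 0 the same helper reduces to plain gradient descent, which is a quick way to sanity-check it.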
metpy/MetPy
v1.1/_downloads/6535033cff935ab2c434cdad6eb5b4f7/Wind_SLP_Interpolation.ipynb
bsd-3-clause
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from matplotlib.colors import BoundaryNorm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from metpy.calc import wind_components
from metpy.cbook import get_test_data
from metpy.interpolate import interpolate_to_grid, remove_nan_observations
from metpy.plots import add_metpy_logo
from metpy.units import units

to_proj = ccrs.AlbersEqualArea(central_longitude=-97., central_latitude=38.)

"""
Explanation: Wind and Sea Level Pressure Interpolation
Interpolate sea level pressure, as well as wind component data, to make a consistent looking analysis, featuring contours of pressure and wind barbs.
End of explanation
"""

with get_test_data('station_data.txt') as f:
    data = pd.read_csv(f, header=0, usecols=(2, 3, 4, 5, 18, 19),
                       names=['latitude', 'longitude', 'slp', 'temperature', 'wind_dir', 'wind_speed'],
                       na_values=-99999)

"""
Explanation: Read in data
End of explanation
"""

lon = data['longitude'].values
lat = data['latitude'].values
xp, yp, _ = to_proj.transform_points(ccrs.Geodetic(), lon, lat).T

"""
Explanation: Project the lon/lat locations to our final projection
End of explanation
"""

x_masked, y_masked, pressure = remove_nan_observations(xp, yp, data['slp'].values)

"""
Explanation: Remove all missing data from pressure
End of explanation
"""

slpgridx, slpgridy, slp = interpolate_to_grid(x_masked, y_masked, pressure,
                                              interp_type='cressman', minimum_neighbors=1,
                                              search_radius=400000, hres=100000)

"""
Explanation: Interpolate pressure using Cressman interpolation
End of explanation
"""

wind_speed = (data['wind_speed'].values * units('m/s')).to('knots')
wind_dir = data['wind_dir'].values * units.degree
good_indices = np.where((~np.isnan(wind_dir)) & (~np.isnan(wind_speed)))
x_masked = xp[good_indices]
y_masked = yp[good_indices]
wind_speed = wind_speed[good_indices]
wind_dir = wind_dir[good_indices]

"""
Explanation: Get wind information and mask where either speed or direction is unavailable
End of explanation
"""

u, v = wind_components(wind_speed, wind_dir)
windgridx, windgridy, uwind = interpolate_to_grid(x_masked, y_masked, np.array(u),
                                                  interp_type='cressman', search_radius=400000,
                                                  hres=100000)
_, _, vwind = interpolate_to_grid(x_masked, y_masked, np.array(v), interp_type='cressman',
                                  search_radius=400000, hres=100000)

"""
Explanation: Calculate u and v components of wind and then interpolate both.
Both will have the same underlying grid, so throw away the grid returned from the v interpolation.
End of explanation
"""

x_masked, y_masked, t = remove_nan_observations(xp, yp, data['temperature'].values)
tempx, tempy, temp = interpolate_to_grid(x_masked, y_masked, t, interp_type='cressman',
                                         minimum_neighbors=3, search_radius=400000, hres=35000)
temp = np.ma.masked_where(np.isnan(temp), temp)

"""
Explanation: Get temperature information
End of explanation
"""

levels = list(range(-20, 20, 1))
cmap = plt.get_cmap('viridis')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)

fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 360, 120, size='large')
view = fig.add_subplot(1, 1, 1, projection=to_proj)
view.set_extent([-120, -70, 20, 50])
view.add_feature(cfeature.STATES.with_scale('50m'))
view.add_feature(cfeature.OCEAN)
view.add_feature(cfeature.COASTLINE.with_scale('50m'))
view.add_feature(cfeature.BORDERS, linestyle=':')

cs = view.contour(slpgridx, slpgridy, slp, colors='k', levels=list(range(990, 1034, 4)))
view.clabel(cs, inline=1, fontsize=12, fmt='%i')

mmb = view.pcolormesh(tempx, tempy, temp, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0.02, boundaries=levels)

view.barbs(windgridx, windgridy, uwind, vwind, alpha=.4, length=5)

view.set_title('Surface Temperature (shaded), SLP, and Wind.')

plt.show()

"""
Explanation: Set up the map and plot the interpolated grids appropriately.
End of explanation
"""
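The Cressman scheme used by interpolate_to_grid above weights each observation within a search radius R by (R² − r²)/(R² + r²), where r is the distance to the grid point. Below is a rough NumPy sketch of that weighting for a single grid point; it illustrates the idea only and is not MetPy's actual implementation — the function name cressman_point and the toy observations are invented here.

```python
import numpy as np

def cressman_point(obs_xy, obs_val, grid_xy, radius):
    # Squared distances from each observation to the grid point.
    r2 = np.sum((obs_xy - grid_xy) ** 2, axis=1)
    inside = r2 < radius ** 2
    if not inside.any():
        return np.nan  # no observations within the search radius
    w = (radius ** 2 - r2[inside]) / (radius ** 2 + r2[inside])
    return np.sum(w * obs_val[inside]) / np.sum(w)

obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
obs_val = np.array([10.0, 12.0, 14.0, 99.0])

# The far-away 99.0 observation falls outside the search radius and is ignored.
val = cressman_point(obs_xy, obs_val, np.array([0.3, 0.3]), radius=2.0)
print(val)
```

Nearby observations get weights near 1 and observations near the edge of the radius get weights near 0, which is why the result stays between the values of the three close stations.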
sdss/marvin
docs/sphinx/tutorials/notebooks/Marvin_Results.ipynb
bsd-3-clause
# set up and run the query
from marvin.tools.query import Query
q = Query(search_filter='nsa.z < 0.1', return_params=['absmag_g_r', 'nsa.elpetro_th50_r'])
r = q.run()

# repr the results
r

"""
Explanation: Marvin Results
This tutorial explores some basics of how to handle the results of your Marvin Query. Much of this information can also be found in the Marvin Results documentation.
Table of Contents:
- Performing a Query<br>
- Retrieving Results<br>
- Formatting Results<br>
- Quickly Plotting Results<br>
- Downloading Results<br>
<a id='query'></a>
Performing a Query
Our first step is to generate a query. Let's perform a simple metadata query to look for all galaxies with a redshift < 0.1. Let's also return the absolute magnitude g-r color and the Elliptical Petrosian half-light radius. This step assumes familiarity with Marvin Queries. To learn how to write queries, please see the Marvin Query documentation or the Marvin Query Tutorial.
End of explanation
"""

# look at the results
r.results

"""
Explanation: Our query runs and indicates a total count of 4275 results. By default, queries that return more than 1000 rows will be automatically paginated into sets (or chunks) of 100 rows, indicated by count=100. The number of rows queries return can be changed using the limit keyword argument to Query. The results are stored in the results attribute.
End of explanation
"""

# look at the columns returned by your results
r.columns

"""
Explanation: A ResultSet contains a list of tuple rows with some default parameters like mangaid and plateifu, plus any parameters used in the Query search_filter or requested with the return_params keyword. The redshift, g-r color, and half-light radius have been returned. We can look at all the columns available using the columns attribute.
End of explanation
"""

# get the next set of results
n = r.getNext()

# look at page 2
r.results

# get the previous set
p = r.getPrevious()

"""
Explanation: <a id='retrieve'></a>
Retrieving Results
There are several options for handling paginated results. To page through the sets of results without extending the results, use getNext and getPrevious. These methods simply page through.
End of explanation
"""

# extend the set by one page
r.extendSet()
r

"""
Explanation: To extend your results and keep them, use the extendSet method. By default, extending a set grabs the next page of 100 results (defined by r.chunk) and appends it to the existing set of results. Rerunning extendSet continues to append results until you've retrieved them all. To avoid running extendSet multiple times, you can use the loop method, which will loop over all pages, appending the data until you've retrieved all the results.
End of explanation
"""

# get all the results
# r.getAll()

# rerun the query
q = Query(search_filter='nsa.z < 0.1', return_params=['absmag_g_r', 'nsa.elpetro_th50_r'], limit=5000)
r = q.run()
r

"""
Explanation: We now have 200 results out of the 4275. For results with a small number of total counts, you can attempt to retrieve all of the results with the getAll method. Currently this method is limited to returning results containing 500,000 rows or rows with 25 columns.
Getting all the results
There are several options for getting all of the results.
- Use the getAll method to attempt to retrieve all the results in one request.
- Use the loop method to loop over all the pages to extend/append the results together.
- Rerun the Query using a new limit to retrieve all the results.
Note: A bug was recently found in getAll, so it might not work. Instead we will rerun the query using a large limit to return all the results.
End of explanation
"""

# extract individual columns of data
redshift = r.results['nsa.z']
color = r.results['absmag_g_r']

"""
Explanation: We now have all the results. We can extract columns of data by indexing the results list using the column name. Let's extract the redshift and color.
End of explanation
"""

# convert the marvin results to a Pandas dataframe
df = r.toDF()
df.head()

"""
Explanation: <a id='format'></a>
Formatting Results
You can convert the results to a variety of formats using the toXXX methods. Common formats are FITS, Astropy Table, Pandas Dataframe, JSON, or CSV. Only the FITS and CSV conversions will write the output to a file. Astropy Tables and Pandas Dataframes have more options for writing out your dataset to a file. Let's convert to a Pandas Dataframe.
End of explanation
"""

# convert the top 5 to cubes
r.convertToTool('cube', limit=5)

# look at the objects
r.objects

"""
Explanation: You can also convert the data into Marvin objects using the convertToTool method. This will attempt to convert each result row into its corresponding Marvin Object. The default conversion is to a Cube object. Converted objects are stored in the r.objects attribute. Let's convert our results to cubes. Depending on the number of results, this may take a while. Let's limit our conversion to 5. Once converted, we now have Marvin Tools at our disposal.
End of explanation
"""

# make a scatter plot
fig, ax, histdata = r.plot('z', 'absmag_g_r')

"""
Explanation: <a id='plot'></a>
Quickly Plotting the Results
You can quickly plot the full set of results using the plot method. plot accepts two string column names and will attempt to create a scatter plot, a hex-binned plot, or a scatter-density plot, depending on the total number of results. The plot method returns the matplotlib Figure and Axes objects, as well as a dictionary of histogram information for each column. The Results.plot method uses the underlying plot utility function. The utility function offers more custom plotting options. Let's plot g-r color versus redshift. Regardless of the number of results you currently have loaded, the plot method will automatically retrieve all the results before plotting.
End of explanation
"""

# make only a scatter plot
fig, ax = r.plot('z', 'absmag_g_r', with_hist=False)

"""
Explanation: By default, it will also plot histograms of each column. This can be turned off by setting with_hist=False.
End of explanation
"""

histdata, fig, ax = r.hist('absmag_g_r')

"""
Explanation: We can also quickly plot a histogram of a single column of data using the hist method, which uses an underlying hist utility function.
End of explanation
"""

# download the DRP datacube files from the results
# r.download()

"""
Explanation: <a id='download'></a>
Downloading Results
You can download the raw files from your results using the download method. This uses the downloadList utility function under the hood. By default this will download the DRP cubes for each target row. It accepts any keyword arguments accepted by downloadList.
End of explanation
"""
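The pagination behavior described above (getNext, extendSet, loop, a fixed chunk size) can be sketched without any Marvin dependency. The ChunkedResults class below is a made-up stand-in that only illustrates the chunk-and-append pattern the Results object implements against the remote server.

```python
# Framework-free sketch of chunked pagination (illustration only --
# Marvin's Results class fetches each chunk from the server instead).

class ChunkedResults:
    def __init__(self, rows, chunk=100):
        self._rows = rows          # stands in for the remote result set
        self.chunk = chunk
        self.results = rows[:chunk]

    def extend_set(self):
        """Append the next chunk, like Results.extendSet()."""
        start = len(self.results)
        self.results = self.results + self._rows[start:start + self.chunk]

    def loop(self):
        """Keep extending until all rows are local, like Results.loop()."""
        while len(self.results) < len(self._rows):
            self.extend_set()

demo = ChunkedResults(list(range(4275)), chunk=100)
print(len(demo.results))   # 100
demo.extend_set()
print(len(demo.results))   # 200
demo.loop()
print(len(demo.results))   # 4275
```

The trade-off mirrored here is the same one the tutorial discusses: many small requests (loop) versus one large request (rerunning the query with a big limit).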
DJCordhose/ai
notebooks/tensorflow/embeddings.ipynb
mit
# Based on
# https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/6.2-understanding-recurrent-neural-networks.ipynb

import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
%pylab inline

import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)

from tensorflow import keras

# https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification
max_features = 10000  # number of words to consider as features
maxlen = 50  # cut texts after this number of words (among top max_features most common words)

# each review is encoded as a sequence of word indexes
# indexed by overall frequency in the dataset
# output is 0 (negative) or 1 (positive)
imdb = keras.datasets.imdb.load_data(num_words=max_features)
(raw_input_train, y_train), (raw_input_test, y_test) = imdb

# tf.keras.datasets.imdb.load_data?

y_train.min()

y_train.max()

# 25000 texts
len(raw_input_train)

# first text has 218 words
len(raw_input_train[0])

raw_input_train[0]

# tf.keras.preprocessing.sequence.pad_sequences?

# https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences
input_train = keras.preprocessing.sequence.pad_sequences(raw_input_train, maxlen=maxlen)
input_test = keras.preprocessing.sequence.pad_sequences(raw_input_test, maxlen=maxlen)

input_train.shape, input_test.shape, y_train.shape, y_test.shape

# left padded with zeros
# As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word.
input_train[0]

"""
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tensorflow/embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Understanding Embeddings on Texts
End of explanation
"""

# tf.keras.layers.Embedding?

from tensorflow.keras.layers import Embedding, Flatten, GlobalAveragePooling1D, Dense, Dropout

embedding_dim = 2

model = keras.Sequential()

# Parameters: max_features * embedding_dim
model.add(Embedding(name='embedding', input_dim=max_features,
                    output_dim=embedding_dim, input_length=maxlen))

# Output: maxlen * embedding_dim (100)
model.add(Flatten(name='flatten'))

# ALTERNATIVE
# average of all embeddings (does not preserve sequence)
# model.add(GlobalAveragePooling1D(name='average_pooling'))

# binary classifier
# model.add(Dense(name='fc', units=32, activation='relu'))
# model.add(Dropout(0.4))
model.add(Dense(name='classifier', units=1, activation='sigmoid'))

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.summary()

batch_size = 96

%time history = model.fit(input_train, y_train, epochs=40, batch_size=batch_size, validation_data=(input_test, y_test))

import pandas as pd

def plot_history(history, samples=10, init_phase_samples=None):
    epochs = history.params['epochs']

    acc = history.history['acc']
    val_acc = history.history['val_acc']

    every_sample = int(epochs / samples)
    acc = pd.DataFrame(acc).iloc[::every_sample, :]
    val_acc = pd.DataFrame(val_acc).iloc[::every_sample, :]

    fig, ax = plt.subplots(figsize=(20, 5))

    ax.plot(acc, 'bo', label='Training acc')
    ax.plot(val_acc, 'b', label='Validation acc')
    ax.set_title('Training and validation accuracy')
    ax.legend()

plot_history(history)

train_loss, train_accuracy = model.evaluate(input_train, y_train, batch_size=batch_size)
train_accuracy

test_loss, test_accuracy = model.evaluate(input_test, y_test, batch_size=batch_size)
test_accuracy

# prediction
model.predict(input_test[0:5])

# ground truth
y_test[0:5]

"""
Explanation: Training the embedding together with the whole model is more reasonable
Alternative: use a pre-trained model, probably trained using skip-gram
End of explanation
"""

embedding_layer = model.get_layer('embedding')
model_stub = keras.Model(inputs=model.input,
                         outputs=embedding_layer.output)

word_to_id = keras.datasets.imdb.get_word_index()

def encode_text(text):
    input_words = text.split()
    input_tokens = np.array([word_to_id[word] for word in input_words])
    padded_input_tokens = keras.preprocessing.sequence.pad_sequences([input_tokens], maxlen=maxlen)
    return padded_input_tokens

def plot_text_embedding(model, text):
    input_words = text.split()
    input_sequence = encode_text(text)
    embeddings = model.predict(input_sequence)[0][-len(input_words):, :]

    x_coords = embeddings[:, 0]  # First latent dim
    y_coords = embeddings[:, 1]  # Second latent dim

    plt.figure(figsize=(20, 20))
    plt.scatter(x_coords, y_coords)
    for i, txt in enumerate(input_words):
        plt.annotate(txt, (x_coords[i], y_coords[i]))
    plt.show()

text = """good best brilliant amazing great lovely awesome bad worst awful art garbage gross horrible sad funny beautiful ugly movie actor male female love"""

plot_text_embedding(model_stub, text)

from tensorflow.keras.layers import Embedding, Flatten, GlobalAveragePooling1D, Dense, Dropout

embedding_dim = 1

model = keras.Sequential()

# Parameters: max_features * embedding_dim
model.add(Embedding(name='embedding', input_dim=max_features,
                    output_dim=embedding_dim, input_length=maxlen))

# Output: maxlen * embedding_dim (50)
model.add(Flatten(name='flatten'))

# ALTERNATIVE
# average of all embeddings (does not preserve sequence)
# model.add(GlobalAveragePooling1D(name='average_pooling'))

# binary classifier
model.add(Dense(name='fc', units=32, activation='relu'))
# model.add(Dropout(0.4))
model.add(Dense(name='classifier', units=1, activation='sigmoid'))

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# model.summary()

batch_size = 96

%time history = model.fit(input_train, y_train, epochs=40, batch_size=batch_size, validation_data=(input_test, y_test))

"""
Explanation: What does the output of the trained embedding look like?
End of explanation
"""

embedding_layer = model.get_layer('embedding')
model_stub = keras.Model(inputs=model.input,
                         outputs=embedding_layer.output)

def plot_1d_text_embedding(model, text):
    input_words = text.split()
    input_sequence = encode_text(text)
    embeddings = model.predict(input_sequence)[0][-len(input_words):, :]

    plt.figure(figsize=(20, 5))
    plt.scatter(embeddings, np.zeros(len(embeddings)))
    for i, txt in enumerate(input_words):
        plt.annotate(txt, (embeddings[i], 0.004), rotation=80)
    plt.show()

text = """good best brilliant amazing great lovely awesome bad worst awful garbage gross horrible sad funny beautiful ugly"""

plot_1d_text_embedding(model_stub, text)

"""
Explanation: What about 1d?
End of explanation
"""
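At lookup time, the Embedding layer trained above is just a table of shape (vocab_size, embedding_dim) indexed by token id, and Flatten concatenates the looked-up vectors. Here is a minimal NumPy sketch of that mechanic; the small sizes and the random table are illustrative only — in Keras the table is learned during fit.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embedding_dim, seq_len = 10, 2, 4
table = rng.normal(size=(vocab_size, embedding_dim))  # stands in for the learned weights

token_ids = np.array([3, 1, 4, 1])   # one padded "review" of length seq_len
embedded = table[token_ids]          # lookup: shape (seq_len, embedding_dim)
print(embedded.shape)                # (4, 2)

# Flatten(), as in the model above, just concatenates the vectors:
flattened = embedded.reshape(-1)
print(flattened.shape)               # (8,)
```

Note that the two occurrences of token id 1 map to the identical vector — that weight sharing is what makes the layer compact and is why related words can end up close together once the table is trained.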
mjones01/NEON-Data-Skills
tutorials-in-development/python-api/download_abby_tos_woody_veg_data_tutorial.ipynb
agpl-3.0
import requests, urllib, os

"""
Explanation: Get packages and set up
This tutorial contains code and instructions for downloading NEON data via the API, using the data product DP1.10098.001 - Woody Plant Vegetation Structure as an example. It follows a similar workflow to the online tutorial <a href="https://www.neonscience.org/neon-api-usage" target="_blank">Using the NEON API in R</a>. See the R tutorial for further details about the overall API structure, and instructions on using other endpoints of the API (locations, taxonomy, etc).
Required packages:
- requests: http://docs.python-requests.org/en/master/
- urllib : https://docs.python.org/3.5/library/urllib.html#module-urllib
End of explanation
"""

r = requests.get("http://data.neonscience.org/api/v0/products/DP1.10098.001")

"""
Explanation: We can use the requests module to see which Veg Structure data is available for all sites. For more details on the anatomy of an API call, refer to https://www.neonscience.org/neon-api-usage. Since we are looking for the DP1.10098.001 product, we can attach this endpoint to the NEON base API url - http://data.neonscience.org/api/v0/ - as follows:
End of explanation
"""

r.status_code

"""
Explanation: info on status codes (from https://www.dataquest.io/blog/python-api-tutorial/)
Status codes are returned with every request that is made to a web server. Status codes indicate information about what happened with a request. Here are some codes that are relevant to GET requests:

200 : everything went okay, and the result has been returned (if any)
301 : the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
401 : the server thinks you're not authenticated. This happens when you don't send the right credentials to access an API (we'll talk about authentication in a later post).
400 : the server thinks you made a bad request. This can happen when you don't send along the right data, among other things.
403 : the resource you're trying to access is forbidden -- you don't have the right permissions to see it.
404 : the resource you tried to access wasn't found on the server.

Let's make sure this request was successful by checking the status code:
End of explanation
"""

r.headers

"""
Explanation: Good news, the request was successful! You can get some additional information about the request by using the .headers attribute:
End of explanation
"""

r.json()

"""
Explanation: Finally, to pull out information from the request, use .json(), which pulls the data returned by the API into a Python dictionary. Note: you can also use the .text attribute, but this prints everything without any formatting.
End of explanation
"""

r.json().keys()

"""
Explanation: To display the content headers, you can use the dictionary .keys() method:
End of explanation
"""

r.json()['data'].keys()

"""
Explanation: We can see that everything is nested under the 'data' key, so let's look at the keys in the data dictionary:
End of explanation
"""

for k, v in r.json().items():
    for k1, v1 in v.items():
        print(k1)

"""
Explanation: Or to print each key on a separate line (to more easily view), run the following:
End of explanation
"""

site = 'ABBY'
for i in range(len(r.json()['data']['siteCodes'])):
    if site in r.json()['data']['siteCodes'][i]['siteCode']:
        data_urls = r.json()['data']['siteCodes'][i]['availableDataUrls']

data_urls  # display all ABBY DP1.10098.001 data urls

"""
Explanation: We want to extract the 'availableDataUrls' from the 'siteCodes' key, and only list the ones that match the site in question, in this case 'ABBY'. One way to do that is to loop through all the site codes, search for a match with the site we want, and then print the list of available data for that site:
End of explanation
"""

abby_2017_urls = [url for url in data_urls if '2017' in url]
abby_2017_urls

"""
Explanation: If you only want to extract the most recent year's data (in this case 2017), you can further subset these urls by date as follows:
End of explanation
"""

r = requests.get(abby_2017_urls[0])
r.json()

"""
Explanation: Now that we have the list of API urls corresponding to the data products we want to download, we can make another request. Let's start with the first url as an example, then loop through all of the urls:
End of explanation
"""

r.json()['data'].keys()

"""
Explanation: Display the keys to show the information provided for each file:
End of explanation
"""

r.json()['data']['files']

"""
Explanation: The download url links are nested under ['data']['files']:
End of explanation
"""

tos_data_folder = './Data/ABBY_TOS_WoodyVegStructure/'  # create a folder in current directory to store TOS data
os.mkdir('./Data/')
os.mkdir(tos_data_folder)

"""
Explanation: Now we have all the information we need to download the files for each of the months that data is available. First, make a directory to download the data:
End of explanation
"""

for url in abby_2017_urls:
    month = url.split('/')[-1]
    download_folder = tos_data_folder + month + '/'
    os.mkdir(download_folder)
    r = requests.get(url)
    files = r.json()['data']['files']
    for i in range(len(files)):
        if '.zip' not in files[i]['name']:
            print('downloading ' + files[i]['name'] + ' to ' + download_folder)
            urllib.request.urlretrieve(files[i]['url'], download_folder + files[i]['name'])

"""
Explanation: Loop through the 2017 data urls and use the urllib.request.urlretrieve method to download all the files except the zip folder (to avoid redundancy) to the data folder, with a subfolder named after the month: ./Data/ABBY_TOS_WoodyVegStructure/yyyy_mm:
End of explanation
"""

for url in abby_2017_urls:
    r = requests.get(url)
    files = r.json()['data']['files']
    for i in range(len(files)):
        if '.zip' in files[i]['name']:
            print('downloading ' + files[i]['name'] + ' to ' + tos_data_folder)
            urllib.request.urlretrieve(files[i]['url'], tos_data_folder + files[i]['name'])

"""
Explanation: Alternatively, you could loop through all the 2017 data and download only the zip files directly to the data folder (without separating into monthly subfolders) as follows:
End of explanation
"""
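The URL handling in this tutorial reduces to simple string logic. The sketch below rebuilds the product endpoint and the year/month filtering with hard-coded illustrative url strings (they mimic, but are not, live API output), so it needs no network access.

```python
# Building the product endpoint used in the first request above:
base_url = 'http://data.neonscience.org/api/v0/'
product = 'DP1.10098.001'
endpoint = base_url + 'products/' + product
print(endpoint)

# Filtering availableDataUrls the same way the loops above do
# (example strings for illustration only):
example_data_urls = [
    'http://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2016-09',
    'http://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-07',
    'http://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-08',
]
abby_2017 = [url for url in example_data_urls if '2017' in url]
months = [url.split('/')[-1] for url in abby_2017]
print(months)  # ['2017-07', '2017-08']
```

The `url.split('/')[-1]` trick is the same one the download loop uses to name each monthly subfolder.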
a-mt/dev-roadmap
docs/!ml/notebooks/PCA.ipynb
mit
from sklearn.decomposition import PCA
pca = PCA(n_components=2)

res = pca.fit_transform(df_norm)
res

# Singular values
pca.singular_values_.round(2)

# Eigenvalues
pca.explained_variance_.round(2)

# Eigenvalues/eigenvalues.sum()
pca.explained_variance_ratio_.round(2)

# Eigenvectors
pca.components_

plt.bar(['PC1', 'PC2'], pca.explained_variance_ratio_)

k = 1
df_reduced = np.dot(pca.components_[:k], df_norm.T)

plt.scatter(df_reduced[0], np.ones_like(df_reduced[0]))

"""
Explanation: Using sklearn
End of explanation
"""

cov = np.cov(df_norm.T)
cov

(df_norm.T.dot(df_norm))/(len(df_norm)-1)

"""
Explanation: From scratch with eigenvalues
Create covariance matrix
Here, we're applying Bessel's correction (dividing by n-1 instead of n) because our dataset is a sample.
End of explanation
"""

eigenvalues, eigenvectors = np.linalg.eig(cov)

# np.linalg.eig returns eigenvectors as columns: column eigenvectors[:,i]
# corresponds to eigenvalues[i]. Transpose so that row eigenvectors[i,:]
# corresponds to eigenvalues[i]
eigenvectors = eigenvectors.T

print("Eigenvalues (explained variance):\n", eigenvalues, "\n")
print("Eigenvectors (components):\n", eigenvectors)

print("Explained variance ratio:\n", eigenvalues/eigenvalues.sum())

"""
Explanation: Compute eigenvalues & eigenvectors
End of explanation
"""

rsort_eigenvalues_idx = eigenvalues.argsort()[::-1]
rsort_eigenvalues_idx

eigenvalues[rsort_eigenvalues_idx]/eigenvalues.sum()

rsort_eigenvectors = eigenvectors[rsort_eigenvalues_idx]
rsort_eigenvectors

"""
Explanation: Sort eigenvectors by DESC eigenvalues
End of explanation
"""

k = 1
df_reduced = np.dot(rsort_eigenvectors[:k], df_norm.T)
df_reduced

plt.scatter(df_reduced[0], np.ones_like(df_reduced[0]))

"""
Explanation: PC1 is a principal component that captures 92% of the data variance, using a combination of a and b (-0.71⋅a - 0.71⋅b). That means that a 1-D graph using just PC1 would be a good approximation of the 2-D graph, since it would account for 92% of the variation in the data. This can be used to identify clusters of data.
Get k features
End of explanation
"""

U, s, V = np.linalg.svd(df_norm, full_matrices=False)

# Left singular vectors
U

# Singular values
s.round(2)

# Right singular vectors (np.linalg.svd actually returns V transposed,
# so each row here is an eigenvector of the covariance matrix)
V

# Eigenvalues: recovered from the singular values via s**2/(n-1)
n_sample = len(df_norm)
(s**2/(n_sample-1)).round(2)

# Transformed data
k = 2
U[:, :k]*s[:k]

"""
Explanation: From scratch using Singular values
End of explanation
"""
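As a sanity check, the eigen-decomposition and SVD routes should produce the same eigenvalues. A quick standalone sketch (using synthetic centered data in place of `df_norm`, since the eigenvalues of the covariance matrix equal the squared singular values divided by n-1):

```python
import numpy as np

# synthetic stand-in for df_norm: centered data in 2 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X = X - X.mean(axis=0)

# route 1: eigen-decomposition of the covariance matrix
cov = np.cov(X.T)
eig_vals, _ = np.linalg.eig(cov)
eig_vals = np.sort(eig_vals)[::-1]

# route 2: singular values of the (centered) data matrix
_, s, _ = np.linalg.svd(X, full_matrices=False)
svd_vals = s**2 / (len(X) - 1)

print(np.allclose(eig_vals, svd_vals))  # True
```

The eigenvectors agree as well, but only up to sign, which is why comparing eigenvalues is the cleaner check.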
fionapigott/Data-Science-45min-Intros
vector-spaces/vector-spaces_pt1.ipynb
unlicense
import copy

try:
    import ujson as json
except ImportError:
    import json

import math
import operator
import random

from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from numpy.linalg import norm as np_norm
import matplotlib.pyplot as plt
import pandas as pd
from scipy.spatial import distance as spd
import seaborn as sns
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer

sns.set_style('whitegrid')
%matplotlib inline

"""
Explanation: Vector Spaces: Part I, "Graphical Representations and Intuition"
Much of statistical learning relies on the ability to represent data observations (measurements) in a space comprising relevant dimensions (features). Often, the number of relevant dimensions is quite small; if you were trying to discern a model that described the area of a rectangle, observations of only two features (length and width) would be all you needed. Fisher's well-known iris dataset comprises 150 measurements of only four features 📊
In some cases - particularly with text analysis - the dimensionality of the space can grow much faster. In many approaches to text analysis, the process to get from a text corpus to numerical feature vectors involves a few steps. Just as an example, one way to do this is to:

break the corpus into documents e.g. each on a new line of an input file
parse the document into tokens e.g. split words on whitespace
construct a feature vector for each document

One way to accomplish the final step is to consider each token (i.e. word) as a unique dimension, and the count of each word per document as the magnitude along the corresponding dimension. There are certainly other ways to define each of these steps (and more subtle details to consider within each), but for now, we'll consider this simple one.
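The three steps above can be sketched by hand in a few lines — a toy stand-in for the `CountVectorizer` that gets used later in the notebook:

```python
from collections import Counter

corpus = "the dog likes cats\nthe blue cat eats brown sharks"

# 1. break the corpus into documents (one document per line)
docs = corpus.split("\n")

# 2. parse each document into tokens (split on whitespace)
tokenized = [doc.split() for doc in docs]

# 3. one dimension per unique token; per-document counts as magnitudes
vocab = sorted({tok for doc in tokenized for tok in doc})
vectors = [[Counter(doc)[term] for term in vocab] for doc in tokenized]

print(vocab)
print(vectors)
```

Each document becomes a row whose length is the vocabulary size — which is exactly why the dimension count explodes as the corpus grows.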
Using exactly this approach, constructing a vector space from just a few minutes of Tweets (each Tweet considered a document) leads to a space with hundreds of thousands of features! In this high-dimensional vector space, it becomes easy for us to be misled by our intuition for statistical learning approaches in more "human" dimensions e.g. one-, two- and three-dimensional spaces. At this point, many people will cite the "curse of dimensionality."

There are multiple phenomena referred to by this name in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining, and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality. Also organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data however all objects appear to be sparse and dissimilar in many ways which prevents common data organization strategies from being efficient. Wikipedia

This "curse of dimensionality" refers to a few related, but distinct challenges with data for statistical learning in high dimensions:

"increasing dimensions decrease the power of a test statistic" ref
"Intuition fails us in high dimensions" ref
sparse data is increasingly found in the corners/shell of high-dimensional space (this notebook!)

I wanted to build more intuition around thinking, visualizing, and generally being more aware of how these phenomena affect our typical kinds of analyses; this notebook is a first step, primarily focused on building an intuition for inspecting and thinking about ways to inspect spaces when we can no longer just plot them.
Along the way, I learned a number of new things, and aim to explore them in follow-up pieces.
Note: Beware that there are a lot of reused variable names in this notebook. If you get an unexpected result, or an error, be sure to check that the appropriate data generation step was run!
Take-aways
These are the high-level objectives we'll aim for:

concepts for inspecting data in high dimensions
illustrations of how high dimensionality squeezes data "density profile" to edges
kmeans ~O(d)
"statistical significance"

End of explanation
"""

# number of data points
n = 1000

# array of continuous values, randomly drawn from standard normal in two dimensions
X = np.array(np.random.normal(size=(n,2)))

# seaborn plays really nicely with pandas
df = pd.DataFrame(X, columns=['x0','x1'])

df.tail()

"""
Explanation: Simple, visualizable spaces
We'll start by exploring some approaches to thinking about and inspecting data in spaces that we can comprehend without much effort.
Normal distributions in 2D
End of explanation
"""

sns.jointplot(data=df, x='x0', y='x1', alpha=0.5)

"""
Explanation: We have a 2-dimensional feature space containing 1000 pieces of data. Each coordinate is orthogonal, and we can equivalently think about each data point being represented by a vector from the origin [ (0,0) in 2-dimensional space ], to the point defined by [x0, x1]. Since we only have two dimensions, we can look at the bivariate distribution quite easily using jointplot.
Seaborn also gives us some handy tools for looking at the univariate distributions at the same time 🙌🏼
End of explanation
"""

def in_sample_dist(X, n):
    """Create a histogram of pairwise distances in array X, using n bins."""
    plt.figure(figsize=(15,6))
    # use scipy's pairwise distance function for efficiency
    plt.hist(spd.pdist(X), bins=n, alpha=0.6)
    plt.xlabel('inter-sample distance')
    plt.ylabel('count')

in_sample_dist(X,n)

"""
Explanation: Another distribution that can provide some hints about the structure of data in a multi-dimensional vector space is the pairwise inter-point distance distribution for all points in the data. Here's a function that makes this a little cleaner.
End of explanation
"""

def radius(vector):
    """Calculate the euclidean norm for the given coordinate vector."""
    origin = np.zeros(len(vector))
    # use scipy's distance functions again!
    return spd.euclidean(origin, vector)

# use our function to create a new 'r' column in the dataframe
df['r'] = df.apply(radius, axis=1)

df.head()

"""
Explanation: In unsupervised statistical learning, we're often interested in the existence of "clusters" in data. Our intuition in low dimensions can be helpful here. In order to identify and label a grouping of points as being unique from some other grouping of points, there needs to be a similarity or "sameness" metric that we can compare. One such measure is simply the distance between all of the points. If a group of points are all qualitatively closer to each other than another group of points, then we might call those two groups unique clusters. If we look at the distribution of inter-point distances above, we see a relatively smooth distribution, suggesting that no group of points is notably closer or further than any other group of points. We'll come back to this idea, shortly.
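As an aside, the row-wise `apply` used above is convenient but slow for large frames; the same radii can be computed in one vectorized call. A standalone sketch (on its own synthetic frame, so as not to disturb the notebook's `df`) — `np.linalg.norm` over `axis=1` is equivalent to the per-row euclidean distance from the origin:

```python
import numpy as np
import pandas as pd
from scipy.spatial import distance as spd

# standalone stand-in for the notebook's df
df = pd.DataFrame(np.random.normal(size=(1000, 2)), columns=['x0', 'x1'])

# vectorized: L2 norm of every row in one call
r_fast = np.linalg.norm(df.values, axis=1)

# per-row scipy version, as in the radius() function above
r_slow = df.apply(lambda v: spd.euclidean(np.zeros(len(v)), v), axis=1)

print(np.allclose(r_fast, r_slow.values))  # True
```

For the small frames in this notebook the difference doesn't matter, but it becomes significant once we get to the Tweet corpus later on.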
(The inspiration for this approach is found here: pdf) Above, the bivariate pairplot works great for displaying our data when it's in two dimensions, but you can probably imagine that even in just d=3 dimensions, looking at this distribution of data will be really hard. So, I want to create a metric that gives us a feel for where the data is located in the vector space. There are many ways to do this. For now, I'm going to consider the euclidean distance cumulative distribution function*. Remember that the euclidean distance is the $L_{2}$ norm $dist(p,q) = \sqrt{ \sum_{i=1}^{d} (q_{i}-p_{i})^{2} }$ where d is the dimensionality of the space. (Wiki) *in fact, even in the course of developing this notebook, I learned that this is not a terribly great choice. But, hey, you have to start somewhere! ¯\_(ツ)_/¯ End of explanation """ def kde_cdf_plot(df, norm=False, vol=False): """Display stacked KDE and CDF plots.""" assert 'r' in df, 'This method only works for dataframes that include a radial distance in an "r" column!' 
if norm: # overwrite df.r with normalized version df['r'] = df['r'] / max(df['r']) fig, (ax1, ax2) = plt.subplots(2,1, sharex=True, figsize=(15,8) ) # subplot 1 sns.distplot(df['r'], hist=False, rug=True, ax=ax1 ) ax1.set_ylabel('KDE') ax1.set_title('n={} in {}-d space'.format(len(df), df.shape[1] - 1) ) # subplot 2 if vol: raise NotImplementedError("Didn't finish implementing this volume normalization!") dim = df.shape[1] - 1 df['r'].apply(lambda x: x**dim).plot(kind='hist', cumulative=True, normed=1, bins=len(df['r']), histtype='step', linewidth=2, ax=ax2 ) ax2.set_ylabel('CDF') plt.xlim(0, .99*max(df['r'])**dim) xlab = 'volume fraction' else: df['r'].plot(kind='hist', cumulative=True, normed=1, bins=len(df['r']), histtype='step', linewidth=2, ax=ax2 ) ax2.set_ylabel('CDF') plt.xlim(0, .99*max(df['r'])) xlab = 'radial distance' if norm: xlab += ' (%)' plt.xlabel(xlab) """ Explanation: There are a couple of ways that I want to visualize this radial distance. First, I'd like to see the univariate distribution (from 0 to max(r)), and second, I'd like to see how much of the data is at a radius less than or equal to a particular value of r. To do this, I'll define a plotting function that takes a dataframe as shown above, and returns plots of these two distributions as described. There's a lot of plotting hijinks in this function, so first just look at the output and see if it makes some sense. Then we can come back and dig through the plotting function. End of explanation """ kde_cdf_plot(df) """ Explanation: Now, let's see these distributions for the 2-dimensional array we created earlier. 
End of explanation
"""

kde_cdf_plot(df)

"""
Explanation: As a reminder:

the kernel density estimate (KDE) is a nice visualization of the "density profile", created by assuming there exists a standard normal at each data point, summing all of these curves, and then normalizing the total under-curve area to 1. The seaborn docs have a nice illustration of this technique. The ticks on the bottom are called a "rug plot", and are the values of the data (values of r)
the cumulative distribution function (CDF) is a measure of the fraction of values which have a value equal to, or lesser than, the specified value, $CDF_{X}(x)=P(X \le x)$. For the purpose of this session, I want to use this particular view to highlight where the observed data is, relative to the "radius" of the entire space. The value of the CDF is the fraction of data contained at an equal or lesser "radius" value (in d dimensions).

Blobs in 2D
Let's add a bit of complexity to the examples above by making the data slightly more irregular: we'll use sklearn's blob constructor.
End of explanation
"""

# data points, dimensions, blob count
n = 1000
dims = 2
blobs = 5

# note: default bounding space is +/- 10.0 in each dimension
X, y = make_blobs(n_samples=n, n_features=dims, centers=blobs)

# convert np arrays to a df, auto-label the columns
X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])])

X_df.head()

sns.jointplot(data=X_df, x='x0', y='x1')

X_df['r'] = X_df.apply(radius, axis=1)
#X_df.head()

"""
Explanation: This time, we'll incorporate one extra kwarg in the kde_cdf_plot function: norm=True displays the x axis (radial distance) as a fraction of the maximum value. This will be helpful when we're comparing spaces of varying radial magnitude.
End of explanation
"""

kde_cdf_plot(X_df, norm=True)

"""
Explanation: As a start, notice that the radius CDF for this data has shifted to the right. At larger r, we're closer to the "edge" of the space containing our data.
The graph will vary with iterations of the data generation, but should consistently be shifted to the right relative to the 0-centered standard normal distribution. Now let's look at the inter-sample distance distribution. Remember that this data is explicitly generated by a mechanism that includes clusters, so we should not see a nice uniform distribution. End of explanation """ def make_blob_df(n_points=1000, dims=2, blobs=5, bounding_box=(-10.0, 10.0)): """Function to automate the np.array blob => pd.df creation and r calculation.""" # nb: default bounding space is +/- 10.0 in each dimension X, y = make_blobs(n_samples=n_points, n_features=dims, centers=blobs, center_box=bounding_box) # make a df, auto-label the columns X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])]) X_df_no_r = copy.deepcopy(X_df) # add a radial distance column X_df['r'] = X_df.apply(radius, axis=1) return X, X_df, X_df_no_r, y n = 1000 dims = 3 blobs = 5 X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs) X_df.head() #X_df_no_r.head() fig = plt.figure(figsize=(12,7)) ax = fig.add_subplot(111, projection='3d') ax.plot(X_df['x0'],X_df['x1'],X_df['x2'],'o', alpha=0.3) ax.set_xlabel('x0'); ax.set_ylabel('x1'); ax.set_zlabel('x2') sns.pairplot(X_df_no_r, plot_kws=dict(alpha=0.3), diag_kind='kde') kde_cdf_plot(X_df, norm=True) """ Explanation: Sure enough, we can see that there are in fact some peaks in the inter-sample distance. This makes sense, because we know that the data generation process encoded that exact idea. Since we're intentionally using a data generation process that builds in clusters, we'll always see a peak on the low end of the x axis... each cluster is created with a low (and similar) intra-cluster distance. The other, larger peaks, will illustrate the relationships between the clusters. 
We may not see precisely the same number of peaks as were specified in the blob creation, though, because we know that sometimes the blobs will be on top of each other and will "look" like one cluster. Compare the peaks of this distribution with the pairplot we created with the same data. Blobs in 3D Let's increase the dimension count by one, to 3, just about the limit of our intuition's abilities. To make the data generation process a bit more reusable, we'll use a function to get the data array and corresponding dataframes. End of explanation """ in_sample_dist(X,n) """ Explanation: Again, compare this CDF to the 2-d case above; note that the data is closer to the "edge" of the space. End of explanation """ n = 1000 dims = 10 blobs = 5 X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs) X_df.head() # this starts to take a few seconds when d~10 sns.pairplot(X_df_no_r, diag_kind='kde', plot_kws=dict(alpha=0.3)) kde_cdf_plot(X_df, norm=True) in_sample_dist(X,n) """ Explanation: Higher-dimensional blobs Ok, let's jump out of the space where we can easily visualize the data. Let's now go to d=10. While we can still look at pairwise coordinate locations, we can't see the whole space at once anymore. Now we'll rely on our other plots for intuition of the space profile. End of explanation """ n_points = 1000 dim_range = [2, 100, 10000] blob_count = 5 fig, (ax1, ax2) = plt.subplots(2,1, sharex=True, figsize=(15,8)) for d in dim_range: ## data generation # random gaussian blobs in d-dims X, y = make_blobs(n_samples=n_points, n_features=d, centers=blob_count) ## ## calculation # create a labeled df from X X_df = pd.DataFrame(X, columns=['x{}'.format(i) for i in range(X.shape[1])]) # add an 'r' column #X_df_no_r = copy.deepcopy(X_df) X_df['r'] = X_df.apply(radius, axis=1) # normalize r value to % of max? 
X_df['r'] = X_df['r'] / max(X_df['r']) ## ## plotting # subplot 1 - KDE sns.distplot(X_df['r'], kde=True, hist=False, rug=True, ax=ax1, label='{}-dims'.format(d) ) # subplot 2 - CDF X_df['r'].plot(kind='hist', cumulative=True, normed=1, bins=len(X_df['r']), histtype='step', linewidth=2, ax=ax2 ) ## ax1.set_ylabel('KDE') ax1.set_title('n={} in {}-d space'.format(len(X_df), dim_range) ) ax2.set_ylabel('CDF') plt.xlim(0, .999*max(X_df['r'])) plt.xlabel('radial distance (%)') fig, (ax1, ax2, ax3) = plt.subplots(3,1, figsize=(15,9)) for i,d in enumerate(dim_range): X, y = make_blobs(n_samples=n_points, n_features=d, centers=blob_count) # loop through the subplots plt.subplot('31{}'.format(i+1)) # plot the data plt.hist(spd.pdist(X), bins=n_points, alpha=0.6) plt.ylabel('count (d={})'.format(d)) ax3.set_xlabel('inter-sample distance') """ Explanation: Having seen the way these plots vary individually, let's compare, side-by-side, a similar data generation process (same number of points and clusters) in a range of dimensions. End of explanation """ small_corpus = [ 'The dog likes cats.', 'The blue cat eats brown sharks.', 'Why not, blue?' ] vec = CountVectorizer() X = vec.fit_transform(small_corpus) X.todense() vec.vocabulary_ """ Explanation: Text data Most of the time, our unsupervised clustering in high dimensions is a function of using text data as an input. We'll start with a small corpus - again, to build intuition about what the data looks like - and then work up. End of explanation """ terms = [x for x,_ in sorted(vec.vocabulary_.items(), key=operator.itemgetter(1))] text_df = pd.DataFrame(X.todense(), columns=terms) text_df text_df['r'] = text_df.apply(radius, axis=1) text_df kde_cdf_plot(text_df, norm=True) """ Explanation: It's good to remember how to map the matrix-like data onto the words that go into it... 
End of explanation """ text_array = [] with open('twitter_2016-04-06_2030.jsonl.body.txt', 'r') as infile: for line in infile: text_array.append(line.replace('\n', ' ')) print( len(text_array) ) print( text_array[0] ) vec = CountVectorizer( #binary=1, ## add dimensionality reduction? #stop_words='english', #lowercase=True, #min_df=10 ) dtm = vec.fit_transform(text_array) dtm # what fraction of the feature space is full? 3051924 / ( 374941*523498 ) """ Explanation: With a tiny little corpus, these plots aren't very useful. Let's use a bigger one: this text file (not included in the repo, sorry visitors!) is about a 10-minute, 10% sample of Tweet (body text) from the Firehose. It has a little under 400,000 Tweets. End of explanation """ # (element-wise sq) (row sum) (flatten) (sqrt) dtm_r = dtm.multiply(dtm).sum(axis=1).A1**0.5 #print(len(dtm_r)) #print(dtm_r) #print(min(dtm_r), np.median(dtm_r), max(dtm_r)) s = pd.Series(dtm_r) plt.figure(figsize=(15,6)) s.plot(kind='hist', cumulative=True, normed=1, bins=len(dtm_r), histtype='step', linewidth=2 ) plt.ylabel('CDF') #plt.xlim(0, .99*max(dtm_r)) plt.xlim(0, 6) plt.xlabel('radial distance') # This is a super interesting side note: some tweets can totally throw off your distribution. # This one Tweet had 114 repetitions of a single character. If you swap the xlim() commands # above, you'll see that r extends to over 100. This is why: #text_array[ s[ s > 114 ].index[0] ] """ Explanation: We have to do the radius math slightly differently now, because we're dealing with a scipy CSR matrix instead of a dense numpy array. End of explanation """ n = 2000 dims = 10000 blobs = 10 X, X_df, X_df_no_r, y = make_blob_df(n, dims, blobs) #X_df_no_r.head() kde_cdf_plot(X_df, norm=True) plt.xlim(0,1) in_sample_dist(X,n) """ Explanation: &lt;record-stopping screeching noise&gt; Ok, so I spent some time working with this data, and I'll be honest: I expected this distribution to be much more skewed to large r! 
In fact, I thought it would be more exaggerated than the blob examples above. Since I didn't have enough time to dig any deeper for this session, let's keep this observation in the back of our minds, and come back to it in another session. We can round out today's discussion with one more relevant topic... Dimensionality reduction Before we end this session, we'll consider one more facet of high-dimensional spaces: reducing them to lower dimension. For now, we'll illustrate the effect of using principal component analysis using the same inspection techniques we've been using all along. If we try to densify the 500k+ dimension document term matrix above, we'll run out of RAM. So, let's use a synthetic data set. First, we look at our metrics in 10,000 dimensions, then after PCA to bring them down to 3. End of explanation """ # now apply PCA and reduce the dimension down to 3 pca = PCA(n_components=3) X_df_3d = pd.DataFrame(pca.fit_transform(X_df_no_r), columns=['x0','x1','x2']) # add in that radial distance column X_df_3d['r'] = X_df_3d.apply(radius, axis=1) X_df_3d.head() # add in the labels so we can color by them X_df_3d['y'] = y # nb: using the vars kwarg seems to remove the ability to include KDE sns.pairplot(X_df_3d, vars=['x0','x1','x2'], hue='y', palette="husl", diag_kind='kde', plot_kws=dict(s=50, alpha=0.7) ) kde_cdf_plot(X_df_3d, norm=True) #in_sample_dist(X_df_3d[['x0','x1','x2']],n) """ Explanation: Now, we know that the data generation process built in the notion of identifiable clusters. Let's see if we can surface that information by projecting our high-dimensional data and space down into a smaller number using principal component analysis. End of explanation """
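To close the loop on the "density squeezed to the shell" take-away, here is one more numerical sketch (standalone, synthetic data): for standard normal draws in d dimensions, the radial distance concentrates near $\sqrt{d}$, and the relative spread shrinks as d grows — so in high dimensions essentially all of the data lives in a thin shell.

```python
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 100, 10000):
    X = rng.normal(size=(1000, d))
    r = np.linalg.norm(X, axis=1)   # radial distance of each point
    # mean radius grows like sqrt(d); relative spread shrinks like 1/sqrt(2d)
    print(d, round(r.mean() / np.sqrt(d), 3), round(r.std() / r.mean(), 4))
```

The middle column stays close to 1 while the right-hand column (the coefficient of variation of the radius) collapses toward zero — the same qualitative behavior the radius CDFs above were showing.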
cjcardinale/climlab
docs/source/courseware/Soundings_from_Observations_and_RCE_Models.ipynb
mit
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import xarray as xr ncep_url = "https://psl.noaa.gov/thredds/dodsC/Datasets/ncep.reanalysis.derived/" ncep_air = xr.open_dataset( ncep_url + "pressure/air.mon.1981-2010.ltm.nc", decode_times=False) level = ncep_air.level lat = ncep_air.lat """ Explanation: Comparing soundings from NCEP Reanalysis and various models We are going to plot the global, annual mean sounding (vertical temperature profile) from observations. Read in the necessary NCEP reanalysis data from the online server. The catalog is here: https://psl.noaa.gov/psd/thredds/catalog/Datasets/ncep.reanalysis.derived/catalog.html End of explanation """ Tzon = ncep_air.air.mean(dim=('lon','time')) weight = np.cos(np.deg2rad(lat)) / np.cos(np.deg2rad(lat)).mean(dim='lat') Tglobal = (Tzon * weight).mean(dim='lat') """ Explanation: Take global averages and time averages. End of explanation """ fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + 273.15, np.log(level/1000)) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/1000) ) ax.set_yticklabels( level.values ) ax.set_title('Global, annual mean sounding from NCEP Reanalysis', fontsize = 24) ax2 = ax.twinx() ax2.plot( Tglobal + 273.15, -8*np.log(level/1000) ); ax2.set_ylabel('Approx. height above surface (km)', fontsize=16 ); ax.grid() """ Explanation: Here is code to make a nicely labeled sounding plot. 
End of explanation """ import climlab from climlab import constants as const col = climlab.GreyRadiationModel() print(col) col.subprocess['LW'].diagnostics col.integrate_years(1) print("Surface temperature is " + str(col.Ts) + " K.") print("Net energy in to the column is " + str(col.ASR - col.OLR) + " W / m2.") """ Explanation: Now compute the Radiative Equilibrium solution for the grey-gas column model End of explanation """ pcol = col.lev fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + 273.15, np.log(level/1000), 'b-', col.Tatm, np.log( pcol/const.ps ), 'r-' ) ax.plot( col.Ts, 0, 'ro', markersize=20 ) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/1000) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue) and radiative equilibrium in grey gas model (red)', fontsize = 18) ax2 = ax.twinx() ax2.plot( Tglobal + const.tempCtoK, -8*np.log(level/1000) ); ax2.set_ylabel('Approx. height above surface (km)', fontsize=16 ); ax.grid() """ Explanation: Plot the radiative equilibrium temperature on the same plot with NCEP reanalysis End of explanation """ dalr_col = climlab.RadiativeConvectiveModel(adj_lapse_rate='DALR') print(dalr_col) dalr_col.integrate_years(2.) 
print("After " + str(dalr_col.time['days_elapsed']) + " days of integration:") print("Surface temperature is " + str(dalr_col.Ts) + " K.") print("Net energy in to the column is " + str(dalr_col.ASR - dalr_col.OLR) + " W / m2.") dalr_col.param """ Explanation: Now use convective adjustment to compute a Radiative-Convective Equilibrium temperature profile End of explanation """ fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + 273.15, np.log(level/1000), 'b-', col.Tatm, np.log( pcol/const.ps ), 'r-' ) ax.plot( col.Ts, 0, 'ro', markersize=16 ) ax.plot( dalr_col.Tatm, np.log( pcol / const.ps ), 'k-' ) ax.plot( dalr_col.Ts, 0, 'ko', markersize=16 ) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/1000) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue), RE (red) and dry RCE (black)', fontsize = 18) ax2 = ax.twinx() ax2.plot( Tglobal + const.tempCtoK, -8*np.log(level/1000) ); ax2.set_ylabel('Approx. height above surface (km)', fontsize=16 ); ax.grid() """ Explanation: Now plot this "Radiative-Convective Equilibrium" on the same graph: End of explanation """ rce_col = climlab.RadiativeConvectiveModel(adj_lapse_rate=6, abs_coeff=1.7E-4) print(rce_col) rce_col.integrate_years(2.) print("After " + str(rce_col.time['days_elapsed']) + " days of integration:") print("Surface temperature is " + str(rce_col.Ts) + " K.") print("Net energy in to the column is " + str(rce_col.ASR - rce_col.OLR) + " W / m2.") """ Explanation: The convective adjustment gets rid of the unphysical temperature difference between the surface and the overlying air. But now the surface is colder! Convection acts to move heat upward, away from the surface. Also, we note that the observed lapse rate (blue) is always shallower than $\Gamma_d$ (temperatures decrease more slowly with height). 
"Moist" Convective Adjustment To approximately account for the effects of latent heat release in rising air parcels, we can just adjust to a lapse rate that is a little shallow than $\Gamma_d$. We will choose 6 K / km, which gets close to the observed mean lapse rate. We will also re-tune the longwave absorptivity of the column to get a realistic surface temperature of 288 K: End of explanation """ fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + 273.15, np.log(level/1000), 'b-', col.Tatm, np.log( pcol/const.ps ), 'r-' ) ax.plot( col.Ts, 0, 'ro', markersize=16 ) ax.plot( dalr_col.Tatm, np.log( pcol / const.ps ), 'k-' ) ax.plot( dalr_col.Ts, 0, 'ko', markersize=16 ) ax.plot( rce_col.Tatm, np.log( pcol / const.ps ), 'm-' ) ax.plot( rce_col.Ts, 0, 'mo', markersize=16 ) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/1000) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue), RE (red), dry RCE (black), and moist RCE (magenta)', fontsize = 18) ax2 = ax.twinx() ax2.plot( Tglobal + const.tempCtoK, -8*np.log(level/1000) ); ax2.set_ylabel('Approx. height above surface (km)', fontsize=16 ); ax.grid() """ Explanation: Now add this new temperature profile to the graph: End of explanation """ # Put in some ozone import xarray as xr ozonepath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/apeozone_cam3_5_54.nc" ozone = xr.open_dataset(ozonepath) ozone """ Explanation: Adding stratospheric ozone Our model has no equivalent of the stratosphere, where temperature increases with height. That's because our model has been completely transparent to shortwave radiation up until now. 
We can load some climatological ozone data:
End of explanation
"""

# Taking annual, zonal, and global averages of the ozone data
O3_zon = ozone.OZONE.mean(dim=("time","lon"))
weight_ozone = np.cos(np.deg2rad(ozone.lat)) / np.cos(np.deg2rad(ozone.lat)).mean(dim='lat')
O3_global = (O3_zon * weight_ozone).mean(dim='lat')

O3_global.shape

ax = plt.figure(figsize=(10,8)).add_subplot(111)
ax.plot( O3_global * 1.E6, np.log(O3_global.lev/const.ps) )
ax.invert_yaxis()
ax.set_xlabel('Ozone (ppm)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
yticks = np.array([1000., 500., 250., 100., 50., 20., 10., 5.])
ax.set_yticks( np.log(yticks/1000.) )
ax.set_yticklabels( yticks )
ax.set_title('Global, annual mean ozone concentration', fontsize = 24);

"""
Explanation: Take the global average of the ozone climatology, and plot it as a function of pressure (or height)
End of explanation
"""

oz_col = climlab.RadiativeConvectiveModel(lev = ozone.lev, abs_coeff=1.82E-4, adj_lapse_rate=6, albedo=0.315)

"""
Explanation: This shows that most of the ozone is indeed in the stratosphere, and peaks near the top of the stratosphere.
Now create a new column model object on the same pressure levels as the ozone data. We are also going to set an adjusted lapse rate of 6 K / km, and tune the longwave absorption
End of explanation
"""

ozonefactor = 75
dp = oz_col.Tatm.domain.axes['lev'].delta
sw_abs = O3_global * dp * ozonefactor
oz_col.subprocess.SW.absorptivity = sw_abs

oz_col.compute()
oz_col.compute()
print(oz_col.SW_absorbed_atm)

"""
Explanation: Now we will do something new: let the column absorb some shortwave radiation. We will assume that the shortwave absorptivity is proportional to the ozone concentration we plotted above. We need to weight the absorptivity by the pressure (mass) of each layer.
End of explanation
"""

oz_col.integrate_years(2.)
print("After " + str(oz_col.time['days_elapsed']) + " days of integration:") print("Surface temperature is " + str(oz_col.Ts) + " K.") print("Net energy in to the column is " + str(oz_col.ASR - oz_col.OLR) + " W / m2.") pozcol = oz_col.lev fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + const.tempCtoK, np.log(level/1000), 'b-', col.Tatm, np.log( pcol/const.ps ), 'r-' ) ax.plot( col.Ts, 0, 'ro', markersize=16 ) ax.plot( dalr_col.Tatm, np.log( pcol / const.ps ), 'k-' ) ax.plot( dalr_col.Ts, 0, 'ko', markersize=16 ) ax.plot( rce_col.Tatm, np.log( pcol / const.ps ), 'm-' ) ax.plot( rce_col.Ts, 0, 'mo', markersize=16 ) ax.plot( oz_col.Tatm, np.log( pozcol / const.ps ), 'c-' ) ax.plot( oz_col.Ts, 0, 'co', markersize=16 ) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/1000) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue), RE (red), dry RCE (black), moist RCE (magenta), RCE with ozone (cyan)', fontsize = 18) ax.grid() """ Explanation: Now run it out to Radiative-Convective Equilibrium, and plot End of explanation """ oz_col2 = climlab.process_like( oz_col ) oz_col2.subprocess['LW'].absorptivity *= 1.2 oz_col2.integrate_years(2.) 
fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + const.tempCtoK, np.log(level/const.ps), 'b-' ) ax.plot( oz_col.Tatm, np.log( pozcol / const.ps ), 'c-' ) ax.plot( oz_col.Ts, 0, 'co', markersize=16 ) ax.plot( oz_col2.Tatm, np.log( pozcol / const.ps ), 'c--' ) ax.plot( oz_col2.Ts, 0, 'co', markersize=16 ) ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/const.ps) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue), RCE with ozone (cyan)', fontsize = 18) ax.grid() """ Explanation: And we finally have something that looks like the tropopause, with temperature increasing above at about the correct rate. Though the tropopause temperature is off by 15 degrees or so. Greenhouse warming in the RCE model with ozone End of explanation """ datapath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/CESMA/" atmstr = ".cam.h0.clim.nc" cesm_ctrl = xr.open_dataset(datapath + 'som_1850_f19/clim/som_1850_f19' + atmstr) cesm_2xCO2 = xr.open_dataset(datapath + 'som_1850_2xCO2/clim/som_1850_2xCO2' + atmstr) cesm_ctrl.T T_cesm_ctrl_zon = cesm_ctrl.T.mean(dim=('time', 'lon')) T_cesm_2xCO2_zon = cesm_2xCO2.T.mean(dim=('time', 'lon')) weight = np.cos(np.deg2rad(cesm_ctrl.lat)) / np.cos(np.deg2rad(cesm_ctrl.lat)).mean(dim='lat') T_cesm_ctrl_glob = (T_cesm_ctrl_zon*weight).mean(dim='lat') T_cesm_2xCO2_glob = (T_cesm_2xCO2_zon*weight).mean(dim='lat') fig = plt.figure( figsize=(10,8) ) ax = fig.add_subplot(111) ax.plot( Tglobal + const.tempCtoK, np.log(level/const.ps), 'b-' ) ax.plot( oz_col.Tatm, np.log( pozcol / const.ps ), 'c-' ) ax.plot( oz_col.Ts, 0, 'co', markersize=16 ) ax.plot( oz_col2.Tatm, np.log( pozcol / const.ps ), 'c--' ) ax.plot( oz_col2.Ts, 0, 'co', markersize=16 ) ax.plot( T_cesm_ctrl_glob, np.log( cesm_ctrl.lev/const.ps ), 'r-' ) ax.plot( T_cesm_2xCO2_glob, np.log( cesm_ctrl.lev/const.ps ), 'r--' )
ax.invert_yaxis() ax.set_xlabel('Temperature (K)', fontsize=16) ax.set_ylabel('Pressure (hPa)', fontsize=16 ) ax.set_yticks( np.log(level/const.ps) ) ax.set_yticklabels( level.values ) ax.set_title('Temperature profiles: observed (blue), RCE with ozone (cyan), CESM (red)', fontsize = 18) ax.grid() """ Explanation: And we find that the troposphere warms, while the stratosphere cools! Vertical structure of greenhouse warming in CESM model End of explanation """
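The cos-latitude weighting used above (for `O3_global` and the CESM global means) is worth pinning down on its own. Here is a minimal stand-alone sketch in plain NumPy, on a hypothetical latitude grid, with no xarray required:

```python
import numpy as np

def global_mean(field, lat_deg):
    """Area-weighted global mean of a zonal-mean field.

    Each latitude band is weighted by cos(latitude), which is
    proportional to the surface area of that band on a sphere.
    """
    w = np.cos(np.deg2rad(lat_deg))
    return np.sum(field * w) / np.sum(w)

# A field that is constant everywhere must average to that constant,
# regardless of the latitude spacing.
lat = np.linspace(-89.5, 89.5, 180)
uniform = np.full_like(lat, 288.0)
print(global_mean(uniform, lat))
```

Because the weights are normalized inside the function, a uniform field averages to itself, which is a quick sanity check on any weighting scheme.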
oscaribv/pyaneti
pyaneti_extras/.ipynb_checkpoints/toy_model1-checkpoint.ipynb
gpl-3.0
from __future__ import print_function, division, absolute_import #Import the multi-GP class from the mgp.py file, all the magic is there import numpy as np from pyaneti_extras.citlalatonac import citlalatonac, create_times k2100 = citlalatonac(tmin=0,tmax=50,amplitudes=[0.005,0.05,0.05,0.0,0.005,-0.05], kernel_parameters=[20,0.3,5],time_series=['rhk','bis'],seed=13) #Plot the time-series k2100.plot() """ Explanation: Creation of synthetic spectroscopic activity-indicator time-series Oscar Barragán 2021 Import the class End of explanation """ #time = np.linspace(10,40,30) #time = optimal_times[:] ##Create an instance called k2100 #k2100 = multigp_rvs(time=time,amplitudes=[0.0058,0.0421,0.024,0.0,0.02,-0.086], # qp_params=[31.2,0.55,4.315],time_series=['rhk','bis'],seed=13) ##Plot the time-series #k2100.plot() """ Explanation: Check if the code works by giving an input time to create the stamps End of explanation """ #Planet signal parameters planet_params = [7760,0.005,5.2,0,0] #k2100.add_planet(planet_params=planet_params,planet_name='k2100b') #If add_planet were uncommented, the plot would include the planet signal k2100.plot() optimal_times = create_times(min(k2100.time),max(k2100.time),ndata=50,star='K2-100',observatory='lapalma') k2100.create_data(t=optimal_times) k2100.plot() """ Explanation: No planet for this example End of explanation """ #k2100.add_red_noise([[5e-4,10],[1e-4,10],[1e-3,10]]) #Now the data includes red noise #k2100.plot() """ Explanation: Let us add red noise accounting for imperfections in the instrument, observations, etc. End of explanation """ #The input vector err has to have one white-noise term per time-series k2100.add_white_noise(err=[0.001,0.001,0.001]) k2100.plot() """ Explanation: Add white noise to the data points End of explanation """ fname = 'mgp-'+str(k2100.ndata)+'.dat' k2100.save_data(fname) """ Explanation: Save the file as requested by pyaneti End of explanation """
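Conceptually, `add_white_noise` just draws one zero-mean Gaussian deviate per data point in each series. A stand-alone sketch of that step in plain NumPy (this is not the citlalatonac API — the function name here is illustrative):

```python
import numpy as np

def add_white_noise(series, sigma, seed=13):
    """Return a copy of `series` with zero-mean Gaussian noise of
    standard deviation `sigma` added to every point."""
    rng = np.random.default_rng(seed)
    return series + rng.normal(0.0, sigma, size=len(series))

clean = np.zeros(10_000)
noisy = add_white_noise(clean, sigma=0.001)
# The sample standard deviation should land close to the requested sigma,
# and a fixed seed makes the realization reproducible.
print(round(float(np.std(noisy)), 4))
```

Seeding the generator is what makes synthetic data sets like the one saved above reproducible from run to run.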
CLEpy/CLEpy-MotM
Tweepy/Tweepy.ipynb
mit
# Load keys, secrets, settings import os ENV = os.environ CONSUMER_KEY = ENV.get('IOTX_CONSUMER_KEY') CONSUMER_SECRET = ENV.get('IOTX_CONSUMER_SECRET') ACCESS_TOKEN = ENV.get('IOTX_ACCESS_TOKEN') ACCESS_TOKEN_SECRET = ENV.get('IOTX_ACCESS_TOKEN_SECRET') USERNAME = ENV.get('IOTX_USERNAME') USER_ID = ENV.get('IOTX_USER_ID') print(USERNAME) """ Explanation: Tweepy An easy-to-use Python library for accessing the Twitter API. http://www.tweepy.org/ Installing pip install tweepy Twitter App https://apps.twitter.com/ Create an app and save: Consumer key Consumer secret Access token Access token secret Store Secrets Securely Don't commit them to a public repo! I set them via environment variables and then access via os.environ. End of explanation """ import tweepy auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET) api = tweepy.API(auth) public_tweets = api.home_timeline(count=3) for tweet in public_tweets: print(tweet.text) # Models user = api.get_user('clepy') print(user.screen_name) print(dir(user)) # Tweet! status = api.update_status("I'm at @CLEPY!") print(status.id) print(status.text) """ Explanation: Twitter APIs REST vs Streaming https://dev.twitter.com/docs Twitter REST API Search Tweet Get information https://dev.twitter.com/rest/public End of explanation """ # Subclass StreamListener and define on_status method class MyStreamListener(tweepy.StreamListener): def on_status(self, status): print("@{0}: {1}".format(status.author.screen_name, status.text)) myStream = tweepy.Stream(auth = api.auth, listener=MyStreamListener()) try: myStream.filter(track=['#clepy']) except KeyboardInterrupt: print('Interrupted...') except tweepy.error.TweepError: myStream.disconnect() print('Disconnected. Try again!') """ Explanation: Streaming API Real-time streaming of: Searches Mentions Lists Timeline https://dev.twitter.com/streaming/public End of explanation """
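The `track=['#clepy']` filter above is, conceptually, a case-insensitive keyword match on incoming statuses. A rough offline sketch of that matching in plain Python (an approximation for illustration, not the Twitter API's exact matching rules):

```python
def matches_track(text, track_terms):
    """Rough approximation of the Streaming API's `track` matching:
    True if any term appears in the text, case-insensitively."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in track_terms)

statuses = [
    "Giving a lightning talk at #CLEpy tonight!",
    "Lunch was great.",
    "Slides from #clepy are up.",
]
hits = [s for s in statuses if matches_track(s, ["#clepy"])]
print(len(hits))  # 2
```

This is handy for testing an `on_status` handler without opening a live stream.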
saudijack/unfpyboot
Day_01/01_Advanced_Python/03_LambdaFunction-Solutions.ipynb
mit
words = 'The quick brown fox jumps over the lazy dog'.split() print words stuff = [] for w in words: stuff.append([w.upper(), w.lower(), len(w)]) for i in stuff: print i """ Explanation: Lambda Function and More <u>Problem 1</u> End of explanation """ stuff = map(lambda w: [w.upper(), w.lower(), len(w)],words) for i in stuff: print i """ Explanation: Use list comprehension and lambda/map function to define <b>stuff</b>. End of explanation """ sentence = "It's a myth that there are no words in English without vowels." vowels = 'aeiou' result = filter(lambda x: x not in vowels, sentence) print result """ Explanation: <u>Problem 2</u> Use the filter function to remove all the vowels from the sentence End of explanation """ print reduce(lambda x,y: x*y, [47,11,42,102,13]) # note that you can improve the speed of the calculation using built-in functions # or better still: using the numpy module from operator import mul import numpy as np a = range(1, 101) print "\nreduce(lambda x, y: x * y, a)" %timeit reduce(lambda x, y: x * y, a) # (1) print "\nreduce(mul, a)" %timeit reduce(mul, a) # (2) print "\nnp.prod(a)" a = np.array(a) %timeit np.prod(a) # (3) """ Explanation: <u>Problem 3</u> Use the reduce function to find the product of all the entries in the list [47,11,42,102,13] End of explanation """
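These cells are written for Python 2 (bare `print` statements and a built-in `reduce`). The same three problems in a Python 3 sketch — note that `reduce` now lives in `functools`:

```python
from functools import reduce

words = 'The quick brown fox jumps over the lazy dog'.split()
stuff = [[w.upper(), w.lower(), len(w)] for w in words]   # list-comprehension form

sentence = "It's a myth that there are no words in English without vowels."
no_vowels = ''.join(filter(lambda ch: ch not in 'aeiou', sentence))

product = reduce(lambda x, y: x * y, [47, 11, 42, 102, 13])
print(stuff[0], product)  # ['THE', 'the', 3] 28792764
```

`map` and `filter` also return lazy iterators in Python 3, so wrap them in `list()` (or use a comprehension, as here) when you need the materialized result.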
nholtz/structural-analysis
matrix-methods/frame2d/05-test-frame-6b.ipynb
cc0-1.0
from IPython import display display.SVG('data/frame-6b.d/frame-6b.svg') from Frame2D import Frame2D f6b = Frame2D('frame-6b') """ Explanation: Example 6-b In this example, all input data is given directly in the notebook cells below. The data is given in CSV form precisely as would be given in data files. For each table started by the cell magic %%Table, the table name follows immediately. If the data was instead provided in a data file the file name would be the table name + .csv. This example is statically determinate for ease of checking -- the rightmost figure shows the reactions corresponding to factored values of the loads shown in the left figure. Units are $N$ and $mm$. Test Frame End of explanation """ %%Table nodes NODEID,X,Y,Z A,0.,0.,5000. B,0,4000,5000 C,8000,4000,5000 D,8000,0,5000 """ Explanation: Input Data Nodes Table nodes (file nodes.csv) provides the $x$-$y$ coordinates of each node. Other columns, such as the $z$- coordinate are optional, and ignored if given. End of explanation """ %%Table supports NODEID,C0,C1,C2 A,FX,FY,MZ D,FX,FY """ Explanation: Supports Table supports (file supports.csv) specifies the support fixity, by indicating the constrained direction for each node. There can be 1, 2 or 3 constraints, selected from the set 'FX', 'FY' or 'MZ', in any order for each constrained node. Directions not mentioned are 'free' or unconstrained. End of explanation """ %%Table members MEMBERID,NODEJ,NODEK AB,A,B BC,B,C CD,C,D """ Explanation: Members Table members (file members.csv) specifies the member incidences. For each member, specify the id of the nodes at the 'j-' and 'k-' ends. These ends are used to interpret the signs of various values. End of explanation """ %%Table releases MEMBERID,RELEASE AB,MZK CD,MZJ """ Explanation: Releases Table releases (file releases.csv) is optional and specifies internal force releases in some members. Currently only moment releases at the 'j-' end ('MZJ') and 'k-' end ('MZK') are supported. 
These specify that the internal bending moments at those locations are zero. You can only specify one release per line, but you can have more than one line for a member. End of explanation """ %%Table properties MEMBERID,SIZE,IX,A BC,W460x106,, AB,W310x97,, CD,, """ Explanation: Properties Table properties (file properties.csv) specifies the member properties for each member. If the 'SST' library is available, you may specify the size of the member by using the designation of a shape in the CISC Structural Section Tables. If either IX or A is missing, it is retrieved using the sst library. If the values on any line are missing, they are copied from the line above. End of explanation """ %%Table node_loads LOAD,NODEID,DIRN,F Wind,B,FX,-200000. """ Explanation: Node Loads Table node_loads (file node_loads.csv) specifies the forces applied directly to the nodes. DIRN (direction) may be one of 'FX,FY,MZ'. 'LOAD' is an identifier of the kind of load being applied and F is the value of the load, normally given as a service or specified load. A later input table will specify load combinations and factors. End of explanation """ %%Table support_displacements LOAD,NODEID,DIRN,DELTA Other,A,DY,-10 """ Explanation: Support Displacements Table support_displacements (file support_displacements.csv) is optional and specifies imposed displacements of the supports. DIRN (direction) is one of 'DX, DY, RZ'. LOAD is as for Node Loads, above. Of course, in this example the frame is statically determinate and so the support displacement will have no effect on the reactions or member end forces. End of explanation """ %%Table member_loads LOAD,MEMBERID,TYPE,W1,W2,A,B,C Live,BC,UDL,-50,,,, Live,BC,PL,-200000,,5000 """ Explanation: Member Loads Table member_loads (file member_loads.csv) specifies loads acting on members.
Current types are PL (concentrated transverse, ie point load), CM (concentrated moment), UDL (uniformly distributed load over entire span), LVL (linearly varying load over a portion of the span) and PLA (point load applied parallel to member coincident with centroidal axis). Values W1 and W2 are loads or load intensities and A, B, and C are dimensions appropriate to the kind of load. End of explanation """ %%Table load_combinations CASE,LOAD,FACTOR One,Live,1.5 One,Wind,1.75 One,Other,2.0 """ Explanation: Load Combinations Table load_combinations (file load_combinations.csv) is optional and specifies factored combinations of loads. By default, there is always a load combination called all that includes all loads with a factor of 1.0. A frame solution (see below) indicates which CASE to use. End of explanation """ f6b.input_all() f6b.print_input() RS = f6b.solve('one') f6b.print_results(rs=RS) """ Explanation: Solution The following outputs all tables, prints a description of the input data, produces a solution for load case 'one' (all load and case names are case-insensitive) and finally prints the results. End of explanation """
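The load_combinations table above maps each CASE to factored LOADs. A small stand-alone sketch of how such factors combine service-load values (plain Python with the `csv` module, hypothetical load values, not the Frame2D API):

```python
import csv
import io

combo_csv = """CASE,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
One,Other,2.0
"""

# Hypothetical service-load values, one per LOAD name.
service_loads = {'Live': -50.0, 'Wind': -200000.0, 'Other': 0.0}

def factored(case, rows, loads):
    """Sum factor * service value over every row belonging to `case`
    (case names compared case-insensitively, as in the document)."""
    return sum(float(r['FACTOR']) * loads[r['LOAD']]
               for r in rows if r['CASE'].lower() == case.lower())

rows = list(csv.DictReader(io.StringIO(combo_csv)))
print(factored('one', rows, service_loads))  # 1.5*(-50) + 1.75*(-200000) + 2.0*0
```

This mirrors why the solution cell above asks for CASE 'one': the solver applies exactly these factors before assembling the load vector.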
galtay/tensorflow_examples
01_linear_regression.ipynb
gpl-3.0
import pandas as pd import numpy as np import tensorflow as tf from tensorflow.contrib import keras from sklearn import datasets from sklearn import linear_model import statsmodels.api as sm import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns """ Explanation: Imports End of explanation """ Nsamp = 50 Nfeatures = 1 xarr = np.linspace(-0.5, 0.5, Nsamp) np.random.seed(83749) beta_0 = -2.0 beta_1 = 4.3 yarr = (beta_0 + beta_1 * xarr) + (np.random.normal(size=Nsamp) * 0.5) mdl = linear_model.LinearRegression(fit_intercept=False) mdl = mdl.fit(np.c_[np.ones(Nsamp), xarr], yarr) mdl.coef_ fig, ax = plt.subplots(figsize=(5,5)) plt.scatter(xarr, yarr, s=10, color='blue') plt.plot(xarr, mdl.coef_[0] + mdl.coef_[1] * xarr, color='red') ph_x = tf.placeholder(tf.float32, [None, Nfeatures], name='features') ph_y = tf.placeholder(tf.float32, [None, 1], name='output') ph_x, ph_y # Set model weights v_W = tf.Variable(tf.random_normal([Nfeatures, 1]), name='weights') v_b = tf.Variable(tf.zeros([1]), name='bias') v_z = tf.matmul(ph_x, v_W) + v_b cost_1 = tf.squared_difference(v_z, ph_y) cost_2 = tf.reduce_mean(cost_1) learning_rate=0.1 train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2) # Construct model and encapsulating all ops into scopes, making # Tensorboard's Graph visualization more convenient #with tf.name_scope('Model'): # # Model # pred = tf.matmul(x, W) + b # basic linear regression #with tf.name_scope('Loss'): # # Minimize error (mean squared error) # cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1)) #with tf.name_scope('SGD'): # # Gradient Descent # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #with tf.name_scope('Accuracy'): # # Accuracy # acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # acc = tf.reduce_mean(tf.cast(acc, tf.float32)) init = tf.global_variables_initializer() merged = tf.summary.merge_all() # Launch the graph feed_dict = {ph_x: 
xarr.reshape(Nsamp, 1), ph_y: yarr.reshape(Nsamp,1)} with tf.Session() as sess: train_writer = tf.summary.FileWriter('/tmp/tensorflow/logs', sess.graph) sess.run(init) z_out = sess.run(v_z, feed_dict=feed_dict) cost_1_out = sess.run(cost_1, feed_dict=feed_dict) cost_2_out = sess.run(cost_2, feed_dict=feed_dict) for i in range(300): train_step_out = sess.run(train_step, feed_dict=feed_dict) W_out = sess.run(v_W, feed_dict=feed_dict) b_out = sess.run(v_b, feed_dict=feed_dict) print(W_out) print(b_out) """ Explanation: Simple Mock Data Let's create a simple mock dataset with one independent variable and one dependent variable with a little noise. End of explanation """ boston = datasets.load_boston() print(boston['DESCR']) features = pd.DataFrame(data=boston['data'], columns=boston['feature_names']) target = pd.DataFrame(data=boston['target'], columns=['MEDV']) features.head(5) target.head(5) hh = features.hist(figsize=(14,18)) """ Explanation: Boston Housing Dataset features: raw feature variables in DataFrame target: raw target variable in DataFrame End of explanation """ from sklearn.preprocessing import StandardScaler scalerX = StandardScaler() scalerX.fit(features) dfXn = pd.DataFrame(data=scalerX.transform(features), columns=features.columns) scalerY = StandardScaler() scalerY.fit(target) dfYn = pd.DataFrame(data=scalerY.transform(target), columns=target.columns) dfXn.head(5) dfYn.head(5) """ Explanation: Center and Normalize End of explanation """ dfXn1 = dfXn.copy() dfXn1.insert(loc=0, column='intercept', value=1) results = sm.OLS(dfYn, dfXn1).fit() print(results.summary()) dfYn.max() target.max() plt.scatter(dfYn.values, results.fittedvalues.values) from sklearn import linear_model mdl = linear_model.LinearRegression(fit_intercept=False) mdl = mdl.fit(dfXn1.values, dfYn.values) print('n_params (statsmodels): ', len(results.params)) print('n params (sklearn linear): ', len(mdl.coef_.flatten())) print(results.params) print() print(mdl.coef_)
np.all(np.abs(mdl.coef_ - results.params.values) < 1.0e-10) plt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten()) """ Explanation: Statsmodels Linear Regression End of explanation """ from keras.models import Sequential from keras.layers import Dense, InputLayer from keras.optimizers import SGD, Adam, RMSprop from keras.losses import mean_squared_error nfeatures = features.shape[1] model = Sequential() model.add(InputLayer(input_shape=(nfeatures,), name='input')) model.add(Dense(1, kernel_initializer='uniform', activation='linear', name='dense_1')) model.summary() weights_initial = model.get_weights() print('weights_initial - input nodes: \n', weights_initial[0]) print('weights_initial - bias node: ', weights_initial[1]) model.compile(optimizer=RMSprop(lr=0.001), loss='mean_squared_error') dfYn.shape model.set_weights(weights_initial) history = model.fit(dfXn.values, dfYn.values, epochs=5000, batch_size=dfYn.shape[0], verbose=0) plt.plot(history.history['loss']) model.get_weights() mdl.coef_ plt.scatter(model.get_weights()[0].flatten(), mdl.coef_.flatten()[1:]) fig, ax = plt.subplots(figsize=(10,10)) plt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten(), color='red', alpha=0.6, marker='o') plt.scatter(dfYn.values, model.predict(dfXn.values), color='blue', alpha=0.6, marker='+') # tf Graph Input # input data n_samples, n_features = features.shape x = tf.placeholder(tf.float32, [None, n_features], name='InputData') # output data y = tf.placeholder(tf.float32, [None, 1], name='TargetData') # Set model weights W = tf.Variable(tf.random_normal([n_features, 1]), name='Weights') b = tf.Variable(tf.zeros([1]), name='Bias') z = tf.matmul(x,W) + b cost_1 = tf.squared_difference(z,y) cost_2 = tf.reduce_mean(cost_1) learning_rate=0.1 train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2) # Construct model and encapsulating all ops into scopes, making # Tensorboard's Graph visualization more convenient #with tf.name_scope('Model'): # # 
Model # pred = tf.matmul(x, W) + b # basic linear regression #with tf.name_scope('Loss'): # # Minimize error (mean squared error) # cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1)) #with tf.name_scope('SGD'): # # Gradient Descent # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #with tf.name_scope('Accuracy'): # # Accuracy # acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # acc = tf.reduce_mean(tf.cast(acc, tf.float32)) tf.Session().run(y, feed_dict={x: features.values, y: target.values}).shape init = tf.global_variables_initializer() # Launch the graph with tf.Session() as sess: sess.run(init) z_out = sess.run(z, feed_dict={x: features.values, y:target.values}) cost_1_out = sess.run(cost_1, feed_dict={x: features.values, y:target.values}) cost_2_out = sess.run(cost_2, feed_dict={x: features.values, y:target.values}) for i in range(100): train_step_out = sess.run(train_step, feed_dict={x: features.values, y:target.values}) print(cost_1_out[0:5,:]) print(cost_2_out) print(train_step_out) x y W b """ Explanation: Linear Regression, Keras End of explanation """
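The mean-squared-error minimization that `GradientDescentOptimizer` performs above can be written out by hand in plain NumPy; on noiseless data the hand-rolled loop should land on the same coefficients as the closed-form least-squares solution:

```python
import numpy as np

X = np.c_[np.ones(50), np.linspace(-0.5, 0.5, 50)]  # intercept column + one feature
beta_true = np.array([-2.0, 4.3])
y = X @ beta_true                                    # noiseless targets

beta = np.zeros(2)
lr = 0.5
for _ in range(5000):
    grad = (2.0 / len(y)) * X.T @ (X @ beta - y)     # gradient of mean squared error
    beta -= lr * grad

beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 4))  # close to [-2.  4.3]
```

The gradient step is exactly what the TF optimizer applies to `v_W` and `v_b` each time `train_step` runs; the only differences are autodiff and the session plumbing.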
phoebe-project/phoebe2-docs
2.3/tutorials/reflection_heating.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" """ Explanation: Reflection and Heating For a comparison between "Horvat" and "Wilson" methods in the "irad_method" parameter, see the tutorial on Lambert Scattering. Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt #logger = phoebe.logger('error') b = phoebe.default_binary() """ Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation """ print(b['irrad_frac_refl_bol']) print(b['irrad_frac_lost_bol']) print(b['irrad_frac_refl_bol@primary']) print(b['irrad_frac_lost_bol@primary@component']) """ Explanation: Relevant Parameters The parameters that define reflection and heating are all prefaced by "irrad_frac" (fraction of incident flux) and suffixed by "bol" to indicate that they all refer to a bolometric (rather than passband-dependent) process. For this reason, they are not stored in the dataset, but rather directly in the component. Each of these parameters dictates how much incident flux will be handled by each of the available processes. For now these only include reflection (heating with immediate re-emission, without heat distribution) and lost flux. In the future, heating with distribution and scattering will also be supported. For each component, these parameters must add up to exactly 1.0 - and this is handled by a constraint which by default constrains the "lost" parameter. End of explanation """ b.set_value_all('irrad_frac_refl_bol', 0.9) """ Explanation: In order to see the effect of reflection, let's set "irrad_frac_refl_bol" of both of our stars to 0.9 - that is 90% of the incident flux will go towards reflection and 10% will be ignored. 
End of explanation """ print(b['irrad_method@compute']) """ Explanation: Since reflection can be a computationally expensive process and in most cases is a low-order effect, there is a switch in the compute options that needs to be enabled in order for reflection to be taken into account. If this switch is False (which it is by default), the albedos are completely ignored and will be treated as if all incident light is lost/ignored. End of explanation """ b['sma@orbit'] = 4.0 b['teff@primary'] = 10000 b['teff@secondary'] = 5000 """ Explanation: Reflection has the most noticeable effect when the two stars are close to each other and have a large temperature ratio. End of explanation """ b.add_dataset('lc', times=np.linspace(0,1,101)) """ Explanation: Influence on Light Curves (fluxes) End of explanation """ b.run_compute(irrad_method='none', ntriangles=700, model='refl_false') b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true') afig, mplfig = b.plot(show=True, legend=True) artists = plt.plot(b['value@times@refl_false'], b['value@fluxes@refl_true']-b['value@fluxes@refl_false'], 'r-') """ Explanation: Let's run models with the reflection switch both turned on and off so that we can compare the two results. We'll also override ntriangles, since the computation time depends largely on the number of surface elements. 
End of explanation """ b.add_dataset('mesh', times=[0.2], columns=['teffs', 'intensities@lc01']) b.disable_dataset('lc01') b.run_compute(irrad_method='none', ntriangles=700, model='refl_false', overwrite=True) b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true', overwrite=True) #phoebe.logger('debug') afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false', fc='intensities', ec='face', draw_sidebars=True, show=True) afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true', fc='intensities', ec='face', draw_sidebars=True, show=True) afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false', fc='teffs', ec='face', draw_sidebars=True, show=True) afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true', fc='teffs', ec='face', draw_sidebars=True, show=True) """ Explanation: Influence on Meshes (Intensities) End of explanation """
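The bolometric budget behind these runs reduces to one constraint per component: irrad_frac_refl_bol + irrad_frac_lost_bol = 1, with the "lost" parameter derived. A tiny bookkeeping sketch of that constraint outside PHOEBE (illustrative only, not the PHOEBE API):

```python
def irrad_fractions(frac_refl):
    """Split incident bolometric flux into reflected and lost fractions.

    Mirrors the PHOEBE constraint where irrad_frac_lost_bol is derived
    as 1 - irrad_frac_refl_bol.
    """
    if not 0.0 <= frac_refl <= 1.0:
        raise ValueError("reflected fraction must lie in [0, 1]")
    return {'irrad_frac_refl_bol': frac_refl,
            'irrad_frac_lost_bol': 1.0 - frac_refl}

fracs = irrad_fractions(0.9)
print(fracs)
```

Deriving one parameter from the other is what guarantees the two fractions always sum to exactly 1.0, just as the constraint in the bundle does.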
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/solutions/tfdv_basic_spending.ipynb
apache-2.0
!pip install pyarrow==5.0.0 !pip install numpy==1.19.2 !pip install tensorflow-data-validation """ Explanation: Introduction to TensorFlow Data Validation Learning Objectives Review TFDV methods Generate statistics Visualize statistics Infer a schema Update a schema Introduction This lab is an introduction to TensorFlow Data Validation (TFDV), a key component of TensorFlow Extended. This lab serves as a foundation for understanding the features of TFDV and how it can help you understand, validate, and monitor your data. TFDV can be used for generating schemas and statistics about the distribution of every feature in the dataset. Such information is useful for comparing multiple datasets (e.g. training vs inference datasets) and reporting: Statistical differences in the features distribution TFDV also offers visualization capabilities for comparing datasets based on the Google PAIR Facets project. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Import Libraries End of explanation """ import pandas as pd import tensorflow_data_validation as tfdv import sys import warnings warnings.filterwarnings('ignore') print('Installing TensorFlow Data Validation') !pip install -q tensorflow_data_validation[visualization] print('TFDV version: {}'.format(tfdv.version.__version__)) # Confirm that we're using Python 3 assert sys.version_info.major is 3, 'Oops, not running Python 3. Use Runtime > Change runtime type' """ Explanation: Restart the kernel (Kernel > Restart kernel > Restart). Re-run the above cell and proceed further. Note: Please ignore any incompatibility warnings and errors. 
End of explanation """ # TODO score_train = pd.read_csv('data/score_train.csv') score_train.head() # TODO score_test = pd.read_csv('data/score_test.csv') score_test.head() score_train.info() """ Explanation: Load the Consumer Spending Dataset We will download our dataset from Google Cloud Storage. The columns in the dataset are: 'Graduated': Whether or not the person is a college graduate 'Work Experience': The number of years in the workforce 'Family Size': The size of the family unit 'Spending Score': The spending score for consumer spending End of explanation """ # check methods present in tfdv # TODO [methods for methods in dir(tfdv)] """ Explanation: Review the methods present in TFDV End of explanation """ # Compute data statistics for the input pandas DataFrame. # TODO stats = tfdv.generate_statistics_from_dataframe(dataframe=score_train) """ Explanation: Describing data with TFDV The usual workflow when using TFDV during training is as follows: Generate statistics for the data Use those statistics to generate a schema for each feature Visualize the schema and statistics and manually inspect them Update the schema if needed Compute and visualize statistics First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings) TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation. 
NOTE: Compute statistics * tfdv.generate_statistics_from_csv * tfdv.generate_statistics_from_dataframe * tfdv.generate_statistics_from_tfrecord Generate Statistics from a Pandas DataFrame End of explanation """ # Visualize the input statistics using Facets. # TODO tfdv.visualize_statistics(stats) """ Explanation: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data: Notice that numeric features and categorical features are visualized separately, and that charts are displayed showing the distributions for each feature. Notice that features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature. Notice that there are no examples with values for pickup_census_tract. This is an opportunity for dimensionality reduction! Try clicking "expand" above the charts to change the display Try hovering over bars in the charts to display bucket ranges and counts Try switching between the log and linear scales, and notice how the log scale reveals much more detail about the payment_type categorical feature Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages End of explanation """ train_stats = tfdv.generate_statistics_from_dataframe(dataframe=score_train) test_stats = tfdv.generate_statistics_from_dataframe(dataframe=score_test) tfdv.visualize_statistics( lhs_statistics=train_stats, lhs_name='TRAIN_DATASET', rhs_statistics=test_stats, rhs_name='NEW_DATASET') """ Explanation: TFDV generates different types of statistics based on the type of features. For numerical features, TFDV computes for every feature: * Count of records * Number of missing (i.e. 
null values) * Histogram of values * Mean and standard deviation * Minimum and maximum values * Percentage of zero values For categorical features, TFDV provides: * Count of values * Percentage of missing values * Number of unique values * Average string length * Count for each label and its rank Let's compare the score_train and the score_test datasets End of explanation """ # Infers schema from the input statistics. # TODO schema = tfdv.infer_schema(statistics=stats) print(schema) """ Explanation: Infer a schema Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics. Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. Generating Schema Once statistics are generated, the next step is to generate a schema for our dataset. This schema will map each feature in the dataset to a type (float, bytes, etc.). Also define feature boundaries (min, max, distribution of values and missings, etc.). Link to infer schema https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/infer_schema With TFDV, we generate schema from statistics using End of explanation """ tfdv.display_schema(schema=schema) """ Explanation: The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it. 
End of explanation """ tfdv.display_schema(schema=schema) """ Explanation: TFDV provides an API to print a summary of each feature schema using In this visualization, the columns stand for: Type indicates the feature datatype. Presence indicates whether the feature must be present in 100% of examples (required) or not (optional). Valency indicates the number of values required per training example. Domain and Values indicate the feature domain and its values In the case of categorical features, single indicates that each training example must have exactly one category for the feature. Updating the Schema As stated above, Presence indicates whether the feature must be present in 100% of examples (required) or not (optional). Currently, all of our features except for our target label are shown as "optional". We need to make our features all required except for "Work Experience". We will need to update the schema. TFDV lets you update the schema according to your domain knowledge of the data if you are not satisfied by the auto-generated schema. We will update three use cases: making a feature required, adding a value to a feature, and changing a feature from a float to an integer. Change optional features to required. End of explanation """ # Make Graduated, Profession, and Family_Size required (present in 100% of examples) Graduated_feature = tfdv.get_feature(schema, 'Graduated') Graduated_feature.presence.min_fraction = 1.0 Profession_feature = tfdv.get_feature(schema, 'Profession') Profession_feature.presence.min_fraction = 1.0 Family_Size_feature = tfdv.get_feature(schema, 'Family_Size') Family_Size_feature.presence.min_fraction = 1.0 tfdv.display_schema(schema) """ Explanation: Update a feature with a new value Let's add "self-employed" to the Profession feature End of explanation """ Profession_domain = tfdv.get_domain(schema, 'Profession') Profession_domain.value.insert(0, 'Self-Employed') Profession_domain.value # [0 indicates I want 'Self-Employed' to come first, if the number were 3, # it would be placed after the third value. 
] """ Explanation: Update a feature with a new value Let's add "self-employed" to the Profession feature End of explanation """ Profession_domain = tfdv.get_domain(schema, 'Profession') Profession_domain.value.remove('Homemaker') Profession_domain.value """ Explanation: Let's remove "Homemaker" from "Profession" End of explanation """ # Update Family_Size to Int size = tfdv.get_feature(schema, 'Family_Size') size.type=2 tfdv.display_schema(schema) """ Explanation: Change a feature from a float to an integer End of explanation """
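What a presence constraint like `min_fraction = 1.0` actually checks can be sketched in plain Python, independent of TFDV. The helper below is hypothetical — it is not part of the TFDV API — but it mirrors the idea: the fraction of examples in which a feature is non-missing must be at least `min_fraction`.

```python
def check_presence(examples, feature, min_fraction):
    """Return True if `feature` is non-missing in at least `min_fraction`
    of the examples (a toy stand-in for a schema presence constraint)."""
    present = sum(1 for ex in examples if ex.get(feature) is not None)
    return present / len(examples) >= min_fraction

rows = [
    {"Profession": "Artist", "Family_Size": 3},
    {"Profession": None, "Family_Size": 2},        # missing Profession
    {"Profession": "Doctor", "Family_Size": 4},
    {"Profession": "Lawyer", "Family_Size": None}, # missing Family_Size
]

# With min_fraction=1.0 (required), any missing value is an anomaly.
print(check_presence(rows, "Profession", 1.0))   # False: only 3/4 present
print(check_presence(rows, "Profession", 0.75))  # True
```

In TFDV itself this bookkeeping is done for you when the statistics are validated against the schema; the sketch only illustrates the rule being enforced.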
sraejones/phys202-2015-work
days/day12/Integration.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Numerical Integration
Learning Objectives: Learn how to numerically integrate 1d and 2d functions that are represented as Python functions or numerical arrays of data using scipy.integrate.
This lesson was originally developed by Jennifer Klay under the terms of the MIT license. The original version is in this repo (https://github.com/Computing4Physics/C4P). Her materials were in turn based on content from the Computational Physics book by Mark Newman at University of Michigan, materials developed by Matt Moelter and Jodi Christiansen for PHYS 202 at Cal Poly, as well as the SciPy tutorials.
Imports
End of explanation
"""
func = lambda x: x**4 - 2*x + 1
N = 10
a = 0.0
b = 2.0
h = (b-a)/N
k = np.arange(1,N)
I = h*(0.5*func(a) + 0.5*func(b) + func(a+k*h).sum())
print(I)
"""
Explanation: Introduction
We often calculate integrals in physics (electromagnetism, thermodynamics, quantum mechanics, etc.). In calculus, you learned how to evaluate integrals analytically. Some functions are too difficult to integrate analytically and for these we need to use the computer to integrate numerically. A numerical integral goes back to the basic principles of calculus. Given a function $f(x)$, we need to find the area under the curve between two limits, $a$ and $b$:
$$ I(a,b) = \int_a^b f(x) dx $$
There is no known way to calculate such an area exactly in all cases on a computer, but we can do it approximately by dividing up the area into rectangular slices and adding them all together. Unfortunately, this is a poor approximation, since the rectangles under- and overshoot the function:
<img src="rectangles.png" width=400>
Trapezoidal Rule
A better approach, which involves very little extra work, is to divide the area into trapezoids rather than rectangles.
The area under the trapezoids is a considerably better approximation to the area under the curve, and this approach, though simple, often gives perfectly adequate results. <img src="trapz.png" width=420> We can improve the approximation by making the size of the trapezoids smaller. Suppose we divide the interval from $a$ to $b$ into $N$ slices or steps, so that each slice has width $h = (b − a)/N$ . Then the right-hand side of the $k$ th slice falls at $a+kh$, and the left-hand side falls at $a+kh−h$ = $a+(k−1)h$ . Thus the area of the trapezoid for this slice is $$ A_k = \tfrac{1}{2}h[ f(a+(k−1)h)+ f(a+kh) ] $$ This is the trapezoidal rule. It gives us a trapezoidal approximation to the area under one slice of our function. Now our approximation for the area under the whole curve is the sum of the areas of the trapezoids for all $N$ slices $$ I(a,b) \simeq \sum\limits_{k=1}^N A_k = \tfrac{1}{2}h \sum\limits_{k=1}^N [ f(a+(k−1)h)+ f(a+kh) ] = h \left[ \tfrac{1}{2}f(a) + \tfrac{1}{2}f(b) + \sum\limits_{k=1}^{N-1} f(a+kh)\right] $$ Note the structure of the formula: the quantity inside the square brackets is a sum over values of $f(x)$ measured at equally spaced points in the integration domain, and we take a half of the values at the start and end points but one times the value at all the interior points. Applying the Trapezoidal rule Use the trapezoidal rule to calculate the integral of $x^4 − 2x + 1$ from $x$ = 0 to $x$ = 2. This is an integral we can do by hand, so we can check our work. To define the function, let's use a lambda expression (you learned about these in the advanced python section of CodeCademy). It's basically just a way of defining a function of some variables in one line. 
For this case, it is just a function of x: End of explanation """ N = 10 a = 0.0 b = 2.0 h = (b-a)/N k1 = np.arange(1,N/2+1) k2 = np.arange(1,N/2) I = (1./3.)*h*(func(a) + func(b) + 4.*func(a+(2*k1-1)*h).sum() + 2.*func(a+2*k2*h).sum()) print(I) """ Explanation: The correct answer is $$ I(0,2) = \int_0^2 (x^4-2x+1)dx = \left[\tfrac{1}{5}x^5-x^2+x\right]_0^2 = 4.4 $$ So our result is off by about 2%. Simpson's Rule The trapezoidal rule estimates the area under a curve by approximating the curve with straight-line segments. We can often get a better result if we approximate the function instead with curves of some kind. Simpson's rule uses quadratic curves. In order to specify a quadratic completely one needs three points, not just two as with a straight line. So in this method we take a pair of adjacent slices and fit a quadratic through the three points that mark the boundaries of those slices. Given a function $f(x)$ and spacing between adjacent points $h$, if we fit a quadratic curve $ax^2 + bx + c$ through the points $x$ = $-h$, 0, $+h$, we get $$ f(-h) = ah^2 - bh + c, \hspace{1cm} f(0) = c, \hspace{1cm} f(h) = ah^2 +bh +c $$ Solving for $a$, $b$, and $c$ gives: $$ a = \frac{1}{h^2}\left[\tfrac{1}{2}f(-h) - f(0) + \tfrac{1}{2}f(h)\right], \hspace{1cm} b = \frac{1}{2h}\left[f(h)-f(-h)\right], \hspace{1cm} c = f(0) $$ and the area under the curve of $f(x)$ from $-h$ to $+h$ is given approximately by the area under the quadratic: $$ I(-h,h) \simeq \int_{-h}^h (ax^2+bx+c)dx = \tfrac{2}{3}ah^3 + 2ch = \tfrac{1}{3}h[f(-h)+4f(0)+f(h)] $$ This is Simpson’s rule. It gives us an approximation to the area under two adjacent slices of our function. Note that the final formula for the area involves only $h$ and the value of the function at evenly spaced points, just as with the trapezoidal rule. So to use Simpson’s rule we don’t actually have to worry about the details of fitting a quadratic—we just plug numbers into this formula and it gives us an answer. 
This makes Simpson’s rule almost as simple to use as the trapezoidal rule, and yet Simpson’s rule often gives much more accurate answers. Applying Simpson’s rule involves dividing the domain of integration into many slices and using the rule to separately estimate the area under successive pairs of slices, then adding the estimates for all pairs to get the final answer. If we are integrating from $x = a$ to $x = b$ in slices of width $h$ then Simpson’s rule gives the area under the $k$ th pair, approximately, as $$ A_k = \tfrac{1}{3}h[f(a+(2k-2)h)+4f(a+(2k-1)h) + f(a+2kh)] $$ With $N$ slices in total, there are $N/2$ pairs of slices, and the approximate value of the entire integral is given by the sum $$ I(a,b) \simeq \sum\limits_{k=1}^{N/2}A_k = \tfrac{1}{3}h\left[f(a)+f(b)+4\sum\limits_{k=1}^{N/2}f(a+(2k-1)h)+2\sum\limits_{k=1}^{N/2-1}f(a+2kh)\right] $$ Note that the total number of slices must be even for Simpson's rule to work. Applying Simpson's rule Now let's code Simpson's rule to compute the integral of the same function from before, $f(x) = x^4 - 2x + 1$ from 0 to 2. End of explanation """ import scipy.integrate as integrate integrate? """ Explanation: Adaptive methods and higher order approximations In some cases, particularly for integrands that are rapidly varying, a very large number of steps may be needed to achieve the desired accuracy, which means the calculation can become slow. So how do we choose the number $N$ of steps for our integrals? In our example calculations we just chose round numbers and looked to see if the results seemed reasonable. A more common situation is that we want to calculate the value of an integral to a given accuracy, such as four decimal places, and we would like to know how many steps will be needed. 
So long as the desired accuracy does not exceed the fundamental limit set by the machine precision of our computer— the rounding error that limits all calculations—then it should always be possible to meet our goal by using a large enough number of steps. At the same time, we want to avoid using more steps than are necessary, since more steps take more time and our calculation will be slower. Ideally we would like an $N$ that gives us the accuracy we want and no more. A simple way to achieve this is to start with a small value of $N$ and repeatedly double it until we achieve the accuracy we want. This method is an example of an adaptive integration method, one that changes its own parameters to get a desired answer. The trapezoidal rule is based on approximating an integrand $f(x)$ with straight-line segments, while Simpson’s rule uses quadratics. We can create higher-order (and hence potentially more accurate) rules by using higher-order polynomials, fitting $f(x)$ with cubics, quartics, and so forth. The general form of the trapezoidal and Simpson rules is $$ \int_a^b f(x)dx \simeq \sum\limits_{k=1}^{N}w_kf(x_k) $$ where the $x_k$ are the positions of the sample points at which we calculate the integrand and the $w_k$ are some set of weights. In the trapezoidal rule, the first and last weights are $\tfrac{1}{2}$ and the others are all 1, while in Simpson’s rule the weights are $\tfrac{1}{3}$ for the first and last slices and alternate between $\tfrac{4}{3}$ and $\tfrac{2}{3}$ for the other slices. For higher-order rules the basic form is the same: after fitting to the appropriate polynomial and integrating we end up with a set of weights that multiply the values $f(x_k)$ of the integrand at evenly spaced sample points. Notice that the trapezoidal rule is exact if the function being integrated is actually a straight line, because then the straight-line approximation isn’t an approximation at all. 
Similarly, Simpson’s rule is exact if the function being integrated is a quadratic, and so on for higher order polynomials. There are other more advanced schemes for calculating integrals that can achieve high accuracy while still arriving at an answer quickly. These typically combine the higher order polynomial approximations with adaptive methods for choosing the number of slices, in some cases allowing their sizes to vary over different regions of the integrand. One such method, called Gaussian Quadrature - after its inventor, Carl Friedrich Gauss, uses Legendre polynomials to choose the $x_k$ and $w_k$ such that we can obtain an integration rule accurate to the highest possible order of $2N−1$. It is beyond the scope of this course to derive the Gaussian quadrature method, but you can learn more about it by searching the literature. Now that we understand the basics of numerical integration and have even coded our own trapezoidal and Simpson's rules, we can feel justified in using scipy's built-in library of numerical integrators that build on these basic ideas, without coding them ourselves. scipy.integrate It is time to look at scipy's built-in functions for integrating functions numerically. Start by importing the library. End of explanation """ fun = lambda x : np.exp(-x)*np.sin(x) result,error = integrate.quad(fun, 0, 2*np.pi) print(result,error) """ Explanation: An overview of the module is provided by the help command, but it produces a lot of output. Here's a quick summary: Methods for Integrating Functions given function object. quad -- General purpose integration. dblquad -- General purpose double integration. tplquad -- General purpose triple integration. fixed_quad -- Integrate func(x) using Gaussian quadrature of order n. quadrature -- Integrate with given tolerance using Gaussian quadrature. romberg -- Integrate func using Romberg integration. Methods for Integrating Functions given fixed samples. 
trapz -- Use trapezoidal rule to compute integral from samples.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from (2**k + 1) evenly-spaced samples.
See the <code>special</code> module's orthogonal polynomials (<code>scipy.special</code>) for Gaussian quadrature roots and weights for other weighting factors and regions.
Interface to numerical integrators of ODE systems.
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.
General integration (quad)
The scipy function quad is provided to integrate a function of one variable between two points. The points can be $\pm\infty$ ($\pm$ np.infty) to indicate infinite limits. For example, suppose you wish to integrate the following:
$$ I = \int_0^{2\pi} e^{-x}\sin(x)dx $$
This could be computed using quad as:
End of explanation
"""
fun = lambda x : np.exp(-x)*np.sin(x)
result,error = integrate.quad(fun, 0, 2*np.pi)
print(result,error)
"""
Explanation: The first argument to quad is a "callable" Python object (i.e. a function, method, or class instance). Notice that we used a lambda function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. The analytic solution to the integral is
$$ \int_0^{2\pi} e^{-x} \sin(x) dx = \frac{1}{2}\left(1 - e^{-2\pi}\right) \simeq \textrm{0.499066} $$
so that is pretty good. Here it is again, integrated from 0 to infinity:
End of explanation
"""
I = integrate.quad(fun, 0, np.infty)
print(I)
"""
Explanation: In this case the analytic solution is exactly 1/2, so again pretty good.
We can calculate the error in the result by looking at the difference between the exact result and the numerical value from quad with End of explanation """ x = np.arange(0, 20, 2) y = np.array([0, 3, 5, 2, 8, 9, 0, -3, 4, 9], dtype = float) plt.plot(x,y) plt.xlabel('x') plt.ylabel('y') #Show the integration area as a filled region plt.fill_between(x, y, y2=0,color='red',hatch='//',alpha=0.2); I = integrate.simps(y,x) print(I) """ Explanation: In this case, the numerically-computed integral is within $10^{-16}$ of the exact result — well below the reported error bound. Integrating array data When you want to compute the integral for an array of data (such as our thermistor resistance-temperature data from the Interpolation lesson), you don't have the luxury of varying your choice of $N$, the number of slices (unless you create an interpolated approximation to your data). There are three functions for computing integrals given only samples: trapz , simps, and romb. The trapezoidal rule approximates the function as a straight line between adjacent points while Simpson’s rule approximates the function between three adjacent points as a parabola, as we have already seen. The first two functions can also handle non-equally-spaced samples (something we did not code ourselves) which is a useful extension to these integration rules. If the samples are equally-spaced and the number of samples available is $2^k+1$ for some integer $k$, then Romberg integration can be used to obtain high-precision estimates of the integral using the available samples. Romberg integration is an adaptive method that uses the trapezoid rule at step-sizes related by a power of two and then performs something called Richardson extrapolation on these estimates to approximate the integral with a higher-degree of accuracy. (A different interface to Romberg integration useful when the function can be provided is also available as romberg). 
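The adaptive strategy described earlier — start with a small number of slices and keep doubling it until the answer stops changing — can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not scipy's implementation:

```python
def adaptive_trapz(f, a, b, tol=1e-8, max_doublings=25):
    """Trapezoidal rule that keeps doubling the number of slices until the
    estimate changes by less than tol between successive refinements."""
    n = 1
    h = b - a
    estimate = 0.5 * h * (f(a) + f(b))
    for _ in range(max_doublings):
        n *= 2
        h *= 0.5
        # Only the new midpoints need evaluating; the old points are reused
        # via the identity T_{2n} = T_n / 2 + h_new * (sum over new midpoints).
        midpoints = sum(f(a + k * h) for k in range(1, n, 2))
        new_estimate = 0.5 * estimate + h * midpoints
        if abs(new_estimate - estimate) < tol:
            return new_estimate
        estimate = new_estimate
    return estimate

# Same integrand as before; the exact answer is 4.4
print(adaptive_trapz(lambda x: x**4 - 2*x + 1, 0.0, 2.0))
```

Reusing the previously evaluated points when doubling is what makes this approach cheap: each refinement only costs the new midpoints.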
Applying simps to array data Here is an example of using simps to compute the integral for some discrete data: End of explanation """ from scipy.integrate import dblquad #NOTE: the order of arguments matters - inner to outer integrand = lambda x,y: y * np.sin(x) + x * np.cos(y) ymin = 0 ymax = np.pi #The callable functions for the x limits are just constants in this case: xmin = lambda y : np.pi xmax = lambda y : 2*np.pi #See the help for correct order of limits I, err = dblquad(integrand, ymin, ymax, xmin, xmax) print(I, err) dblquad? """ Explanation: Multiple Integrals Multiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad and tplquad. The function dblquad performs double integration. Use the help function to be sure that you define the arguments in the correct order. The limits on all inner integrals are actually functions (which can be constant). Double integrals using dblquad Suppose we want to integrate $f(x,y)=y\sin(x)+x\cos(y)$ over $\pi \le x \le 2\pi$ and $0 \le y \le \pi$: $$\int_{x=\pi}^{2\pi}\int_{y=0}^{\pi} y \sin(x) + x \cos(y) dxdy$$ To use dblquad we have to provide callable functions for the range of the x-variable. Although here they are constants, the use of functions for the limits enables freedom to integrate over non-constant limits. In this case we create trivial lambda functions that return the constants. Note the order of the arguments in the integrand. If you put them in the wrong order you will get the wrong answer. 
End of explanation """ from scipy.integrate import tplquad #AGAIN: the order of arguments matters - inner to outer integrand = lambda x,y,z: y * np.sin(x) + z * np.cos(x) zmin = -1 zmax = 1 ymin = lambda z: 0 ymax = lambda z: 1 #Note the order of these arguments: xmin = lambda y,z: 0 xmax = lambda y,z: np.pi #Here the order of limits is outer to inner I, err = tplquad(integrand, zmin, zmax, ymin, ymax, xmin, xmax) print(I, err) """ Explanation: Triple integrals using tplquad We can also numerically evaluate a triple integral: $$ \int_{x=0}^{\pi}\int_{y=0}^{1}\int_{z=-1}^{1} y\sin(x)+z\cos(x) dxdydz$$ End of explanation """
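Before leaving integration, the accuracy difference between the two basic rules discussed earlier can be checked empirically. The standard error scalings (trapezoidal error shrinking like $h^2$, Simpson like $h^4$ — standard results, not derived above) show up directly if we halve the step size. A plain-Python sketch, no scipy required:

```python
def trapz_rule(f, a, b, n):
    """Trapezoidal rule with n slices."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))

def simpson_rule(f, a, b, n):
    """Simpson's rule with n slices (n must be even)."""
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (h / 3.0) * (f(a) + f(b) + 4.0 * odd + 2.0 * even)

f = lambda x: x**4 - 2 * x + 1   # exact integral on [0, 2] is 4.4
for n in (10, 20, 40):
    e_t = abs(trapz_rule(f, 0.0, 2.0, n) - 4.4)
    e_s = abs(simpson_rule(f, 0.0, 2.0, n) - 4.4)
    print(n, e_t, e_s)
# Halving h should cut the trapezoidal error by ~4 and the Simpson error by ~16.
```

The n=10 rows reproduce the values computed by hand earlier in the notebook (about 0.107 and 0.0004 respectively).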
ajdawson/python_for_climate_scientists
course_content/notebooks/cis_introduction.ipynb
gpl-3.0
# Ensure I don't use any local plugins. Set it to a readable folder with no Python files to avoid warnings.
%env CIS_PLUGIN_HOME=/Users/watson-parris/Pictures

from cis import read_data, read_data_list, get_variables
get_variables('../resources/WorkshopData2016/Aeronet/920801_150530_Brussels.lev20')
aeronet_aot_500 = read_data("../resources/WorkshopData2016/Aeronet/920801_150530_Brussels.lev20", "AOT_500")
print(aeronet_aot_500)
aeronet_aot_500.name()
"""
Explanation: CIS Introduction
CIS has its own version of the Iris Cube, but it's designed to work with any observational data. The CIS data structure is just called UngriddedData:
<img src="../images/ungridded_data.png" width="640"/>
First unzip your example data to a folder you can easily find. Be sure to activate your conda environment:
$ source activate python_course
End of explanation
"""
aeronet_aot_500.plot()

ax = aeronet_aot_500.plot(color='red')
ax.set_yscale('log')

aeronet_aot = read_data_list("../resources/WorkshopData2016/Aeronet/920801_150530_Brussels.lev20",
                             ['AOT_500', 'AOT_675'])

ax = aeronet_aot.plot()
ax.set_title('Brussels Aeronet AOT')
ax.set_xlabel('Date')

from datetime import datetime
ax.set_xlim(datetime(2007,5,5), datetime(2007,8,26))
"""
Explanation: Plotting Ungridded time series
End of explanation
"""
model_aod = read_data("../resources/WorkshopData2016/od550aer.nc", "od550aer")
print(model_aod)

maod_global_mean, maod_std_dev, _ = model_aod.collapsed(['x', 'y'])
print(maod_global_mean)

ax = maod_global_mean.plot(itemwidth=2)
aeronet_aot_500.plot(ax=ax)

# This is a nice snippet which adds error bounds on the plot above
from cis.time_util import convert_std_time_to_datetime
ax.fill_between(convert_std_time_to_datetime(maod_global_mean.coord('time').points),
                maod_global_mean.data-maod_std_dev.data,
                maod_global_mean.data+maod_std_dev.data, alpha=0.3)
"""
Explanation: Model time series
End of explanation
"""
model_aod_annual_mean = model_aod.collapsed('t')

model_aod_annual_mean[0].plot()
model_aod_annual_mean[0].plot(central_longitude=180) """ Explanation: Global model plot End of explanation """ number_concentration = read_data('../resources/WorkshopData2016/ARCPAC_2008', 'NUMBER_CONCENTRATION') print(number_concentration) ax = number_concentration.plot() ax.bluemarble() """ Explanation: Lat/lon aircraft plots End of explanation """ aerosol_cci = read_data('../resources/WorkshopData2016/AerosolCCI', 'AOD550') aerosol_cci.plot() aerosol_cci_one_day = read_data('../resources/WorkshopData2016/AerosolCCI/20080415*.nc', 'AOD550') ax = aerosol_cci_one_day.plot() aerosol_cci_one_day.plot(projection='Orthographic') ax=aerosol_cci_one_day.plot(projection='InterruptedGoodeHomolosine') ax.bluemarble() """ Explanation: Satellite plots End of explanation """ # Lets take a closer look at the model data print(model_aod) from cis.time_util import PartialDateTime # First subset the aeronet data: aeronet_aot_2008 = aeronet_aot_500.subset(t=PartialDateTime(2008)) """ Explanation: Exercises 1. Try plotting AOT_500 against AOT_675 from the Aeronet file using a comparative scatter plot 2. Subset the 5 days of satellite data down to the region covered by the aircraft data, then plot it. Collocation <img src="../images/collocation_options.png" width="640"/> Model onto Aeronet <img src="../images/model_onto_aeronet.png" width="640"/> This is an gridded onto un-gridded collocation and can be done using either linear interpolation or nearest neighbour. This is very quick and in general CIS can even handle hybrid height coordinates: <img src="../images/gridded_ungridded_collocation.png" width="640"/> End of explanation """ # Now do the collocation: model_aod_onto_aeronet = model_aod.collocated_onto(aeronet_aot_2008) print(model_aod_onto_aeronet[0]) """ Explanation: Note that we don’t actually have to do this subsetting, but that otherwise CIS will interpolate the nearest values, which in this case we don’t really want. 
End of explanation """ from cis.plotting.plot import multilayer_plot, taylor_plot ax = multilayer_plot([model_aod_onto_aeronet[0], aeronet_aot_2008], layer_opts=[dict(label='Model'), dict(label='Aeronet')], xaxis='time', itemwidth=1) taylor_plot([aeronet_aot_2008, model_aod_onto_aeronet[0]], layer_opts=[dict(label='Aeronet'),dict(label='Model')]) # Basic maths on the data print(model_aod_onto_aeronet[0] - aeronet_aot_2008) """ Explanation: Note the updated history End of explanation """ # Read all of the AOD satelite variables aerosol_cci = read_data_list('../resources/WorkshopData2016/AerosolCCI', 'AOD*0') aoerosol_cci_Alaska = aerosol_cci.subset(x=[-170,-100],y=[35,80]) aoerosol_cci_Alaska[0].plot(yaxis='latitude') aerosol_cci_collocated = aoerosol_cci_Alaska.collocated_onto(number_concentration, h_sep=10, t_sep='P1D') aerosol_cci_collocated.append(number_concentration) print(aerosol_cci_collocated) aerosol_cci_collocated = aerosol_cci_collocated[::3] aerosol_cci_collocated[:2].plot('comparativescatter') """ Explanation: Aircraft onto satellite <img src="../images/aircraft_onto_satellite.png" width="640"/> As you can see the difficulty here is the sparseness of the aircraft data, and actually of the satellite data in this region. This is an ungridded to ungridded collocation: <img src="../images/ungridded_ungridded_collocation.png" width="640" /> End of explanation """ df = aerosol_cci_collocated.as_data_frame() print(df) df.corr() # Then do a pretty plot of it... # This is a nice segway into the Pandas lesson. # Save the collocation output so that we can come back to it during the Pandas tutorial. aerosol_cci_collocated.save_data('col_output.nc') """ Explanation: Exercises 1. How does the correlation change if we only include those average number concentrations which averaged more than one point? 2. Consider the case of comparing our model AOD with the AerosolCCI. a. What strategies could you employ? b. 
Perform an initial assesment of the model AOD field using the Aerosol CCI data for the few days we have data. CIS and Pandas End of explanation """
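The nearest-neighbour matching at the heart of the ungridded collocation performed above can be illustrated with a toy plain-Python sketch. This is conceptual only — CIS's real collocators work on geographic coordinates with efficient spatial indexing and constraints such as `h_sep`/`t_sep` — and the helper name here is made up:

```python
def nearest_neighbour_collocate(sample_points, data_points, values, max_sep=None):
    """For each sample point, take the value of the nearest data point.
    Points are plain numbers here (think of them as times); max_sep mimics
    a separation constraint like CIS's h_sep/t_sep."""
    out = []
    for s in sample_points:
        dist, val = min((abs(d - s), v) for d, v in zip(data_points, values))
        out.append(val if max_sep is None or dist <= max_sep else None)
    return out

# Model output every 6 "hours", observations at irregular times:
model_times = [0, 6, 12, 18]
model_vals = [0.10, 0.25, 0.40, 0.15]
obs_times = [1, 7, 23]

print(nearest_neighbour_collocate(obs_times, model_times, model_vals))
# -> [0.1, 0.25, 0.15]
print(nearest_neighbour_collocate(obs_times, model_times, model_vals, max_sep=2))
# -> [0.1, 0.25, None]  (the last observation is 5 "hours" from any model point)
```

The `None` in the constrained case corresponds to the masked/missing values you see in real collocation output when no data point is close enough.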
geektoni/shogun
doc/ipython-notebooks/neuralnets/neuralnets_digits.ipynb
bsd-3-clause
import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import loadmat import shogun as sg import numpy as np import matplotlib.pyplot as plt import matplotlib %matplotlib inline # load the dataset dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = dataset['data'] # the usps dataset has the digits labeled from 1 to 10 # we'll subtract 1 to make them in the 0-9 range instead Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1 # 1000 examples for training Xtrain = sg.create_features(Xall[:,0:1000]) Ytrain = sg.create_labels(Yall[0:1000]) # 4000 examples for validation Xval = sg.create_features(Xall[:,1001:5001]) Yval = sg.create_labels(Yall[1001:5001]) # the rest for testing Xtest = sg.create_features(Xall[:,5002:-1]) Ytest = sg.create_labels(Yall[5002:-1]) """ Explanation: Neural Nets for Digit Classification by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn This notebook illustrates how to use the NeuralNets module to teach a neural network to recognize digits. It also explores the different optimization and regularization methods supported by the module. Convolutional neural networks are also discussed. Introduction An Artificial Neural Network is a machine learning model that is inspired by the way biological nervous systems, such as the brain, process information. The building block of neural networks is called a neuron. All a neuron does is take a weighted sum of its inputs and pass it through some non-linear function (activation function) to produce its output. A (feed-forward) neural network is a bunch of neurons arranged in layers, where each neuron in layer i takes its input from all the neurons in layer i-1. For more information on how neural networks work, follow this link. 
In this notebook, we'll look at how a neural network can be used to recognize digits. We'll train the network on the USPS dataset of handwritten digits. We'll start by loading the data and dividing it into a training set, a validation set, and a test set. The USPS dataset has 9298 examples of handwritten digits. We'll intentionally use just a small portion (1000 examples) of the dataset for training . This is to keep training time small and to illustrate the effects of different regularization methods. End of explanation """ # create the networks net_no_reg = sg.create_machine("NeuralNetwork") net_no_reg.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256)) net_no_reg.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256)) net_no_reg.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128)) net_no_reg.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10)) net_l2 = sg.create_machine("NeuralNetwork") net_l2.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256)) net_l2.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256)) net_l2.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128)) net_l2.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10)) net_l1 = sg.create_machine("NeuralNetwork") net_l1.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256)) net_l1.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256)) net_l1.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128)) net_l1.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10)) net_dropout = sg.create_machine("NeuralNetwork") net_dropout.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256)) net_dropout.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256)) net_dropout.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128)) net_dropout.add("layers", sg.create_layer("NeuralSoftmaxLayer", 
num_neurons=10))
"""
Explanation: Creating the network
To create a neural network in shogun, we'll first create an instance of NeuralNetwork and then initialize it by telling it how many inputs it has and what type of layers it contains. To specify the layers of the network a DynamicObjectArray is used. The array contains instances of NeuralLayer-based classes that determine the type of neurons each layer consists of. Some of the supported layer types are: NeuralLinearLayer, NeuralLogisticLayer and NeuralSoftmaxLayer.
We'll create a feed-forward, fully connected (every neuron is connected to all neurons in the layer below) neural network with 2 logistic hidden layers and a softmax output layer. The network will have 256 inputs, one for each pixel (16*16 image). The first hidden layer will have 256 neurons, the second will have 128 neurons, and the output layer will have 10 neurons, one for each digit class.
Note that we're using a big network, compared with the size of the training set. This is to emphasize the effects of different regularization methods. We'll try training the network with:
No regularization
L2 regularization
L1 regularization
Dropout regularization
Therefore, we'll create 4 versions of the network, train each one of them differently, and then compare the results on the validation set.
End of explanation
"""
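Conceptually, each of these fully connected layers computes, for every neuron, a weighted sum of all its inputs followed by the layer's activation — e.g. the logistic function $\sigma(x) = 1/(1+e^{-x})$ for a NeuralLogisticLayer. A tiny plain-Python sketch of one such layer (illustrative only, not Shogun code):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_layer(inputs, weights, biases):
    """One fully connected logistic layer: each neuron takes a weighted sum
    of ALL inputs, adds its bias, and applies the logistic function."""
    return [logistic(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A tiny layer: 3 inputs -> 2 neurons
inputs = [0.5, -1.0, 2.0]
weights = [[0.1, 0.2, 0.3],    # incoming weights of neuron 0
           [-0.4, 0.0, 0.25]]  # incoming weights of neuron 1
biases = [0.0, 0.1]

print(logistic_layer(inputs, weights, biases))
```

A softmax output layer works the same way except that the outputs are exponentiated and normalized to sum to one, so they can be read as class probabilities.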
End of explanation """ # import networkx, install if necessary try: import networkx as nx except ImportError: import pip %pip install --user networkx import networkx as nx G = nx.DiGraph() pos = {} for i in range(8): pos['X'+str(i)] = (i,0) # 8 neurons in the input layer pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer for j in range(8): G.add_edge('X'+str(j),'H'+str(i)) if i<4: pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer for j in range(8): G.add_edge('H'+str(j),'U'+str(i)) if i<6: pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer for j in range(4): G.add_edge('U'+str(j),'Y'+str(i)) nx.draw(G, pos, node_color='y', node_size=750) """ Explanation: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it. End of explanation """ def compute_accuracy(net, X, Y): predictions = net.apply_multiclass(X) evaluator = sg.create_evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(predictions, Y) return accuracy*100 """ Explanation: Training NeuralNetwork supports two methods for training: LBFGS (default) and mini-batch gradient descent. LBFGS is a full-batch optimization methods, it looks at the entire training set each time before it changes the network's parameters. This makes it slow with large datasets. However, it works very well with small/medium size datasets and is very easy to use as it requires no parameter tuning. Mini-batch Gradient Descent looks at only a small portion of the training set (a mini-batch) before each step, which it makes it suitable for large datasets. 
However, it's a bit harder to use than LBFGS because it requires some tuning for its parameters (learning rate, learning rate decay, ...).
Training in NeuralNetwork stops when:
Number of epochs (iterations over the entire training set) exceeds max_num_epochs
The (percentage) difference in error between the current and previous iterations is smaller than epsilon, i.e. the error is no longer being reduced by training
To see all the options supported for training, check the documentation
We'll first write a small function to calculate the classification accuracy on the validation set, so that we can compare different models:
End of explanation
"""
net_no_reg.put('epsilon', 1e-6)
net_no_reg.put('max_num_epochs', 600)
net_no_reg.put('seed', 10)

# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; net_no_reg.io.put('loglevel', MSG_INFO)

net_no_reg.put('labels', Ytrain)
net_no_reg.train(Xtrain) # this might take a while, depending on your machine

# compute accuracy on the validation set
print("Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%")
"""
Explanation: Training without regularization
We'll start by training the first network without regularization using LBFGS optimization. Note that LBFGS is suitable because we're using a small dataset.
End of explanation
"""
# turn on L2 regularization
net_l2.put('l2_coefficient', 3e-4)

net_l2.put('epsilon', 1e-6)
net_l2.put('max_num_epochs', 600)
net_l2.put('seed', 10)

net_l2.put('labels', Ytrain)
net_l2.train(Xtrain) # this might take a while, depending on your machine

# compute accuracy on the validation set
print("With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%")
"""
Explanation: Training with L2 regularization
We'll train another network, but with L2 regularization. This type of regularization attempts to prevent overfitting by penalizing large weights.
This is done by adding $\frac{1}{2} \lambda \Vert W \Vert_2^2$ to the optimization objective that the network tries to minimize, where $\lambda$ is the regularization coefficient.
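A single gradient step makes the effect concrete: the gradient of the squared penalty $\frac{1}{2} \lambda \Vert W \Vert_2^2$ is $\lambda W$, so each update shrinks the weights by a constant factor before the data gradient is applied, which is why L2 regularization is also known as weight decay. A small NumPy check with illustrative numbers, unrelated to the network above:

```python
import numpy as np

lam, lr = 0.1, 0.5              # regularization coefficient and learning rate
W = np.array([2.0, -1.0, 0.5])
data_grad = np.zeros(3)         # pretend the data term is already at a minimum

# d/dW [ (1/2) * lam * ||W||^2 ] = lam * W, so one gradient step multiplies
# the weights by (1 - lr * lam): a uniform shrinkage toward zero.
W_new = W - lr * (data_grad + lam * W)
print(W_new)
```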
When used on neurons in the hidden layers, it has the effect of forcing each neuron to learn to extract features that are useful in any context, regardless of what the other hidden neurons in its layer decide to do. Dropout can also be used on the inputs to the network by randomly omitting a small fraction of them during each iteration. When using dropout, it's usually useful to limit the L2 norm of a neuron's incoming weights vector to some constant value. Due to the stochastic nature of dropout, LBFGS optimization doesn't work well with it, therefore we'll use mini-batch gradient descent instead. End of explanation """ # prepere the layers net_conv = sg.create_machine("NeuralNetwork") # input layer, a 16x16 image single channel image net_conv.add("layers", sg.create_layer("NeuralInputLayer", width=16, height=16, num_neurons=256)) # the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps net_conv.add("layers", sg.create_layer("NeuralConvolutionalLayer", activation_function="CMAF_RECTIFIED_LINEAR", num_maps=10, radius_x=2, radius_y=2, pooling_width=2, pooling_height=2)) # the first convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps net_conv.add("layers", sg.create_layer("NeuralConvolutionalLayer", activation_function="CMAF_RECTIFIED_LINEAR", num_maps=15, radius_x=2, radius_y=2, pooling_width=2, pooling_height=2)) # output layer net_conv.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10)) """ Explanation: Convolutional Neural Networks Now we'll look at a different type of network, namely convolutional neural networks. A convolutional net operates on two principles: Local connectivity: Convolutional nets work with inputs that have some sort of spacial structure, where the order of the inputs features matter, i.e images. 
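The mechanism itself is tiny: during each training iteration a random binary mask zeroes some activations, and the survivors are rescaled so that the expected activation is unchanged. The sketch below uses the common "inverted dropout" scaling, which may differ in detail from Shogun's internals:

```python
import numpy as np

def dropout(activations, p_drop, rng):
    """Zero each unit independently with probability p_drop.

    Surviving activations are scaled by 1 / (1 - p_drop) so that the
    expected value of each unit matches the no-dropout forward pass.
    """
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones(10000)                      # a layer of hidden activations
h_drop = dropout(h, p_drop=0.5, rng=rng)
print(h_drop.mean())                    # close to the original mean of 1.0
```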
Local connectivity means that each neuron will be connected only to a small neighbourhood of pixels. Weight sharing: Different neurons use the same set of weights. This greatly reduces the number of free parameters, and therefore makes the optimization process easier and acts as a good regularizer. With that in mind, each layer in a convolutional network consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. The convolution operation satisfies the local connectivity and the weight sharing constraints. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. This adds some translation invarience and improves the performance. Convolutional nets in Shogun are handled through the NeuralNetwork class along with the NeuralConvolutionalLayer class. A NeuralConvolutionalLayer represents a convolutional layer with multiple feature maps, optional max-pooling, and support for different types of activation functions Now we'll creates a convolutional neural network with two convolutional layers and a softmax output layer. 
We'll use the rectified linear activation function for the convolutional layers: End of explanation """ # 50% dropout in the input layer net_conv.put('dropout_input', 0.5) # max-norm regularization net_conv.put('max_norm', 1.0) # set gradient descent parameters net_conv.put('optimization_method', "NNOM_GRADIENT_DESCENT") net_conv.put('gd_learning_rate', 0.01) net_conv.put('gd_mini_batch_size', 100) net_conv.put('epsilon', 0.0) net_conv.put('max_num_epochs', 100) net_conv.put("seed", 10) # start training net_conv.put('labels', Ytrain) net_conv.train(Xtrain) # compute accuracy on the validation set print("With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%") """ Explanation: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization: End of explanation """ print("Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%") """ Explanation: Evaluation According the accuracy on the validation set, the convolutional network works best in out case. Now we'll measure its performance on the test set: End of explanation """ predictions = net_conv.apply_multiclass(Xtest) _=plt.figure(figsize=(10,12)) # plot some images, with the predicted label as the title of each image # this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg for i in range(100): ax=plt.subplot(10,10,i+1) plt.title(int(predictions.get("labels")[i])) ax.imshow(Xtest.get("feature_matrix")[:,i].reshape((16,16)), interpolation='nearest', cmap = matplotlib.cm.Greys_r) ax.set_xticks([]) ax.set_yticks([]) """ Explanation: We can also look at some of the images and the network's response to each of them: End of explanation """
1+1 """ Explanation: << nazaj: Predgovor Uvod Preden se lotimo trenja matematičnih orehov s kladivom imenovanim Python, si moramo pripraviti primerno okolje. Dokumenti so napisani v obliki Jupyter notebook, ki je interaktivno okolje za Python, v katerem lahko združujemo programsko kodo in besedilo. Dokumente lahko prenesete na svoj računalnik, kodo po želji spreminjate in tako spoznate, kaj koda počne in kako. Zato potrebujete nameščen Jupyter in seveda Python. Dokumente lahko poganjate tudi na oblaku SageMathCloud, ne da bi karkoli namestili na svoj računalnik, vendar je hitrost delovanja omejena. Kodo seveda lahko tudi skopirate v kakšno drugo okolje za Python in jo poganjate neodvisno od okolja Jupyter. Kakorkoli, kodo spreminjajte, eksperimentirajte in poskusite naloge rešiti na različne načine. 1. naloga: namestite jupyter Namestite si jupyter. Linux Navodila so napisana za distribucije, ki temeljijo na distribuciji Debian. Najprej namestite pip za Python verzije 3, tako da v ukazni vrstici terminala napišite sudo apt-get install pip3 Nato namestite paket jupyter-notebook z ukazom pip3 install jupyter-notebook Windows in Mac Sledite navodilom za namestitev. 2. naloga: Preiskusite jupyter Preiskusite jupyter, tako da poženete ukaz jupyter notebook v ukazni vrstici. Odprlo se vam bo novo okno brskalnika, v katerem je seznam datotek v trenutnem imeniku. V meniju desno zgoraj izberite New -&gt; Notebooks -&gt; Python3 in preiskusite Python s preporstimi ukazi, npr. 1+1, print("Hello!"), .... Kodo v celici poženete s kombinacijo tipk Shift+Enter. End of explanation """ ?print import time tekst = "Matematika in Python" for znak in tekst: time.sleep(0.2) print(znak,end='') """ Explanation: 3. naloga: Prenesite zbirko na svoj računalnik Celotno zbirko prenesite v obliki arhiva zip na svoj računalnik. 
Arhiv razširite in nato v novo razpakiranem direktoriju poženite Jupyter cd matpy-master jupyter notebook V oknu brskalnika, ki se odpre, izberite 00_uvod.ipynb in poskusite, kaj izvede naslednja koda End of explanation """ import disqus %reload_ext disqus %disqus matpy """ Explanation: Polje za diskusijo Naslednje kode ni treba poganjati, saj je namenjena le temu, da se na koncu zvezka prikaže polje za diskusijo. Komentarji so dobrodošli. End of explanation """
import pandas as pd import numpy as np import matplotlib.pyplot as plt """ Explanation: Pandas pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal. Source: http://pandas.pydata.org/pandas-docs/stable/ 10 Minutes to pandas This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook The original document is posted at http://pandas.pydata.org/pandas-docs/stable/10min.html Customarily, we import as follows: End of explanation """ s = pd.Series([1,3,5,np.nan,6,8]) s """ Explanation: Object Creation See the Data Structure Intro section Creating a Series by passing a list of values, letting pandas create a default integer index: End of explanation """ dates = pd.date_range('20130101', periods=6) dates df = pd.DataFrame(np.random.randn(6,4), index=dates, columns = ['Ann', "Bob", "Charly", "Don"]) ## columns=list('ABCD')) df df.Charly+df.Don df2 = pd.DataFrame({ 'A' : 1., ....: 'B' : pd.Timestamp('20130102'), ....: 'C' : pd.Series(1,index=list(range(4)),dtype='float32'), ....: 'D' : np.array([3] * 4,dtype='int32'), ....: 'E' : pd.Categorical(["test","train","test","train"]), ....: 'F' : 'foo' }) ....: df2 """ Explanation: Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns: End of explanation """ df2.dtypes """ Explanation: Having specific dtypes End of explanation """ df.head() df.tail(3) df.index df.columns df.values df.describe() df2.columns df2.dtypes df2.describe() """ Explanation: If you’re using IPython, tab completion for column names (as well as 
public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed: <pre> In [13]: df2.&lt;TAB&gt; df2.A df2.boxplot df2.abs df2.C df2.add df2.clip df2.add_prefix df2.clip_lower df2.add_suffix df2.clip_upper df2.align df2.columns df2.all df2.combine df2.any df2.combineAdd df2.append df2.combine_first df2.apply df2.combineMult df2.applymap df2.compound df2.as_blocks df2.consolidate df2.asfreq df2.convert_objects df2.as_matrix df2.copy df2.astype df2.corr df2.at df2.corrwith df2.at_time df2.count df2.axes df2.cov df2.B df2.cummax df2.between_time df2.cummin df2.bfill df2.cumprod df2.blocks df2.cumsum df2.bool df2.D </pre> As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity. Viewing Data See the Basics section See the top & bottom rows of the frame End of explanation """ df.T """ Explanation: Transposing your data End of explanation """ df.sort_index(axis=1, ascending=False) """ Explanation: Sorting by an axis End of explanation """ df.sort_values(by='Bob', ascending=True) """ Explanation: Sorting by values End of explanation """ df['Ann'] """ Explanation: Selection Note: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc, .iloc and .ix. See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing Getting Selecting a single column, which yields a Series, equivalent to df.A End of explanation """ df[1:3] df['20130102':'20130104'] """ Explanation: Selecting via [], which slices the rows. 
End of explanation """ dates[0] df.loc[dates[0]] """ Explanation: Selection by Label See more in Selection by Label For getting a cross section using a label End of explanation """ df.loc[:,['Ann','Bob']] """ Explanation: Selecting on a multi-axis by label End of explanation """ df.loc['20130102':'20130104',['A','B']] """ Explanation: Showing label slicing, both endpoints are included End of explanation """ df.loc['20130102',['A','B']] """ Explanation: Reduction in the dimensions of the returned object End of explanation """ df.loc[dates[0],'A'] """ Explanation: For getting a scalar value End of explanation """ df.at[dates[0],'A'] """ Explanation: For getting fast access to a scalar (equiv to the prior method) End of explanation """ df.iloc[3] """ Explanation: Selection by Position See more in Selection by Position Select via the position of the passed integers End of explanation """ df.iloc[3:5,0:2] """ Explanation: By integer slices, acting similar to numpy/python End of explanation """ df.iloc[[1,2,4],[0,2]] """ Explanation: By lists of integer position locations, similar to the numpy/python style End of explanation """ df.iloc[1:3,:] """ Explanation: For slicing rows explicitly End of explanation """ df.iloc[:,1:3] """ Explanation: For slicing columns explicitly End of explanation """ In [37]: df.iloc[1,1] Out[37]: -0.17321464905330858 For getting fast access to a scalar (equiv to the prior method) In [38]: df.iat[1,1] Out[38]: -0.17321464905330858 """ Explanation: For getting a value explicitly End of explanation """ flt = (df.Ann >= 0.5) & (df.Ann < 1.5) df[flt] In [39]: df[(df.Ann >= 0.5) & (df.Ann < 1.5)] """ Explanation: Boolean Indexing Using a single column’s values to select data. End of explanation """ df[df > 0] """ Explanation: A where operation for getting. 
End of explanation """ df2 = df.copy() df2['E'] = ['one', 'one','two','three','four','three'] df2 df2[df2['E'].isin(['two','four'])] """ Explanation: Using the isin() method for filtering: End of explanation """ s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6)) s1 df['F'] = s1 df['G'] = df['Ann']-df['Bob'] df df.G """ Explanation: Setting Setting a new column automatically aligns the data by the indexes End of explanation """ df.at[dates[0],'Ann'] = 17.6 df """ Explanation: Setting values by label End of explanation """ df.iat[5,2] = 349 df """ Explanation: Setting values by position End of explanation """ df.loc[:,'D'] = np.array([5] * len(df)) """ Explanation: Setting by assigning with a numpy array End of explanation """ df """ Explanation: The result of the prior setting operations End of explanation """ df2 = df.copy() df2[df2 > 0] = -df2 df2 """ Explanation: A where operation with setting. End of explanation """ df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E']) df1.loc[dates[0]:dates[1],'E'] = 1 df1 """ Explanation: Missing Data pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data. End of explanation """ df1.dropna(how='any') """ Explanation: To drop any rows that have missing data. End of explanation """ df1 df1.fillna(value=5) """ Explanation: Filling missing data End of explanation """ pd.isnull(df1) """ Explanation: To get the boolean mask where values are nan End of explanation """ df.mean(0) """ Explanation: Operations See the Basic section on Binary Ops Stats Operations in general exclude missing data. 
Performing a descriptive statistic End of explanation """ df.mean(1) """ Explanation: Same operation on the other axis End of explanation """ s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2) s df.sub(s, axis='index') """ Explanation: Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension. End of explanation """ df df.apply(np.cumsum) df df.max() ##df.apply(max) df.Ann.max()-df.Ann.min() df.apply(lambda x: x.max() - x.min()) """ Explanation: Apply Applying functions to the data End of explanation """ s = pd.Series(np.random.randint(0, 7, size=10)) s s.value_counts() """ Explanation: Histogramming See more at Histogramming and Discretization End of explanation """ s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat']) s.str.lower() """ Explanation: Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods. End of explanation """ df = pd.DataFrame(np.random.randn(10, 4)) df # break it into pieces pieces = [df[:3], df[3:7], df[7:]] pd.concat(pieces) """ Explanation: Merge Concat pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. 
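That missing values are excluded by default is easy to verify on a small Series (pass skipna=False to turn the behaviour off):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.mean())                  # NaN is skipped: (1 + 3) / 2
print(s.mean(skipna=False))      # the NaN now propagates
```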
See the Merging section Concatenating pandas objects together with concat(): End of explanation """ left = pd.DataFrame({'key': ['foo', 'boo', 'foo'], 'lval': [1, 2, 3]}) #right = pd.DataFrame({'key': ['boo', 'foo', 'foo'], 'rval': [4, 5, 6]}) right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [5, 6]}) left right pd.merge(left, right, on='key', how='left') """ Explanation: Join SQL style merges. See the Database style joining End of explanation """ df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D']) df s = df.iloc[3] df.append(s, ignore_index=True) """ Explanation: Append Append rows to a dataframe. See the Appending End of explanation """ df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ....: 'foo', 'bar', 'foo', 'foo'], ....: 'B' : ['one', 'one', 'two', 'three', ....: 'two', 'two', 'one', 'three'], ....: 'C' : np.random.randn(8), ....: 'D' : np.random.randn(8)}) ....: df """ Explanation: Grouping By “group by” we are referring to a process involving one or more of the following steps - Splitting the data into groups based on some criteria - Applying a function to each group independently - Combining the results into a data structure See the Grouping section End of explanation """ df.groupby('A').sum() """ Explanation: Grouping and then applying a function sum to the resulting groups. End of explanation """ df.groupby(['A','B']).sum() """ Explanation: Grouping by multiple columns forms a hierarchical index, which we then apply the function. End of explanation """ tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', ....: 'foo', 'foo', 'qux', 'qux'], ....: ['one', 'two', 'one', 'two', ....: 'one', 'two', 'one', 'two']])) ....: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second']) df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B']) df2 = df[:4] df2 """ Explanation: Reshaping See the sections on Hierarchical Indexing and Reshaping. 
Stack End of explanation """ stacked = df2.stack() stacked """ Explanation: The stack() method “compresses” a level in the DataFrame’s columns. End of explanation """ stacked.unstack() stacked.unstack(1) stacked.unstack(0) """ Explanation: With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level: End of explanation """ df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3, .....: 'B' : ['A', 'B', 'C'] * 4, .....: 'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2, .....: 'D' : np.random.randn(12), .....: 'E' : np.random.randn(12)}) .....: df """ Explanation: Pivot Tables See the section on Pivot Tables. In [100]: End of explanation """ pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C']) """ Explanation: We can produce pivot tables from this data very easily: End of explanation """ rng = pd.date_range('1/1/2012', periods=100, freq='S') ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng) ts.resample('5Min').sum() Out[105]: 2012-01-01 25083 Freq: 5T, dtype: int64 """ Explanation: Time Series pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. 
See the Time Series section End of explanation """ rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D') ts = pd.Series(np.random.randn(len(rng)), rng) ts ts_utc = ts.tz_localize('UTC') ts_utc """ Explanation: Time zone representation End of explanation """ ts_utc.tz_convert('US/Eastern') """ Explanation: Convert to another time zone End of explanation """ rng = pd.date_range('1/1/2012', periods=5, freq='M') ts = pd.Series(np.random.randn(len(rng)), index=rng) ts ps = ts.to_period() ps ps.to_timestamp() """ Explanation: Converting between time span representations End of explanation """ prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV') ts = pd.Series(np.random.randn(len(prng)), prng) ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9 ts.head() """ Explanation: Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end: End of explanation """ df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']}) """ Explanation: Categoricals Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation. End of explanation """ df["grade"] = df["raw_grade"].astype("category") df["grade"] Out[124]: 0 a 1 b 2 b 3 a 4 a 5 e Name: grade, dtype: category Categories (3, object): [a, b, e] """ Explanation: Convert the raw grades to a categorical data type. End of explanation """ df["grade"].cat.categories = ["very good", "good", "very bad"] """ Explanation: Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!) 
End of explanation """ df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"]) df["grade"] """ Explanation: Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new Series per default). End of explanation """ df.sort_values(by="grade") Out[128]: id raw_grade grade 5 6 e very bad 1 2 b good 2 3 b good 0 1 a very good 3 4 a very good 4 5 a very good """ Explanation: Sorting is per order in the categories, not lexical order. End of explanation """ df.groupby("grade").size() """ Explanation: Grouping by a categorical column shows also empty categories. End of explanation """ ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts = ts.cumsum() %matplotlib inline ts.plot() """ Explanation: Plotting Plotting docs. End of explanation """ df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, .....: columns=['A', 'B', 'C', 'D']) .....: df = df.cumsum() %matplotlib inline plt.figure(); df.plot(); plt.legend(loc='best') """ Explanation: On DataFrame, plot() is a convenience to plot all of the columns with labels: End of explanation """ df.to_csv('foo.csv') """ Explanation: Getting Data In/Out CSV Writing to a csv file End of explanation """ pd.read_csv('foo.csv') """ Explanation: Reading from a csv file End of explanation """ ## df.to_hdf('foo.h5','df') """ Explanation: HDF5 Reading and writing to HDFStores Writing to a HDF5 Store End of explanation """ ## pd.read_hdf('foo.h5','df') """ Explanation: Reading from a HDF5 Store End of explanation """ df.to_excel('foo.xlsx', sheet_name='Sheet1') """ Explanation: Excel Reading and writing to MS Excel Writing to an excel file End of explanation """ pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA']) """ Explanation: Reading from an excel file End of explanation """
show_image("fig7_2.png") """ Explanation: Chapter 7 Regularization for Deep Learning the best fitting model is a large model that has been regularized appropriately. 7.1 Parameter Norm Penalties \begin{equation} \tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \Omega(\theta) \end{equation} where $\Omega(\theta)$ is a paramter norm penalty. typically, penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized. 7.1.1 $L^2$ Parameter Regularization 7.1.2 $L^1$ Regularization The sparsity property induced by $L^1$ regularization => feature selection 7.2 Norm Penalties as Constrained Optimization constrain $\Omega(\theta)$ to be less than some constant $k$: \begin{equation} \mathcal{L}(\theta, \alpha; X, y) = J(\theta; X, y) + \alpha(\Omega(\theta) - k) \end{equation} In practice, column norm limitation is always implemented as an explicit constraint with reprojection. 7.3 Regularization and Under-Constrained Problems regularized matrix is guarantedd to be invertible. 7.4 Dataset Augmentation create fake data: transform inject noise 7.5 Noise Robustness add noise to data add noise to weight (Bayesian: variable distributaion): is equivalent with an additional regularization term. add noise to output target: label smooothing 7.6 Semi-Supervised Learning Goal: learn a representation so that example from the same class have similar representations. 7.7 Multi-Task Learning Task-specific paramters Generic parameters End of explanation """ show_image("fig7_3.png", figsize=[10, 8]) """ Explanation: 7.8 Early Stopping run it until the ValidationSetError has not imporved for some amount of time. Use the parameters of the lowest ValidationSetError during the whole train. 
End of explanation """ show_image("fig7_8.png", figsize=[10, 8]) """ Explanation: 7.9 Parameter Tying adn Parameter Sharing regularized the paramters of one model (supervised) to be close to model (unsupervised) to force sets of parameters to be equal: parameter sharing => convolutional neural networks. 7.10 Sparse Representations place a penalty on the activations of the units in a neural network, encouraging their activations to be sparse. 7.11 Bagging and Other Ensemble Methods 7.12 Dropout increase the size of the model when using dropout. small samples, dropout is less effective. 7.13 Adversarial Training End of explanation """
from threeML import * import matplotlib.pyplot as plt %matplotlib inline %matplotlib notebook """ Explanation: <center><img src="http://identity.stanford.edu/overview/images/emblems/SU_BlockStree_2color.png" width="200" style="display: inline-block"><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/Main_fermi_logo_HI.jpg/682px-Main_fermi_logo_HI.jpg" width="200" style="display: inline-block"><img src="http://www.astro.wisc.edu/~russell/HAWCLogo.png" width="200" style="display: inline-block"></center> <h1> Basic PHA FITS Analysis with 3ML</h1> <br/> Giacomo Vianello (Stanford University) <a href="mailto:giacomov@stanford.edu">giacomov@stanford.edu</a> <h2>IPython Notebook setup. </h2> This is needed only if you are using the <a href=http://ipython.org/notebook.html>IPython Notebook</a> on your own computer, it is NOT needed if you are on threeml.stanford.edu. This line will activate the support for inline display of matplotlib images: End of explanation """ triggerName = 'bn090217206' ra = 204.9 dec = -8.4 #Data are in the current directory datadir = os.path.abspath('.') #Create an instance of the GBM plugin for each detector #Data files obsSpectrum = os.path.join( datadir, "bn090217206_n6_srcspectra.pha{1}" ) bakSpectrum = os.path.join( datadir, "bn090217206_n6_bkgspectra.bak{1}" ) rspFile = os.path.join( datadir, "bn090217206_n6_weightedrsp.rsp{1}" ) #Plugin instance NaI6 = OGIPLike( "NaI6", observation=obsSpectrum, background=bakSpectrum, response=rspFile ) #Choose energies to use (in this case, I exclude the energy #range from 30 to 40 keV to avoid the k-edge, as well as anything above #950 keV, where the calibration is uncertain) NaI6.set_active_measurements( "10.0-30.0", "40.0-950.0" ) """ Explanation: Simple standard analysis Here we can see a simple example of basic spectral analysis for FITS PHA files from the Fermi Gamma-ray Burst Monitor. 
FITS PHA files are read in with the OGIPLike plugin that supports reading TYPE I & II PHA files with properly formatted <a href=https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/node5.html>OGIP</a> keywords. Here, we examine a TYPE II PHA file. Since there are multiple spectra embedded in the file, we must either use the XSPEC style {spectrum_number} syntax to access the appropriate spectum or use the keyword spectrum_number=1. End of explanation """ NaI6.set_active_measurements? """ Explanation: As we can see, the plugin probes the data to choose the appropriate likelihood for the given obseration and background data distribution. In GBM, the background is estimated from a polynomial fit and hence has Gaussian distributed errors via error propagation. We can also select the energies that we would like to use in a spectral fit. To understand how energy selections work, there is a detailed docstring: End of explanation """ NaI6 NaI6.display() """ Explanation: Signature: NaI6.set_active_measurements(args, *kwargs) Docstring: Set the measurements to be used during the analysis. Use as many ranges as you need, and you can specify either energies or channels to be used. NOTE to Xspec users: while XSpec uses integers and floats to distinguish between energies and channels specifications, 3ML does not, as it would be error-prone when writing scripts. Read the following documentation to know how to achieve the same functionality. Energy selections: They are specified as 'emin-emax'. Energies are in keV. Example: set_active_measurements('10-12.5','56.0-100.0') which will set the energy range 10-12.5 keV and 56-100 keV to be used in the analysis. Note that there is no difference in saying 10 or 10.0. Channel selections: They are specified as 'c[channel min]-c[channel max]'. 
Example: set_active_measurements('c10-c12','c56-c100') This will set channels 10-12 and 56-100 as active channels to be used in the analysis Mixed channel and energy selections: You can also specify mixed energy/channel selections, for example to go from 0.2 keV to channel 20 and from channel 50 to 10 keV: set_active_measurements('0.2-c10','c50-10') Use all measurements (i.e., reset to initial state): Use 'all' to select all measurements, as in: set_active_measurements('all') Use 'reset' to return to native PHA quality from file, as in: set_active_measurements('reset') Exclude measurements: Excluding measurements work as selecting measurements, but with the "exclude" keyword set to the energies and/or channels to be excluded. To exclude between channel 10 and 20 keV and 50 keV to channel 120 do: set_active_measurements(exclude=["c10-20", "50-c120"]) Select and exclude: Call this method more than once if you need to select and exclude. For example, to select between 0.2 keV and channel 10, but exclude channel 30-50 and energy , do: set_active_measurements("0.2-c10",exclude=["c30-c50"]) Using native PHA quality: To simply add or exclude channels from the native PHA, one can use the use_quailty option: set_active_measurements("0.2-c10",exclude=["c30-c50"], use_quality=True) This translates to including the channels from 0.2 keV - channel 10, exluding channels 30-50 and any channels flagged BAD in the PHA file will also be excluded. 
:param args:
:param exclude: (list) exclude the provided channel/energy ranges
:param use_quality: (bool) use the native quality on the PHA file (default=False)
:return:
File: ~/coding/3ML/threeML/plugins/SpectrumLike.py
Type: instancemethod
Investigating the contents of the data
We can examine some quicklook properties of the plugin by executing it or calling its display function:
End of explanation
"""
NaI6.view_count_spectrum()
"""
Explanation: To examine our energy selections, we can view the count spectrum:
End of explanation
"""
NaI6.view_count_spectrum(significance_level=5)
"""
Explanation: Deselected regions are shaded in grey. We can also view which channels fall below a given significance level:
End of explanation
"""
#This declares which data we want to use. In our case, all that we have already created.
data_list = DataList( NaI6 )

powerlaw = Powerlaw()

GRB = PointSource( triggerName, ra, dec, spectral_shape=powerlaw )

model = Model( GRB )

jl = JointLikelihood( model, data_list, verbose=False )
jl.set_minimizer('ROOT')
res = jl.fit()
jl.minimizer
"""
Explanation: Setup for spectral fitting
Now we will prepare the plugin for fitting by:
* Creating a DataList
* Selecting a spectral shape
* Creating a likelihood model
* Building a joint analysis object
End of explanation
"""
res = jl.get_errors()
res = jl.get_contours(powerlaw.index,-1.3,-1.1,20)
res = jl.get_contours(powerlaw.index,-1.25,-1.1,60,powerlaw.K,1.8,3.4,60)
"""
Explanation: Examining the fitted model
We can now look at the asymmetric errors, contours, etc.
End of explanation
"""
jl.restore_best_fit()
_=display_spectrum_model_counts(jl)
"""
Explanation: And to plot the fit in the data space we call the data spectrum plotter:
End of explanation
"""
plot_point_source_spectra(jl.results,flux_unit='erg/(s cm2 keV)')
"""
Explanation: Or we can examine the fit in model space.
Note that we must use the analysis_results of the joint likelihood for the model plotting: End of explanation """ powerlaw.index.prior = Uniform_prior(lower_bound=-5.0, upper_bound=5.0) powerlaw.K.prior = Log_uniform_prior(lower_bound=1.0, upper_bound=10) bayes = BayesianAnalysis(model, data_list) samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000) fig = bayes.corner_plot() fig = bayes.corner_plot_cc() plot_point_source_spectra(bayes.results, flux_unit='erg/(cm2 s keV)',equal_tailed=False) """ Explanation: We can go Bayesian too! End of explanation """
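For readers new to sampling output: the corner plot above summarizes the marginal posteriors of each parameter. A minimal sketch of the kind of summary it encodes, using plain NumPy on synthetic samples (the loc/scale numbers here are invented stand-ins, not 3ML output):

```python
import numpy as np

# Synthetic "posterior samples" for the power-law index -- purely illustrative,
# standing in for the chains that bayes.sample() would produce.
rng = np.random.default_rng(0)
index_samples = rng.normal(loc=-1.2, scale=0.05, size=10_000)

# an equal-tailed 68% credible interval, read off the sample quantiles
lo, hi = np.percentile(index_samples, [16, 84])
median = np.median(index_samples)
print("index = {:.3f} (+{:.3f} / -{:.3f})".format(median, hi - median, median - lo))
```

The same quantile logic is what "equal_tailed" refers to in the plotting call above.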
sthuggins/phys202-2015-work
assignments/assignment05/.ipynb_checkpoints/InteractEx02-checkpoint.ipynb
mit
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 2
Imports
End of explanation
"""
def plot_sine1(a, b):
    # evaluate sin(a*x + b) on a dense grid over [0, 4*pi] for a smooth curve
    x = np.linspace(0, 4*np.pi, 300)
    plt.plot(x, np.sin(a*x + b))

plot_sine1(5.0, 3.4)
"""
Explanation: Plotting with parameters
Write a plot_sine1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()

assert True # leave this for grading the plot_sine1 exercise
"""
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()

plot_sine2(4.0, -1.0, 'r--')
"""
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()

assert True # leave this for grading the plot_sine2 exercise
"""
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation """
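One possible (ungraded) sketch of such a plot_sine2, written as an illustration rather than the official solution — the number of sample points, the tick styling, and the "b-" default are my own choices:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # draw off-screen so the sketch runs headless
import matplotlib.pyplot as plt

def plot_sine2(a, b, style="b-"):
    """Plot sin(a*x + b) over [0, 4*pi] using a matplotlib style string."""
    x = np.linspace(0, 4 * np.pi, 500)
    y = np.sin(a * x + b)
    fig, ax = plt.subplots()
    ax.plot(x, y, style)
    # x-axis ticks at integer multiples of pi, labeled with LaTeX
    ax.set_xticks([i * np.pi for i in range(5)])
    ax.set_xticklabels([r"$0$", r"$\pi$", r"$2\pi$", r"$3\pi$", r"$4\pi$"])
    return x, y

x, y = plot_sine2(4.0, -1.0, "r--")
```

With ipywidgets, a UI along the lines described above could then be built with something like interact(plot_sine2, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1), style=['b-', 'ko', 'r^']), where the list argument becomes a dropdown.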
NYUDataBootcamp/Materials
Code/notebooks/bootcamp_timeseries_update.ipynb
mit
import sys # system module import pandas as pd # data package import matplotlib.pyplot as plt # graphics module import datetime as dt # date and time module import numpy as np %matplotlib inline plt.style.use("ggplot") # quandl package import quandl # check versions (overkill, but why not?) print('Python version:', sys.version) print('Pandas version: ', pd.__version__) print('quandl version: ', quandl.version.VERSION) print('Today: ', dt.date.today()) # helper function to print info about dataframe def df_info(df): print("Shape: ", df.shape) print("dtypes: ", df.dtypes.to_dict()) print("index dtype: ", df.index.dtype) return pd.concat([df.head(3), df.tail(3)]) """ Explanation: Timeseries Overview. We introduce the tools for working with dates, times, and time series data. We start with functionality built into python itself, then discuss how pandas builds on these tools to add powerful time series capabilities to DataFrames. Outline Quandl: We show how to use the quandl package to access a large database of financial and economic data. Dates in python: covers the basics of working with dates and times in python Dates in pandas: shows how to use dates with pandas objects Note: requires internet access to run. This Jupyter notebook was created by Chase Coleman and Spencer Lyon for the NYU Stern course Data Bootcamp. In order to run the code in this notebook, you will need to have the quandl package installed. You can do this from the command line using pip install quandl --upgrade End of explanation """ us_tax = quandl.get("OECD/REV_NES_TOTALTAX_TAXUSD_USA") df_info(us_tax) """ Explanation: Quandl <a id=data></a> quandl is a company that collects and maintains financial and economic data from standard sources (e.g. FRED, IMF, BEA, etc.) and non-standard sources (Fx data, company level data, trader receipts). The data is viewable on their webpage (see here or there for examples), but made available to programming languages via their API. 
We will access their API using their python library. Suppose, for example, that we wanted to get data on taxes in the US. Here's how we might find some: Open up the quandl search page Type in "US tax revenue" Click on one of the results that seems interesting to us Checkout things like the frequency (Annual for this data set), the quandl code (top right, here it is OECD/REV_NES_TOTALTAX_TAXUSD_USA) and description. Exercise (5 min): Go to Quandl's website and explore some of the data quandl has available. Come up with 2 datasets and make a dictionary that maps the quandl code into a reasonable name. For example, for the us tax revenue dataset above I could have done python my_data = {"OECD/REV_NES_TOTALTAX_TAXUSD_USA": "US_tax_rev"} We can download the data using the quandl.get function and passing it one of the Quandl codes we collected in the previous exercise End of explanation """ us_tax_recent = quandl.get("OECD/REV_NES_TOTALTAX_TAXUSD_USA", start_date="2000-01-01") df_info(us_tax_recent) """ Explanation: We can also pass start_date and end_date parameters to control the dates for the downloaded data: End of explanation """ # For multiple data sources, it's useful to define a list/dict my_data = {"FRED/DFF": "risk_free_rate", "NVCA/VENTURE_3_09C": "vc_investments"} dfs = [] for k in my_data.keys(): dfs.append(quandl.get(k)) df_info(dfs[0]) """ Explanation: Now, let's read in the data sets we found were interesting. Feel free to use the codes you looked up, or the ones I'm using here. End of explanation """ quandl.get(['NVCA/VENTURE_3_09C.2', 'NVCA/VENTURE_3_09C.5']).head() """ Explanation: To request specific columns use column indices (NOT 0-based) End of explanation """ quandl.get(['NSE/OIL.1', 'WIKI/AAPL.4']) """ Explanation: To combine variables from different datasets End of explanation """ mix = quandl.get(['NSE/OIL.1', 'WIKI/AAPL.4']) mix.plot(subplots=True) df_info(dfs[1]) """ Explanation: What happened to the first column? 
End of explanation
"""
dfs[1].rename(columns={"DFF": my_data["FRED/DFF"]}, inplace=True)
df_info(dfs[1])
"""
Explanation: So, "FRED/DFF" is the federal funds rate, or the interest rate at which banks can trade federal assets with each other overnight. This is often used as a proxy for the risk free rate in economic analysis. From the printout above it looks like we have more than 22k observations starting in 1954 at a daily frequency.
Notice, however, that the column name is DFF. Let's use our dict to clean up that name:
End of explanation
"""
quart = quandl.get("FRED/DFF", collapse='quarterly')
print(quart.head())
quart.plot()
"""
Explanation: We can change the sampling frequency:
End of explanation
"""
diff = quandl.get("FRED/DFF", transformation='rdiff')
diff.plot()
"""
Explanation: Or we can perform elementary calculations on the data
End of explanation
"""
ffr = dfs[1]
vc = dfs[0]
"""
Explanation: The other dataframe we downloaded (using code NVCA/VENTURE_3_09C) contains quarterly data on total investment by venture capital firms in the US, broken down by the stage of the project. The column names here are ok, so we don't need to change anything.
So that we have the data easily accessible for later on, let's store these two variables in individual dataframes:
End of explanation
"""
today = dt.date.today()
print("the type of today is ", type(today))
"""
Explanation: Usage Limits
The Quandl Python module is free. If you would like to make more than 50 calls a day, however, you will need to create a free Quandl account and set your API key:
quandl.ApiConfig.api_key = "YOUR_KEY_HERE"
mydata = quandl.get("FRED/GDP")
The personalized API key is available here: https://www.quandl.com/account/api
Dates in python <a id=datetime></a>
The date and time functionality in python comes from the built in datetime module.
Notice above that we ran
python
import datetime as dt
We've been using the dt.date.today() function throughout this course when we print the date at the top of our notebooks, but we haven't given it very much thought. Let's take a closer look now.
To start, let's see what the type of dt.date.today() is
End of explanation
"""
print("the day of the month is: ", today.day)
print("we are currently in month number", today.month)
print("The year is", today.year)
"""
Explanation: Given that we have an object of type datetime.date we can do things like ask for the day, month, and year
End of explanation
"""
# construct a date by hand
new_years_eve = dt.date(2017, 12, 31)
until_nye = new_years_eve - today
type(until_nye)
"""
Explanation: timedelta
Suppose that we wanted to construct a "days until" counter. To do this we will construct another datetime.date and use the - operator to find the difference between the other date and today.
End of explanation
"""
until_nye.days
"""
Explanation: We can get the number of days until new years eve by looking at until_nye.days
End of explanation
"""
def days_until(date):
    today = dt.date.today()
    numb_days = date - today
    return numb_days.days

project_due = dt.date(2017, 5, 5)
days_until(project_due)
"""
Explanation: Exercise: write a python function named days_until that accepts one argument (a datetime.date) and returns the number of days between today and that date. Apply your function to
May 5, 2017 (day the UG project is due)
Your birthday
End of explanation
"""
spencer_bday = dt.date(1989, 4, 25)

# NOTE: add 7 for the 7 leap years between 1989 and 2019
thirty_years = dt.timedelta(days=365*30 + 7)

# check to make sure it is still April 25th
spencer_bday + thirty_years

days_to_30 = (spencer_bday + thirty_years - today).days
print("Spencer will be 30 in {} days".format(days_to_30))
"""
Explanation: We could also construct a datetime.timedelta by hand and add it to an existing date.
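A quick aside (my note, not from the original course materials): a timedelta stores its duration in separate fields, so .days and .seconds are components of the duration, not totals — total_seconds() gives the whole thing:

```python
import datetime as dt

delta = dt.timedelta(days=2, hours=3)

print(delta.days)             # 2 -- the whole-day component
print(delta.seconds)          # 10800 -- only the leftover 3 hours, in seconds
print(delta.total_seconds())  # 183600.0 -- the full duration in seconds
```

Keeping this distinction in mind avoids a common bug when converting durations to seconds.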
Here's an example to see how many days until Spencer turns 30
End of explanation
"""
now = dt.datetime.now()
print("type of now:", type(now))
now
"""
Explanation: The numbers in the printout above are year, month, day, hour, minute, second, millisecond. Because we still have day, month, year information, we can access these properties just as we did for the today above:
End of explanation
"""
print("the day of the month is: ", now.day)
print("we are currently in month number", now.month)
print("The year is", now.year)
"""
Explanation: Exercise: Use tab completion to see what else we can access on our dt.datetime object now
End of explanation
"""
now.weekday()
"""
Explanation: Time deltas work the same way with datetime objects as they did with date objects. We can see how many seconds until Spencer turns 30:
End of explanation
"""
# NOTE: we can only do arithmetic between two date objects or two datetime objects
#       we cannot add or subtract a datetime to/from a date. So, we need to
#       re-create spencer_bday as a datetime object.
# NOTE: The timedelta object is already compatible with date and datetime objects
spencer_bday_time = dt.datetime(1989, 4, 25, 16, 33, 5)
# use total_seconds() for the full duration; .seconds would return only the sub-day part
seconds_to_30 = int((spencer_bday_time + thirty_years - now).total_seconds())
print("Spencer will be 30 in {} seconds".format(seconds_to_30))
"""
Explanation: strftime
Once we have date and time information, a very common thing to do is to print out a formatted version of that date. For example, suppose we wanted to print out a string in the format YYYY-MM-DD. To do this we use the strftime method.
Here's an example
End of explanation
"""
print(today.strftime("Today is %Y-%m-%d"))
"""
Explanation: Notice that the argument to strftime is a python string that can contain normal text (e.g. Today is) and special formatters (the stuff starting with %). We haven't talked much about how to do string formatting, but in Python and many other languages using % inside strings has special meaning.
Exercise Using the documentation for the string formatting behavior, figure out how to write the following strings using the strftime method on the spencer_bday_time object
"Spencer was born on 1989-04-25"
End of explanation
"""
spencer_bday_time.strftime("Spencer was born on %Y-%m-%d")
"""
Explanation: "Spencer was born on a Tuesday"
End of explanation
"""
spencer_bday_time.strftime("Spencer was born on a %A")
"""
Explanation: "Spencer was born on Tuesday, Apr 25th"
End of explanation
"""
spencer_bday_time.strftime("Spencer was born on %A, %b %dth")
"""
Explanation: (bonus) "Spencer was born on Tuesday, April 25th at 04:33 PM"
End of explanation
"""
spencer_bday_time.strftime("Spencer was born on %A, %B %dth at %I:%M %p")
"""
Explanation: Dates in Pandas <a id=pandas_dates></a>
Now we will look at how to use date and datetime functionality in pandas. To begin, let's take a closer look at the type of index we have on our ffr and vc dataframes:
End of explanation
"""
type(ffr.index)
"""
Explanation: Here we have a DatetimeIndex, which means pandas recognizes this DataFrame as containing time series data. What can we do now? A lot, here's a brief list:
subset the data using strings to get data for a particular time frame
resample the data to a different frequency: this means we could convert daily to monthly, quarterly, etc.
quickly access things like year, month, and day for the observation
rolling computations: this will allow us to compute statistics on a rolling subset of the data.
We'll show a simple example here, but check out the docs for more info snap the observations to a particular frequency -- this one is a bit advanced and we won't cover it here For a much more comprehensive list with other examples see the docs For now, let's look at how to do these things with the data we obtained from quandl NOTE You can only do these things when you have a DatetimeIndex. This means that even if one of the columns in your DataFrame has date or datetime information, you will need to set it as the index to access this functionality. subsetting Suppose we wanted to extract all the data for the federal funds rate for the year 2008. End of explanation """ ffr_sep2008 = ffr["2008-09"] df_info(ffr_sep2008) ffr_sep2008.plot() """ Explanation: Suppose we want to restrict to September 2008: End of explanation """ ffr2 = ffr["2007-06":"2011-03"] df_info(ffr2) ffr2.plot() """ Explanation: We can use this same functionality to extract ranges of dates. To get the data starting in june 2007 and going until march 2011 we would do End of explanation """ vc['2013':'2016'].plot() """ Explanation: Exercise Using one of your datasets from quandl, plot one or more variables for the last 3 years (2013 through 2016) End of explanation """ # MS means "month start" ffrM_resample = ffr.resample("MS") type(ffrM_resample) """ Explanation: resampling Now suppose that instead of daily data, we wanted our federal funds data at a monthly frequency. To do this we use the resample method on our DataFrame End of explanation """ ffrM = ffrM_resample.first() df_info(ffrM) """ Explanation: Notice that when we call resample we don't get back a DataFrame at that frequency. This is because there is some ambiguity regarding just how the frequency should be converted: should we take the average during the period, the first observation, last observation, sum the observations? In order to get a DataFrame we have to call a method on our DatetimeIndexResampler object. 
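To make that ambiguity concrete, here is a tiny synthetic sketch (toy data, not the quandl series): the same monthly resample collapsed three different ways gives three different answers:

```python
import pandas as pd

# a toy daily series covering January and February
idx = pd.date_range("2024-01-01", "2024-02-29", freq="D")
s = pd.Series(range(len(idx)), index=idx)

monthly = s.resample("MS")  # a resampler object, not yet a Series

# three different aggregations -> three different monthly values
print(monthly.first().tolist())  # value on the first day of each month
print(monthly.last().tolist())   # value on the last day of each month
print(monthly.mean().tolist())   # average over each month
```

Until one of these aggregation methods is called, pandas has no way to know which collapsed value you want.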
For this example, let's do the first observation in each period: End of explanation """ ffr.resample("2w") ffr.resample("2w").mean() """ Explanation: Note that we can also combine numbers with the specification of the resampling frequency. As an example, we can resample to a bi-weekly frequency using End of explanation """ ffr.resample('QS').mean().head() """ Explanation: Exercise: Using the documentation for the most common frequencies, figure out how to resample one of your datasets to A quarterly frequency -- make sure to get the start of the quarter End of explanation """ ffr.resample('A').mean().head() """ Explanation: An annual frequency -- use the end of the year End of explanation """ ffr.resample("M").first().head() ffr.resample("M").last().head() """ Explanation: more than you need: I want to point out that when you use the first or last methods to perform the aggregations, there are two dates involved: (1) the date the resultant index will have and (2) the date used to fill in the data at that date. The first date (one on the index) will be assigned based on the string you pass to the resample method. The second date (the one for extracting data from the original dataframe) will be determined based on the method used to do the aggregation. first will extract the first data point from that subset and last will extract the last. Let's see some examples: End of explanation """ ffr.resample("MS").first().head() ffr.resample("MS").last().head() """ Explanation: Notice that the index is the same on both, but the data is clearly different. If we use MS instead of M we will have the index based on the first day of the month: End of explanation """ ffr.index.year ffr.index.day ffr.index.month """ Explanation: Notice how the data associated with "M" and first is the same as the data for "MS" and first. The same holds for last. Access year, month, day... Given a DatetimeIndex you can access the day, month, or year (also second, millisecond, etc.) 
by simply accessing the .XX property; where XX is the data you want End of explanation """ fig, ax = plt.subplots() ffr.rolling(window=7).max().plot(ax=ax) ffr.rolling(window=7).min().plot(ax=ax) ax.legend(["max", "min"]) """ Explanation: Rolling computations We can use pandas to do rolling computations. For example, suppose we want to plot the maximum and minimum of the risk free rate within the past week at each date (think about that slowly -- for every date, we want to look back 7 days and compute the max). Here's how we can do that End of explanation """ ffr.rolling(window=7).max().head(10) ffr.resample("7D").max().head(10) """ Explanation: Note that this is different from just resampling because we will have an observation for every date in the original dataframe (except the number of dates at the front needed to construct the initial window). End of explanation """ # do a left merge on the index (date info) df = pd.merge(ffr, vc, left_index=True, right_index=True, how="left") df_info(df) vc.head() """ Explanation: Merging with dates Let's see what happens when we merge the ffr and vc datasets End of explanation """ ffr_recent = ffr["1985":] """ Explanation: Notice that we ended up with a lot of missing data. This happened for two reasons: The ffr data goes back to 1954, but the vc data starts in 1985 The ffr data is at a daily frequency, but vc is at quarterly. To resolve the first issue we can subset the ffr data and only keep from 1985 on End of explanation """ ffr_recentM = ffr_recent.resample("M").first() vc_M = vc.resample("M").pad() vc_M.head() """ Explanation: To resolve the second issue we will do two-steps: resample the ffr data to a monthly frequency resample the vc data to a monthly frequency by padding. 
This is called upsampling because we are going from a lower frequency (quarterly) to a higher one (monthly)
End of explanation
"""
ffr_recentM = ffr_recent.resample("M").first()
vc_M = vc.resample("M").pad()
vc_M.head()
"""
Explanation: Notice that using pad here just copied data forwards to fill in missing months (e.g. the data for March 1985 was applied to April and May). Now let's try that merge again
End of explanation
"""
df = pd.merge(ffr_recentM, vc_M, left_index=True, right_index=True, how="left")
print(df.head(6))
print("\n\n", df.tail(8))
"""
Explanation: That looks much better -- we have missing data at the top and the bottom for months that aren't available in the venture capital dataset, but nothing else should be missing.
Let's try to do something interesting with this data. We want to plot the growth rate in the risk free rate, early stage vc funding, and total vc funding for the months following the start of the dotcom boom (roughly Jan 1995) and the housing boom (roughly Jan 2004). Read that again carefully. For each of the three series we want 2 lines. For each line, the x axis will be quarters since start of boom. The y axis will be growth rates since the first month of the bubble.
End of explanation """
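The log-difference trick used in the code above, isolated on toy numbers: cumulative growth since the first observation is log(x_t) - log(x_0), which for small changes is approximately the percent change:

```python
import numpy as np

x = np.array([100.0, 103.0, 108.0])   # toy series (made-up values)
growth = np.log(x) - np.log(x[0])     # cumulative log growth since t=0
print(growth)                          # 0, then roughly 3% and 8%
```

This is why the notebook subtracts the first row of the logged DataFrame from every other row before plotting.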
drcgw/bass
BASS v2.0.ipynb
gpl-3.0
from BASS import *
"""
Explanation: Welcome to BASS!
Version: Beta 2.0
Created by Abigail Dobyns and Ryan Thorpe
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see &lt;http://www.gnu.org/licenses/&gt;
Initialize
Run the following code block to initialize the program. Run this block one time.
End of explanation
"""
#Import and Initialize
#Class BASS_Dataset(inputDir, fileName, outputDir, fileType='plain', timeScale='seconds')
data1 = BASS_Dataset('C:\\Users\\Ryan\\Desktop\\sample_data\\','2016-06-22_C1a_P3_Base.txt','C:\\Users\\Ryan\\Desktop\\bass_output\\pleth')
data2 = BASS_Dataset('C:\\Users\\Ryan\\Desktop\\sample_data\\','2016-06-26_C1a_P7_Base.txt','C:\\Users\\Ryan\\Desktop\\bass_output\\pleth')

#transformation Settings
data1.Settings['Absolute Value'] = False #Must be True if Savitzky-Golay is being used
data1.Settings['Bandpass Highcut'] = 12 #in Hz
data1.Settings['Bandpass Lowcut'] = 1 #in Hz
data1.Settings['Bandpass Polynomial'] = 1 #integer
data1.Settings['Linear Fit'] = False #between 0 and 1 on the whole time series
data1.Settings['Linear Fit-Rolling R'] = 0.5 #between 0 and 1
data1.Settings['Linear Fit-Rolling Window'] = 1000 #window for rolling mean for fit, unit is index not time
data1.Settings['Relative Baseline'] = 0 #default 0, unless data is normalized, then 1.0.
Can be any float
data1.Settings['Savitzky-Golay Polynomial'] = 'none' #integer
data1.Settings['Savitzky-Golay Window Size'] = 'none' #must be odd. units are index not time

#Baseline Settings
data1.Settings['Baseline Type'] = r'rolling' #'linear', 'rolling', or 'static'
#For Linear
data1.Settings['Baseline Start'] = None #start time in seconds
data1.Settings['Baseline Stop'] = None #end time in seconds
#For Rolling
data1.Settings['Rolling Baseline Window'] = 5 # in seconds. leave as 'none' if linear or static

#Peaks
data1.Settings['Delta'] = 0.05
data1.Settings['Peak Minimum'] = -0.50 #amplitude value
data1.Settings['Peak Maximum'] = 0.50 #amplitude value

#Bursts
data1.Settings['Apnea Factor'] = 2 #factor to define apneas as a function of expiration
data1.Settings['Burst Area'] = True #calculate burst area
data1.Settings['Exclude Edges'] = True #False to keep edges, True to discard them
data1.Settings['Inter-event interval minimum (time-scale units)'] = 0.0001 #only for bursts, not for peaks
data1.Settings['Maximum Burst Duration (time-scale units)'] = 6
data1.Settings['Minimum Burst Duration (time-scale units)'] = 0.0001
data1.Settings['Minimum Peak Number'] = 1 #minimum number of peaks/burst, integer
data1.Settings['Threshold']= 0.0001 #linear: proportion of baseline.
#static: literal value.
#rolling, linear amount greater than rolling baseline at each time point.

#Outputs
data1.Settings['Generate Graphs'] = False #create and save the fancy graph outputs

#Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
data1.Settings['Graph LCpro events'] = False

############################################################################################
data1.run_analysis('pleth', batch=False)
"""
Explanation: Instructions
For help, check out the wiki: Protocol
Or the video tutorial: Coming Soon!
1) Load Data File(s)
Use the following block to create a BASS_Dataset object and initialize your settings.
All settings are attributes of the dataset instance. Manual initialization of settings in this block is optional and is required only once for a given batch. All BASS_Dataset objects that are initialized are automatically added to the batch.
class BASS_Dataset(inputDir, fileName, outputDir, fileType='plain', timeScale='seconds')
Attributes:
Batch: static list
Contains all instances of the BASS_Dataset object in order to be referenced by the global runBatch function.
Data: library
instance data
Settings: library
instance settings
Results: library
instance results
Methods:
run_analysis(settings = self.Settings, analysis_module): BASS_Dataset method
Highest level of the object-oriented analysis pipeline. First syncs the settings of all BASS_Dataset objects (stored in Batch), then runs the specified analysis module on each one. Analysis must be called after the object is initialized and Settings added if the Settings are to be added manually (not via the interactive check and load settings function). Analysis runs according to batch-oriented protocol and is specific to the analysis module determined by the "analysis_module" parameter.
2) Run Analysis
Run BASS_Dataset.run_analysis(analysis_mod, settings, batch)
Runs in either single (batch=False) or batch mode. Assuming batch mode, this function first syncs the settings of each dataset within BASS_Dataset.Batch to the entered parameter "settings", then runs analysis on each instance within Batch. Be sure to select the correct module given your desired type of analysis. The current options (as of 9/21/16) are "ekg" and "pleth".
Parameters are as follows:
Parameters:
analysis_mod: string
the name of the BASS_Dataset module which will be used to analyze the batch of datasets
settings: string or dictionary
can be entered as the location of a settings file or the actual settings dictionary (default = self.Settings)
batch: boolean
determines if the analysis is performed on only the self-instance or as a batch on all object instances (default=True)
More Info on Settings
For more information about other settings, go to:
Transforming Data
Baseline Settings
Peak Detection Settings
Burst Detection Settings
End of explanation
"""
display_settings(Settings)
"""
Explanation: OPTIONAL GRAPHS AND ANALYSIS
The following blocks are optional calls to other figures and analysis
Display Event Detection Tables
Display Settings used for analysis
End of explanation
"""
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
"""
Explanation: Display Summary Results for Peaks
End of explanation
"""
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
"""
Explanation: Display Summary Results for Bursts
End of explanation
"""
#Interactive, single time series by Key
key = Settings['Label']
graph_ts(Data, Settings, Results, key)
"""
Explanation: Interactive Graphs
Line Graphs
One panel, detected events
Plot one time series by calling its name
End of explanation
"""
key = Settings['Label']
start = 550 #start time in seconds
end = 560 #end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
"""
Explanation: Two panel
Create line plots of the raw data as well as the data analysis. Plots are saved by clicking the save button in the pop-up window with your graph.
key = 'Mean1'
start = 100
end = 101
Results Line Plot
End of explanation
"""
#autocorrelation
key = Settings['Label']
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end.
autocorrelation_plot(Data['trans'][key][start:end])
plt.show()
"""
Explanation: Autocorrelation
Display the Autocorrelation plot of your transformed data. Choose the start and end time in seconds. To capture the whole time series, use end = -1. May be slow.
key = 'Mean1'
start = 0
end = 10
Autocorrelation Plot
End of explanation
"""
#raster
raster(Data, Results)
"""
Explanation: Raster Plot
Shows the temporal relationship of peaks in each column. Auto scales. Display only. Intended for more than one column of data
End of explanation
"""
event_type = 'Peaks'
meas = 'Intervals'
key = Settings['Label']
frequency_plot(event_type, meas, key, Data, Settings, Results)
"""
Explanation: Frequency Plot
Use this block to plot changes of any measurement over time. Does not support 'all'. Example:
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
Frequency Plot
End of explanation
"""
#Get average plots, display only
event_type = 'Peaks'
meas = 'Intervals'
average_measurement_plot(event_type, meas, Results)
"""
Explanation: Analyze Events by Measurement
Generates a line plot with error bars for a given event measurement. X axis is the names of each time series. Display Only. Intended for more than one column of data. This is not a box and whiskers plot.
event_type = 'peaks'
meas = 'Peaks Amplitude'
Analyze Events by Measurement
End of explanation
"""
#Batch
event_type = 'Bursts'
meas = 'Total Cycle Time'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
"""
Explanation: Poincare Plots
Create a Poincare Plot of your favorite variable. Choose an event type (Peaks or Bursts) and measurement type. Calling meas = 'All' is supported.
Plots and tables are saved automatically.
Example:
event_type = 'Bursts'
meas = 'Burst Duration'
More on Poincare Plots
Batch Poincare
Batch Poincare
End of explanation
"""
#quick
event_type = 'Bursts'
meas = 'Attack'
key = Settings['Label']
poincare_plot(Results[event_type][key][meas])
"""
Explanation: Quick Poincare Plot
Quickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported.
Quick Poincare
End of explanation
"""
Settings['PSD-Event'] = Series(index = ['Hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['hz'] = 100 #frequency that the interpolation and PSD are performed with.
Settings['PSD-Event']['ULF'] = 1 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 2 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 5 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 50 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.

event_type = 'Peaks'
meas = 'Intervals'
key = Settings['Label']
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
"""
Explanation: Power Spectral Density
The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds).
Recommended:
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Because this data is in the frequency domain, we must interpolate it in order to perform an FFT on it. Does not support 'all'.
Power Spectral Density: Events
Events
Use the code block below to specify your settings for event measurement PSD.
End of explanation
"""
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency
Settings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve.
"""
Explanation: Time Series
Use the settings code block to set your frequency bands to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz.
Power Spectral Density: Signal
End of explanation
"""
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale, Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
"""
Explanation: Use the block below to generate the PSD graph and power in bands results (if selected).
scale toggles which units to use for the graph:
raw = s^2/Hz
db = dB/Hz = 10*log10(s^2/Hz)
Graph and table are automatically saved in the PSD-Signal subfolder.
End of explanation
"""
version = 'original'
key = Settings['Label']
spectogram(version, key, Data, Settings, Results)
"""
Explanation: Spectrogram
Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run.
version = 'original'
key = 'Mean1'
After transformation is run, you can call version = 'trans'.
This graph is not automatically saved.
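As a small aside, the raw-to-dB conversion used by the PSD scale option above (db = 10*log10(s^2/Hz)) is a one-liner; a sketch:

```python
import math

def to_db(power):
    # convert raw power (s^2/Hz) to dB/Hz, matching the formula above
    return 10 * math.log10(power)

print(to_db(1.0))    # 0.0
print(to_db(100.0))  # 20.0
```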
Spectrogram
End of explanation
"""
#Moving Stats
event_type = 'Bursts'
meas = 'Total Cycle Time'
window = 30 #seconds

Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
"""
Explanation: Descriptive Statistics
Moving/Sliding Averages, Standard Deviation, and Count
Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table). Saves out the dataframes of these three results automatically with the window size in the name as a .csv. If meas == 'All', then the function will loop and produce these tables for all measurements.
event_type = 'Peaks'
meas = 'all'
window = 30
Moving Stats
End of explanation
"""
#Histogram Entropy
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
"""
Explanation: Entropy
Histogram Entropy
Calculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the event_type chosen will be calculated iteratively.
If all of the samples fall in one bin regardless of the bin size, it means we have the most predictable situation and the entropy is 0. If we have a uniformly distributed function, the max entropy will be 1.
event_type = 'Bursts'
meas = 'all'
Histogram Entropy
End of explanation
"""
#Approximate Entropy
event_type = 'Peaks'
meas = 'Intervals'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
"""
Explanation: Approximate entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py.
WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the approximate entropy of any measurement or raw signal. Returns the entropy of the entire results array (no windowing).
I am using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call ap_entropy directly.
Supports 'all'.
Interpretation: A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Approximate Entropy in BASS
Approximate Entropy Source
Events
End of explanation
"""
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy

version = 'original' #original, trans, shift, or rolling
key = Settings['Label'] #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1

ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: Time Series
End of explanation
"""
#Sample Entropy
event_type = 'Bursts'
meas = 'Total Cycle Time'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']

Results['Sample Entropy']['Attack']
"""
Explanation: Sample Entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py.
WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the sample entropy of any measurement. Returns the entropy of the entire results array (no windowing). I am using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call samp_entropy directly.
Supports 'all'.
Sample Entropy in BASS
Sample Entropy Source
Events
End of explanation
"""
#on raw signal
#takes a VERY long time
version = 'original' #original, trans, shift, or rolling
key = Settings['Label']
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end.
The absolute end is -1 samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end]))) """ Explanation: Time Series End of explanation """ help(moving_statistics) moving_statistics?? """ Explanation: Helpful Stuff While not completely up to date with some of the new changes, the Wiki can be useful if you have questions about some of the settings: https://github.com/drcgw/SWAN/wiki/Tutorial More Help? Stuck on a particular step or function? Try typing the function name followed by two ??. This will pop up the docstring and source code. You can also call help() to have the notebook print the doc string. Example: analyze?? help(analyze) End of explanation """
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" (this lesson) Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network (video only - nothing in notebook) PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset<a id='lesson_1'></a> The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything. End of explanation """ len(reviews) reviews[0] labels[0] """ Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. 
If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
"""
from collections import Counter
import numpy as np
"""
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
"""
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
"""
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
"""
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for review, label in zip(reviews, labels):
    words = review.split(' ')
    if label == 'POSITIVE':
        positive_counts.update(words)
    elif label == 'NEGATIVE':
        negative_counts.update(words)
    total_counts.update(words)
"""
Explanation: TODO: Examine all the reviews.
For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
"""
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()

# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
"""
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
"""
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()

# TODO: Calculate the ratios of positive and negative uses of the most common words
#       Consider words to be "common" if they've been used at least 100 times
for word, freq in total_counts.most_common():
    if freq >= 100:
        pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)
"""
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1).
Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
"""
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.most_common():
    pos_neg_ratios[word] = np.log(ratio)
"""
Explanation: Looking closely at the values you just calculated, we see the following:

Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.

Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18.
Those values aren't easy to compare for a couple of reasons:

Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.

To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()

# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]

# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
"""
Explanation: If everything worked, now you should see neutral words with values close to zero.
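As a quick sanity check of the transform itself on toy numbers (unrelated to the actual review data): a ratio r and its reciprocal 1/r land at equal distances from zero on opposite sides, and a perfectly neutral ratio of 1 lands exactly at zero. math.log behaves the same as np.log on scalars:

```python
import math

for r in [4.0, 2.0, 1.0]:
    # a word seen r times more often in positive reviews vs. r times more often in negative ones
    print(r, math.log(r), math.log(1.0 / r))
# log(4) ≈ 1.386 and log(1/4) ≈ -1.386, while log(1) is exactly 0.0
```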
In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
"""
from IPython.display import Image

review = "This was a horrible, terrible movie."

Image(filename='sentiment_network.png')

review = "The movie was excellent"

Image(filename='sentiment_network_pos.png')
"""
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation """ # TODO: Create set named "vocab" containing all of the words from all of the reviews vocab = set(total_counts.keys()) """ Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a> TODO: Create a set named vocab that contains every word in the vocabulary. End of explanation """ vocab_size = len(vocab) print(vocab_size) """ Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 End of explanation """ from IPython.display import Image Image(filename='sentiment_network_2.png') """ Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. End of explanation """ # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros layer_0 = np.zeros((1, vocab_size)) """ Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. End of explanation """ layer_0.shape from IPython.display import Image Image(filename='sentiment_network.png') """ Explanation: Run the following cell. It should display (1, 74074) End of explanation """ # Create a dictionary of words in the vocabulary mapped to index positions # (to be used in layer_0) word2index = {} for i,word in enumerate(vocab): word2index[word] = i # display the map of words to indices word2index """ Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. 
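In miniature, the lookup-plus-count-vector idea looks like this (a hypothetical three-word vocabulary, purely for illustration):

```python
import numpy as np

toy_vocab = ['bad', 'movie', 'good']          # hypothetical tiny vocabulary
toy_word2index = {word: i for i, word in enumerate(toy_vocab)}

# one row, one column per vocabulary word, counting occurrences
toy_layer_0 = np.zeros((1, len(toy_vocab)))
for word in "good good movie".split(' '):
    toy_layer_0[0][toy_word2index[word]] += 1

print(toy_layer_0)   # [[0. 1. 2.]]
```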
End of explanation
"""
def update_input_layer(review):
    """ Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    global layer_0
    # clear out previous state by resetting the layer to be all 0s
    layer_0 *= 0

    # TODO: count how many times each word is used in the given review and store the results in layer_0
    for word in review.split(' '):
        layer_0[0][word2index[word]] += 1
"""
Explanation: TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0.
End of explanation
"""
update_input_layer(reviews[0])
layer_0
"""
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
"""
def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
    # TODO: Your code here
    return 1 if label == 'POSITIVE' else 0
"""
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
"""
labels[0]
get_target_for_label(labels[0])
"""
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
"""
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
"""
import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):
        # TODO: populate review_vocab with all of the words in the given reviews
        #       Remember to split reviews into individual words
        #       using "split(' ')" instead of "split()".
        review_vocab = set(word for review in reviews for word in review.split(' '))

        # Convert the vocabulary set to a list so we can access words via indices
        self.review_vocab = list(review_vocab)

        # TODO: populate label_vocab with all of the words in the given labels.
        #       There is no need to split the labels because each one is a single word.
        label_vocab = set(labels)

        # Convert the label vocabulary set to a list so we can access labels via indices
        self.label_vocab = list(label_vocab)

        # Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} # TODO: populate self.word2index with indices for all the words in self.review_vocab # like you saw earlier in the notebook for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} # TODO: do the same thing you did for self.word2index and self.review_vocab, # but for self.label2index and self.label_vocab instead for i, word in enumerate(self.label_vocab): self.label2index[word] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Store the number of nodes in input, hidden, and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between # the input layer and the hidden layer. self.weights_0_1 = np.zeros((input_nodes, hidden_nodes)) # TODO: initialize self.weights_1_2 as a matrix of random values. # These are the weights between the hidden layer and the output layer. self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes)) # TODO: Create the input layer, a two-dimensional matrix with shape # 1 x input_nodes, with all values initialized to zero self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # TODO: You can copy most of the code you wrote for update_input_layer # earlier in this notebook. # # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS. 
        #       For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
        self.layer_0 *= 0

        # TODO: count how many times each word is used in the given review and store the results in layer_0
        for word in review.split(' '):
            if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] += 1

    def get_target_for_label(self,label):
        # TODO: Copy the code you wrote for get_target_for_label
        #       earlier in this notebook.
        return 1 if label == 'POSITIVE' else 0

    def sigmoid(self,x):
        # TODO: Return the result of calculating the sigmoid activation function
        #       shown in the lectures
        return 1 / (1 + np.exp(-x))

    def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
        return output * (1 - output)

    def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews) == len(training_labels))

        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0

        # Remember when we started for printing time statistics
        start = time.time()

        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):
            # TODO: Get the next review and its correct label
            review, label = training_reviews[i], training_labels[i]

            # TODO: Implement the forward pass through the network.
            #       That means use the given review to update the input layer,
            #       then calculate values for the hidden layer,
            #       and finally calculate the output layer.
            #
            #       Do not use an activation function for the hidden layer,
            #       but use the sigmoid activation function for the output layer.
            self.update_input_layer(review)
            layer_1 = np.dot(self.layer_0, self.weights_0_1)
            layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))

            # TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction # and update the weights in the network according to their # contributions toward the error, as calculated via the # gradient descent and back propagation algorithms you # learned in class. target = self.get_target_for_label(label) error_2 = target - layer_2 error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2) error_1 = error_2_term * self.weights_1_2.T error_1_term = error_1 self.weights_1_2 += self.learning_rate * error_2_term * layer_1.T self.weights_0_1 += self.learning_rate * error_1_term * self.layer_0.T # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. if np.abs(error_2) < 0.5: correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
        # TODO: Run a forward pass through the network, like you did in the
        #       "train" function. That means use the given review to
        #       update the input layer, then calculate values for the hidden layer,
        #       and finally calculate the output layer.
        #
        #       Note: The review passed into this function for prediction
        #             might come from anywhere, so you should convert it
        #             to lower case prior to using it.
        self.update_input_layer(review.lower())
        layer_1 = np.dot(self.layer_0, self.weights_0_1)
        layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))

        # TODO: The output layer should now contain a prediction.
        #       Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
        #       and `NEGATIVE` otherwise.
        return 'POSITIVE' if layer_2 >= 0.5 else 'NEGATIVE'
"""
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code) - Implement the pre_process_data function to create the vocabulary for our training data generating functions - Ensure train trains over the entire corpus Where to Get Help if You Need it Re-watch earlier Udacity lectures Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code) End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) """ Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. End of explanation """ mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: That probably wasn't much different. 
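One reason the first runs struggle is easiest to see in isolation: with the delta rule used in `train`, the weight update is simply the error term scaled by the learning rate, so an overly large rate takes proportionally larger (and potentially overshooting) steps. A minimal sketch with hypothetical numbers, not the notebook's actual weights:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# One delta-rule step on a single sigmoid output unit, repeated with the
# three learning rates tried in these cells. All values are hypothetical.
weight, x, target = 2.0, 1.0, 0.0
steps = []
for lr in (0.1, 0.01, 0.001):
    output = sigmoid(weight * x)
    error = target - output
    # same update form as train(): learning_rate * error_term * input
    steps.append(lr * error * output * (1 - output) * x)

print(steps)  # the step size scales linearly with the learning rate
```

The gradient itself is identical in all three cases; only the scale of the step changes, which is why lowering the rate alone may not fix a noisy input representation.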
Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001,
and then train the new network.
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
    global layer_0
    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
    review_counter[word] += 1
review_counter.most_common()
"""
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
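The "noise" idea from the video can be reproduced on a toy review (a hypothetical vocabulary, not the notebook's data): in a count-based `layer_0`, filler words like "the" contribute several times more signal than meaningful words, while a binary presence vector weights every word equally.

```python
from collections import Counter
import numpy as np

review = "the movie was the best the acting was excellent"
vocab = sorted(set(review.split(" ")))
word2index = {word: i for i, word in enumerate(vocab)}

# Count-based input layer: dominated by high-frequency filler words
counts = np.zeros((1, len(vocab)))
for word in review.split(" "):
    counts[0][word2index[word]] += 1   # "the" ends up with 3x the weight

# Binary input layer: every word present contributes equally
binary = np.zeros((1, len(vocab)))
for word in review.split(" "):
    binary[0][word2index[word]] = 1

print(Counter(review.split(" ")).most_common(2))
```

Note how the most common entries are uninformative words; that imbalance is exactly what the next project removes.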
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
#       -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):
        # TODO: populate review_vocab with all of the words in the given reviews
        #       Remember to split reviews into individual words
        #       using "split(' ')" instead of "split()".
        review_vocab = set(word for review in reviews for word in review.split(' '))

        # Convert the vocabulary set to a list so we can access words via indices
        self.review_vocab = list(review_vocab)

        # TODO: populate label_vocab with all of the words in the given labels.
        #       There is no need to split the labels because each one is a single word.
        label_vocab = set(labels)

        # Convert the label vocabulary set to a list so we can access labels via indices
        self.label_vocab = list(label_vocab)

        # Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} # TODO: populate self.word2index with indices for all the words in self.review_vocab # like you saw earlier in the notebook for i, word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} # TODO: do the same thing you did for self.word2index and self.review_vocab, # but for self.label2index and self.label_vocab instead for i, word in enumerate(self.label_vocab): self.label2index[word] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Store the number of nodes in input, hidden, and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between # the input layer and the hidden layer. self.weights_0_1 = np.zeros((input_nodes, hidden_nodes)) # TODO: initialize self.weights_1_2 as a matrix of random values. # These are the weights between the hidden layer and the output layer. self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes)) # TODO: Create the input layer, a two-dimensional matrix with shape # 1 x input_nodes, with all values initialized to zero self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # TODO: You can copy most of the code you wrote for update_input_layer # earlier in this notebook. # # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS. 
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0" self.layer_0 *= 0 # TODO: count how many times each word is used in the given review and store the results in layer_0 for word in review.split(' '): if(word in self.word2index.keys()): self.layer_0[0][self.word2index[word]] = 1 def get_target_for_label(self,label): # TODO: Copy the code you wrote for get_target_for_label # earlier in this notebook. return 1 if label == 'POSITIVE' else 0 def sigmoid(self,x): # TODO: Return the result of calculating the sigmoid activation function # shown in the lectures return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): # TODO: Return the derivative of the sigmoid activation function, # where "output" is the original output from the sigmoid fucntion return output * (1 - output) def train(self, training_reviews, training_labels): # make sure out we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # TODO: Get the next review and its correct label review, label = training_reviews[i], training_labels[i] # TODO: Implement the forward pass through the network. # That means use the given review to update the input layer, # then calculate values for the hidden layer, # and finally calculate the output layer. # # Do not use an activation function for the hidden layer, # but use the sigmoid activation function for the output layer. self.update_input_layer(review) layer_1 = np.dot(self.layer_0, self.weights_0_1) layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2)) # TODO: Implement the back propagation pass here. 
# That means calculate the error for the forward pass's prediction # and update the weights in the network according to their # contributions toward the error, as calculated via the # gradient descent and back propagation algorithms you # learned in class. target = self.get_target_for_label(label) error_2 = target - layer_2 error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2) error_1 = error_2_term * self.weights_1_2.T error_1_term = error_1 self.weights_1_2 += self.learning_rate * error_2_term * layer_1.T self.weights_0_1 += self.learning_rate * error_1_term * self.layer_0.T # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. if np.abs(error_2) < 0.5: correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. self.update_input_layer(review.lower()) layer_1 = np.dot(self.layer_0, self.weights_0_1) layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2)) # TODO: The output layer should now contain a prediction. # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, # and `NEGATIVE` otherwise. return 'POSITIVE' if layer_2 >= 0.5 else 'NEGATIVE' """ Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a> TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following: * Copy the SentimentNetwork class you created earlier into the following cell. * Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. 
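A standalone sketch of the required change (using a small toy vocabulary in place of the notebook's global `word2index`): the only difference from the counting version is assigning 1 instead of incrementing.

```python
import numpy as np

vocab = ["the", "movie", "was", "best"]
word2index = {w: i for i, w in enumerate(vocab)}
layer_0 = np.zeros((1, len(vocab)))

def update_input_layer(review):
    """Binary version: record word presence, not word frequency."""
    global layer_0
    layer_0 *= 0                           # clear out previous state
    for word in review.split(" "):
        if word in word2index:
            layer_0[0][word2index[word]] = 1   # was: += 1

update_input_layer("the movie was the best")
```

Even though "the" appears twice, its entry is 1, so frequent filler words no longer dominate the input signal.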
End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. End of explanation """ Image(filename='sentiment_network_sparse.png') layer_0 = np.zeros(10) layer_0 layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5) layer_0.dot(weights_0_1) indices = [4,9] layer_1 = np.zeros(5) for index in indices: layer_1 += (1 * weights_0_1[index]) layer_1 Image(filename='sentiment_network_sparse_2.png') layer_1 = np.zeros(5) for index in indices: layer_1 += (weights_0_1[index]) layer_1 """ Explanation: End of Project 4. Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson. Analyzing Inefficiencies in our Network<a id='lesson_5'></a> The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. 
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
#       -Modify it according to the above instructions
import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):
        # TODO: populate review_vocab with all of the words in the given reviews
        #       Remember to split reviews into individual words
        #       using "split(' ')" instead of "split()".
        review_vocab = set(word for review in reviews for word in review.split(' '))

        # Convert the vocabulary set to a list so we can access words via indices
        self.review_vocab = list(review_vocab)

        # TODO: populate label_vocab with all of the words in the given labels.
        #       There is no need to split the labels because each one is a single word.
        label_vocab = set(labels)

        # Convert the label vocabulary set to a list so we can access labels via indices
        self.label_vocab = list(label_vocab)

        # Store the sizes of the review and label vocabularies.
        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        # Create a dictionary of words in the vocabulary mapped to index positions
        self.word2index = {}
        # TODO: populate self.word2index with indices for all the words in self.review_vocab
        #       like you saw earlier in the notebook
        for i, word in enumerate(self.review_vocab):
            self.word2index[word] = i

        # Create a dictionary of labels mapped to index positions
        self.label2index = {}
        # TODO: do the same thing you did for self.word2index and self.review_vocab,
        #       but for self.label2index and self.label_vocab instead
        for i, word in enumerate(self.label_vocab):
            self.label2index[word] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Store the number of nodes in input, hidden, and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Store the learning rate
        self.learning_rate = learning_rate

        # Initialize weights

        # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
        #       the input layer and the hidden layer.
        self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))

        # TODO: initialize self.weights_1_2 as a matrix of random values.
        #       These are the weights between the hidden layer and the output layer.
        self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))

        # Create the hidden layer directly (this version no longer keeps a separate
        # input layer), a two-dimensional matrix with shape 1 x hidden_nodes,
        # with all values initialized to zero
        self.layer_1 = np.zeros((1, hidden_nodes))

    def get_target_for_label(self,label):
        # TODO: Copy the code you wrote for get_target_for_label
        #       earlier in this notebook.
        return 1 if label == 'POSITIVE' else 0

    def sigmoid(self,x):
        # TODO: Return the result of calculating the sigmoid activation function
        #       shown in the lectures
        return 1 / (1 + np.exp(-x))

    def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
        return output * (1 - output)

    def train(self, training_reviews_raw, training_labels):
        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews_raw) == len(training_labels))

        training_reviews = [
            set(self.word2index[word] for word in review.split(' '))
            for review in training_reviews_raw
        ]

        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0

        # Remember when we started for printing time statistics
        start = time.time()

        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):
            # TODO: Get the next review and its correct label
            review, label = training_reviews[i], training_labels[i]

            # TODO: Implement the forward pass through the network.
            #       That means use the given review to update the input layer,
            #       then calculate values for the hidden layer,
            #       and finally calculate the output layer.
            #
            #       Do not use an activation function for the hidden layer,
            #       but use the sigmoid activation function for the output layer.
            self.layer_1 *= 0
            for index in review:
                self.layer_1 += self.weights_0_1[index]
            layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))

            # TODO: Implement the back propagation pass here.
            #       That means calculate the error for the forward pass's prediction
            #       and update the weights in the network according to their
            #       contributions toward the error, as calculated via the
            #       gradient descent and back propagation algorithms you
            #       learned in class.
target = self.get_target_for_label(label) error_2 = target - layer_2 error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2) error_1 = error_2_term * self.weights_1_2.T error_1_term = error_1 self.weights_1_2 += self.learning_rate * error_2_term * self.layer_1.T for index in review: self.weights_0_1[index] += self.learning_rate * error_1_term[0] # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. if np.abs(error_2) < 0.5: correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. self.layer_1 *= 0 indices = set(self.word2index[word] for word in review.lower().split(' ') if word in self.word2index.keys()) for index in indices: self.layer_1 += self.weights_0_1[index] layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2)) # TODO: The output layer should now contain a prediction. # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, # and `NEGATIVE` otherwise. return 'POSITIVE' if layer_2[0] >= 0.5 else 'NEGATIVE' """ Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a> TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Remove the update_input_layer function - you will not need it in this version. 
* Modify init_network: You no longer need a separate input layer, so remove any mention of self.layer_0 You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero Modify train: Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step. At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review. When updating weights_0_1, only update the individual weights that were used in the forward pass. Modify run: Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to recreate the network and train it once again. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. 
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
# Note: pass only density=True; combining it with the deprecated "normed"
# argument raises an error in recent versions of NumPy
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
           toolbar_location="above",
           title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
    frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
           toolbar_location="above",
           title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
"""
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
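The efficiency gain in Project 5 rests on an identity worth checking: when `layer_0` holds only 0s and 1s, the full matrix product equals the sum of the weight rows at the active indices, so the large multiply can be skipped entirely. A quick check with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_nodes = 1000, 10
weights_0_1 = rng.normal(size=(vocab_size, hidden_nodes))

indices = [4, 9, 42]                        # word indices present in a toy review
layer_0 = np.zeros((1, vocab_size))
layer_0[0, indices] = 1

dense = layer_0.dot(weights_0_1)            # multiplies all 1000 rows, mostly by zero
sparse = weights_0_1[indices].sum(axis=0)   # adds just the 3 relevant rows

print(np.allclose(dense[0], sparse))
```

Since reviews use only a tiny fraction of the vocabulary, the sparse form does a few row additions instead of a vocabulary-sized matrix multiply on every forward pass.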
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
#       -Modify it according to the above instructions
import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1,
                 min_count = 10, polarity_cutoff = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
            min_count(int) - Words must occur at least this many times to be added to the vocabulary
            polarity_cutoff(float) - The absolute value of a word's positive-to-negative ratio
                                     must be at least this big for the word to be added to the vocabulary
        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels, min_count, polarity_cutoff)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels, min_count, polarity_cutoff): positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for review, label in zip(reviews, labels): words = review.split(' ') if label == 'POSITIVE': positive_counts.update(words) elif label == 'NEGATIVE': negative_counts.update(words) total_counts.update(words) # Create Counter object to store positive/negative ratios pos_neg_ratios = {} for word, freq in total_counts.most_common(): if freq >= 50: pos_neg_ratios[word] = np.log(positive_counts[word] / float(negative_counts[word] + 1)) # TODO: populate review_vocab with all of the words in the given reviews # Remember to split reviews into individual words # using "split(' ')" instead of "split()". review_vocab = set( word for word, freq in total_counts.most_common() if freq >= min_count and word in pos_neg_ratios.keys() and np.abs(pos_neg_ratios[word]) >= polarity_cutoff ) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) # TODO: populate label_vocab with all of the words in the given labels. # There is no need to split the labels because each one is a single word. label_vocab = set(labels) # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. 
        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        # Create a dictionary of words in the vocabulary mapped to index positions
        self.word2index = {}
        # TODO: populate self.word2index with indices for all the words in self.review_vocab
        #       like you saw earlier in the notebook
        for i, word in enumerate(self.review_vocab):
            self.word2index[word] = i

        # Create a dictionary of labels mapped to index positions
        self.label2index = {}
        # TODO: do the same thing you did for self.word2index and self.review_vocab,
        #       but for self.label2index and self.label_vocab instead
        for i, word in enumerate(self.label_vocab):
            self.label2index[word] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Store the number of nodes in input, hidden, and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Store the learning rate
        self.learning_rate = learning_rate

        # Initialize weights

        # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
        #       the input layer and the hidden layer.
        self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))

        # TODO: initialize self.weights_1_2 as a matrix of random values.
        #       These are the weights between the hidden layer and the output layer.
        self.weights_1_2 = np.random.normal(0, self.output_nodes ** -0.5, (hidden_nodes, output_nodes))

        # Create the hidden layer directly (this version no longer keeps a separate
        # input layer), a two-dimensional matrix with shape 1 x hidden_nodes,
        # with all values initialized to zero
        self.layer_1 = np.zeros((1, hidden_nodes))

    def get_target_for_label(self,label):
        # TODO: Copy the code you wrote for get_target_for_label
        #       earlier in this notebook.
        return 1 if label == 'POSITIVE' else 0

    def sigmoid(self,x):
        # TODO: Return the result of calculating the sigmoid activation function
        #       shown in the lectures
        return 1 / (1 + np.exp(-x))

    def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
        return output * (1 - output)

    def train(self, training_reviews_raw, training_labels):
        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews_raw) == len(training_labels))

        training_reviews = [
            set(
                self.word2index[word]
                for word in review.split(' ')
                if word in self.word2index.keys()
            )
            for review in training_reviews_raw
        ]

        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0

        # Remember when we started for printing time statistics
        start = time.time()

        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):
            # TODO: Get the next review and its correct label
            review, label = training_reviews[i], training_labels[i]

            # TODO: Implement the forward pass through the network.
            #       That means use the given review to update the input layer,
            #       then calculate values for the hidden layer,
            #       and finally calculate the output layer.
            #
            #       Do not use an activation function for the hidden layer,
            #       but use the sigmoid activation function for the output layer.
            self.layer_1 *= 0
            for index in review:
                self.layer_1 += self.weights_0_1[index]
            layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))

            # TODO: Implement the back propagation pass here.
            #       That means calculate the error for the forward pass's prediction
            #       and update the weights in the network according to their
            #       contributions toward the error, as calculated via the
            #       gradient descent and back propagation algorithms you
            #       learned in class.
target = self.get_target_for_label(label) error_2 = target - layer_2 error_2_term = error_2 * self.sigmoid_output_2_derivative(layer_2) error_1 = error_2_term * self.weights_1_2.T error_1_term = error_1 self.weights_1_2 += self.learning_rate * error_2_term * self.layer_1.T for index in review: self.weights_0_1[index] += self.learning_rate * error_1_term[0] # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. if np.abs(error_2) < 0.5: correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. self.layer_1 *= 0 indices = set(self.word2index[word] for word in review.lower().split(' ') if word in self.word2index.keys()) for index in indices: self.layer_1 += self.weights_0_1[index] layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2)) # TODO: The output layer should now contain a prediction. # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, # and `NEGATIVE` otherwise. return 'POSITIVE' if layer_2[0] >= 0.5 else 'NEGATIVE' """ Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a> TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Modify pre_process_data: Add two additional parameters: min_count and polarity_cutoff Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.) 
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. 
Change so words are only added to the vocabulary if they occur more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff

Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])

from IPython.display import Image
Image(filename='sentiment_network_sparse.png')

from collections import Counter

def get_most_similar_words(focus = "horrible"):
    most_similar = Counter()
    for word in mlp_full.word2index.keys():
        most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
    return most_similar.most_common()

get_most_similar_words("excellent")
get_most_similar_words("terrible")

import matplotlib.colors as colors

words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)

for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)

pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
    if word in pos_neg_ratios.keys():
        vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
        if(pos_neg_ratios[word] > 0):
            pos+=1
            colors_list.append("#00ff00")
        else:
            neg+=1
            colors_list.append("#000000")

from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)

# bokeh imports (repeated here in case the earlier plotting cells were not run)
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import ColumnDataSource, LabelSet
output_notebook()

p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above",
           title="vector T-SNE for most polarized words")

source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
                                    x2=words_top_ted_tsne[:,1],
                                    names=words_to_visualize,
                                    color=colors_list))

p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")

word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
                       text_font_size="8pt", text_color="#555555",
                       source=source, text_align='center')
p.add_layout(word_labels)
show(p)

# green indicates positive words, black indicates negative words
"""
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson. Analysis: What's Going on in the Weights?<a id='lesson_7'></a> End of explanation """
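The vocabulary-filtering step described in the Project 6 instructions above can be sketched in isolation. This is only an illustrative sketch, not Andrew's solution: the `build_vocab` helper, the toy reviews, and the +1 smoothing inside the log ratio are assumptions made here.

```python
import math
from collections import Counter

def build_vocab(reviews, labels, min_count=20, polarity_cutoff=0.05):
    """Keep only words that are frequent enough and polarized enough."""
    pos_counts, neg_counts, total_counts = Counter(), Counter(), Counter()
    for review, label in zip(reviews, labels):
        for word in review.split(' '):
            total_counts[word] += 1
            if label == 'POSITIVE':
                pos_counts[word] += 1
            else:
                neg_counts[word] += 1

    vocab = set()
    for word, cnt in total_counts.items():
        if cnt >= min_count:
            # log ratio is ~0 for neutral words, large |value| for polarized ones
            ratio = math.log((pos_counts[word] + 1) / float(neg_counts[word] + 1))
            if abs(ratio) >= polarity_cutoff:
                vocab.add(word)
    return vocab

reviews = ['the movie was great great', 'the plot was terrible terrible']
labels = ['POSITIVE', 'NEGATIVE']
print(sorted(build_vocab(reviews, labels, min_count=2, polarity_cutoff=0.05)))  # -> ['great', 'terrible']
```

Neutral words like "the" and "was" appear equally often in both classes, so their log ratio is near zero and they are filtered out even though they pass `min_count`.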
sdpython/pyquickhelper
_doc/notebooks/javascript_extension.ipynb
mit
from pyquickhelper.ipythonhelper import install_notebook_extension, get_installed_notebook_extension """ Explanation: Javascript extension for a notebook Play with Javascript extensions. End of explanation """ install_notebook_extension() """ Explanation: We install extensions in case it was not done before: End of explanation """ from pyquickhelper.ipythonhelper.notebook_helper import get_jupyter_extension_dir path = get_jupyter_extension_dir() path get_installed_notebook_extension() import notebook notebook.nbextensions.check_nbextension('autosavetime', user=True) """ Explanation: We check the list of installed extensions (from IPython-notebook-extensions): End of explanation """ %%javascript require(['base/js/utils'], function(utils) { utils.load_extensions('autosavetime/main'); }); print(3) """ Explanation: And then, we load one of them: End of explanation """
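As a lighter-weight cross-check, the contents of the extension directory returned by `get_jupyter_extension_dir` can simply be listed with the standard library. This is a sketch: the `list_extensions` helper and the example path are made up here, not part of pyquickhelper.

```python
import os

def list_extensions(path):
    """Return the sorted names of entries under an nbextensions directory."""
    if not os.path.isdir(path):
        return []
    return sorted(os.listdir(path))

# hypothetical default location of user nbextensions
print(list_extensions(os.path.expanduser("~/.local/share/jupyter/nbextensions")))
```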
McIntyre-Lab/papers
fear_ase_2016/scripts/cis_summary/maren_equations_summary_jmf2.ipynb
lgpl-3.0
# Set-up default environment
%run '../ipython_startup.py'

# Import additional libraries
import sas7bdat as sas
import cPickle as pickle
import statsmodels.formula.api as smf

from ase_cisEq import marenEq
from ase_cisEq import marenPrintTable

from ase_normalization import meanCenter
from ase_normalization import q3Norm
from ase_normalization import meanStd

pjoin = os.path.join
"""
Explanation: Maren Equations Summary
This notebook is just pulling out the important figures and tables for the manuscript. For more detailed explanations and exploration, see the other notebooks.
End of explanation
"""
# Import clean dataset
with sas.SAS7BDAT(pjoin(PROJ, 'sas_data/clean_ase_stack.sas7bdat')) as FH:
    df = FH.to_data_frame()

dfClean = df[['line', 'mating_status', 'fusion_id', 'flag_AI_combined', 'q5_mean_theta', 'sum_both', 'sum_line', 'sum_tester', 'sum_total', 'mean_apn']]

print 'Rows ' + str(dfClean.shape[0])
print 'Columns ' + str(dfClean.shape[1])
print 'Number of Genotypes ' + str(len(set(dfClean['line'].tolist())))
print 'Number of Exonic Regions ' + str(len(set(dfClean['fusion_id'].tolist())))
"""
Explanation: Import clean data set
This data set was created by: ase_summarize_ase_filters.sas
The data has had the following dropped:
* regions that were always biased in the 100-genome simulation
* regions with APN $\le 25$
* regions not in at least 10% of genotypes
* regions not in mated and virgin
* genotypes with extreme bias in median(q5_mean_theta)
* genotypes with $\le500$ regions
End of explanation
"""
# Drop groups with less than 10 lines per fusion
grp = dfClean.groupby(['mating_status', 'fusion_id'])
dfGt10 = grp.filter(lambda x: x['line'].count() >= 10).copy()

print 'Rows ' + str(dfGt10.shape[0])
print 'Columns ' + str(dfGt10.shape[1])
print 'Number of Genotypes ' + str(len(set(dfGt10['line'].tolist())))
print 'Number of Exonic Regions ' + str(len(set(dfGt10['fusion_id'].tolist())))
"""
Explanation: Additional cleaning
For the Maren equations, I am also going to drop exonic
regions with less than 10 genotypes. The Maren equations make some assumptions about the population-level sums. Obviously, the more genotypes present for each fusion the better, but I am comfortable with as few as 10 genotypes.
End of explanation
"""
# Calculate Maren TIG equations by mating status and exonic region
marenRawCounts = marenEq(dfGt10, Eii='sum_line', Eti='sum_tester', group=['mating_status', 'fusion_id'])
marenRawCounts['mag_cis'] = abs(marenRawCounts['cis_line'])

marenRawCounts.columns
marenRawCounts
"""
Explanation: Raw Counts
End of explanation
"""
# Plot densities
def panelKde(df, **kwargs):
    options = {'subplots': True, 'layout': (7, 7), 'figsize': (20, 20), 'xlim': (-500, 500), 'legend': False, 'color': 'k'}
    options.update(kwargs)

    # Make plot
    axes = df.plot(kind='kde', **options)

    # Add titles instead of legends
    try:
        for ax in axes.ravel():
            h, l = ax.get_legend_handles_labels()
            ax.set_title(l[0])
            ax.get_yaxis().set_visible(False)
            ax.axvline(0, lw=1)
    except:
        ax = axes
        ax.get_yaxis().set_visible(False)
        ax.axvline(0, lw=1)

    return plt.gcf()

def linePanel(df, value='cis_line', index='fusion_id', columns='line'):
    mymap = {
        'cis_line': 'cis-Line Effects',
        'trans_line': 'trans-Line Effects',
        'line': 'genotype',
        'fusion_id': 'exonic_region'
    }

    # Iterate over mated and virgin
    for k, v in {'M': 'Mated', 'V': 'Virgin'}.iteritems():
        # Pivot data frame so that the thing you want to make panels by is in columns.
dfPiv = pd.pivot_table(df[df['mating_status'] == k], values=value, index=index, columns=columns) # Generate panel plot with at most 49 panels if value == 'cis_line': xlim = (-500, 500) else: # trans-effects appear to be larger in magnitude xlim = (-1500, 1500) fig = panelKde(dfPiv.iloc[:, :49], xlim=xlim) title = '{}\n{}'.format(mymap[value], v) fig.suptitle(title, fontsize=18, fontweight='bold') fname = pjoin(PROJ, 'pipeline_output/cis_effects/density_plot_by_{}_{}_{}.png'.format(mymap[columns], value, v.lower())) plt.savefig(fname, bbox_inches='tight') print("Saved figure to: " + fname) plt.close() def testerPanel(df, value='cis_tester'): mymap = { 'cis_tester': 'cis-Tester Effects', 'trans_tester': 'trans-Trans Effects' } # Iterate over mated and virgin for k, v in {'M': 'Mated', 'V': 'Virgin'}.iteritems(): # Split table by mating status and drop duplicates, because # there is only one tester value for each exonic region dfSub = df.ix[df['mating_status'] == k,['fusion_id', value]].drop_duplicates() # Generate Panel Plot fig = panelKde(dfSub, subplots=False) title = '{}\n{}'.format(mymap[value], v) fig.suptitle(title, fontsize=18, fontweight='bold') fname = pjoin(PROJ, 'pipeline_output/cis_effects/density_plot_{}_{}.png'.format(value, v.lower())) plt.savefig(fname, bbox_inches='tight') print("Saved figure to: " + fname) plt.close() # Cis and trans line effects by genotype linePanel(marenRawCounts, value='cis_line', index='fusion_id', columns='line') linePanel(marenRawCounts, value='trans_line', index='fusion_id', columns='line') # Cis and trans line effects by exonic region linePanel(marenRawCounts, value='cis_line', index='line', columns='fusion_id') linePanel(marenRawCounts, value='trans_line', index='line', columns='fusion_id') # Cis and trans tester effects testerPanel(marenRawCounts, value='cis_tester') testerPanel(marenRawCounts, value='trans_tester') """ Explanation: Plot Distribution of cis- and trans-effects End of explanation """ # Set Globals SHAPES = 
{'M': 'o', 'V': '^'} CMAP='jet' # Add color column to color by genotype colors = {} cnt = 0 genos = set(dfGt10['line'].tolist()) for l in genos: colors[l] = cnt cnt += 1 marenRawCounts['color'] = marenRawCounts['line'].map(colors) # Plotting scatter def getR2(df, x, y): """ Calculate the R-squared using OLS with an intercept """ formula = '{} ~ {} + 1'.format(y, x) return smf.ols(formula, df).fit().rsquared def scatPlt(df, x, y, c=None, cmap='jet', s=50, marker='o', ax=None, title=None, xlab=None, ylab=None, diag='pos'): """ Make a scatter plot using some default options """ ax = df.plot(x, y, kind='scatter', ax=ax, c=c, cmap=cmap, s=s, marker=marker, title=title, colorbar=False) # Add a diag line if diag == 'neg': # draw a diag line with negative slope ax.plot([0, 1], [1, 0], transform=ax.transAxes) elif diag == 'pos': # draw a diag line with positive slope ax.plot([0, 1], [0, 1], transform=ax.transAxes) ax.set_xlabel(xlab) ax.set_ylabel(ylab) return ax def scatPltPanel(df, line='sum_line', tester='sum_tester', x='cis_line', y='prop', cmap='jet', s=60, panel_title=None, diag='pos'): """ Make a panel of scatter plots using pandas """ # Plot the cis-line effects x proportion Line by fusion df['prop'] = 1 - df[tester] / (df[line] + df[tester]) # Create 5x5 panel plot fig, axes = plt.subplots(5, 5, figsize=(20, 20)) fig.suptitle(panel_title, fontsize=12, fontweight='bold') axes = axes.ravel() # Group by fusion_id for i, (n, gdf) in enumerate(df.groupby('fusion_id')): ax = axes[i] # Calculate R-squared value r2 = getR2(gdf, x, y) # Make new title with R-squared in it t = '{}\nR^2: {}'.format(n, round(r2, 3)) # Change marker style based on mating status and plot for ms, msdf in gdf.groupby('mating_status'): scatPlt(msdf, x, y, c='color', cmap=cmap, s=s, marker=SHAPES[ms], ax=ax, title=t, xlab=x, ylab=y, diag=diag) # only plot 25 fusions if i == 24: break fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_by_exonic_region_{}_v_{}.png'.format(x, y)) 
plt.savefig(fname, bbox_inches='tight') print("Saved figure to: " + fname) plt.close() # Plot the cis-line effects x proportion by fusion scatPltPanel(marenRawCounts, line='sum_line', tester='sum_tester', cmap=CMAP, panel_title='Raw Counts: cis-line') # Plot the trans-line effects x proportion by fusion scatPltPanel(marenRawCounts, line='sum_line', tester='sum_tester', x='trans_line', cmap=CMAP, panel_title='Raw Counts: trans-line', diag='neg') """ Explanation: Plot cis- and trans-effects vs Allelic Proportion End of explanation """ # Plot F10005_SI FUSION='F10005_SI' dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy() dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester']) # Generate 3 panel plot fig, axes = plt.subplots(1, 3, figsize=(12, 4)) fig.suptitle(FUSION, fontsize=14, fontweight='bold') for n, mdf in dfFus.groupby('mating_status'): # Plot the cis-line effects x proportion by fusion scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n], title='cis-line', xlab='cis-line', ylab='prop') # Plot the trans-line effects x proportion by fusion scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n], title='trans-line', xlab='trans-line', ylab='prop', diag='neg') # Plot the Tester effects x proportion by fusion scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n], title='Tester', xlab='cis-tester', ylab='prop', diag=None) fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION)) plt.savefig(fname, bbox_inches='tight') print("Saved figure to: " + fname) plt.close() """ Explanation: Plot cis- and trans-effects vs Allelic Proportion for Specific Exonic Regions End of explanation """ # Plot F10317_SI FUSION='F10317_SI' dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy() dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester']) # Generate 
3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')

for n, mdf in dfFus.groupby('mating_status'):
    # Plot the cis-line effects x proportion by fusion
    scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n],
            title='cis-line', xlab='cis-line', ylab='prop')

    # Plot the trans-line effects x proportion by fusion
    scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n],
            title='trans-line', xlab='trans-line', ylab='prop', diag='neg')

    # Plot the Tester effects x proportion by fusion
    scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n],
            title='Tester', xlab='cis-tester', ylab='prop', diag=None)

fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
"""
Explanation: F10317_SI
This fusion has weaker cis-line effects, but its trans-line effects look more linear.
End of explanation
"""
# Plot F10482_SI
FUSION='F10482_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])

# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')

for n, mdf in dfFus.groupby('mating_status'):
    # Plot the cis-line effects x proportion by fusion
    scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n],
            title='cis-line', xlab='cis-line', ylab='prop')

    # Plot the trans-line effects x proportion by fusion
    scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n],
            title='trans-line', xlab='trans-line', ylab='prop', diag='neg')

    # Plot the Tester effects x proportion by fusion
    scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n],
            title='Tester', xlab='cis-tester', ylab='prop', diag=None)

fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()

meanByMsLine = marenRawCounts[['mean_apn', 'cis_line', 'mating_status', 'line']].groupby(['mating_status', 'line']).mean()
meanByMsLine.columns

meanByMsLine.plot(kind='scatter', x='mean_apn', y='cis_line')

def cisAPN(df, fusion, value='cis_line', xcutoff='>=150', ycutoff='<=-180'):
    """ Plot effects vs mean apn"""
    # Pull out fusion of interest
    dfSub = df[df['fusion_id'] == fusion]

    # Make scatter plot
    fig, ax = plt.subplots(1, 1, figsize=(10, 10))
    dfSub.plot(kind='scatter', x='mean_apn', y='cis_line', ax=ax, title=fusion)

    # Annotate outliers: build boolean masks from the cutoff strings
    mask = eval("dfSub[value] " + ycutoff) | eval("dfSub['mean_apn'] " + xcutoff)
    filt = dfSub.loc[mask, ['line', 'mating_status', 'mean_apn', 'cis_line']]
    for row in filt.values:
        line, ms, apn, cis = row
        ax.annotate(line + '_' + ms, xy=(apn, cis))

    fname = pjoin(PROJ,
'pipeline_output/cis_effects/scatter_plot_{}_{}_v_meanApn.png'.format(fusion, value))
    plt.savefig(fname, bbox_inches='tight')

marenRawCounts.columns

mated = marenRawCounts[marenRawCounts['mating_status'] == 'M']
genos = set(mated['line'].tolist())
mated.head()

grp = mated.groupby('line')
sub = grp.get_group('r324')

fig, ax = plt.subplots(1, 1, figsize=(10, 6), dpi=300)
ax.axvline(0, c='k', lw=1)
sub['cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='all', style='b-', lw=3)
sub.ix[sub['flag_AI_combined'] == 1, 'cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='AI', style='r--')
sub.ix[sub['flag_AI_combined'] == 0, 'cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='no AI', style='m-.')
plt.legend()
plt.title('r324')
"""
Explanation: F10482_SI
This fusion has weaker cis-line effects, but its trans-line effects look more linear.
End of explanation
"""
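The allelic proportion used on the y-axis of the scatter plots above, `prop = 1 - sum_tester / (sum_line + sum_tester)`, can be computed on its own. The read counts below are toy numbers for illustration, not real data from this study:

```python
import pandas as pd

# toy allele-specific read counts (made up for illustration)
toy = pd.DataFrame({'sum_line': [120, 80, 200], 'sum_tester': [60, 80, 50]})

# fraction of reads attributed to the line allele
toy['prop'] = 1 - toy['sum_tester'] / (toy['sum_line'] + toy['sum_tester'])
print(toy['prop'].round(3).tolist())  # -> [0.667, 0.5, 0.8]
```

A value of 0.5 indicates balanced expression of the two alleles; departures toward 0 or 1 indicate allelic imbalance.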
NlGG/MachineLearning
.ipynb_checkpoints/PSO_discre-checkpoint.ipynb
mit
%matplotlib inline
import numpy as np
import pylab as pl
import math
from sympy import *
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D

def TSP_map(N):
    # Place N points on a 100x100 square lattice.
    TSP_map = []
    X = [i for i in range(100)]
    Y = [i for i in range(100)]
    x = np.array([])
    y = np.array([])
    for i in range(N):
        x = np.append(x, np.random.choice(X))
        y = np.append(y, np.random.choice(Y))
    for i in range(N):
        TSP_map.append([x[i], y[i]])
    return TSP_map

class PSO:
    def __init__(self, N, pN, omega, alpha, beta):
        self.N = N
        self.pN = pN
        self.omega = omega
        self.alpha = alpha
        self.beta = beta
        self.city = TSP_map(N)

    def initialize(self):
        ptcl = np.array([])
        for i in range(self.pN):
            a = np.random.choice([i for i in range(self.N - 1)])
            b = np.random.choice([i for i in range(a + 1, self.N)])
            # the initial velocity is a list holding a single swap operator [a, b]
            V = [[a, b]]
            ptcl = np.append(ptcl, particle(i, V, self.N, self.omega, self.alpha, self.beta))
        self.ptcl = ptcl
        return self.ptcl

    def one_simulate(self):
        for i in range(self.pN):
            self.ptcl[i].SS_id()
            self.ptcl[i].SS_gd(self.p_gd_X)
            self.ptcl[i].new_V()
            self.ptcl[i].new_X()
            self.ptcl[i].P_id(self.city)

    def simulate(self, sim_num):
        for i in range(self.pN):
            self.ptcl[i].initial(self.city)
        self.p_gd_X = self.P_gd()
        for i in range(sim_num):
            self.one_simulate()
            self.p_gd_X = self.P_gd()
        return self.p_gd_X

    def P_gd(self):
        # the global best is the particle whose best tour is the shortest
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if self.ptcl[i].p_id < P_gd:
                P_gd = self.ptcl[i].p_id
                self.no = i
        return self.ptcl[self.no].p_id_X

class particle:
    def __init__(self, No, V, num_city, omega, alpha, beta):
        # No is the particle's index (number).
        self.No = No
        self.V = V
        self.num_city = num_city
        self.omega = omega
        self.alpha = alpha
        self.beta = beta
        self.X = self.init_X()

    def initial(self, city):
        self.ss_id = []
        self.ss_gd = []
        self.P_id(city)

    def init_X(self):
        c = np.array([i for i in range(self.num_city)])
        np.random.shuffle(c)
        return c

    def SO(self, V, P):
        SO = []
        for i in range(len(V)):
            if V[i] != P[i]:
                t =
np.where(V == P[i])
                t = int(t[0])
                a = V[i]
                b = V[t]
                V[i] = b
                V[t] = a
                SO.append([i, t])
        return SO

    def SS_id(self):
        # work on a copy so computing the swap sequence does not move the particle
        self.ss_id = self.SO(np.copy(self.X), self.p_id_X)

    def SS_gd(self, p_gd_X):
        self.ss_gd = self.SO(np.copy(self.X), p_gd_X)

    def select(self, V, v, p):
        # keep each swap operator in v with probability p and append it to V
        for s in v:
            if np.random.random() < p:
                V.append(s)
        return V

    def new_V(self):
        V = []
        # guard against a malformed previous velocity that is not a list of pairs
        prev = [s for s in self.V if np.iterable(s)]
        V = self.select(V, prev, self.omega)
        V = self.select(V, self.ss_id, self.alpha)
        V = self.select(V, self.ss_gd, self.beta)
        self.V = V
        return self.V

    def new_X(self):
        for i in range(len(self.V)):
            j = self.V[i][0]
            k = self.V[i][1]
            a = self.X[j]
            b = self.X[k]
            self.X[j] = b
            self.X[k] = a
        return self.X

    def P_id(self, city):
        # Sum the distances between consecutive cities to obtain P_id.
        P_id = 0
        for i in range(self.num_city):
            if i != self.num_city-1:
                x1 = city[self.X[i]][0]
                y1 = city[self.X[i]][1]
                x2 = city[self.X[i+1]][0]
                y2 = city[self.X[i+1]][1]
            else:
                x1 = city[self.X[i]][0]
                y1 = city[self.X[i]][1]
                x2 = city[self.X[0]][0]
                y2 = city[self.X[0]][1]
            a = np.array([x1, y1])
            b = np.array([x2, y2])
            u = b - a
            p = np.linalg.norm(u)
            P_id += p
        # keep the best (shortest) tour this particle has seen
        if not hasattr(self, 'p_id') or P_id < self.p_id:
            self.p_id = P_id
            self.p_id_X = np.copy(self.X)
        return self.p_id
"""
Explanation: The PSO algorithm follows the one described in the following paper:
http://ci.nii.ac.jp/els/110006977755.pdf?id=ART0008887051&type=pdf&lang=en&host=cinii&order_no=&ppv_type=0&lang_sw=&no=1452683083&cp=
End of explanation
"""
pso = PSO(10, 10, 0.3, 0.1, 0.3)
"""
Explanation: We run the algorithm with 10 cities. The parameters are, from left to right: (number of cities, number of particles, influence rate of the previous velocity, influence rate of the local best, influence rate of the global best).
End of explanation
"""
pso.city
x = []
y = []
for i in range(len(pso.city)):
    x.append(pso.city[i][0])
    y.append(pso.city[i][1])
plt.scatter(x, y)
"""
Explanation: The city coordinates and their plot.
End of explanation
"""
pso.initialize()
"""
Explanation: Initialize the particles.
End of explanation
"""
pso.simulate(100)
x = []
y = []
for i in pso.p_gd_X:
    x.append(pso.city[i][0])
    y.append(pso.city[i][1])
plt.plot(x, y)
pso.no
"""
Explanation: We ran the simulation for 100 iterations.
End of explanation
"""
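The swap-operator algebra that `SO`, `new_V`, and `new_X` rely on — a velocity is a sequence of index swaps, and applying the swap sequence `SS(P, X)` to a tour `X` turns it into `P` — can be illustrated in isolation. The helper names below are made up for this sketch:

```python
def apply_swaps(tour, swaps):
    """Apply a list of (i, j) swap operators to a tour, in order."""
    tour = list(tour)
    for i, j in swaps:
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def swap_sequence(src, dst):
    """Return swaps that transform src into dst (the SS(dst, src) of the PSO paper)."""
    src = list(src)
    swaps = []
    for i in range(len(src)):
        if src[i] != dst[i]:
            j = src.index(dst[i])
            src[i], src[j] = src[j], src[i]
            swaps.append((i, j))
    return swaps

a, b = [0, 1, 2, 3], [2, 0, 3, 1]
ss = swap_sequence(a, b)
print(apply_swaps(a, ss))  # -> [2, 0, 3, 1]
```

This round trip — compute the swap sequence between two tours, then replay it — is exactly what lets a discrete PSO treat "position minus position" as a velocity.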
silburt/rebound2
ipython_examples/FourierSpectrum.ipynb
gpl-3.0
import rebound import numpy as np sim = rebound.Simulation() sim.units = ('AU', 'yr', 'Msun') sim.add("Sun") sim.add("Jupiter") sim.add("Saturn") """ Explanation: Fourier analysis & resonances A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database: End of explanation """ sim.integrator = "whfast" sim.dt = 1. # in years. About 10% of Jupiter's period sim.move_to_com() """ Explanation: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period. End of explanation """ Nout = 100000 tmax = 3.e5 Nplanets = 2 x = np.zeros((Nplanets,Nout)) ecc = np.zeros((Nplanets,Nout)) longitude = np.zeros((Nplanets,Nout)) varpi = np.zeros((Nplanets,Nout)) times = np.linspace(0.,tmax,Nout) ps = sim.particles for i,time in enumerate(times): sim.integrate(time) # note we used above the default exact_finish_time = 1, which changes the timestep near the outputs to match # the output times we want. This is what we want for a Fourier spectrum, but technically breaks WHFast's # symplectic nature. Not a big deal here. os = sim.calculate_orbits() for j in range(Nplanets): x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0 ecc[j][i] = os[j].e longitude[j][i] = os[j].l varpi[j][i] = os[j].Omega + os[j].omega """ Explanation: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. 
Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum. End of explanation """ %matplotlib inline labels = ["Jupiter", "Saturn"] import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) plt.plot(times,ecc[0],label=labels[0]) plt.plot(times,ecc[1],label=labels[1]) ax.set_xlabel("Time (yrs)", fontsize=20) ax.set_ylabel("Eccentricity", fontsize=20) ax.tick_params(labelsize=20) plt.legend(); """ Explanation: Let's see what the eccentricity evolution looks like with matplotlib: End of explanation """ from scipy import signal Npts = 3000 logPmin = np.log10(10.) logPmax = np.log10(1.e5) Ps = np.logspace(logPmin,logPmax,Npts) ws = np.asarray([2*np.pi/P for P in Ps]) periodogram = signal.lombscargle(times,x[0],ws) fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([10**logPmin,10**logPmax]) ax.set_ylim([0,0.15]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps). 
Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position. End of explanation """ fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([600,1600]) ax.set_ylim([0,0.003]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range: End of explanation """ def zeroTo360(val): while val < 0: val += 2*np.pi while val > 2*np.pi: val -= 2*np.pi return val*180/np.pi """ Explanation: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. 
This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak. In this case, we have that the mean longitude of Jupiter $\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\phi_{5:2} = 5\lambda_S - 2\lambda_J - 3\varpi_J$, where $\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates. To see a clear trend, we have to shift each value of $\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees: End of explanation """ phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{5:2}$", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs. End of explanation """ phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi2) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{2:1}$", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. 
This is approximately the same oscillation period observed in the Solar System, despite our simplified setup! This resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\phi_{2:1} = 2\lambda_S - \lambda_J - \varpi_J$, End of explanation """
robblack007/clase-dinamica-robot
Practicas/.ipynb_checkpoints/Practica 5 - Modelado de Robots-checkpoint.ipynb
mit
from scipy.integrate import odeint from numpy import linspace """ Explanation: Robot Modeling Recalling the previous lab, the differential equation that characterizes a mass-spring-damper system is: $$ m \ddot{x} + c \dot{x} + k x = F $$ and we reviewed 3 ways to obtain the behavior of that system. However, we are interested in the behavior of a more complex system, a robot; we will start with a simple pendulum, which has the following equation of motion: $$ m l^2 \ddot{q} + m g l \cos{q} = \tau $$ As we can see, the two equations are similar in that they involve a single variable; however, in the second equation our variable appears inside a nonlinear function ($\cos{q}$), so our differential equation is nonlinear, and therefore we cannot use the transfer-function formalism to solve it; we have to use the odeint function instead. Since it is of second order, we have to split our differential equation into two simpler ones, so we will use the following trick: $$ \frac{d}{dt} q = \dot{q} $$ Then we have two differential equations, so we can solve for two unknowns, $q$ and $\dot{q}$. Using our knowledge of linear algebra, we can arrange our system of equations in matrix form, so that where before we had: $$ \begin{align} \frac{d}{dt} q &= \dot{q} \ \frac{d}{dt} \dot{q} &= \ddot{q} = \frac{\tau - m g l \cos{q}}{ml^2} \end{align} $$ We can see that our system of equations has a larger state than before; the differential equation that we had as nonlinear, second-order, can be written as nonlinear, first-order, as long as our state is larger.
Let's define what we mean by state: $$ x = \begin{pmatrix} q \ \dot{q} \end{pmatrix} $$ With this definition of state, we can write the system of equations above as: $$ \frac{d}{dt} x = \dot{x} = \frac{d}{dt} \begin{pmatrix} q \ \dot{q} \end{pmatrix} = \begin{pmatrix} \dot{q} \ \frac{\tau - m g l \cos{q}}{ml^2} \end{pmatrix} $$ or $\dot{x} = f(x)$, where $f(x)$ is a vector function, or rather, a vector of functions: $$ f(x) = \begin{pmatrix} \dot{q} \ \frac{\tau - m g l \cos{q}}{ml^2} \end{pmatrix} $$ So we are now ready to simulate this mechanical system with the help of odeint(); let's start by importing the necessary libraries: End of explanation """ def f(x, t): from numpy import cos q, q̇ = x τ = 0 m = 1 g = 9.81 l = 1 return [q̇, (τ - m*g*l*cos(q))/(m*l**2)] """ Explanation: and defining a function that returns an array with the values of $f(x)$ End of explanation """ ts = linspace(0, 10, 100) x0 = [0, 0] """ Explanation: We will simulate from time $0$ to $10$, and the initial conditions of the pendulum are $q=0$ and $\dot{q} = 0$.
End of explanation """ xs = odeint(func = f, y0 = x0, t = ts) qs, q̇s = list(zip(*xs.tolist())) """ Explanation: We use the odeint function to simulate the behavior of the pendulum, giving it the function we programmed with the dynamics of $f(x)$, and we extract the values of $q$ and $\dot{q}$ that odeint returned wrapped in the state $x$ End of explanation """ %matplotlib inline from matplotlib.pyplot import style, plot, figure style.use("ggplot") fig1 = figure(figsize = (8, 8)) ax1 = fig1.gca() ax1.plot(xs); fig2 = figure(figsize = (8, 8)) ax2 = fig2.gca() ax2.plot(qs) ax2.plot(q̇s); """ Explanation: At this point we already have our simulation data; all that remains is to plot it to interpret the results: End of explanation """ from matplotlib import animation from numpy import sin, cos, arange # Define the figure size fig = figure(figsize=(8, 8)) # Define a single plot in the figure and set the limits of the x and y axes axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.5, 1.5), ylim=(-2, 1)) # Use line plots for the pendulum link linea, = axi.plot([], [], "-o", lw=2, color='gray') def init(): # This function runs only once and initializes the system linea.set_data([], []) return linea def animate(i): # This function runs for every frame of the GIF # Get the x and y coordinates of the link xs, ys = [[0, cos(qs[i])], [0, sin(qs[i])]] linea.set_data(xs, ys) return linea # Build the animation, giving it the figure defined at the beginning, the function to # run for each frame, the number of frames to make, the period of each frame and the # initial function ani = animation.FuncAnimation(fig, animate, arange(1, len(qs)), interval=25, blit=True, init_func=init) # Save the GIF to the indicated file ani.save('./imagenes/pendulo-simple.gif', writer='imagemagick'); """ Explanation: But trajectory plots are boring; remember that we can make an animation with matplotlib: End of explanation """
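As a sanity check on the first-order state-space rewrite used above, we can integrate the same $f(x)$ near the pendulum's stable equilibrium $q = -\pi/2$ and compare against the analytic small-oscillation solution with frequency $\sqrt{g/l}$. This is a self-contained sketch (the torque is zero and the names eps and omega are our own, not from the notebook):

```python
# Sanity check of the state-space form: for small oscillations about the
# stable equilibrium q = -pi/2, the pendulum behaves like a harmonic
# oscillator with angular frequency sqrt(g/l).
import numpy as np
from scipy.integrate import odeint

m, g, l = 1.0, 9.81, 1.0

def f(x, t, tau=0.0):
    q, qd = x
    return [qd, (tau - m * g * l * np.cos(q)) / (m * l**2)]

eps = 1e-3                        # small initial displacement from equilibrium
ts = np.linspace(0.0, 2.0, 2001)
xs = odeint(f, [-np.pi / 2 + eps, 0.0], ts)

omega = np.sqrt(g / l)            # expected small-oscillation frequency
analytic = -np.pi / 2 + eps * np.cos(omega * ts)
```

The numerical trajectory should track the analytic one to high accuracy as long as the displacement stays small, which confirms that the first-order rewrite preserves the original second-order dynamics.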
CalPolyPat/phys202-2015-work
assignments/assignment09/IntegrationEx02.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy import integrate """ Explanation: Integration Exercise 2 Imports End of explanation """ def integrand(x, a): return 1.0/(x**2 + a**2) def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return 0.5*np.pi/a print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral """ Explanation: Indefinite integrals Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc. Find five of these integrals and perform the following steps: Typeset the integral using LaTeX in a Markdown cell. Define an integrand function that computes the value of the integrand. Define an integral_approx function that uses scipy.integrate.quad to perform the integral. Define an integral_exact function that computes the exact value of the integral. Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like: Example Here is the integral I am performing: $$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$ End of explanation """ def integrand(x, p): return x**(p-1)/(1+x) def integral_approx(p): I, e = integrate.quad(integrand, 0, np.inf, args=(p,)) return I def integral_exact(p): return np.pi/np.sin(p*np.pi) print("Numerical: ", integral_approx(.5)) print("Exact : ", integral_exact(.5)) assert True # leave this cell to grade the above integral """ Explanation: Integral 1 $$I = \int_0^\infty \frac{x^{p-1}dx}{1+x} = \frac{\pi}{sin p \pi}, 0<p<1$$ End of explanation """ def integrand(x, a, b): return 1/(a+b*np.sin(x)) def integral_approx(a,b): I, e = integrate.quad(integrand, 0, 2*np.pi, args=(a,b,)) return I def integral_exact(a,b): return 2*np.pi/(a**2+b**2)**.5 print("Numerical: ", integral_approx(.5, .5)) print("Exact : ", integral_exact(.5, .5)) assert True # leave this cell to grade the above integral """ Explanation: Integral 2 $$ I = \int_0^{2\pi} \frac{dx}{a+bsin(x)} = \frac{2\pi}{\sqrt{a^2+b^2}}$$ End of explanation """ def integrand(x, a): return np.exp(-a*x**2) def integral_approx(a): I, e = integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return .5*(np.pi/a)**.5 print("Numerical: ", integral_approx(.5)) print("Exact : ", integral_exact(.5)) assert True # leave this cell to grade the above integral """ Explanation: Integral 3 $$I = \int_0^\infty e^{-ax^2}dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} $$ End of explanation """ def integrand(x): return np.log(x)/(1+x) def integral_approx(): I, e = integrate.quad(integrand, 0, 1) return I def integral_exact(): return -np.pi**2/12 print("Numerical: ", integral_approx()) print("Exact : ", integral_exact()) assert True # leave this cell to grade the above integral """ Explanation: Integral 4 $$I = \int_0^1 \frac{ln(x)}{1+x}dx = -\frac{\pi^2}{12}$$ End of explanation """ def integrand(x): return 1/np.cosh(x) def 
integral_approx(): I, e = integrate.quad(integrand, -np.inf, np.inf) return I def integral_exact(): return np.pi print("Numerical: ", integral_approx()) print("Exact : ", integral_exact()) assert True # leave this cell to grade the above integral """ Explanation: Integral 5 $$I = \int_{-\infty}^\infty \frac{1}{cosh x}dx = \pi $$ End of explanation """
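One detail the helpers above discard: scipy.integrate.quad returns a pair, the integral estimate and an absolute-error estimate, which the integral_approx functions drop as e. A minimal sketch using Integral 5 from above to report the error estimate alongside the value:

```python
import numpy as np
from scipy import integrate

def integrand(x):
    return 1.0 / np.cosh(x)

# quad returns (estimate, estimated absolute error)
I, err = integrate.quad(integrand, -np.inf, np.inf)

exact = np.pi
```

Printing `err` next to the estimate is a cheap way to see whether quad believed it converged; a large reported error is a hint to split the interval or transform the integrand.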
mitdbg/modeldb
client/workflows/demos/census-end-to-end-local-data-example.ipynb
mit
# restart your notebook if prompted on Colab try: import verta except ImportError: !pip install verta """ Explanation: Logistic Regression with Grid Search (scikit-learn) <a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-end-to-end-local-data-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ HOST = "app.verta.ai" PROJECT_NAME = "Census Income Classification - Local Data" EXPERIMENT_NAME = "Logistic Regression" # import os # os.environ['VERTA_EMAIL'] = '' # os.environ['VERTA_DEV_KEY'] = '' """ Explanation: This example builds on our basic census income classification example by incorporating local data versioning. End of explanation """ from __future__ import print_function import warnings from sklearn.exceptions import ConvergenceWarning warnings.filterwarnings("ignore", category=ConvergenceWarning) warnings.filterwarnings("ignore", category=FutureWarning) import itertools import os import time import six import numpy as np import pandas as pd import sklearn from sklearn import model_selection from sklearn import linear_model from sklearn import metrics try: import wget except ImportError: !pip install wget # you may need pip3 import wget """ Explanation: Imports End of explanation """ from verta import Client from verta.utils import ModelAPI client = Client(HOST) proj = client.set_project(PROJECT_NAME) expt = client.set_experiment(EXPERIMENT_NAME) """ Explanation: Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. 
Instantiate Client End of explanation """ from verta.dataset import Path dataset = client.set_dataset(name="Census Income Local") version = dataset.create_version(Path("census-train.csv")) DATASET_PATH = "./" train_data_filename = DATASET_PATH + "census-train.csv" test_data_filename = DATASET_PATH + "census-test.csv" df_train = pd.read_csv(train_data_filename) X_train = df_train.iloc[:,:-1] y_train = df_train.iloc[:, -1] df_train.head() """ Explanation: <h2 style="color:blue">Prepare Data</h2> End of explanation """ hyperparam_candidates = { 'C': [1e-6, 1e-4], 'solver': ['lbfgs'], 'max_iter': [15, 28], } hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values)) for values in itertools.product(*hyperparam_candidates.values())] """ Explanation: Prepare Hyperparameters End of explanation """ def run_experiment(hyperparams): # create object to track experiment run run = client.set_experiment_run() # create validation split (X_val_train, X_val_test, y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train, test_size=0.2, shuffle=True) # log hyperparameters run.log_hyperparameters(hyperparams) print(hyperparams, end=' ') # create and train model model = linear_model.LogisticRegression(**hyperparams) model.fit(X_train, y_train) # calculate and log validation accuracy val_acc = model.score(X_val_test, y_val_test) run.log_metric("val_acc", val_acc) print("Validation accuracy: {:.4f}".format(val_acc)) # create model API model_api = ModelAPI(X_train, y_train) # save and log model run.log_model(model, model_api=model_api) run.log_requirements(["scikit-learn"]) # log dataset snapshot as version run.log_dataset_version("train", version) for hyperparams in hyperparam_sets: run_experiment(hyperparams) """ Explanation: Train Models End of explanation """ best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0] print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc"))) best_hyperparams = best_run.get_hyperparameters() 
print("Hyperparameters: {}".format(best_hyperparams)) """ Explanation: Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run End of explanation """ model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams) model.fit(X_train, y_train) """ Explanation: Train on Full Dataset End of explanation """ train_acc = model.score(X_train, y_train) print("Training accuracy: {:.4f}".format(train_acc)) """ Explanation: Calculate Accuracy on Full Training Set End of explanation """ model_id = 'YOUR_MODEL_ID' run = client.set_experiment_run(id=model_id) """ Explanation: Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB. End of explanation """ df_test = pd.read_csv(test_data_filename) X_test = df_test.iloc[:,:-1] """ Explanation: Prepare "Live" Data End of explanation """ run.deploy(wait=True) run """ Explanation: Deploy Model End of explanation """ deployed_model = run.get_deployed_model() for x in itertools.cycle(X_test.values.tolist()): print(deployed_model.predict([x])) time.sleep(.5) """ Explanation: Query Deployed Model End of explanation """
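The hyperparam_sets construction above (a dict of candidate lists expanded through itertools.product) is a handy manual grid-search pattern on its own. A self-contained sketch of just that expansion, separate from the Verta-specific logging code:

```python
import itertools

hyperparam_candidates = {
    'C': [1e-6, 1e-4],
    'solver': ['lbfgs'],
    'max_iter': [15, 28],
}

# Cartesian product of all candidate values, re-zipped with the keys so each
# element is a complete kwargs dict for one model configuration.
hyperparam_sets = [
    dict(zip(hyperparam_candidates.keys(), values))
    for values in itertools.product(*hyperparam_candidates.values())
]
```

Each resulting dict can be passed directly as `**hyperparams` to the model constructor, which is exactly how run_experiment consumes them above.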
google/applied-machine-learning-intensive
content/06_other_models/05_svm/colab.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/05_svm/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2020 Google LLC. End of explanation """ import pandas as pd from sklearn.datasets import load_iris iris_bunch = load_iris() iris_df = pd.DataFrame(iris_bunch.data, columns=iris_bunch.feature_names) iris_df['species'] = iris_bunch.target iris_df.describe() """ Explanation: Support Vector Machines Support Vector Machines (SVM) are powerful tools for performing both classification and regression tasks. In this colab we'll create a classification model using an SVM in scikit-learn. Load the Data Let's begin by loading a dataset that we'll use for classification. End of explanation """ from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(iris_df[iris_bunch.feature_names]) scaler.mean_ """ Explanation: You can see in the data description above that the range of values for each of the columns is quite a bit different. For instance, the mean sepal length is almost twice as big as the mean sepal width. SVM is sensitive to features with different scales. We'll run the data through the StandardScaler to get all of the feature data scaled. First let's create the scaler and fit it to our features.
End of explanation """ iris_df[iris_bunch.feature_names] = scaler.transform( iris_df[iris_bunch.feature_names]) iris_df.describe() """ Explanation: We can now transform the data by applying the scaler. End of explanation """ iris_df = iris_df.rename(index=str, columns={ 'sepal length (cm)': 'sepal_length', 'sepal width (cm)': 'sepal_width', 'petal length (cm)': 'petal_length', 'petal width (cm)': 'petal_width'}) iris_df.head() """ Explanation: Since we scaled the data, the column names are now a bit deceiving. These are no longer unaltered centimeters, but normalized lengths. Let's rename the columns to get "(cm)" out of the names. End of explanation """ features = ['petal_length', 'petal_width'] target = 'species' """ Explanation: We could use all of the features to train our model, but in this case we are going to pick two features so that we can make some nice visualizations later on in the colab. End of explanation """ from sklearn.svm import LinearSVC classifier = LinearSVC() classifier.fit(iris_df[features], iris_df[target]) """ Explanation: Now we can create and train a classifier. There are multiple ways to create an SVM model in scikit-learn. We are going to use the linear support vector classifier. End of explanation """ from sklearn.metrics import f1_score predictions = classifier.predict(iris_df[features]) f1_score(iris_df[target], predictions, average='micro') """ Explanation: We can now use our model to make predictions. We'll make predictions on the data we just trained on in order to get an F1 score. End of explanation """ import matplotlib.pyplot as plt import numpy as np # Find the smallest value in the feature data. We are looking across both # features since we scaled them. Make the min value a little smaller than # reality in order to better see all of the points on the chart. min_val = min(iris_df[features].min()) - 0.25 # Find the largest value in the feature data. 
Make the max value a little bigger # than reality in order to better see all of the points on the chart. max_val = max(iris_df[features].max()) + 0.25 # Create a range of numbers from min to max with some small step. This will be # used to make multiple predictions that will create the decision boundary # outline. rng = np.arange(min_val, max_val, .02) # Create a grid of points. xx, yy = np.meshgrid(rng, rng) # Make predictions on every point in the grid. predictions = classifier.predict(np.c_[xx.ravel(), yy.ravel()]) # Reshape the predictions for plotting. zz = predictions.reshape(xx.shape) # Plot the predictions on the grid. plt.contourf(xx, yy, zz) # Plot each class of iris with a different marker. # Class 0 with circles # Class 1 with triangles # Class 2 with squares for species_and_marker in ((0, 'o'), (1, '^'), (2, 's')): plt.scatter( iris_df[iris_df[target] == species_and_marker[0]][features[0]], iris_df[iris_df[target] == species_and_marker[0]][features[1]], marker=species_and_marker[1]) plt.show() """ Explanation: We can visualize the decision boundaries using the pyplot contourf function. End of explanation """ # Your code goes here """ Explanation: Exercises Exercise 1: Polynomial SVC The scikit-learn module also has an SVC classifier that can use non-linear kernels. Create an SVC classifier with a 3-degree polynomial kernel, and train it on the iris data. Make predictions on the iris data that you trained on, and then print out the F1 score. Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 2: Plotting Create a plot that shows the decision boundaries of the polynomial SVC that you created in exercise 1. Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 3: C Hyperparameter We accepted the default 1.0 C hyperparameter in the classifier above. Try halving and doubling the C value. How does it affect the F1 score? Visualize the decision boundaries. Do they visibly change? 
Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 4: Regression Use the LinearSVR to predict Boston housing prices in the Boston housing dataset. Hold out some test data and print your final RMSE. Student Solution End of explanation """
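For Exercise 1, one possible sketch, not an official solution: an SVC with a degree-3 polynomial kernel trained on the two scaled petal features used earlier. The data pipeline is rebuilt here so the snippet stands alone, and sklearn's original column names are kept:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
# Scale all features, as in the colab above
X = pd.DataFrame(StandardScaler().fit_transform(X), columns=iris.feature_names)
# Same two features used in the colab (petal length and width)
X2 = X[['petal length (cm)', 'petal width (cm)']]
y = iris.target

# Degree-3 polynomial kernel instead of the linear classifier used earlier
clf = SVC(kernel='poly', degree=3)
clf.fit(X2, y)

score = f1_score(y, clf.predict(X2), average='micro')
```

As in the linear case, scoring on the training data only shows how well the boundary fits these points; held-out data would be needed to judge generalization.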
SonneSun/self_driving_car_projects
1_Finding_Lane_Lines.ipynb
apache-2.0
#importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline #reading in an image image = mpimg.imread('video_test/frame219.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') """ Explanation: Finding Lane Lines on the Road In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output". The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson.
Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below. <figure> <img src="line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> End of explanation """ import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e.
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ neg_x = [] neg_y = [] pos_x = [] pos_y = [] for line in lines: slope = (line[0][3]- line[0][1])/(line[0][2]-line[0][0]) if slope < 0: neg_x.append(line[0][0]) neg_y.append(line[0][1]) neg_x.append(line[0][2]) neg_y.append(line[0][3]) if slope > 0: pos_x.append(line[0][0]) pos_y.append(line[0][1]) pos_x.append(line[0][2]) pos_y.append(line[0][3]) y_size = img.shape[0] x_size = img.shape[1] min_y = y_size #predefined bottom line max_y = 320 #predefined top line #max_y = min(min(neg_y), min(pos_y)) if (len(neg_x)) > 0: neg_a,neg_b = np.polyfit(neg_x, neg_y, 1) #fit a line min_x_neg = int((min_y - neg_b) / neg_a) #get x based on y,a,b max_x_neg = int((max_y - neg_b) / neg_a) cv2.line(img, (min_x_neg, min_y), (max_x_neg, max_y), color, thickness) if (len(pos_x)) > 0: pos_a,pos_b = np.polyfit(pos_x, pos_y, 1) #fit a line min_x_pos = int((min_y - pos_b) / pos_a) #get x based on y,a,b max_x_pos = int((max_y - pos_b) / pos_a) cv2.line(img, (min_x_pos, min_y), (max_x_pos, max_y), color, thickness) # for line in lines: # for x1,y1,x2,y2 in line: # cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines, thickness = 8) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. 
The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, λ) """ Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are: cv2.inRange() for color selection cv2.fillPoly() for regions selection cv2.line() to draw lines on an image given endpoints cv2.addWeighted() to coadd / overlay two images cv2.cvtColor() to grayscale or change color cv2.imwrite() to output images to file cv2.bitwise_and() to apply a mask to an image Check out the OpenCV documentation to learn about these and discover even more awesome functionality! Below are some helper functions to help get you started. They should look familiar from the lesson! End of explanation """ import os os.listdir("test_images/") """ Explanation: Test on Images Now you should build your pipeline to work on the images in the directory "test_images" You should make sure your pipeline works well on these images before you try the videos. End of explanation """ #Pipeline # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images directory. 
for im_name in os.listdir("test_images/"): #This is used for video test #if im_name == 'frame220.jpg': # continue #read image im_name = "test_images/" + im_name image = mpimg.imread(im_name) gray = grayscale(image) #gaussian blur k_size = 5 blur_gray = gaussian_blur(gray, k_size) #canny low_thres = 50 high_thres = 150 edges = canny(blur_gray, low_thres, high_thres) #limit region ysize = image.shape[0] xsize = image.shape[1] left_bottom = [100, ysize] right_bottom = [(xsize-100), ysize] left_top = [xsize/2, 300] right_top = [xsize/2, 300] vertices = np.array([[left_bottom, right_bottom, right_top, left_top]], dtype = np.int32) masked_edges = region_of_interest(edges, vertices) #hough transform rho = 1 theta = np.pi/180 threshold = 50 min_line_len = 20 max_line_gap = 5 hough_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) combo = weighted_img(hough_img, image) out_name = im_name.split('.')[0] + '_output.jpg' mpimg.imsave(out_name,combo) #Partial test gray = grayscale(image) k_size = 5 blur_gray = gaussian_blur(gray, k_size) low_thres = 50 high_thres = 150 edges = canny(blur_gray, low_thres, high_thres) plt.imshow(edges, cmap='Greys_r') #Partial test ysize = image.shape[0] xsize = image.shape[1] left_bottom = [100, ysize] right_bottom = [xsize, ysize] #left_top = [xsize/2, ysize/2] #right_top = [xsize/2, ysize/2] left_top = [xsize/2, 300] right_top = [xsize/2, 300] vertices = np.array([[left_bottom, right_bottom, right_top, left_top]], dtype = np.int32) masked_edges = region_of_interest(edges, vertices) plt.imshow(masked_edges, cmap='Greys_r') #Partial test rho = 1 theta = np.pi/180 threshold = 50 min_line_len = 20 max_line_gap = 5 hough_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) combo = weighted_img(hough_img, image) plt.imshow(combo) """ Explanation: run your solution on all test_images and make copies into the test_images directory). 
End of explanation """ # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image with lines are drawn on lanes) gray = grayscale(image) #gaussian blur k_size = 5 blur_gray = gaussian_blur(gray, k_size) #canny low_thres = 50 high_thres = 150 edges = canny(blur_gray, low_thres, high_thres) #limit region ysize = image.shape[0] xsize = image.shape[1] left_bottom = [100, ysize] right_bottom = [(xsize-100), ysize] left_top = [xsize/2, 300] right_top = [xsize/2, 300] vertices = np.array([[left_bottom, right_bottom, right_top, left_top]], dtype = np.int32) masked_edges = region_of_interest(edges, vertices) #hough transform rho = 1 theta = np.pi/180 threshold = 50 min_line_len = 20 max_line_gap = 5 hough_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) result = weighted_img(hough_img, image) return result """ Explanation: Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: solidWhiteRight.mp4 solidYellowLeft.mp4 End of explanation """ # This is used for vedio debugging # #Store video into images # vidcap = cv2.VideoCapture('solidWhiteRight.mp4') # success,image = vidcap.read() # count = 0 # success = True # while success: # success,image = vidcap.read() # cv2.imwrite("video_test/frame%d.jpg" % count, image) # save frame as JPEG file # count += 1 white_output = 'white.mp4' clip1 = VideoFileClip("solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) """ Explanation: Let's try the one with the solid white lane on the right first ... 
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))
""" Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(yellow_output))
""" Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))
""" Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
The pipeline follows the standard process: read the image, convert to grayscale, apply a Gaussian blur, run Canny edge detection, mask the region of interest, and then apply the Hough transform.
The draw_lines function can be further polished. For example, instead of fitting a straight line, we could fit a polynomial; also, the slope of a very short segment may not be stable, so a slope filter could be added. Some parameters in the pipeline could be further optimized.
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge - Not complete
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
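One way to implement the polishing suggested above — split Hough segments by slope sign, drop near-horizontal slopes that tend to come from short, unstable segments, then fit and extrapolate one line per side. This is a hypothetical helper (the average_lane_lines name and the slope_min threshold are illustrative, not part of the notebook's draw_lines):

```python
import numpy as np

def average_lane_lines(segments, y_bottom, y_top, slope_min=0.3):
    """Collect Hough segment endpoints per lane side, then fit and
    extrapolate a single line per side between y_bottom and y_top."""
    sides = {'left': [], 'right': []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue                                # vertical segment: slope undefined
        slope = float(y2 - y1) / (x2 - x1)
        if abs(slope) < slope_min:
            continue                                # near-horizontal: unstable / noise
        side = 'left' if slope < 0 else 'right'     # image y-axis points down
        sides[side].extend([(x1, y1), (x2, y2)])
    out = {}
    for side, pts in sides.items():
        if not pts:
            out[side] = None
            continue
        xs, ys = zip(*pts)
        m, b = np.polyfit(ys, xs, 1)                # fit x = m*y + b, evaluate at any y
        out[side] = (int(round(m * y_bottom + b)), y_bottom,
                     int(round(m * y_top + b)), y_top)
    return out

segs = [(100, 540, 200, 440), (110, 530, 190, 450),   # left lane segments
        (700, 540, 600, 440)]                          # right lane segment
print(average_lane_lines(segs, y_bottom=540, y_top=330))  # one (x1, y1, x2, y2) per side
```

Fitting x as a function of y makes it trivial to evaluate each line at the chosen bottom/top rows of the region of interest; raising np.polyfit's degree to 2 would give the polynomial fit mentioned above.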
domluna/cgt_tutorials
neural net - digits.ipynb
mit
randinds = np.random.permutation(len(digits.target))

# shuffle the values
from sklearn.utils import shuffle
data, targets = shuffle(digits.data, digits.target, random_state=0)

# scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data)
data_scaled = scaler.transform(data)

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_scaled, targets, test_size=0.20, random_state=0)

X_train.shape, y_train.shape
""" Explanation: OK, we've had a little peek at our dataset; let's prep it for our model.
End of explanation
"""
from cgt.distributions import categorical

def model(X, y):
    # relu(W*x + b)
    np.random.seed(0)
    h1 = nn.rectify(nn.Affine(64, 512, weight_init=nn.IIDGaussian(std=.1))(X))
    h2 = nn.rectify(nn.Affine(512, 512, weight_init=nn.IIDGaussian(std=.1))(h1))
    # softmax probabilities
    probs = nn.softmax(nn.Affine(512, 10)(h2))
    # our prediction is the highest probability
    ypreds = cgt.argmax(probs, axis=1)
    acc = cgt.cast(cgt.equal(ypreds, y), cgt.floatX).mean()
    cost = -categorical.loglik(y, probs).mean()
    return cost, acc

X = cgt.matrix(name='X', fixed_shape=(None, 64))
y = cgt.vector(name='y', dtype='i8')
cost, acc = model(X, y)
""" Explanation: Prep is done, time for the model.
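For intuition, the graph built by model above is just two ReLU affine layers followed by a softmax. A NumPy sketch of the same forward pass (forward_np, the random weights, and the toy batch are illustrative assumptions; the real parameters live inside the CGT graph):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward_np(X, params):
    """Two ReLU affine layers followed by a softmax, as in model() above."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(np.dot(X, W1) + b1)
    h2 = relu(np.dot(h1, W2) + b2)
    logits = np.dot(h2, W3) + b3
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.RandomState(0)
shapes = [(64, 512), (512, 512), (512, 10)]
params_np = [(rng.normal(scale=0.1, size=s), np.zeros(s[1])) for s in shapes]
probs = forward_np(rng.normal(size=(3, 64)), params_np)
print(probs.shape)   # (3, 10); each row of probs sums to 1
```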
End of explanation
"""
learning_rate = 1e-3
epochs = 100
batch_size = 64

# get all the weight parameters for our model
params = nn.get_parameters(cost)
# train via SGD, use 1e-3 as the learning rate
updates = nn.sgd(cost, params, learning_rate)

# Functions
trainf = cgt.function(inputs=[X,y], outputs=[], updates=updates)
cost_and_accf = cgt.function(inputs=[X,y], outputs=[cost,acc])

import time
for i in xrange(epochs):
    t1 = time.time()
    for srt in xrange(0, X_train.shape[0], batch_size):
        end = batch_size+srt
        trainf(X_train[srt:end], y_train[srt:end])
    elapsed = time.time() - t1
    costval, accval = cost_and_accf(X_test, y_test)
    print("Epoch {} took {}, test cost = {}, test accuracy = {}".format(i+1, elapsed, costval, accval))
""" Explanation: We've defined the cost and accuracy functions; time to train our model.
End of explanation
"""
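The compiled cost_and_accf evaluates the mean negative log-likelihood from categorical.loglik together with the argmax accuracy. A NumPy sketch of those two quantities, assuming raw logits rather than the full affine stack (cost_and_acc_np is a hypothetical name):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # max-shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cost_and_acc_np(logits, y):
    """Mean negative log-likelihood of the true classes, plus argmax accuracy."""
    probs = softmax(logits)
    n = len(y)
    cost = -np.log(probs[np.arange(n), y]).mean()
    acc = (probs.argmax(axis=1) == y).mean()
    return cost, acc

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.4]])
cost_val, acc_val = cost_and_acc_np(logits, np.array([0, 1]))
print(cost_val, acc_val)   # both rows are classified correctly, so acc_val is 1.0
```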
ES-DOC/esdoc-jupyterhub
notebooks/awi/cmip6/models/awi-cm-1-0-mr/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-mr', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: AWI Source ID: AWI-CM-1-0-MR Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:37 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. 
Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. 
Key Properties --&gt; Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --&gt; Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3.
Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --&gt; Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g.
minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. 
Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5.
Corrected Conserved Prognostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4.
Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. 
Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --&gt; Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. 
Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. 
Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. 
Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. 
Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. 
Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
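All of the property cells above follow the same two-step fill-in pattern: `DOC.set_id(...)` selects a property, then `DOC.set_value(...)` records the chosen answer. The real `DOC` object is supplied by the ES-DOC notebook tooling; the class below is a hypothetical stand-in, sketched only to make that pattern concrete:

```python
# Hypothetical stand-in for the ES-DOC DOC object (the real one comes from
# the pyesdoc notebook tooling); it only mirrors the set_id / set_value
# fill-in pattern used by the property cells above.
class MockDoc:
    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, property_id):
        self._current_id = property_id

    def set_value(self, value):
        # properties with cardinality 1.N may call set_value repeatedly
        self.properties.setdefault(self._current_id, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
DOC.set_value("Delta-Eddington")
```

For ENUM properties the value must be one of the listed valid choices; free-text answers are only appropriate for STRING properties.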
EmuKit/emukit
notebooks/Emukit-tutorial-parallel-eval-of-obj-fun.ipynb
apache-2.0
### General imports %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib import colors as mcolors ### --- Figure config colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS) LEGEND_SIZE = 15 TITLE_SIZE = 25 AXIS_SIZE = 15 FIG_SIZE = (12,8) """ Explanation: Bayesian optimization with parallel evaluation of an external objective function using Emukit This tutorial will show you how to leverage Emukit to do Bayesian optimization on an external objective function that we can evaluate multiple times in parallel. Overview By the end of the tutorial, you will be able to: Generate batches ${X_t | t \in 1..}$ of objective function evaluation locations ${x_i | x_i \in X_t}$ Evaluate the objective function at these suggested locations in parallel $f(x_i)$ Use asyncio to implement the concurrency structure supporting this parallel evaluation This tutorial requires basic familiarity with Bayesian optimization and concurrency. If you've never run Bayesian optimization using Emukit before, please refer to the introductory tutorial for more information. The concurrency used here is not particularly complicated, so you should be able to follow just fine without much more than an understanding of the active object design pattern. End of explanation """ # Specific imports that are used in a section should be loaded at the beginning of that section. 
# It is ok if an import is repeated multiple times over the notebook import time import asyncio import GPy import emukit import numpy as np from math import pi from emukit.test_functions.branin import ( branin_function as _branin_function, ) ### Define the cost and objective functions _branin, _ps = _branin_function() async def a_cost(x: np.ndarray): # Cost function, defined arbitrarily t = max(x.sum()/10, 0.1) await asyncio.sleep(t) async def a_objective(x: np.ndarray): # Objective function r = _branin(x) await a_cost(x) return r async def demo_async_obj(): '''This function demonstrates a simple usage of the async objective function''' # Configure _x = [7.5, 12.5] d = len(_x) x = np.array(_x).reshape((1, d)) assert _ps.check_points_in_domain(x).all(), ("You configured a point outside the objective" f"function's domain: {x} is outside {_ps.get_bounds()}") # Execute print(f"Input: x={x}") t0 = time.perf_counter() r = await a_objective(x) t1 = time.perf_counter() print(f"Output: result={r}") print(f"Time elapsed: {t1-t0} sec") await demo_async_obj() """ Explanation: Navigation Define Objective Function Setup BO & Run BO Conclusions 1. 
Define objective function End of explanation """ from GPy.models import GPRegression from emukit.model_wrappers import GPyModelWrapper from emukit.core.initial_designs.latin_design import LatinDesign from emukit.core import ParameterSpace, ContinuousParameter from emukit.core.loop import UserFunctionWrapper, UserFunctionResult from emukit.core.loop.stopping_conditions import FixedIterationsStoppingCondition from emukit.core.optimization import GradientAcquisitionOptimizer from emukit.bayesian_optimization.loops import BayesianOptimizationLoop from emukit.bayesian_optimization.acquisitions import NegativeLowerConfidenceBound import warnings warnings.filterwarnings('ignore') # to quell the numerical errors in hyperparameter fitting # Plotting stuff (from constrained optimization tutorial) x1b, x2b = _ps.get_bounds() plot_granularity = 50 x_1 = np.linspace(x1b[0], x1b[1], plot_granularity) x_2 = np.linspace(x2b[0], x2b[1], plot_granularity) x_1_grid, x_2_grid = np.meshgrid(x_1, x_2) x_all = np.stack([x_1_grid.flatten(), x_2_grid.flatten()], axis=1) y_all = _branin(x_all) y_reshape = np.reshape(y_all, x_1_grid.shape) x_best = np.array([(-pi,12.275), (pi,2.275), (9.425,2.475)]) def plot_progress(loop_state, batch_size: int): plt.figure(figsize=FIG_SIZE) plt.contourf(x_1, x_2, y_reshape) plt.plot(loop_state.X[:-batch_size, 0], loop_state.X[:-batch_size, 1], linestyle='', marker='.', markersize=16, color='b') plt.plot(loop_state.X[-batch_size:, 0], loop_state.X[-batch_size:, 1], linestyle='', marker='.', markersize=16, color='r') plt.plot(x_best[:,0], x_best[:,1], linestyle='', marker='x', markersize=18, color='g') plt.legend(['Previously evaluated points', 'Last evaluation', 'True best']) plt.show() async def async_run_bo(): # Configure max_iter = 50 n_init = 6 batch_size = 6 beta = 0.1 # tradeoff parameter for NCLB acq. opt. update_interval = 1 # how many results before running hyperparam. opt. 
# Build Bayesian optimization components space = _ps design = LatinDesign(space) X_init = design.get_samples(n_init) input_coroutines = [a_objective(x.reshape((1,space.dimensionality))) for x in X_init] _Y_init = await asyncio.gather(*input_coroutines, return_exceptions=True) Y_init = np.concatenate(_Y_init) model_gpy = GPRegression(X_init, Y_init) model_gpy.optimize() model_emukit = GPyModelWrapper(model_gpy) acquisition_function = NegativeLowerConfidenceBound(model=model_emukit, beta=beta) acquisition_optimizer = GradientAcquisitionOptimizer(space=space) bo_loop = BayesianOptimizationLoop( model = model_emukit, space = space, acquisition = acquisition_function, acquisition_optimizer = acquisition_optimizer, update_interval = update_interval, batch_size = batch_size, ) # Run BO loop results = None n = bo_loop.model.X.shape[0] while n < max_iter: print(f"Optimizing: n={n}") # TODO use a different acquisition function because currently X_batch is 5 identical sugg. # ^ only on occasion, apparently X_batch = bo_loop.get_next_points(results) coroutines = [a_objective(x.reshape((1, space.dimensionality))) for x in X_batch] # TODO update model as soon as any result is available # ^ as-is, only updates and makes new suggestions when all results come in # TODO make suggestions cost-aware _results = await asyncio.gather(*coroutines, return_exceptions=True) Y_batch = np.concatenate(_results) results = list(map(UserFunctionResult, X_batch, Y_batch)) n = n + len(results) plot_progress(bo_loop.loop_state, batch_size) final_result = bo_loop.get_results() true_best = 0.397887 # rel_err = (final_result.minimum_value - true_best)/true_best print( "############################################################\n" f"Minimum found at location: {final_result.minimum_location}\n" f"\twith score: {final_result.minimum_value}\n" f"True minima at:\n{x_best}\n" f"\twith score: {true_best}\n" # f"Relative error (%): {rel_err*100:.2f}\n" "\tsource: https://www.sfu.ca/~ssurjano/branin.html\n" 
"############################################################" ) await async_run_bo() """ Explanation: 2. Run BO using parallel evaluation of batched suggestions End of explanation """
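The concurrency backbone of both cells above is `asyncio.gather`, which schedules a batch of coroutines concurrently and returns their results in input order. A minimal self-contained sketch of that pattern, with a toy objective standing in for the Branin evaluation:

```python
import asyncio

async def toy_objective(x):
    # stand-in for an expensive objective evaluation with an async delay
    await asyncio.sleep(0.01)
    return x * x

async def evaluate_batch(xs):
    # schedule the whole batch concurrently; results come back in input order
    return await asyncio.gather(*(toy_objective(x) for x in xs))

results = asyncio.run(evaluate_batch([0, 1, 2, 3]))
```

Inside a notebook the event loop is already running, which is why the cells above use a bare `await` instead of `asyncio.run`.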
bdestombe/flopy-1
examples/Notebooks/flopy3_LoadSWRBinaryData.ipynb
bsd-3-clause
%matplotlib inline from IPython.display import Image import os import sys import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('matplotlib version: {}'.format(mpl.__version__)) print('flopy version: {}'.format(flopy.__version__)) #Set the paths datapth = os.path.join('..', 'data', 'swr_test') # SWR Process binary files files = ('SWR004.obs', 'SWR004.vel', 'SWR004.str', 'SWR004.stg', 'SWR004.flow') """ Explanation: FloPy Plotting SWR Process Results This notebook demonstrates the use of the SwrObs, SwrStage, SwrBudget, SwrFlow, SwrExchange, and SwrStructure classes to read binary SWR Process observation, stage, budget, reach-to-reach flow, reach-aquifer exchange, and structure files. It demonstrates these capabilities by loading these binary file types and showing examples of plotting SWR Process data. An example showing how the simulated water surface profile at a selected time along a selection of reaches can be plotted is also presented. End of explanation """ sobj = flopy.utils.SwrObs(os.path.join(datapth, files[0])) ts = sobj.get_data() """ Explanation: Load SWR Process observations Create an instance of the SwrObs class and load the observation data. 
End of explanation """ fig = plt.figure(figsize=(6, 12)) ax1 = fig.add_subplot(3, 1, 1) ax1.semilogx(ts['totim']/3600., -ts['OBS1'], label='OBS1') ax1.semilogx(ts['totim']/3600., -ts['OBS2'], label='OBS2') ax1.semilogx(ts['totim']/3600., -ts['OBS9'], label='OBS3') ax1.set_ylabel('Flow, in cubic meters per second') ax1.legend() ax = fig.add_subplot(3, 1, 2, sharex=ax1) ax.semilogx(ts['totim']/3600., -ts['OBS4'], label='OBS4') ax.semilogx(ts['totim']/3600., -ts['OBS5'], label='OBS5') ax.set_ylabel('Flow, in cubic meters per second') ax.legend() ax = fig.add_subplot(3, 1, 3, sharex=ax1) ax.semilogx(ts['totim']/3600., ts['OBS6'], label='OBS6') ax.semilogx(ts['totim']/3600., ts['OBS7'], label='OBS7') ax.set_xlim(1, 100) ax.set_ylabel('Stage, in meters') ax.set_xlabel('Time, in hours') ax.legend(); """ Explanation: Plot the data from the binary SWR Process observation file End of explanation """ sobj = flopy.utils.SwrFlow(os.path.join(datapth, files[1])) times = np.array(sobj.get_times())/3600. obs1 = sobj.get_ts(irec=1, iconn=0) obs2 = sobj.get_ts(irec=14, iconn=13) obs4 = sobj.get_ts(irec=4, iconn=3) obs5 = sobj.get_ts(irec=5, iconn=4) """ Explanation: Load the same data from the individual binary SWR Process files Load discharge data from the flow file. The flow file contains the simulated flow between connected reaches for each connection in the model. End of explanation """ sobj = flopy.utils.SwrStructure(os.path.join(datapth, files[2])) obs3 = sobj.get_ts(irec=17, istr=0) """ Explanation: Load discharge data from the structure file. The structure file contains the simulated structure flow for each reach with a structure. End of explanation """ sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3])) obs6 = sobj.get_ts(irec=13) """ Explanation: Load stage data from the stage file. The stage file contains the simulated stage for each reach in the model. 
End of explanation """ sobj = flopy.utils.SwrBudget(os.path.join(datapth, files[4])) obs7 = sobj.get_ts(irec=17) """ Explanation: Load budget data from the budget file. The budget file contains the simulated budget for each reach group in the model. The budget file also contains the stage data for each reach group. In this case the number of reach groups equals the number of reaches in the model. End of explanation """ fig = plt.figure(figsize=(6, 12)) ax1 = fig.add_subplot(3, 1, 1) ax1.semilogx(times, obs1['flow'], label='OBS1') ax1.semilogx(times, obs2['flow'], label='OBS2') ax1.semilogx(times, -obs3['strflow'], label='OBS3') ax1.set_ylabel('Flow, in cubic meters per second') ax1.legend() ax = fig.add_subplot(3, 1, 2, sharex=ax1) ax.semilogx(times, obs4['flow'], label='OBS4') ax.semilogx(times, obs5['flow'], label='OBS5') ax.set_ylabel('Flow, in cubic meters per second') ax.legend() ax = fig.add_subplot(3, 1, 3, sharex=ax1) ax.semilogx(times, obs6['stage'], label='OBS6') ax.semilogx(times, obs7['stage'], label='OBS7') ax.set_xlim(1, 100) ax.set_ylabel('Stage, in meters') ax.set_xlabel('Time, in hours') ax.legend(); """ Explanation: Plot the data loaded from the individual binary SWR Process files. Note that the plots are identical to the plots generated from the binary SWR observation data. End of explanation """ sd = np.genfromtxt(os.path.join(datapth, 'SWR004.dis.ref'), names=True) """ Explanation: Plot simulated water surface profiles Simulated water surface profiles can be created using the ModelCrossSection class. Several things that we need in addition to the stage data include reach lengths and bottom elevations. We load these data from an existing file. End of explanation """ fc = open(os.path.join(datapth, 'SWR004.dis.ref')).readlines() fc """ Explanation: The contents of the file are shown in the cell below. 
End of explanation """ sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3])) """ Explanation: Create an instance of the SwrStage class for SWR Process stage data. End of explanation """ iprof = sd['IRCH'] > 0 iprof[2:8] = False dx = np.extract(iprof, sd['RLEN']) belev = np.extract(iprof, sd['BELEV']) """ Explanation: Create a selection condition (iprof) that can be used to extract data for the reaches of interest (reaches 0, 1, and 8 through 17). Use this selection condition to extract reach lengths (from sd['RLEN']) and the bottom elevation (from sd['BELEV']) for the reaches of interest. The selection condition will also be used to extract the stage data for reaches of interest. End of explanation """ ml = flopy.modflow.Modflow() dis = flopy.modflow.ModflowDis(ml, nrow=1, ncol=dx.shape[0], delr=dx, top=4.5, botm=belev.reshape(1,1,12)) """ Explanation: Create a fake model instance so that the ModelCrossSection class can be used. End of explanation """ x = np.cumsum(dx) """ Explanation: Create an array with the x position at the downstream end of each reach, which will be used to color the plots below each reach. End of explanation """ fig = plt.figure(figsize=(12, 12)) for idx, v in enumerate([19, 29, 34, 39, 44, 49, 54, 59]): ax = fig.add_subplot(4, 2, idx+1) s = sobj.get_data(idx=v) stage = np.extract(iprof, s['stage']) xs = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0}) xs.plot_fill_between(stage.reshape(1,1,12), colors=['none', 'blue'], ax=ax, edgecolors='none') linecollection = xs.plot_grid(ax=ax, zorder=10) ax.fill_between(np.append(0., x), y1=np.append(belev[0], belev), y2=-0.5, facecolor='0.5', edgecolor='none', step='pre') ax.set_title('{} hours'.format(times[v])) ax.set_ylim(-0.5, 4.5) """ Explanation: Plot simulated water surface profiles for 8 times. End of explanation """
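`np.genfromtxt(..., names=True)` (used above to read `SWR004.dis.ref`) returns a structured array whose header row becomes named fields, and `np.extract` then pulls out the reaches of interest with a boolean condition. A small sketch of both steps with made-up reach data rather than the real SWR004 values:

```python
import io

import numpy as np

# made-up stand-in for SWR004.dis.ref: one row per reach, header row as names
text = io.StringIO("IRCH RLEN BELEV\n1 100.0 -0.5\n2 150.0 -0.6\n0 120.0 -0.4\n")
sd = np.genfromtxt(text, names=True)

# boolean selection condition, then per-field extraction as in the cells above
iprof = sd['IRCH'] > 0
dx = np.extract(iprof, sd['RLEN'])
belev = np.extract(iprof, sd['BELEV'])
```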
TiKeil/Master-thesis-LOD
notebooks/Figure_7.4_Coefficients.ipynb
apache-2.0
import os import sys import numpy as np %matplotlib notebook import matplotlib.pyplot as plt from visualize import drawCoefficient import buildcoef2d """ Explanation: Coefficients for tests For our numerical simulations, we use four different diffusion coefficients. These coefficients are presented in the following. For the motivation and further explanations, we refer to the thesis. End of explanation """ bg = 0.05 #background val = 1 #values NWorldFine = np.array([256, 256]) #Coefficient 1 CoefClass1 = buildcoef2d.Coefficient2d(NWorldFine, bg = bg, val = val, length = 2, thick = 2, space = 2, probfactor = 1, right = 1, down = 0, diagr1 = 0, diagr2 = 0, diagl1 = 0, diagl2 = 0, LenSwitch = None, thickSwitch = None, equidistant = True, ChannelHorizontal = None, ChannelVertical = None, BoundarySpace = True) A1 = CoefClass1.BuildCoefficient() A1 = A1.flatten() plt.figure("Coefficient 1") drawCoefficient(NWorldFine, A1, greys=True) plt.title('Coefficient 1') plt.show() """ Explanation: Coefficient 1 End of explanation """ CoefClass2 = buildcoef2d.Coefficient2d(NWorldFine, bg = bg, val = val, length = 1, thick = 1, space = 2, probfactor = 1, right = 0, down = 0, diagr1 = 0, diagr2 = 0, diagl1 = 0, diagl2 = 0, LenSwitch = None, thickSwitch = None, equidistant = True, ChannelHorizontal = None, ChannelVertical = True, BoundarySpace = True) A2 = CoefClass2.BuildCoefficient() A2 = A2.flatten() plt.figure("Coefficient 2") drawCoefficient(NWorldFine, A2, greys=True) plt.title('Coefficient 2') plt.show() """ Explanation: Coefficient 2 End of explanation """ CoefClass3 = buildcoef2d.Coefficient2d(NWorldFine, bg = bg, val = val, length = 8, thick = 2, space = 4, probfactor = 1, right = 0, down = 0, diagr1 = 0, diagr2 = 0, diagl1 = 0, diagl2 = 1, LenSwitch = None, thickSwitch = None, equidistant = None, ChannelHorizontal = None, ChannelVertical = None, BoundarySpace = True) A3 = CoefClass3.BuildCoefficient() A3 = A3.flatten() plt.figure("Coefficient 3") drawCoefficient(NWorldFine, 
A3, greys=True) plt.title('Coefficient 3') plt.show() """ Explanation: Coefficient 3 End of explanation """ CoefClass4 = buildcoef2d.Coefficient2d(NWorldFine, bg = bg, val = val, thick = 1, space = 0, probfactor = 1, right = 0, down = 0, diagr1 = 1, diagr2 = 0, diagl1 = 1, diagl2 = 0, LenSwitch = [4,5,6,7,8], thickSwitch = None, equidistant = None, ChannelHorizontal = None, ChannelVertical = None, BoundarySpace = None) A4 = CoefClass4.BuildCoefficient() A4 = A4.flatten() plt.figure("Coefficient 4") drawCoefficient(NWorldFine, A4, greys=True) plt.title('Coefficient 4') plt.show() """ Explanation: Coefficient 4 End of explanation """
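All four coefficients are two-valued fields: a low background (`bg = 0.05`) with high-value inclusions (`val = 1`) arranged on the fine grid. The sketch below builds a comparable equidistant-inclusion field directly with NumPy; it loosely mimics Coefficient 1 and is not a reimplementation of `buildcoef2d`:

```python
import numpy as np

def two_value_field(n, bg=0.05, val=1.0, block=2, space=2):
    """Background field with equidistant square inclusions of side `block`."""
    field = np.full((n, n), bg)
    step = block + space
    for i in range(space, n - block, step):
        for j in range(space, n - block, step):
            field[i:i + block, j:j + block] = val
    return field

A = two_value_field(16)
```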
metpy/MetPy
v0.8/_downloads/sigma_to_pressure_interpolation.ipynb
bsd-3-clause
import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from netCDF4 import Dataset, num2date import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, add_timestamp from metpy.units import units """ Explanation: Sigma to Pressure Interpolation By using metpy.calc.log_interp, data with sigma as the vertical coordinate can be interpolated to isobaric coordinates. End of explanation """ data = Dataset(get_test_data('wrf_example.nc', False)) lat = data.variables['lat'][:] lon = data.variables['lon'][:] time = data.variables['time'] vtimes = num2date(time[:], time.units) temperature = data.variables['temperature'][:] * units.celsius pres = data.variables['pressure'][:] * units.pascal hgt = data.variables['height'][:] * units.meter """ Explanation: Data The data for this example comes from the outer domain of a WRF-ARW model forecast initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso University Department of Geography and Meteorology. End of explanation """ plevs = [700.] * units.hPa """ Explanation: Array of desired pressure levels End of explanation """ height, temp = mpcalc.log_interp(plevs, pres, hgt, temperature, axis=1) """ Explanation: Interpolate The Data Now that the data is ready, we can interpolate to the new isobaric levels. The data is interpolated from the irregular pressure values for each sigma level to the new input mandatory isobaric levels. mpcalc.log_interp will interpolate over a specified dimension with the axis argument. In this case, axis=1 will correspond to interpolation on the vertical axis. The interpolated data is output in a list, so we will pull out each variable for plotting. 
End of explanation """ # Set up our projection crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0) # Set the forecast hour FH = 1 # Create the figure and grid for subplots fig = plt.figure(figsize=(17, 12)) add_metpy_logo(fig, 470, 320, size='large') # Plot 700 hPa ax = plt.subplot(111, projection=crs) ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75) ax.add_feature(cfeature.STATES, linewidth=0.5) # Plot the heights cs = ax.contour(lon, lat, height[FH, 0, :, :], transform=ccrs.PlateCarree(), colors='k', linewidths=1.0, linestyles='solid') ax.clabel(cs, fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True, use_clabeltext=True) # Contour the temperature cf = ax.contourf(lon, lat, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.RdBu_r, transform=ccrs.PlateCarree()) cb = fig.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5, pad=0.05, extendrect='True') cb.set_label('Celsius', size='x-large') ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree()) # Make the axis title ax.set_title('{:.0f} hPa Heights (m) and Temperature (C)'.format(plevs[0].m), loc='center', fontsize=10) # Set the figure title fig.suptitle('WRF-ARW Forecast VALID: {:s} UTC'.format(str(vtimes[FH])), fontsize=14) add_timestamp(ax, vtimes[FH], y=0.02, high_contrast=True) plt.show() """ Explanation: Plotting the Data for 700 hPa. End of explanation """
daniel-koehn/Theory-of-seismic-waves-II
08_1D_visco_elastic_SH_modelling/4_1D_visc_SH_FD_modelling.ipynb
gpl-3.0
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
""" Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation """
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
from scipy.optimize import differential_evolution

# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
""" Explanation: 1D viscoelastic SH modelling
After deriving the equations of motion for 1D wave propagation in viscoelastic SH media, we can now solve the problem using a staggered grid FD scheme. Furthermore, the anelastic coefficients of the Generalized Maxwell body (GMB) are optimized by a global Differential Evolution (DE) algorithm to achieve a constant Q-spectrum. Finally, elastic and viscoelastic modelling results for a homogeneous model are compared.
FD solution of 1D isotropic viscoelastic SH problem
As derived in the last lesson, we can describe the 1D viscoelastic SH problem by the partial differential equations:
\begin{align}
\frac{\partial v_y}{\partial t} &= \frac{1}{\rho}\biggl(\frac{\partial \sigma_{yx}}{\partial x} + f_y\biggr), \notag\\
\frac{\partial \sigma_{yx}}{\partial t} &= \mu_u \biggl(\frac{\partial v_y}{\partial x} - \sum_{l=1}^L Y_l\,\xi_l\biggr), \notag\\
\frac{\partial \xi_l}{\partial t} &= \omega_l \biggl(\frac{\partial v_y}{\partial x} - \xi_l\biggr). \notag
\end{align}
The 2nd order space/time FD scheme on a staggered grid <img src="images/SG_1D_SH-Cart.png" width="50%"> using explicit time-stepping with the Leapfrog method to solve the 1D viscoelastic SH problem can be written as
\begin{align}
\frac{v_y(i,n+1/2) - v_y(i,n-1/2)}{dt} &= \frac{1}{\rho}\biggl\{\biggl(\frac{\partial \sigma_{yx}}{\partial x}\biggr)^c(i,n) + f_y(i,n)\biggr\}, \notag\\
\frac{\sigma_{yx}(i+1/2,n+1) - \sigma_{yx}(i+1/2,n)}{dt} &= \mu_{x,u} \biggl\{\biggl(\frac{\partial v_{y}}{\partial x}\biggr)^c(i+1/2,n+1/2) - \sum_{l=1}^L Y_l\,\xi_l(i+1/2,n)\biggr\}, \notag\\
\frac{\xi_l(i+1/2,n+1/2) - \xi_l(i+1/2,n-1/2)}{dt} &= \omega_l \biggl(\frac{\partial v_{y}}{\partial x}\biggr)^c(i+1/2,n+1/2) - \omega_l\, \xi_l(i+1/2,n-1/2) \notag
\end{align}
with the unrelaxed shear modulus
\begin{equation}
\mu_u = \frac{\mu}{1-\sum_{l=1}^L Y_l}\notag
\end{equation}
and the harmonically averaged unrelaxed shear modulus
\begin{equation}
\mu_{x,u} = 2 \biggl(\frac{1}{\mu_u(i)}+\frac{1}{\mu_u(i+1)}\biggr)^{-1}\notag
\end{equation}
Notice that we have to evaluate the memory variables $\xi_l$ in the stress update at a full time step, which has to be estimated from the $\xi_l$-values at half-time steps by arithmetic averaging:
\begin{equation}
\xi_l(i+1/2,n) = \frac{\xi_l(i+1/2,n+1/2)+\xi_l(i+1/2,n-1/2)}{2}\notag
\end{equation}
Initial and boundary conditions
Because we have analytical solutions for wave propagation in homogeneous elastic media, we should test our code implementation for a similar medium, by setting density $\rho$ and shear modulus $\mu$ to constant values $\rho_0,\; \mu_0$
\begin{align}
\rho(i) &= \rho_0 \notag \\
\mu(i) &= \mu_0 = \rho_0 V_{s0}^2\notag
\end{align}
at each spatial grid point $i = 0, 1, 2, ..., nx$ and staggered grid points $i+1/2$ in order to
compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
v_y(i,-1/2) = \sigma_{yx}(i+1/2,0) = \xi_{l}(i+1/2,-1/2) = 0, \nonumber
\end{equation}
so the modelling starts with zero particle velocity, shear stress and memory variable amplitudes at each spatial grid point. As boundary conditions, we assume
\begin{align}
v_y(0,n) &= \sigma_{yx}(1/2,n) = \xi_{l}(1/2,n) = 0, \nonumber\\
v_y(nx,n) &= \sigma_{yx}(nx+1/2,n) = \xi_{l}(nx+1/2,n) = 0, \nonumber
\end{align}
for all full and staggered time steps n, n+1/2. This Dirichlet boundary condition leads to artificial boundary reflections which would obviously not describe a homogeneous medium. For now, we simply extend the model, so that boundary reflections are not recorded at the receiver positions. Let's implement it ...
End of explanation """
# Define GMB models
# -----------------
Qopt = 20 # target Q-value
L = 4 # number of Maxwell bodies
nfreq = 50 # number of frequencies to estimate Q-model
fmin = 5. # minimum frequency
fmax = 100.0 # maximum frequency
f = np.linspace(fmin,fmax,num=nfreq) # calculate frequencies to estimate Q-model
w = 2 * np.pi * f # circular frequencies to estimate Q-model
fl = np.linspace(fmin,fmax,num=L) # calculate relaxation frequencies
wl = 2 * np.pi * fl # circular relaxation frequencies
""" Explanation: Optimizing Yl coefficients
To achieve a constant Q-spectrum, we have to optimize the anelastic coefficients $Y_l$.
This can be achieved by minimizing the objective function:
\begin{equation}
\chi(Y_l) = \int_{\omega_{min}}^{\omega_{max}} \biggl(Q^{-1}(Y_l,\omega) - Q^{-1}_{opt}\biggr)^2 d\omega\notag
\end{equation}
where $Q^{-1}_{opt}$ denotes the target constant inverse Q-value within the frequency range of the source wavelet and
\begin{equation}
Q^{-1}(Y_l,\omega) = \sum_{l=1}^L Y_l \frac{\omega_l\omega}{\omega^2+\omega_l^2}\notag
\end{equation}
is an approximation of the GMB Q-spectrum for $Q \gg 1$ according to (Blanch et al. 1995, Bohlen 2002, Yang et al. 2016).
The objective function can be minimized using local or global optimization methods. In this case, we will use the global Differential Evolution (DE) optimization algorithm available from the SciPy optimization library.
In the following example, we want to solve the viscoelastic SH-problem by the (2,2)-FD algorithm defined above for a homogeneous model using 4 Maxwell bodies and target a constant Q=20 value in a frequency band between 5 and 100 Hz:
End of explanation """
# Objective function to optimize Yl values
# ----------------------------------------
def obj_Yl(Yl):

    # Calculate Qs model based on GMB
    # -------------------------------
    Qinv_GMB = np.zeros(nfreq)
    Qinv_const = (1/Qopt) * np.ones(nfreq)

    for l in range (0,L):
        Qinv_GMB += Yl[l] * (wl[l] * w) / (w**2+wl[l]**2)

    # Calculate objective function
    obj_Qinv = np.sum((Qinv_GMB - Qinv_const)**2)

    return obj_Qinv
""" Explanation: Next, we define the objective function to optimize the $Y_l$ values ...
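Before handing $\chi(Y_l)$ to the optimizer, the single-body building block of the $Q^{-1}$ approximation can be checked in isolation: $\omega_l\omega/(\omega^2+\omega_l^2)$ peaks at $\omega = \omega_l$ with value $1/2$, so each Maxwell body mainly controls attenuation near its own relaxation frequency (a self-contained check, independent of the notebook's variables):

```python
import numpy as np

wl_test = 2 * np.pi * 20.0                           # one relaxation frequency (rad/s)
w_test = 2 * np.pi * np.linspace(1.0, 200.0, 2000)   # evaluation frequencies (rad/s)

# single Maxwell body contribution to 1/Q (Debye peak)
debye = (wl_test * w_test) / (w_test**2 + wl_test**2)

peak_val = debye.max()
peak_w = w_test[debye.argmax()]
print(peak_val, peak_w / (2 * np.pi))  # peak value near 0.5, located near 20 Hz
```

This is why spreading the relaxation frequencies $\omega_l$ over the source band, as done below, is a sensible way to approximate a constant Q.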
End of explanation """ # Calculate Q-spectrum for given Yl parameters and circular relaxation frequencies # -------------------------------------------------------------------------------- def calc_Q(Yl): # Calculate Qs model based on GMB # ------------------------------- Qinv_GMB = np.zeros(nfreq) for l in range (0,L): Qinv_GMB += Yl[l] * (wl[l] * w) / (w**2+wl[l]**2) return f, Qinv_GMB """ Explanation: A function to evaluate the Q-spectrum for given $Y_l$ values might be also useful ... End of explanation """ # Optimize dimensionless, anelastic coefficients Yl # ------------------------------------------------- # Define bound constraints for DE algorithm bounds = [(0.0, 1), (0.0, 1), (0.0, 1), (0.0, 1)] # Optimize Q-model by Differential Evolution DE_result = differential_evolution(obj_Yl, bounds) print(' Final obj_Qinv = ', DE_result.fun) # Calculate optimum Q model f, Qinv_GMB = calc_Q(DE_result.x) # Store and display optimized Yl, wl values Yl = np.zeros(L) Yl = DE_result.x print('Yl = ', Yl) print('wl = ', wl) # Calculate Q(omega) Q = 1 / Qinv_GMB # Define figure size rcParams['figure.figsize'] = 7, 5 # plot stress-strain relation plt.plot(f, Q, 'r-',lw=3,label="Generalized Maxwell model") plt.title(r'$Q(\omega)$ for Generalized Maxwell model') plt.xlabel('Frequency f [Hz]') plt.ylabel(r'$Q$ []') plt.ylim(0,2*Qopt) plt.grid() plt.show() """ Explanation: To optimize the $Y_l$ values, we define their bounds necessary for the DE-algorithm, minimize the objective function, store the resulting $Y_l$ values and plot the optimized Q-spectrum End of explanation """ # Definition of modelling parameters # ---------------------------------- xmax = 500.0 # maximum spatial extension of the 1D model in x-direction (m) tmax = 0.502 # maximum recording time of the seismogram (s) vs0 = 580. # S-wave speed in medium (m/s) rho0 = 1000. # Density in medium (kg/m^3) # acquisition geometry xr = 330.0 # x-receiver position (m) xsrc = 250.0 # x-source position (m) f0 = 40. 
# dominant frequency of the source (Hz) t0 = 4. / f0 # source time shift (s) """ Explanation: As usual, we define the modelling parameters ... End of explanation """ # Particle velocity vy update # --------------------------- @jit(nopython=True) # use JIT for C-performance def update_vel(vy, syx, dx, dt, nx, rho): for i in range(1, nx - 1): # Calculate spatial derivatives syx_x = (syx[i] - syx[i - 1]) / dx # Update particle velocities vy[i] = vy[i] + (dt/rho[i]) * syx_x return vy """ Explanation: Next, we first test the elastic part of the 1D SH code and compare it with the analytical solution and finally run the viscoelastic SH code. Comparison of 2D finite difference with analytical solution for homogeneous Vs model In a previous exercise you proved that the analytical solutions for the homogeneous 1D acoustic and 1D elastic SH problem, beside a density factor $1/\rho_0$, are actual identical . In the function below we solve the homogeneous 1D SH problem by centered 2nd order spatial/temporal difference operators and compare the numerical results with the analytical solution: \begin{equation} u_{y,analy}(x,t) = G_{1D} * S \nonumber \end{equation} with the 1D Green's function: \begin{equation} G_{1D}(x,t) = \dfrac{1}{2 \rho_0 V_{s0}}H\biggl((t-t_s)-\dfrac{|r|}{V_{s0}}\biggr), \nonumber \end{equation} where $H$ denotes the Heaviside function, $r = \sqrt{(x-x_s)^2}$ the source-receiver distance (offset) and $S$ the source wavelet. Keep in mind that the stress-velocity code computes the particle velocities $\mathbf{v_{y,analy}}$, while the analytical solution is expressed in terms of the displacement $\mathbf{u_{y,analy}}$. Therefore, we have to take the first derivative of the analytical solution, before comparing the numerical with the analytical solution: \begin{equation} v_{y,analy}(x,t) = \frac{\partial u_{y,analy}}{\partial t} \nonumber \end{equation} To implement the 2D SH code, we first introduce functions to update the particle velocity $v_y$ ... 
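The source time function used throughout this notebook is the first derivative of a Gaussian. With the dominant frequency $f_0$ and time shift $t_0 = 4/f_0$ defined above, it can be inspected on its own (a standalone sketch mirroring the expression used in the FD code):

```python
import numpy as np

f0 = 40.0       # dominant frequency (Hz)
t0 = 4.0 / f0   # source time shift (s)
t = np.linspace(0.0, 0.502, 502)  # recording times (s)

# 1st derivative of a Gaussian, as in the FD modelling routine
src = -2.0 * (t - t0) * f0**2 * np.exp(-(f0**2) * (t - t0)**2)

# the wavelet is antisymmetric about t0: zero crossing there,
# extrema of opposite sign on either side
i0 = np.argmin(np.abs(t - t0))
print(src[i0], np.abs(src).max())
```

The time shift $t_0$ ensures the wavelet starts smoothly from (numerically) zero amplitude at $t = 0$, which keeps the FD simulation free of a sudden onset.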
End of explanation """ # Shear stress syx updates (elastic) # ---------------------------------- @jit(nopython=True) # use JIT for C-performance def update_stress(vy, syx, dx, dt, nx, mux): for i in range(1, nx - 1): # Calculate spatial derivatives vy_x = (vy[i + 1] - vy[i]) / dx # Update shear stress syx[i] = syx[i] + dt * mux[i] * vy_x return syx """ Explanation: ... update the shear stress component $\sigma_{yx}$ for the elastic medium ... End of explanation """ # Shear stress syx updates (viscoelastic) # --------------------------------------- @jit(nopython=True) # use JIT for C-performance def update_stress_visc(vy, syx, xi, Yl, wl, L, dx, dt, nx, mux): for i in range(1, nx - 1): # Calculate spatial derivatives vy_x = (vy[i + 1] - vy[i]) / dx # Calculate sum over memory variables xi_sum = 0.0 for l in range(0, L): xi_sum += Yl[l] * xi[i,l] # Update shear stress # Note that the factor 0.5 in front of the memory variables sum # is due to the arithmetic averaging of the xi variables at # the staggered time steps syx[i] = syx[i] + dt * mux[i] * (vy_x - 0.5 * xi_sum) # Update memory variables xi_sum = 0.0 for l in range(0, L): xi[i,l] += dt * wl[l] * (vy_x - xi[i,l]) # After xi update calculate new xi_sum ... xi_sum += Yl[l] * xi[i,l] # ... and finish stress update syx[i] = syx[i] - dt * mux[i] * xi_sum return syx, xi """ Explanation: ... and a function for the shear stress update $\sigma_{yx}$ for the viscoelastic case ... End of explanation """ # Harmonic averages of shear modulus # ---------------------------------- @jit(nopython=True) # use JIT for C-performance def shear_avg(mu, nx, mux): for i in range(1, nx - 1): # Calculate harmonic averages of shear moduli mux[i] = 2 / (1 / mu[i + 1] + 1 / mu[i]) return mux """ Explanation: ... and harmonically averaging the (unrelaxed) shear modulus ... 
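A standalone check of this harmonic averaging (with illustrative moduli, independent of the solver): for a homogeneous medium the average returns $\mu$ itself, while at a strong material contrast it is dominated by the softer of the two neighbouring grid points:

```python
import numpy as np

def harmonic_avg(mu_a, mu_b):
    # harmonic average of two neighbouring shear moduli
    return 2.0 / (1.0 / mu_a + 1.0 / mu_b)

mu0 = 1000.0 * 580.0**2          # rho0 * vs0**2, as in the homogeneous model
mu_hom = harmonic_avg(mu0, mu0)  # homogeneous case: recovers mu0 exactly

mu_soft, mu_stiff = 1e9, 1e11    # strong contrast (illustrative values, Pa)
mu_contrast = harmonic_avg(mu_soft, mu_stiff)
print(mu_hom, mu_contrast)
```

The contrast case stays close to twice the soft modulus and far below the arithmetic mean, which is the usual argument for harmonic averaging of moduli on staggered grids.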
End of explanation """
# 1D SH viscoelastic wave propagation (Finite Difference Solution)
# ----------------------------------------------------------------
def FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,mode):

    nx = (int)(xmax/dx) # number of grid points in x-direction
    print('nx = ',nx)

    nt = (int)(tmax/dt) # maximum number of time steps
    print('nt = ',nt)

    ir = (int)(xr/dx) # receiver location in grid in x-direction
    isrc = (int)(xsrc/dx) # source location in grid in x-direction

    # Source time function (Gaussian)
    # -------------------------------
    src = np.zeros(nt + 1)
    time = np.linspace(0 * dt, nt * dt, nt)

    # 1st derivative of a Gaussian
    src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))

    # Analytical solution
    # -------------------
    G = time * 0.
    vy_analy = time * 0.

    # Initialize coordinates
    # ----------------------
    x = np.arange(nx)
    x = x * dx # coordinates in x-direction (m)

    # calculate source-receiver distance
    r = np.sqrt((x[ir] - x[isrc])**2)

    for it in range(nt): # Calculate Green's function (Heaviside function)
        if (time[it] - r / vs0) >= 0:
            G[it] = 1. / (2 * rho0 * vs0)
    Gc = np.convolve(G, src * dt)
    Gc = Gc[0:nt]

    # compute vy_analy from uy_analy
    for i in range(1, nt - 1):
        vy_analy[i] = (Gc[i+1] - Gc[i-1]) / (2.0 * dt)

    # Initialize empty wavefield arrays
    # ---------------------------------
    vy = np.zeros(nx) # particle velocity vy
    syx = np.zeros(nx) # shear stress syx

    # Initialize model (assume homogeneous model)
    # -------------------------------------------
    vs = np.zeros(nx)
    vs = vs + vs0 # initialize wave velocity in model
    rho = np.zeros(nx)
    rho = rho + rho0 # initialize density in model

    # calculate shear modulus
    # -----------------------
    mu = np.zeros(nx)
    mu = rho * vs ** 2

    # Estimate unrelaxed shear modulus in viscoelastic case
    # -----------------------------------------------------
    if(mode=='visc'): mu1 = mu / (1 - np.sum(Yl))

    # harmonic average of shear moduli
    # --------------------------------
    if(mode=='elast'):
        mux = mu # initialize harmonic average mux
        mux = shear_avg(mu, nx, mux)
    if(mode=='visc'):
        mux = mu1 # initialize harmonic average mux
        mux = shear_avg(mu1, nx, mux) # average the unrelaxed moduli

    # Initialize memory variables
    # ---------------------------
    if(mode=='visc'): xi = np.zeros((nx,L))

    # Initialize empty seismogram
    # ---------------------------
    seis = np.zeros(nt)

    # Time looping
    # ------------
    for it in range(nt):

        # Update particle velocity vy
        # ---------------------------
        vy = update_vel(vy, syx, dx, dt, nx, rho)

        # Add Source Term at isrc
        # -----------------------
        # Absolute particle velocity w.r.t analytical solution
        vy[isrc] = vy[isrc] + (dt * src[it] / (rho[isrc] * dx))

        # Update shear stress syx
        # -----------------------
        if(mode=='elast'): syx = update_stress(vy, syx, dx, dt, nx, mux)
        if(mode=='visc'): syx, xi = update_stress_visc(vy, syx, xi, Yl, wl, L, dx, dt, nx, mux)

        # Output of Seismogram
        # --------------------
        seis[it] = vy[ir]

    # Compare FD Seismogram with analytical solution
    # ----------------------------------------------

    # Define figure size
    rcParams['figure.figsize'] = 12, 5

    if(mode=='elast'): label = "Elastic FD solution"
    if(mode=='visc'): label = "Viscoelastic FD solution (Q = " + str(Qopt) + ")"

    plt.plot(time, seis, 'b-',lw=3,label=label) # plot FD seismogram
    Analy_seis = plt.plot(time,vy_analy,'r--',lw=3,label="Elastic analytical solution") # plot analytical solution
    plt.xlim(time[0], time[-1])
    plt.title('Seismogram')
    plt.xlabel('Time (s)')
    plt.ylabel('Amplitude')
    plt.legend()
    plt.grid()
    plt.show()
""" Explanation: Finally, we can assemble the main FD code ...
End of explanation """
%%time
# FD modelling of homogeneous elastic medium
# ------------------------------------------
dx = 1.0 # grid point distance in x-direction (m)
dt = 0.001 # time step (s)
FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,'elast')
""" Explanation: ... run the elastic FD code and compare the result with the analytical solution:
End of explanation """
%%time
# FD modelling of homogeneous viscoelastic medium
# -----------------------------------------------
dx = 1.0 # grid point distance in x-direction (m)
dt = 0.001 # time step (s)
FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,'visc')
""" Explanation: Finally, we run the viscoelastic modelling and compare it with the elastic analytical solution:
End of explanation """
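As a rough plausibility check on the viscoelastic result, the standard constant-Q amplitude decay $A(t) = A_0\, e^{-\pi f t / Q}$ predicts how strongly the Q = 20 seismogram should be damped at the dominant frequency relative to the elastic one. Note that this is a back-of-the-envelope estimate under an assumed attenuation law, not something the notebook itself computes:

```python
import numpy as np

Q = 20.0           # target quality factor
f0 = 40.0          # dominant frequency (Hz)
vs0 = 580.0        # shear wave speed (m/s)
r = 330.0 - 250.0  # source-receiver offset (m)

t_travel = r / vs0                          # travel time to the receiver (s)
decay = np.exp(-np.pi * f0 * t_travel / Q)  # expected amplitude ratio
print(t_travel, decay)
```

The estimate suggests the viscoelastic peak amplitude should be a substantial fraction smaller than the elastic one, consistent with a visible but not dramatic damping in the seismogram comparison.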
OCPython/meetup-2017-10-mongodb
jupyter_notebooks/02_pymongo_aggregation.ipynb
mit
# import pymongo from pymongo import MongoClient from pprint import pprint # Create client client = MongoClient('mongodb://localhost:32768') # Connect to database db = client['fifa'] # Get collection my_collection = db['player'] """ Explanation: Aggregation (via pymongo) End of explanation """ def print_docs(pipeline, limit=5): pipeline.append({'$limit':limit}) # Run Aggregation docs = my_collection.aggregate(pipeline) # Print Results for idx, doc in enumerate(docs): # print(type(doc)) pprint(doc) # print(f"#{idx + 1}: {doc}\n\n") """ Explanation: Do An Aggregation Basic process to convert pipelines from a JavaScript array to a Python list Convert all comments (from "//" to "#") Title-case all true/false to True/False Quote all operators and fields ($match --> '$match') Important: When using $sort operator in Python 2, wrap list with SON() method (from bson import SON) Tips to avoid above process Use 1/0 for True/False Quote things in JavaScript ahead of time Helper Functions End of explanation """ # $match - Filter out Goalkeepers match_a = { '$match': { 'positionFull': {'$ne': 'Goalkeeper'} } } # Create Pipeline pipeline = [ match_a, ] # Fetch and Print the Results print_docs(pipeline, limit=1) # $project - Keep only the fields we're interested in project_a = { '$project': { '_id': True, # Note: not required, _id is included by default 'name': {'$concat': ['$firstName', ' ', '$lastName']}, 'pos': '$positionFull', # Note: renaming 'rating': True, 'attributes': True } } # Create Pipeline pipeline = [ match_a, project_a, ] # Fetch and Print the Results print_docs(pipeline, limit=5) # $unwind - Convert N documents to 6*N documents (so we can do math on attributes) unwind_a = { '$unwind': '$attributes' } # Create Pipeline pipeline = [ match_a, project_a, unwind_a, ] # Fetch and Print the Results print_docs(pipeline, limit=5) # $group - $sum the value of the attributes (and pass the rest of the fields through the _id) group_a = { '$group': { '_id': { 'id': '$_id', 
'rating': '$rating', 'name': '$name', 'pos': '$pos' }, "sum_attributes": { '$sum': "$attributes.value" } } } # Create Pipeline pipeline = [ match_a, project_a, unwind_a, group_a, ] # Fetch and Print the Results print_docs(pipeline, limit=5) # $project - Keep only the fields we're interested in # Note: this is our second $project operator !!! project_b = { '$project': { '_id': False, # turn off _id 'id': '$_id.id', 'name': '$_id.name', 'pos': '$_id.pos', 'rating': '$_id.rating', 'avg_attributes': {"$divide": ['$sum_attributes', 6]}, 'rating_attribute_difference': {"$subtract": [{"$divide": ['$sum_attributes', 6]}, '$_id.rating']} } } # Create Pipeline pipeline = [ match_a, project_a, unwind_a, group_a, project_b, ] # Fetch and Print the Results print_docs(pipeline, limit=5) # $match - Find anybody rated LESS than 75 that has a higher than 75 avg_attributes # Note: this is our second $match operator !!! match_b = { '$match': { 'rating': {'$lt': 75}, 'avg_attributes': {'$gte': 75} } } # Create Pipeline pipeline = [ match_a, project_a, unwind_a, group_a, project_b, match_b, ] # Fetch and Print the Results print_docs(pipeline, limit=5) # $sort - Based on the amount of injustice # Note: This step could be placed above previous "$match" step, but placing it here is more efficient with less # data to sort sort_a = { '$sort': { 'rating_attribute_difference': -1 } } # Create Pipeline pipeline = [ match_a, project_a, unwind_a, group_a, project_b, match_b, sort_a, ] # Fetch and Print the Results print_docs(pipeline, limit=5) """ Explanation: Aggregation End of explanation """ # Create Pipeline pipeline = [match_a, project_a, unwind_a, group_a, project_b, match_b, sort_a] # Run Aggregation docs = my_collection.aggregate(pipeline) # Print Results for idx, doc in enumerate(docs): print(f"#{idx + 1}: {doc['name']}, a {doc['pos']}, is rated {doc['rating']} instead of {doc['avg_attributes']:.0f}") """ Explanation: Final Pipeline End of explanation """
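To make the JavaScript-to-Python conversion recipe described earlier concrete, here is a hypothetical mongo-shell pipeline next to its Python equivalent (the field names are invented for illustration and are not from the fifa collection):

```python
# JavaScript (mongo shell) version, for comparison:
# [
#   // keep only active midfielders
#   { $match: { position: "Midfielder", active: true } },
#   { $sort: { rating: -1 } }
# ]

# Python version: comments switch from '//' to '#', true becomes True,
# operators and field names are quoted, and under Python 2 the $sort
# document would additionally be wrapped with bson.SON to preserve key order
pipeline = [
    # keep only active midfielders
    {'$match': {'position': 'Midfielder', 'active': True}},
    {'$sort': {'rating': -1}},
]
print(pipeline)
```

The resulting list of plain dicts is exactly what `collection.aggregate()` expects.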
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session14/Day3/IntroductionToVariationalAutoencoders_solutions.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision.transforms import Normalize
""" Explanation: <a href="https://colab.research.google.com/github/VMBoehm/ML_Lectures/blob/main/IntroductionToVariationalAutoencoders_solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to Variational AutoEncoders (VAEs) and applications to physical data
by Vanessa Boehm (UC Berkeley and LBNL)
Feb 27 2022
In this notebook we will be implementing our first Variational Autoencoder. VAEs are a useful tool for the analysis of physical data because of their probabilistic layout. We will start from a (non-probabilistic) autoencoder and convert it into its variational counterpart.
Autoencoder
Recall that an Auto-Encoder consists of two networks: An encoder network that takes the data, $x$, and maps it to a lower-dimensional latent space. We will call this network $f$ and its network parameters $\phi$. A decoder network that takes the encoded data, $z$, and maps it back to the data space. We will call the result of the reconstruction $x'$, the decoder network $g$ (for generator) and its network parameters $\psi$.
$$ x' = g_\psi(f_\phi(x)) \tag{1}$$
An Auto-Encoder is trained to minimize the reconstruction error between the input $x$ and the reconstruction $x'$.
$$ \mathcal{L}_{AE}(\phi,\psi) = ||x-g_\psi(f_\phi(x))||^2_2 \tag{2}$$
Probabilistic Interpretation (lecture recap)
In a Variational Auto-Encoder we interpret the reconstruction task probabilistically: Compressing the data results in a loss of information about the original data. If we only have access to the compressed data, we have no chance of knowing what the original data looked like exactly. Instead, we obtain a probability distribution over possible inputs.
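Equations (1) and (2) can be made concrete with a tiny NumPy sketch, with random linear maps standing in for the trained networks $f_\phi$ and $g_\psi$ (purely illustrative; this is not the architecture used later in the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2                      # data and latent dimensionality

W_enc = rng.normal(size=(k, d))  # stand-in for the encoder f_phi
W_dec = rng.normal(size=(d, k))  # stand-in for the decoder g_psi

x = rng.normal(size=d)
z = W_enc @ x                    # encode: z = f(x)
x_rec = W_dec @ z                # decode: x' = g(z)

# Eq. (2): squared L2 reconstruction error
loss_ae = np.sum((x - x_rec) ** 2)
print(z.shape, loss_ae)
```

Training an autoencoder amounts to adjusting the parameters of $f$ and $g$ to make this loss small on average over the dataset.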
This motivates a probabilistic formulation of the problem: Let's assume that the data follows some probability distribution $p(x)$ (each data point is drawn from this distribution)
$$ p(x) = \int \mathrm{d}z\, p(x|z) p(z) \tag{3}$$
Here, we have introduced two probability distributions on the right hand side, the likelihood, $p(x|z)$, and the prior, $p(z)$.
The likelihood arises because of the information loss in the compression.
$$ x = g_\psi(f_\phi(x)) + \epsilon = g_\psi(z) + \epsilon \tag{4}$$
Ideally, $\epsilon$, the part of the data that is lost in the compression, is just noise and unimportant for our final data analysis. The form of the likelihood is equal to the distribution of this noise. For example, if the noise is Gaussian (which is often the case for physical data) with covariance $\Sigma_\epsilon$, the likelihood is a Gaussian distribution:
$$ p_{\psi}(x|z) = \mathcal{G}(g_\psi(z),\Sigma_\epsilon) \tag{5}$$
Optional Question 1
Starting from $p(x|z) = \int \mathrm{d}\epsilon\, p(x,\epsilon|z)$ can you show that the likelihood follows the same distribution as $\epsilon$?
Solution: $p(x|z) = \int \mathrm{d}\epsilon\, p(x,\epsilon|z) = \int \mathrm{d}\epsilon\, p(x|z,\epsilon) p(\epsilon) = \int \mathrm{d}\epsilon\, \delta_D(x-g(z)-\epsilon) p(\epsilon) = p(x-g(z))$
The prior, $p(z)$, is the average distribution of the encoded data.
$$ p(z) = \int \mathrm{d}x\, p(z|x) p(x) \tag{6}$$
In a VAE, we want the prior distribution to have closed form and to be easy to sample from (for artificial data generation). We have the freedom to choose the prior distribution as a constraint. The network training will ensure it is obeyed. A common choice is a normal distribution
$$ p(z) = \mathcal{N}(0,1) \tag{7} $$
Variational Autoencoder & Evidence Lower BOund (ELBO)
To train the Variational Auto-Encoder we maximize the average log probability, $\log p(x)$, or, as we will see now, a lower bound to this quantity.
Equation (3) involves solving a fairly high dimensional integral, which is a computationally expensive and sometimes infeasible operation. This integral has to be solved not only once, but in each training step. Variational Autoencoders solve this integral approximately by using a variational ansatz for the posterior distribution, the approximate posterior $q_\phi(z|x)$. This distribution approximates the true posterior $p(z|x)$ and is parameterized by the encoder parameters $\phi$. The classic choice for the variational posterior is a multivariate Gaussian in the mean field approximation (mean field meaning no off-diagonal terms in the covariance)
$$ q_\phi(z|x) = \mathcal{G}(\mu,\sigma_i) \tag{7} $$
where the mean, $\mu$, and variance, $\sigma$, are determined by the encoder network.
$$(\mu, \sigma) = f_\phi(x) \tag{8} $$
As we saw in the lecture, the variational ansatz allows us to formulate a lower bound to $\log p(x)$, the Evidence Lower BOund.
$$ \log p(x) \geq \int \mathrm{d}z\, q_\phi(z|x) \log{p_\psi(x|z)} - \int \mathrm{d}z\, q_\phi(z|x) \log{\frac{q_\phi(z|x)}{p(z)}} = ELBO \tag{8}$$
$$ \mathcal{L}_{VAE}(\phi,\psi) = -ELBO \tag{9}$$
The ELBO consists of two terms. The first term measures the expectation value of the likelihood over the posterior. Maximizing this term encourages high quality reconstructions (similar to the autoencoder). The second term is the KL-Divergence (a distance measure) between the variational posterior and the prior. This term acts as a regularizer. It encourages posterior distributions which are similar to the prior.
In VAE training the first term is evaluated stochastically, meaning that the expectation value is evaluated approximately by averaging over a number of samples from $q_\phi(z|x)$. The second term can be either evaluated analytically (the KL divergence between two Gaussian distributions can be calculated) or stochastically.
Reparametrization Trick
Minimizing Eq.
(9) requires taking gradients with respect to $\phi$ and $\psi$. But how do we take the gradient through an expectation value? We use what is called the reparametrization trick. Instead of sampling from the posterior $q_\phi(z|x)$ we sample from the parameter-independent normal distribution
$$ \zeta \sim \mathcal{N}(0,1) \tag{10}$$
and use the identity $z=\zeta \cdot \sigma_\phi+\mu_\phi$, an operation which is trivially differentiable, to obtain our samples.
PyTorch will perform the reparametrization trick for us under the hood, if we use distribution.rsample() - so we don't have to code it explicitly.
End of recap. Beginning of the coding exercise!
In this first part, all you need to do is download the dataset and read through the code. You learned about Autoencoders yesterday, so all you need to do is go through the code below and make sure you understand it. We will start modifying it in the next section.
Let's start by importing a few packages that we will need later
End of explanation """
from google.colab import drive
import os
drive.mount('/content/drive')
""" Explanation: Our dataset
In this coding exercise we will be working with a galaxy spectra sample from the SDSS-BOSS DR 16 release. The spectra have been de-redshifted to the restframe and their magnitude has been standardized to a distance corresponding to $z_\lambda=0.1$. They have further been downsampled to 1000 pixels, denoised and inpainted where masks were present. (I can tell you more about the data cuts and preprocessing if you are interested, but it is not relevant for this task.)
Despite being relatively high-dimensional ($d=1000$), galaxy spectra actually reside on a lower dimensional manifold. An indication for this is that we can compress them to much smaller dimensionality without sacrificing much reconstruction quality. This property makes them a very suitable data type for VAEs.
(The same applies to image data, but image datasets are computationally more expensive to train on and they need more complicated network architectures - things that we don't want to worry about in this exercise.)
STEP 1: Download the training and test datasets (training set, test set) and place them in your Google Drive. (If you want to avoid having to modify the file paths in the code below, you need to create a folder called 'ML_lecture_data' and place the files in there.)
Next, we link Google Drive to this notebook
End of explanation """
! ls drive/MyDrive/ML_lecture_data/
""" Explanation: Use this line to confirm the location of your files
End of explanation """
INPUT_SIZE = 1000
LATENT_SIZE = 6
""" Explanation: Let's set some immutable variables: The dimensionality of the input data and the dimensionality of the latent (encoded) space
End of explanation """
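The SDSS_DR16 dataset class defined next standardizes each spectrum with the global mean and standard deviation of the data; the effect of that transform can be sketched with synthetic numbers (illustrative values, not the actual spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
data = 5.0 + 2.0 * rng.normal(size=(100, 1000))  # synthetic "spectra" with offset and scale

# global standardization, as done in the dataset's __getitem__
mean, std = data.mean(), data.std()
standardized = (data - mean) / std

print(standardized.mean(), standardized.std())  # approximately 0 and 1
```

Standardized inputs keep the network's activations and gradients in a well-behaved range, which generally makes training easier.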
""" if train: self.data = np.load(open(os.path.join(root_dir,'DR16_denoised_inpainted_train.npy'),'rb'),allow_pickle=True) else: self.data = np.load(open(os.path.join(root_dir,'DR16_denoised_inpainted_test.npy'),'rb'),allow_pickle=True) self.data = torch.as_tensor(self.data) self.mean = torch.mean(self.data) self.std = torch.std(self.data) def __len__(self): return len(self.data) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() sample = (self.data[idx]-self.mean)/self.std return sample #initialize datasets training_data = SDSS_DR16(train=True) test_data = SDSS_DR16(train=False) """ Explanation: Next we create pytorch datasets from the training and test data (note that you need to change the root_dir, if you placed the data in a different folder) End of explanation """ # we inherit from pytorch Module class; https://pytorch.org/docs/stable/generated/torch.nn.Module.html class Encoder(nn.Module): def __init__(self, seed=853): """ seed: int, random seed for reproducibility """ super(Encoder, self).__init__() self.seed = torch.manual_seed(seed) # here we are initializing the linear layers. This registeres the layer parameters (W,b) as parameters of the Module self.fc1 = nn.Linear(INPUT_SIZE,50) self.fc2 = nn.Linear(50,LATENT_SIZE) # this defines a forward pass of the network (="applying" the network to some input data) def forward(self, x): x = torch.nn.LeakyReLU()(self.fc1(x)) z = self.fc2(x) return z class Decoder(nn.Module): def __init__(self, seed=620): """ seed: int, random seed for reproducibility """ super(Decoder, self).__init__() self.seed = torch.manual_seed(seed) self.fc1 = nn.Linear(LATENT_SIZE,50) self.fc2 = nn.Linear(50,INPUT_SIZE) def forward(self, z): z = torch.nn.LeakyReLU()(self.fc1(z)) x = self.fc2(z) return x """ Explanation: Autoencoder This exercise starts with an Autoencoder, which is already implemented and working. The next cells walk you through the code. 
Your task (further below) will be to take that code and modify it into a Variational Autoencoder.
First, we define our encoder and decoder networks. We use a very simple MLP, with two linear layers and one non-linear activation function.
End of explanation
"""

# we inherit from the pytorch Module class; https://pytorch.org/docs/stable/generated/torch.nn.Module.html
class Encoder(nn.Module):
    def __init__(self, seed=853):
        """
        seed: int, random seed for reproducibility
        """
        super(Encoder, self).__init__()
        self.seed = torch.manual_seed(seed)
        # here we are initializing the linear layers. This registers the layer parameters (W,b) as parameters of the Module
        self.fc1 = nn.Linear(INPUT_SIZE,50)
        self.fc2 = nn.Linear(50,LATENT_SIZE)

    # this defines a forward pass of the network (="applying" the network to some input data)
    def forward(self, x):
        x = torch.nn.LeakyReLU()(self.fc1(x))
        z = self.fc2(x)
        return z

class Decoder(nn.Module):
    def __init__(self, seed=620):
        """
        seed: int, random seed for reproducibility
        """
        super(Decoder, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(LATENT_SIZE,50)
        self.fc2 = nn.Linear(50,INPUT_SIZE)

    def forward(self, z):
        z = torch.nn.LeakyReLU()(self.fc1(z))
        x = self.fc2(z)
        return x

"""
Explanation: Having defined the encoder and decoder networks, we can move on to define the Autoencoder.
End of explanation
"""

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # here we are creating instances of the Encoder and Decoder class
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, x):
        z = self.encoder(x)
        x = self.decoder(z)
        return x

# This creates an instance of the Autoencoder class
AE = Autoencoder()

"""
Explanation: The next step is to train the Autoencoder. This is what a generic training loop looks like:
End of explanation
"""

# the training loop takes a function that loads the data batch by batch, a model to train,
# a loss function to train the model on and an optimizer
def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    losses = []
    # iterate over the dataset
    for batch, X in enumerate(dataloader):
        # Compute prediction of the model (in case of the AE the prediction is the reconstructed data)
        pred = model(X)
        # Compute the loss function (in case of the AE this is the L2 distance to the input data)
        loss = loss_fn(pred,X)

        # Backpropagation; this is where we take the gradient and update the network parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # here we keep track of the loss
        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            losses.append(loss)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
    return losses

# the test loop is similar to the training loop, only that we don't take any gradients/don't
# update the network parameters, but only evaluate
def test_loop(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss = 0

    with torch.no_grad():
        for X in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, X).item()

    test_loss /= num_batches
    print(f" Avg loss: {test_loss:>8f} \n")
    return test_loss

"""
Explanation: In the next cell we set the training parameters, define the loss function and create DataLoaders. Pytorch DataLoaders manage the data loading for us (break the dataset into batches, keep track of epochs, reshuffle the data after each epoch).
End of explanation
"""

BATCHSIZE = 128
BATCHSIZE_TEST = 256
LEARNING_RATE = 1e-3

# MeanSquaredError (L2) Loss
loss_fn = nn.MSELoss()
# Adam Optimizer
optimizer = torch.optim.Adam(AE.parameters(), lr=LEARNING_RATE)

# Dataloaders
train_dataloader = DataLoader(training_data, batch_size=BATCHSIZE, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=BATCHSIZE_TEST, shuffle=True)

"""
Explanation: It's finally time for training:
End of explanation
"""

EPOCHS = 30
SEED = 555

train_loss = []
test_loss = []
for t in range(EPOCHS):
    torch.manual_seed(SEED)
    torch.cuda.manual_seed(SEED)
    np.random.seed(SEED)
    print(f"Epoch {t+1}\n-------------------------------")
    train_loss.append(train_loop(train_dataloader, AE, loss_fn, optimizer))
    test_loss.append(test_loop(test_dataloader, AE, loss_fn))
print("Done!")

"""
Explanation: Let's see how the model is doing.
Let's look at Training and test loss Final reconstruction quality End of explanation """ test_input = next(iter(test_dataloader)) with torch.no_grad(): recons = AE(test_input) # This is the mapping from pixel to the de-redshifted (rest) wavelength wlmin, wlmax = (3388,8318) fixed_num_bins = 1000 wl_range = (np.log10(wlmin),np.log10(wlmax)) wl = np.logspace(wl_range[0],wl_range[1],fixed_num_bins) fig, ax = plt.subplots(4,4, figsize=(20,10), sharex=True) ax = ax.flatten() for ii in range(16): ax[ii].plot(wl,test_input[ii], label='input') ax[ii].plot(wl,recons[ii],alpha=0.7,label='reconstruction') if ii in np.arange(12,16): ax[ii].set_xlabel('wavelength [Ångströms]') if ii in [0,4,8,12]: ax[ii].set_ylabel('some standardized flux') if ii==0: ax[ii].legend() plt.show() """ Explanation: You can see that the model had a really easy time learning the task and that we haven't overfitted yet Now, let's look at a few reconstructions: End of explanation """ avg_loss = 0 with torch.no_grad(): for X in test_dataloader: pred = AE(X) avg_loss+=np.mean((pred.cpu().numpy()-X.cpu().numpy())**2,axis=0)/(len(test_data)//BATCHSIZE_TEST) plt.figure() plt.plot(wl,np.sqrt(avg_loss)) plt.ylabel('average reconstruction error') plt.xlabel('wavelength [Ångströms]') plt.show() #Optional: save the model weights #torch.save(AE.state_dict(), 'drive/MyDrive/ML_lecture_models/AE_model_weights.pth') """ Explanation: ... and the average reconstruction error as a function of wavelength: End of explanation """ class VAEEncoder(nn.Module): def __init__(self, seed=853): super(VAEEncoder, self).__init__() # TASK: change the output size of the encoder network. How many parameters must it return to define q(z|x)? self.seed = torch.manual_seed(seed) self.fc1 = nn.Linear(INPUT_SIZE,50) self.fc2 = nn.Linear(50,LATENT_SIZE*2) def forward(self, x): # TASK: change the output of the encoder network. Instead of just returning z, it should return z and ...? 
        # HINT: Don't forget that the standard deviation/variance must be strictly positive!
        # HINT: You might want to use torch.split(): https://pytorch.org/docs/stable/generated/torch.split.html
        x = torch.nn.LeakyReLU()(self.fc1(x))
        x = self.fc2(x)
        mu,std = torch.split(x, LATENT_SIZE,dim=-1)
        std = torch.exp(std) + 1e-8
        return mu, std

"""
Explanation: A few things you might have noticed and that can be useful to keep in mind:
The model expects a single precision input. You can change the type of a tensor with tensor_name.type(), where tensor_name is the name of your tensor and type is the dtype. For typecasting into single precision floating points, use float(). A numpy array is typecast with array_name.astype(type). For single precision, the type should be np.float32.
Before we analyze tensors we often want to convert them to numpy arrays with tensor_name.numpy()
If pytorch has been tracking operations that resulted in the current tensor value, you need to detach the tensor from the graph before you can transform it into a numpy array: tensor_name.detach(). Scalars can be detached with scalar.item()
If your tensor is currently on the GPU, you can bring it onto the CPU with tensor_name.cpu()
Variational Autoencoder
Your task today is to transform the above Autoencoder into a Variational Autoencoder. You can either follow my step-by-step instructions or do it your way (there's more than one solution). We start by modifying the encoder network. Tasks are marked with the keyword #TASK, hints are marked with the keyword #HINT.
STEP 1: Modify the encoder network.
The encoder network is used to characterize the variational distribution q(z|x). Recall that we want q(z|x) to be a Gaussian with diagonal covariance. An N-dimensional Gaussian with diagonal covariance is equivalent to N independent 1-dimensional Gaussians (where N is the latent size). Each Gaussian is defined by two quantities, its mean and variance (or, if we take the sqrt, the standard deviation).
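The factorization claim above can be checked numerically, independently of the exercise: the log-density of a diagonal-covariance Gaussian is just the sum of the per-dimension 1-D Gaussian log-densities. A quick NumPy sketch (all numbers are arbitrary):

```python
import numpy as np

mu = np.array([0.5, -1.0, 2.0])      # per-dimension means (arbitrary)
std = np.array([1.0, 0.5, 2.0])      # per-dimension standard deviations
z = np.array([0.0, -0.5, 1.0])       # point at which to evaluate the density

def log_normal_1d(z, mu, std):
    # log density of a 1-D Gaussian N(mu, std^2)
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((z - mu) / std) ** 2

# Sum of N independent 1-D log densities ...
log_q_factorized = np.sum(log_normal_1d(z, mu, std))

# ... equals the log density of the N-D Gaussian with diagonal covariance
cov = np.diag(std ** 2)
diff = z - mu
log_q_full = -0.5 * (len(z) * np.log(2 * np.pi)
                     + np.log(np.linalg.det(cov))
                     + diff @ np.linalg.inv(cov) @ diff)

print(np.isclose(log_q_factorized, log_q_full))   # True
```

This is also why the encoder only needs to output 2*LATENT_SIZE numbers rather than a full covariance matrix.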
End of explanation """ class VAEDecoder(nn.Module): def __init__(self, seed=620): super(VAEDecoder, self).__init__() self.seed = torch.manual_seed(seed) self.fc1 = nn.Linear(LATENT_SIZE,50) self.fc2 = nn.Linear(50,INPUT_SIZE) def forward(self, z): z = torch.nn.LeakyReLU()(self.fc1(z)) x = self.fc2(z) return x """ Explanation: STEP 2: Modify the decoder network. We will leave our decoder network as it is :) End of explanation """ #TASK: Familiarize yourself with torch.distribution.Normal - you can find the documentation here: https://pytorch.org/docs/stable/distributions.html#normal #HINT: It takes a standard deviation (scale) not a variance as input from torch.distributions import Normal as Normal """ Explanation: Since we work with probability distributions in the VAE, we need to import the torch distributions package. We will only need the normal distribution for this exercise. End of explanation """ class VariationalAutoencoder(nn.Module): #TASK: add parameters mentioned in point 1. def __init__(self, sample_size, sigma): super(VariationalAutoencoder, self).__init__() self.encoder = VAEEncoder() self.decoder = VAEDecoder() self.sample_size = sample_size self.sigma = sigma #TASK: Use the Normal class to define the prior (a standard normal distribution), p(z) self.prior = Normal(torch.zeros(LATENT_SIZE), torch.ones(LATENT_SIZE)) def change_sample_size(self,sample_size): self.sample_size = sample_size return True def get_q(self,x): #TASK: write a method that computes q(z,x) #HINT: use the Normal class we imported above mu, std = self.encoder(x) self.q = Normal(mu, std) return True def sample_q(self): #TASK: write a method that samples from q #HINT: use rsample to apply the reparameterization trick z_sample = self.q.rsample(torch.Size([self.sample_size])) return z_sample def get_avg_log_likelihood(self,recons,x): #TASK: Write a method that returns the first term in the ELBO (this method should define the likelihood and evaluate the average log likelihood of the 
        # reconstruction)
        #HINT: Pay attention to shapes. The function should return an average log likelihood (a single number) for every data point in the batch.
        #HINT: The output shape of Normal(mu, sigma).log_prob() is a little unintuitive. If mu or sigma are N-dimensional, it returns N results (applies N independent Gaussians).
        #HINT: You need to average over samples from q to obtain the final result.
        ll = Normal(x[None,:,:], self.sigma)
        log_p = ll.log_prob(recons)
        log_p = torch.sum(log_p,dim=-1)
        return torch.mean(log_p,dim=0)

    def stochastic_kl_divergence(self,z_sample):
        #TASK: Write a method that computes the KL-divergence between q(z|x) and p(z)
        #HINT: Pay attention to shape
        return torch.mean(torch.sum(self.q.log_prob(z_sample),dim=-1)-torch.sum(self.prior.log_prob(z_sample),dim=-1), dim=0)

    def forward(self, x):
        #TASK: a forward pass should return the two terms in the ELBO
        #HINT: use all the methods we defined above
        self.get_q(x)
        samples = self.sample_q()
        recons = self.decoder(samples)
        log_likelihood = self.get_avg_log_likelihood(recons,x)
        kl = self.stochastic_kl_divergence(samples)
        return log_likelihood, kl

"""
Explanation: STEP 3: Modify the AE class into a VAE class!
A VAE has a few more input parameters than an AE.
We need a sample size, which determines how many samples we draw from $q_\phi(z|x)$ for evaluating the ELBO. We also need a $\sigma_\epsilon$ to characterize the likelihood, $p_\psi(x|z)=\mathcal{G}(x',\sigma_\epsilon)$.
The prior is fixed. We can define it in the beginning, when we initialize the VAE.
We need methods to compute the variational posterior, the likelihood, the KL-divergence and the ELBO.
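Two of the ingredients above can be sanity-checked outside torch. rsample draws z = mu + std * eps with eps ~ N(0, I) (the reparameterization trick, which is what lets gradients flow into mu and std), and the stochastic KL estimator should agree, on average, with the closed form KL(N(mu, std^2) || N(0, I)) = 0.5 * sum(mu^2 + std^2 - 1 - log std^2). A NumPy sketch with arbitrary toy parameters (not the exercise solution):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.5, -0.2])       # toy "encoder outputs"
std = np.array([0.8, 1.3])

# Reparameterization trick: z = mu + std * eps, eps ~ N(0, I)
eps = rng.standard_normal((200_000, 2))
z = mu + std * eps

def log_normal(z, m, s):
    # log density of independent 1-D Gaussians, one per dimension
    return -0.5 * np.log(2 * np.pi) - np.log(s) - 0.5 * ((z - m) / s) ** 2

# Stochastic KL estimate: E_q[ log q(z) - log p(z) ]
kl_mc = np.mean(np.sum(log_normal(z, mu, std) - log_normal(z, 0.0, 1.0), axis=-1))

# Closed-form KL( N(mu, diag(std^2)) || N(0, I) )
kl_analytic = 0.5 * np.sum(mu**2 + std**2 - 1.0 - np.log(std**2))

print(abs(kl_mc - kl_analytic) < 0.05)   # True: the estimator matches the closed form
```

With many samples the Monte Carlo estimate converges to the analytic value; in the exercise you use only a handful of samples per data point, which is one source of training noise.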
End of explanation """ #TASK: create an instance of the Variational Autoencoder with sample_size=4 and sigma=1 VAE = VariationalAutoencoder(4,sigma=1) optimizer = torch.optim.Adam(VAE.parameters(), lr=LEARNING_RATE) #from torch.optim.lr_scheduler import StepLR #scheduler = StepLR(optimizer, step_size=10, gamma=0.75) #TASK: define the new loss function def negative_ELBO(avg_log_likelihood,kl): negative_ELBO = - torch.mean(avg_log_likelihood-kl) return negative_ELBO def train_loop(dataloader, model, loss_fn, optimizer): size = len(dataloader.dataset) losses = [] for batch, X in enumerate(dataloader): #TASK: compute the loss from the output of the VAE foward pass log_likelihood, kl = model(X) loss = loss_fn(log_likelihood,kl) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() if batch % 100 == 0: loss, current = loss.item(), batch * len(X) losses.append(loss) print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") #scheduler.step() return losses def test_loop(dataloader, model, loss_fn): size = len(dataloader.dataset) num_batches = len(dataloader) test_loss, nllh, kl_ = 0, 0, 0 model.eval() with torch.no_grad(): for X in dataloader: #TASK: in the test loop we want to keep track not only of the ELBO, but also of the two terms that contribute to the ELBO (kl diveregence and loglikelihood) log_likelihood, kl = model(X) test_loss += loss_fn(log_likelihood,kl).item() nllh += -np.mean(log_likelihood.cpu().numpy()) kl_ += np.mean(kl.cpu().numpy()) test_loss /= num_batches kl_ /= num_batches nllh /= num_batches print(f" Avg test loss : {test_loss:>8f}") print(f" Avg KL : {kl_:>8f}") print(f" Avg negative log likelihood : {nllh:>8f} \n") return test_loss, kl_, nllh """ Explanation: STEP 4: Prepare for training End of explanation """ EPOCHS = 8 SEED = 1234 train_loss = [] test_loss = [] for t in range(EPOCHS): torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) np.random.seed(SEED) print(f"Epoch {t+1}\n-------------------------------") 
train_loss.append(train_loop(train_dataloader, VAE, negative_ELBO, optimizer)) test_loss.append(test_loop(test_dataloader, VAE, negative_ELBO)) print("Done!") test_loss = np.asarray(test_loss) #TASK: plot the training loss, test loss, and the contributions to the loss from each of the two terms length = len(np.asarray(train_loss).flatten()) plt.figure() plt.plot(np.linspace(0,length*100,length), np.asarray(train_loss).flatten(),label='training set') plt.plot(np.linspace(100,(length)*100,len(test_loss)),test_loss[:,0],label='test set loss') plt.plot(np.linspace(100,(length)*100,len(test_loss)),test_loss[:,1],label='test set kl') plt.plot(np.linspace(100,(length)*100,len(test_loss)),test_loss[:,2],label='test set neg log likelihood') plt.xlabel('training step') plt.ylabel('loss') plt.legend() plt.show() # TASK: Inspect how the contribution of the kl divergence and log likelihood to the loss change as you change the noise in the likelihood. Some suggested values: sigma=[0.5,1,2] # TASK: what happens when you change the number of samples? # What do you observe? Can you interpret it? """ Explanation: Let's train! Note: Training a VAE can be unstable. If things get weird, try changing the random seed or hyperparameters, such as the number of epochs, batchsize, learning rate, sample size, likelihood noise level... HINT: To get good artificial data samples, you need to have a KL divergence in the single digits End of explanation """ #TASK: plot the average reconstruction error of the model as a function of wavelength (similar to above). How does it compare to the Autoencoder? 
#HINT: Use the mean of $q(z|x)$ as the latent point for data x avg_loss = 0 VAE.eval() with torch.no_grad(): for X in test_dataloader: pred = VAE.decoder(VAE.encoder(X)[0]) avg_loss+=np.mean((pred.cpu().numpy()-X.cpu().numpy())**2,axis=0)/(len(test_data)//BATCHSIZE_TEST) plt.figure() plt.plot(wl,np.sqrt(avg_loss)) plt.ylabel('average reconstruction error') plt.xlabel('wavelength [Ångströms]') plt.show() #TASK: make a corner plot of posterior samples. Does the average posterior match the prior? import seaborn as sns import pandas as pd VAE.eval() with torch.no_grad(): for ii, X in enumerate(test_dataloader): VAE.get_q(X) prior_sample = VAE.prior.sample([BATCHSIZE_TEST]) sample = VAE.sample_q().cpu().numpy()[0:1].swapaxes(0,1) if ii==0: samples = sample prior_samples = prior_sample else: samples = np.vstack([samples, sample]) prior_samples = np.vstack([prior_samples, prior_sample]) samples = np.reshape(samples,[-1, LATENT_SIZE]) prior_samples = np.reshape(prior_samples,[-1, LATENT_SIZE]) print(samples.shape) print(prior_samples.shape) data1 = pd.DataFrame() data2 = pd.DataFrame() for ii in range(LATENT_SIZE): data1['dim_%d'%ii] = samples[:,ii] data1['source'] = 'posterior' for ii in range(LATENT_SIZE): data2['dim_%d'%ii] = prior_samples[:,ii] data2['source'] = 'prior' data = pd.concat([data1,data2]).reset_index(drop=True) #HINT: to get a density estimate you can set kind='kde', but you'll probably have to reduce the number of samples, KDE optimization scales pretty badly with number of samples sns.pairplot(data,corner=True,kind='scatter', hue='source', plot_kws={'s':4}) plt.show() #TASK: Generate artificial data: sample from the prior and foward model the sample thorugh the decoder. Do the samples look realistic? Why?/Why not? 
VAE.eval() with torch.no_grad(): samples = VAE.prior.sample([16]) data_samples = VAE.decoder(samples) # This is the mapping from pixel to the de-redshifted (rest) wavelength wlmin, wlmax = (3388,8318) fixed_num_bins = 1000 wl_range = (np.log10(wlmin),np.log10(wlmax)) wl = np.logspace(wl_range[0],wl_range[1],fixed_num_bins) fig, ax = plt.subplots(4,4, figsize=(20,10), sharex=True) ax = ax.flatten() for ii in range(16): ax[ii].plot(wl,data_samples[ii], label='artificial data') if ii in np.arange(12,16): ax[ii].set_xlabel('wavelength [Ångströms]') if ii in [0,4,8,12]: ax[ii].set_ylabel('some standardized flux') if ii==0: ax[ii].legend() plt.show() #Optional: Save model weights # torch.save(VAE.state_dict(), 'drive/MyDrive/ML_lecture_models/VAE_model_weights.pth') # torch.save(VAE.encoder.state_dict(), 'drive/MyDrive/ML_lecture_models/Encoder_model_weights.pth') # torch.save(VAE.decoder.state_dict(), 'drive/MyDrive/ML_lecture_models/Decoder_model_weights.pth') # VAE.load_state_dict(torch.load('drive/MyDrive/ML_lecture_models/VAE_model_weights.pth')) """ Explanation: STEP 5: Inspect the model performance Similar to the AE, we will look at the average reconstruction quality. But in addition, we also want to know how well the kl term was minimized. We will therefore look at three things Reconstruction quality Scatter plots of posterior samples and prior samples. Recall that $p(z)=\int \mathcal{d}x\, p(x,z) \approx \frac{1}{N_{samples}} \sum_{x\sim p(x)} p(z|x)$. Quality of artificial data generation End of explanation """
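Regarding the earlier task about changing the likelihood noise level: since log N(x; x_hat, sigma^2) = -0.5*log(2*pi*sigma^2) - (x - x_hat)^2 / (2*sigma^2), doubling sigma divides the reconstruction penalty by four while leaving the KL term untouched, so larger sigma lets the optimizer prioritize matching the prior. A quick numeric illustration (toy numbers, NumPy only):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])        # toy data point
x_hat = np.array([0.1, 0.8, 2.3])    # an imperfect reconstruction

def gauss_log_lik(x, x_hat, sigma):
    # log N(x; x_hat, sigma^2), summed over data dimensions
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - x_hat) ** 2 / (2 * sigma**2))

# A perfect reconstruction always scores higher than an imperfect one
print(gauss_log_lik(x, x, 1.0) > gauss_log_lik(x, x_hat, 1.0))   # True

# The sigma-dependent reconstruction penalty, (x - x_hat)^2 / (2 sigma^2):
def penalty(sigma):
    return np.sum((x - x_hat) ** 2 / (2 * sigma**2))

print(penalty(1.0) / penalty(2.0))   # 4.0: doubling sigma quarters the penalty
```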
ValFadeev/ihaskell-notebooks
notebooks/functional_python.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from random import random, randint, choice from itertools import cycle, ifilter, imap, islice, izip, starmap, tee from collections import defaultdict from operator import add, mul from pymonad.Maybe import * from pymonad.Reader import * """ Explanation: In this document I would like to go through some functional idioms in Python involving the use of iterators and highlight some parallels with the equivalent Haskell implementations. End of explanation """ a = range(0, 9) zip(*[iter(a)]*4) """ Explanation: The ubiquitous zip I must admit I severely underestimated the importance of zip when I first started learning functional style of programming. It comes up in a wide range of patterns, however one of the more elegant applications in Python is given in the following example: End of explanation """ def zip_with_(f, a, b): return (f(*z) for z in izip(a, b)) """ Explanation: In fact, it is mentioned in the documentation in the section on built-in functions. I guess, it is one of those bits that are easily skipped on first reading. The reason this example works the way it does, namely, partitioning the iterable into chunks of the given length, is that zip evaluates its arguments strictly left-to-right. Hence, all the copies of the iterator get advanced together and dereferenced one by one. Now we take a deeper dive straight away. Haskell introduces the zipWith function which takes an additional argument, a function which is applied to the corresponding elements of the zipped iterables. Thus the output is in general no longer a list of tuples of the original elements. A possible implementation in Python is given below. Note that we return a generator so that the result can be evaluated lazily as needed. End of explanation """ def zip_with(f, *a): return starmap(f, izip(*a)) """ Explanation: Another, more generic, implementation is possible using the starmap function. 
End of explanation """ b = range(10, 20) list(zip_with(add, a, reversed(b))) """ Explanation: Using zip_with we can express operations on sequences in a more functional way: End of explanation """ @curry def take(n, a): return islice(a, None, n) @curry def drop(n, a): return islice(a, n, None) """ Explanation: Before we proceed, let's introduce another bit of syntactical convenience: End of explanation """ def double_every_other(a): return zip_with(mul, a, cycle([1,2])) """ Explanation: @curry will be explained later, for now just think of it as a fancy way of applying the idea behind functools.partial to forge partial application in Python. I discovered some of the less obvious applications of zipWith while working on the exercises from the canonical CIS194 course on Haskell. For example, in order to multiply every other element of a sequence by, say 2, we can generate a "masking" sequenc of 1s and 2s and zip it with the original sequence via multiplication: End of explanation """ x = cycle([1, 2, 3]) take15 = take(15) list(take15(x)) y = double_every_other(x) list(take15(y)) """ Explanation: Lazy evaluation allows us to work with infinite lists without much trouble: End of explanation """ def rotate(n, a): return (x for x, _ in izip(islice(cycle(a), n , None), a)) h = "_hello_lazy_world" r = rotate(6, h) ''.join(list(r)) """ Explanation: In another amazing example which I first came across in a SO answer zip is used to rotate a (potentially empty or infinite) sequence. 
In order to emphasize the Haskell influence, let us first write it without all of the convenience functions defined above: End of explanation """ def const(x, y): return x def rotate1(n, a): return zip_with(const, drop(n, cycle(a)), a) r1 = rotate1(6, h) ''.join(list(r1)) """ Explanation: Now we rewrite the same with more Haskell flavour: End of explanation """ class Stream(object): def __init__(self, data): self.data = iter(data) def __iter__(self): class iterator(object): def __init__(self, it): self.it = it.data def next(self): return next(self.it) return iterator(self) def filter(self, pred): return Stream(ifilter(pred, self)) def map(self, fn): return Stream(imap(fn, self)) s = Stream(range(20)). \ filter(lambda x: x % 7 == 0). \ map(lambda x: x * x) list(s) """ Explanation: Contextual iterators and monads Using a custom iterator for a class we can implement some fluent syntax for situations when operations on iterators need to be chained (inspiration taken from lodash.js). End of explanation """ def make_stream(data): try: return Just(iter(data)) except TypeError: return Nothing """ Explanation: We may notice that the resulting flow has certain traits of a composable contextual computation - something that monads were introduced to deal with. Indeed, we start by putting the original data in a "streaming context". Each public method then extracts the actual data, applies some transformation and wraps it back into the context before passing on. This document is not going to descend into yet another tutorial on monads. Instead we will use the PyMonad in a somewhat crude manner to demonstrate how the same goal can be achieved in a more functional way. First we define a function that will wrap raw data in a context. 
If we were implementing our own monad, this would be the unit End of explanation """ @curry def filter_stream(pred, stream): return Just(ifilter(pred, stream)) @curry def map_stream(fn, stream): return Just(imap(fn, stream)) """ Explanation: Now express the operations performed by the methods as monadic functions. End of explanation """ def eval_stream(stream): if isinstance(stream, Just): return stream.value else: return () """ Explanation: At the end of the chain we will also need a way out of the context to continue working with the data. We do not intend to be 100% pure after all. End of explanation """ my_filter = filter_stream(lambda x: x % 7 == 0) my_map = map_stream(lambda x: x * x) """ Explanation: Now we can also partially apply our function for better readability and potential reuse. End of explanation """ st = make_stream(range(20)) st1 = st >> my_filter >> my_map list(eval_stream(st1)) """ Explanation: Finally, execute the flow. End of explanation """ @curry def filter_stream1(pred, stream): return ifilter(pred, stream) @curry def map_stream1(fn, stream): return imap(fn, stream) """ Explanation: We can take a different approach and work with ordinary functions instead: End of explanation """ my_filter1 = filter_stream1(lambda x: x % 7 == 0) my_map1 = map_stream1(lambda x: x * x) my_transform = my_filter1 * my_map1 """ Explanation: We can partially apply them, as before, and then use the overloaded * operator to denote curried function composition. End of explanation """ st2 = make_stream(range(20)) st3 = my_transform * st2 list(eval_stream(st3)) """ Explanation: Finally the transformation is applied to the "contextualized" data using the applicative style instead of the bind operator. 
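For readers who want to see the mechanics without the pymonad dependency, here is a minimal, hypothetical Maybe with a bind method. This is only a sketch of the idea (the names and API are ours, not pymonad's):

```python
class Maybe(object):
    """A tiny Maybe: just(value) carries a value, nothing carries a failure."""
    def __init__(self, value, is_just):
        self.value = value
        self.is_just = is_just

    def bind(self, f):
        # Apply f (which must return a Maybe) only when there is a value to pass on;
        # a failure short-circuits the rest of the chain.
        return f(self.value) if self.is_just else self

def just(value):
    return Maybe(value, True)

nothing = Maybe(None, False)

def safe_div(x):
    # A monadic function: fails (returns nothing) on division by zero
    return just(10.0 / x) if x != 0 else nothing

print(just(5).bind(safe_div).value)      # 2.0
print(just(0).bind(safe_div).is_just)    # False: the failure short-circuits
```

The `>>` operator used below plays exactly the role of bind here.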
End of explanation
"""

def repeatedly(f, *args, **kwargs):
    return (f(*args, **kwargs) for _ in iter(int, 1))

"""
Explanation: Splitting and unpacking
We finish with a brief example showcasing the use of the tee function to create independent iterators and the * (splat) operator to unpack sequences with zip.
We shall consider the task of plotting a histogram for a somewhat unusual data model. Say we are given a sequence of results of some measurements (scores) for a number of observables identified by labels. Suppose also that there may be more than one occurrence (count) of results with exactly the same score. We are going to partition scores into a given number of bins and aggregate counts within these bins, for each label separately.
Before we proceed we shall define a convenience function inspired by the eponymous example from Clojure (also this answer provided a nice idiom for an infinite generator)
End of explanation
"""

def generate_data():
    return (choice(['a', 'b', 'c']), float("{:.2f}".format(random())), randint(1, 20))

data = take(100, repeatedly(generate_data))
sample = take(10, repeatedly(generate_data))
list(sample)

"""
Explanation: Now let's produce some mock data. Although using effectful functions in comprehensions is generally frowned upon in Python, here it allows us to achieve a clean separation of the generating (repeatedly) and consuming (take) parts of the logic.
End of explanation
"""

def bin_labels(bins, label_format=None):
    a, b = tee(bins)
    next(b)
    label_format = label_format or '>= %.2f, < %.2f'
    return [label_format % b for b in izip(a, b)]

"""
Explanation: Now we define a function formatting labels for the plot. This is a typical example of using tee to traverse a sequence in a pairwise manner.
End of explanation """ def bin_data(x, y, bin_edges): data_to_bins = izip(np.digitize(x, bin_edges), y) bin_sums = defaultdict(int) for index, count in data_to_bins: bin_sums[index] += count return [bin_sums.get(index, 0) for index in xrange(len(bin_edges))] """ Explanation: Here we aggregate data in the bins End of explanation """ by_label = defaultdict(list) d1, d2 = tee(data) for label, score, count in d1: by_label[label].append([score, count]) num_bins = 20 _, score, _ = izip(*d2) bins = np.linspace(min(score), max(score), num_bins) bin_lbl = bin_labels(bins) series = [] for label, records in by_label.iteritems(): _, count = izip(*records) series.append({'label': label, 'data': bin_data(score, count, bins)}) result = {'series': series, 'bins': bin_lbl} fig, ax = plt.subplots(figsize=(18,6)) bar_width = 0.2 colors = ['r', 'g', 'b'] for k, item in enumerate(result['series']): index = np.arange(len(item['data'])) plt.bar(index + k * bar_width, item['data'], color = colors[k], width = bar_width, label = item['label']) plt.xlabel('Scores') plt.ylabel('Count') plt.title('Count by scores') plt.xticks(index + bar_width, result['bins'], rotation=70) plt.legend() plt.show() """ Explanation: Finally we put together a dictionary containing all the data for the plot End of explanation """
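As a footnote, the `a, b = tee(bins); next(b)` idiom used in bin_labels generalizes into a reusable pairwise helper (this mirrors the recipe in the itertools documentation; sketched here in Python-3 style with zip rather than izip):

```python
from itertools import tee

def pairwise(iterable):
    # pairwise([1, 2, 3, 4]) -> (1, 2), (2, 3), (3, 4)
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
labels = ['>= %.2f, < %.2f' % pair for pair in pairwise(edges)]
print(labels[0])   # >= 0.00, < 0.25
```

Passing a default to next(b, None) also makes the helper safe on empty iterables, which the inline version in bin_labels is not.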
AEW2015/PYNQ_PR_Overlay
docs/source/5_programming_onboard.ipynb
bsd-3-clause
from pynq import Overlay from pynq.board import LED from pynq.board import RGBLED from pynq.board import Switch from pynq.board import Button Overlay("base.bit").download() """ Explanation: Programming PYNQ-Z1's onboard peripherals LEDs, switches and buttons PYNQ-Z1 has the following on-board LEDs, pushbuttons and switches: 4 monochrome LEDs (LD3-LD0) 4 push-button switches (BTN3-BTN0) 2 RGB LEDs (LD5-LD4) 2 Slide-switches (SW1-SW0) The peripherals are highlighted in the image below. All of these peripherals are connected to programmable logic. This means controllers must be implemented in an overlay before these peripherals can be used. The base overlay contains controllers for all of these peripherals. Note that there are additional push-buttons and LEDs on the board (e.g. power LED, reset button). They are not user accessible, and are not highlighted in the figure. Peripheral Example Using the base overlay, each of the highlighted devices can be controlled using their corresponding pynq classes. To demonstrate this, we will first download the base overlay to ensure it is loaded, and then import the LED, RGBLED, Switch and Button classes from the module pynq.board. End of explanation """ led0 = LED(0) led0.on() """ Explanation: Controlling a single LED Now we can instantiate objects of each of these classes and use their methods to manipulate the corresponding peripherals. Let’s start by instantiating a single LED and turning it on and off. End of explanation """ led0.off() """ Explanation: Check the board and confirm the LD0 is ON End of explanation """ import time from pynq.board import LED from pynq.board import Button led0 = LED(0) for i in range(20): led0.toggle() time.sleep(.1) """ Explanation: Let’s then toggle led0 using the sleep() method to see the LED flashing. 
End of explanation """ MAX_LEDS = 4 MAX_SWITCHES = 2 MAX_BUTTONS = 4 leds = [0] * MAX_LEDS switches = [0] * MAX_SWITCHES buttons = [0] * MAX_BUTTONS for i in range(MAX_LEDS): leds[i] = LED(i) for i in range(MAX_SWITCHES): switches[i] = Switch(i) for i in range(MAX_BUTTONS): buttons[i] = Button(i) """ Explanation: Example: Controlling all the LEDs, switches and buttons The example below creates 3 separate lists, called leds, switches and buttons. End of explanation """ # Helper function to clear LEDs def clear_LEDs(LED_nos=list(range(MAX_LEDS))): """Clear LEDS LD3-0 or the LEDs whose numbers appear in the list""" for i in LED_nos: leds[i].off() clear_LEDs() """ Explanation: It will be useful to be able to turn off selected LEDs so we will create a helper function to do that. It either clears the LEDs whose numbers we list in the parameter, or by default clears LD3-LD0. End of explanation """ clear_LEDs() for i in range(MAX_LEDS): if switches[i%2].read(): leds[i].on() else: leds[i].off() """ Explanation: First, all LEDs are set to off. Then each switch is read, and if in the on position, the corresponding led is turned on. You can execute this cell a few times, changing the position of the switches on the board. LEDs start in the off state If SW0 is on, LD2 and LD0 will be on If SW1 is on, LD3 and LD1 will be on End of explanation """ import time clear_LEDs() while switches[0].read(): for i in range(MAX_LEDS): if buttons[i].read(): leds[i].toggle() time.sleep(.1) clear_LEDs() """ Explanation: The last example toggles an led (on or off) if its corresponding push button is pressed for so long as SW0 is switched on. To end the program, slide SW0 to the off position. End of explanation """
WenboTien/Crime_data_analysis
exploratory_data_analysis/UCIrvine_Crime_data_analysis.ipynb
mit
df = pd.read_csv('../datasets/UCIrvineCrimeData.csv'); df = df.replace('?',np.NAN) features = [x for x in df.columns if x not in ['fold', 'state', 'community', 'communityname', 'county' ,'ViolentCrimesPerPop']] """ Explanation: Read the CSV We use pandas read_csv(path/to/csv) method to read the csv file. Next, replace the missing values with np.NaN i.e. Not a Number. This way we can count the number of missing values per column. The dataset as described on UC Irvine repo. (125 predictive, 4 non-predictive, 18 potential goal) We remove the features which are final goals and some other irrelevant features. For example the following attribute is to be predicted. murdPerPop: number of murders per 100K population (numeric - decimal) potential GOAL attribute (to be predicted) End of explanation """ df.isnull().sum() """ Explanation: Find the number of missing values in every column End of explanation """ df.dropna() """ Explanation: Eliminating samples or features with missing values One of the easiest ways to deal with missing values is to simply remove the corresponding features(columns) or samples(rows) from the dataset entirely. Rows with missing values can be easily dropped via the dropna method. End of explanation """ df.dropna(axis=1); """ Explanation: Similarly, we can drop columns that have atleast one NaN in any row by setting the axis argument to 1: End of explanation """ #only drop rows where all columns are null df.dropna(how='all'); # drop rows that have not at least 4 non-NaN values df.dropna(thresh=4); # only drop rows where NaN appear in specific columns (here :'community') df.dropna(subset=['community']); """ Explanation: The dropna() method supports additional parameters that can come in handy. 
End of explanation """ imr = Imputer(missing_values='NaN', strategy='mean', axis=0) imr = imr.fit(df[features]) imputed_data = imr.transform(df[features]); """ Explanation: Imputing missing values Often, the removal of samples or dropping of entire feature columns is simply not feasible, because we might lost too much valuable data. In this case, we can use different interpolation techniques to estimate the missing values from the othere training samples in our dataset. One of the most common interpolation technique is mean interpolation, where we simply replace the missing value by the mean value of the entire feature column. A convenient way to achieve this is using the Imputer class from the scikit-learn as shown in the following code. End of explanation """ #df = df.drop(["communityname", "state", "county", "community"], axis=1) X, y = imputed_data, df['ViolentCrimesPerPop'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0); """ Explanation: Sklearn fundamentals A convenient way to randomly partition the dataset into a separate test & training dataset is to use the train_test_split function from scikit-learn's cross_validation submodule. 
As of now, the target variable is 'ViolentCrimesPerPop'.
End of explanation
"""

from itertools import combinations
from sklearn.metrics import r2_score

class SBS():
    def __init__(self, estimator, features, scoring=r2_score,
                 test_size=0.25, random_state=1):
        self.scoring = scoring
        self.estimator = estimator
        self.features = features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = self.test_size,
                                                            random_state = self.random_state)
        dim = X_train.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]
        score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
        self.scores_ = [score]
        while dim > self.features:
            scores = []
            subsets = []
            for p in combinations(self.indices_, r=dim-1):
                score = self._calc_score(X_train, y_train, X_test, y_test, p)
                scores.append(score)
                subsets.append(p)
            # pick the best-scoring subset of this round: argmax over the list
            # of candidate scores, not over the last single score
            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.subsets_.append(self.indices_)
            dim -= 1
            self.scores_.append(scores[best])
        self.k_score_ = self.scores_[-1]
        return self

    def transform(self, X):
        return X[:, self.indices_]

    def _calc_score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        score = self.scoring(y_test, y_pred)
        return score

"""
Explanation: First, we assigned the NumPy array representation of the feature columns to the variable X, and we assigned the predicted variable to the variable y. Then we used the train_test_split function to randomly split X and y into separate training & test datasets. By setting test_size=0.3 we assigned 30 percent of samples to X_test and the remaining 70 percent to X_train.
Sequential Feature Selection algorithm: Sequential Backward Selection (SBS)
Sequential feature selection algorithms are a family of greedy search algorithms that can reduce an initial d-dimensional feature space into a k-dimensional feature subspace where k < d.
The idea is to select the most relevant subset of features to improve computational efficieny and reduce generalization error End of explanation """ clf = LinearRegression() #This can also be replaced by Lasso() or Ridge() sbs = SBS(clf, features=1) sbs.fit(X_train, y_train) k_feat = [len(k) for k in sbs.subsets_] plt.plot(k_feat, sbs.scores_, marker='o') plt.ylim([-2, 3]) plt.ylabel('Accuracy') plt.xlabel('Number of Features') plt.grid() plt.show() """ Explanation: In the preceding implementation, we define k_features to be the final number of features when the algorithm should stop. Inside the while loop of the fit method, the feature subsets created by the itertools.combination function are evaluated and reduced until the feature subset has the desired dimensionality. In each iteration, the accuracy score of the best subset is collected in a list self.scores_ based on the internally created test dataset X_test. We will use thoese scores to evaluate the results. Let's see our implementation using the LinearRegression End of explanation """ k100 = sbs.subsets_[23] features100 = [features[x] for x in k100] print features100 plt.figure(figsize=(20, 6)) classifiers = [LinearRegression(), Lasso(), Ridge()] names = ["Linear Regression", "Lasso", "Ridge"] for i, classifier, name in zip(range(len(classifiers)), classifiers, names): ax = plt.subplot(1, len(classifiers), i+1) ax.set_ylim([-1, 1]) plt.setp(ax.get_yticklabels(), visible=True) plt.setp(ax.get_xticklabels(), visible=True) sbs = SBS(classifier, features=1) sbs.fit(X_train, y_train) k_feat = [len(k) for k in sbs.subsets_] plt.plot(k_feat, sbs.scores_, marker='o', label="Accuracy") plt.ylabel('Accuracy') plt.xlabel('Number of Features') plt.grid() plt.legend(loc="best") plt.title("For {} min-score = {:.2e} & max-score={:.2e}".format(name, min(sbs.scores_), max(sbs.scores_))) plt.show() """ Explanation: I have used r2_score as a measurement of accuracy. R^2 (coefficient of determination) regression score function. 
Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0. Also, please note that the accuracy scores are reported on the validation set. Let us see which 100 features give us the best score of 0.75 on the validation set End of explanation """
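These properties follow directly from the definition R² = 1 − SS_res/SS_tot. As a quick sanity check (not part of the original notebook, and independent of sklearn), here is a plain-Python version of the score applied to the two cases mentioned above:

```python
def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r2(y, y))            # perfect fit -> 1.0
print(r2(y, [2.5] * 4))    # always predicting the mean -> 0.0
```

A model worse than the constant-mean predictor drives SS_res above SS_tot, which is exactly how the score goes negative.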
tensorflow/docs
site/en/r1/tutorials/eager/custom_layers.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Authors. End of explanation """ import tensorflow.compat.v1 as tf """ Explanation: Custom layers <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: This is an archived TF1 notebook. These are configured to run in TF2's compatibility mode but will run in TF1 as well. To use TF1 in Colab, use the %tensorflow_version 1.x magic. We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution. End of explanation """ # In the tf.keras.layers package, layers are objects. To construct a layer, # simply construct the object. Most layers take as a first argument the number # of output dimensions / channels. 
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
"""
Explanation: Layers: common sets of useful operations
Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers, either from scratch or as the composition of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
End of explanation
"""

# To use a layer, simply call it.
layer(tf.zeros([10, 5]))

# Layers have many useful methods. For example, you can inspect all variables
# in a layer using `layer.variables` and trainable variables using
# `layer.trainable_variables`. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables

# The variables are also accessible through nice accessors
layer.kernel, layer.bias
"""
Explanation: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
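For intuition, a Dense layer with a linear activation is just the affine map y = xW + b. The NumPy sketch below illustrates the shapes involved; it is an illustration of the math only, not the Keras implementation:

```python
import numpy as np

def dense_forward(x, kernel, bias):
    # what Dense computes once its variables exist (linear activation)
    return x.dot(kernel) + bias

x = np.zeros((10, 5))       # batch of 10 inputs with 5 features each
kernel = np.zeros((5, 10))  # created by the layer itself on first use
bias = np.zeros(10)
print(dense_forward(x, kernel, bias).shape)  # (10, 10)
```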
End of explanation """ class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_variable("kernel", shape=[int(input_shape[-1]), self.num_outputs]) def call(self, input): return tf.matmul(input, self.kernel) layer = MyDenseLayer(10) print(layer(tf.zeros([10, 5]))) print(layer.trainable_variables) """ Explanation: Implementing custom layers The best way to implement your own layer is extending the tf.keras.Layer class and implementing: * __init__ , where you can do all input-independent initialization * build, where you know the shapes of the input tensors and can do the rest of the initialization * call, where you do the forward computation Note that you don't have to wait until build is called to create your variables, you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ would mean that shapes required to create the variables will need to be explicitly specified. 
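The deferred-build behaviour described above can be mimicked in a few lines of plain Python. LazyLinear below is a hypothetical toy analogue (zero-filled weights, no TensorFlow) showing why creating variables in build lets the first call discover the input size:

```python
class LazyLinear(object):
    """Toy analogue of a Keras layer: weights are created lazily in build()."""
    def __init__(self, num_outputs):
        self.num_outputs = num_outputs
        self.kernel = None               # shape unknown until we see an input

    def build(self, input_dim):
        # late variable creation, analogous to add_variable inside build()
        self.kernel = [[0.0] * self.num_outputs for _ in range(input_dim)]

    def __call__(self, x):
        if self.kernel is None:          # first call triggers build, like Keras
            self.build(len(x))
        return [sum(xi * row[j] for xi, row in zip(x, self.kernel))
                for j in range(self.num_outputs)]

layer = LazyLinear(3)
print(layer.kernel)        # None: nothing created yet
print(layer([1.0, 2.0]))   # [0.0, 0.0, 0.0]; kernel now exists with shape 2x3
```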
End of explanation """ class ResnetIdentityBlock(tf.keras.Model): def __init__(self, kernel_size, filters): super(ResnetIdentityBlock, self).__init__(name='') filters1, filters2, filters3 = filters self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1)) self.bn2a = tf.keras.layers.BatchNormalization() self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same') self.bn2b = tf.keras.layers.BatchNormalization() self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1)) self.bn2c = tf.keras.layers.BatchNormalization() def call(self, input_tensor, training=False): x = self.conv2a(input_tensor) x = self.bn2a(x, training=training) x = tf.nn.relu(x) x = self.conv2b(x) x = self.bn2b(x, training=training) x = tf.nn.relu(x) x = self.conv2c(x) x = self.bn2c(x, training=training) x += input_tensor return tf.nn.relu(x) block = ResnetIdentityBlock(1, [1, 2, 3]) print(block(tf.zeros([1, 2, 3, 3]))) print([x.name for x in block.trainable_variables]) """ Explanation: Overall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request! Models: composing layers Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut. The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model. 
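Numerically, the identity block boils down to relu(f(x) + x), where f stands for the conv/batch-norm stack. A NumPy sketch of just the shortcut arithmetic (f here is a stand-in function, not the real layers):

```python
import numpy as np

def residual_block(x, f):
    # shortcut connection: add the input back before the final ReLU
    return np.maximum(f(x) + x, 0.0)

x = np.array([1.0, -2.0, 3.0])
print(residual_block(x, lambda t: 2.0 * t))   # relu(3x): prints [3. 0. 9.]
```

If f learns to output zeros, the block reduces to relu(x), which is why identity blocks are easy to optimize.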
End of explanation """ my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(2, 1, padding='same'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(3, (1, 1)), tf.keras.layers.BatchNormalization()]) my_seq(tf.zeros([1, 2, 3, 3])) """ Explanation: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential End of explanation """
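Calling one layer after the other is ordinary function composition; a minimal plain-Python sketch (sequential is a hypothetical helper here, not the Keras API):

```python
from functools import reduce

def sequential(*layers):
    # apply each layer, in order, to the previous layer's output
    return lambda x: reduce(lambda out, layer: layer(out), layers, x)

model = sequential(lambda x: x + 1, lambda x: x * 2)
print(model(3))   # (3 + 1) * 2 -> 8
```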
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/03_01/Final/Creating Series.ipynb
bsd-3-clause
import pandas as pd import numpy as np """ Explanation: Series Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). The axis labels are collectively referred to as the index. documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html End of explanation """ my_simple_series = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) my_simple_series my_simple_series.index """ Explanation: Create series from NumPy array number of labels in 'index' must be the same as the number of elements in array End of explanation """ my_simple_series = pd.Series(np.random.randn(5)) my_simple_series """ Explanation: Create series from NumPy array, without explicit index End of explanation """ my_simple_series[:3] """ Explanation: Access a series like a NumPy array End of explanation """ my_dictionary = {'a' : 45., 'b' : -19.5, 'c' : 4444} my_second_series = pd.Series(my_dictionary) my_second_series """ Explanation: Create series from Python dictionary End of explanation """ my_second_series['b'] """ Explanation: Access a series like a dictionary End of explanation """ pd.Series(my_dictionary, index=['b', 'c', 'd', 'a']) my_second_series.get('a') unknown = my_second_series.get('f') type(unknown) """ Explanation: note order in display; same as order in "index" note NaN End of explanation """ pd.Series(5., index=['a', 'b', 'c', 'd', 'e']) """ Explanation: Create series from scalar If data is a scalar value, an index must be provided. The value will be repeated to match the length of index End of explanation """
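A related behaviour worth noting: arithmetic between two Series aligns values by index label, introducing NaN where a label exists on only one side. The dict-based sketch below imitates that alignment without pandas (aligned_add is a hypothetical helper written for illustration, not a pandas API):

```python
import math

def aligned_add(s1, s2):
    # union of the two indexes; NaN where a label is missing on either side
    return {k: s1[k] + s2[k] if k in s1 and k in s2 else float('nan')
            for k in sorted(set(s1) | set(s2))}

a = {'a': 1.0, 'b': 2.0}
b = {'b': 10.0, 'c': 20.0}
result = aligned_add(a, b)
print(result['b'])               # 12.0
print(math.isnan(result['a']))   # True: 'a' is missing from the second series
```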
SciTools/courses
course_content/iris_course/8.Final_Exercise.ipynb
gpl-3.0
import matplotlib.pyplot as plt import iris import iris.plot as iplt """ Explanation: Iris introduction course 8. Final Exercise This exercise draws on various aspects of Iris's functionality that were introduced in the course. Once you have attempted the exercise, you can check your answers with the provided sample solutions. Setup End of explanation """ # EDIT for user code ... # SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ... # %load solutions/iris_exercise_8.1 """ Explanation: Exercise: Produce a set of plots that provide a comparison of decadal-mean air temperatures over North America. Part 1 Load 'A1B_north_america.nc' from the Iris sample data. End of explanation """ # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_8.2 """ Explanation: Part 2 Extract just data from the year 1980 and beyond from the loaded data. End of explanation """ # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_8.3 """ Explanation: Part 3 Define a function that takes a coordinate and a single time point as arguments, and returns the decade. For example, your function should return 2010 for the following: python time = iris.coords.DimCoord([10], 'time', units='days since 2018-01-01') print(your_decade_function(time, time.points[0])) End of explanation """ # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_8.4 """ Explanation: Part 4 Add a "decade" coordinate to the loaded cube using your function and the coord categorisation module. (Hint: Look at the documentation for <a href='https://scitools-iris.readthedocs.io/en/stable/generated/api/iris/coord_categorisation.html#iris.coord_categorisation.add_categorised_coord'>iris.coord_categorisation.add_categorised_coord</a>, which can be used to create a new categorised coordinate from such a function.) End of explanation """ # user code ... 
# SAMPLE SOLUTION # %load solutions/iris_exercise_8.5 """ Explanation: Part 5 Calculate the decadal means cube for this scenario. End of explanation """ # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_8.6 """ Explanation: Part 6 Create a figure with 3 rows and 4 columns displaying the decadal means, with the decade displayed prominently in each axes' title. End of explanation """
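As a cross-check after attempting Part 3: the decade computation itself reduces to integer maths on the calendar year. The sketch below covers only that arithmetic half; in the real categorisation function you would first convert the time point to a calendar year via the coordinate's units (e.g. coord.units.num2date(point).year):

```python
def year_to_decade(year):
    # 1987 -> 1980, 2018 -> 2010
    return (year // 10) * 10

# inside the categorisation function, roughly:
#   year = coord.units.num2date(point).year
#   return year_to_decade(year)
print(year_to_decade(1987))  # 1980
```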
liebannam/pipes
examples/Pipe_tutorial.ipynb
gpl-3.0
from IPython.display import Image
Image("/Users/anna/Desktop/export_inp.png", width=720, height=450)
"""
Explanation: <p> Introduces the **PyNetwork** class used to set up and run simulations. <p>
<p> An instance of the **PyNetwork** class is a conceptual representation of a water distribution network. To create an instance of this class, you will use two files to specify parameters:</p>
<ol>
<li>the *.inp* file--specifies network connectivity; can be generated by EPANET</li>
<li>the *.config* file--specifies runtime parameters like the number of grid cells, boundary values, simulation time, and number of time steps</li>
</ol>
Below you will see how to start with an EPANET-generated .inp file, use a script to generate compatible .inp and .config files, and examine the network you've created. Then you'll learn how to specify your own network and runtime parameters, create new .inp and .config files, and run a simulation.
<p> You need to specify network connectivity externally from a .inp file (I suggest generating it with EPANET; if the formatting is different from that of EPANET, the file may not be read properly). All other parameters--pipe length, diameter, Manning coefficient, elevation, etc.--can be specified from the Python notebook and used to write new files to run simulations.<p>
<p> In general, before you begin: <p>
<ol>
<li> Export the network *mynetwork.inp* from EPANET. **Create this file with flow units of "GPM", with pipe lengths specified in feet, diameters in inches, and the roughness coefficient for Hazen-Williams.** Step 2 converts the units to metric.</li>
<li> Use *cleanupinpfiles.py* to generate new files *mynetwork2.0.inp* and *mynetwork2.0.config* with the correct units and labeling scheme for pipes and junctions. You can do this by running the command </li>
</ol>
<p> python cleanupinpfiles.py mynetwork.inp <p>
in a terminal in the folder where you created mynetwork.inp.
<p> **Double check** that lengths, diameters, elevations, etc. in mynetwork2.0.inp have the correct units (should be meters for lengths, and a Manning coefficient for roughness).<p>
<p>Now follow the steps below to change parameters and set up your own simulation.</p>
End of explanation
"""

import sys
sys.path.append("..")
from allthethings import PyNetwork, PyPipe_ps
from allthethings import PyBC_opt_dh
import numpy as np
import matplotlib.pyplot as plt
from writeit import *
%pylab inline
"""
Explanation: The cell below loads modules you will need for creating networks, solving, plotting, etc.
End of explanation
"""

fi = "../indata/mynetwork2.0.inp"
fc = "../indata/mynetwork2.0.config"
mtype =1 #this specifies Preissman slot model
n0 = PyNetwork(fi,fc,mtype) # an instance of the network class
"""
Explanation: Now we're going to create a network based on the files we just generated using cleanupinpfiles.py
End of explanation
"""

#"print" shows the memory address and the number of nodes and edges
print n0
#showLayout() shows the connectivity--how the nodes and edges are connected
n0.showLayout()
#you can also plot the network layout
(xs,ys,conns,ls) = getBasicConnectivity(fi)
Np = shape(conns)[0]
plotNetworkLayout(xs,ys,conns,ls,Np)
"""
Explanation: Now let's look at what we've created. A network object consists of "edges" (pipes), "nodes" (junctions), runtime info, and various methods.
End of explanation
"""

#print network parameters--these cannot be changed once the network is instantiated
print "pipe L D Mr #grid cells"
for k in range(n0.Nedges):
    print "%d %.2f %.2f %.3f %d"%(k, n0.Ls[k], n0.Ds[k],n0.Mrs[k], n0.Ns[k])
#print pipe initial conditions--these can be modified using setIC
#n0.showCurrentData()
"""
Explanation: The edges have the following attributes:
<ul>
<li>length L (m)</li>
<li>diameter D (m)</li>
<li>roughness Mr (Manning coefficient)</li>
<li>number of grid cells for the finite volume solver</li>
<li>initial water height, h0</li>
<li>initial discharge, Q0</li>
</ul>
This information is stored in vectors in the network class instance. For example, n0.Ls is the vector [L0,L1,...] where L0 is the length of pipe 0, L1 is the length of pipe 1, etc. Let's take a look at these in our network:
End of explanation
"""

#look at the specification of boundary value types for junctions--these cannot be changed once the network is instantiated
n0.showExternalBoundaries()
"""
Explanation: Now look at junction information
End of explanation
"""

print "Simulation run time is %.2f s, number of time steps is %d, and pressure wavespeed is %.2f m/s"%(n0.T, n0.M, n0.a[0])
"""
Explanation: Now let's check out the runtime parameters--these cannot be changed once the network is instantiated
End of explanation
"""

#PyPipe_ps?
x = np.linspace(0,n0.Ls[0],n0.Ns[0])# make a vector with as many entries as grid cells in pipe 0
Af =(n0.Ds[0]**2)*pi/4 #full cross sectional area in pipe 0
A0 = []#we're going to store a list of these initial conditions to compare with final results later
Q0 = []
p0 = PyPipe_ps(n0.Ns[0], n0.Ds[0], n0.Ls[0], n0.M, n0.a[0])
print (n0.Ds[0]**2)*pi/4.
Af =(n0.Ds[k]**2)*pi/4 print p0.AofH(3,False) #assume pipes are 3/4 full with moving water # reflective boundary is like we've closed valves at all exit points at time t=0 A0.append(.75*Af*np.ones(n0.Ns[0])) Q0.append(0.007*np.ones(n0.Ns[0])) plot(x,A0[0],label='pipe 0') n0.setIC(0,A0[0],Q0[0]) #set the initial conditions in pipe 0 #now make other pipes 'empty' (very small cross sectional area, 0 discharge) for k in range(1,3): Af =(n0.Ds[k]**2)*pi/4 x = np.linspace(0,n0.Ls[k],n0.Ns[k]) A0.append(.75*Af*np.ones(n0.Ns[k])) Q0.append(0.007*np.ones(n0.Ns[k])) plot(x,A0[k],label='pipe %d'%k) n0.setIC(k,A0[k],Q0[k]) ylabel('A(x,t=0) in pipe k') legend() #and we'll plot these initial cross-sectional areas #now run a simulation, keeping track of solve time and the initial and final volume of water in the network #the simulation is run up to time T using the method runForwardProblem(dt) import time V0 = n0.getTotalVolume() dt = n0.T/float(n0.M) dx = n0.Ls/[float(nn) for nn in n0.Ns] t0 = time.clock() n0.runForwardProblem(dt) tf = time.clock() Vf = n0.getTotalVolume() print "Solve time is %.5f s"%(tf-t0) print "Simulated time is %.5f s"%n0.T print "change in volume is %e m^3"%(Vf-V0) """ Explanation: Now we're going to modify initial conditions and run a simulation End of explanation """ #The PyNetwork function q(k) returns a list [A,Q] of the current data in pipe k #Below we use this to retrieve and plot the final cross sectional area # along length of each pipe and compare with our initial conditions fig, ax1= plt.subplots(figsize=(15,10), nrows = n0.Nedges) fig, ax2= plt.subplots(figsize=(15,10), nrows = n0.Nedges) for k in range(n0.Nedges): N =n0.Ns[k] x = linspace(0,n0.Ls[k], N) q = n0.q(k)#this is the current data (A,Q) A = q[0:N]#first N entries are values of A in each cell) Q = q[N:]#next N entries are values of Q in each cell ax1[k].plot(x,A0[k],'g',label='pipe %d initial'%k)#we stored this above ax1[k].plot(x,A,'k',label='pipe %d final'%k) ax1[k].legend() 
ax1[k].set_ylabel('A(x)') ax1[k].set_xlabel('x') ax2[k].plot(x,Q0[k],'g',label='pipe %d initial'%k)#we stored this above ax2[k].plot(x,Q,'k',label='pipe %d final'%k) ax2[k].set_ylabel('Q(x)') ax2[k].set_xlabel('x') ax2[k].legend() """ Explanation: <p>**IMPORTANT:** once you instantiate a network, you can only run the method *runForwardProblem(dt)* **ONCE** because only enough memory is allocated to store a single run of M time steps. If you want to re-run a simulation, you need to either </p> <ol> <li> re-instantiate the network using n0 = PyNetwork(fi,fc,mtype) to create a new network and start at time T=0, OR</li> <li> first run n0.reset(), which will allow you to run the simulation another T seconds starting with the last state at t=T see how to use this in *Intro_simulation_Alameda.ipynb*</li> <p> For now we're just going to look at the data from this run using several attributes of the PyNetwork class.</p> End of explanation """ #first look at a time series of pressure data near the end of each pipe t = linspace(0,n0.T, n0.M+1) for k in range(n0.Nedges): K = n0.Ns[k]-2 #index of cell we want to look at Ht = n0.pressureTimeSeries(k,K)#this function returns H in cell K of pipe k, at each time step plot(t,Ht, label = 'pipe %d'%k) plot(t,n0.Ds[k]*np.ones(len(t)),'k:') legend(loc = 'upper left') xlabel('t (s)') ylabel('H (m)') print 'dashed line denotes pipe crown' #now look at pressure head H as a function of x in pipe 2 at time step m, for m = 0,100, ...M # import a nice colormap to do this from matplotlib import cm import matplotlib.colors as colors cNorm = colors.Normalize(vmin=0, vmax=n0.M) sMap = cm.ScalarMappable(norm=cNorm, cmap=cm.get_cmap('Blues') ) figure(figsize(10,6)) for m in range(0,n0.M,100): Hx = n0.pressureSpaceSeries(2,m)#this returns H as a function of x in pipe 2 at time step m x = np.linspace(0,n0.Ls[2],n0.Ns[2]) plot(x,Hx, lw = 2, color = sMap.to_rgba(m), label = 't=%.2f'%(dt*m)) xlabel('x (m)') ylabel('H (m) in pipe 2') xlim(0,n0.Ls[2]) 
legend(loc = 'upper left') """ Explanation: <p>The vector of the entire history of (A,Q) at each time step is stored and can be accessed using the PyNetwork function q_hist(i)</p> <p>However, this requires some slightly complicated indexing (see water_hammer.ipynb for an example)</p> <p>Below we use two shortcut functions to look at pressure head $H = \bar{p}/(\rho g)$ as a function of space and time in each pipe.</p> End of explanation """ Np = n0.Nedges #number of pipes T = 20 #run time in seconds a = 150 #pressure wave speed in m/s (this sets the Preissman slot width) #pipe stuff Ls = [100,50,25] #try new pipe lengths Ns = [int(l) for l in Ls] # set dx = 1 meter by assigning 1 grid cell per meter of length Ds = [0.1,0.05,0.05] #try new pipe diameters Mrs = n0.Mrs #keep Manning coeff the same h0s = [5,5,5] #constant initial value of water height (if you want nonconstant, specify later with setIC) q0s = [0.007, 0.007,0.007] #constant initial discharge Q (if you want nonconstant, specify later with setIC) #runtime stuff dx = [Ls[i]/Ns[i] for i in range(Np)] M = int(T*a/(max(dx)*.8))#set time steps based on dx to assure CFL condition (may need to adjust up if slopes are steep) #junction stuff elevs = [0,0,0,0] #make it flat this time jt = n0.nodeTypes #junction type (can also write this out as a list--in this case, [1,3,1,1] # note that below, r, bt, and bval only matter for the nodes with one incoming pipe--nodes 0,1 and 2 r = [0,0,1,0] #reflection: we're doing specified bounary at nodes 0 and 3, reflection at node 2 bt = [1,0,0,2] #boundary type (only matters if r=0). Here we're specifying Q at node 0 and orifice outflow at node 2 bv = [0.007,0,0,n0.Ds[2]/2] #node 0: Q = 0.007; node 3: orifice opening height =D/2 #specify .inp file with network connectivity oldinp = "../indata/mynetwork2.0.inp" #new prefix for .inp and .config files that will be used for actual runtime parameters fn = "../indata/myfile_new" #and write the files! 
(fi2, fc2) = rewritePipes(fn,oldinp, Ns, Ls, Mrs, Ds, jt, bt, bv, r, h0s, q0s, T, M, a, elevs) print "name of new files is %s and %s "%(fi2, fc2) # now make a new network n1 = PyNetwork(fi2,fc2,mtype) n1.showExternalBoundaries() T = n1.T M = n1.M dt = T/float(M) V0 = n1.getTotalVolume() t0 = time.clock() n1.runForwardProblem(dt) tf = time.clock() Vf = n1.getTotalVolume() print "Solve time is %.5f s"%(tf-t0) print "Simulated time is %.5f s"%n0.T print "change in volume is %e m^3"%(Vf-V0) print "note we don't expect volume to be constant since we have orifice and outflow boundaries" t = linspace(0,n1.T, n1.M+1) for k in range(0,n1.Nedges): K = n1.Ns[k]/2 #index of cell we want to look at (midway in each pipe) Ht = n1.pressureTimeSeries(k,K)#this function returns H in cell K of pipe k, at each time step plot(t,Ht, label = 'pipe %d'%k) plot(t,n1.Ds[k]*np.ones(len(t)),'k:') legend(loc = 'lower right') xlabel('t (s)') ylabel('H (m)') print 'dashed line denotes pipe crown' """ Explanation: Ok, we've now seen how to instantiate a network class, look at some attributes, run a simulation, and extract the data. However, it is possible (likely!) that you will want to change runtime parameters like the simulation time, the boundary condition type, the boundary values, and more. Below we will use writeit.py to create new .inp and .config files with desired parameters. Then we can instantiate another network and run it! End of explanation """
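The number of time steps chosen earlier (M = int(T*a/(max(dx)*.8))) targets a Courant number of about 0.8, so the CFL condition a*dt/dx <= 1 holds. As a quick plain-Python check (not part of the original notebook) with the values used in this notebook, T = 20 s, a = 150 m/s, and dx = 1 m:

```python
T, a, dx = 20.0, 150.0, 1.0    # run time (s), pressure wavespeed (m/s), grid size (m)
M = int(T * a / (dx * 0.8))    # same formula as in the parameter cell above
dt = T / M
courant = a * dt / dx          # the CFL number; must stay <= 1 for stability
print(M, courant)              # courant comes out close to 0.8
```

If slopes are steep the comment in the notebook applies: M may need to be adjusted upward, i.e. dt made smaller than this estimate.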
seg/2016-ml-contest
esaTeam/esa_Submission01a.ipynb
apache-2.0
# Import
from __future__ import division
get_ipython().magic(u'matplotlib inline')
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot, make_multi_facies_log_plot

import pandas as pd
import numpy as np
import seaborn as sns

from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, GradientBoostingClassifier
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from scipy.signal import medfilt

import sys, scipy, sklearn
print('Python:  ' + sys.version.split('\n')[0])
print('         ' + sys.version.split('\n')[1])
print('Pandas:  ' + pd.__version__)
print('Numpy:   ' + np.__version__)
print('Scipy:   ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
print('Xgboost: ' + xgb.__version__)
"""
Explanation: Facies classification using machine learning techniques
The ideas of <a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", <a href="https://github.com/ar4">Alan Richardson's</a> "Try 2", and <a href="https://github.com/dalide">Dalide's</a> "Try 6", augmented by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by:
- adding the gradient of the gradient of features as augmented features,
- adding an ML estimator for PE that uses both training and blind well data,
- removing NM_M from the augmented features.
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on the use of random forest, xgboost or gradient boosting classifiers combined in a one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization.
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
- Testing multiple classifiers.
Script initialization
Let's import the used packages and define some parameters (e.g., colors, labels, etc.).
End of explanation
"""

feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_NM= [1, 2, 3, 4]
facies_M= [5, 6, 7, 8, 9]
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']

#Select classifier type
#clfType='RF'  #Random Forest classifier
clfType='GB'  #Gradient Boosting classifier
#clfType='XBA' #XGB classifier

#Seed
seed = 24
np.random.seed(seed)
"""
Explanation: Parameters
End of explanation
"""

# Load data from file
data = pd.read_csv('../facies_vectors.csv')

# Formation is a category
data['Formation'] = data['Formation'].astype('category')

# Load test data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data.insert(0,'Facies',np.ones(test_data.shape[0])*(-1))

# Create dataset for PE prediction from both datasets
all_data=pd.concat([data,test_data])
all_data['Formation'] = all_data['Formation'].astype('category')
all_data['FM_Cat']=all_data['Formation'].cat.codes

data=all_data.iloc[np.arange(data.shape[0])].copy()
test_data=all_data.iloc[np.arange(data.shape[0],data.shape[0]+test_data.shape[0])].copy()
FM_cat=all_data['Formation'].cat.categories.tolist()
"""
Explanation: Load data
Let's load the data
End of explanation
"""

# Store features and labels
X = data[feature_names].values  # features
y = data['Facies'].values  # labels

# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
"""
Explanation: Let's store features, labels and other data into numpy arrays.
End of explanation """ # Define function for plotting feature statistics def plot_feature_stats(X, y, feature_names, facies_colors, facies_names): # Remove NaN nan_idx = np.any(np.isnan(X), axis=1) X = X[np.logical_not(nan_idx), :] y = y[np.logical_not(nan_idx)] # Merge features and labels into a single DataFrame features = pd.DataFrame(X, columns=feature_names) labels = pd.DataFrame(y, columns=['Facies']) for f_idx, facies in enumerate(facies_names): labels[labels[:] == f_idx] = facies data = pd.concat((labels, features), axis=1) # Plot features statistics facies_color_map = {} for ind, label in enumerate(facies_names): facies_color_map[label] = facies_colors[ind] sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names))) """ Explanation: Data inspection Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that: - Some features seem to be affected by a few outlier measurements. - Only a few wells contain samples from all classes. - PE measurements are available only for some wells. 
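One quick way to verify the last two observations programmatically is to tabulate facies counts per well. The sketch below uses tiny stand-in lists for brevity; the real check would pass the well and y arrays loaded earlier:

```python
from collections import Counter

# stand-ins for the real `well` and `y` arrays
wells  = ['SHRIMPLIN', 'SHRIMPLIN', 'SHANKLE', 'SHANKLE', 'SHANKLE']
labels = [3, 3, 2, 3, 8]

per_well = {w: Counter(f for wi, f in zip(wells, labels) if wi == w)
            for w in set(wells)}
print(per_well['SHRIMPLIN'][3])   # 2
print(per_well['SHANKLE'][5])     # 0: class 5 absent from this well
```

A zero count for any class in a well is what makes well-based cross-validation splits necessary: a naive random split could leak a well's labels across folds.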
End of explanation """ # Facies per well for w_idx, w in enumerate(np.unique(well)): ax = plt.subplot(3, 4, w_idx+1) hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5) plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center') ax.set_xticks(np.arange(len(hist[0]))) ax.set_xticklabels(facies_names) ax.set_title(w) # Features per well for w_idx, w in enumerate(np.unique(well)): ax = plt.subplot(3, 4, w_idx+1) hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0)) plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center') ax.set_xticks(np.arange(len(hist))) ax.set_xticklabels(feature_names) ax.set_yticks([0, 1]) ax.set_yticklabels(['miss', 'hit']) ax.set_title(w) # Store Facies histogram per Formation FM_fac=[] local_wells=['SHRIMPLIN', 'LUKE G U', 'KIMZEY A', 'NOLAN', 'NEWBY', 'ALEXANDER D', 'CHURCHMAN BIBLE', 'Recruit F9'] for fm_idx, fm in enumerate( FM_cat): FM_fac.append(np.histogram(y[( data['Well Name'].isin(local_wells)) & (data['Formation'] == fm)], bins=np.arange(len(facies_names)+1)+0.5)) ax = plt.subplot(3, 5, fm_idx+1) hist = np.histogram(y[( data['Well Name'].isin(local_wells)) & (data['Formation'] == fm)], bins=np.arange(len(facies_names)+1)+.5) plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center') ax.set_xticks(np.arange(len(hist[0]))) ax.set_xticklabels(facies_names) ax.set_title(fm) ax.set_yscale('log') ax.set_ylim(bottom=1) """ Explanation: Feature distribution plot_feature_stats(X, y, feature_names, facies_colors, facies_names) mpl.rcParams.update(inline_rc) End of explanation """ reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed) DataImpAll = all_data[feature_names].copy() DataImp = DataImpAll.dropna(axis = 0, inplace=False) Ximp=DataImp.loc[:, DataImp.columns != 'PE'] Yimp=DataImp.loc[:, 'PE'] reg.fit(Ximp, Yimp) X[np.array(data.PE.isnull()),feature_names.index('PE')] = reg.predict(data.loc[data.PE.isnull(),:][['GR', 
'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']])

"""
Explanation: Feature imputation
Let us fill missing PE values. Currently no feature engineering is used, but this should be explored in the future.
End of explanation
"""

# ## Feature augmentation
# Our guess is that facies do not abruptly change from a given depth layer to the next one.
# Therefore, we consider features at neighboring layers to be somehow correlated.
# To possibly exploit this fact, let us perform feature augmentation by:
# - Selecting the features to augment.
# - Aggregating aug_features at neighboring depths.
# - Computing the aug_features spatial gradient.
# - Computing the aug_features spatial gradient of gradient.

# Feature windows concatenation function
def augment_features_window(X, N_neig, features=-1):

    # Parameters
    N_row = X.shape[0]
    if features == -1:
        N_feat = X.shape[1]
        features = np.arange(0, X.shape[1])
    else:
        N_feat = len(features)

    # Zero padding
    X = np.vstack((np.zeros((N_neig, X.shape[1])), X, (np.zeros((N_neig, X.shape[1])))))

    # Loop over windows
    X_aug = np.zeros((N_row, N_feat*(2*N_neig)+X.shape[1]))
    for r in np.arange(N_row)+N_neig:
        this_row = []
        for c in np.arange(-N_neig, N_neig+1):
            if (c == 0):
                this_row = np.hstack((this_row, X[r+c, :]))
            else:
                this_row = np.hstack((this_row, X[r+c, features]))
        X_aug[r-N_neig] = this_row

    return X_aug

# Feature gradient computation function
def augment_features_gradient(X, depth, features=-1):
    if features == -1:
        features = np.arange(0, X.shape[1])

    # Compute features gradient
    d_diff = np.diff(depth).reshape((-1, 1))
    d_diff[d_diff == 0] = 0.001
    X_diff = np.diff(X[:, features], axis=0)
    X_grad = X_diff / d_diff

    # Compensate for last missing value
    X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))

    return X_grad

# Feature augmentation function
def augment_features(X, well, depth, N_neig=1, features=-1):
    if (features == -1):
        N_Feat = X.shape[1]
    else:
        N_Feat = len(features)

    # Augment features
    X_aug = np.zeros((X.shape[0], X.shape[1] + N_Feat*(N_neig*2+2)))
    for w in
np.unique(well):
        w_idx = np.where(well == w)[0]
        X_aug_win = augment_features_window(X[w_idx, :], N_neig, features)
        X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx], features)
        X_aug_grad_grad = augment_features_gradient(X_aug_grad, depth[w_idx])
        X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad, X_aug_grad_grad), axis=1)

    # Find padded rows
    padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])

    return X_aug, padded_rows

# Define window length
N_neig = 1

# Define which features to augment by introducing window and gradients.
augm_Features = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'RELPOS']

# Get the columns of features to be augmented
feature_indices = [feature_names.index(log) for log in augm_Features]

# Augment features
X_aug, padded_rows = augment_features(X, well, depth, N_neig=N_neig, features=feature_indices)

# Remove padded rows
data_no_pad = np.setdiff1d(np.arange(0, X_aug.shape[0]), padded_rows)
X = X[data_no_pad, :]
depth = depth[data_no_pad]
X_aug = X_aug[data_no_pad, :]
y = y[data_no_pad]
data = data.iloc[data_no_pad, :]
well = well[data_no_pad]

"""
Explanation: Augment features
End of explanation
"""

lpgo = LeavePGroupsOut(2)

# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
    hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
    hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
    if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
        split_list.append({'train': train, 'val': val})

# Print splits
for s, split in enumerate(split_list):
    print('Split %d' % s)
    print('    training:   %s' % (data.iloc[split['train']]['Well Name'].unique()))
    print('    validation: %s' % (data.iloc[split['val']]['Well Name'].unique()))

"""
Explanation: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data.
For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to either the training or the validation set.
- Training and validation sets contain at least one sample for each class.
Initialize model selection methods
End of explanation
"""

if clfType=='XB':
    md_grid = [3]
    mcw_grid = [1]
    gamma_grid = [0.3]
    ss_grid = [0.7]
    csb_grid = [0.8]
    alpha_grid = [0.2]
    lr_grid = [0.05]
    ne_grid = [200]
    param_grid = []
    for N in md_grid:
        for M in mcw_grid:
            for S in gamma_grid:
                for L in ss_grid:
                    for K in csb_grid:
                        for P in alpha_grid:
                            for R in lr_grid:
                                for E in ne_grid:
                                    param_grid.append({'maxdepth':N, 'minchildweight':M, 'gamma':S,
                                                       'subsample':L, 'colsamplebytree':K, 'alpha':P,
                                                       'learningrate':R, 'n_estimators':E})

if clfType=='XBA':
    learning_rate_grid = [0.12]
    max_depth_grid = [5]
    min_child_weight_grid = [8]
    colsample_bytree_grid = [0.7]
    n_estimators_grid = [100]  #[150]
    param_grid = []
    for max_depth in max_depth_grid:
        for min_child_weight in min_child_weight_grid:
            for colsample_bytree in colsample_bytree_grid:
                for learning_rate in learning_rate_grid:
                    for n_estimators in n_estimators_grid:
                        param_grid.append({'maxdepth':max_depth, 'minchildweight':min_child_weight,
                                           'colsamplebytree':colsample_bytree,
                                           'learningrate':learning_rate, 'n_estimators':n_estimators})

if clfType=='RF':
    N_grid = [100]  # [50, 100, 150]
    M_grid = [10]   # [5, 10, 15]
    S_grid = [25]   # [10, 25, 50, 75]
    L_grid = [5]    # [2, 3, 4, 5, 10, 25]
    param_grid = []
    for N in N_grid:
        for M in M_grid:
            for S in S_grid:
                for L in L_grid:
                    param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})

if clfType=='GB':
    N_grid = [100]
    MD_grid = [3]
    M_grid = [10]
    LR_grid = [0.1]
    L_grid = [5]
    S_grid = [25]
    param_grid = []
    for N in N_grid:
        for M in MD_grid:
            for M1 in M_grid:
                for S in LR_grid:
                    for L in L_grid:
                        for S1 in S_grid:
                            param_grid.append({'N':N, 'MD':M, 'MF':M1, 'LR':S, 'L':L, 'S1':S1})

def getClf(clfType, param):
    if clfType=='RF':
        clf = RandomForestClassifier(n_estimators=param['N'], criterion='entropy',
                                     max_features=param['M'],
min_samples_split=param['S'], min_samples_leaf=param['L'], class_weight='balanced', random_state=seed) if clfType=='XB': clf = XGBClassifier( learning_rate = param['learningrate'], n_estimators=param['n_estimators'], max_depth=param['maxdepth'], min_child_weight=param['minchildweight'], colsample_bytree=param['colsamplebytree'], nthread =4, seed = 17 ) if clfType=='XBA': clf = XGBClassifier( learning_rate = param['learningrate'], n_estimators=param['n_estimators'], max_depth=param['maxdepth'], min_child_weight=param['minchildweight'], colsample_bytree=param['colsamplebytree'], nthread =4, seed = seed ) if clfType=='GB': clf=GradientBoostingClassifier( n_estimators=param['N'], learning_rate=param['LR'], max_depth=param['MD'], max_features= param['MF'], min_samples_leaf=param['L'], min_samples_split=param['S1'], random_state=seed, max_leaf_nodes=None) return clf min_appearences_ratio=0.01 def postProc(df,y_pred, y_prob): for idx,((index, row), y_e) in enumerate(zip(df.iterrows(),y_pred)): fh=FM_fac[row['FM_Cat']][0] iOrd=8 while (fh[int(y_e)-1]/np.sum(fh))<min_appearences_ratio: if ((((row['NM_M']==1)& ('LM' in row['Formation'])) & (int(y_e) in facies_NM )) or (((row['NM_M']==2)& ('SH' in row['Formation'])) & (int(y_e) in facies_M ))): break y_e=y_prob[idx].argsort()[iOrd]+1 y_pred[idx]=y_e # if (y_prob[idx][int(y_e)-1]>0.7): # break # y_e=y_prob[idx].argsort()[iOrd]+1 iOrd=iOrd-1 return y_pred # Train and test a classifier def train_and_test(X_tr, y_tr, X_v, well_v, clf): # Feature normalization scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr) X_tr = scaler.transform(X_tr) X_v = scaler.transform(X_v) # Train classifier clf.fit(X_tr, y_tr) # Test classifier y_v_hat = clf.predict(X_v) # Clean isolated facies for each well for w in np.unique(well_v): y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=3) return y_v_hat, clf.predict_proba(X_v) # For each set of parameters score_param = [] print('features: %d' % X_aug.shape[1]) 
exportScores=[] for param in param_grid: #score_param.append(1) #break # For each data split score_split = [] for split in split_list: split_train_no_pad = split['train'] # Select training and validation data from current split X_tr = X_aug[split_train_no_pad, :] X_v = X_aug[split['val'], :] y_tr = y[split_train_no_pad] y_v = y[split['val']] # Select well labels for validation data well_v = well[split['val']] # Train and test y_v_hat,y_prob = train_and_test(X_tr, y_tr, X_aug, well, getClf(clfType,param)) scorePre = f1_score(y[split['val']], y_v_hat[split['val']], average='micro') y_new=postProc(data,y_v_hat,y_prob) scorePost = f1_score(y[split['val']], y_new[split['val']], average='micro') print 'Pre {0:0.3f}, Post {1:0.3f}'.format(scorePre,scorePost) # Score #score = f1_score(y_v, y_v_hat, average='micro') score_split.append(scorePost) #print('Split: {0}, Score = {1:0.3f}'.format(split_list.index(split),score)) # Average score for this param score_param.append(np.mean(score_split)) print('Average F1 score = %.3f %s' % (score_param[-1], param)) exportScores.append('Average F1 score = %.3f %s' % (score_param[-1], param)) # Best set of parameters best_idx = np.argmax(score_param) param_best = param_grid[best_idx] score_best = score_param[best_idx] print('\nBest F1 score = %.3f %s' % (score_best, param_best)) # Store F1 scores for multiple param grids if len(exportScores)>1: exportScoresFile=open('results_{0}_sub01a.txt'.format(clfType),'wb') exportScoresFile.write('features: %d' % X_aug.shape[1]) for item in exportScores: exportScoresFile.write("%s\n" % item) exportScoresFile.write('\nBest F1 score = %.3f %s' % (score_best, param_best)) exportScoresFile.close() # ## Predict labels on test data # Let us now apply the selected classification technique to test data. 
# Training data X_tr = X_aug y_tr = y # Prepare test data well_ts = test_data['Well Name'].values depth_ts = test_data['Depth'].values X_ts = test_data[feature_names].values # Augment Test data features X_ts, padded_rows = augment_features(X_ts, well_ts,depth_ts,N_neig=N_neig, features=feature_indices) # Predict test labels y_ts_hat,y_ts_prob = train_and_test(X_tr, y_tr, X_ts, well_ts, getClf(clfType,param_best)) # Save predicted labels test_data['Facies_Pre'] = y_ts_hat test_data.to_csv('_esa_predicted_facies_{0}_Pre_sub01a.csv'.format(clfType)) test_data['Facies']=postProc(test_data,y_ts_hat,y_ts_prob) test_data.to_csv('esa_predicted_facies_{0}_Post_sub01a.csv'.format(clfType)) # Plot predicted labels make_facies_log_plot( test_data[test_data['Well Name'] == 'STUART'], facies_colors=facies_colors) make_facies_log_plot( test_data[test_data['Well Name'] == 'CRAWFORD'], facies_colors=facies_colors) mpl.rcParams.update(inline_rc) """ Explanation: Classification parameters optimization Let us perform the following steps for each set of parameters: Select a data split. Normalize features using a robust scaler. Train the classifier on training data. Test the trained classifier on validation data. Repeat for all splits and average the F1 scores. At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data. End of explanation """
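The per-well cleanup inside train_and_test above relies on scipy.signal.medfilt with kernel_size=3 to suppress isolated facies predictions. As a self-contained illustration of what that step does (a pure-NumPy sketch with a hypothetical helper name, not the scipy implementation), a kernel-3 median filter removes single-sample label spikes:

```python
import numpy as np

def medfilt3(labels):
    """Kernel-3 median filter over a 1-D label sequence (illustrative sketch).

    Mirrors the spike-removal behaviour of scipy.signal.medfilt(kernel_size=3);
    edges here are handled by replication rather than zero padding, so only the
    interior values are guaranteed to match scipy exactly.
    """
    labels = np.asarray(labels)
    padded = np.concatenate(([labels[0]], labels, [labels[-1]]))
    # Stack the three shifted views so each column is one length-3 window.
    stacked = np.stack([padded[:-2], padded[1:-1], padded[2:]])
    return np.median(stacked, axis=0).astype(labels.dtype)

# An isolated one-sample spike (the single 2) is removed:
print(medfilt3([1, 1, 2, 1, 1]))  # -> [1 1 1 1 1]
```

This is why a lone mis-classified sample sandwiched between two identical facies gets overwritten by its neighbors, while runs of two or more samples survive.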
gte620v/PythonTutorialWithJupyter
exercises/solutions/Ex3-Stock_Data_solutions.ipynb
mit
import pandas as pd
from pandas_datareader import data, wb
import datetime

# We will look at stock prices over the past year, starting at January 1, 2016
start = datetime.datetime(2016,1,1)
end = datetime.date.today()

# Let's get Apple stock data; Apple's ticker symbol is AAPL
# First argument is the ticker symbol, second is the start date, third is the end date
apple = data.get_data_yahoo("AAPL", start, end)
apple.head()

"""
Explanation: Get Stock Data
In this exercise, we will download some stock ticker data and process it with pandas.
Install pandas_datareader with conda.
End of explanation
"""

apple.Low.head()

apple[['Low','High']].head()

apple.query('Date>2017').head()

apple[apple.Close>140].head()

apple.describe()

apple.groupby(pd.TimeGrouper(freq='60D')).mean()

"""
Explanation: Pandas Basics
Take the first N values: df.head(N)
Get data for one column: df['column_name'] or df.column_name.
Get data for multiple columns of data: df[['col1', 'col2']]
Get data by a condition on the index: df.query('Date>2017')
Get data by a condition on the column: df[df.Close>140]
Summarize the data: df.describe()
Aggregate the data by 60 days: df.groupby(pd.TimeGrouper(freq='60D')).mean()
End of explanation
"""

import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 9)  # Change the size of plots

apple[['Adj Close','Low','High']].plot(grid = True)

"""
Explanation: Plot Daily Trend
Capture the daily Low, High, and Close and plot those values.
End of explanation """ microsoft = data.get_data_yahoo("MSFT", start, end) google = data.get_data_yahoo("GOOG", start, end) # Below I create a DataFrame consisting of the adjusted closing price of these stocks, first by making a list of these objects and using the join method stocks = pd.DataFrame({"AAPL": apple["Adj Close"], "MSFT": microsoft["Adj Close"], "GOOG": google["Adj Close"]}) stocks.head() """ Explanation: Add More Stocks Grab the closing values from two more stocks. End of explanation """ # plot here stocks.plot(grid = True,logy=True) """ Explanation: Plot Price of all three stocks End of explanation """ # stocks['AAPL']... ax = stocks['AAPL'].rolling(window=20, center=True).mean().plot(grid=True) ax = stocks['AAPL'].plot(grid=True,ax=ax) plt.legend() """ Explanation: Apply Rolling Window Read http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html and apply a 20-day rolling averaging window to the apple adjusted close. End of explanation """ # apply stock_return = stocks.apply(lambda x: x / x[0]) stock_return.head() stock_return.plot(grid = True) """ Explanation: Profit We'd like to normalize the share price so that we can fit all of the stock trends on one plot. To do that let's normalize by the price on the start date for each stock. Check the http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html documentation to apply this scaling to each column. Plot resulting profit time series. End of explanation """ url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies' sp500 = pd.read_html(url,header=0)[0] output = {} for ticker in sp500['Ticker symbol'][:200]: try: output[ticker] = data.get_data_yahoo(ticker, start, end)['Adj Close'] except: continue df_sp500 = pd.DataFrame(output) df_sp500.head() """ Explanation: S&P 500 So far we manually specified a few ticker symbols. In order to do larger scale analysis, we need to pull data for more tickers. 
To do that, we need to find a data source with the ticker names. Fortunately, Wikipedia has all of the S&P tickers in an HTML table at https://en.wikipedia.org/wiki/List_of_S%26P_500_companies. Use pd.read_html to grab this ticker data and then loop through all tickers and pull down the stock data for each ticker.
End of explanation
"""

stock_return_sp500 = df_sp500.apply(lambda x: x / x[0])
stock_return_sp500.plot(grid = True)
plt.legend([])

"""
Explanation: Profit
Create a new dataframe that calculates the profit for each stock and plot this profit across all stocks.
End of explanation
"""

plt.hist(stock_return_sp500.ix[-1,:],100)
plt.grid(True)
plt.show()

"""
Explanation: Profit Histogram
Plot the histogram of the profit.
End of explanation
"""

stock_return_sp500.ix[-1,stock_return_sp500.ix[-1,:]>2]

"""
Explanation: Use Pandas to find high-performance stocks
Return a list of all stocks that doubled over the time period.
End of explanation
"""
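The .ix indexer used above dates this notebook: it was deprecated and later removed from pandas. On a modern pandas version, the same "which stocks doubled" selection can be written with .iloc instead. A small sketch on a synthetic normalized-return frame (the ticker names here are made up):

```python
import pandas as pd

# Synthetic normalized-return frame; first row is 1.0 by construction,
# so the last row is the total return per ticker.
returns = pd.DataFrame(
    {"AAA": [1.0, 1.5, 2.4], "BBB": [1.0, 0.9, 1.1], "CCC": [1.0, 2.0, 3.2]}
)

# `.ix` is gone in pandas >= 1.0; the equivalent selection with `.iloc`:
final = returns.iloc[-1]      # last row: total return per ticker
doubled = final[final > 2]    # tickers that more than doubled
print(doubled.index.tolist())  # -> ['AAA', 'CCC']
```

The same two lines drop straight into the S&P 500 frame built earlier in place of the .ix expression.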
yl565/statsmodels
examples/notebooks/plots_boxplots.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm """ Explanation: Box Plots The following illustrates some options for the boxplot in statsmodels. These include violin_plot and bean_plot. End of explanation """ data = sm.datasets.anes96.load_pandas() party_ID = np.arange(7) labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat", "Independent-Independent", "Independent-Republican", "Weak Republican", "Strong Republican"] """ Explanation: Bean Plots The following example is taken from the docstring of beanplot. We use the American National Election Survey 1996 dataset, which has Party Identification of respondents as independent variable and (among other data) age as dependent variable. End of explanation """ plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible plt.rcParams['figure.figsize'] = (10.0, 8.0) # make plot larger in notebook age = [data.exog['age'][data.endog == id] for id in party_ID] fig = plt.figure() ax = fig.add_subplot(111) plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small', 'label_rotation':30} sm.graphics.beanplot(age, ax=ax, labels=labels, plot_opts=plot_opts) ax.set_xlabel("Party identification of respondent.") ax.set_ylabel("Age") #plt.show() def beanplot(data, plot_opts={}, jitter=False): """helper function to try out different plot options """ fig = plt.figure() ax = fig.add_subplot(111) plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small', 'label_rotation':30} plot_opts_.update(plot_opts) sm.graphics.beanplot(data, ax=ax, labels=labels, jitter=jitter, plot_opts=plot_opts_) ax.set_xlabel("Party identification of respondent.") ax.set_ylabel("Age") fig = beanplot(age, jitter=True) fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'}) fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'}) fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'}) fig = beanplot(age, 
jitter=True, plot_opts={'violin_fc':'#66c2a5'}) fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'}) """ Explanation: Group age by party ID, and create a violin plot with it: End of explanation """ from __future__ import print_function import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm # Necessary to make horizontal axis labels fit plt.rcParams['figure.subplot.bottom'] = 0.23 data = sm.datasets.anes96.load_pandas() party_ID = np.arange(7) labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat", "Independent-Independent", "Independent-Republican", "Weak Republican", "Strong Republican"] # Group age by party ID. age = [data.exog['age'][data.endog == id] for id in party_ID] # Create a violin plot. fig = plt.figure() ax = fig.add_subplot(111) sm.graphics.violinplot(age, ax=ax, labels=labels, plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small', 'label_rotation':30}) ax.set_xlabel("Party identification of respondent.") ax.set_ylabel("Age") ax.set_title("US national election '96 - Age & Party Identification") # Create a bean plot. fig2 = plt.figure() ax = fig2.add_subplot(111) sm.graphics.beanplot(age, ax=ax, labels=labels, plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small', 'label_rotation':30}) ax.set_xlabel("Party identification of respondent.") ax.set_ylabel("Age") ax.set_title("US national election '96 - Age & Party Identification") # Create a jitter plot. 
fig3 = plt.figure()
ax = fig3.add_subplot(111)

plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
           'label_fontsize':'small',
           'label_rotation':30,
           'violin_fc':(0.8, 0.8, 0.8),
           'jitter_marker':'.',
           'jitter_marker_size':3,
           'bean_color':'#FF6F00',
           'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
                     plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16  # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]

ix = data.exog['income'] >= 20  # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]

fig = plt.figure()
ax = fig.add_subplot(111)

plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
                     jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = 'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
                     jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")

# Show all plots.
#plt.show()

"""
Explanation: Advanced Box Plots
Based on the example script example_enhanced_boxplots.py (by Ralf Gommers).
End of explanation
"""
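Every plot in this notebook feeds the same data shape into violinplot and beanplot: a list with one array of values per category, built with the [x[endog == id] for id in ...] idiom. A minimal stand-alone sketch of that grouping step (synthetic data, not the ANES survey):

```python
import numpy as np

# Hypothetical stand-ins for the ANES arrays: one categorical code per
# respondent (endog) and one measurement per respondent (an exog column).
endog = np.array([0, 1, 0, 2, 1, 0])
age = np.array([25, 40, 31, 58, 47, 36])

# One array of ages per category code, in code order -- exactly the shape
# that sm.graphics.violinplot / beanplot expect as their first argument.
groups = [age[endog == code] for code in np.unique(endog)]
print([g.tolist() for g in groups])  # -> [[25, 31, 36], [40, 47], [58]]
```

With real data, np.unique(endog) would be replaced by the known category codes (party_ID above) so that empty categories are still represented.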
LocalGroupAstrostatistics2015/MCMC
MCMC tutorial (worksheet).ipynb
mit
name = "YOUR NAME HERE" print("Hello {0}!".format(name)) """ Explanation: Practical MCMC in Python by Dan Foreman-Mackey A worksheet for the Local Group Astrostatistics workshop at the University of Michigan, June 2015. Introduction In this notebook, we'll implement a Markov Chain Monte Carlo (MCMC) algorithm and demonstrate its use on two realistic simulated datasets. First, we'll fit a line to a set of data points with Gaussian uncertainties in one dimension. This problem should never be done using MCMC in practice—the solution is analytic!—but it is useful as a functional test of the code and as a demonstration of the concepts. Next, we'll fit a power law model to a set of entries in a catalog assuming a Poisson likelihood function. This problem is very relevant to this meeting for a few reasons but we'll come back to that later. This worksheet is written in Python and it lives in an IPython notebook. In this context, you'll be asked to write a few lines of code to implement the sampler and the models but much of the boilerplate code is already in place. Therefore, even if you're not familiar with Python, you should be able to get something out of the notebook. I don't expect that everyone will finish the full notebook but that's fine because it has been designed to get more difficult as we progress. How to use the notebook If you're familiar with IPython notebooks, you can probably skip this section without missing anything. IPython notebooks work by running a fully functional Python sever behind the scenes and if you're reading this then you probably already figured out how to get that running. Then, inside the notebook, the content is divided into cells containing code or text. You'll be asked to edit a few of the cells below to add your own code. To do this, click on the cell to start editing and then type as you normally would. To execute the code contained in the cell, press Shift-Enter. 
Even for existing cells that you don't need to edit, you should select them and type Shift-Enter when you get there because the cells below generally depend on the previous cells being executed first. To get started, edit the cell below to assign your name (or whatever you want) to the variable name and then press Shift-Enter to exectue the cell. End of explanation """ %matplotlib inline from matplotlib import rcParams rcParams["savefig.dpi"] = 100 # This makes all the plots a little bigger. import numpy as np import matplotlib.pyplot as plt """ Explanation: If this works, the output should greet you without throwing any errors. If so, that's pretty much all we need so let's get started with some MCMC! Dataset 1: Fitting a line to data Today, we're going to implement the simplest possible MCMC algorithm but before we do that, we'll need some data to test our method with. Load the data I've generated a simulated dataset generated from a linear model with no uncertainties in the $x$ dimension and known Gaussian uncertainties in the $y$ dimension. These data are saved in the CSV file linear.csv included with this notebook. First we'll need numpy and matplotlib so let's import them: End of explanation """ # Load the data from the CSV file. x, y, yerr = np.loadtxt("linear.csv", delimiter=",", unpack=True) # Plot the data with error bars. plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) plt.xlim(0, 5); """ Explanation: Now we'll load the datapoints and plot them. When you execute the following cell, you should see a plot of the data. If not, make sure that you run the import cell from above first. End of explanation """ A = np.vander(x, 2) # Take a look at the documentation to see what this function does! 
ATA = np.dot(A.T, A / yerr[:, None]**2) w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2)) V = np.linalg.inv(ATA) """ Explanation: As I mentioned previously, it is pretty silly to use MCMC to solve this problem because the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. Therefore, let's compute what the right answer should be before we even start. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean $$\mathbf{w} = \left(\begin{array}{c} m \ b \end{array}\right) = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1} \, \mathbf{A}^\mathrm{T}\,C^{-1}\,\mathbf{y}$$ and covariance matrix $$\mathbf{V} = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1}$$ where $$\mathbf{y} = \left(\begin{array}{c} y_1 \ y_2 \ \vdots \ y_N \end{array}\right) \quad , \quad \mathbf{A} = \left(\begin{array}{cc} x_1 & 1 \ x_2 & 1 \ \vdots & \vdots \ x_N & 1 \end{array}\right) \quad ,\, \mathrm{and} \quad \mathbf{C} = \left(\begin{array}{cccc} \sigma_1^2 & 0 & \cdots & 0 \ 0 & \sigma_2^2 & \cdots & 0 \ &&\ddots& \ 0 & 0 & \cdots & \sigma_N^2 \end{array}\right)$$ There are various functions in Python for computing this but I prefer to do it myself (it only takes a few lines of code!) and here it is: End of explanation """ plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) for m, b in np.random.multivariate_normal(w, V, size=50): plt.plot(x, m*x + b, "g", alpha=0.1) plt.xlim(0, 5); """ Explanation: We'll save these results for later to compare them to the result computed using MCMC but for now, it's nice to take a look and see what this prediction looks like. To do this, we'll sample 24 slopes and intercepts from this 2d Gaussian and overplot them on the data. 
End of explanation """ def lnlike_linear((m, b)): raise NotImplementedError("Delete this placeholder and implement this function") """ Explanation: This plot is a visualization of our posterior expectations for the true underlying line that generated these data. We'll reuse this plot a few times later to test the results of our code. The probabilistic model In order use MCMC to perform posterior inference on a model and dataset, we need a function that computes the value of the posterior probability given a proposed setting of the parameters of the model. For reasons that will become clear below, we actually only need to return a value that is proportional to the probability. As discussed in a previous tutorial, the posterior probability for parameters $\mathbf{w} = (m,\,b)$ conditioned on a dataset $\mathbf{y}$ is given by $$p(\mathbf{w} \,|\, \mathbf{y}) = \frac{p(\mathbf{y} \,|\, \mathbf{w}) \, p(\mathbf{w})}{p(\mathbf{y})}$$ where $p(\mathbf{y} \,|\, \mathbf{w})$ is the likelihood and $p(\mathbf{w})$ is the prior. For this example, we're modeling the likelihood by assuming that the datapoints are independent with known Gaussian uncertainties $\sigma_n$. This specifies a likelihood function: $$p(\mathbf{y} \,|\, \mathbf{w}) = \prod_{n=1}^N \frac{1}{\sqrt{2\,\pi\,\sigma_n^2}} \, \exp \left(-\frac{[y_n - f_\mathbf{w}(x_n)]^2}{2\,\sigma_n^2}\right)$$ where $f_\mathbf{w}(x) = m\,x + b$ is the linear model. For numerical reasons, we will acutally want to compute the logarithm of the likelihood. In this case, this becomes: $$\ln p(\mathbf{y} \,|\, \mathbf{w}) = -\frac{1}{2}\sum_{n=1}^N \frac{[y_n - f_\mathbf{w}(x_n)]^2}{\sigma_n^2} + \mathrm{constant} \quad.$$ In the following cell, replace the contents of the lnlike_linear function to implement this model. The function takes two values (m and b) as input and it should return the log likelihood (a single number) up to a constant. In this function, you can just use the globaly defined dataset x, y and yerr. 
For performance, I recommend using vectorized numpy operations (the key function will be np.sum). End of explanation """ p_1, p_2 = (0.0, 0.0), (0.01, 0.01) ll_1, ll_2 = lnlike_linear(p_1), lnlike_linear(p_2) if not np.allclose(ll_2 - ll_1, 535.8707738280209): raise ValueError("It looks like your implementation is wrong!") print("☺︎") """ Explanation: After you're satisfied with your implementation, run the following cell. In this cell, we're checking to see if your code is right. If it is, you'll see a smiling face (☺︎) but if not, you'll get an error message. End of explanation """ def lnprior_linear((m, b)): if not (-10 < m < 10): return -np.inf if not (-10 < b < 10): return -np.inf return 0.0 def lnpost_linear(theta): return lnprior_linear(theta) + lnlike_linear(theta) """ Explanation: If you don't get the ☺︎, go back and try to debug your model. Iterate until your result is correct. Once you get that, we'll use this to implement the full model (Remember: we haven't added in the prior yet). For the purposes of this demonstration, we'll assume broad uniform priors on both $m$ and $b$. This isn't generally a good idea... instead, you should normally use a prior that actually represents your prior beliefs. But this a discussion for another day. I've chosen to set the bounds on each parameter to be (-10, 10) but you should feel free to change these numbers. Since this is the log-prior, we'll return -np.inf from lnprior_linear when the parameter is outside of the allowed range. And then, since we only need to compute the probability up to a constant, we will return 0.0 (an arbitrary constant) when the parameters are valid. Finally, the function lnpost_linear sums the output of lnprior_linear and lnlike_linear to compute the log-posterior probability up to a constant. 
End of explanation
"""

def metropolis_step(lnpost_function, theta_t, lnpost_t, step_cov):
    raise NotImplementedError("Delete this placeholder and implement this function")

"""
Explanation: Metropolis(–Hastings) MCMC The simplest MCMC algorithm is generally referred to as the Metropolis method. All MCMC algorithms work by specifying a "step" that moves from one position in parameter space to another with some probability. The Metropolis step takes a position $\theta_t$ (a vector containing the slope and intercept at step $t$) to the position $\theta_{t+1}$ using the following steps: propose a new position $\mathbf{q}$ drawn from a Gaussian centered on the current position $\theta_t$ compute the probability of the new position $p(\mathbf{q}\,|\,\mathbf{y})$ draw a random number $r$ between 0 and 1 and if $$r < \frac{p(\mathbf{q}\,|\,\mathbf{y})}{p(\theta_t\,|\,\mathbf{y})}$$ return $\mathbf{q}$ as $\theta_{t+1}$ and, otherwise, return $\theta_t$ as $\theta_{t+1}$. In the following cell, you'll implement this step. The function will take 4 arguments: a function that computes the ln-probability (for this demo, it'll be lnpost_linear from above), the current position $\theta_t$, the ln-probability at the current point $p(\theta_t\,|\,\mathbf{y})$, and the covariance matrix of the Gaussian proposal distribution. It should return two values, the new coordinate $\theta_{t+1}$ and the ln-probability at that point $p(\theta_{t+1}\,|\,\mathbf{y})$. The syntax for returning multiple values is return a, b. This function is really the key to this whole tutorial so spend some time getting it right! It is hard to robustly test functions with a random component so chat with other people around you to check your method. We'll also try to test it below but it's worth spending some time now.
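The update rule described above can be sketched as follows. This is one possible implementation of the exercise, shown for reference, and not necessarily the version the author has in mind:

```python
import numpy as np

def metropolis_step(lnpost_function, theta_t, lnpost_t, step_cov):
    # 1. Propose a new position from a Gaussian centered on theta_t.
    q = np.random.multivariate_normal(theta_t, step_cov)
    # 2. Compute the (log-)posterior at the proposal.
    lnpost_q = lnpost_function(q)
    # 3. Accept with probability min(1, p(q|y) / p(theta_t|y)),
    #    computed from the difference of log-posteriors.
    if np.random.rand() < np.exp(lnpost_q - lnpost_t):
        return q, lnpost_q
    # Otherwise stay at the current position.
    return theta_t, lnpost_t
```

Because only the ratio of posterior values enters the acceptance rule, the unknown normalization $p(\mathbf{y})$ cancels, which is why the log-posterior only needs to be correct up to a constant.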
There are a few functions that will come in handy here but the two most important ones are: np.random.multivariate_normal(theta_t, step_cov) - draws a vector sample from the multivariate Gaussian centered on theta_t with covariance matrix step_cov. np.random.rand() - draws a random number between 0 and 1. End of explanation
"""

lptest = lambda x: -0.5 * np.sum(x**2)
th = np.array([0.0])
lp = 0.0
chain = np.empty(10000)
for i in range(len(chain)):
    th, lp = metropolis_step(lptest, th, lp, [[0.3]])
    chain[i] = th[0]
if np.abs(np.mean(chain)) > 0.1 or np.abs(np.std(chain) - 1.0) > 0.1:
    raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
"""
Explanation: As before, here's a simple test for this function. When you run the following cell it will either print a smile or throw an exception. Since the algorithm is random, it might occasionally fail this test so if it fails once, try running it again. If it fails a second time, edit your implementation until the test consistently passes. End of explanation
"""

# Edit these guesses.
m_initial = 2.
b_initial = 0.45

# You shouldn't need to change this plotting code.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x, m_initial*x + b_initial, "g", lw=2)
plt.xlim(0, 5);
"""
Explanation: Running the Markov Chain Now that we have an implementation of the Metropolis step, we can go on to sample from the posterior probability density that we implemented above. To start, we need to initialize the sampler somewhere in parameter space. In the following cell, edit your guess for the slope and intercept of the line until it looks like a reasonably good fit to the data. End of explanation
"""

# Edit this line to specify the proposal covariance:
step = np.diag([1e-6, 1e-6])

# Edit this line to choose the number of steps you want to take:
nstep = 50000

# Edit this line to set the number of steps to discard as burn-in.
nburn = 1000

# You shouldn't need to change any of the lines below here.
p0 = np.array([m_initial, b_initial])
lp0 = lnpost_linear(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
    p0, lp0 = metropolis_step(lnpost_linear, p0, lp0, step)
    chain[i] = p0

# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))

# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].axhline(w[0], color="g", lw=1.5)
axes[0].set_ylabel("m")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].axhline(w[1], color="g", lw=1.5)
axes[1].set_ylabel("b")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
"""
Explanation: In the next cell, we'll start from this initial guess for the slope and intercept and walk through parameter space (using the transition probability from above) to generate a Markov Chain of samples from the posterior probability. There are a few tuning parameters for the method. The first and most important choice has already been covered: initialization. The practical performance of an MCMC sampler depends sensitively on the initial position so it's worth spending some time choosing a good initialization. The second big tuning parameter is the scale of the proposal distribution. We must specify the covariance matrix for the proposal Gaussian. This proposal is currently set to a very bad value. Your job is to run the sampler, look at the output, and try to tune the proposal until you find a "good" value. You will judge this based on a few things. First, you can check the acceptance fraction (the fraction of accepted proposals). For this (easy!) problem, the target is around 50% but for harder problems in higher dimensions, a good target is around 20%.
Another useful diagnostic is a plot of the parameter values as a function of step number. For example, if this looks like a random walk then your proposal scale is probably too small. Once you reach a good proposal, this plot should "look converged". The final tuning parameter is the number of steps to take. In theory, you need to take an infinite number of steps but we don't (ever) have time for that so instead you'll want to take a large enough number of samples so that the sampler has sufficiently explored parameter space and converged to a stationary distribution. This is, of course, unknowable so for today you'll just have to go with your intuition. You can also change the number of steps that are discarded as burn-in but (in this problem) your results shouldn't be very sensitive to this number. Take some time now to adjust these tuning parameters and get a sense of what happens to the sampling when you change different things. End of explanation
"""

if np.any(np.abs(np.mean(chain, axis=0)-w)>0.01) or np.any(np.abs(np.cov(chain, rowvar=0)-V)>1e-4):
    raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
"""
Explanation: The results of the MCMC run are stored in the array called chain with dimensions (nstep, 2). These are samples from the posterior probability density for the parameters. We know from above that this should be a Gaussian with mean $\mathbf{w}$ and covariance $\mathbf{V}$ so let's compare the sample mean and covariance to the analytic result that we computed above: End of explanation
"""

import triangle
triangle.corner(chain[nburn:, :], labels=["m", "b"], truths=w);
"""
Explanation: If you don't get a smile here, that could mean a few things: you didn't run for long enough (try increasing nstep), your choice of step scale was not good (try playing around with the definition of step), or there's a bug in your code.
Try out all of these tuning parameters until you have a good intuition for what's going on and figure out which settings pass this test and which don't. Plotting the results In this section, we'll make two plots that are very useful for checking your results after you run an MCMC: corner plot or scatterplot matrix — a plot of all the 2- and 1-D projections of the MCMC samples. To make this plot, we'll use triangle.py, a Python module specifically designed for this purpose. For simplicity, I've included the module with this notebook so you won't have to install it separately. predictive distribution — a plot of the "posterior predicted data" overplotted on the observed data. This kind of plot can be used as a qualitative model check. First, the corner plot: End of explanation
"""

plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
    plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
"""
Explanation: This plot is a representation of our constraints on the posterior probability for the slope and intercept conditioned on the data. The 2-D plot shows the full posterior and the two 1-D plots show the constraints for each parameter marginalized over the other. The second plot that we want to make is a representation of the posterior predictive distribution for the data. To do this we will plot a few (50) randomly selected samples from the chain and overplot the resulting line on the data. End of explanation
"""

# Edit these guesses.
alpha_initial = 100
beta_initial = -1

# These are the edges of the distribution (don't change this).
a, b = 1.0, 5.0

# Load the data.
events = np.loadtxt("poisson.csv")

# Make a correctly normalized histogram of the samples.
bins = np.linspace(a, b, 12)
weights = 1.0 / (bins[1] - bins[0]) + np.zeros(len(events))
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)

# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
plt.plot(xx, alpha_initial * xx ** beta_initial, "g", lw=2)

# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
"""
Explanation: It is always useful to make a plot like this because it lets you see if your model is capable of describing your data or if there is anything catastrophically wrong. Dataset 2: Population inference In this section, we'll go through a more realistic example problem. There is no closed-form solution for the posterior probability in this case and the model might even be relevant to your research! In this problem, we're using a simulated catalog of measurements and we want to fit for a power law rate function. This is similar to how you might go about fitting for the luminosity function of a population of stars for example. A (wrong!) method that is sometimes used for this problem is to make a histogram of the samples and then fit a line to the log bin heights but the correct method is not much more complicated than this. Instead, we start by choosing a rate model that (in this case) will be a power law: $$\Gamma(x) = \alpha\,x^{\beta} \quad \mathrm{for} \, a < x < b$$ and we want to find the posterior probability for $\alpha$ and $\beta$ conditioned on a set of measurements $\{x_k\}_{k=1}^K$. To do this, we need to choose a likelihood function (a generative model for the dataset).
A reasonable choice in this case is the likelihood function for an inhomogeneous Poisson process (the generalization of the Poisson likelihood to a variable rate function): $$p(\{x_k\}\,|\,\alpha,\,\beta) \propto \exp \left( - \int_a^b \Gamma(x)\,\mathrm{d}x \right) \, \prod_{k=1}^K \Gamma(x_k)$$ Because of our choice of rate function, we can easily compute the integral in the exponent: $$\int_a^b \Gamma(x)\,\mathrm{d}x = \frac{\alpha}{\beta+1}\,\left[b^{\beta+1} - a^{\beta+1}\right]$$ Therefore, the full log-likelihood function is: $$\ln p(\{x_k\}\,|\,\alpha,\,\beta) = \frac{\alpha}{\beta+1}\,\left[a^{\beta+1} - b^{\beta+1}\right] + K\,\ln\alpha + \sum_{k=1}^K \beta\,\ln x_k + \mathrm{const}$$ In the next few cells, you'll implement this model and use your MCMC implementation from above to sample from the posterior for $\alpha$ and $\beta$. But first, let's load the data and plot it. In this cell, you should change your initial guess for alpha and beta until the green line gives a good fit to the histogram. End of explanation
"""

def lnlike_poisson(theta):
    alpha, beta = theta
    raise NotImplementedError("Delete this placeholder and implement this function")

"""
Explanation: In the following cell, you need to implement the log-likelihood function for the problem (same as above): $$\ln p(\{x_k\}\,|\,\alpha,\,\beta) = \frac{\alpha}{\beta+1}\,\left[a^{\beta+1} - b^{\beta+1}\right] + K\,\ln\alpha + \sum_{k=1}^K \beta\,\ln x_k + \mathrm{const}$$ Note that this is only valid for $\beta \ne -1$. In practice you shouldn't ever hit this boundary but, just in case, you should special case beta == -1.0 where $$\ln p(\{x_k\}\,|\,\alpha,\,\beta=-1) = \alpha\,\left[\ln a - \ln b\right] + K\,\ln\alpha - \sum_{k=1}^K \ln x_k + \mathrm{const}$$ End of explanation
"""

p_1, p_2 = (1000.0, -1.), (1500., -2.)
ll_1, ll_2 = lnlike_poisson(p_1), lnlike_poisson(p_2)
if not np.allclose(ll_2 - ll_1, 337.039175916):
    raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
"""
Explanation: As before, edit your implementation until the following test passes. End of explanation
"""

def lnprior_poisson(theta):
    alpha, beta = theta
    if not (0 < alpha < 1000):
        return -np.inf
    if not (-10 < beta < 10):
        return -np.inf
    return 0.0

def lnpost_poisson(theta):
    return lnprior_poisson(theta) + lnlike_poisson(theta)
"""
Explanation: Once you're happy with this implementation, we'll define the full probabilistic model including a prior. As before, I've chosen a broad flat prior on alpha and beta but you should feel free to change this. End of explanation
"""

# Edit this line to specify the proposal covariance:
step = np.diag([1000., 4.])

# Edit this line to choose the number of steps you want to take:
nstep = 50000

# Edit this line to set the number of steps to discard as burn-in.
nburn = 1000

# You shouldn't need to change any of the lines below here.
p0 = np.array([alpha_initial, beta_initial])
lp0 = lnpost_poisson(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
    p0, lp0 = metropolis_step(lnpost_poisson, p0, lp0, step)
    chain[i] = p0

# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))

# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].set_ylabel("alpha")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].set_ylabel("beta")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
"""
Explanation: Now let's run the MCMC for this model.
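For reference, one way the Poisson log-likelihood derived above could be implemented (a sketch of the exercise solution; the deterministic events array here is a stand-in for the data the notebook loads from poisson.csv, and the β = −1 special case is included):

```python
import numpy as np

a, b = 1.0, 5.0
events = np.array([2.0, 3.0, 4.0])  # stand-in for the loaded dataset

def lnlike_poisson(theta):
    alpha, beta = theta
    K = len(events)
    if np.allclose(beta, -1.0):
        # Special case: the integral of alpha * x**beta is alpha * log(x).
        norm = alpha * (np.log(a) - np.log(b))
    else:
        norm = alpha / (beta + 1.0) * (a ** (beta + 1.0) - b ** (beta + 1.0))
    return norm + K * np.log(alpha) + beta * np.sum(np.log(events))

print(lnlike_poisson((1.0, 0.0)))  # alpha=1, beta=0 gives (1 - 5) + 0 + 0 = -4.0
```

As with the linear model, the value is only meaningful up to an additive constant, so only differences between parameter settings matter.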
As before, you should tune the parameters of the algorithm until you get a reasonable acceptance fraction ($\sim 25$-$40\%$) and the chains seem converged. End of explanation
"""

triangle.corner(chain[nburn:], labels=["alpha", "beta"], truths=[500, -2]);

plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)

# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
for alpha, beta in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
    plt.plot(xx, alpha * xx ** beta, "g", alpha=0.1)

# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
"""
Explanation: Once you're happy with the convergence of your chain, plot the results as a corner plot (compared to the values that I used to generate the dataset; $\alpha = 500$ and $\beta = -2$) and plot the posterior predictive distribution. End of explanation
"""
ewulczyn/talk_page_abuse
src/analysis/Prevalence and Efficacy of Moderation.ipynb
apache-2.0
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from load_utils import *

# Load scored diffs and moderation event data
d = load_diffs()
df_block_events, df_blocked_user_text = load_block_events_and_users()
df_warn_events, df_warned_user_text = load_warn_events_and_users()
moderated_users = [('warned', df_warned_user_text), ('blocked', df_blocked_user_text)]
moderation_events = [('warned', df_warn_events), ('blocked', df_block_events)]
"""
Explanation: Prevalence and Efficacy of Moderation In this notebook, we explore the prevalence and efficacy of methods for moderating personal attacks on Wikipedia. Background on Moderation There is an explicit policy on Wikipedia against making personal attacks. Any user who observes or experiences a personal attack can place a standardized warning message on the offending user's talk page via a set of templates. In these templates, users have the option to cite the page on which the attack occurred, but they generally do not cite the actual revision where the attack was introduced. In addition to warnings, Wikipedia administrators have the ability to suspend the editing rights of users who violate the policy on personal attacks. This action is known as blocking a user. The duration and extent of the block is variable and at the discretion of the administrator. The interface admins use for blocking users requires providing a reason for the block. The reasons generally come from a small set of choices from a drop down list and map to the different Wikipedia policies. We are only interested in blocks where the admin selected the "[WP:No personal attacks|Personal attacks] or [WP:Harassment|harassment]" reason.
Note that there is a separate policy on Wikipedia against harassment, which encompasses behaviors such as legal threats and posting personal information, which we do not address in this study. Unfortunately, admins generally do not cite the page or revision the incident occurred on when blocking a user. Administrators tend to block users in response to personal attacks reported on the Administrators Noticeboard for incidents, but it is also not uncommon for them to block users in response to attacks they observe during their other activities. Finally, administrators have the ability to delete the revision containing the attack from public view. Note that we only work with comments that have not been deleted. Data In order to get data on warnings, we generated a dataset of all public user talk page diffs, identified the diffs that contained a warning and parsed the information in the template. From the template we can extract the following information: - the user name of the user posting the warning - the user name of the user being warned - the timestamp of the warning - the page the attack occurred on (not always given) Data on block events comes from the public logging table. Each record provides: - the user name of the admin - the user name of the user being blocked - the timestamp of the block event (this is not the timestamp of the attack) Again, we only use block events with the "[WP:No personal attacks|Personal attacks] or [WP:Harassment|harassment]" reason. We also ran our machine learning model for detecting personal attacks on two datasets of comments (diffs) - every user and article talk comment made in 2015 - every user and article talk comment made by a user blocked for personal attacks Analysis This notebook attempts to provide insight into each of the following questions: Summary stats of warnings and blocks How often do they occur? How many individuals have been blocked/warned? Temporal Trends How often do warnings result in blocks?
How do blocked users behave after being blocked? Will they be blocked again? Are new attacks from blocked users more likely to lead to a block? What are the "politics" of blocking a user? Are anons more likely to be blocked? Are long term editors less likely to be blocked? How comprehensive is the moderation? What fraction of attacking users have been blocked for harassment? What fraction of attacking comments come from users blocked for harassment? What fraction of attacking comments were followed by a block event? How does the probability of a user being blocked change with the number of attacking comments? Note: interesting interplay between ground truth data, that comes from Wikipedia community itself, but that may be incomplete and the ML scores which are based on evaluations from people outside community, are a bit noisy, but are totally comprehensive. End of explanation """ print('# block events: %s' % df_block_events.shape[0]) print('# warn events: %s' % df_warn_events.shape[0]) """ Explanation: Summary stats of warnings and blocks End of explanation """ print('# block events') print(df_block_events.groupby('anon').size().to_frame()) print() print('# warn events') print(df_warn_events.groupby('anon').size()) """ Explanation: There have been 27343 instances of a user being blocked and 36520 instances of a user being warned for personal attacks in the history of Wikipedia. Warnings have different levels and most are of the form "if you continue to attack users, you may be blocked", so we expect there to be a large difference. However, we will dig deeper into the relationship between warning and blocking later. End of explanation """ print('# blocked users') print(df_block_events.groupby('anon').user_text.nunique()) print() print('# warned users') print(df_warn_events.groupby('anon').user_text.nunique()) """ Explanation: Almost half of all block events involved the blocking of an anonymous user. Half of warn events were addressed to anons. 
Later we will investigate if anons or registered users are disproportionately blocked. End of explanation """ print('fraction of blocked users with a public user talk comment') print(d['blocked'].groupby('author_anon').user_text.nunique() / df_block_events.groupby('anon').user_text.nunique()) """ Explanation: Here we see that there are more moderations events than moderated users, since users can be blocked or warned multiple times. End of explanation """ df_block_events.assign(block_count = 1)\ .groupby(['user_text', 'anon'], as_index = False)['block_count'].sum()\ .groupby(['anon', 'block_count']).size() """ Explanation: Only 55% of users blocked have a public comment in the user or talk ns. This can be because users can get blocked for personal attacks for comments they make in the main ns and because the revisions can be made private. Q: How many users got blocked multiple times? End of explanation """ df_block_events.year.value_counts().sort_index().plot(label = 'block') df_warn_events.year.value_counts().sort_index().plot(label = 'warn') plt.xlabel('year') plt.ylabel('count') plt.xlim((2004, 2015)) plt.legend() """ Explanation: The vast majority of blocked users only get blocked once. Again, note that blocks are usually not indefinite but result in a temporary suspension of certain editing rights. Unsurprisingly, registered users are more likely to be reblocked than anons. Q: How are the counts changing from year to year Note: there is no monthly pattern and the hourly pattern just tells you when enforcement happens, not when the attacks happen. As a result, hourly and monthly plots have been omitted. End of explanation """ b = set(df_blocked_user_text['user_text']) w = set(df_warned_user_text['user_text']) len(w.intersection(b)) / len(w) """ Explanation: It looks like the warning template was introduced in 2007. 
The number of block events per year looks a lot like the graph of participation in Wikipedia per year, although there has been a slight uptick in attacks in recent years. Q: How often do warnings result in blocks? How often were blocks preceded by warnings? End of explanation
"""

b = set(df_blocked_user_text['user_text'])
w = set(df_warned_user_text['user_text'])
len(w.intersection(b)) / len(w)
"""
Explanation: Only 11% of warned users have been blocked. Let's also check what fraction of blocked users ever got warned. This is a bit tricky since our group of blocked users includes users who were blocked for harassment, which is quite different. End of explanation
"""

len(w.intersection(b)) / len(b)
"""
Explanation: A first estimate gives that 14% of blocked users have been warned. This is probably an underestimate because the set of blocked users contains harassers that may not have made a personal attack. We can also check what fraction of blocked users, who have made an attack that our algorithm detected, have been warned. End of explanation
"""

a = set(d['blocked'].query('pred_recipient_score > 0.7')['user_text'])
len(a.intersection(b).intersection(w)) / len(a.intersection(b))
"""
Explanation: 31% of blocked editors that our model confirmed made a personal attack were warned beforehand. In conclusion, we see that warning and blocking are fairly different mechanisms. Most people who get warned do not get blocked and most people who get blocked do not get warned.
It seems like it would provide transparency to the moderation process if there was a clearer policy. How do blocked users behave after being blocked? Q: Will they be blocked again? P(blocked again | blocked at least k times) End of explanation """ K = 6 sample = 'blocked' threshold = 0.5 events = {} # null events set e = d[sample][['user_text']].drop_duplicates() e['timestamp'] = pd.to_datetime('1900') events[0] = e # rank block events ranked_events = df_block_events.copy() ranks = df_block_events\ .groupby('user_text')['timestamp']\ .rank() ranked_events['rank'] = ranks for k in range(1,K): e = ranked_events.query("rank==%d" % k)[['user_text', 'timestamp']] events[k] = e attacks = {} for k in range(0, K-1): c = d[sample].merge(events[k], how = 'inner', on='user_text') c = c.query('timestamp < rev_timestamp') del c['timestamp'] c = c.merge(events[k+1], how = 'left', on = 'user_text') c['timestamp'] = c['timestamp'].fillna(pd.to_datetime('2100')) c = c.query('rev_timestamp < timestamp') c = c.query('pred_recipient_score > %f' % threshold) attacks[k] = c blocked_users = {i:set(events[i]['user_text']) for i in events.keys()} attackers = {i:set(attacks[i]['user_text']) for i in attacks.keys()} dfs_sns = [] for k in range(1, K-1): u_a = attackers[k] u_b = blocked_users[k+1] u_ab = u_a.intersection(u_b) n_a = len(u_a) n_ab = len(u_ab) print('k:',k, n_ab/n_a) dfs_sns.append(pd.DataFrame({'blocked': [1]*n_ab, 'k': [k]*n_ab})) dfs_sns.append(pd.DataFrame({'blocked': [0]*(n_a- n_ab), 'k': [k]*(n_a- n_ab)})) sns.pointplot(x = 'k', y = 'blocked', data = pd.concat(dfs_sns)) plt.xlabel('k') plt.ylabel('P(blocked | attacked and blocked k times already)') """ Explanation: Every time a user gets blocked, the chance that they will get blocked again increases. This could be explained by the fact that there is some set of persistently toxic users/accounts, who keep coming back after being blocked, while the less persistent ones get discouraged by the blocks. 
It may also be that being blocked tends to lead to more toxic behavior. TODO: - How can we tell the difference between the 2 hypotheses for the observed pattern? - Try a simple LR for whether a user will be blocked again based on simple user behavior features Q: Are new attacks from blocked users more likely to lead to a block? P(blocked | attacked and blocked k times already) The methodology for this is a bit involved. events[i]: the set of ith block events per user blocked_users[i] = set of users blocked i times, e.g. set of users in events[i] attacks[i]: the set of attacks made by users after their ith block, excluding comments made after their (i+1)th block, if it happened. attackers[i]: set of users in attacks[i] P(blocked | attacked and blocked k times already): $$ \frac{|blocked[k+1] \cap attackers[k]|}{|attackers[k]|}$$ End of explanation
"""

K = 6
sample = 'blocked'
threshold = 0.5

events = {}
# null events set
e = d[sample][['user_text']].drop_duplicates()
e['timestamp'] = pd.to_datetime('1900')
events[0] = e

# rank block events
ranked_events = df_block_events.copy()
ranks = df_block_events\
    .groupby('user_text')['timestamp']\
    .rank()
ranked_events['rank'] = ranks

for k in range(1,K):
    e = ranked_events.query("rank==%d" % k)[['user_text', 'timestamp']]
    events[k] = e

attacks = {}
for k in range(0, K-1):
    c = d[sample].merge(events[k], how = 'inner', on='user_text')
    c = c.query('timestamp < rev_timestamp')
    del c['timestamp']
    c = c.merge(events[k+1], how = 'left', on = 'user_text')
    c['timestamp'] = c['timestamp'].fillna(pd.to_datetime('2100'))
    c = c.query('rev_timestamp < timestamp')
    c = c.query('pred_recipient_score > %f' % threshold)
    attacks[k] = c

blocked_users = {i:set(events[i]['user_text']) for i in events.keys()}
attackers = {i:set(attacks[i]['user_text']) for i in attacks.keys()}

dfs_sns = []

for k in range(1, K-1):
    u_a = attackers[k]
    u_b = blocked_users[k+1]
    u_ab = u_a.intersection(u_b)
    n_a = len(u_a)
    n_ab = len(u_ab)
    print('k:',k, n_ab/n_a)
    dfs_sns.append(pd.DataFrame({'blocked': [1]*n_ab, 'k': [k]*n_ab}))
    dfs_sns.append(pd.DataFrame({'blocked': [0]*(n_a- n_ab), 'k': [k]*(n_a- n_ab)}))

sns.pointplot(x = 'k', y = 'blocked', data = pd.concat(dfs_sns))
plt.xlabel('k')
plt.ylabel('P(blocked | attacked and blocked k times already)')
"""
Explanation: The probability of being blocked after making a personal attack increases as a function of how many times the user has been blocked before. This could indicate heightened scrutiny by administrators. The pattern could also occur if users who continue to attack after being blocked make more frequent or more toxic attacks and are hence more likely to be discovered.
TODO - check if users make more or more toxic attacks after being blocked - it could even be that they get blocked for smaller offenses What are the "politics" of blocking a user? Q: Are anons more likely to be moderated? Methodology: Label a user as an attacker if they make a comment with a score within a given range/bin. Compute probability of being warned/blocked broken down by anon/registered, score range, and moderation event type. End of explanation """ attackers = d['2015'].query('not author_anon and pred_recipient_score > 0.5')[['user_text']].drop_duplicates() # get days active d_tenure = pd.read_csv('../../data/long_term_users.tsv', sep = '\t') d_tenure.columns = ['user_text', 'n'] attackers = attackers.merge(d_tenure, how = 'left', on = 'user_text') attackers['n'] = attackers['n'].fillna(0) # bin days active tresholds = np.percentile(attackers['n'], np.arange(0, 100.01,10 )) tresholds = sorted(set(tresholds.astype(int))) bins = [] for i in range(len(tresholds)-1): label = '%d-%d' % (tresholds[i], tresholds[i+1]-1) rnge = range(tresholds[i], tresholds[i+1]) bins.append((label, rnge)) bins = [('<7', range(0, 8)), ('8-365', range(8, 366)), ('365<', range(366, 4500))] def map_count(x): for label, rnge in bins: if x in rnge: return label attackers['binned_n'] = attackers['n'].apply(map_count) # get if blocked blocked_users_2015 = df_block_events.query("timestamp > '2014-12-31'")[['user_text']].drop_duplicates() blocked_users_2015['blocked'] = 1 attackers = attackers.merge(blocked_users_2015, how='left', on='user_text') attackers['blocked'] = attackers['blocked'].fillna(0) #plot o = [e[0] for e in bins] sns.pointplot(x='binned_n', y = 'blocked', data=attackers, order = o) plt.ylabel('P(blocked | n days active prior to 2015)') plt.xlabel('n days active prior to 2015') """ Explanation: Anons are less likely to be blocked and warned! That is not what I expected. 
TODO: - investigate why this could be - talk to some admins to see if this makes sense Q: Are long term editors less likely to be blocked? Methodology: Consider all editors who made a comment in 2015. Select those that made a comment with an attack score of 0.5 or higher. See how many days they were active before Jan 1 2015. Compute block probability as a function of how many days they were active. TODO - Get # of days active before launching the first attack in 2015, instead of # of days active before 2015 End of explanation
"""

attackers = d['2015'].query('not author_anon and pred_recipient_score > 0.5')[['user_text']].drop_duplicates()

# get days active
d_tenure = pd.read_csv('../../data/long_term_users.tsv', sep = '\t')
d_tenure.columns = ['user_text', 'n']
attackers = attackers.merge(d_tenure, how = 'left', on = 'user_text')
attackers['n'] = attackers['n'].fillna(0)

# bin days active
thresholds = np.percentile(attackers['n'], np.arange(0, 100.01, 10))
thresholds = sorted(set(thresholds.astype(int)))
bins = []
for i in range(len(thresholds)-1):
    label = '%d-%d' % (thresholds[i], thresholds[i+1]-1)
    rnge = range(thresholds[i], thresholds[i+1])
    bins.append((label, rnge))

bins = [('<7', range(0, 8)), ('8-365', range(8, 366)), ('365<', range(366, 4500))]

def map_count(x):
    for label, rnge in bins:
        if x in rnge:
            return label

attackers['binned_n'] = attackers['n'].apply(map_count)

# get if blocked
blocked_users_2015 = df_block_events.query("timestamp > '2014-12-31'")[['user_text']].drop_duplicates()
blocked_users_2015['blocked'] = 1
attackers = attackers.merge(blocked_users_2015, how='left', on='user_text')
attackers['blocked'] = attackers['blocked'].fillna(0)

# plot
o = [e[0] for e in bins]
sns.pointplot(x='binned_n', y = 'blocked', data=attackers, order = o)
plt.ylabel('P(blocked | n days active prior to 2015)')
plt.xlabel('n days active prior to 2015')
"""
Explanation: New editors are the least likely to be blocked for attacks. Editors with 8-365 active days are the most likely to be blocked for attacks. Experienced editors are less likely than editors with medium experience to be blocked. Note that although the CIs overlap, all differences are significant at alpha=0.05. TODO: - interpretation - consider possible confounds How comprehensive is the moderation? Q: What fraction of attacking users have been warned/blocked for harassment? Take unsampled data from 2015. Label a user as attacking if one of their comments is above the threshold. Join list of blocked users to see if any of them have ever or will ever be blocked.
End of explanation """ dfs = [] ts = np.arange(0.5, 0.91, 0.1) moderation_events = [('warn', df_warn_events), ('blocked', df_block_events)] def get_delta(x): if x['timestamp'] is not None and x['rev_timestamp'] is not None: return x['timestamp'] - x['rev_timestamp'] else: return pd.Timedelta('0 seconds') for t in ts: for (event_type, events) in moderation_events: dfs.append( d['2015'].query('pred_recipient_score >= %f' % t)\ .loc[:, ['user_text', 'rev_id', 'rev_timestamp']]\ .merge(events, how = 'left', on = 'user_text')\ .assign(delta = lambda x: get_delta(x))\ .assign(blocked= lambda x: (x['delta'] < pd.Timedelta('7 days')) & (x['delta'] > pd.Timedelta('0 seconds')))\ .drop_duplicates(subset = ['rev_id'])\ .assign(threshold = t, event=event_type) ) ax = sns.pointplot(x='threshold', y='blocked', hue='event', data = pd.concat(dfs)) plt.xlabel('threshold') plt.ylabel('fraction of attacking comments followed by a moderation event') """ Explanation: Most users who have made at least one attack have never been warned or blocked. Q: What fraction of attacking comments were followed by a warn or block event? Within one week End of explanation """ dfs = [] for t in ts: for event_type, users in moderated_users: dfs.append(\ d['2015'].query('pred_recipient_score >= %f' % t)\ .merge(users, how = 'left', on = 'user_text')\ .assign(blocked = lambda x: x.blocked.fillna(0), threshold = t, event = event_type) ) df = pd.concat(dfs) sns.pointplot(x = 'threshold', y = 'blocked', hue = 'event', data = df) plt.ylabel('fraction of attacking comments from warned/blocked users') """ Explanation: Most attacking comments do not lead to the user being warned/blocked within the next 7 days. Q: What fraction of attacking comments come from users warned/blocked for harassment? 
End of explanation
"""

dfs = []
for t in ts:
    for event_type, users in moderated_users:
        dfs.append(\
            d['2015'].query('pred_recipient_score >= %f' % t)\
                .merge(users, how = 'left', on = 'user_text')\
                .assign(blocked = lambda x: x.blocked.fillna(0),
                        threshold = t,
                        event = event_type)
        )

df = pd.concat(dfs)
sns.pointplot(x = 'threshold', y = 'blocked', hue = 'event', data = df)
plt.ylabel('fraction of attacking comments from warned/blocked users')
"""
Explanation: Most attacking comments do not lead to the user being warned/blocked within the next 7 days.
Q: What fraction of attacking comments come from users warned/blocked for harassment?
End of explanation
"""

def remap(x):
    if x < 5:
        return str(int(x))
    if x < 10:
        return '5-10'
    else:
        return '10+'

t = 0.5
d_temp = d['2015'].assign(attack = lambda x: x.pred_recipient_score >= t)\
    .groupby('user_text', as_index = False)['attack'].sum()\
    .rename(columns={'attack':'num_attacks'})\
    .merge(df_blocked_user_text, how = 'left', on = 'user_text')\
    .assign(blocked = lambda x: x.blocked.fillna(0),
            num_attacks = lambda x: x.num_attacks.apply(remap),
            threshold = t)

ax = sns.pointplot(x='num_attacks', y='blocked', data=d_temp, hue = 'threshold', order = ('0', '1', '2', '3', '4', '5-10', '10+'))
plt.ylabel('fraction blocked')
"""
Explanation: Most attacks do not come from people who have never been warned/blocked for harassment.
Q: How does the probability of a user being blocked change with the number of attacking comments?
End of explanation
"""
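As a self-contained footnote on the cell above (the counts here are fabricated for illustration — the real analysis runs on the full 2015 comment frame), the bucketing that `remap` performs can be checked in isolation:

```python
def remap(x):
    # same bucketing as the analysis cell above: exact counts below 5,
    # then coarse '5-10' and '10+' bins
    if x < 5:
        return str(int(x))
    if x < 10:
        return '5-10'
    return '10+'

# fabricated per-user attack counts, for illustration only
counts = [0, 3, 7, 12]
binned = [remap(c) for c in counts]
print(binned)  # ['0', '3', '5-10', '10+']
```

Counts of 5-9 all collapse into a single bucket, which is why the plot's x-axis order ends with '5-10' and '10+'.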
ueapy/ueapy.github.io
content/notebooks/2017-03-10-regex.ipynb
mit
import re """ Explanation: A regular expression (regex, RE) is a sequence of characters that define a search pattern. Usually this pattern is used by string searching algorithms for "find" or "find and replace" operations on strings. For example, search engines use regular expressions to find matches to your query as do various text editors when you, e.g., enter a search and replace dialogue. re module provides regular expression matching operations in Python. It lets you check if a particular string matches a given regular expression or if a given regular expression matches a particular string. End of explanation """ string = 'Sic Parvis Magna' pattern = r'.*' # any character as many times as possible """ Explanation: There are two types of characters in regular expressions, ordinary and special characters. Ordinary characters, like 'A', 'z', or '0', simply match themselves, while special characters, like '\' or '(', either stand for classes of ordinary characters, or affect how the regular expressions around them are interpreted. In other words, special characters help you to specify how regular expressions work and what will be returned to you if you find a match. Let us learn some special characters: '.' (Dot.) In the default mode, this matches any character except a newline. '*' (Asterisk) Causes the resulting RE to match 0 or more repetitions of the preceding RE, as many repetitions as are possible. To test how these special characters work we need to create two variables, one for a string and one for a regular expression that we will try to match with a specific pattern in a string. End of explanation """ re.search(r'.*', string) """ Explanation: r in r'.*' indicates that we are using Python's raw string notation, which, in short, differs from ordinary Python strings by its interpretation of the backslash character. 
To search for a pattern in a string we will use the re.search() function:
re.search(pattern, string, flags=0)
Scan through string looking for the first location where the regular expression pattern produces a match, and return a corresponding MatchObject instance. Return None if no position in the string matches the pattern.
End of explanation
"""

pattern = r'Magna'
re.search(pattern, string)
"""
Explanation: What if we want to find only 'Magna'?
End of explanation
"""

pattern = r'magna'
re.search(pattern, string)
"""
Explanation: What about 'magna'?
End of explanation
"""

string = 'Station : Boulder, CO \n Station Height : 1743 meters \n Latitude : 39.95'
"""
Explanation: Nothing was returned because no match was found. Let us change our string to something that contains numbers and assume that we need to find only those numbers.
End of explanation
"""

pattern = r'\d+' # one or more digits
re.search(pattern, string)
"""
Explanation: \d Matches any decimal digit; this is equivalent to the class [0-9].
'+' Causes the resulting RE to match 1 or more repetitions of the preceding RE.
End of explanation
"""

re.search(r'\d+\.\d+', string) # float number
"""
Explanation: Why did we find only 1743, and not 39 or 39.95 as well? Answer: re.search() scans through string looking for the first location where the regular expression pattern produces a match [...]. Let us now try to find 39.95 for the latitude. There is no special character for a float number, but we can combine existing special characters to produce a regular expression that will match only float numbers. In other words, we need to include the dot '.' character in our new regular expression. However, the dot has a special meaning in regular expressions (see above). To construct the right regular expression we need to add the backslash character '\' before the dot character in order to avoid invoking its special meaning, i.e. quote or escape it.
End of explanation """ re.findall(r'\d+\.\d+|\d+', string) # float or integer number """ Explanation: But how to find both numbers? For that we need to use the pipeline character '|' and re.findall() function since we want to get more than one result in return. '|' A|B, where A and B can be arbitrary REs, creates a regular expression that will match either A or B. re.findall(pattern, string, flags=0) Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found. End of explanation """ raw_data = 'O1D = OH + OH : 2.14e-10*H2O;\nOH + O3 = HO2 : 1.70e-12*EXP(-940/TEMP);' raw_lines = raw_data.split('\n') raw_lines """ Explanation: Moving on to a more science related example. Let us assume that we have a list of chemical reaction equations and rate coefficients and we want to separate equations from rate coefficients. End of explanation """ m = re.search(r'(.*) (\d)', 'The Witcher 3') m.group(0) # entire match m.group(1) # first parenthesized subgroup m.group(2) # second parenthesized subgroup m.group(1, 2) # multiple arguments give us a tuple """ Explanation: When we apply re.search() function to a line in raw_lines, we will get a MatchObject in return. MatchObjects support various methods, .group() is among them. group([group1, ...]) Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument. For example, End of explanation """ for l in raw_lines: line = re.search(r'(.*)(.*)', l).group(1, 2) print(line) """ Explanation: So let us indicate that we want to return two subgroups, one for an equation and one for a rate coefficient. 
If we put them simply one after another in the regular expression, we do not get what we want:
End of explanation
"""

for l in raw_lines:
    line = re.search(r'(.*)(.*)', l).group(1, 2)
    print(line)
"""
Explanation: The equation part is separated from the rate coefficient part by a colon ':' with a whitespace on each side, therefore we need to put those characters between the subgroups, as well as the semicolon ';' at the end if we do not want to see it in the resulting string.
\s Matches any whitespace character, this is equivalent to the set [ \t\n\r\f\v].
End of explanation
"""

for l in raw_lines:
    line = re.search(r'(.*)\s:\s(.*);', l).group(1, 2)
    print(line)
"""
Explanation: Now we want to separate chemical reactants from products and store them in lists of strings without any arithmetic signs. To do that let us use re.findall() and a regular expression that matches the letters and numbers that comprise our chemical species names:
\w Matches any alphanumeric character and the underscore; this is equivalent to the set [a-zA-Z0-9_].
'+' Causes the resulting RE to match 1 or more repetitions of the preceding RE.
End of explanation
"""

alphanum_pattern = r'\w+' # any letter or number, as many times as possible

for l in raw_lines:
    line = re.search(r'(.*)\s:\s(.*);', l).group(1,2)
    subline_reac, subline_prod = line[0].split('=') # split equation into reactants and products parts using '=' as a separator
    print('Reactants: '+subline_reac, 'Products: '+subline_prod)
    reac = re.findall(alphanum_pattern, subline_reac)
    prod = re.findall(alphanum_pattern, subline_prod)
    print(reac, prod)
"""
Explanation: We finally got all pieces of information we wanted about each chemical reaction: what the reactants and products are and what the corresponding rate coefficient is.
The best way to store this information is to create a dictionary for each chemical reaction and append those dictionaries into a list. End of explanation """ HTML(html) """ Explanation: This approach becomes pretty handy if you have thousands of reactions to work with (as I do), and there is still plenty of room for using re module. References: https://en.wikipedia.org/wiki/Regular_expression https://docs.python.org/3.6/library/re.html https://docs.python.org/2.0/ref/strings.html http://stackoverflow.com/questions/12871066/what-exactly-is-a-raw-string-regex-and-how-can-you-use-it Interactive website to play with strings and regular expressions: http://pythex.org/ End of explanation """
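As a closing aside (not part of the original tutorial), the same extraction can be written with named groups, `(?P<name>...)`, which label the subgroups instead of relying on positional indices — a sketch reusing the toy reaction strings from above:

```python
import re

raw_data = 'O1D = OH + OH : 2.14e-10*H2O;\nOH + O3 = HO2 : 1.70e-12*EXP(-940/TEMP);'

eqs = []
for l in raw_data.split('\n'):
    # named groups replace the positional .group(1, 2) calls used above
    m = re.search(r'(?P<eqn>.*)\s:\s(?P<coef>.*);', l)
    reac, prod = m.group('eqn').split('=')
    eqs.append(dict(reac=re.findall(r'\w+', reac),
                    prod=re.findall(r'\w+', prod),
                    coef=m.group('coef')))

print(eqs[0])  # {'reac': ['O1D'], 'prod': ['OH', 'OH'], 'coef': '2.14e-10*H2O'}
```

The behavior is identical to the positional version; the named version simply documents what each subgroup means at the point where the pattern is written.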
gregcaporaso/short-read-tax-assignment
ipynb/runtime/compute-runtimes.ipynb
bsd-3-clause
from os.path import join, expandvars from joblib import Parallel, delayed from tax_credit.framework_functions import (runtime_make_test_data, runtime_make_commands, clock_runtime) ## project_dir should be the directory where you've downloaded (or cloned) the ## tax-credit repository. project_dir = '../..' data_dir = join(project_dir, "data") results_dir = join(project_dir, 'temp_results_runtime') runtime_results = join(results_dir, 'runtime_results.txt') tmpdir = join(results_dir, 'tmp') ref_db_dir = join(project_dir, 'data/ref_dbs/gg_13_8_otus') ref_seqs = join(ref_db_dir, '99_otus_clean_515f-806r_trim250.fasta') ref_taxa = join(ref_db_dir, '99_otu_taxonomy_clean.tsv') num_iters = 1 sampling_depths = [1] + list(range(2000,10001,2000)) """ Explanation: Prepare the environment First we'll import various functions that we'll need for generating the report and configure the environment. End of explanation """ runtime_make_test_data(ref_seqs, tmpdir, sampling_depths) """ Explanation: Generate test datasets Subsample reference sequences to create a series of test datasets and references. End of explanation """ ! qiime tools import --input-path {ref_taxa} --output-path {ref_taxa}.qza --type "FeatureData[Taxonomy]" --source-format HeaderlessTSVTaxonomyFormat for depth in sampling_depths: tmpfile = join(tmpdir, str(depth)) + '.fna' ! qiime tools import --input-path {tmpfile} --output-path {tmpfile}.qza --type "FeatureData[Sequence]" ! qiime feature-classifier fit-classifier-naive-bayes --o-classifier {tmpfile}.nb.qza --i-reference-reads {tmpfile}.qza --i-reference-taxonomy {ref_taxa}.qza """ Explanation: Import to qiime for q2-feature-classifier methods, train scikit-learn classifiers. We do not include the training step in the runtime analysis, because under normal operating conditions a reference dataset will be trained once, then re-used many times for any datasets that use the same marker gene (e.g., 16S rRNA). 
Separating the training step from the classification step was a conscious decision on the part of the designers to make classification as quick as possible by removing redundant training steps!
End of explanation
"""

qiime1_setup = join(results_dir, '.bashrc')

qiime1_template = ('bash -c "source activate qiime1; source ' + qiime1_setup + '; '
                   'assign_taxonomy.py -i {1} -o {0} -r {2} -t {3} -m {4} {5}"')

blast_template = ('qiime feature-classifier classify-consensus-blast --i-query {1}.qza --o-classification '
                  '{0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')

vsearch_template = ('qiime feature-classifier classify-consensus-vsearch --i-query {1}.qza '
                    '--o-classification {0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')

naive_bayes_template = ('qiime feature-classifier classify-sklearn '
                        '--o-classification {0}/assign.tmp --i-classifier {2}.nb.qza --i-reads {1}.qza {5}')

# {method: (template, method-specific params)}
methods = {
    'rdp': (qiime1_template, '--confidence 0.5 --rdp_max_memory 16000'),
    'uclust': (qiime1_template, '--min_consensus_fraction 0.51 --similarity 0.8 --uclust_max_accepts 3'),
    'sortmerna': (qiime1_template, '--sortmerna_e_value 0.001 --min_consensus_fraction 0.51 --similarity 0.8 '
                  '--sortmerna_best_N_alignments 3 --sortmerna_coverage 0.8'),
    'blast' : (qiime1_template, '-e 0.001'),
    'blast+' : (blast_template, '--p-evalue 0.001'),
    'vsearch' : (vsearch_template, '--p-perc-identity 0.90'),
    'naive-bayes': (naive_bayes_template, '--p-confidence 0.7')
}
"""
Explanation: Preparing the method/parameter combinations
Finally we define the method/parameter combinations that we want to test and the command templates to execute.
Template fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
End of explanation
"""

commands_a = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa, sampling_depths,
                                   num_iters=1, subsample_ref=True)
"""
Explanation: Generate the list of commands and run them
First we will vary the size of the reference database and search a single sequence against it.
End of explanation
"""

commands_b = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa, sampling_depths,
                                   num_iters=1, subsample_ref=False)
"""
Explanation: Next, we will vary the number of query seqs, and keep the number of ref seqs constant.
End of explanation
"""

print(len(commands_a + commands_b))
print(commands_a[1])
print(commands_b[-1])

Parallel(n_jobs=23)(delayed(clock_runtime)(command, runtime_results, force=False)
                    for command in (list(set(commands_a + commands_b))));
"""
Explanation: Let's look at a command from each list and the total number of commands as a sanity check...
End of explanation
"""
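As a minimal illustration of the field convention above (the real command assembly happens inside `runtime_make_commands`, whose internals are project-specific — the paths and values below are made up), a template can be filled with `str.format` and positional arguments:

```python
# hypothetical values for the six positional template fields
fields = ('results/blast',        # {0} output directory
          'tmp/2000.fna',         # {1} input data
          'ref/99_otus.fasta',    # {2} reference sequences
          'ref/99_taxonomy.tsv',  # {3} reference taxonomy
          'blast',                # {4} method name
          '-e 0.001')             # {5} other parameters

# core of the qiime1 template above, without the bash/env-activation prefix
template = 'assign_taxonomy.py -i {1} -o {0} -r {2} -t {3} -m {4} {5}'
command = template.format(*fields)
print(command)
```

Because the fields are positional, the same tuple of values can be splatted into any of the method templates, regardless of the order in which a given template uses the fields.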
NathanYee/ThinkBayes2
code/.ipynb_checkpoints/chap09mine-checkpoint.ipynb
gpl-2.0
from __future__ import print_function, division % matplotlib inline import warnings warnings.filterwarnings('ignore') import math import numpy as np from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkplot """ Explanation: Think Bayes: Chapter 9 This notebook presents code and exercises from Think Bayes, second edition. Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ import pandas as pd df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t') df.head() grouped = df.groupby('Treatment') for name, group in grouped: print(name, group.Response.mean()) from scipy.stats import norm class Normal(Suite, Joint): def Likelihood(self, data, hypo): """ data: sequence of test scores hypo: mu, sigma """ mu, sigma = hypo likes = norm.pdf(data, mu, sigma) return np.prod(likes) from itertools import product mus = np.linspace(20, 80, 101) sigmas = np.linspace(5, 30, 101) control = Normal(product(mus, sigmas)) data = df[df.Treatment=='Control'].Response control.Update(data) thinkplot.Contour(control, pcolor=True) pmf_mu0 = control.Marginal(0) thinkplot.Pdf(pmf_mu0) pmf_sigma0 = control.Marginal(1) thinkplot.Pdf(pmf_sigma0) """ Explanation: Improving Reading Ability From DASL(http://lib.stat.cmu.edu/DASL/Stories/ImprovingReadingAbility.html) An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve. 
Summary statistics on the two groups of children show that the average score of the treatment class was almost ten points higher than the average of the control class. A two-sample t-test is appropriate for testing whether this difference is statistically significant. The t-statistic is 2.31, which is significant at the .05 level. End of explanation """ # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here """ Explanation: Exercise: Run this analysis again for the control group. What is the distribution of the difference between the groups? What is the probability that the average "reading power" for the treatment group is higher? What is the probability that the variance of the treatment group is higher? End of explanation """ class Paintball(Suite, Joint): """Represents hypotheses about the location of an opponent.""" def __init__(self, alphas, betas, locations): """Makes a joint suite of parameters alpha and beta. Enumerates all pairs of alpha and beta. Stores locations for use in Likelihood. alphas: possible values for alpha betas: possible values for beta locations: possible locations along the wall """ self.locations = locations pairs = [(alpha, beta) for alpha in alphas for beta in betas] Suite.__init__(self, pairs) def Likelihood(self, data, hypo): """Computes the likelihood of the data under the hypothesis. hypo: pair of alpha, beta data: location of a hit Returns: float likelihood """ alpha, beta = hypo x = data pmf = MakeLocationPmf(alpha, beta, self.locations) like = pmf.Prob(x) return like def MakeLocationPmf(alpha, beta, locations): """Computes the Pmf of the locations, given alpha and beta. Given that the shooter is at coordinates (alpha, beta), the probability of hitting any spot is inversely proportionate to the strafe speed. 
alpha: x position beta: y position locations: x locations where the pmf is evaluated Returns: Pmf object """ pmf = Pmf() for x in locations: prob = 1.0 / StrafingSpeed(alpha, beta, x) pmf.Set(x, prob) pmf.Normalize() return pmf def StrafingSpeed(alpha, beta, x): """Computes strafing speed, given location of shooter and impact. alpha: x location of shooter beta: y location of shooter x: location of impact Returns: derivative of x with respect to theta """ theta = math.atan2(x - alpha, beta) speed = beta / math.cos(theta)**2 return speed alphas = range(0, 31) betas = range(1, 51) locations = range(0, 31) suite = Paintball(alphas, betas, locations) suite.UpdateSet([15, 16, 18, 21]) locations = range(0, 31) alpha = 10 betas = [10, 20, 40] thinkplot.PrePlot(num=len(betas)) for beta in betas: pmf = MakeLocationPmf(alpha, beta, locations) pmf.label = 'beta = %d' % beta thinkplot.Pdf(pmf) thinkplot.Config(xlabel='Distance', ylabel='Prob') marginal_alpha = suite.Marginal(0, label='alpha') marginal_beta = suite.Marginal(1, label='beta') print('alpha CI', marginal_alpha.CredibleInterval(50)) print('beta CI', marginal_beta.CredibleInterval(50)) thinkplot.PrePlot(num=2) thinkplot.Cdf(Cdf(marginal_alpha)) thinkplot.Cdf(Cdf(marginal_beta)) thinkplot.Config(xlabel='Distance', ylabel='Prob') betas = [10, 20, 40] thinkplot.PrePlot(num=len(betas)) for beta in betas: cond = suite.Conditional(0, 1, beta) cond.label = 'beta = %d' % beta thinkplot.Pdf(cond) thinkplot.Config(xlabel='Distance', ylabel='Prob') thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True) thinkplot.Config(xlabel='alpha', ylabel='beta', axis=[0, 30, 0, 20]) d = dict((pair, 0) for pair in suite.Values()) percentages = [75, 50, 25] for p in percentages: interval = suite.MaxLikeInterval(p) for pair in interval: d[pair] += 1 thinkplot.Contour(d, contour=False, pcolor=True) thinkplot.Text(17, 4, '25', color='white') thinkplot.Text(17, 15, '50', color='white') thinkplot.Text(17, 30, '75') 
thinkplot.Config(xlabel='alpha', ylabel='beta', legend=False) """ Explanation: Paintball End of explanation """ # Solution goes here # Solution goes here # Solution goes here # Solution goes here """ Explanation: Exercise: From John D. Cook "Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There’s no way to know with one tester. But if you have two testers, you can get a good idea, even if you don’t know how skilled the testers are. Suppose two testers independently search for bugs. Let k1 be the number of errors the first tester finds and k2 the number of errors the second tester finds. Let c be the number of errors both testers find. The Lincoln Index estimates the total number of errors as k1 k2 / c [I changed his notation to be consistent with mine]." So if the first tester finds 20 bugs, the second finds 15, and they find 3 in common, we estimate that there are about 100 bugs. What is the Bayesian estimate of the number of errors based on this data? End of explanation """ # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here """ Explanation: Exercise: The GPS problem. According to Wikipedia  GPS included a (currently disabled) feature called Selective Availability (SA) that adds intentional, time varying errors of up to 100 meters (328 ft) to the publicly available navigation signals. This was intended to deny an enemy the use of civilian GPS receivers for precision weapon guidance. [...] 
Before it was turned off on May 2, 2000, typical SA errors were about 50 m (164 ft) horizontally and about 100 m (328 ft) vertically.[10] Because SA affects every GPS receiver in a given area almost equally, a fixed station with an accurately known position can measure the SA error values and transmit them to the local GPS receivers so they may correct their position fixes. This is called Differential GPS or DGPS. DGPS also corrects for several other important sources of GPS errors, particularly ionospheric delay, so it continues to be widely used even though SA has been turned off. The ineffectiveness of SA in the face of widely available DGPS was a common argument for turning off SA, and this was finally done by order of President Clinton in 2000. Suppose it is 1 May 2000, and you are standing in a field that is 200m square. You are holding a GPS unit that indicates that your location is 51m north and 15m west of a known reference point in the middle of the field. However, you know that each of these coordinates has been perturbed by a "feature" that adds random errors with mean 0 and standard deviation 30m. 1) After taking one measurement, what should you believe about your position? Note: Since the intentional errors are independent, you could solve this problem independently for X and Y. But we'll treat it as a two-dimensional problem, partly for practice and partly to see how we could extend the solution to handle dependent errors. You can start with the code in gps.py. 2) Suppose that after one second the GPS updates your position and reports coordinates (48, 90). What should you believe now? 
3) Suppose you take 8 more measurements and get: (11.903060613102866, 19.79168669735705) (77.10743601503178, 39.87062906535289) (80.16596823095534, -12.797927542984425) (67.38157493119053, 83.52841028148538) (89.43965206875271, 20.52141889230797) (58.794021026248245, 30.23054016065644) (2.5844401241265302, 51.012041625783766) (45.58108994142448, 3.5718287379754585) At this point, how certain are you about your location? End of explanation """
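Not a solution to the exercises above, but a sketch of the grid machinery that `Suite`/`Joint` wrap, written in bare NumPy so it stands alone: a uniform prior over a small grid of candidate positions is multiplied by a Gaussian measurement likelihood and renormalized after each reading. The grid spacing is an arbitrary toy choice, and the two readings are treated loosely as (x, y) pairs for illustration.

```python
import numpy as np

sigma = 30.0                      # SA error scale stated in the problem
xs = np.linspace(-100, 100, 41)   # candidate x positions (5 m spacing)
ys = np.linspace(-100, 100, 41)   # candidate y positions
X, Y = np.meshgrid(xs, ys)

prior = np.ones_like(X) / X.size  # uniform prior over the field

def update(prior, meas):
    # Gaussian likelihood of the reading under each candidate position
    mx, my = meas
    like = np.exp(-((X - mx)**2 + (Y - my)**2) / (2 * sigma**2))
    post = prior * like
    return post / post.sum()      # renormalize so the grid sums to 1

post = update(prior, (51, -15))   # first reading
post = update(post, (48, 90))     # second reading
print(post.sum())                 # 1.0 up to float error
```

With equal-width Gaussian likelihoods, the posterior mode after two readings sits near the midpoint of the measurements — the same qualitative behavior the grid-based `Suite` updates in this chapter produce.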
rescu/brainstorm
chapter0.ipynb
mit
#addition
print 4+3
#subtraction
print 4-3
#multiplication
print 4*3
#exponentiation
print 4**3
#division
print 4/3
"""
Explanation: Chapter 0-Introduction
Often when we think of scientists conducting an experiment, we think of laboratories filled with beakers and whirring machines. However, especially in physics, these laboratories are often replaced by computers, whether they are simple desktop machines or some of the world's largest supercomputing clusters. Scientists' reliance on computing and software skills increases daily, but often a student in the physical sciences may take only one (or sometimes zero!) programming courses. As a result, these programming skills are often self-taught and can sometimes result in poorly-built and/or incorrect software. One way to remedy this would be to insist that scientists get an extra degree in computer science or software engineering. This however is unfeasible for a variety of reasons. One alternative is to give students exposure to software development at an earlier point in their education. This gives up-and-coming scientists a chance to hone their coding skills for a longer period of time and will help to avoid many of <a href="http://www.nature.com/news/2010/101013/pdf/467775a.pdf">the pitfalls that scientific codes have recently been subject to.</a>
This textbook seeks to give an introduction to software development through a series of motivating physical examples using the Python language. Specifically, we will be using the IPython notebook which will allow us to mix code with equations and explanations. Our first chapter will provide a (very!) brief introduction to the Python language and all of the other additional tools we need to work through the following physical examples.
Why Python?
There are hundreds of different programming languages out there, but many are not suited for scientific computing. Often we also must mix and match several different languages depending on the task at hand.
In the last 10 years or so, Python has emerged as a very versatile and easy to use language. Its applications range from managing supercomputers to critical financial calculations to detailed physics simulations. Because of its widespread use, being a skilled Python programmer is useful not only in an academic setting, but in a commercial one as well. Looking at the Fig. 1, we can see that being able to program in Python is a very versatile skill. Learning Python won't make you automatically rich of course; we only include this figure to show that the skills learned here are not limited to scientific computing. <figure> <img src="figures_0/salary_v_language.png"> <figcaption>**Fig. 1**-Python is useful outside of science as well; knowing Python can earn you one of the highest salaries in software engineering. Source: <a href="http://qz.com/298635/these-programming-languages-will-earn-you-the-most-money/">Quartz</a></figcaption> </figure> Historically, physicists have preferred less-than-readable programming languages like C,C++, and Fortran. This is mostly due to the fact that, while verbose and difficult to use, these languages are extremely fast. This is extremely important in instances where we need to perform many hundreds of millions of operations in a reasonable amount of time. These languages use what is known as static type-checking. That is, when we declare a variable or a function, we must tell the computer exactly how much memory we need. This ensures that the computer uses no more memory than it needs, allowing our simulation to run much quicker, but forcing us to write many more lines of code. Fig. 2 shows the speed of Python relative to Fortran and C, as well as a few other popular languages; Python falls in between much faster languages like Fortran and much slower languages like Matlab and R. <figure> <img src="figures_0/python_benchmark.png"> <figcaption>**Fig. 
2**-Speed of Python versus a lot of other programming languages all compared to the C language. The different colors correspond to several different benchmarks. Note that a value of 1 indicates the operation is exactly comparable to C. Source: <a href="https://github.com/ViralBShah/julia-presentations/blob/master/Fifth-Elephant-2013/Fifth-Elephant-2013.pdf">Viral B. Shah, Julia development team</a></figcaption>
</figure>
Python uses what is known as dynamic type-checking. This means that we don't need to specify the type when declaring a variable. For example, we can say a=2 where in this case a has type int for integer. Then, later in our program we can say a='Hello World!', which is a string (or a collection of characters) without having to say "I'm switching a from an int to a string." This type of operation would be forbidden in languages with static type-checking. In C, for example, we would need to say int a=2; and later on if we assigned a string to a, we would get an error because a doesn't have enough room to store a string.
Hopefully by now, you've seen why we've chosen to use Python for this tutorial and why it is widely used by both scientists and software engineers alike: Python is easy to use while providing reasonable compute times and is used in a wide variety of applications. Additionally, several open source packages have been developed for Python that provide a large number of additional functions useful to scientists, mathematicians, and engineers. Two of the most prominent examples are <a href="http://www.numpy.org/">Numpy</a> and <a href="http://www.scipy.org/">SciPy</a>. Additionally, the <a href="http://matplotlib.org/">Matplotlib</a> package has been developed to create publication-quality figures in Python. We will use all three of these packages throughout this course, but will leave discussion of them to subsequent sections.
Using Python
In the following few sections, we'll provide a brief introduction to the Python language mostly by example.
Nearly any possible question you could have regarding programming has probably been answered on the popular question-and-answer website <a href="http://stackoverflow.com/">StackOverflow</a> or by just typing your question into Google. The <a href="https://www.python.org/doc/">Python documentation</a> is also a good source of Python help, but sometimes the explanations can be a bit technical and unhelpful to beginners.
Basic Operations
In the context of scientific computing, it's often easiest to think about any programming language as a really fancy calculator. One function that we will use right away (and throughout the rest of the tutorial) is the print function; unsurprisingly, this just tells the computer to print whatever follows print to the screen. The cell below shows a bunch of simple (and probably already obvious) operations.
End of explanation
"""

#addition
print "4+3 = ",4+3
#subtraction
print "4-3 = ",4-3
#multiplication
print '4*3 = ',4*3
#exponentiation
print "4^3 = ",4**3
#division
print "4/3 = ",4/3
"""
Explanation: The lines beginning with # are called comments. These are ignored by Python and are often used to provide explanations of your code. Note that we can combine strings and numbers in our print statements to give more meaningful output. We denote strings, or collections of characters, using double quotes "" or single quotes ''.
End of explanation
"""

#division with floats
print "4/3 = ",4.0/3.0
"""
Explanation: This is a nice example, but if we look at the last line, $4/3=1$, we notice that it is in fact incorrect. In fact, $4/3=1~\frac{1}{3}=1.\overline{33}$. So what's going on with division in Python? It turns out that this is a very common mistake that programmers make and can lead to some very serious and hard-to-find problems. Recall that we said Python is dynamically typed, meaning it automatically infers the data type when you declare a variable. Note that both 4 and 3 have type int.
Thus, Python expects the result of an operation between these two numbers to also be of type int. However, $4/3$ has a decimal component and thus must be represented as type float, meaning that Python must request space to store the decimal part of our answer as well. We can thus solve our problem by writing 4 and 3 with type float; in general, it is good practice to always represent your numbers with type float if you think there is a chance you will suffer from roundoff error (or truncation error), the errors that result from representing numbers with an incorrect type.
End of explanation
"""
#Give a value of 4.0 to a and 3.0 to b
a = 4.0
b = 3.0

#output the values of these variables to the screen
print "The initial value of a is ",a," and the initial value of b is ",b
"""
Explanation: Variables
Performing simple calculations is nice, but what if we want to keep track of a particular value after we've performed several different operations on it? We do this by defining a variable, a concept we already referred to while discussing dynamic versus static type-checking. Rather than providing any more wordy explanations, we'll show a few easy examples of how variables are used.
End of explanation
"""
#Update a and b
a = a - 1.0
b = b + 5.0

print "The new value of a is ",a," and the new value of b is ",b
"""
Explanation: Now let's perform some operations on a and b and see what happens.
End of explanation
"""
#define the two words
word1 = "Hello "
word2 = "World!"

#print the two words to the screen
print "The first word is ",word1
print "The second word is ",word2

#add or concatenate the two strings
expression = word1+word2

#print the new string to the screen
print "The whole sentence is ",expression
"""
Explanation: Variables can also contain strings and we can even perform operations on these strings, within reason of course.
End of explanation
"""
word1-word2
word1*word2
word1/word2
"""
Explanation: We shouldn't get too carried away though.
For example, it's nonsensical to multiply, divide, or subtract two words. If we try to do this, Python will give us an error.
End of explanation
"""
student1 = "Jake"
student2 = "Jenny"
student3 = "Lucas"

print "The names of three of the students are ",student1,', ',student2,', and ',student3
"""
Explanation: Containers
Python provides several different tools for organizing data. Creating collections of numbers or words is helpful when organizing our data and may often be necessary for handling large amounts of data.
Lists: The most basic (and often most useful) container that Python provides is the list. A list gives us a way to store multiple values, whether they're words or numbers, with a single variable. For example, what if we wanted to keep track of all the names of the students in a class? We could create a series of variables, each one containing the name of a student.
End of explanation
"""
#Use a list to define a classroom rather than individual variables
class1 = ["Marissa","Ben","Seth","Rachel","Ryan"]

#Print the list
print "The students in the class are ",class1
"""
Explanation: However, what if we have 30 students? Defining a variable for each one seems a little unwieldy and offers no information about how these variables are connected (i.e. they are in the same class). It's easier and better practice to define the relationship between these students using a list.
End of explanation
"""
print "The first student in our class is ",class1[0]
print "The second student in our class is ",class1[1]
print "The fifth student in our class is ",class1[4]
"""
Explanation: But what if we wanted to access the individual parts of the list? We will use what is called the index. One important thing to note about lists (and counting in general) in Python is that numbers start at 0. Thus, for the above list, we can use 0-4 to access the parts of our list.
End of explanation """ print class1[5] """ Explanation: But what if we try to access an element beyond the last element in our list? End of explanation """ print "The last student on our class list is ",class1[-1] print "The second-to-last student on our class list is ",class1[-2] """ Explanation: We can also use negative numbers to access the elements of our list, unintuitive as this may seem. In this case, -1 corresponds to the last element of our list, -2 the second-to-last element and so on. This is especially useful when we have very long lists or we have lists where the length is unknown and we want to access elements starting from the back. End of explanation """ #add a student to the class class1.append('Mischa') #print the class with the new student print "The students in the class are ",class1 """ Explanation: What if another student joins the class? We would like to be able to add elements to our list as well. This can be done through the append command, shown below. End of explanation """ #remove Ben from the class list; note Ben corresponds to entry 1 class1.pop(1) #print the class minus Ben print "The new class roster is ",class1 """ Explanation: Alternatively, we can remove elements of a list using the pop command in a similar way. End of explanation """ #make a list for my_car my_car_list = ['Mercury','Sable','Dark Green',1998] #print the information print "The details on my car are ",my_car_list """ Explanation: There are many more ways of manipulating lists and we won't cover all of them here. Consult the Python documentation for (many more) additional details. Dictionaries: Another common container used in Python is the dictionary, denoted using {}. The main difference between dictionaries and lists is that dictionaries use a key-value pair rather than a numerical index to locate specific entries. But why would we want to use a dictionary instead of a list? 
Say we have a car and we want to specify several different properties of the car: its make, model, color, year. We could of course put this information in a list. End of explanation """ #make a dictionary for my car my_car_dict = {'make':'Mercury','model':'Sable','color':'Dark Green','year':1998} #print the details of my car print "The make of my car is ",my_car_dict['make'] print "The model of my car is ",my_car_dict['model'] print "The color of my car is ",my_car_dict['color'] print "My car was made in ",my_car_dict['year'] """ Explanation: However, this doesn't give us any information about what each of the individual entries mean. To preserve the context of the information in our list, we have to know the correspondence of the index (0-3) to the property it specifies. However, by using a dictionary, the key we use to access the value tells us what the value means. End of explanation """ my_average=91 """ Explanation: The kind of container we choose to use will depend on the problem at hand. Throughout our tutorial, we will show the advantages of using both types. Unsurprisingly, there are several more types of containers available in Python. We have only provided the two most used types here. Boolean Logic and Conditional Statements When writing a piece of code, we often want to tell our program to make a certain decision based on some input. For example, consider the conversion of numerical grade percentages to their corresponding letter grades. Suppose we want to assign grades based on the table below. Letter Grade | Numerical Grade -------------|---------------- A |$\ge90$ B |$\ge80,\lt90$ C |$\ge70,\lt80$ D |$\ge60,\lt70$ F |everything else How do we do this? Quite intuitively, most programming languages use what are called "if-else" statements. The general idea is that if some condition is met, we execute a certain piece of code. This is the "if" part. 
We can also provide an "else" block that will be executed if the condition is not met, though the "else" statement is not required. Additionally, "else-if" statements are also used to test multiple conditions (such as the different grade brackets). This is all best explained through an example. Say your class average is a 91.
End of explanation
"""
my_average=91
"""
Explanation: And now I want to assign a letter grade to this average.
End of explanation
"""
if my_average >= 90:
    print "Your letter grade is an A!"
else:
    print "You did not get an A."
"""
Explanation: This is nice, but if we got anything below a 90, this snippet of code doesn't give us much information. Say my class average is an 89.
End of explanation
"""
my_average = 89

if my_average >= 90:
    print "Your letter grade is an A!"
else:
    print "You did not get an A."
"""
Explanation: Well now I know I didn't get an A, but for all I know I got an F when in reality I got a B. To solve this problem, let's test the B condition. Looking at the table, we can see that to get a B, our average must satisfy two conditions: it must be greater than or equal to an 80 and less than a 90. To do this, we use what is called (unsurprisingly) an and statement, shown in the example below.
End of explanation
"""
if my_average >= 80 and my_average < 90:
    print "Your letter grade is a B!"
"""
Explanation: Now, let's combine our A and B conditions (along with the C, D, and F conditions) using the "if", "else-if" (denoted in Python using elif) and "else" statements. Now, we'll change our average to a 72.
End of explanation """ my_letter_grade="D" if my_letter_grade == "A": print "Your grade is greater than a 90" elif my_letter_grade == "B": print "Your grade is between 80 and 90" elif my_letter_grade == "C": print "Your grade is between a 70 and an 80" elif my_letter_grade == "D": print "Your grade is between a 60 and a 70" else: print "Your grade is below a 60" """ Explanation: We can also evaluate more strict conditions, like if two things are exactly equal. Say we reverse the above situation: we are given a letter grade and we want to determine what numerical bracket we fall into. Equality is determined through the == sign. End of explanation """ if my_letter_grade == "A" or my_letter_grade == "B": print "You're doing great!" elif my_letter_grade == "C" or my_letter_grade == "D": print "You need to do better..." else: print "You are failing." my_letter_grade="B" if my_letter_grade == "A" or my_letter_grade == "B": print "You're doing great!" elif my_letter_grade == "C" or my_letter_grade == "D": print "You need to do better..." else: print "You are failing." """ Explanation: These symbols that we've been using to determine relationships between objects are called relational operators: they tell us something about on object relative to another object. Pay careful attention not to mix up == and =. A single equal sign, the assignment operator, assigns a value to a variable. Using it with "if-else" statements will lead to an error. Finally, say we want to test whether one or the other condition is true. If we are getting an A or a B in a class, we are doing pretty well, but if we're getting a C or below, we need to improve our grade. To do this, we use the or keyword. End of explanation """ my_letter_grade is "B" if my_letter_grade is "A": print "You got an A!" else: print "You did not get an A." 
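As a compact recap of the relational operators seen so far, here is a quick sketch; each expression evaluates directly to True or False:

```python
my_average = 72

print(my_average == 72)  # True:  equal to
print(my_average != 90)  # True:  not equal to
print(my_average >= 70)  # True:  greater than or equal to
print(my_average < 60)   # False: less than
```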
""" Explanation: Collectively, what we've been using are known as conditional statements: based on whether a condition (the thing that follows the if (or else or elif)) is true or not, we evaluate a block of code. Note that this block of code to be evaluated is indented. In Python, this indentation is required and tells Python that this piece of code is be evaluated only if the condition is true. The conditions of True and False are what these conditional statements are built on. Variables that evaluate to True or False are known as boolean variables. Correspondingly, we can use boolean logic much in the same way we used our relational operators through the is statement. End of explanation """ i_got_an_a=(my_letter_grade is "A") print "The statement 'I got an A' is ",i_got_an_a """ Explanation: Here it should be obvious that is is not an assignment operator. Rather we are testing whether something is true. Similarly, we can assign the result of an is statement to a variable. End of explanation """ if i_got_an_a: print "Your letter grade is an A!" else: print "You did not get an A." if i_got_an_a is True: print "Your letter grade is an A!" else: print "You did not get an A." """ Explanation: Note also that conditional statements can also be used to directly evaluate whether something is true or false. Really this is what has been going on all along, we've just been hiding it in a way. Additionally, the default for a conditional statement is to test whether a statement is True. Note the equivalence of the following two statements. End of explanation """ class_grades = [99,78,44,82,56,61,94,78,76,100,85] """ Explanation: This may just seem like we are saying the same thing over and over again and the use of relational operators and boolean logic may not seem immediately obvious. However, these decision making tools are some of the most useful when writing code, scientific or otherwise, and their usefulness will become more apparent the more examples we work through. 
Iteration for loops: Often (nearly always) when we write a program, we want to perform a task (or a similar set of tasks) over and over again. Let's return to our example of class averages and what we learned about lists earlier. Let's say we have a list of numerical grades and we want to know their corresponding letter grades. End of explanation """ for i in range(len(class_grades)): print "The numerical grade is ",class_grades[i] """ Explanation: We could of course look at each list entry individually, writing a block of code to evaluate the 0th entry, then the 1st entry, then the 2nd entry and so on. However, this would mean writing as many if statements as there are entries in our class grades list. Instead, we will use what is called a loop, in this case a for loop, to iterate over the list, applying the same block of code to each successive entry. End of explanation """ for i in class_grades: print "The numerical grade is ",i """ Explanation: Let's unpack this code snippet. The len() command gets the length of the list, in this case 11. The range(n) command creates a list with entries 0 through n-1, separated by 1; thus, this is a list of all the indices of our class_grades list. The for i in ... line tells Python to execute the indented block of code 11 times, incrementing i by 1 each time. Similarly, and perhaps more succinctly, we can skip the range() and len() commands and just iterate over the class_grades list itself. End of explanation """ c_grades = [] #declare empty list to save C grades found_c_grades = 0 #set a counter for the number of C grades found counter = 0 #set a counter to step through the class grades list while found_c_grades < 2: if class_grades[counter] >= 70 and class_grades[counter] < 80: c_grades.append(class_grades[counter]) found_c_grades = found_c_grades + 1 counter = counter +1 print "The first two C grades are ",c_grades """ Explanation: Here, i doesn't represent the index of the list, but rather the list entry itself. 
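If you ever need the index and the entry at the same time, Python's built-in enumerate provides both at once — a sketch (the short grade list here is illustrative only):

```python
grades_demo = [99, 78, 44]  # a short illustrative list

for i, grade in enumerate(grades_demo):
    print("Entry %d holds the grade %d" % (i, grade))
```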
When iterating over a single list, this is often the best and most concise way to construct your for loop. However, when iterating over two lists where there is a correspondence between the entries, it is often useful to iterate over the list of indices.
A for loop is possibly the most useful tool in any programming language, especially in scientific computing. We will make frequent use of both for and while loops in this tutorial, showing their usefulness in a variety of contexts.
while loops: Whether you want to use a while loop or a for loop depends on the task at hand. If I know that I need to perform n number of tasks, I would use a for loop. But what if I don't know how many times I need to perform a task? What if instead I want to perform some set of tasks until a condition is met? Like the if, else, and elif statements that we discussed previously, we give the while loop a condition (or a series of conditions). As long as this condition (or these conditions) is met, the statement inside of the while loop will continue to be executed. Another way of saying this is: as long as the condition given to the while loop evaluates to True, the block below the while statement will continue to be evaluated. As soon as the statement evaluates to False, the evaluation of this block stops. For example, say we want to find the first two Cs in the list of class grades and only the first two.
End of explanation
"""
c_grades = [] #declare empty list to save C grades
found_c_grades = 0 #set a counter for the number of C grades found
counter = 0 #set a counter to step through the class grades list

while found_c_grades < 2:
    if class_grades[counter] >= 70 and class_grades[counter] < 80:
        c_grades.append(class_grades[counter])
        found_c_grades = found_c_grades + 1
    counter = counter + 1

print "The first two C grades are ",c_grades
"""
Explanation: As long as the number of C grades we've found is less than 2, we will continue searching the list. Once we've found 2, we stop searching. Note that if we didn't increase our counter and the found_c_grades variable, our while loop would continue to execute forever. When using a while loop, special attention should be given to avoiding this problem. Notice that we could've used a for loop to accomplish this task. However, what if we had a list of 1,000 grades or 100,000 grades?
We can save quite a bit of time by stopping the evaluation of this block of code once we've finished the task: finding the first two C grades. This would also be useful if we were reading from a file of unknown length.
Functions
When writing a program, one of the main things we want to avoid is rewriting code. This is a good way to waste space, spend more time writing our program, decrease the readability of our code, and potentially slow the execution time of our program. One easy way to avoid all of these pitfalls is through the use of functions. The concept of functions is simple, but powerful. Consider the mathematical expression $f(x)=x^2+1$. We put in a value $x$ or a range of values, say $-10<x<10$, and get out that value squared plus one. How would we write this in terms of a Python function?
End of explanation
"""
def my_first_function(x):
    return x**2 + 1
"""
Explanation: Every function is denoted using the def keyword (for definition). Then, we give the function a name (my_first_function in this example) followed by the inputs (x in this case) to the function. The return statement then tells the function what result should be output. Below is an example of how we would use the function.
End of explanation """ def numerical_grade_to_letter_grade(num_grade): if num_grade >= 90: let_grade = 'A' elif num_grade >= 80 and num_grade < 90: let_grade = 'B' elif num_grade >= 70 and num_grade < 80: let_grade = 'C' elif num_grade >= 60 and num_grade < 70: let_grade = 'D' else: let_grade = 'F' return let_grade #map numerical grades to letter grades class_grades_letters = [] for grade in class_grades: class_grades_letters.append(numerical_grade_to_letter_grade(grade)) #print correspondence between numerical and letter grades for i in range(len(class_grades)): print "The numerical grade is ",class_grades[i]," and the letter grade is ",class_grades_letters[i] """ Explanation: Notice that we've saved ourselves quite a few lines of code by not having to write $f(x)=x^2+1$ repeatedly. Instead, we can just reference the function definition above. Additionally, if we wanted to change something about our expression $f(x)$, we would only need to make the change in one place, the function definition, rather than having to make the same change in multiple places in our code. Writing our code using functions helps us to avoid simple mistakes that so often occur when writing a program. We can do much more than just evaluate simple mathematical expressions with functions. Let's look back to our example of mapping numerical grades to letter grades. This time, we'll iterate through our list of numerical grades, passing each one to a function that finds the corresponding letter grade, and then adding that letter grade to a new list. End of explanation """
gully/adrasteia
notebooks/adrasteia_05-03_DR2_variability_catalog_rotational_modulation.ipynb
mit
# %load /Users/obsidian/Desktop/defaults.py import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' ! du -hs ../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv df0 = pd.read_csv('../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv/VariRotationModulation_0.csv.gz') df0.shape """ Explanation: Gaia DR2 variability catalogs Part II: Rotation modulation In this notebook we explore what's in the VariRotationModulation catalog from Gaia DR2. We eventually cross-match it with K2 and see what it looks like! gully May 2, 2018 End of explanation """ df0.columns """ Explanation: The catalog has many columns. What are they? End of explanation """ df0.num_segments.describe() df0.loc[1] """ Explanation: Gaia Documentation section 14.3.6 explains that some of the columns are populated with arrays! So this catalog can be thought of as a table-of-tables. The typical length of the tables are small, usually just 3-5 entries. End of explanation """ import glob fns = glob.glob('../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv/VariRotationModulation_*.csv.gz') n_files = len(fns) """ Explanation: I think the segments further consist of lightcurves, for which merely the summary statistics are listed here, but I'm not sure. Since all the files are still only 150 MB, we can just read in all the files and concatenate them. End of explanation """ from astropy.utils.console import ProgressBar df_rotmod = pd.DataFrame() with ProgressBar(n_files, ipython_widget=True) as bar: for i, fn in enumerate(fns): df_i = pd.read_csv(fn) df_rotmod = df_rotmod.append(df_i, ignore_index=True) bar.update() df_rotmod.shape """ Explanation: This step only takes a few seconds. Let's use a progress bar to keep track. End of explanation """ df_rotmod.num_segments.hist(bins=11) plt.yscale('log') plt.xlabel('$N_{\mathrm{segments}}$') plt.ylabel('occurence'); """ Explanation: We have 147,535 rotationally modulated variable stars. 
What is the typical number of segments across the entire catalog?
End of explanation
"""
df_rotmod.num_segments.hist(bins=11)
plt.yscale('log')
plt.xlabel('$N_{\mathrm{segments}}$')
plt.ylabel('occurrence');
"""
Explanation: What are these segments? Are they the arbitrary Gaia segments, or are they something else?
Let's ask our first question: What is the distribution of periods?
best_rotation_period : Best rotation period (double, Time[day])
this field is an estimate of the stellar rotation period and is obtained by averaging the periods obtained in the different segments
End of explanation
"""
df_rotmod.best_rotation_period.hist(bins=30)
plt.yscale('log')
plt.xlabel('$P_{\mathrm{rot}}$ [days]')
plt.ylabel('$N$')
"""
Explanation: Next up: What is the distribution of amplitudes? We will use the segments_activity_index:
segments_activity_index : Activity Index in segment (double, Magnitude[mag])
this array stores the activity indexes measured in the different segments. In a given segment the amplitude of variability A is taken as an index of the magnetic activity level. The amplitude of variability is measured by means of the equation:
$$A=mag_{95}-mag_{5}$$
where $mag_{95}$ and $mag_{5}$ are the 95-th and the 5-th percentiles of the G-band magnitude values.
End of explanation
"""
df_rotmod.max_activity_index.hist(bins=30)
plt.yscale('log')
plt.xlabel('$95^{th} - 5^{th}$ variability percentile [mag]')
plt.ylabel('$N$');
"""
Explanation: Wow, $>0.4$ magnitudes is a lot! Most have much lower amplitudes. The problem with max activity index is that it may be sensitive to flares.
Instead, let's use the $A$ and $B$ coefficients of the $\sin{}$ and $\cos{}$ functions: segments_cos_term : Coefficient of cosine term of linear fit in segment (double, Magnitude[mag]) if a significative period T0 is detected in a time-series segment, then the points of the time-series segment are fitted with the function $$mag(t) = mag_0 + A\cos(2\pi T_0 t) + B \sin(2\pi T_0 t)$$ Let's call the total amplitude $\alpha$, then we can apply: $\alpha = \sqrt{A^2+B^2}$ End of explanation """ np.array(eval(val)) NaN = np.NaN #Needed for all the NaN values in the strings. clean_strings = lambda str_in: np.array(eval(str_in)) """ Explanation: Gasp! The arrays are actually stored as strings! We need to first convert them to numpy arrays. End of explanation """ if type(df_rotmod['segments_cos_term'][0]) == str: df_rotmod['segments_cos_term'] = df_rotmod['segments_cos_term'].apply(clean_strings) df_rotmod['segments_sin_term'] = df_rotmod['segments_sin_term'].apply(clean_strings) else: print('Skipping rewrite.') amplitude_vector = (df_rotmod.segments_sin_term**2 + df_rotmod.segments_cos_term**2)**0.5 df_rotmod['mean_amplitude'] = amplitude_vector.apply(np.nanmean) """ Explanation: Only run this once: End of explanation """ amp_conv_factor = 1.97537 x_dashed = np.linspace(0,1, 10) y_dashed = amp_conv_factor * x_dashed plt.figure(figsize=(5,5)) plt.plot(df_rotmod.mean_amplitude, df_rotmod.max_activity_index, '.', alpha=0.05) plt.plot(x_dashed, y_dashed, 'k--') plt.xlim(0,0.5) plt.ylim(0,1); plt.xlabel(r'Mean cyclic amplitude, $\alpha$ [mag]') plt.ylabel(r'$95^{th} - 5^{th}$ variability percentile[mag]'); """ Explanation: Let's compare the max_activity_index with the newly determined mean amplitude. 
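Before comparing, it helps to know what ratio to expect: for a pure sinusoid, the 95th-minus-5th percentile spread is a fixed multiple of the amplitude, $2\sin(0.45\pi) \approx 1.9754$. A quick numerical check of that factor (a sketch using only the standard library, with a simple nearest-rank percentile):

```python
import math

# Densely sample one period of a unit-amplitude sinusoid
n = 200000
values = sorted(math.sin(2 * math.pi * k / n) for k in range(n))

def percentile(sorted_vals, p):
    # nearest-rank percentile; accurate enough on a dense grid
    idx = int(round(p / 100.0 * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

spread = percentile(values, 95) - percentile(values, 5)
print(spread)  # ~1.9754 for a unit-amplitude sinusoid
```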
The $95^{th}$ to $5^{th}$ percentile should be almost-but-not-quite twice the amplitude: End of explanation """ df_rotmod['amplitude_linear'] = 10**(-df_rotmod.mean_amplitude/2.5) df_rotmod['amplitude_linear'].hist(bins=500) plt.xlim(0.9, 1.01) plt.figure(figsize=(5,5)) plt.plot(df_rotmod.best_rotation_period, df_rotmod.amplitude_linear, '.', alpha=0.01) plt.xlim(0, 60) plt.ylim(0.9, 1.04) plt.xlabel('$P_{\mathrm{rot}}$ [days]') plt.text(1, 0.92, ' Rapidly rotating\n spot dominated') plt.text(36, 1.02, ' Slowly rotating\n facular dominated') plt.ylabel('Flux decrement $(f_{\mathrm{spot, min}})$ '); """ Explanation: The lines track decently well. There's some scatter! Probably in part due to non-sinusoidal behavior. Let's convert the mean magnitude amplitude to an unspotted-to-spotted flux ratio: End of explanation """ from astropy.table import Table k2_fun = Table.read('../../K2-metadata/metadata/k2_dr2_1arcsec.fits', format='fits') len(k2_fun), len(k2_fun.columns) """ Explanation: Promising! Let's read in the Kepler data and cross-match! This cross-match with Gaia and K2 data comes from Meg Bedell. End of explanation """ col_subset = ['source_id', 'epic_number', 'tm_name', 'k2_campaign_str'] k2_df = k2_fun[col_subset].to_pandas() """ Explanation: We only want a few of the 95 columns, so let's select a subset. End of explanation """ def clean_to_pandas(df): '''Cleans a dataframe converted with the to_pandas method''' for col in df.columns: if type(k2_df[col][0]) == bytes: df[col] = df[col].str.decode('utf-8') return df k2_df = clean_to_pandas(k2_df) df_rotmod.columns keep_cols = ['source_id', 'num_segments', 'best_rotation_period', 'amplitude_linear'] """ Explanation: The to_pandas() method returns byte strings. Arg! We'll have to clean it. Here is a reuseable piece of code: End of explanation """ k2_df.head() df_rotmod[keep_cols].head() """ Explanation: We can merge (e.g. SQL join) these two dataframes on the source_id key. 
End of explanation """ df_comb = pd.merge(k2_df, df_rotmod[keep_cols], how='inner', on='source_id') df_comb.head() df_comb.shape """ Explanation: We'll only keep columns that are in both catalogs. End of explanation """ multiplicity_count = df_comb.groupby('epic_number').\ source_id.count().to_frame().\ rename(columns={'source_id':'multiplicity'}) df = pd.merge(df_comb, multiplicity_count, left_on='epic_number', right_index=True) df.head(20) """ Explanation: Only 524 sources appear in both catalogs! Boo! Well, better than nothing! It's actually even fewer K2 targets, since some targets are single in K2 but have two or more matches in Gaia. These could be background stars or bona-fide binaries. Let's flag them. End of explanation """ df_single = df[df.multiplicity == 1] df_single.shape """ Explanation: Let's cull the list and just use the "single" stars, which is really the sources for which Gaia did not identify more than one target within 1 arcsecond. End of explanation """ plt.figure(figsize=(5,5)) plt.plot(df_single.best_rotation_period, df_single.amplitude_linear, '.', alpha=0.1) plt.xlim(0, 60) plt.ylim(0.9, 1.04) plt.xlabel('$P_{\mathrm{rot}}$ [days]') plt.text(1, 0.92, ' Rapidly rotating\n spot dominated') plt.text(36, 1.02, ' Slowly rotating\n facular dominated') plt.ylabel('Flux decrement $(f_{\mathrm{spot, min}})$ ') plt.title('K2 x Gaia x rotational modulation'); """ Explanation: A mere 224 sources! Boo hoo! End of explanation """ df_single.sort_values('amplitude_linear', ascending=True).head(25).style.format({'source_id':"{:.0f}", 'epic_number':"{:.0f}"}) df_single.to_csv('../data/analysis/k2_gaia_rotmod_single.csv', index=False) """ Explanation: The points look drawn from their parent population. End of explanation """
phoebe-project/phoebe2-docs
development/tutorials/constraints_hierarchies.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.4,<2.5" """ Explanation: Advanced: Constraints and Changing Hierarchies Setup Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() """ Explanation: As always, let's do imports and initialize a logger and a new Bundle. End of explanation """ b.set_value('q', 0.8) """ Explanation: Changing Hierarchies Some of the built-in constraints depend on the system hierarchy, and will automatically adjust to reflect changes to the hierarchy. For example, the masses depend on the period and semi-major axis of the parent orbit but also depend on the mass-ratio (q) which is defined as the primary mass over secondary mass. For this reason, changing the roles of the primary and secondary components should be reflected in the masses (so long as q remains fixed). In order to show this example, let's set the mass-ratio to be non-unity. End of explanation """ print("M1: {}, M2: {}".format(b.get_value(qualifier='mass', component='primary', context='component'), b.get_value(qualifier='mass', component='secondary', context='component'))) """ Explanation: Here the star with component tag 'primary' is actually the primary component in the hierarchy, so should have the LARGER mass (for a q < 1.0). 
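As background for how the mass constraint works, here is a sketch with assumed values (a 1-day period and a 5.3 $R_\odot$ semi-major axis, which I believe match PHOEBE's default binary, plus the q=0.8 set above — adjust if your bundle differs): Kepler's third law fixes the total mass from the orbit, $M_1+M_2 = 4\pi^2 a^3 / (G P^2)$, and $q = M_2/M_1$ then splits that total between the components.

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
R_SUN = 6.957e8    # solar radius [m]
M_SUN = 1.989e30   # solar mass [kg]

P = 86400.0        # assumed orbital period: 1 day, in seconds
a = 5.3 * R_SUN    # assumed semi-major axis, in meters
q = 0.8            # mass ratio M2/M1, as set above

m_total = 4 * math.pi**2 * a**3 / (G * P**2) / M_SUN  # total mass in solar masses
m1 = m_total / (1 + q)
m2 = m_total * q / (1 + q)
print(m_total, m1, m2)  # roughly 2.0, 1.1, 0.9 solar masses
```

If these assumed defaults match your bundle, the printed component masses should line up with the get_value calls below.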
End of explanation """ b['mass@primary'] b.set_hierarchy('orbit:binary(star:secondary, star:primary)') b['mass@primary@star@component'] print(b.get_value('q')) print("M1: {}, M2: {}".format(b.get_value(qualifier='mass', component='primary', context='component'), b.get_value(qualifier='mass', component='secondary', context='component'))) """ Explanation: Now let's flip the hierarchy so that the star with the 'primary' component tag is actually the secondary component in the system (and so takes the role of numerator in q = M2/M1). For more information on the syntax for setting hierarchies, see the Building a System Tutorial. End of explanation """ print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'), b.get_value(qualifier='mass', component='secondary', context='component'), b.get_value(qualifier='period', component='binary', context='component'), b.get_value(qualifier='q', component='binary', context='component'))) b.flip_constraint('mass@secondary@constraint', 'period') print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'), b.get_value(qualifier='mass', component='secondary', context='component'), b.get_value(qualifier='period', component='binary', context='component'), b.get_value(qualifier='q', component='binary', context='component'))) b.set_value(qualifier='mass', component='secondary', context='component', value=1.0) print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'), b.get_value(qualifier='mass', component='secondary', context='component'), b.get_value(qualifier='period', component='binary', context='component'), b.get_value(qualifier='q', component='binary', context='component'))) """ Explanation: Even though under-the-hood the constraints are being rebuilt from scratch, they will remember if you have flipped them to solve for some other parameter. 
To show this, let's flip the constraint for the secondary mass to solve for 'period' and then change the hierarchy back to its original value. End of explanation """
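The mass constraint PHOEBE maintains here is just Kepler's third law combined with the definition q = M2/M1. A minimal standalone sketch of that arithmetic (plain NumPy, independent of PHOEBE's actual constraint machinery):

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def masses_from_orbit(period, sma, q):
    """Kepler's third law: M1 + M2 = 4 pi^2 a^3 / (G P^2), with q = M2 / M1."""
    m_total = 4 * np.pi**2 * sma**3 / (G * period**2)
    m1 = m_total / (1 + q)
    return m1, q * m1

# Sun-like test orbit: a = 1 AU, P = 1 yr, q = 0.8 (SI units)
m1, m2 = masses_from_orbit(period=3.156e7, sma=1.496e11, q=0.8)
print(m1, m2)  # M1 > M2 for q < 1; total near one solar mass
```

Flipping which star carries the 'primary' tag while q stays fixed swaps which mass sits in the numerator, which is exactly why the printed masses exchange roles in the hierarchy demo above.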
thundergolfer/Insults
insults/exploration/model/non_personal_insults.ipynb
gpl-3.0
%matplotlib inline # Ugly Python PATH hack to import insults from notebook import os import sys nb_dir = os.path.split(os.getcwd())[0] if nb_dir not in sys.path: sys.path.append(nb_dir) from insults.core import Insults """ Explanation: Non-personal Insults This model was designed and trained to detect personal insults, not insults directed at groups and third parties generally. Insulting a celebrity, for example, should not be flagged by the model. Though it is interesting to detect comments that insult something or somebody generally, that is not the domain of this project. End of explanation """ samples = [ "Owen Wilson is the ugliest person I've ever seen, period.", "Of the things I don't like, I like bankers the least.", "You shouldn't listen to Sam Harris; He's an idiot.", "I don't like women.", "Alex is worse than James, though both of them are fuckheads.", "I just want to tell those guys to go die in a hole.", "You're great, but idealists are awful.", ] insult = Insults() results = [] for example in samples: results.append(insult.rate_comment(example)) """ Explanation: Setup End of explanation """ import seaborn seaborn.distplot(results, hist_kws={"range": [0,1]}) """ Explanation: Exploration End of explanation """
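Downstream of rate_comment, deciding which comments to flag is just thresholding the scores. A hedged sketch of that step (the 0.5 cutoff and the demo scores are illustrative assumptions, not calibrated values or real model output from this project):

```python
def flag_personal_insults(comments, scores, threshold=0.5):
    """Pair comments with model scores and keep those above a cutoff.

    `threshold` is a hypothetical value chosen for illustration; a real
    deployment would calibrate it on labelled data.
    """
    return [c for c, s in zip(comments, scores) if s > threshold]

demo_comments = ["you are an idiot", "bankers are the worst", "nice weather"]
demo_scores = [0.92, 0.31, 0.04]  # made-up scores, not model output
print(flag_personal_insults(demo_comments, demo_scores))
```

Lowering the threshold trades precision for recall: at 0.2 the general (non-personal) insult above would also be flagged, which is the failure mode this notebook is probing.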
hfoffani/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (50, 53) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function st = source_text # eos only on target. tt = target_text.replace("\n", ' <EOS>\n') + ' <EOS>' # one eos at the end stl = [ [ source_vocab_to_int[w] for w in l.split() ] for l in st.split('\n') ] ttl = [ [ target_vocab_to_int[w] for w in l.split() ] for l in tt.split('\n') ] return stl, ttl """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input = tf.placeholder(tf.int32, [None, None], name='input') target = tf.placeholder(tf.int32, [None, None], name='target') learn_rate = tf.placeholder(tf.float32, name='learn_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return input, target, learn_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. 
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) End of explanation """ def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function goids = [ target_vocab_to_int["<GO>"] ] * batch_size tfgoids = tf.reshape(goids, [-1, 1]) no_ends = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) with_go = tf.concat([tfgoids, no_ends], 1) return with_go """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) """ Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drops = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) ecell = tf.contrib.rnn.MultiRNNCell([drops] * num_layers) _, enc_state = tf.nn.dynamic_rnn(ecell, rnn_inputs, dtype=tf.float32) return enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn(). 
End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) outputs = tf.nn.dropout(train_pred, keep_prob=keep_prob) train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. 
End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
End of explanation """ def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drops = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) dcell = tf.contrib.rnn.MultiRNNCell([drops] * num_layers) with tf.variable_scope('decoding') as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) train_logits = decoding_layer_train(encoder_state, dcell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope('decoding', reuse=True) as decoding_scope: infer_logits = decoding_layer_infer(encoder_state, dcell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output function using lambda to transform its input, logits, to class logits. 
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size, initializer = tf.random_uniform_initializer(-1,1)) encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) new_target = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, new_target) train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, 
rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation """ # Number of Epochs epochs = 8 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.005 # Dropout Keep Probability keep_probability = 0.5 # Show stats for every n number of batches show_every_n_batches = 100 """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation """ import time from datetime import timedelta def timer(start): elapsed = time.time() - start hours, rem = divmod(elapsed, 3600) minutes, seconds = divmod(rem, 60) return "{:0>2}:{:0>2}".format(int(minutes),int(seconds)) """ DON'T MODIFY ANYTHING IN THIS CELL """ import time start_train = time.time() def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() if (epoch_i * (len(source_int_text) // batch_size) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, ' 'Validation Accuracy: {:>6.3f}, Loss: {:>6.3f} elapsed={}'.format( epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, 
valid_acc, loss, timer(start_train))) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function unk = vocab_to_int['<UNK>'] getid = lambda w: vocab_to_int[w] if w in vocab_to_int else unk word_ids = [ getid(w) for w in sentence.lower().split() ] return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the <UNK> word id. End of explanation """ translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
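The two preprocessing helpers in this notebook boil down to dictionary lookups plus the special tokens: targets get an appended &lt;EOS&gt;, and out-of-vocabulary words map to &lt;UNK&gt;. A self-contained toy version (tiny hand-made vocabulary, not the project's real one) that mirrors those contracts:

```python
def to_ids(sentence, vocab_to_int, append_eos=False):
    """Map a sentence to word ids, substituting <UNK> for OOV words."""
    ids = [vocab_to_int.get(w, vocab_to_int['<UNK>']) for w in sentence.lower().split()]
    if append_eos:
        ids.append(vocab_to_int['<EOS>'])
    return ids

vocab = {'<UNK>': 0, '<EOS>': 1, 'new': 2, 'jersey': 3, 'is': 4, 'wet': 5}

source_ids = to_ids('New Jersey is wet', vocab)         # no <EOS> on the source side
target_ids = to_ids('New Jersey is wet', vocab, True)   # <EOS> terminates each target
unk_ids = to_ids('new jersey is soggy', vocab)          # OOV word -> <UNK>
print(source_ids, target_ids, unk_ids)
```

This is the same shape of behaviour that text_to_ids and sentence_to_seq implement over the full corpus vocabularies.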
awellis/state-space-models
notebooks/state-space-model-v2.ipynb
apache-2.0
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns # sns.set(rc={"figure.figsize": (16, 12)}) sns.set_style('white') sns.set_style('ticks') sns.set_context("paper") %config InlineBackend.figure_format = 'retina' # %qtconsole --colors=linux import numpy as np import pymc3 as pm import theano.tensor as tt import scipy from scipy import stats # Convenience functions to time execution (and display start time) class timeit(): from datetime import datetime def __enter__(self): self.tic = self.datetime.now() print('Start: {}'.format(self.tic) ) def __exit__(self, *args, **kwargs): print('Runtime: {}'.format(self.datetime.now() - self.tic)) # from pymc3.distributions.timeseries import EulerMaruyama # %qtconsole T = 2 dt = 0.1 nsteps = T/dt amplitude = 20 phi = 0.0 f = 0.5 time = np.arange(0, T+dt, dt) def true_control(t, amplitude, phi=0): return(amplitude * np.sin(2 * np.pi * f * t + phi)) alpha = true_control(time, amplitude=20, phi=0.0) nsteps = int(T/dt) x = np.zeros((nsteps+1, 2)) y = np.zeros((nsteps+1, 1)) mvnorm = stats.multivariate_normal covar = np.diag([0.2, 0.1]) sigma_y = 1.7 x[0] = np.zeros(2) A = np.array([[1, dt], [0, 1]]) B = np.array([0.5 * dt**2, dt]) # x[,0] is the position # x[, 1] is the velcocity # This is written without matrix notation # for t in range(1, nsteps): # x[t, 0] = x[t-1, 0] + Δt*x[t-1, 1] + 0.5 * Δt**2 * α[t-1] # x[t, 1] = x[t-1, 1] + Δt * α[t-1] # This is in matrix notation # x[t] = np.dot(true_A, x[t-1].T) + np.dot(true_B, α[t-1]) # where α[t-1] is the ("known") control input # Generated data for t in range(1, nsteps+1): # x_t = Ax[t-1] + Bu[t-1] x[t] = mvnorm.rvs(mean=np.dot(A, x[t-1].T) + np.dot(B, true_control(dt*(t-1), amplitude, phi=0)), cov=covar, size=1) # x[t] = mvnorm.rvs(mean=np.dot(true_A, x[t-1].T) + np.dot(true_B, α[t-1]), cov=Σ, size=1) # y is the noisy observation of x[, 1], i.e. 
velocity y[t] = stats.norm.rvs(loc=x[t, 1], scale=sigma_y, size=1) #print(f"True A: \n{true_A}") #print(f"True B: \n{true_B}") plt.plot(time, alpha, 'k--', linewidth=6, alpha=0.25, label='alpha') plt.plot(time, x, linewidth=6, label='pos') # [plt.plot(time, var) for var in [α, x]] plt.plot(time, y, '.', markersize=20, color='black', alpha=0.6, label='obs') sns.despine(offset=10, trim=True) plt.legend(loc='best') # Better prior for SD? Inverse Gamma seems to work better than Half-Cauchy. sd_prior = scipy.stats.invgamma.rvs(a=6, loc=0.4, scale=3, size=1000) sns.distplot(sd_prior, kde=True); # fig, ax = plt.subplots(1, 1) # x = np.linspace(scipy.stats.halfcauchy.ppf(0.01), scipy.stats.halfcauchy.ppf(0.99), 100) # ax.plot(x, scipy.stats.halfcauchy.pdf(x), 'r-', lw=5, alpha=0.6, label='halfcauchy pdf') d_prior = scipy.stats.beta.rvs(a=9, b = 3, size=1000) sns.distplot(d_prior, kde=False); """ Explanation: Table of Contents: Generative model; Nuts; Variational inference End of explanation """ nsteps = len(y) from theano.compile.ops import as_op # @as_op(itypes=[tt.lscalar, tt.dscalar, tt.dscalar], otypes=[tt.dscalar]) def control(t, direction, amplitude, phi=0): return(direction*amplitude * tt.sin(2 * np.pi * 0.5 * t + phi)) basic_model=pm.Model() with basic_model: pos = np.empty(nsteps, dtype='object') vel = np.empty(nsteps, dtype='object') pos[0]=pm.Normal('pos0', mu=0, sd=2) vel[0]=pm.Normal('vel0', mu=0, sd=2) # Beta prior on P(direction = +1); map the Bernoulli draw to {-1, +1} p_dir = pm.Beta("p_dir", alpha=9, beta=3) D = pm.Bernoulli("D", p=p_dir) direction = 2 * D - 1 sd_obs = 
pm.InverseGamma("sd_obs", alpha=1, beta=2) sd_pos = pm.InverseGamma("sd_pos", alpha=6, beta=3) sd_vel = pm.InverseGamma("sd_vel", alpha=6, beta=3) amplitude = 20 for t in range(1, nsteps): u = control(t-1, direction, amplitude, 0.0) pos[t] = pm.Normal('pos'+str(t), mu=pos[t-1] + dt*vel[t-1] + 0.5*dt**2 * u, sd=sd_pos) vel[t] = pm.Normal('vel'+str(t), mu=vel[t-1] + dt*u, sd=sd_vel) y_obs=pm.Normal("y_obs"+str(t), mu=vel[t], observed=y[t], sd=sd_obs) """ Explanation: Generative model End of explanation """ with timeit(): with basic_model: trace = pm.sample(2000, njobs = 4) xr=range(nsteps) # Unpack all the trace for plotting pos_all=[[atrace['pos'+str(t)] for atrace in trace[1000:]] for t in xr] vel_all=[[atrace['vel'+str(t)] for atrace in trace[1000:]] for t in xr] sd_obs_all=[atrace['sd_obs'] for atrace in trace[1000:]] sd_pos_all=[atrace['sd_pos'] for atrace in trace[1000:]] sd_vel_all=[atrace['sd_vel'] for atrace in trace[1000:]] plt.figure( figsize=(15,5)) plt.subplot(1,3,1) plt.violinplot(pos_all, positions=xr, widths=1.0) plt.plot(xr, x[:,0], 'o-') plt.subplot(1,3,2) plt.violinplot(vel_all, positions=xr, widths=1.0) plt.plot( x[:,1], 'o-') plt.plot( y, 'ko') plt.subplot(1,3,3) plt.violinplot( [sd_obs_all,sd_pos_all, sd_vel_all] ); plt.xticks([1,2,3], ['Observation', 'Position', 'Velocity'],size=16 ); plt.savefig('inferred.pdf', bbox_inches='tight') pm.forestplot(trace, varnames = ['sd_vel', 'sd_pos', 'sd_obs']) pm.plot_posterior(trace, varnames = ['sd_vel', 'sd_pos', 'sd_obs', 'D', 'p_dir']) pm.traceplot(trace, varnames = ['sd_vel', 'sd_pos', 'sd_obs']) pm.traceplot(trace, varnames = ['p_dir']) trace[1000:]['D'] """ Explanation: Nuts End of explanation """ # variational inference: with timeit(): with basic_model: v_params = pm.variational.advi(n=100000) means, sd, elbos = v_params samples = pm.sample_vp(v_params, draws=2000) sns.distplot(samples['A'], color='red') varnames = ['sd_obs', 'sd_pos', 'sd_vel'] [sns.distplot(samples[var]) for var in varnames] 
sns.distplot(samples['sd_vel'], color='blue', label="ADVI") sns.distplot(trace['sd_vel'], color='red') sns.distplot(samples['sd_obs'], color='blue', label="ADVI") sns.distplot(trace['sd_obs'], color='red') # varnames = means.keys() # fig, axs = plt.subplots(ncols=len(varnames), figsize=(15, 5)) # for var, ax in zip(varnames, axs): # mu_arr = means[var] # sigma_arr = sds[var] # ax.set_title(var) # for i, (mu, sigma) in enumerate(zip(mu_arr.flatten(), sigma_arr.flatten())): # sd3 = (-4*sigma + mu, 4*sigma + mu) # x = np.linspace(sd3[0], sd3[1], 300) # y = stats.norm(mu, sigma).pdf(x) # ax.plot(x, y) # fig.tight_layout() """ Explanation: Variational inference End of explanation """
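The generative model above is a standard constant-acceleration kinematic update, which in matrix form reads x_t = A x_{t-1} + B u_{t-1} with A = [[1, dt], [0, 1]] and B = [dt²/2, dt]. A minimal noise-free NumPy check that this matrix form reproduces the scalar position/velocity equations used inside the pm.Normal means (dt = 0.1 as in the simulation):

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])

def step_matrix(state, u):
    # One Euler step in matrix form: x_t = A x_{t-1} + B u_{t-1}
    return A @ state + B * u

def step_scalar(state, u):
    # The same update written out component-wise
    pos, vel = state
    return np.array([pos + dt * vel + 0.5 * dt**2 * u,
                     vel + dt * u])

state = np.array([0.0, 0.0])
for u in [20.0, 10.0, -5.0]:
    assert np.allclose(step_matrix(state, u), step_scalar(state, u))
    state = step_matrix(state, u)
print(state)
```

The process-noise and observation-noise terms in the notebook simply wrap each of these deterministic means in a Gaussian, which is what the PyMC3 model inverts.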
zerothi/ts-tbt-sisl-tutorial
TB_05/run.ipynb
gpl-3.0
import sisl
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
graphene = sisl.geom.graphene(orthogonal=True) """ Explanation: In this example you will learn how to make use of the periodicity of the electrodes. As seen in TB 4 the transmission calculation takes a considerable amount of time. In this example we will redo the same calculation, but speed it up (no approximations made). A large computational effort is spent on calculating the self-energies, which basically amounts to inverting, multiplying and adding matrices, roughly 10-20 times per $k$-point, per energy point, per electrode. For systems with large electrodes compared to the full device, this becomes more demanding than calculating the Green function for the system. When there is periodicity in the electrodes along the transverse semi-infinite direction (not along the transport direction) one can utilize Bloch's theorem to reduce the computational cost of calculating the self-energy. In ANY calculation if you have periodicity, please USE it. In this example you should scour the tbtrans manual on how to enable Bloch's theorem; once enabled it should be roughly 3 - 4 times as fast, something that is non-negligible for large systems. End of explanation """ H_elec = sisl.Hamiltonian(graphene) H_elec.construct(([0.1, 1.43], [0., -2.7])) H_elec.write('ELEC.nc') """ Explanation: Note the below lines are differing from the same lines in TB 4, i.e. we save the electrode electronic structure without extending it 25 times. End of explanation """ H = H_elec.repeat(25, axis=0).tile(15, axis=1) H = H.remove( H.geometry.close( H.geometry.center(what='cell'), R=10.) ) dangling = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.) if len(H.edges(ia)) < 3] H = H.remove(dangling) edge = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.) 
if len(H.edges(ia)) < 4] edge = np.array(edge) # Pretty-print the list of atoms print(sisl.utils.list2str(edge + 1)) H.geometry.write('device.xyz') H.write('DEVICE.nc') """ Explanation: See TB 2 for details on why we choose repeat/tile on the Hamiltonian object and not on the geometry, prior to construction. End of explanation """ tbt = sisl.get_sile('siesta.TBT.nc') # Easier manipulation of the geometry geom = tbt.geometry a_dev = tbt.a_dev # the indices where we have DOS # Extract the DOS, per orbital (hence sum=False) DOS = tbt.ADOS(0, sum=False) # Normalize DOS for plotting (maximum size == 400) # This array has *all* energy points and orbitals DOS /= DOS.max() / 400 a_xyz = geom.xyz[a_dev, :2] %%capture fig = plt.figure(figsize=(12,4)); ax = plt.axes(); scatter = ax.scatter(a_xyz[:, 0], a_xyz[:, 1], 1); ax.set_xlabel(r'$x$ [Ang]'); ax.set_ylabel(r'$y$ [Ang]'); ax.axis('equal'); # If this animation does not work, then don't spend time on it! def animate(i): ax.set_title('Energy {:.3f} eV'.format(tbt.E[i])); scatter.set_sizes(DOS[i]); return scatter, anim = animation.FuncAnimation(fig, animate, frames=len(tbt.E), interval=100, repeat=False) HTML(anim.to_html5_video()) """ Explanation: Exercises Instead of analysing the same thing as in TB 4 you should perform the following actions to explore the available data-analysis capabilities of TBtrans. Please note the difference in run-time between example 04 and this example. Always use Bloch's theorem when applicable! HINT please copy as much as you like from example 04 to simplify the following tasks. Read in the resulting file into a variable called tbt. In the following we will concentrate on only looking at $\Gamma$-point related quantities. I.e. all quantities should only be plotted for this $k$-point. To extract information for one or more subset of points you should look into the function help(tbt.kindex) which may be used to find a resulting $k$-point index in the result file. 
Plot the transmission ($\Gamma$-point only). To extract a single $k$-point you should read the documentation for the functions (hint: kavg is the keyword you are looking for). Full transmission Bulk transmission Plot the DOS with normalization according to the number of atoms ($\Gamma$ only) You may decide which atoms you examine. The Green function DOS The spectral DOS The bulk DOS TIME: Do the same calculation using only tiling. H_elec.tile(25, axis=0).tile(15, axis=1) instead of repeat/tile. Which of repeat and tile is faster? Transmission Density of states End of explanation """
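Why does Bloch's theorem pay off? The self-energy of an electrode repeated B times transversely can be assembled from B cheap small-cell self-energies instead of one expensive large-cell one, and matrix inversion scales roughly cubically, so B small inversions beat one inversion of a B-times-larger matrix. A schematic, library-free sketch of the phase-factor unfolding, using scalar stand-ins for the self-energy matrices (illustrative only, not tbtrans's actual implementation):

```python
import cmath

def bloch_expand(sigma_small, B):
    # Unfold small-cell "self-energies" sampled at k_j = j/B into the
    # B x B matrix of the B-times-repeated cell (schematic construction)
    big = [[0j] * B for _ in range(B)]
    for m in range(B):
        for n in range(B):
            for j, sig in enumerate(sigma_small):
                big[m][n] += cmath.exp(2j * cmath.pi * j * (m - n) / B) * sig / B
    return big

# Sanity check: a k-independent self-energy must unfold to a diagonal matrix,
# because the phase factors sum to B on the diagonal and to 0 elsewhere
B = 4
big = bloch_expand([1.0 + 0.5j] * B, B)
```

The sum over the B transverse k-points is exactly the discrete Fourier relation that Bloch's theorem exploits; the real calculation does this per energy point with full matrices.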
IST256/learn-python
content/lessons/10-HTTP/Slides.ipynb
mit
x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( type(x['a']) ) """ Explanation: IST256 Lesson 10 HTTP and Network Programming Assigned Readings From https://ist256.github.io/spring2021/readings/Web-APIs-In-Python.html Links Participation: https://poll.ist256.com In-Class Questions: ZOOM CHAT! Agenda Homework How the Web Works Making HTTP requests using the Python requests module Parsing json responses into Python objects Procedure for calling API's How to read API documentation Project (49 out of 250 points) http://ist256.com/syllabus/#project-p1-p4 No grade until the end. only feedback. Project documents will be released after the 3rd exam. FEQT (Future Exam Questions Training) 1 What is the output of the following code? End of explanation """ x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( type(x['b'][1]) ) """ Explanation: A. str B. int C. dict D. list Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 2 What is the output of the following code? End of explanation """ x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['a'][2]) """ Explanation: A. str B. int C. dict D. list Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 3 What is the output of the following code? End of explanation """ x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['b'][4] ) """ Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 4 What is the output of the following code? End of explanation """ x = { 'a' : [1,7,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['c'] ) """ Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 5 What is the output of the following code? End of explanation """ x = { 'a' : [1,2,3,4], 'b' : 'rta', 'c': { 'r' : 3, 't' : 2} } print( x['c']['r']) """ Explanation: A. 2 B. 3 C. KeyError D. 
IndexError Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 6 What is the output of the following code? End of explanation """ import requests params = { 'a' : 1, 'b' : 2 } headers = { 'c' : '3'} url = "https://httpbin.org/get" response = requests.get(url, params = params, headers = headers) print(response.url) """ Explanation: A. 2 B. 3 C. KeyError D. IndexError Vote Now: https://poll.ist256.com Connect Activity Question: The Python module to consume Web API's is called: A. api B. requests C. http D. urllibrary Vote Now: https://poll.ist256.com # What is the Big Picture Here? First we learned how to call functions built into python, like input() and int() Then we learned how to import a module of functions and then use them, like math or json, or ipywidgets Then we learned how to find new code on http://pypi.org, install it with pip and then import it to use it, like gtts or emoji Then we learned the built-in functions of variables of type str, list and dict, such as str.find() and dict.keys() Now we will learn how to call functions over the internet, executing code remotely!!! HTTP: The Protocol of The Web When you type a URL into your browser you’re making a request. The site processing your request sends a response. Part of the response is the status code. This indicates “what happened” The other part of the response is content (this is usually HTML) which is rendered by the browser. HTTP is a text based protocol. It is stateless, meaning each request is independent of the other. HTTP Request Verbs HTTP Request Verbs: - GET - used to get resources - POST - used to send large data payloads as input - PUT - used for updates - DELETE - used to delete a resource HTTP Response Status codes The HTTP response has a payload of data and a status code. HTTP Status Codes: - 1xx Informational - 2xx Success - 3xx Redirection - 4xx Client Error - 5xx Server Error Watch Me Code 1 A Non-Python Demo of HTTP - What happens when you request a site? Like http://www.syr.edu ?
- Chrome Developer tools - Now using requests. - Status codes and request verbs. - de-serializing json output. Check Yourself: Response Codes The HTTP Response code for success is A. 404 B. 501 C. 200 D. 301 Vote Now: https://poll.ist256.com 4 Ways to Send Data over HTTP In the URL GET http://www.someapi.com/user/45 On the Query String - a set of key-value pairs on the URL GET http://www.someapi.com?user=45 In the request header - a set of key-value pairs in the HTTP header header = { 'user' : 45 } GET http://www.someapi.com In the body of an HTTP post - any format Body: user=45 POST http://www.someapi.com Which approach do you use? Depends on the service you are using! Watch Me Code 2 Examples of the many ways send data over HTTP using the https://httpbin.org/ website (Wait, scratch that, using https://api.ist256.com) !!! HTTP GET in the url HTTP GET in the query string and url generation HTTP GET in the header HTTP POST Combinations Check Yourself : HTTP Methods What is the URL printed on the last line? End of explanation """
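The four data-passing slots above can be sketched with the standard library alone, without any network round-trip (`httpbin.org` and `user=45` are just the slide's placeholders; `requests` builds its query string the same way):

```python
from urllib.parse import urlencode, urljoin

base = "https://httpbin.org"

# 1. Data in the URL path itself
url_in_path = urljoin(base, "/user/45")

# 2. Data on the query string -- what requests.get(url, params=...) generates
url_with_query = base + "/get?" + urlencode({"a": 1, "b": 2})
print(url_with_query)  # https://httpbin.org/get?a=1&b=2

# 3. Data in the request headers: key/value pairs sent alongside the request line
headers = {"c": "3"}

# 4. Data in the body of an HTTP POST (form-encoded here)
post_body = urlencode({"user": 45})
```

The second line also answers the "Check Yourself" question above: `response.url` prints https://httpbin.org/get?a=1&b=2.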
bambinos/bambi
docs/notebooks/t-test.ipynb
mit
import arviz as az import bambi as bmb import matplotlib.pyplot as plt import numpy as np import pandas as pd az.style.use("arviz-darkgrid") np.random.seed(1234) """ Explanation: Comparison of two means (T-test) End of explanation """ a = np.random.normal(6, 2.5, 160) b = np.random.normal(8, 2, 120) df = pd.DataFrame({"Group": ["a"] * 160 + ["b"] * 120, "Val": np.hstack([a, b])}) df.head() az.plot_violin({"a": a, "b": b}); """ Explanation: In this notebook we demo two equivalent ways of performing a two-sample Bayesian t-test to compare the mean value of two Gaussian populations using Bambi. Generate data We generate 160 values from a Gaussian with $\mu=6$ and $\sigma=2.5$ and another 120 values from a Gaussian with $\mu=8$ and $\sigma=2$. End of explanation """ model_1 = bmb.Model("Val ~ Group", df) results_1 = model_1.fit() """ Explanation: When we carry out a two sample t-test we are implicitly using a linear model that can be specified in different ways. One of these approaches is the following: Model 1 $$ \mu_i = \beta_0 + \beta_1 (i) + \epsilon_i $$ where $i = 0$ represents population 1, $i = 1$ population 2, and $\epsilon_i$ is a random error with mean 0. If we replace the indicator variables for the two groups we have $$ \mu_0 = \beta_0 + \epsilon_i $$ and $$ \mu_1 = \beta_0 + \beta_1 + \epsilon_i $$ if $\mu_0 = \mu_1$ then $$ \beta_0 + \epsilon_i = \beta_0 + \beta_1 + \epsilon_i \\ 0 = \beta_1 $$ Thus, we can see that testing whether the means of the two populations are equal is equivalent to testing whether $\beta_1$ is 0. Analysis We start by instantiating our model and specifying the model previously described. End of explanation """ model_1 model_1.plot_priors(); """ Explanation: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters.
We can inspect both the setup and the priors as follows: End of explanation """ az.plot_trace(results_1, kind="rank_vlines"); az.summary(results_1) """ Explanation: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary. <!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here --> End of explanation """ # Grab just the posterior of the term of interest (group) group_posterior = results_1.posterior['Group'] az.plot_posterior(group_posterior, ref_val=0); """ Explanation: In the summary table we can see the 94% highest density interval for $\beta_1$ ranges from 1.511 to 2.499. Thus, according to the data and the model used, we conclude the difference between the two population means is somewhere between 1.5 and 2.5, and hence we support the hypothesis that $\beta_1 \ne 0$. Similar conclusions can be drawn from the density estimate for the posterior distribution of $\beta_1$: most of the probability for the difference in the means lies roughly between 1.5 and 2.5. End of explanation """ # Probability that posterior is > 0 (group_posterior.values > 0).mean() """ Explanation: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different. End of explanation """ model_2 = bmb.Model("Val ~ 0 + Group", df) results_2 = model_2.fit() """ Explanation: The linear model implicit in the t-test can also be specified without an intercept term, as is the case in Model 2.
Model 2 When we carry out a two sample t-test we're implicitly using the following model: $$ \mu_i = \beta_i + \epsilon_i $$ where $i = 0$ represents population 1, $i = 1$ population 2, and $\epsilon$ is a random error with mean 0. If we replace the indicator variables for the two groups we have $$ \mu_0 = \beta_0 + \epsilon $$ and $$ \mu_1 = \beta_1 + \epsilon $$ if $\mu_0 = \mu_1$ then $$ \beta_0 + \epsilon = \beta_1 + \epsilon $$ Thus, we can see that testing whether the means of the two populations are equal is equivalent to testing whether $\beta_0 = \beta_1$. Analysis We start by instantiating our model and specifying the model previously described. In this model we will bypass the intercept that Bambi adds by default by setting it to zero, even though setting it to -1 has the same effect. End of explanation """ model_2 model_2.plot_priors(); """ Explanation: We've only specified the formula for the model and Bambi automatically selected prior distributions and values for their parameters. We can inspect both the setup and the priors as follows: End of explanation """ az.plot_trace(results_2, kind="rank_vlines"); az.summary(results_2) """ Explanation: To inspect our posterior and the sampling process we can call az.plot_trace(). The option kind='rank_vlines' gives us a variant of the rank plot that uses lines and dots and helps us to inspect the stationarity of the chains. Since there is no clear pattern or serious deviations from the horizontal lines, we can conclude the chains are stationary. <!-- I think the reasoning is too simplistic but I don't know if we should make it more complicated here --> End of explanation """ # Grab just the posterior of the term of interest (group) group_posterior = results_2.posterior['Group'][:,:,1] - results_2.posterior['Group'][:,:,0] az.plot_posterior(group_posterior, ref_val=0); """ Explanation: In this summary we can observe the estimated distribution of means for each population.
A simple way to compare them is to subtract one from the other. In the next plot we can see that the entire distribution of differences lies above zero, and that the mean of population 2 is higher than the mean of population 1 by about 2 on average. End of explanation """ # Probability that posterior is > 0 (group_posterior.values > 0).mean() %load_ext watermark %watermark -n -u -v -iv -w """ Explanation: Another way to arrive at a similar conclusion is by calculating the probability that the parameter $\beta_1 - \beta_0 > 0$. This probability, practically equal to 1, tells us that the means of the two populations are different. End of explanation """
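The `(group_posterior.values > 0).mean()` trick above is just the fraction of posterior draws in which the difference is positive. A self-contained sketch with synthetic draws (plain-Python stand-ins for the arrays Bambi returns; the means and spreads below are made up to mirror this example):

```python
import random

random.seed(1234)

# Hypothetical posterior draws for the two group means
beta_0 = [random.gauss(6.0, 0.2) for _ in range(4000)]  # stand-in for Group[..., 0]
beta_1 = [random.gauss(8.0, 0.2) for _ in range(4000)]  # stand-in for Group[..., 1]

# Posterior of the difference, and the probability that it is positive
diff = [b1 - b0 for b0, b1 in zip(beta_0, beta_1)]
prob_positive = sum(d > 0 for d in diff) / len(diff)
mean_diff = sum(diff) / len(diff)
```

With group means this far apart relative to their posterior spread, `prob_positive` is essentially 1, matching the conclusion drawn from Bambi's posterior.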
kbennion/foundations-hw
07-notebook-and-data/.ipynb_checkpoints/Homework7-checkpoint.ipynb
mit
%matplotlib inline print(df['gender'].value_counts()) df.groupby('gender')['networthusbillion'].mean() df.groupby('gender')['sourceofwealth'].value_counts() """ Explanation: What country are most billionaires from? For the top ones, how many billionaires per billion people? Who are the top 10 richest billionaires? What's the average wealth of a billionaire? Male? Female? Who is the poorest billionaire? Who are the top 10 poorest billionaires? 'What is relationship to company'? And what are the most common relationships? Most common source of wealth? Male vs. female? Given the richest person in a country, what % of the GDP is their wealth? Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry? How many self made billionaires vs. others? How old are billionaires? How old are billionaires self made vs. non self made? or different industries? Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it? Maybe just made a graph about how wealthy they are in general? Maybe plot their net worth vs age (scatterplot) Make a bar graph of the top 10 or 20 richest How many female billionaires are there compared to male? What industries are they from? What is their average wealth? End of explanation """ df.plot(kind='scatter', x='gender', y='networthusbillion') """ Explanation: Let's make a graph 'bout it End of explanation """
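The `df.groupby('gender')['networthusbillion'].mean()` call above computes a per-group average; a tiny plain-Python equivalent (the three rows below are invented purely for illustration):

```python
from collections import defaultdict

# Hypothetical rows standing in for the billionaires DataFrame
rows = [
    {"gender": "male", "networthusbillion": 3.0},
    {"gender": "male", "networthusbillion": 5.0},
    {"gender": "female", "networthusbillion": 4.0},
]

# gender -> [running sum, count], then divide: this is what groupby().mean() does
acc = defaultdict(lambda: [0.0, 0])
for row in rows:
    acc[row["gender"]][0] += row["networthusbillion"]
    acc[row["gender"]][1] += 1

mean_by_gender = {g: total / n for g, (total, n) in acc.items()}
print(mean_by_gender)  # {'male': 4.0, 'female': 4.0}
```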
hypergravity/cham_hates_python
exercise/gaussian_fitting_using_python.ipynb
mit
from lmfit.models import GaussianModel # initialize the gaussian model gm = GaussianModel() # take a look at the parameter names print gm.param_names # I get RuntimeError since my numpy version is a little old # guess parameters par_guess = gm.guess(n,x=xpos) # fit data result = gm.fit(n, par_guess, x=xpos, method='leastsq') # quick look at result print result.fit_report() # get best fit error and stderr print result.params['amplitude'].value,result.params['amplitude'].stderr print result.params['center'].value,result.params['center'].stderr print result.params['sigma'].value,result.params['sigma'].stderr fig = plt.figure() plt.hist(xdata, bins=bins) plt.plot(xpos, result.best_fit, 'green') """ Explanation: If you don't care about the confidence intervals of the parameters End of explanation """ import lmfit def my_gaussian_model(p, x, y): a = np.float(p['a']) b = np.float(p['b']) c = np.float(p['c']) return a/np.sqrt(2.*c) * np.exp( -np.power(x-b,2.)/2./np.power(c, 2.)) - y pars = lmfit.Parameters() pars.add_many(('a',0.1), ('b',0.1), ('c',0.1)) # initialize the minimizer mini = lmfit.Minimizer(my_gaussian_model, pars, (xpos, n)) # do the minimization result = mini.minimize(method='leastsq') # print the fit report print lmfit.fit_report(mini.params) # NOTE # the parameter 'a' in function my_gaussian_model is different from the built-in model in lmfit # so the amplitude value is a little different # predict the confidence interval of all parameters ci, trace = lmfit.conf_interval(mini, sigmas=[0.68,0.95], trace=True, verbose=False) # ci = lmfit.conf_interval(mini) lmfit.printfuncs.report_ci(ci) print ci.values() a,b,prob = trace['a']['a'], trace['a']['b'], trace['a']['prob'] cx, cy, grid = lmfit.conf_interval2d(mini, 'a','b',30,30) plt.contourf(cx, cy, grid, np.linspace(0,1,11)) plt.xlabel('a') plt.colorbar() plt.ylabel('b') """ Explanation: If you want the confidence intervals End of explanation """
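Both fits above need starting values (via `gm.guess()` or the hard-coded 0.1s in `pars.add_many`). A common hand-rolled alternative is to seed the center and width from the sample moments; a standard-library sketch with synthetic data (no lmfit; the names echo the `b` and `c` parameters of `my_gaussian_model`):

```python
import random
import statistics

random.seed(0)
# Synthetic stand-in for xdata: draws from a Gaussian with center 5 and width 2
samples = [random.gauss(5.0, 2.0) for _ in range(50000)]

b_init = statistics.mean(samples)    # starting guess for the center
c_init = statistics.pstdev(samples)  # starting guess for the width
```

Feeding moment-based guesses like these into `pars.add_many` typically lets leastsq converge far more reliably than a blind 0.1 for every parameter.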
QuantStack/quantstack-talks
2018-11-14-PyParis-widgets/notebooks/5.ipyvolume.ipynb
bsd-3-clause
import ipyvolume import numpy as np ds = ipyvolume.datasets.aquariusA2.fetch() ipyvolume.quickvolshow(ds.data, lighting=True) """ Explanation: <center><h1>ipyvolume</h1></center> Repository: https://github.com/maartenbreddels/ipyvolume Installation: conda install -c conda-forge ipyvolume Volume rendering End of explanation """ stream = ipyvolume.datasets.animated_stream.fetch() fig = ipyvolume.figure() q = ipyvolume.quiver(*stream.data[:,0:50,:200], color="red", size=7) ipyvolume.animation_control(q, interval=200) ipyvolume.show() u = np.linspace(-10, 10, 50) x, y = np.meshgrid(u, u) r = np.sqrt(x**2+y**2) x = x.flatten() y = y.flatten() r = r.flatten() time = np.linspace(0, np.pi*2, 50) z = np.array([(np.cos(r + t) * np.exp(-r/5)) for t in time]) color = np.array([[np.cos(r + t), 1-np.abs(z[i]), 0.1+z[i]*0] for i, t in enumerate(time)]) size = (z+1) color = np.transpose(color, (0, 2, 1)) ipyvolume.figure() s = ipyvolume.scatter(x, z, y, color=color, marker="sphere") ipyvolume.animation_control(s, interval=200) ipyvolume.ylim(-3,3) ipyvolume.show() s.geo = "diamond" s.size = 5 s.color = 1 - s.color """ Explanation: Animations Animations are made in the shaders End of explanation """ from ipywidgets import Widget Widget.close_all() """ Explanation: Clean End of explanation """
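The animated scatter above evaluates a damped travelling wave, z = cos(r + t) * exp(-r/5), over a grid of radii and times. Stripped of NumPy and plotting, the per-point computation (and the envelope that bounds it) looks like this:

```python
import math

def wave(r, t):
    # Damped travelling wave used for the animated scatter above
    return math.cos(r + t) * math.exp(-r / 5.0)

# |z| can never exceed the envelope exp(-r/5), whatever the time step
r = 3.0
times = [2.0 * math.pi * k / 50 for k in range(50)]
values = [wave(r, t) for t in times]
envelope = math.exp(-r / 5.0)
```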
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/day-by-day/day07-modeling-viral-load/day07-in_class_activity.ipynb
agpl-3.0
# Make plots inline %matplotlib inline # Make inline plots vector graphics instead of raster graphics from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') # import modules for plotting and data analysis import matplotlib.pyplot as plt import numpy as np import pandas """ Explanation: Understanding Parameter-Fitting by Exploring Viral Load Student names Work in pairs, and put the names of both people in your group here! (If you're in a group of 3, just move your chairs so you can work together.) Learning Goals (Why Are We Asking You To Do This?) To develop an intuition about how equation-based models work To understand how to evaluate models by plotting them together with experimental data To practice predicting what the effect will be when we change the parameters of a model To learn how we can iterate to improve the fit of a model Understanding How We Treat the Human Immunodeficiency Virus (HIV) Here we explore a model of the viral load—the number of virions in the blood of a patient infected with HIV—after the administration of an antiretroviral drug. One model for the viral load predicts that the concentration $V (t)$ of HIV in the blood at time t after the start of treatment will be: $$ \begin{equation} V(t) = A \cdot \mathrm{exp}(-\alpha t) + B \cdot \mathrm{exp}(-\beta t) \end{equation}$$ When we write mathematics, $\mathrm{exp}(\dots)$ is notational shorthand for $e^{(\dots)}$. So, we can rewrite the viral load model this way: $$ \begin{equation} V(t) = A e^{(-\alpha t)} + B e^{(-\beta t)} \end{equation} $$ Two things to note about this particular model: Viral load is a function of time $t$. That's why we're writing it as $V(t) = \dots$. 
There are four modeling parameters (numbers) we can vary: $(A, \alpha, B, \beta )$.: $$ \begin{equation} V(t) = \textbf{A} e^{(-\boldsymbol{\alpha} t)} + \textbf{B} e^{(- \boldsymbol{\beta} t)} \end{equation} $$ Note: You probably know that there are black-box software packages that do such “curve fitting” automatically. In this lab, you should do it manually, just to see how the curves respond to changes in the parameters.] Kinder, Jesse M.; Nelson, Philip (2015-07-01). A Student's Guide to Python for Physical Modeling (Page 63). Princeton University Press. Kindle Edition. Doing it manually also helps build our intuition for how mathematical models behave when we visualize them. Let's get started by setting some options and loading the modules we'll need. Setting options and loading modules End of explanation """ import numpy as np np.linspace( 0, # where the interval starts 1, # where the interval ends 11 # How many steps (elements) we want in the final array ) """ Explanation: Now, we'll tackle the "function in time" part of this model by learning how to make and use arrays to represent time. Making intervals in time Using numpy, we can conveniently make arrays that would represent slices in time. We'll use the np.linspace command, which takes three arguments (inputs): Where you want the interval to start Where you want the interval to end How many steps there should be in the interval We can use np.linspace to make time intervals End of explanation """ # put your code here! """ Explanation: We can assign time intervals to variables We can also try assigning that array to a variable, so we our later code can use it and refer back to it. Here, we'll: Make the same interval as before (0 to 1), With eleven steps, Assign it to a variable we'll call trying_to_make_a_time_interval (it helps when our names are descriptive) Check how many items are in the array we made. (It should be 11). 
End of explanation """ # Create your time array here """ Explanation: Your turn - use np.linspace() to make a time interval for our model Now that you've seen how to make evenly-spaced time intervals, we'll start creating an array of time slices for our viral load model. Create a single array called time that: Starts at 0, Goes to 10, and Has 101 elements in it End of explanation """ # Make B equal to zero and set some non-zero values for the other parameters # DON'T PLOT YET -- you should make a prediction below! """ Explanation: We can set and change parameter values to see how the model behaves Remember how our model equation has four parameters $(A, \alpha, B, \beta )$? Below, you're going to use Python code to see what happens to the model when we try (and change) values for those parameters. Below, write code that sets $B = 0$ and chooses non-zero values for the other three parameters. End of explanation """ # Write and evaluate viral_load = ... """ Explanation: PREDICT BEFORE YOU PLOT Here's that viral load equation again: $$ \begin{equation} V(t) = \textbf{A} e^{(-\boldsymbol{\alpha} t)} + \textbf{B} e^{(- \boldsymbol{\beta} t)} \end{equation} $$ Just like the order-of-magnitude approximations we've been doing, thinking before we plot and evaluate helps us develop our intuition about models. In your code above, you've set $B = 0$. Use the markdown cell below to predict (in words) how you think setting $B = 0$ affects the equation (and by extension, the model). How will the equation change? How does setting $B = 0$ affect the other model parameters (if at all)? Your Prediction When we set B = 0, we think what will happen is... Explore the model Explore the model by evaluating the equation We can write the viral load function (from above) in Python, using the parameters we set above and our time array. In the cell below, write viral_load as a function of time. Note: We can write exponentiation in numpy using np.exp().
So, $$ \begin{equation} e^{(\dots)} = \tt{np.exp(\dots)} \end{equation} $$ End of explanation """ # Verify that both arrays are the same length. # You can use .size, as in time.size or viral_load.size # Then, try plotting viral_load vs. time # Make plots inline %matplotlib inline # Make inline plots vector graphics instead of raster graphics from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') import matplotlib.pyplot as plt # Put your plot code here """ Explanation: Explore the model by visualizing it You should now have two arrays of the same length, time and viral load. In the Python cell below, verify that they're the same length (i.e., that they have the same number of values) with an if statement, then plot them. End of explanation """ # Change the values and make a new plot. """ Explanation: Explore the model by changing parameter values Create a few more plots using different values of the four model parameters, and put them in separate cells (with one plot per set of model parameters). For each new plot, Use a Python cell to change the model parameters however you'd like, Re-evaluate the viral_load Use a markdown cell to explain how the curve changed Remember: You can try negative (and non-integer) values for parameters. Try setting the x- and y-limits of the plots to be a constant value for several plots in a row so you can directly compare them! End of explanation """ # Loading the data using pandas hiv_data = pandas.read_csv( "https://raw.githubusercontent.com/ComputationalModeling/IPML-Data/master/01HIVseries/HIVseries.csv", header = None, names = ["time_in_days", "viral_load"] ) # the data type of hiv_data is "dataframe" type(hiv_data) """ Explanation: // Note each change you made and what you saw Loading and Examining Experimental Data What is this data? We're going to use experimental data of actual viral loads provided by Kindler and Nelson (2015). 
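One quick sanity check before comparing plots: at t = 0 both exponentials equal 1, so V(0) = A + B, and for large t the model decays toward zero. A plain-Python sketch (using `math` instead of numpy arrays; the parameter values are picked only for illustration):

```python
import math

def viral_load_model(t, A, alpha, B, beta):
    # V(t) = A * exp(-alpha * t) + B * exp(-beta * t)
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

A, alpha, B, beta = 100.0, 0.5, 50.0, 3.0  # illustrative values only
v0 = viral_load_model(0.0, A, alpha, B, beta)
v_late = viral_load_model(10.0, A, alpha, B, beta)
print(v0)      # 150.0 -- the intercept is A + B
print(v_late)  # small: both exponential terms have decayed
```

Checking V(0) and the late-time behavior like this is a fast way to tell whether a set of parameter values is even in the right ballpark before plotting.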
They write: File HIVseries.mat contains variable "a" with two columns of data. The first is the time in days since administration of a treatment to an HIV positive patient; the second contains the concentration of virus in that patient's blood, in arbitrary units. HIVseries.csv and HIVseries.npy contain the same data in the same format. HIVseries.npz contains the same data in two separate arrays called time_in_days and viral_load. Data from A. Perelson. Modelling viral and immune system dynamics. Nature Revs. Immunol. (2002) vol. 2 (1) pp. 28--36 (Box 1). So, to summarize, the dataset hiv_data has 2 columns: time_in_days is the number of days since an HIV-positive patient received a treatment. viral_load is the concentraiton of the virus in that patients blood, in arbitrary units. Use pandas.read_csv() to Load the Data The data file we'll use is in a file format called CSV, which stands for comma-separated values. It's a commonly-used format for storing 2-dimensional data, and programs like Microsoft Excel or OpenOffice can import and export .CSV files. The code below will use the read_csv() function from the pandas data analysis library to load the CSV file you need from the web, then store the data as a variable called hiv_data. 
End of explanation """ # Execute this cell (Shift + Enter) to see the data hiv_data """ Explanation: You Can View a Pandas DataFrame by Executing It End of explanation """ # If you have a pandas dataframe, you can call `head()` on it like this: hiv_data.head() # To see the last few rows, call `tail()` on it hiv_data.tail() """ Explanation: You can view the first/last few rows of data with .head() and .tail() functions End of explanation """ # How to view an individual column hiv_data["time_in_days"] # or hiv_data["viral_load"] """ Explanation: Use data["column_name"] to View or Refer to a Column of Data End of explanation """ # Here's the viral load column again hiv_data["viral_load"] # And we can calulate its mean, max, size, and other properties # Just like we would on a numpy array hiv_data["viral_load"].mean() hiv_data["viral_load"].max() hiv_data["viral_load"].size """ Explanation: Pandas DataFrame Columns Behave Like Numpy Arrays End of explanation """ # Plot viral load vs. time """ Explanation: Plotting the Experimental Data In a Python cell below, plot the viral load versus time from the hiv_data we loaded. End of explanation """ # Plot the data and model together """ Explanation: Fitting Our Model To Experimental Data Now that we've seen what the experimental data look like, we'll take our model one step further by putting the data and our model together on the same plot. Plot the Data and Model Together In the Python Cell below, create a plot that contains both the data and the model we were working with earlier. So, we'll superimpose the datapoints on a plot of $V(t) = A e^{(-\alpha t)} + B e^{(-\beta t)}$. You may need to adjust the model parameters until you can see both the data and model in your plot. End of explanation """ # Do whatever work you need here to determine # the parameter values you think work best. # REMEMBER: You can assign each new model to a new variable, # like `model_01`, `model_02`, ... 
""" Explanation: THINK about Tuning the Model Parameters to Fit the Model to Data The goal will be to tune the four parameters of $V(t) = A e^{(-\alpha t)} + B e^{(-\beta t)}$ until the model agrees with the data. It is hard to find the right needle in a four-dimensional haystack! We need a more systematic approach than just guessing. Consider and try to answer each of the following in a Markdown cell below: Assuming $\beta > \alpha$, how does the trial solution behave at long times? If the data also behave that way, can we use the long-time behavior to determine two of the four unknown constants, then hold them fixed while adjusting the other two? Even two constants is a lot to adjust by hand, so let’s think some more: How does the initial value $V(0)$ depend on the four constant parameters? Can you vary these constants in a way that always gives the correct long-time behavior and initial value? (Kinder and Nelson, 2015, p. 62) // Answer the questions here Make and Plot Multiple Models to Find Model Parameters that Best Fit the Model to the Data Kinder and Nelson (2015) recommend that you: Carry out this analysis so that you have only one remaining free parameter, which you can adjust fairly easily. Adjust this parameter until you like what you see. We suggest you try and save different models by assigning those models to named variables (i.e., model 1 has A1, B1, model01, etc. and model 2 has A2, B2, model02, etc.) Use one or more Python cells below to create multiple models and decide on good parameter values. End of explanation """ from IPython.display import IFrame IFrame('http://goo.gl/forms/v8oZUSLDaa', width=800, height=1200) """ Explanation: Please Give Feedback on this assignment End of explanation """
infilect/ml-course1
keras-notebooks/ANN/3.5-classifying-movie-reviews.ipynb
mit
from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) """ Explanation: Classifying movie reviews: a binary classification example This notebook contains the code samples found in Chapter 3, Section 5 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB dataset We'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews. Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely memorizing a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter. Just like the MNIST dataset, the IMDB dataset comes packaged with Keras.
It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary. The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine): End of explanation """ train_data[0] train_labels[0] """ Explanation: The argument num_words=10000 means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size. The variables train_data and test_data are lists of reviews, each review being a list of word indices (encoding a sequence of words). train_labels and test_labels are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive": End of explanation """ max([max(sequence) for sequence in train_data]) """ Explanation: Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000: End of explanation """ # word_index is a dictionary mapping words to an integer index word_index = imdb.get_word_index() # We reverse it, mapping integer indices to words reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) # We decode the review; note that our indices were offset by 3 # because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown". decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]]) decoded_review """ Explanation: For kicks, here's how you can quickly decode one of these reviews back to English words: End of explanation """ import numpy as np def vectorize_sequences(sequences, dimension=10000): # Create an all-zero matrix of shape (len(sequences), dimension) results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. 
# set specific indices of results[i] to 1s return results # Our vectorized training data x_train = vectorize_sequences(train_data) # Our vectorized test data x_test = vectorize_sequences(test_data) """ Explanation: Preparing the data We cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that: We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape (samples, word_indices), then use as first layer in our network a layer capable of handling such integer tensors (the Embedding layer, which we will cover in detail later in the book). We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence [3, 5] into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a Dense layer, capable of handling floating point vector data. We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity: End of explanation """ x_train[0] """ Explanation: Here's what our samples look like now: End of explanation """ # Our vectorized labels y_train = np.asarray(train_labels).astype('float32') y_test = np.asarray(test_labels).astype('float32') """ Explanation: We should also vectorize our labels, which is straightforward: End of explanation """ from keras import models from keras import layers model = models.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) """ Explanation: Now our data is ready to be fed into a neural network. Building our network Our input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. 
A type of network that performs well on such a problem would be a simple stack of fully-connected (Dense) layers with relu activations: Dense(16, activation='relu') The argument being passed to each Dense layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such Dense layer with a relu activation implements the following chain of tensor operations: output = relu(dot(W, input) + b) Having 16 hidden units means that the weight matrix W will have shape (input_dimension, 16), i.e. the dot product with W will project the input data onto a 16-dimensional representation space (and then we would add the bias vector b and apply the relu operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data). There are two key architecture decisions to be made about such a stack of dense layers: How many layers to use. How many "hidden units" to choose for each layer. In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. 
The intermediate layers will use relu as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A relu (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the [0, 1] interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like: And here's the Keras implementation, very similar to the MNIST example you saw previously: End of explanation """ model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) """ Explanation: Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the binary_crossentropy loss. It isn't the only viable choice: you could use, for instance, mean_squared_error. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the rmsprop optimizer and the binary_crossentropy loss function. Note that we will also monitor accuracy during training. End of explanation """ from keras import optimizers model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy']) """ Explanation: We are passing our optimizer, loss function and metrics as strings, which is possible because rmsprop, binary_crossentropy and accuracy are packaged as part of Keras. 
Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the optimizer argument: End of explanation """ from keras import losses from keras import metrics model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss=losses.binary_crossentropy, metrics=[metrics.binary_accuracy]) """ Explanation: The latter can be done by passing function objects as the loss or metrics arguments: End of explanation """ x_val = x_train[:10000] partial_x_train = x_train[10000:] y_val = y_train[:10000] partial_y_train = y_train[10000:] """ Explanation: Validating our approach In order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data: End of explanation """ history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val)) """ Explanation: We will now train our model for 20 epochs (20 iterations over all samples in the x_train and y_train tensors), in mini-batches of 512 samples. At the same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the validation_data argument: End of explanation """ history_dict = history.history history_dict.keys() """ Explanation: On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data. Note that the call to model.fit() returns a History object. This object has a member history, which is a dictionary containing data about everything that happened during training. 
Let's take a look at it: End of explanation """ import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure acc_values = history_dict['acc'] val_acc_values = history_dict['val_acc'] plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() """ Explanation: It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy: End of explanation """ model = models.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=4, batch_size=512) results = model.evaluate(x_test, y_test) results """ Explanation: The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network. As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. 
But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set. In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter. Let's train a new network from scratch for four epochs, then evaluate it on our test data: End of explanation """ model.predict(x_test) """ Explanation: Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new data After having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the predict method: End of explanation """
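The binary_crossentropy loss used throughout this notebook is simple enough to compute by hand. A plain-Python sketch (stdlib only, not the Keras implementation) of the per-sample quantity the compile step asks the optimizer to minimize:

```python
from math import log

def binary_crossentropy(y_true, p_pred, eps=1e-7):
    """Per-sample binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]."""
    p = min(max(p_pred, eps), 1 - eps)   # clip to avoid log(0)
    return -(y_true * log(p) + (1 - y_true) * log(1 - p))

# A confident correct prediction is cheap, a confident wrong one is expensive:
assert binary_crossentropy(1, 0.99) < 0.02
assert binary_crossentropy(1, 0.01) > 4.0
# Predicting 0.5 costs log(2) whatever the true label is:
assert abs(binary_crossentropy(0, 0.5) - log(2)) < 1e-12
```

This asymmetry (mild penalty for hedged predictions, steep penalty for confident mistakes) is why crossentropy is usually preferred over mean_squared_error for models that output probabilities.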
obulpathi/datascience
scikit/Chapter 6/OneHotEncoder.ipynb
apache-2.0
X = np.array([[15.9, 1], # from Tokyo [21.5, 2], # from New York [31.3, 0], # from Paris [25.1, 2], # from New York [63.6, 1], # from Tokyo [14.4, 1], # from Tokyo ]) y = np.array([0, 1, 1, 1, 0, 0]) # Don't do this! from sklearn.linear_model import LogisticRegression lr = LogisticRegression(C=100).fit(X, y) lr.score(X, y) lr.coef_ from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder(categorical_features=[1], sparse=False).fit(X) X_one_hot = encoder.transform(X) X_one_hot lr = LogisticRegression().fit(X_one_hot, y) lr.score(X_one_hot, y) """ Explanation: OneHotEncoding for categorical data ``` Users = age, location age is float location in ['Paris', 'Tokyo', 'New York'] ``` End of explanation """ X = np.array([[15.9, 1, 1], # likes puppies from Tokyo [21.5, 0, 2], # doesn't like puppies from New York [31.3, 0, 0], # doesn't like puppies from Paris [25.1, 1, 2], # likes puppies from New York [63.6, 0, 1], [14.4, 1, 1], ]) OneHotEncoder(categorical_features=[1, 2], sparse=False).fit(X).transform(X) from sklearn.cross_validation import train_test_split X_train, X_test = train_test_split(X, random_state=4) print("X_train:\n%s" % X_train) print("\nX_test:\n%s" % X_test) encoder = OneHotEncoder(categorical_features=[1, 2], sparse=False).fit(X_train) encoder.transform(X_test) # BAD OneHotEncoder(categorical_features=[1, 2], sparse=False).fit_transform(X_test) X_train, X_test = train_test_split(X, random_state=1) print("X_train:\n%s" % X_train) print("\nX_test:\n%s" % X_test) encoder = OneHotEncoder(categorical_features=[1, 2], sparse=False).fit(X_train) encoder.transform(X_test) encoder = OneHotEncoder(categorical_features=[1, 2], sparse=False, n_values=[2, 3]).fit(X_train) encoder.transform(X_test) """ Explanation: ``` Users = age, location age is float likes puppies in ['yes', 'no'] location in ['Paris', 'Tokyo', 'New York'] ``` End of explanation """
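The fit-on-train, transform-on-test discipline demonstrated above can be mimicked without scikit-learn. A small plain-Python sketch (with hypothetical city values, not the sklearn API) of why the encoder must be fit on the training split only, so that the column layout, like n_values above, is fixed before any test data arrives:

```python
def fit_one_hot(values):
    """Learn a fixed category -> column mapping from the training data only."""
    categories = sorted(set(values))
    return {c: i for i, c in enumerate(categories)}

def transform_one_hot(values, mapping):
    """Encode with the *training* mapping; output width never changes.
    Categories unseen during fit simply get an all-zero row."""
    width = len(mapping)
    return [[1 if mapping.get(v) == i else 0 for i in range(width)]
            for v in values]

train_city = ['Tokyo', 'New York', 'Paris', 'New York']
test_city = ['Paris', 'Tokyo']

mapping = fit_one_hot(train_city)              # fit on train only
encoded_test = transform_one_hot(test_city, mapping)

assert len(encoded_test[0]) == 3               # width fixed by the training fit
assert encoded_test[1][mapping['Tokyo']] == 1  # test rows reuse train columns
```

Fitting a fresh encoder on the test split (the "# BAD" cell above) can silently change the number and the meaning of the columns, which is exactly what the fixed mapping prevents here.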
computational-class/computational-communication-2016
code/18.network analysis of tianya bbs.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt dtt = [] with open('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_network.txt', 'r') as f: for line in f: pnum, link, time, author_id, author, content = line.replace('\n', '').split('\t') dtt.append([pnum, link, time, author_id, author, content]) len(dtt) import pandas as pd dt = pd.DataFrame(dtt) dt=dt.rename(columns = {0:'page_num', 1:'link', 2:'time', 3:'author',4:'author_name', 5:'reply'}) dt[:5] # extract date from datetime date = map(lambda x: x[:10], dt.time) dt['date'] = pd.to_datetime(date) dt[:5] import pandas as pd df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None) df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'}) df[:2] from collections import defaultdict link_user_dict = defaultdict(list) for i in range(len(dt)): link_user_dict[dt.link[i]].append(dt.author[i]) df['user'] = [len(link_user_dict[l]) for l in df.link] df[:2] import statsmodels.api as sm import numpy as np x = np.log(df.user+1) y = np.log(df.reply+1) xx = sm.add_constant(x, prepend=True) res = sm.OLS(y,xx).fit() constant,beta = res.params r2 = res.rsquared fig = plt.figure(figsize=(8, 4),facecolor='white') plt.plot(df.user, df.reply, 'rs', label= 'Data') plt.plot(np.exp(x), np.exp(constant + x*beta),"-", label = 'Fit') plt.yscale('log');plt.xscale('log') plt.xlabel(r'$Users$', fontsize = 20) plt.ylabel(r'$Replies$', fontsize = 20) plt.text(max(df.user)/300,max(df.reply)/20, r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2))) plt.legend(loc=2,fontsize=10, numpoints=1) plt.axis('tight') plt.show() x = np.log(df.user+1) y = np.log(df.click+1) xx = sm.add_constant(x, prepend=True) res = sm.OLS(y,xx).fit() constant,beta = res.params r2 = res.rsquared fig = plt.figure(figsize=(8, 4),facecolor='white') plt.plot(df.user, df.click, 'rs', label= 'Data') plt.plot(np.exp(x), np.exp(constant + x*beta),"-", 
label = 'Fit') plt.yscale('log');plt.xscale('log') plt.xlabel(r'$Users$', fontsize = 20) plt.ylabel(r'$Clicks$', fontsize = 20) plt.text(max(df.user)/300,max(df.click)/20, r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2))) plt.legend(loc=2,fontsize=10, numpoints=1) plt.axis('tight') plt.show() # convert str to datetime format dt.time = pd.to_datetime(dt.time) dt['month'] = dt.time.dt.month dt['year'] = dt.time.dt.year dt['day'] = dt.time.dt.day type(dt.time[0]) d = dt.year.value_counts() dd = pd.DataFrame(d) dd = dd.sort_index(axis=0, ascending=True) ds = dd.cumsum() def getDate(dat): dat_date_str = map(lambda x: str(x) +'-01-01', dat.index) dat_date = pd.to_datetime(dat_date_str) return dat_date ds.date = getDate(ds) dd.date = getDate(dd) fig = plt.figure(figsize=(12,5)) plt.plot(ds.date, ds.year, 'g-s', label = '$Cumulative\: Number\:of\: Threads$') plt.plot(dd.date, dd.year, 'r-o', label = '$Yearly\:Number\:of\:Threads$') #plt.yscale('log') plt.legend(loc=2,numpoints=1,fontsize=13) plt.show() """ Explanation: An Introduction to Network Science: Network Analysis of Reply Threads on the Tianya BBS. 王成军 (Wang Chengjun) wangchengjun@nju.edu.cn 计算传播网 (Computational Communication) http://computational-communication.com End of explanation """ dt.reply[:55] """ Explanation: Extract @ End of explanation """ import re tweet = u"//@lilei: dd //@Bob: cc//@Girl: dd//@魏武: \ 利益所致 自然念念不忘//@诺什: 吸引优质 客户,摆脱屌丝男!!!//@MarkGreene: 转发微博" RTpattern = r'''//?@(\w+)''' for word in re.findall(RTpattern, tweet, re.UNICODE): print word RTpattern = r'''@(\w+)\s''' tweet = u"@lilei: dd @Bob: cc @Girl: dd @魏武: \ 利益所致 自然念念不忘 //@诺什: 吸引优质 客户,摆脱屌丝男!!!" 
for word in re.findall(RTpattern, tweet, re.UNICODE): print word # dt.reply[11].decode('utf8'), re.UNICODE) if re.findall(RTpattern, dt.reply[0].decode('utf8'), re.UNICODE): print True else: print False for k, tweet in enumerate(dt.reply[:100]): tweet = tweet.decode('utf8') RTpattern = r'''@(\w+)\s''' for person in re.findall(RTpattern, tweet, re.UNICODE): print k,'\t',dt.author_name[k],'\t', person,'\t\t', tweet[:30] print dt.reply[80] link_author_dict = {} for i in range(len(df)): link_author_dict[df.link[i]] =df.author[i] graph = [] for k, tweet in enumerate(dt.reply): tweet = tweet.decode('utf8') url = dt.link[k] RTpattern = r'''@(\w+)\s''' persons = re.findall(RTpattern, tweet, re.UNICODE) if persons: for person in persons: graph.append([dt.author_name[k].decode('utf8'), person]) else: graph.append( [dt.author_name[k].decode('utf8'), link_author_dict[url].decode('utf8')] ) len(graph) for x, y in graph[:3]: print x, y import networkx as nx G = nx.DiGraph() for x,y in graph: if x != y: G.add_edge(x,y) nx.info(G) GU=G.to_undirected(reciprocal=True) graphs = list(nx.connected_component_subgraphs(GU)) import numpy as np size = [] for i in graphs: size.append(len(i.nodes())) len(size), np.max(size) gs = [] for i in graphs: if len(i.nodes()) >5: gs.append(i) len(gs) for g in gs: print len(g.nodes()) g_max = gs[0] len(g_max.nodes()) pos = nx.spring_layout(g_max) # define a layout (the spring layout here; other layout styles are introduced later, and they produce visibly different figures) nx.draw(g_max,pos,with_labels=False,node_size = 30) # draw the graph; with_labels controls whether nodes carry their labels (ids), node_size is the node diameter plt.show() # display the figure with open('/Users/chengjun/github/cjc2016/data/tianya_network_120.csv', 'a') as f: for x, y in g_max.edges(): f.write(x.encode('utf8') + ',' + y.encode('utf8') + '\n') """ Explanation: @贾也2012-10-297:59:00  导语:人人宁波,面朝大海,春暖花开  ........ @兰质薰心2012-10-2908:55:52  楼主好文!  相信政府一定有能力解决好这些... 回复第20楼,@rual_f  "我相信官场中,许多官员应该葆有社会正能量"  通篇好文,顶... End of explanation """
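The @-extraction logic above (written for Python 2) can be condensed into a small Python 3 sketch: pull mentioned names with the same kind of regex, fall back to the thread author when a reply mentions nobody, and count in-degree with a Counter. The author names and replies below are made up for illustration:

```python
import re
from collections import Counter

RT_PATTERN = re.compile(r'@(\w+)\s')

def reply_edges(author, reply, thread_author):
    """Edges author -> mentioned user(s); fall back to the thread author."""
    mentioned = RT_PATTERN.findall(reply)
    targets = mentioned if mentioned else [thread_author]
    return [(author, t) for t in targets]

# Hypothetical thread: 'op' started it, others reply.
edges = []
edges += reply_edges('alice', 'I agree with @bob completely', 'op')
edges += reply_edges('carol', 'great thread!', 'op')   # no mention -> reply to op

assert ('alice', 'bob') in edges
assert ('carol', 'op') in edges
in_degree = Counter(dst for _, dst in edges)           # who receives replies
assert in_degree['op'] == 1
```

The fallback edge is the same design choice as link_author_dict above: a reply that mentions no one is treated as addressed to the thread's original author, so every reply contributes at least one edge to the network.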
InsightLab/data-science-cookbook
2019/12-spark/12-spark-intro/bruno_mourao_spark1.ipynb
mit
from random import * from math import sqrt inside=0 n=1000 for i in range(0,n): x=random() y=random() if sqrt(x*x+y*y)<=1: inside+=1 pi=4*inside/n print(pi) from random import * from math import sqrt def soma(a,b): return a+b def area(x,y):return 1 if sqrt(x*x+y*y)<=1 else 0 def mapfunction(z): x=random() y=random() return area(x,y) nummin = sc.defaultMinPartitions * 10000 data = sc.parallelize(range(1, nummin)) res1 = data.map(mapfunction) print(res1.reduce(soma)) pispark = 4*(res1.reduce(soma))/(res1.count()) print(pispark) """ Explanation: Hands-on! In this exercise we suggest a few small examples for you to implement on Spark. Estimating Pi There is an algorithm for estimating Pi with random numbers. Implement it on Spark. Description of the algorithm: http://www.eveandersson.com/pi/monte-carlo-circle Implementation IN PYTHON (not on SPARK): http://www.stealthcopter.com/blog/2009/09/python-calculating-pi-using-random-numbers/ The number of points should be 100,000 (one hundred thousand) times the default minimum number of partitions of your SparkContext (sc.defaultMinPartitions). These points should be drawn at random in the map stage (see the notes). Notes: use the map function (to map each sample to 0 or 1, meaning 1 when the random point falls inside the circle and 0 otherwise) and reduce (to sum the occurrences). 
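Before running this on a cluster, the same map/reduce decomposition can be checked locally with Python's built-in map and functools.reduce. A sketch with no SparkContext, where a seeded local random generator stands in for the distributed sampling:

```python
import random
from functools import reduce
from math import sqrt

def in_circle(_):
    """Map step: draw one random point in the unit square,
    return 1 if it falls inside the quarter circle."""
    x, y = random.random(), random.random()
    return 1 if sqrt(x * x + y * y) <= 1 else 0

random.seed(0)                      # deterministic, purely for the example
n = 100_000
inside = reduce(lambda a, b: a + b, map(in_circle, range(n)))
pi_estimate = 4 * inside / n

assert 3.0 < pi_estimate < 3.3      # Monte Carlo: close to pi, never exact
```

The fraction of points inside the quarter circle approaches pi/4, which is exactly what the reduce-then-divide step at the end recovers; on Spark, map and reduce distribute the same two stages over partitions.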
End of explanation """ def primes( n ): # i sera nosso divisor inicial i = 1; # j sera nosso contador de ocorrências j = 0; #Nenhum numero real vai ser divisivel por um numero maior do que sua metade n1 = (n/2); while (i <= n): if (n % i==0): i = i+1; j = j+1; if (i>=n1): # damos a i, o valor da variavel entrada, pois o próximo divisor sera o próprio número i = n; i = i+1; j = j+1; else: i = i+1; if(j==2): return True else: return False data = sc.parallelize(range(1, 1000000)) res = data.filter(primes) print(res.take(100)) """ Explanation: Filtragem de Primos Dado uma sequência de números de 1 a 1000000, filtre somente os primos dessa sequência. End of explanation """ def mapfun(x): return [x[0],x[1]] lines = sc.textFile("municipios_do_Brasil.csv") data = lines.map(lambda line: line.split(",")).map(mapfun) print(data.take(10)) data.sortByKey() print(data.take(20)) Counts = data.groupByKey().map(lambda x : (x[0], list(x[1]))) print(Counts.take(1)) new = Counts.map(lambda tup: (tup[0], len(tup[1]))) print(new.take(4)) """ Explanation: Municípios do Brasil Dado o dataset mucipios_do_Brasil.csv, faça duas operações com ele: Monte uma lista dos municípios por estado. Conte quantos municípios há em cada estado. Dicas: use as operações groupByKey e reduceByKey, não faça um count na lista da operação 1. 
End of explanation """ def filterfun(w): stopwords = set(open('stopwords.pt').read().split()) if w[0] in stopwords: return False else: return True text_file = sc.textFile("Machado-de-Assis-Memorias-Postumas.txt") counts = text_file.flatMap(lambda line: line.split(" ")) \ .map(lambda word: (word, 1)) \ .reduceByKey(lambda a, b: a + b) counts = counts.filter(filterfun) counts = counts.takeOrdered(100, key = lambda x: -x[1]) print(counts) """ Explanation: Word Count Memória Postumas de Brás Cubas Memórias Póstumas de Brás Cubas é um romance escrito por Machado de Assis, desenvolvido em princípio como folhetim, de março a dezembro de 1880, na Revista Brasileira, para, no ano seguinte, ser publicado como livro, pela então Tipografia Nacional. A obra retrata a escravidão, as classes sociais, o cientificismo e o positivismo da época. Dada essas informações, será que conseguimos idenficar essas características pelas palavras mais utilizadas em sua obra? Utilizando o dataset Machado-de-Assis-Memorias-Postumas.txt, faça um word count e encontre as palavras mais utilizadas por Machado de Assis em sua obra. Não esqueça de utilizar stopwords.pt para remover as stop words! End of explanation """
JasonNK/udacity-dlnd
autoencoder/Convolutional_Autoencoder_Solution.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation """ inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same') # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 decoded = tf.nn.sigmoid(logits, name='decoded') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = 
tf.train.AdamOptimizer(0.001).minimize(cost) """ Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. <img src='assets/convolutional_autoencoder.png' width=500px> Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. 
This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. 
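What tf.image.resize_nearest_neighbor does to a feature map can be reproduced by hand on a tiny matrix: each value is simply copied into a block of the enlarged output. A plain-Python sketch of 2x nearest-neighbor upsampling, with nested lists standing in for tensors:

```python
def upsample_nearest(grid, factor=2):
    """Nearest-neighbor resize: every cell becomes a factor x factor block."""
    out = []
    for row in grid:
        stretched = [v for v in row for _ in range(factor)]      # repeat columns
        out.extend([list(stretched) for _ in range(factor)])     # repeat rows
    return out

small = [[1, 2],
         [3, 4]]
big = upsample_nearest(small)

assert big == [[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]]
```

Because the copied blocks are piecewise constant, the upsampled map is blocky on its own; that is why each resize in the decoder is followed by a convolution, which learns to smooth and recombine the repeated values.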
End of explanation """ sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() """ Explanation: Training As before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
End of explanation """ inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same') # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same') # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x16 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 decoded = tf.nn.sigmoid(logits, name='decoded') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 200 # Set's how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * 
np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) """ Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation """
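The noise-then-clip corruption used above can be pulled out into a small standalone helper for experimenting with different noise levels. This is a sketch; the `add_noise` name and the seeded generator are illustrative additions, not part of the original notebook:

```python
import numpy as np

def add_noise(imgs, noise_factor=0.5, seed=None):
    """Add zero-mean Gaussian noise scaled by noise_factor,
    then clip back into the valid pixel range [0, 1]."""
    rng = np.random.RandomState(seed)
    noisy = imgs + noise_factor * rng.randn(*imgs.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Raising `noise_factor` makes the denoising task harder; the clip keeps the corrupted pixels in the same range the sigmoid decoder outputs.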
musketeer191/job_analytics
.ipynb_checkpoints/dups_filter-checkpoint.ipynb
gpl-3.0
df = df.sort_values(['employer_name', 'doc']) print('# posts bf filtering dups: %d' %df.shape[0]) df.head(10) df = df.drop_duplicates(['employer_name', 'doc']) print('# posts after filtering dups: %d' %df.shape[0]) df.head(10) df = df.reset_index() df.head() df.to_csv(SKILL_DAT + 'uniq_doc_index.csv', index=False) """ Explanation: Removing reposts End of explanation """ df = pd.read_csv(SKILL_DAT + 'uniq_doc_index.csv') df = df.set_index('index') with(open(LDA_DIR + 'doc_topic_distr.mtx', 'r')) as f: doc_topic_distr = mmread(f) # Global settings for all cluster plots abbv_clusters = ['FIN', 'HWR', 'SCI', 'PROD', 'CON', 'LEG', 'CUS', 'LOG', 'MKT', 'DEV', 'MAN', 'HOS', 'AUD', 'COM', 'HR'] x = range(1, 16); labels = abbv_clusters def getPosts2Check(job_title, industry, n_pair=10): sub_df = df[(df['title'] == job_title) & (df['industry'] == industry)] # Drop duplicated posts due to reposting scores = calScore(sub_df, doc_topic_distr) scores = scores.query('sim_score < 1').sort_values('sim_score', ascending=False) k_pairs = scores.head(n_pair) ids2check = np.unique(list(k_pairs['job_id1']) + list(k_pairs['job_id2'])) print('# posts to check: %d' %len(ids2check)) posts2check = sub_df[sub_df['job_id'].isin(ids2check)] return posts2check.sort_values('employer_name') # return k_pairs def plotClusterDists(posts, figsize): n_post = posts.shape[0] fig, axarr = plt.subplots(n_post, sharex=True, figsize=figsize) for r in range(n_post): plt.subplot(n_post, 1, r+1) plotClusterDistAtRow(r, posts, doc_topic_distr) # Show ylabel at the middle if r==(n_post/2 - 1): plt.ylabel('Probability', fontsize=24) ## Fine tune the fig fig.subplots_adjust(hspace=.5) # Hide xticks on all subplots except the last one hide_xticks(fig) # Provide xtick labels only for the last subplot plt.xticks(x, labels, rotation=45) # Show xlabel only at the last subplot plt.xlabel('Skill Clusters', fontsize=20) return fig posts2check = getPosts2Check(job_title="Software Engineer", industry="Financial and Insurance 
Activities", n_pair=5) fig = plotClusterDists(posts2check, figsize=(6, 10)) # plt.savefig(LDA_DIR + 'fig/se_in_fin2.pdf') plt.show(); plt.close() """ Explanation: A careful check also reveals a renaming of job title (from Marine Superintendent to Fleet Manager, employer: "K" LINE LOGISTICS SINGAPORE), though the posts are the same. This is also interesting and may need further analysis later. However, we first need to re-plot cluster distributions again to see if we really get rid of dups. End of explanation """ sample_employers = ["Capital Match Holdings Pte. Ltd.", "Comfortdelgro Corporation Limited", "Fujitsu Asia Pte Ltd", "Millennium Capital Management (Singapore) Pte. Ltd."] sample_employers = map(str.upper, sample_employers) sample_se_posts = df[(df.employer_name.isin(sample_employers)) & (df.title == "Software Engineer")] se_fig = plotClusterDists(sample_se_posts) plt.show(); plt.close() dups = df[(df.employer_name.isin(sample_employers)) & (df.title == "Software Engineer")] dups = dups.sort_values('employer_name') """ Explanation: Duplications seem still exist! Let us examine further. Sample posts for SE: End of explanation """ dups.to_csv(RES_DIR + 'tmp/se_dups.csv', index=False) dups.drop_duplicates(['doc']).to_csv(RES_DIR + 'tmp/se_no_dups2.csv', index=False) """ Explanation: Save duplicated and unique versions to files for further analysis: End of explanation """ se_in_infocom = getPosts2Check(job_title='Software Engineer', industry='Information and Communications') fig = plotClusterDists(se_in_infocom) plt.savefig(LDA_DIR + 'fig/se_in_infocom2.pdf') plt.show(); plt.close() """ Explanation: By reading the job posts of Capital Match Holdings, we found that they are actually different. However, the differences in their contents are subtle. Cluster distribution of the last post is a bit different from the first two because the last one has no salary detail while the first two provide salary and equity. 
A closer look at the skill sets in these posts reveals: skill set of the first 2 posts = skill set of the last post + {equity} salary in 1st post is more than salary in 2nd post (4K vs. 3K) Other examples of suspicious dups: End of explanation """ posts = getPosts2Check(job_title="Administrative Assistant", industry="Financial and Insurance Activities", n_pair=5) fig = plotClusterDists(posts, figsize=(6, 12)) # plt.savefig(LDA_DIR + 'fig/admin_in_fin.jpg') plt.show(); plt.close() # Manager posts = getPosts2Check(job_title="Manager", industry="Financial and Insurance Activities", n_pair=5) fig = plotClusterDists(posts, figsize=(6, 12)) plt.savefig(LDA_DIR + 'fig/man_in_fin.jpg') plt.savefig(LDA_DIR + 'fig/man_in_fin.pdf') plt.show(); plt.close() """ Explanation: These examples tell us that we should also filter out posts that are almost identical. Such posts can be detected based on the set of common skills (Jaccard similarity) as in the next section. Detecting Almost-Identical Posts by Common Skills In this section we use Jaccard similarity to detect posts that are almost-identical. We expect that such posts usually come from the same company. Thus, we first analyze the posts under the same company. End of explanation """ cmh = df[df.employer_name == str.upper("Capital Match Holdings Pte. Ltd.")] pairwiseJacSim(df=cmh) """ Explanation: This example confirms that posts under the same company can be very similar: the percentage of overlapping skills can be more than 90%. We want to remove such pairs of posts as they can be the reason consistency scores are over-estimated. The filtering process for a given DF of posts is as follows. + Init an empty list keep + Compute Jaccard similarities of the first post with the remaining posts + Add the 1st post to keep and remove it from remain as well as any post which is too similar to the 1st post + Repeat the process until no more posts are left in remain. End of explanation """ def filterSimPosts(i, employer='Capital Match Holdings Pte. Ltd.', sim_thres=.9): def doFilter(df): keep, remain = [], df t0 = time() # While there are at least 2 posts, we need to check while (remain.shape[0] >= 2): res = jacSim2Others(0, remain) job_id = remain.iloc[0]['job_id'] keep.append(job_id) # In the remain, keep only posts that are sig.
diff from post[0] sig_diff_posts = list(res.query('jacSim < {}'.format(sim_thres))['job_id2']) remain = remain[remain['job_id'].isin(sig_diff_posts)] # If only 1 post remains then we can just keep it as there are no more posts too similar to it if (remain.shape[0] == 1): keep.append(remain.iloc[0]['job_id']) print('\t done after {}s'.format(round(time()-t0, 1))) return df[df['job_id'].isin(keep)] sub_df = df[df.employer_name == employer.upper()]; n_post = sub_df.shape[0] # if (i % 100 == 0): print('\t{}, {}, {} posts'.format(i, employer, n_post)) return doFilter(sub_df) tmp = filterSimPosts(0, employer='Capital Match Holdings Pte. Ltd.', sim_thres=.9) pairwiseJacSim(tmp) tmp = filterSimPosts(1, employer='Apar Technologies Pte. Ltd.', sim_thres=.9) pairwiseJacSim(tmp) by_employer = df.groupby('employer_name').agg({'job_id': 'nunique'}) by_employer = by_employer.add_prefix('n_').reset_index() # by_employer.n_job_id.describe().round(1) by_employer = by_employer.sort_values('n_job_id') employers = list(by_employer.query('n_job_id >= 2')['employer_name']) len(employers) print('Filtering too similar posts in {} companies...'.format(len(employers))) def filterPosts4Employers(start=0, end=100): t0 = time() frames = [filterSimPosts(i+start, emp, sim_thres=.9) for i, emp in enumerate(employers[start:end])] filtered_df = pd.concat(frames) print('Done after %.1fs' %(time() - t0)) return filtered_df filtered_df_1 = filterPosts4Employers(start=0, end=3000) filtered_df_2 = filterPosts4Employers(start=3000, end=5000) filtered_df_3 = filterPosts4Employers(start=5000, end=6000) # Defensive save filtered_df.to_csv(SKILL_DAT + 'filter_doc_index.csv', index=False) filtered_df_4 = filterPosts4Employers(start=6000, end=len(employers)-5) filtered_df_6 = filterPosts4Employers(start=len(employers)-5, end=len(employers)-4) filtered_df_5 = filterPosts4Employers(start=len(employers)-4, end=len(employers)) filtered_df = pd.concat([filtered_df_1, filtered_df_2, filtered_df_3, filtered_df_4, 
filtered_df_5, filtered_df_6]) print(filtered_df.shape) filtered_df = filtered_df.reset_index() filtered_df.to_csv(SKILL_DAT + 'filter_doc_index.csv', index=False) df = df.dropna() df.to_csv(SKILL_DAT + 'uniq_doc_index.csv', index=False) dbs = df[df.employer_name == 'DBS BANK LTD.'] dbs.iloc[541] # dbs = dbs.drop(dbs.index[541]) for r, skill_str in enumerate(dbs.occur_skills): print((r, skill_str.split())) sample_employers = ["Capital Match Holdings Pte. Ltd.", "Comfortdelgro Corporation Limited", "Fujitsu Asia Pte Ltd", "Millennium Capital Management (Singapore) Pte. Ltd."] def plotSamplePosts(employers, df, figsize): sample_employers = map(str.upper, employers) sample_posts = df[(df.employer_name.isin(sample_employers)) & (df.title == "Software Engineer")] return plotClusterDists(sample_posts, figsize) fig = plotSamplePosts(sample_employers, df=filtered_df, figsize=(6, 8)) plt.savefig(LDA_DIR + 'fig/se_in_fin3.jpg') plt.savefig(LDA_DIR + 'fig/se_in_fin3.pdf') plt.show(); plt.close() filtered_df.shape[0] # Statistics check posts = pd.read_csv(DATA_DIR + 'full_job_posts.csv') print(posts.shape) posts.head() jd_df = pd.read_csv('d:/larc_projects/job_analytics/data/raw/jd.csv') print(jd_df.shape) jd_df.head() tmp = pd.merge(posts, jd_df) tmp = tmp.drop_duplicates() tmp.shape tmp = tmp.drop_duplicates(['employer_name', 'job_description_clob']) tmp.shape """ Explanation: This example confirms that posts under the same company can be very similar: the percentage of overlapping skills can be more than 90%. We want to remove such pairs of posts as they can be the reason consistency scores are over-estimated. The filtering process for a given DF of posts is as follows. + Init an empty list keep + Compute Jaccard similarities of the first post with the remaining posts + Add the 1st post to keep and remove it from remain as well as any post which is too similar to the 1st post + Repeat the process until no more posts are left in remain. End of explanation """
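The keep/remain procedure described above can be sketched in isolation. The helpers below are illustrative stand-ins (not the notebook's pairwiseJacSim/jacSim2Others utilities), assuming each post is represented by its set of skills:

```python
def jaccard_sim(skills_a, skills_b):
    # |A intersect B| / |A union B| over the two posts' skill sets.
    a, b = set(skills_a), set(skills_b)
    if not (a | b):
        return 0.0
    return len(a & b) / float(len(a | b))

def filter_similar(posts, sim_thres=0.9):
    # Greedy filter mirroring the keep/remain steps above:
    # keep the first post, drop every remaining post that is
    # too similar to it, then repeat on what is left.
    keep, remain = [], list(posts)
    while remain:
        first = remain[0]
        keep.append(first)
        remain = [p for p in remain[1:] if jaccard_sim(first, p) < sim_thres]
    return keep
```

With `sim_thres=0.9`, two posts sharing more than 90% of their skills collapse into one, which is exactly the over-estimation risk discussed above.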
NifTK/NiftyNet
demos/PROMISE12/PROMISE12_Demo_Notebook.ipynb
apache-2.0
import os,sys niftynet_path=r'path/to/NiftyNet' os.chdir(niftynet_path) """ Explanation: PROMISE12 prostate segmentation demo Preparation: 1) Make sure you have set up the PROMISE12 data set. If not, download it from https://promise12.grand-challenge.org/ (registration required) and run data/PROMISE12/setup.py 2) Make sure you are in NiftyNet root, setting niftynet_path correctly to the path with the niftynet folder in it End of explanation """ import pip #pip.main(['install','-r','requirements-gpu.txt']) pip.main(['install','-r','requirements-cpu.txt']) pip.main(['install', 'SimpleITK>=1.0.0']) """ Explanation: 3) Make sure you have all the dependencies installed (replacing gpu with cpu for cpu-only mode): End of explanation """ import os import sys import niftynet sys.argv=['','train','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_train_config.ini'),'--max_iter','10'] niftynet.main() """ Explanation: Training a network from the command line The simplest way to use NiftyNet is via the commandline net_segment.py script. Normally, this is done on the command line with a command like this from the NiftyNet root directory: python net_segment.py train --conf demos/PROMISE12/promise12_demo_train_config.ini --max_iter 10 Notice that we use a configuration file that is specific to this experiment. This file contains default settings. Also note that we can override these settings on the command line. To execute NiftyNet from within the notebook, you can run the following python code: End of explanation """ import os import sys import niftynet sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini')] niftynet.main() """ Explanation: Now you have trained (a few iterations of) a deep learning network for medical image segmentation.
If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out, by running the following command python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini or the following python code in the Notebook End of explanation """ import os import sys import niftynet sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini'), '--model_dir', os.path.join('demos','PROMISE12','pretrained')] niftynet.main() """ Explanation: Otherwise, you can load up some pre-trained weights for the network: python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini --model_dir demos/PROMISE12/pretrained or the following python code in the Notebook End of explanation """
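Since the cells above rebuild sys.argv by hand for each run, the construction can be factored into a tiny helper. This is a sketch; `build_argv` is an illustrative name, not part of the NiftyNet API:

```python
def build_argv(action, config, **overrides):
    # Mirror the argv lists assembled in the cells above:
    # ['', action, '-a', 'net_segment', '--conf', config, ...extra flags]
    argv = ['', action, '-a', 'net_segment', '--conf', config]
    for flag, value in sorted(overrides.items()):
        argv += ['--' + flag, str(value)]
    return argv

# e.g. sys.argv = build_argv('train', 'promise12_demo_train_config.ini', max_iter=10)
# followed by niftynet.main(), as in the cells above.
```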
michael-isaev/cse6040_qna
PythonQnA_8_comprehensions.ipynb
apache-2.0
from numpy.random import randint import matplotlib.pyplot as plt %matplotlib inline S = randint(low=0, high=11, size=15) # 15 random integers b/w 0 and 10 def f(x): """ Dummy function - returns identity """ return x """ Explanation: Reading, and writing, comprehension(s) Before we delve into the topic of comprehensions, here is a bit of setup code. End of explanation """ print("1. S == {}".format(S)) y1 = [f(x) for x in S] print("2. All (x, f(x)) pairs: {}".format(list(zip(S, y1)))) plt.scatter(S, y1) """ Explanation: List comprehensions List comprehensions are a special syntax for compactly generating lists. Historically, they come from a programming style referred to as functional programming. A list comprehension can always be expanded into procedural statements using loops. Although comprehensions have a slight advantage in performance when compared to loops, this doesn't mean that you should always prefer comprehensions over procedural code. Too much syntactic sugar can be hazardous to your (program's) health, in the sense of making it hard to read. Common patterns with list comprehensions in a single variable The following sections contain examples of common patterns where list comprehensions are useful. The patterns described here are by no means exhaustive. Rather, they are meant to act as a solution template for common problems. A typical use of a list comprehension in a single variable is to expand statements in mathematics known as (universal) quantifiers, or "for-all" statements. Example 1 The following mathematical statement, $$ y = f(x) \ \ \forall x \in S, $$ is usually read as, "for all elements $x$ from a collection $S$, (let) $y$ equal $f(x)$." Statements of this form have natural translations into Python using list comprehensions. To wit: End of explanation """ y2 = [] for x in S: y2.append(f(x)) print("3.
All (x, f(x)) pairs: {}".format(list(zip(S, y2)))) assert y1 == y2 """ Explanation: As you can see, the translation from the math to code is natural. In a procedural or iterative style, an equivalent program might look like the following. End of explanation """ y1 = [0 if x <= 5 else f(x) for x in S] print(*zip(S,y1)) plt.scatter(S, y1) """ Explanation: Example 2 The syntax extends nicely in the presence of conditionals. For instance, consider the following: \begin{align} y &= \begin{cases} 0 & x \leq 5 \ f(x) & \text{otherwise} \end{cases}\ &\forall x \in S \end{align} The list comprehension-based code might look as follows: End of explanation """ y2 = [] for x in S: if x <= 5: y2.append(0) else: y2.append(f(x)) print(*zip(S, y2)) assert y1 == y2 print("Passed!") """ Explanation: NOTE: This is not different from the first pattern in syntactic terms. This is a trick based on the ternary expressions in Python. The procedural equivalent of this code is shown below. End of explanation """ y1 = [f(x) for x in S if x <= 5] s = [x for x in S if x <= 5] print(*zip(s,y1)) # Note how the output range has been modified due to the change in input range plt.scatter(s, y1) """ Explanation: The two patterns shown in examples 1 and 2 can be generalised to the following pattern. ```python output_list = [expression(i) for i in some_iterable] ``` Example 3 Suppose we wish to construct a list from a subset of the elements of $S$. That is, let $R \subseteq S$ and consider \begin{align} y = f(x) \ \forall x \in R, \mbox{ where } R \subseteq S. \end{align} As this notation indicates, we are interested in the function's value for only a subset of the input space, namely $R \subseteq S$. The subset can be seen as imposing a condition on the input space. For the purpose of this example, we will use $R = {x: x \leq 5, x \in S}$. 
End of explanation """ y2 = [] for x in S: if x <= 5: y2.append(f(x)) assert y2 == y1 print(*zip(S, y2)) print("Passed!") """ Explanation: This pattern is syntactically different from the previous pattern. It can be generalized as python output_list = [expression(i) for i in some_iterable if condition(i)] List comprehensions with two variables Comprehensions can also be extended to multiple variables. The rules discussed in the previous section also apply to multivariable comprehensions. The main thing you need to remember for multivariable comprehensions is that the last for clause in the comprehension (the innermost loop) varies the fastest. Example 4 For example, imagine a matrix $C$ whose elements are given by \begin{align} c_{i,j} &= g(i,j) \ i &\in 0\cdots2,\ j \in 0\cdots2 \end{align} We can create the (flattened) matrix in a single list comprehension using the following code. End of explanation """ import numpy as np def g(i, j): """ Returns the result of division of indices """ return (i + 1) / (j + 1) C1 = [g(i,j) for i in range(0,3) for j in range(0,3)] # replace g with any function that you want print(C1) print(np.array(C1).reshape(3,3)) """ Explanation: The procedural equivalent of this code is shown below. End of explanation """ C2 = [] for i in range(3): for j in range(3): C2.append(g(i, j)) print(C2) assert C1 == C2 print("Passed!") """ Explanation: Example 2 also has an equivalent in the two variable case.
Example 5 For example, \begin{align} C &= \begin{cases} g(i,j) & i \neq j \ 0 & i = j \end{cases} \ i &\in 0\cdots2, j \in 0\cdots2 \end{align} End of explanation """ C2 = [] for i in range(3): for j in range(3): if i != j: C2.append(g(i,j)) else: C2.append(0) print(C2) assert C1 == C2 print("Passed!") """ Explanation: Technically, this is the same pattern as the previous example but uses the ternary operator (as shown in example 2). The procedural equivalent is shown below. End of explanation """ C1 = [ (i, j, g(i,j)) for i in range(0,3) for j in range(0,3) if i !=j] print(C1) # note that the input restriction on the diagonals removes the diagonals from the output list """ Explanation: The two examples can be generalized to python output_list = [expr(i,j) for i in iterable1 for j in iterable2] # j varies fastest Restrictions on the input space as shown in example 3 can also extended to the multivariable comprehension. This is illustrated below for the sake of completeness, though the result cannot be displayed as a matrix. Example 6 For example, \begin{align} C &= g(i,j) \ i &\in 0\cdots2,\ j \in 0\cdots2,\ i \neq j \end{align} End of explanation """ C2 = [] for i in range(3): for j in range(3): if i != j: C2.append((i, j, g(i,j))) print(C2) assert C1 == C2 print("Passed!") """ Explanation: The procedural equivalent is shown below. End of explanation """ dict_comp = {x: f(x) for x in S} print(dict_comp) """ Explanation: The pattern can be generalized as python output_list = [expr(i,j) for i in iterable1 for j in iterable2 if condition(i,j)] Comprehensions can be used with even more variables but readability takes a serious hit with more than two variables. See PEP 202 (https://www.python.org/dev/peps/pep-0202/) for more details about list comprehensions. I highly encourage reading PEP documents since you often get the rationale behind a feature in the language straight from the horse's mouth. 
Dictionary comprehensions Some other built-in collections in Python have "comprehensive" analogues. One example is the dictionary comprehension, which is described in PEP 274 (https://www.python.org/dev/peps/pep-0274/). You can use dictionary comprehensions in ways very similar to list comprehensions, except that the output of a dictionary comprehension is, well, a dictionary instead of a list. Mathematically, dictionary comprehensions are suited to representing functions. $$ \begin{align} x \rightarrow f(x), x \in S \end{align} $$ can be translated as End of explanation """ # Bad code [print(i) for i in range(3)] # you know you can do better than that for i in range(3): print(i) # that's better """ Explanation: The patterns discussed in the previous section also apply here. There are mainly two kinds of patterns in the single variable case. python dict_comp1 = {x: expr(x) for x in iterable} dict_comp2 = {x: expr(x) for x in iterable if condition(x)} What not to do with comprehensions 1. Do not use side effects within comprehensions End of explanation """ x1 = [i if i <= 10 else i**2 if 10 < i <= 20 else i**4 if 20 < i <= 50 else 1.0 / i for i in range(100) if i not in (5, 7, 11, 13, 17, 19, 29, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97)] """ Explanation: 2. Do not sacrifice readability over "speed." 
For example, do not write code like the snippet shown below End of explanation """ # procedural code is more readable in this case here x2 = [] for i in range(100): # optimus primes are beyond our reach, https://oeis.org/A217090 if i not in (5, 7, 11, 13, 17, 19, 29, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97): if i <= 10: x2.append(i) elif 10 < i <= 20: x2.append(i**2) elif 20 < i <= 50: x2.append(i**4) else: x2.append(1.0 / i) assert x2 == x1 """ Explanation: The procedural code is more readable compared to the comprehension End of explanation """ def function(val): if val <= 10: return val elif 10 < val <= 20: return val**2 elif 20 < val <= 50: return val**4 else: return 1.0 / val def is_optimus_prime(val): return val in (5, 7, 11, 13, 17, 19, 29, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97) x3 = [function(i) for i in range(100) if not is_optimus_prime(i)] assert x1 == x2 == x3 print("Passed!") """ Explanation: This can be shortened to comprehension for readability with a little bit of refactoring. End of explanation """
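The dictionary-comprehension patterns listed earlier can be exercised in a short runnable sketch (the `expr` function and the sample iterable are illustrative):

```python
def expr(x):
    # Stand-in for any per-element computation.
    return x * x

iterable = [0, 2, 5, 7]
dict_comp1 = {x: expr(x) for x in iterable}
dict_comp2 = {x: expr(x) for x in iterable if x <= 5}
print(dict_comp1)  # {0: 0, 2: 4, 5: 25, 7: 49}
print(dict_comp2)  # {0: 0, 2: 4, 5: 25}
```

As with the list patterns, the conditional version simply restricts the input space before the key/value pairs are built.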
awwong1/nd101
dlnd-project-1/dlnd-your-first-neural-network.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). At the very bottom of the notebook, you'll find some unit tests to check the correctness of your neural network. Be sure to run these before you submit your project. After you've submitted this project, feel free to explore the data and the model more. End of explanation """ data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() """ Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation """ rides[:24*10].plot(x='dteday', y='cnt') """ Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation """ dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() """ Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation """ quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std """ Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation """ # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] """ Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. 
End of explanation """ n_records = features.shape[0] split = np.random.choice(features.index, size=int(n_records*0.8), replace=False) train_features, train_targets = features.ix[split], targets.ix[split] val_features, val_targets = features.drop(split), targets.drop(split) """ Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. It's important to split the data randomly so all cases are represented in both sets. End of explanation """ class NeuralNetwork: def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.learning_rate = learning_rate #### Set this to your implemented sigmoid function #### # TODO: Activation function is the sigmoid function self.activation_function = def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin = 2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = # signals into hidden layer hidden_outputs = # signals from hidden layer # TODO: Output layer final_inputs = # signals into final output layer final_outputs = # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error output_errors = # Output layer error is the difference between desired target and actual output. 
# TODO: Backpropagated error hidden_errors = # errors propagated to the hidden layer hidden_grad = # hidden layer gradients # TODO: Update the weights self.weights_hidden_to_output += # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_inputs = # signals into hidden layer hidden_outputs = # signals from hidden layer # TODO: Output layer final_inputs = # signals into final output layer final_outputs = # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) """ Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation """ ### Set the hyperparameters here ### epochs = 1000 learning_rate = 0.1 hidden_nodes = 10 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) if e%(epochs/10) == 0: # Calculate losses for the training and test sets train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) losses['train'].append(train_loss) losses['validation'].append(val_loss) # Print out the losses as the network is training print('Training loss: {:.4f}'.format(train_loss)) print('Validation loss: {:.4f}'.format(val_loss)) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() """ Explanation: Training the network Here you'll set the hyperparameters for the network. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. 
The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network faster. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. However, it can become too specific to the training set and will fail to generalize to the validation set. This is called overfitting. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. 
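The random-batch idea described above can be shown in isolation. This is an illustrative sketch with made-up index values, not the project's real dataset; the helper name `sample_batch` is invented here, and note the training loop above samples with replacement (NumPy's default) while this sketch samples without.

```python
import numpy as np

rng = np.random.default_rng(42)
train_index = np.arange(1000)  # stand-in for train_features.index

def sample_batch(index, batch_size=128):
    """Draw one SGD mini-batch of row indices, sampling without replacement."""
    return rng.choice(index, size=batch_size, replace=False)

# Each training pass grabs a different random subset of the data,
# which is what makes the gradient estimate "stochastic".
batch_a = sample_batch(train_index)
batch_b = sample_batch(train_index)
```

Because each pass sees only 128 of the 1000 rows, individual updates are noisy but cheap, which is the speed/accuracy trade-off SGD makes.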
End of explanation """ fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) """ Explanation: Check out your predictions Here, use the test data to check that the network is accurately making predictions. If your predictions don't match the data, try adjusting the hyperparameters and check to make sure the forward passes in the network are correct. End of explanation """ import unittest np.random.seed(42) inputs = [0.5, -0.2, 0.1] targets = [0.4] class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path == 'Bike-Sharing-Dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, [ 0.22931895, -1.28754157])) self.assertTrue(np.allclose(network.weights_input_to_hidden, [[-0.7128223, 0.22086344, -0.64139849], [-1.06444693, 1.06268915, -0.17280743]])) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) self.assertTrue(np.allclose(network.run(inputs), 
-0.97900982)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) """ Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below Your answer here Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation """
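The unit-test pattern above — build a `TestCase`, load it with a `TestLoader`, run it with a `TextTestRunner` — can be reproduced on a self-contained piece of the notebook, the `MSE` helper. This is an illustrative sketch (the `TestMSE` class is invented here), not part of the project's test suite:

```python
import numpy as np
import unittest

def MSE(y, Y):
    """Mean squared error, as defined in the notebook."""
    return np.mean((y - Y) ** 2)

class TestMSE(unittest.TestCase):
    # Small sanity checks in the same style as the project's unit tests
    def test_zero_for_perfect_predictions(self):
        self.assertEqual(MSE(np.array([1.0, 2.0]), np.array([1.0, 2.0])), 0.0)

    def test_known_value(self):
        # ((1-1)^2 + (2-4)^2) / 2 = 2.0
        self.assertEqual(MSE(np.array([1.0, 2.0]), np.array([1.0, 4.0])), 2.0)

suite = unittest.TestLoader().loadTestsFromTestCase(TestMSE)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If `result.wasSuccessful()` is true, both assertions passed — the same pass/fail signal the project's runner gives you.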
fevangelista/wicked
tutorials/01-Basics.ipynb
mit
import wicked as w from IPython.display import display, Math, Latex def latex(expr): """Function to render any object that has a member latex() function""" display(Math(expr.latex())) """ Explanation: Basics of Wick&d Loading the module To use wick&d, you first have to import the module wicked. Here we abbreviate it with w for convenience. We also define a function (latex) to display objects in LaTeX format. End of explanation """ osi = w.osi() print(str(osi)) """ Explanation: Defining orbital spaces To use wick&d, you need to specify what type of reference state is used as a vacuum state. This information is stored in the OrbitalsSpaceInfo class, which holds information about the orbital space defined. We get access to this object (a singleton) via the function osi(). Calling the print function, we can see the information about the orbital spaces defined. When wick&d is initialized, no orbital space is defined and no text is printed: End of explanation """ w.reset_space() w.add_space("p", "fermion", "unoccupied", ['a','b','c','d','e','f']) """ Explanation: Defining orbital spaces To define an orbital space one must specify: - The label (a single character, e.g., 'o' for occupied) - The type of operator field (a string, currently the only option is "fermion"). - The type of reference state associated with this space (a string, one of ["occupied", "unoccupied", "general"]) - The pretty indices that we associate with this space (e.g., ['i','j','k'] for occupied orbitals) Wick&d defines three types of occupations associated with each space: - Occupied (occupied): orbitals that are occupied in the vacuum (applies to fermions) - Unoccupied (unoccupied): orbitals that are unoccupied in the vacuum - General (general): orbitals that are partially occupied in the vacuum Examples of different reference states/orbital spaces Physical vacuum The reference state is the physical vacuum state \begin{equation} | - \rangle \end{equation} In this case all orbitals are unoccupied. 
The following code initializes a single orbital space with label p and unoccupied orbitals: End of explanation """ w.reset_space() w.add_space("o", "fermion", "occupied", ['i','j','k','l','m']) w.add_space("v", "fermion", "unoccupied", ['a','b','c','d','e','f']) """ Explanation: Single determinant (Fermi vacuum) The reference state is the determinant \begin{equation} | \Phi \rangle = |\psi_1 \cdots \psi_N \rangle \end{equation} In this case you need to specify only two spaces, occupied and unoccupied orbitals. The following code initializes two spaces: 1) occupied (o) and 2) virtual (v) orbitals, End of explanation """ w.reset_space() w.add_space("c", "fermion", "occupied", ['i','j','k','l','m']) w.add_space("a", "fermion", "general", ['u','v','w','x','y','z']) w.add_space("v", "fermion", "unoccupied", ['a','b','c','d','e','f']) """ Explanation: Linear combination of determinants (Generalized vacuum) This case requires three orbital spaces: core (occupied), active (general), and virtual (unoccupied). The reference is a linear combination of determinants \begin{equation} | \Psi \rangle = \sum_\mu c_\mu | \Phi_\mu \rangle \end{equation} where each determinant $| \Phi_\mu \rangle$ is \begin{equation} | \Phi_\mu \rangle = \underbrace{\hat{a}^\dagger_u \hat{a}^\dagger_v \cdots}_{\text{active}} | \Phi_\mathrm{c} \rangle \end{equation} where the core determinant $| \Phi_\mathrm{c} \rangle$ is defined as \begin{equation} | \Phi_\mathrm{c} \rangle = |\psi_1 \cdots \psi_{N_\mathrm{c}} \rangle. \end{equation} To specify this reference you can use the code: End of explanation """ print(str(osi)) """ Explanation: We can now verify that the orbital spaces have been updated. End of explanation """
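The space specification above — label, operator field, occupation type, and index set — can be mirrored in plain Python for readers without wick&d installed. This is a conceptual mock of the tutorial's Fermi-vacuum setup, not part of the wicked API; the names `OrbitalSpace` and `make_single_determinant_spaces` are invented here.

```python
from dataclasses import dataclass

@dataclass
class OrbitalSpace:
    label: str        # single-character label, e.g. 'o'
    field_type: str   # currently only "fermion" in wick&d
    occupation: str   # "occupied", "unoccupied", or "general"
    indices: list     # pretty indices used when printing tensors

def make_single_determinant_spaces():
    # Mirrors the two w.add_space(...) calls of the Fermi-vacuum example
    return {
        "o": OrbitalSpace("o", "fermion", "occupied", ['i', 'j', 'k', 'l', 'm']),
        "v": OrbitalSpace("v", "fermion", "unoccupied", ['a', 'b', 'c', 'd', 'e', 'f']),
    }
```

The generalized-vacuum case would simply add a third entry with occupation "general", exactly as the three-space code above does.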
CCBatIIT/AlGDock
Example/test_fractional_GB.ipynb
mit
# This is probably due to a unit conversion in a multiplicative prefactor # This multiplicative prefactor is based on nanometers r_min = 0.14 r_max = 1.0 print (1/r_min - 1/r_max) # This multiplicative prefactor is based on angstroms r_min = 1.4 r_max = 10.0 print (1/r_min - 1/r_max) """ Explanation: Igrid[atomI] appears to be off by a factor of 12.6! End of explanation """ 4*np.pi """ Explanation: Switching from nanometers to angstroms makes the multiplicative prefactor smaller, which is opposite of the desired effect! End of explanation """ # This is after multiplication by 4*pi # Sum for atom 0: 1.55022, 2.96246 # Sum for atom 1: 1.56983, 2.96756 # Sum for atom 2: 1.41972, 2.90796 # Sum for atom 3: 1.45936, 3.02879 # Sum for atom 4: 2.05316, 3.32989 # Sum for atom 5: 1.5354, 3.06405 # Sum for atom 6: 1.43417, 3.02438 # Sum for atom 7: 1.85508, 3.21875 # Sum for atom 8: 2.06909, 3.42013 # Sum for atom 9: 2.4237, 4.32524 # Sum for atom 10: 1.9603, 4.54512 # Sum for atom 11: 2.18017, 3.99349 # Sum for atom 12: 2.19774, 3.8152 # Sum for atom 13: 2.02152, 3.7884 # Sum for atom 14: 2.05662, 4.10305 # Sum for atom 15: 2.65659, 4.34157 # Sum for atom 16: 2.81839, 4.63688 # Sum for atom 17: 2.90653, 4.40561 # Sum for atom 18: 2.37779, 4.38092 # Sum for atom 19: 2.17795, 4.01659 # Sum for atom 20: 1.77652, 3.25573 # Sum for atom 21: 1.22359, 3.24223 # Sum for atom 22: 1.2336, 3.3604 # Sum for atom 23: 1.21771, 3.1483 """ Explanation: Igrid[atomI] appears to be off by a factor of 4*pi! End of explanation """
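The factor-of-12.6 observation above is numerically consistent with a stray 4π, and inconsistent with a nm-to-Ångström slip. A quick check, using the same r_min/r_max values as the cells above:

```python
import math

# The suspected stray multiplicative factor
four_pi = 4.0 * math.pi  # ~12.566, matching the observed ~12.6

# Prefactor 1/r_min - 1/r_max in nanometers vs. Angstroms
prefactor_nm = 1.0 / 0.14 - 1.0 / 1.0   # ~6.143
prefactor_ang = 1.0 / 1.4 - 1.0 / 10.0  # ~0.614

# Switching nm -> Angstrom divides the prefactor by exactly 10, so it cannot
# explain a ~12.6x excess; 4*pi matches it to within half a percent.
```

This is why the notebook concludes the discrepancy is a missing 4π (the solid-angle factor), not a unit conversion.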
kirichoi/tellurium
examples/notebooks/core/tellurium_plotting.ipynb
apache-2.0
from __future__ import print_function import tellurium as te te.setDefaultPlottingEngine('matplotlib') %matplotlib inline r = te.loada(''' model feedback() // Reactions: J0: $X0 -> S1; (VM1 * (X0 - S1/Keq1))/(1 + X0 + S1 + S4^h); J1: S1 -> S2; (10 * S1 - 2 * S2) / (1 + S1 + S2); J2: S2 -> S3; (10 * S2 - 2 * S3) / (1 + S2 + S3); J3: S3 -> S4; (10 * S3 - 2 * S4) / (1 + S3 + S4); J4: S4 -> $X1; (V4 * S4) / (KS4 + S4); // Species initializations: S1 = 0; S2 = 0; S3 = 0; S4 = 0; X0 = 10; X1 = 0; // Variable initialization: VM1 = 10; Keq1 = 10; h = 10; V4 = 2.5; KS4 = 0.5; end''') # simulate using variable step size r.integrator.setValue('variable_step_size', True) s = r.simulate(0, 50) # draw the diagram r.draw(width=200) # and the plot r.plot(s, title="Feedback Oscillations", ylabel="concentration", alpha=0.9); """ Explanation: Back to the main Index Draw diagram This example shows how to draw a network diagram, requires graphviz. End of explanation """ import tellurium as te te.setDefaultPlottingEngine('matplotlib') %matplotlib inline import numpy as np import matplotlib.pylab as plt # Load a model and carry out a simulation generating 100 points r = te.loada ('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10') r.draw(width=100) # get colormap # Colormap instances are used to convert data values (floats) from the interval [0, 1] cmap = plt.get_cmap('Blues') k1_values = np.linspace(start=0.1, stop=1.5, num=15) max_k1 = max(k1_values) for k, value in enumerate(k1_values): r.reset() r.k1 = value s = r.simulate(0, 30, 100) color = cmap((value+max_k1)/(2*max_k1)) # use show=False to plot multiple curves in the same figure r.plot(s, show=False, title="Parameter variation k1", xtitle="time", ytitle="concentration", xlim=[-1, 31], ylim=[-0.1, 11]) te.show() print('Reference Simulation: k1 = {}'.format(r.k1)) print('Parameter variation: k1 = {}'.format(k1_values)) """ Explanation: Plotting multiple simulations All plotting is done 
via the r.plot or te.plotArray functions. To plot multiple curves in one figure use the show=False setting. End of explanation """ import tellurium as te te.setDefaultPlottingEngine('matplotlib') %matplotlib inline r = te.loadTestModel('feedback.xml') r.integrator.variable_step_size = True s = r.simulate(0, 50) r.plot(s, logx=True, xlim=[10E-4, 10E2], title="Logarithmic x-Axis with grid", ylabel="concentration"); """ Explanation: Logarithmic axis The axis scale can be adapted with the xscale and yscale settings. End of explanation """
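The colormap call in the parameter-variation loop above maps each k1 value into a fraction in [0, 1] before asking the colormap for a color. That normalization can be checked in isolation, without any plotting; the helper name `cmap_fraction` is invented for this sketch.

```python
import numpy as np

def cmap_fraction(value, max_value):
    """Map value from [-max_value, max_value] onto [0, 1], as in the
    (value + max_k1) / (2 * max_k1) expression of the loop above."""
    return (value + max_value) / (2.0 * max_value)

# Same parameter sweep as the tellurium example
k1_values = np.linspace(start=0.1, stop=1.5, num=15)
fractions = cmap_fraction(k1_values, k1_values.max())
```

Because all k1 values here are positive, the fractions land in the upper half of the colormap (roughly 0.53 to 1.0), which is why the curves all come out as medium-to-dark blues.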
ES-DOC/esdoc-jupyterhub
notebooks/cmcc/cmip6/models/sandbox-2/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: CMCC Source ID: SANDBOX-2 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:50 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat conservation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water conservation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. 
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")

# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")

# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")

# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")

# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"

# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2
Carbon dioxide forcing
12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")

# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"

# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")

# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")

# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"

# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
Source notebook: sdpython/teachpyx — _doc/notebooks/python/tarabiscote.ipynb (MIT license)
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: Explained programming exercises
A few exercises around copying lists, computation time, and inheritance.
End of explanation
"""
def somme(tab):
    l = tab[0]
    for i in range(1, len(tab)):
        l += tab[i]
    return l

ens = [[0, 1], [2, 3]]
print(somme(ens))
print(ens)
"""
Explanation: Copying lists
The function somme is supposed to concatenate all the lists contained in ens. The returned result is indeed the desired one, but the function also modifies the list ens. Why?
End of explanation
"""
import copy                      ###### line added
def somme(tab):
    l = copy.copy(tab[0])        ###### line modified
    for i in range(1, len(tab)):
        l += tab[i]
    return l

ens = [[0, 1], [2, 3]]
print(somme(ens))
print(ens)
"""
Explanation: The problem comes from the fact that an assignment in Python (the second line of the function somme) does not make a copy: it creates a second identifier for the same object. Here, l and tab[0] refer to the same list, so modifying one modifies the other. That explains the result. To fix it, an explicit copy of tab[0] is needed:
End of explanation
"""
def somme(tab):
    l = []                           ###### line modified
    for i in range(0, len(tab)):     ###### line modified
        l += tab[i]
    return l

ens = [[0, 1], [2, 3]]
print(somme(ens))
print(ens)
"""
Explanation: In this case, it was also possible to avoid the copy altogether by writing:
End of explanation
"""
l = ["un", "deux", "trois", "quatre", "cinq"]
for i in range(0, len(l)):
    mi = i
    for j in range(i, len(l)):
        if l[mi] < l[j]:
            mi = j
    e = l[i]
    l[mi] = l[i]
    l[i] = e
l
"""
Explanation: A logic error
The following program runs, but its result is not the expected one.
End of explanation
"""
l = ["un", "deux", "trois", "quatre", "cinq"]
for i in range(0, len(l)):
    mi = i
    for j in range(i, len(l)):
        if l[mi] < l[j]:
            mi = j
    e = l[mi]        ######## line modified
    l[mi] = l[i]
    l[i] = e
l
"""
Explanation: This program is supposed to sort in decreasing alphabetical order. The problem occurs during the swap of the element l[i] with the element l[mi]. One must therefore write:
End of explanation
"""
def moyenne(tab):
    s = 0.0
    for x in tab:
        s += x
    return s / len(tab)

def variance(tab):
    s = 0.0
    for x in tab:
        t = x - moyenne(tab)
        s += t * t
    return s / len(tab)

l = [0, 1, 2, 2, 3, 1, 3, 0]
print(moyenne(l))
print(variance(l))
"""
Explanation: Cost of an algorithm
The cost of an algorithm or of a program is the number of operations (additions, multiplications, tests, ...) that it performs. It is expressed as a multiple of a function of the size of the data the program processes. For example: $O(n)$, $O(n^2)$, $O(n\ln n)$, ...
End of explanation
"""
def variance(tab):
    s = 0.0
    m = moyenne(tab)
    for x in tab:
        t = x - m
        s += t * t
    return s / len(tab)

variance(l)
"""
Explanation: First of all, the cost of an algorithm is usually expressed as a multiple of the size of the data it processes. Here, that size is the length of the array tab. For example, writing n = len(tab), the cost of the function moyenne is $O(n)$, since the function sums the $n$ elements of the array. The function variance, however, hides a small trap: although it too contains a single loop, each of its $n$ iterations calls the function moyenne. The cost of variance is therefore $O(n^2)$. The program can be sped up, because moyenne returns the same result at every pass through the loop.
It suffices to store that result in a variable before entering the loop, as follows:
End of explanation
"""
class carre:
    def __init__(self, a):
        self.a = a
    def surface(self):
        return self.a ** 2

class rectangle(carre):
    def __init__(self, a, b):
        carre.__init__(self, a)
        self.b = b
    def surface(self):
        return self.a * self.b

rectangle(3, 4).surface()
"""
Explanation: The cost of the function variance is then $O(n)$. The cost of an algorithm can be evaluated more precisely, giving a result such as $n^2 + 3n + 2$, but this level of detail is rarely useful for a language like python. The expression for x in tab: necessarily hides a test that would have to be taken into account if more precision were required. One would also have to turn to a programming language with a more precise syntax. For example, when a program is written in C or C++, two executables can be built from the same source code. The first one (the debug version), slow, is used during development: it includes extra checks verifying at each step that no error occurred (a division by zero, for instance). Once the program is known to work, the second version (the release version), faster, is built, with all those now-useless development checks removed. python produces a slow program that includes a large number of tests invisible to the programmer, but it detects errors sooner and favors fast prototyping. It is not suited to processing data in large quantities and performs a multitude of hidden operations.
Double inheritance
A program needs a class carre and a class rectangle, but it is not obvious which class should inherit from the other. In the first program, rectangle inherits from carre.
End of explanation
"""
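The quadratic-versus-linear analysis above can be checked empirically. The sketch below (added here for illustration, not part of the original notebook; exact timings vary by machine) times both versions of variance on the same list:

```python
import time

def moyenne(tab):
    s = 0.0
    for x in tab:
        s += x
    return s / len(tab)

def variance_quadratic(tab):
    # O(n^2): moyenne is recomputed at every pass through the loop
    s = 0.0
    for x in tab:
        t = x - moyenne(tab)
        s += t * t
    return s / len(tab)

def variance_linear(tab):
    # O(n): the mean is computed once, before the loop
    s = 0.0
    m = moyenne(tab)
    for x in tab:
        t = x - m
        s += t * t
    return s / len(tab)

tab = list(range(1000))

start = time.perf_counter()
v1 = variance_quadratic(tab)
t1 = time.perf_counter() - start

start = time.perf_counter()
v2 = variance_linear(tab)
t2 = time.perf_counter() - start

# Both versions compute exactly the same value; only the cost differs.
print(v1 == v2)
print(t1 > t2)
```

On a list of 1000 elements the quadratic version performs on the order of a million additions against a few thousand for the linear one, so the gap is already visible.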
End of explanation
"""
class carre:
    def __init__ (self, a):
        self.a = a
    def surface (self):
        return self.a ** 2

class rectangle(carre):
    def __init__ (self, a,b) :
        carre.__init__(self,a)
        self.b = b
    def surface (self):
        return self.a * self.b

rectangle(3, 4).surface()
"""
Explanation: In the second program, it is the class carre that inherits from the class rectangle.
End of explanation
"""
class rectangle :
    def __init__ (self, a,b) :
        self.a = a
        self.b = b
    def surface (self) :
        return self.a * self.b

class carre (rectangle) :
    def __init__ (self, a) :
        rectangle.__init__ (self, a,a)
    def surface (self) :
        return self.a ** 2

carre(3).surface()
"""
Explanation: In the second program, is it necessary to redefine the method surface in the class carre?
Which direction of inheritance seems the most sensible to you, class rectangle(carre) or class carre(rectangle)?
We want to add a class losange (rhombus). To introduce the class losange, is it simpler for rectangle to inherit from carre, or the other way round? Which additional attribute or attributes must be introduced in the class losange?
The principle of inheritance is that a class carre inheriting from the class rectangle inherits its attributes and methods. The area of a square is equal to that of a rectangle whose sides are equal; consequently, the method surface of the class returns the same value as that of the class rectangle. It is therefore not necessary to redefine it.
* Following the answer to the first question, it seems more logical to consider that carre inherits from rectangle.
* A rhombus is defined by a side and an angle, or by a side and the length of one of its diagonals, that is, two parameters in both cases.
In the first question, it seemed more logical for the most specific class to inherit from the most general one, so as to benefit from its methods. To introduce the rhombus, it seems more logical to go from the most specific to the most general, so that each class contains only the information it needs.
End of explanation
"""
import math
class carre :
    def __init__ (self, a) :
        self.a = a
    def surface (self) :
        return self.a ** 2

class rectangle (carre) :
    def __init__ (self, a,b) :
        carre.__init__(self,a)
        self.b = b
    def surface (self) :
        return self.a * self.b

class losange (carre) :
    def __init__ (self, a,theta) :
        carre.__init__(self,a)
        self.theta = theta
    def surface (self) :
        return self.a * math.cos (self.theta) * self.a * math.sin (self.theta) * 2

losange(3, 1).surface()
"""
Explanation: The direction of inheritance depends on your needs. If the inheritance mainly concerns methods, it is preferable to go from the most general to the most specific; the first class then serves as an interface for all its children. If the inheritance mainly concerns attributes, it is preferable to go from the most specific to the most general. In the general case there is no direction of inheritance more sensible than another, but for a given problem there often is one.
Precision of computations
Here is an overview of computational precision for the value $1 - 10^{-n}$. The point of the exercise is to show that a computer only performs approximate computations, and that the precision of the result depends on the numerical method used.
End of explanation
"""
x = 1.0
for i in range (0,19) :
    x = x / 10
    print(i, "\t", 1.0 - x, "\t", x, "\t", x **(0.5))
"""
Explanation: The program shows that the computer prints 1 when it computes $1-10^{-17}$. This means that the precision of computations in python is at best $10^{-16}$. It is even worse with float, i.e. single-precision reals stored on 4 bytes instead of 8 for double.
End of explanation
"""
import numpy
x = numpy.float32(1.0)
for i in range (0,19) :
    x = x / numpy.float32(10)
    print(i, "\t", 1.0 - x, "\t", x, "\t", x **(0.5))
"""
Explanation: We write a class matrice_carree_2 that represents a 2x2 square matrix.
End of explanation
"""
class matrice_carree_2 :
    def __init__ (self, a,b,c,d) :
        self.a, self.b, self.c, self.d = a,b,c,d
    def determinant (self) :
        return self.a * self.d - self.b * self.c

m1 = matrice_carree_2 (1.0,1e-6,1e-6,1.0)
m2 = matrice_carree_2 (1.0,1e-9,1e-9,1.0)
print(m1.determinant())
print(m2.determinant())
"""
Explanation: The second value is therefore wrong. We now consider the matrix $M = \left(\begin{array}{cc} 1 & 10^{-9} \\ 10^{-9} & 1 \end{array} \right)$.
We set $D = \det(M) = 1 - 10^{-18}$ and $T = tr(M) = 2$. $D$ is the determinant of $M$ and $T$ its trace. The eigenvalues of $M$, denoted $\lambda_1, \lambda_2$, satisfy:
$$ \begin{array}{lll} D &=& \lambda_1 \lambda_2 \\ T &=& \lambda_1 + \lambda_2 \end{array} $$
One checks that $(x - \lambda_1)(x - \lambda_2) = x^2 - x (\lambda_1 + \lambda_2) + \lambda_1 \lambda_2$. The eigenvalues of $M$ are therefore the solutions of the equation $x^2 - T x + D = 0$. The discriminant of this polynomial is $\Delta = T^2 - 4 D$. The eigenvalues of the matrix $M$ can therefore be expressed as:
$$ \begin{array}{lll} \lambda_1 &=& \frac{T - \sqrt{\Delta}}{2} \\ \lambda_2 &=& \frac{T + \sqrt{\Delta}}{2} \end{array} $$
We therefore add the following method to the class matrice_carree_2:
End of explanation
"""
class matrice_carree_2 :
    def __init__ (self, a,b,c,d) :
        self.a, self.b, self.c, self.d = a,b,c,d
    def determinant (self) :
        return self.a * self.d - self.b * self.c
    def valeurs_propres (self) :
        det = self.determinant ()
        trace = self.a + self.d
        delta = trace ** 2 - 4 * det
        l1 = 0.5 * (trace - (delta ** (0.5)) )
        l2 = 0.5 * (trace + (delta ** (0.5)) )
        return l1,l2

m1 = matrice_carree_2 (1.0,1e-6,1e-6,1.0)
m2 = matrice_carree_2 (1.0,1e-9,1e-9,1.0)
print(m1.valeurs_propres())
print(m2.valeurs_propres())
"""
Explanation: End of explanation
"""
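The cancellation in the trace/determinant formula above can also be checked against a library eigensolver. This is only an illustrative sketch (it is not part of the original notebook, and it assumes numpy is available): the closed-form route loses the tiny gap between the two eigenvalues, while numpy's symmetric solver recovers it.

```python
import numpy as np

def eigenvalues_via_trace_det(a, b, c, d):
    """Closed-form eigenvalues of [[a, b], [c, d]] via trace and determinant."""
    det = a * d - b * c
    trace = a + d
    delta = trace ** 2 - 4 * det   # discriminant of x^2 - T x + D
    root = delta ** 0.5
    return 0.5 * (trace - root), 0.5 * (trace + root)

# Same matrix as above: det = 1 - 1e-18 rounds to exactly 1.0 in double
# precision, so delta becomes 0 and the eigenvalue gap of 2e-9 is lost.
l1, l2 = eigenvalues_via_trace_det(1.0, 1e-9, 1e-9, 1.0)
print(l1, l2)

# numpy's symmetric eigensolver works on the matrix entries directly
# and keeps the two eigenvalues 1 - 1e-9 and 1 + 1e-9 distinct.
ref = np.linalg.eigvalsh(np.array([[1.0, 1e-9], [1e-9, 1.0]]))
print(ref)
```

The comparison makes the point of the section concrete: the result of a computation depends not only on the inputs but on the numerical method used.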
irfani/Jenis-Kelamin
Gender Prediction.ipynb
apache-2.0
import pandas as pd              # pandas is a dataframe library

df = pd.read_csv("./data/data-pemilih-kpu.csv", encoding = 'utf-8-sig')
# the dataset has 13137 rows and 2 columns
df.shape
# look at the first 5 rows of the dataset
df.head(5)
# look at the last 5 rows of the dataset
df.tail(5)
"""
Explanation: Predicting gender from Indonesian names using Machine Learning
Loading dataset
End of explanation
"""
# check whether any data contains nulls
df.isnull().values.any()
# check the number of rows containing nulls
len(df[pd.isnull(df).any(axis=1)])
# drop the null rows and recheck
df = df.dropna(how='all')
len(df[pd.isnull(df).any(axis=1)])
# check the dataset dimensions
df.shape
# convert the gender column from text to integer (male = 1; female = 0)
jk_map = {"Laki-Laki" : 1, "Perempuan" : 0}
df["jenis_kelamin"] = df["jenis_kelamin"].map(jk_map)
# check that the data has indeed changed
df.head(5)
# Check the gender distribution in the dataset
num_obs = len(df)
num_true = len(df.loc[df['jenis_kelamin'] == 1])
num_false = len(df.loc[df['jenis_kelamin'] == 0])
print("Jumlah Pria: {0} ({1:2.2f}%)".format(num_true, (num_true/num_obs) * 100))
print("Jumlah Wanita: {0} ({1:2.2f}%)".format(num_false, (num_false/num_obs) * 100))
"""
Explanation: Cleansing dataset
End of explanation
"""
from sklearn.model_selection import train_test_split

feature_col_names = ["nama"]
predicted_class_names = ["jenis_kelamin"]

X = df[feature_col_names].values
y = df[predicted_class_names].values
split_test_size = 0.30

text_train, text_test, y_train, y_test = train_test_split(X, y, test_size=split_test_size, stratify=y, random_state=42)
"""
Explanation: Split Dataset
The dataset will be split into two parts: 70% of the data will be used as training data to train the model. The remaining 30% will be used as test data to evaluate the prediction accuracy of the machine-learning model.
End of explanation
"""
print("Dataset Asli Pria : {0} ({1:0.2f}%)".format(len(df.loc[df['jenis_kelamin'] == 1]), (len(df.loc[df['jenis_kelamin'] == 1])/len(df.index)) * 100.0))
print("Dataset Asli Wanita : {0} ({1:0.2f}%)".format(len(df.loc[df['jenis_kelamin'] == 0]), (len(df.loc[df['jenis_kelamin'] == 0])/len(df.index)) * 100.0))
print("")
print("Dataset Training Pria : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 1]), (len(y_train[y_train[:] == 1])/len(y_train) * 100.0)))
print("Dataset Training Wanita : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 0]), (len(y_train[y_train[:] == 0])/len(y_train) * 100.0)))
print("")
print("Dataset Test Pria : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 1]), (len(y_test[y_test[:] == 1])/len(y_test) * 100.0)))
print("Dataset Test Wanita : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 0]), (len(y_test[y_test[:] == 0])/len(y_test) * 100.0)))
"""
Explanation: The dataset has been split into 2 parts; let's check its distribution.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))
vectorizer.fit(text_train.ravel())
X_train = vectorizer.transform(text_train.ravel())
X_test = vectorizer.transform(text_test.ravel())
"""
Explanation: As the results show, the split dataset still preserves the same gender-distribution percentages as the original dataset.
Features Extraction
The feature-extraction process affects the accuracy obtained later. Here I use a simple method, CountVectorizer, which builds a matrix of character occurrence frequencies for each given name, with an ngram_range analysis option of 2 - 6 restricted to within a single word.
For example, Muhammad Irfani Sahnur produces n-grams such as: * mu * ham * mad * nur * etc.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_train, y_train.ravel())
"""
Explanation: Logistic Regression
The first experiment uses the Logistic Regression algorithm. The feature-extraction output is fed in as the training data.
End of explanation
"""
# dataset training
print(clf.score(X_train, y_train))
# dataset test
print(clf.score(X_test, y_test))
"""
Explanation: The prediction accuracy obtained on the test data is reasonably good, at 93.6%.
End of explanation
"""
from sklearn import metrics

clf_predict = clf.predict(X_test)

# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, clf_predict)))
print(metrics.confusion_matrix(y_test, clf_predict, labels=[1, 0]) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, clf_predict, labels=[1,0]))
"""
Explanation: Detailed accuracy metrics
End of explanation
"""
jk_label = {1:"Laki-Laki", 0:"Perempuan"}
test_predict = vectorizer.transform(["niky felina"])
res = clf.predict(test_predict)
print(jk_label[int(res)])
"""
Explanation: Testing the gender prediction
End of explanation
"""
from sklearn.pipeline import Pipeline

clf_lg = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
                   ('clf', LogisticRegression()),
])

_ = clf_lg.fit(text_train.ravel(), y_train.ravel())
predicted = clf_lg.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
"""
Explanation: Using a Pipeline
Scikit provides a feature that simplifies the process above: the Pipeline. The code becomes simpler and tidier; here is the code above converted to use a Pipeline.
End of explanation
"""
result = clf_lg.predict(["muhammad irfani sahnur"])
print(jk_label[result[0]])
"""
Explanation: The accuracy is exactly the same, and the code is easier to write.
Let's test the gender prediction again.
End of explanation
"""
from sklearn.naive_bayes import MultinomialNB

clf_nb = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
                   ('clf', MultinomialNB()),
])

clf_nb = clf_nb.fit(text_train.ravel(), y_train.ravel())
predicted = clf_nb.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
"""
Explanation: Naive Bayes
The next algorithm to use is Naive Bayes. Let's try it right away.
End of explanation
"""
result = clf_nb.predict(["Alifah Rahmah"])
print(jk_label[result[0]])
"""
Explanation: With the Naive Bayes algorithm, the accuracy obtained is only slightly lower than Logistic Regression, at 93.3%. Let's test the gender prediction again.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier

clf_rf = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
                   ('clf', RandomForestClassifier(n_estimators=90, n_jobs=-1)),
])

clf_rf = clf_rf.fit(text_train.ravel(), y_train.ravel())
predicted = clf_rf.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
"""
Explanation: Random Forest
The last algorithm to use is Random Forest. Let's try it right away.
End of explanation
"""
result = clf_rf.predict(["Yuni ahmad"])
print(jk_label[result[0]])
"""
Explanation: With the Random Forest algorithm, the accuracy obtained is lower than the two previous algorithms, at 93.12%. This algorithm also has a drawback: it is slower. OK, let's test the gender prediction again.
End of explanation
"""
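To see what the character n-grams described above actually look like for a name, here is a small stdlib-only sketch (not part of the notebook) that approximates scikit-learn's 'char_wb' analyzer: each word is padded with spaces and all character n-grams of length 2 to 6 are collected inside the padded word.

```python
def char_wb_ngrams(name, n_min=2, n_max=6):
    """Rough sketch of scikit-learn's 'char_wb' analyzer: pad each word
    with spaces and collect every character n-gram inside the padded word."""
    grams = []
    for word in name.lower().split():
        padded = " " + word + " "
        for n in range(n_min, n_max + 1):
            for i in range(len(padded) - n + 1):
                grams.append(padded[i:i + n])
    return grams

print(char_wb_ngrams("Irfani"))
```

Because the padding marks word boundaries, n-grams like " ir" (name starts with "ir") and "ni " (name ends with "ni") become features of their own, which is useful for names whose gender signal often sits in the suffix.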
Phylliade/poppy-inverse-kinematics
tutorials/Quickstart.ipynb
gpl-2.0
import ikpy.chain import numpy as np import ikpy.utils.plot as plot_utils """ Explanation: IKpy Quickstart Requirements First, you need to install IKPy (see installations instructions). You also need a URDF file. By default, we use the files provided in the resources folder of the IKPy repo. Import the IKPy module : End of explanation """ my_chain = ikpy.chain.Chain.from_urdf_file("../resources/poppy_ergo.URDF") """ Explanation: The basic element of IKPy is the kinematic Chain. To create a chain from an URDF file : End of explanation """ target_position = [ 0.1, -0.2, 0.1] print("The angles of each joints are : ", my_chain.inverse_kinematics(target_position)) """ Explanation: Note : as mentioned before, here we use a file in the resource folder. Inverse kinematics In Inverse Kinematics, you want your kinematic chain to reach a 3D position in space. To have a more general representation of position, IKPy works with homogeneous coordinates. It is a 4x4 matrix storing both position and orientation. Prepare your desired position as a 4x4 matrix. Here we only consider position, not orientation of the chain. End of explanation """ real_frame = my_chain.forward_kinematics(my_chain.inverse_kinematics(target_position)) print("Computed position vector : %s, original position vector : %s" % (real_frame[:3, 3], target_position)) """ Explanation: You can check that the Inverse Kinematics is correct by comparing with the original position vector : End of explanation """ # Optional: support for 3D plotting in the NB # If there is a matplotlib error, uncomment the next line, and comment the line below it. 
# %matplotlib inline
%matplotlib widget

import matplotlib.pyplot as plt
fig, ax = plot_utils.init_3d_figure()
my_chain.plot(my_chain.inverse_kinematics(target_position), ax, target=target_position)
plt.xlim(-0.1, 0.1)
plt.ylim(-0.1, 0.1)
"""
Explanation: Plotting
And finally plot the result:
(If the code below doesn't work, comment the %matplotlib widget line, and uncomment the %matplotlib inline line)
End of explanation
"""
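The 4x4 homogeneous matrices mentioned above pack a 3x3 rotation block and a 3-vector position into one transform. As a sketch (this helper is not part of IKPy; it only assumes numpy), building such a target frame from a position looks like this:

```python
import numpy as np

def make_transform(position, rotation=None):
    """Sketch of a 4x4 homogeneous transform: rotation in the upper-left
    3x3 block, position in the last column. Not part of the IKPy API."""
    T = np.eye(4)
    if rotation is not None:
        T[:3, :3] = rotation
    T[:3, 3] = position
    return T

target = make_transform([0.1, -0.2, 0.1])
print(target)
# The position can be read back from the last column:
print(target[:3, 3])
```

Leaving the rotation block as the identity expresses "reach this position with no orientation constraint", which matches the position-only inverse kinematics used above.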
prasants/pyds
03.All_about_Numbers.ipynb
mit
a = 1 print(a) print(type(a)) b = 2.0 print(b) print(type(b)) """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Ints-and-Floats" data-toc-modified-id="Ints-and-Floats-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Ints and Floats</a></div><div class="lev2 toc-item"><a href="#Case-Study-1:-Baghead's-Bag-of-Riches" data-toc-modified-id="Case-Study-1:-Baghead's-Bag-of-Riches-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Case Study 1: Baghead's Bag of Riches</a></div><div class="lev2 toc-item"><a href="#Case-Study-2:-Peter-Gregory's-Profit" data-toc-modified-id="Case-Study-2:-Peter-Gregory's-Profit-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Case Study 2: Peter Gregory's Profit</a></div><div class="lev2 toc-item"><a href="#Recap-of-what-you-already-know" data-toc-modified-id="Recap-of-what-you-already-know-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Recap of what you already know</a></div><div class="lev1 toc-item"><a href="#Converting-between-types" data-toc-modified-id="Converting-between-types-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Converting between types</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Revisiting-Mathematical-Operations" data-toc-modified-id="Revisiting-Mathematical-Operations-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Revisiting Mathematical Operations</a></div><div class="lev1 toc-item"><a href="#Equality-and-Comparison-Operators" data-toc-modified-id="Equality-and-Comparison-Operators-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Equality and Comparison Operators</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-41"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Exercise</a></div> # Ints and Floats While there are multiple formats to represent numerical values, this is an introductory 
course. We will primarily focus on two representations - Integers and Floats (decimal-point numbers).
End of explanation
"""
c = a + b
print(c)
print(type(c))

# How many kittens did the cat give birth to?
kittens = 3
print(type(kittens) )

# What is the value of Pi?
val_pi = 3.142
print(type(val_pi))

# Not an integer
hooli_founder = 30,000,000,000
bachmanity = 60,000,000,000
moolah = hooli_founder + bachmanity
print(moolah)
type(moolah)
"""
Explanation: Can we add an 'int' and a 'float'?
End of explanation
"""
salary = 500000
bonus = 0.75
equity = 1000000
vesting_period = 4

total_annual_value = salary*(1+bonus) + (equity/vesting_period)
print(total_annual_value)

salary*(1+bonus)

salary + (salary*bonus)

equity/vesting_period

print("Baghead's Yearly Offer is: $", total_annual_value)
"""
Explanation: Case Study 1: Baghead's Bag of Riches
Hooli has just made Baghead a fantastic offer. He gets a salary, a bonus, vested equity, and free soda. Should he take it?
End of explanation
"""
close_jan = 200
close_sep = 450

return_rate = (close_sep - close_jan)/close_jan
print(return_rate)
"""
Explanation: Case Study 2: Peter Gregory's Profit
Peter analyzed cockroach samples at a restaurant in January, and bought 1,000,000 shares of a cooking oil company at \$200 per share. He sold his position in September at $450. Calculate his profit.
End of explanation
"""
# ints are whole numbers
int1 = 1
int2 = 42
int3 = -27

# floats are anything with a decimal point
float1 = 1.25
float2 = .496
float3 = 3.0
"""
Explanation: Note: The above doesn't take into account slippage, cost of transaction, brokerage etc. It's just to illustrate mathematical operations in Python.
Recap of what you already know
End of explanation
"""
type(float3)
new_int = int(float3)
print(new_int)
print("The type is: ", type(new_int))

# Time for revenge!
new_float1= float(int1) print(new_float1) """ Explanation: Converting between types End of explanation """ # Enter your code below """ Explanation: Exercise Convert the following: 42 into float 27.3 into an integer The result of 99/36 into an integer End of explanation """ print("Addition") print(1 + 1) print(1.0 + 2.0) print("Subtraction") print(100-50) print(75.3-22.7) print("Multiplication") print(2*5) print(5.0 * 7.5) print("Division") print(21 / 7) print(22.0 / 7) # Another type of division 22.0//7 """ Explanation: Revisiting Mathematical Operations +: plus to add numbers -: minus, to subtract numbers, or assign a negative value (eg. -5, -25, -100 etc) *: multiply numbers /: divide number %: modulo operator End of explanation """ print("Modulo Operator") print(5%2) # Prints 1, as 1 is the remainder when 5 is divided by 2 print(4%2) # Prints 0 """ Explanation: Also known as 'Floor Division'. Read about it here - https://docs.python.org/3.1/tutorial/introduction.html End of explanation """ print("Equality") print(2 == 2) # Will print True print(2 == 1) # Will print False print("Comparison") print(0 > 0) # False, since they are equal print(1 > 0) # True, since 1 is greater than 0 print(1.0 < 1) # False, since numerically they are the same print(2 >= 2) # True, since it checks for two conditions, greater than as well as equal to print(2 <= 2) # True, since it checks for two conditions, less than as well as equal to print(1 != 2) # True, since 1 is not equal to 2 """ Explanation: Equality and Comparison Operators ==: equality &lt;: less than &lt;=: less than or equal to &gt;: greater than &gt;=: greater than or equal to !=: not equal to End of explanation """ # Enter your code below: food = xx sales_tax = xx #Remember to enter in decimals. 
So 7.5% will be 0.075 tip = xx # Same as above, if you're calculating tip as percentage of food cost tax_cost = food*(1 + sales_tax) after_tip = tax_cost*(1 + tip) print(after_tip) """ Explanation: Exercise Create a tip calculator that takes into account the cost of food, sales tax, and tip. End of explanation """
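One possible way to fill in the tip-calculator template from the exercise above, shown as a sketch with made-up example values (the $100 food cost, 7.5% tax, and 15% tip here are assumptions, not part of the original exercise):

```python
# Example values for the tip-calculator exercise (assumed, not given).
food = 100.0
sales_tax = 0.075   # 7.5% entered as a decimal
tip = 0.15          # 15% tip on the taxed total

tax_cost = food * (1 + sales_tax)   # food plus tax
after_tip = tax_cost * (1 + tip)    # tip applied on top of the taxed total
print(after_tip)
```

Whether the tip should be computed on the pre-tax or post-tax amount is a design choice; the template above applies it to the taxed total, so that is what this sketch does.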
stevenydc/2015lab1
Lab1-babypython_original.ipynb
mit
# The %... is an iPython thing, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
%matplotlib inline
#this line above prepares IPython notebook for working with matplotlib

# See all the "as ..." constructs? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().

import numpy as np # imports a fast numerical programming library
import scipy as sp #imports stats functions, amongst other things
import matplotlib as mpl # this actually imports matplotlib
import matplotlib.cm as cm #allows us easy access to colormaps
import matplotlib.pyplot as plt #sets up plotting under plt
import pandas as pd #lets us handle data as dataframes
#sets up pandas table display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns #sets up styles and gives us more plotting options
"""
Explanation: Python and Friends
This is a very quick run-through of some python syntax
End of explanation
"""
1+2
"""
Explanation: The Python Language
Let's talk about using Python as a calculator...
End of explanation
"""
1/2,1.0/2.0,3*3.2
"""
Explanation: Notice integer division and floating-point error below!
End of explanation
"""
print 1+3.0,"\n",5/3.0
5/3
"""
Explanation: Here is how we can print things. Something on the last line by itself is returned as the output value.
End of explanation
"""
a=5.0/6.0
print(a)
print type(a)
import types
type(a)==types.FloatType
type(a)==types.IntType
"""
Explanation: We can obtain the type of a variable, and use boolean comparisons to test these types.
End of explanation
"""
alist=[1,2,3,4,5]
asquaredlist=[i*i for i in alist]
asquaredlist
"""
Explanation: Python and Iteration (and files)
In working with python I always remember: a python is a duck.
What I mean is, python has a certain way of doing things. For example let's call one of these ways listiness. Listiness works on lists, dictionaries, files, and a general notion of something called an iterator. But first, let's introduce the notion of a comprehension. It's a way of constructing a list
End of explanation
"""
enumerate(asquaredlist),zip(alist, asquaredlist)
"""
Explanation: Python has some nifty functions like enumerate and zip. The former gives a list of tuples with each tuple of the form (index, value), while the latter takes elements from each list and puts them together into a tuple, thus creating a list of tuples. The first is a duck, but the second isn't.
End of explanation
"""
from itertools import izip
izip(alist, asquaredlist)
print enumerate(asquaredlist)
[k for k in enumerate(asquaredlist)]
"""
Explanation: Someone realized that design flaw and created izip.
End of explanation
"""
linelengths=[len(line) for line in open("hamlet.txt")]#poor code as we don't close the file
print linelengths
sum(linelengths), np.mean(linelengths), np.median(linelengths), np.std(linelengths)
"""
Explanation: Open files behave like lists too! Here we get each line in the file and find its length, using the comprehension syntax to put these lengths into a big list.
End of explanation
"""
hamletfile=open("hamlet.txt")
hamlettext=hamletfile.read()
hamletfile.close()
hamlettokens=hamlettext.split()#split with no arguments splits on whitespace
len(hamlettokens)
"""
Explanation: But perhaps we want to access Hamlet word by word and not line by line
End of explanation
"""
with open("hamlet.txt") as hamletfile:
    hamlettext=hamletfile.read()
    hamlettokens=hamlettext.split()
    print len(hamlettokens)
"""
Explanation: One can use the with syntax which creates a context. The file closing is then done automatically for us.
End of explanation
"""
print hamlettext[:1000]#first 1000 characters from Hamlet.
print hamlettext[-1000:]#and last 1000 characters from Hamlet.
""" Explanation: There are roughly 32,000 words in Hamlet. The indexing of lists End of explanation """ print hamlettokens[1:4], hamlettokens[:4], hamlettokens[0], hamlettokens[-1] hamlettokens[1:8:2]#get every 2nd world between the 2nd and the 9th: ie 2nd, 4th, 6th, and 8th """ Explanation: Lets split the word tokens. The first one below reads, give me the second, third, and fourth words (remember that python is 0 indexed). Try and figure what the others mean. End of explanation """ mylist=[] for i in xrange(10): mylist.append(i) mylist """ Explanation: range and xrange get the list of integers upto N. But xrange behaves like an iterator. The reason for this is that there is no point generaing all os a million integers. We can just add 1 to the previous one and save memory. So we trade off storage for computation. End of explanation """ adict={'one':1, 'two': 2, 'three': 3} print [i for i in adict], [(k,v) for k,v in adict.items()], adict.values() """ Explanation: Dictionaries These are the bread and butter. You will use them a lot. They even duck like lists. But be careful how. End of explanation """ mydict ={k:v for (k,v) in zip(alist, asquaredlist)} mydict """ Explanation: The keys do not have to be strings. From python 2.7 you can use dictionary comprehensions as well End of explanation """ dict(a=1, b=2) """ Explanation: You can construct then nicely using the function dict. End of explanation """ import json s=json.dumps(mydict) print s json.loads(s) """ Explanation: and conversion to json End of explanation """ lastword=hamlettokens[-1] print(lastword) lastword[-2]="k"#cant change a part of a string lastword[-2] You can join a list with a separator to make a string. 
wierdstring=",".join(hamlettokens)
wierdstring[:1000]
"""
Explanation: Functions
Functions are even more the bread and butter. You'll see them as methods on objects, or standing alone by themselves.
End of explanation
"""
def square(x):
    return(x*x)
def cube(x):
    return x*x*x
square(5),cube(5)

print square, type(cube)
"""
Explanation: In Python, functions are "first-class". This is just a fancy way of saying, you can pass functions to other functions
End of explanation
"""
def sum_of_anything(x,y,f):
    print x,y,f
    return(f(x) + f(y))
sum_of_anything(3,4,square)
"""
Explanation: Python functions can have positional arguments and keyword arguments. Positional arguments are stored in a tuple, and keyword arguments in a dictionary. Note the "starred" syntax
End of explanation
"""
def f(a,b,*posargs,**dictargs):
    print "got",a,b,posargs, dictargs
    return a
print f(1,3)
print f(1,3,4,d=1,c=2)
"""
Explanation: YOUR TURN
create a dictionary with keys the integers up to and including 10, and values the cubes of these integers
End of explanation
"""
#your code here
"""
Explanation: Booleans and Control-flow
Let's test for belonging...
End of explanation
"""
a=[1,2,3,4,5]
1 in a
6 in a
"""
Explanation: Python supports if/elif/else clauses for multi-way conditionals
End of explanation
"""
def do_it(x):
    if x==1:
        print "One"
    elif x==2:
        print "Two"
    else:
        print x
do_it(1)
do_it(2), do_it(3)
"""
Explanation: You can break out of a loop based on a condition. The loop below is a for loop.
End of explanation
"""
for i in range(10):
    print i
    if (i > 5):
        break
"""
Explanation: While loops are also supported. continue continues to the next iteration of the loop, skipping all the code below, while break breaks out of it.
End of explanation
"""
i=0
while i < 10:
    print i
    i=i+1
    if i < 5:
        continue
    else:
        break
"""
Explanation: Exceptions
This is the way to catch errors.
End of explanation
"""
try:
    f(1)#takes at least 2 arguments
except:
    import sys
    print sys.exc_info()
"""
Explanation: All together now
Let's see what hamlet gives us. We convert all words to lower-case
End of explanation
"""
hamletlctokens=[word.lower() for word in hamlettokens]
hamletlctokens.count("thou")
"""
Explanation: We then find a unique set of words using python's set data structure. We count how often those words occurred using the count method on lists.
End of explanation
"""
uniquelctokens=set(hamletlctokens)
tokendict={}
for ut in uniquelctokens:
    tokendict[ut]=hamletlctokens.count(ut)
"""
Explanation: We find the 100 most used words...
End of explanation
"""
L=sorted(tokendict.iteritems(), key= lambda (k,v):v, reverse=True)[:100]
L
"""
Explanation: Let's get the top 20 of this and plot a bar chart!
End of explanation
"""
topfreq=L[:20]
print topfreq
pos = np.arange(len(topfreq))
plt.bar(pos, [e[1] for e in topfreq]);
plt.xticks(pos+0.4, [e[0] for e in topfreq]);
"""
Explanation: End of explanation
"""
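The word-count loop above calls list.count once per unique token, which rescans the whole token list each time (roughly O(n*m) work). As a sketch of a faster alternative (not part of the original lab; collections.Counter has been in the standard library since Python 2.7), the same tally can be built in a single pass:

```python
from collections import Counter

# Counter tallies all the tokens in one pass instead of rescanning
# the list once per unique word, and most_common replaces the
# sorted(...)[:k] step.
tokens = ["the", "king", "the", "queen", "the", "ghost"]
counts = Counter(tokens)
top = counts.most_common(2)
print(top)
```

For a real run, `tokens` would be the lower-cased Hamlet token list built above; the tiny literal list here is only for illustration.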
bokeh/bokeh
examples/howto/notebook_comms/Numba Image Example.ipynb
bsd-3-clause
from timeit import default_timer as timer from bokeh.plotting import figure, show, output_notebook from bokeh.models import GlyphRenderer, LinearColorMapper from bokeh.io import push_notebook from numba import jit, njit from ipywidgets import interact import numpy as np import scipy.misc output_notebook() """ Explanation: Interactive Image Processing with Numba and Bokeh This demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and Ipython interactors for widgets. The demo runs entirely inside the Ipython notebook, with no Bokeh server required. Numba must be installed in order to run this demo. To run, click on, Cell-&gt;Run All in the top menu, then scroll down to individual examples and play around with their controls. End of explanation """ # smaller image img_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C') palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)] width, height = img_blur.shape p_blur = figure(x_range=(0, width), y_range=(0, height)) r_blur = p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur') @njit def blur(outimg, img, amt): iw, ih = img.shape for i in range(amt, iw-amt): for j in range(amt, ih-amt): px = 0. for w in range(-amt//2, amt//2): for h in range(-amt//2, amt//2): px += img[i+w, j+h] outimg[i, j]= px/(amt*amt) def update(i=0): level = 2*i + 1 out = img_blur.copy() ts = timer() blur(out, img_blur, level) te = timer() print('blur takes:', te - ts) renderer = p_blur.select(dict(name="blur", type=GlyphRenderer)) r_blur.data_source.data['image'] = [out] push_notebook(handle=t_blur) t_blur = show(p_blur, notebook_handle=True) interact(update, i=(0, 10)) """ Explanation: Gaussian Blur This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. 
Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider. Note: This simple example does not handle the edge case, so the edge of the image will remain unblurred as the slider is increased. End of explanation """ @jit def getitem(img, x, y): w, h = img.shape if x >= w: x = w - 1 - (x - w) if y >= h: y = h - 1 - (y - h) return img[x, y] def filter_factory(kernel): ksum = np.sum(kernel) if ksum == 0: ksum = 1 k9 = kernel / ksum @jit def kernel_apply(img, out, x, y): tmp = 0 for i in range(3): for j in range(3): tmp += img[x+i-1, y+j-1] * k9[i, j] out[x, y] = tmp @jit def kernel_apply_edge(img, out, x, y): tmp = 0 for i in range(3): for j in range(3): tmp += getitem(img, x+i-1, y+j-1) * k9[i, j] out[x, y] = tmp @jit def kernel_k9(img, out): # Loop through all internals for x in range(1, img.shape[0] -1): for y in range(1, img.shape[1] -1): kernel_apply(img, out, x, y) # Loop through all the edges for x in range(img.shape[0]): kernel_apply_edge(img, out, x, 0) kernel_apply_edge(img, out, x, img.shape[1] - 1) for y in range(img.shape[1]): kernel_apply_edge(img, out, 0, y) kernel_apply_edge(img, out, img.shape[0] - 1, y) return kernel_k9 average = np.array([ [1, 1, 1], [1, 1, 1], [1, 1, 1], ], dtype=np.float32) sharpen = np.array([ [-1, -1, -1], [-1, 12, -1], [-1, -1, -1], ], dtype=np.float32) edge = np.array([ [ 0, -1, 0], [-1, 4, -1], [ 0, -1, 0], ], dtype=np.float32) edge_h = np.array([ [ 0, 0, 0], [-1, 2, -1], [ 0, 0, 0], ], dtype=np.float32) edge_v = np.array([ [0, -1, 0], [0, 2, 0], [0, -1, 0], ], dtype=np.float32) gradient_h = np.array([ [-1, -1, -1], [ 0, 0, 0], [ 1, 1, 1], ], dtype=np.float32) gradient_v = np.array([ [-1, 0, 1], [-1, 0, 1], [-1, 0, 1], ], dtype=np.float32) sobol_h = np.array([ [ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1], ], dtype=np.float32) sobol_v = np.array([ [-1, 0, 1], [-2, 0, 2], [-1, 0, 1], ], dtype=np.float32) emboss = np.array([ [-2, -1, 0], [-1, 1, 1], [ 0, 1, 2], ], dtype=np.float32) kernels = { 
"average" : filter_factory(average), "sharpen" : filter_factory(sharpen), "edge (both)" : filter_factory(edge), "edge (horizontal)" : filter_factory(edge_h), "edge (vertical)" : filter_factory(edge_v), "gradient (horizontal)" : filter_factory(gradient_h), "gradient (vertical)" : filter_factory(gradient_v), "sobol (horizontal)" : filter_factory(sobol_h), "sobol (vertical)" : filter_factory(sobol_v), "emboss" : filter_factory(emboss), } images = { "ascent" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]), "face" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]), } palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)] cm = LinearColorMapper(palette=palette, low=0, high=256) width, height = images['ascent'].shape p_kernel = figure(x_range=(0, width), y_range=(0, height)) r_kernel = p_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name="kernel") def update(image="ascent", kernel_name="none", scale=100, bias=0): global _last_kname global _last_out img_kernel = images.get(image) kernel = kernels.get(kernel_name, None) if kernel == None: out = np.copy(img_kernel) else: out = np.zeros_like(img_kernel) ts = timer() kernel(img_kernel, out) te = timer() print('kernel takes:', te - ts) out *= scale / 100.0 out += bias r_kernel.data_source.data['image'] = [out] push_notebook(handle=t_kernel) t_kernel = show(p_kernel, notebook_handle=True) knames = ["none"] + sorted(kernels.keys()) interact(update, image=["ascent" ,"face"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255)) """ Explanation: 3x3 Image Kernels Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with. 
The UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally there are sliders the permit adjustment to the bias and scale of the final greyscale image. Note: Right now, adjusting the scale and bias are not as efficient as possible, because the update function always also applies the kernel (even if it has not changed). A better implementation might have a class that keeps track of the current kernel and output image so that bias and scale can be applied by themselves. End of explanation """ @njit def wavelet_decomposition(img, tmp): """ Perform inplace wavelet decomposition on `img` with `tmp` as a temporarily buffer. This is a very simple wavelet for demonstration """ w, h = img.shape halfwidth, halfheight = w//2, h//2 lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :] # Along first dimension for x in range(halfwidth): for y in range(h): lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2 righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y] # Swap buffer img, tmp = tmp, img tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:] # Along second dimension for y in range(halfheight): for x in range(w): tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2 bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1] return halfwidth, halfheight img_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :]) palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)] width, height = img_wavelet.shape p_wavelet = figure(x_range=(0, width), y_range=(0, height)) r_wavelet = p_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name="wavelet") def update(level=0): out = np.copy(img_wavelet) tmp = np.zeros_like(img_wavelet) ts = timer() hw, hh = img_wavelet.shape while level > 0 and hw > 1 and hh > 1: hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh]) level -= 1 te = timer() print('wavelet takes:', te - ts) 
r_wavelet.data_source.data['image'] = [out] push_notebook(handle=t_wavelet) t_wavelet = show(p_wavelet, notebook_handle=True) interact(update, level=(0, 7)) """ Explanation: Wavelet Decomposition This last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image. End of explanation """
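The decomposition in `wavelet_decomposition` above is just pairwise averages and differences applied along each axis. As an illustrative, stdlib-only sketch (not part of the original demo — `haar_step` and `haar_reconstruct` are hypothetical helper names), the same step on a 1-D signal, together with its exact inverse:

```python
def haar_step(signal):
    """One level of the simple Haar-style decomposition used above:
    pairwise averages in the first half, pairwise differences in the second."""
    averages = [(signal[2 * i] + signal[2 * i + 1]) / 2
                for i in range(len(signal) // 2)]
    details = [signal[2 * i] - signal[2 * i + 1]
               for i in range(len(signal) // 2)]
    return averages, details

def haar_reconstruct(averages, details):
    """Invert one decomposition level exactly."""
    signal = []
    for avg, diff in zip(averages, details):
        signal.append(avg + diff / 2)  # first element of the pair
        signal.append(avg - diff / 2)  # second element of the pair
    return signal

averages, details = haar_step([2, 4, 6, 8])
print(averages, details)                    # [3.0, 7.0] [-2, -2]
print(haar_reconstruct(averages, details))  # [2.0, 4.0, 6.0, 8.0]
```

Because each level halves the length along every axis, repeating the step on the averages gives the multi-level decomposition the slider controls.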
philipmat/presentations
python/snoop_and_better_exceptions.ipynb
mit
ROMAN = [ (1000, "M"), ( 900, "CM"), ( 500, "D"), ( 400, "CD"), ( 100, "C"), ( 90, "XC"), ( 50, "L"), ( 40, "XL"), ( 10, "X"), ( 9, "IX"), ( 5, "V"), ( 4, "IV"), ( 1, "I"), ] def to_roman(number: int): result = "" for (arabic, roman) in ROMAN: (factor, number) = divmod(number, arabic) result += roman * factor return result print(to_roman(2021)) print(to_roman(8)) """ Explanation: Python Libraries For Better Code Insights Snoop - Never Use print Again End of explanation """ import snoop @snoop def to_roman2(number: int): result = "" for (arabic, roman) in ROMAN: (factor, number) = divmod(number, arabic) result += roman * factor return result print(to_roman2(2021)) from statistics import stdev numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] print(f"numbers={numbers}: stdev={stdev(numbers)}") """ Explanation: Snooping on execution End of explanation """ def mystddev(max: int) -> float: my_numbers = list(range(max)) with snoop(depth=2): return stdev(my_numbers) print(mystddev(5)) from statistics import median print(median(numbers) + 2 * stdev(numbers)) """ Explanation: Snooping on referenced functions End of explanation """ from snoop import pp pp(pp(median(numbers)) + pp(2 * pp(stdev(numbers)))) """ Explanation: pp - pretty print End of explanation """ # print(median(numbers) + 2 * stdev(numbers)) pp.deep(lambda: median(numbers) + 2 * stdev(numbers)) """ Explanation: Shortcut: pp.deep + parameters-less lambda End of explanation """ users = { 'user1': { 'is_admin': True, 'email': 'one@exmple.com'}, 'user2': { 'is_admin': True, 'phone': '281-555-5555' }, 'user3': { 'is_admin': False, 'email': 'three@example.com' }, } def email_user(*user_names) -> None: global users for user in user_names: print("Emailing %s at %s", (user, users[user]['email'])) email_user('user1', 'user2') """ Explanation: How to use in Jupyter Load extension with %load_ext snoop in a notebook cell, then use the cell magic %%snoop at the top of a notebook cell to trace that cell: better-exceptions - Better 
and Prettier Stack Traces End of explanation """
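The property that makes `pp` composable inside larger expressions, as shown earlier, is that it returns its argument after printing it. A toy, stdlib-only imitation of that idea (`debug_print` is a hypothetical name, not part of snoop) behaves the same way:

```python
def debug_print(label, value):
    # Print the expression label and its value, then hand the value back
    # unchanged so the call can be nested inside a larger expression.
    print(f"{label} = {value!r}")
    return value

# The print is a side effect; the surrounding arithmetic is unaffected.
total = debug_print("left", 40) + debug_print("right", 2)
print(total)  # 42
```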
dolittle007/dolittle007.github.io
notebooks/gaussian-mixture-model-advi.ipynb
gpl-3.0
%matplotlib inline import theano theano.config.floatX = 'float64' import pymc3 as pm from pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, \ DensityDist, find_MAP, NUTS, Slice import theano.tensor as tt from theano.tensor.nlinalg import det import numpy as np import matplotlib.pyplot as plt import seaborn as sns n_samples = 100 rng = np.random.RandomState(123) ms = np.array([[-1, -1.5], [1, 1]]) ps = np.array([0.2, 0.8]) zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples) for z, m in zip(zs, ms)] data = np.sum(np.dstack(xs), axis=2) plt.figure(figsize=(5, 5)) plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5) plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100) plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100) """ Explanation: Gaussian Mixture Model with ADVI Here, we describe how to use ADVI for inference of Gaussian mixture model. First, we will show that inference with ADVI does not need to modify the stochastic model, just call a function. Then, we will show how to use mini-batch, which is useful for large dataset. In this case, where the model should be slightly changed. First, create artificial data from a mixuture of two Gaussian components. End of explanation """ from pymc3.math import logsumexp # Log likelihood of normal distribution def logp_normal(mu, tau, value): # log probability of individual samples k = tau.shape[0] delta = lambda mu: value - mu return (-1 / 2.) 
* (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) + (delta(mu).dot(tau) * delta(mu)).sum(axis=1)) # Log likelihood of Gaussian mixture distribution def logp_gmix(mus, pi, tau): def logp_(value): logps = [tt.log(pi[i]) + logp_normal(mu, tau, value) for i, mu in enumerate(mus)] return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0)) return logp_ with pm.Model() as model: mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,)) for i in range(2)] pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,)) xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data) """ Explanation: Gaussian mixture models are usually constructed with categorical random variables. However, any discrete rvs does not fit ADVI. Here, class assignment variables are marginalized out, giving weighted sum of the probability for the gaussian components. The log likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation stable. In the below code, DensityDist class is used as the likelihood term. The second argument, logp_gmix(mus, pi, np.eye(2)), is a python function which recieves observations (denoted by 'value') and returns the tensor representation of the log-likelihood. End of explanation """ with model: start = find_MAP() step = Metropolis() trace = sample(1000, step, start=start) """ Explanation: For comparison with ADVI, run MCMC. End of explanation """ plt.figure(figsize=(5, 5)) plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g') mu_0, mu_1 = trace['mu_0'], trace['mu_1'] plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=10) plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=10) plt.xlim(-6, 6) plt.ylim(-6, 6) sns.barplot([1, 2], np.mean(trace['pi'][-5000:], axis=0), palette=['red', 'blue']) """ Explanation: Check posterior of component means and weights. 
We can see that the MCMC samples of the component mean for the lower-left component varied more than the upper-right due to the difference in sample size between these clusters.
End of explanation
"""

with pm.Model() as model:
    mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
           for i in range(2)]
    pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
    xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)

%time means, sds, elbos = pm.variational.advi( \
    model=model, n=1000, learning_rate=1e-1)
"""
Explanation: We can use the same model with ADVI as follows.
End of explanation
"""

from copy import deepcopy

mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']

def logp_normal_np(mu, tau, value):
    # log probability of individual samples
    k = tau.shape[0]
    delta = lambda mu: value - mu
    return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +
                        (delta(mu).dot(tau) * delta(mu)).sum(axis=1))

def threshold(zz):
    zz_ = deepcopy(zz)
    zz_[zz < np.max(zz) * 1e-2] = None
    return zz_

def plot_logp_normal(ax, mu, sd, cmap):
    f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))
    g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)
    xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))
    zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)
    ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)

fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plot_logp_normal(ax, mu_0, sd_0, cmap='Reds')
plot_logp_normal(ax, mu_1, sd_1, cmap='Blues')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
"""
Explanation: The function returns three variables. 'means' and 'sds' are the mean and standard deviations of the variational posterior. Note that these values are in the transformed space, not in the original space. For random variables on the real line, e.g., means of the Gaussian components, no transformation is applied.
Then we can see the variational posterior in the original space. End of explanation """ plt.plot(elbos) """ Explanation: TODO: We need to backward-transform 'pi', which is transformed by 'stick_breaking'. 'elbos' contains the trace of ELBO, showing stochastic convergence of the algorithm. End of explanation """ n_samples = 100000 zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples) for z, m in zip(zs, ms)] data = np.sum(np.dstack(xs), axis=2) plt.figure(figsize=(5, 5)) plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5) plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100) plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100) plt.xlim(-6, 6) plt.ylim(-6, 6) """ Explanation: To demonstrate that ADVI works for large dataset with mini-batch, let's create 100,000 samples from the same mixture distribution. End of explanation """ with pm.Model() as model: mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,)) for i in range(2)] pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,)) xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data) start = find_MAP() step = Metropolis() trace = sample(1000, step, start=start) """ Explanation: MCMC took 55 seconds, 20 times longer than the small dataset. End of explanation """ plt.figure(figsize=(5, 5)) plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g') mu_0, mu_1 = trace['mu_0'], trace['mu_1'] plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=50) plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=50) plt.xlim(-6, 6) plt.ylim(-6, 6) """ Explanation: Posterior samples are concentrated on the true means, so looks like single point for each component. 
End of explanation """ data_t = tt.matrix() data_t.tag.test_value = np.zeros((1, 2)).astype(float) with pm.Model() as model: mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,)) for i in range(2)] pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,)) xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data_t) minibatch_tensors = [data_t] minibatch_RVs = [xs] """ Explanation: For ADVI with mini-batch, put theano tensor on the observed variable of the ObservedRV. The tensor will be replaced with mini-batches. Because of the difference of the size of mini-batch and whole samples, the log-likelihood term should be appropriately scaled. To tell the log-likelihood term, we need to give ObservedRV objects ('minibatch_RVs' below) where mini-batch is put. Also we should keep the tensor ('minibatch_tensors'). End of explanation """ def create_minibatch(data): rng = np.random.RandomState(0) while True: ixs = rng.randint(len(data), size=200) yield [data[ixs]] minibatches = create_minibatch(data) total_size = len(data) """ Explanation: Make a generator for mini-batches of size 200. Here, we take random sampling strategy to make mini-batches. End of explanation """ # Used only to write the function call in single line for using %time # is there more smart way? def f(): return pm.variational.advi_minibatch( model=model, n=1000, minibatch_tensors=minibatch_tensors, minibatch_RVs=minibatch_RVs, minibatches=minibatches, total_size=total_size, learning_rate=1e-1) %time means, sds, elbos = f() """ Explanation: Run ADVI. It's much faster than MCMC, though the problem here is simple and it's not a fair comparison. 
End of explanation """ from copy import deepcopy mu_0, sd_0 = means['mu_0'], sds['mu_0'] mu_1, sd_1 = means['mu_1'], sds['mu_1'] fig, ax = plt.subplots(figsize=(5, 5)) plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g') plt.scatter(mu_0[0], mu_0[1], c="r", s=50) plt.scatter(mu_1[0], mu_1[1], c="b", s=50) plt.xlim(-6, 6) plt.ylim(-6, 6) """ Explanation: The result is almost the same. End of explanation """ plt.plot(elbos); """ Explanation: The variance of the trace of ELBO is larger than without mini-batch because of the subsampling from the whole samples. End of explanation """
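The marginalized mixture likelihood earlier in this notebook leans on logsumexp for numerical stability. The max-shift trick is easy to verify outside Theano; this is an illustrative stdlib-only version (the model code above uses `pymc3.math.logsumexp` instead):

```python
import math

def logsumexp(xs):
    # Subtracting the maximum before exponentiating keeps every exp()
    # argument <= 0, so nothing overflows even for large log-values.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Matches the naive computation where the naive version is representable...
assert abs(logsumexp([1.0, 2.0]) - math.log(math.exp(1.0) + math.exp(2.0))) < 1e-12

# ...and still works where math.exp(1000) would overflow.
assert abs(logsumexp([1000.0, 1000.0]) - (1000.0 + math.log(2))) < 1e-9
```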
quantumlib/Cirq
docs/qudits.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The Cirq Developers End of explanation """ try: import cirq except ImportError: print("installing cirq...") !pip install --quiet cirq print("installed cirq.") """ Explanation: Qudits <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/qudits"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/qudits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/qudits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/qudits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> End of explanation """ import cirq import numpy as np class QutritPlusGate(cirq.Gate): """A gate that adds one in the computational basis of a qutrit. This gate acts on three-level systems. In the computational basis of this system it enacts the transformation U|x〉 = |x + 1 mod 3〉, or in other words U|0〉 = |1〉, U|1〉 = |2〉, and U|2> = |0〉. 
""" def _qid_shape_(self): # By implementing this method this gate implements the # cirq.qid_shape protocol and will return the tuple (3,) # when cirq.qid_shape acts on an instance of this class. # This indicates that the gate acts on a single qutrit. return (3,) def _unitary_(self): # Since the gate acts on three level systems it has a unitary # effect which is a three by three unitary matrix. return np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]]) def _circuit_diagram_info_(self, args): return '[+1]' # Here we create a qutrit for the gate to act on. q0 = cirq.LineQid(0, dimension=3) # We can now enact the gate on this qutrit. circuit = cirq.Circuit( QutritPlusGate().on(q0) ) # When we print this out we see that the qutrit is labeled by its dimension. print(circuit) """ Explanation: Most of the time in quantum computation, we work with qubits, which are 2-level quantum systems. However, it is possible to also define quantum computation with higher dimensional systems. A qu-d-it is a generalization of a qubit to a d-level or d-dimension system. For example, the state of a single qubit is a superposition of two basis states, $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$, whereas the state of a qudit for a three dimensional system is a superposition of three basis states $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle+\gamma|2\rangle$. Qudits with known values for d have specific names. A qubit has dimension 2, a qutrit has dimension 3, a ququart has dimension 4, and so on. In Cirq, qudits work exactly like qubits except they have a dimension attribute different than 2, and they can only be used with gates specific to that dimension. In cirq, both qubits and qudits are subclasses of the class cirq.Qid. To apply a gate to some qudits, the dimensions of the qudits must match the dimensions it works on. For example, consider gate represents a unitary evolution on three qudits,. Further suppose that there are a qubit, a qutrit, and another qutrit. 
Then the gate's "qid shape" is (2, 3, 3) and its on method will accept exactly 3 Qids with dimension 2, 3, and 3, respectively. This is an example single qutrit gate acting on a single qutrit in a simple quantum circuit: End of explanation """ # Create an instance of the qutrit gate defined above. gate = QutritPlusGate() # Verify that it acts on a single qutrit. print(cirq.qid_shape(gate)) """ Explanation: cirq.Qid cirq.Qid is the type that represents both qubits and qudits. Cirq has the built-in qubit types, cirq.NamedQubit, cirq.GridQubit, and cirq.LineQubit, and it also provides corresponding cirq.Qid types: cirq.NamedQid Example: Create a qutrit named 'a' by specifying the dimension in the constructor: cirq.NamedQid('a', dimension=3). cirq.GridQid Example: Create a qutrit at location (2, 0) by specifying the dimension in the constructor: cirq.GridQid(2, 0, dimension=3). Example: You can create regions of cirq.GridQids. For example, to create a 2x2 grid of ququarts, use cirq.GridQid.rect(2, 2, dimension=4). cirq.LineQid Example: Create a qutrit at location 1 on the line by specifying the dimension in the constructor: cirq.LineQid(0, dimension=3). Example: You can create ranges of cirq.LineQids. For example, to create qutrits on a line with locations from 0 to 4, use cirq.LineQid.range(5, dimension=3). By default cirq.Qid classes in cirq will default to qubits unless their dimension parameter is specified in creation. Thus a cirq.Qid like cirq.NamedQid('a') is a qubit. The cirq.qid_shape protocol Quantum gates, operations, and other types that act on a sequence of qudits can specify the dimension of each qudit they act on by implementing the _qid_shape_ magic method. This method returns a tuple of integers corresponding to the required dimension of each qudit it operates on, e.g. (2, 3, 3) means an object that acts on a qubit, a qutrit, and another qutrit. When you specify _qid_shape_ we say that the object implements the qid_shape protocol. 
When cirq.Qids are used with cirq.Gates, cirq.Operations, and cirq.Circuits, the dimension of each qid must match the corresponding entry in the qid shape. An error is raised otherwise.
Callers can query the qid shape of an object or a list of Qids by calling cirq.qid_shape on it. By default, cirq.qid_shape will return the equivalent qid shape for qubits if _qid_shape_ is not defined. In particular, for a qubit-only gate the qid shape is a tuple of 2s containing one 2 for each qubit, e.g. (2,) * cirq.num_qubits(gate).
End of explanation
"""

# Create an instance of the qutrit gate defined above. This gate implements _unitary_.
gate = QutritPlusGate()

# Because it acts on qutrits, its unitary is a 3 by 3 matrix.
print(cirq.unitary(gate))
"""
Explanation: Unitaries, mixtures, and channels on qudits
The magic methods _unitary_, _apply_unitary_, _mixture_, and _kraus_ can be used to define unitary gates, mixtures, and channels that can be used with qudits (see protocols for how these work). Because the state space for qudits with d>2 lives on a larger dimensional space, the corresponding objects returned by the magic methods will be of correspondingly higher dimension.
End of explanation
"""

# Create a circuit from the gate we defined above.
q0 = cirq.LineQid(0, dimension=3)
circuit = cirq.Circuit(QutritPlusGate()(q0))

# Run a simulation of this circuit.
sim = cirq.Simulator()
result = sim.simulate(circuit)

# Verify that the returned state is that of a qutrit.
print(cirq.qid_shape(result))
"""
Explanation: For a single qubit gate, its unitary is a 2x2 matrix, whereas for a single qutrit gate its unitary is a 3x3 matrix. A two qutrit gate will have a unitary that is a 9x9 matrix (3 * 3 = 9) and a qubit-ququart gate will have a unitary that is an 8x8 matrix (2 * 4 = 8). The size of the matrices involved in defining mixtures and channels follows the same pattern.
Simulating qudits
Cirq's simulators can be used to simulate or sample from circuits which act on qudits.
Simulators like cirq.Simulator and cirq.DensityMatrixSimulator will return simulation results with larger states than the same size qubit circuit when simulating qudit circuits. The size of the state returned is determined by the product of the dimensions of the qudits being simulated. For example, the state vector output of cirq.Simulator after simulating a circuit on a qubit, a qutrit, and a qutrit will have 2 * 3 * 3 = 18 elements. You can call cirq.qid_shape(simulation_result) to check the qudit dimensions. End of explanation """ # Create a circuit with three qutrit gates. q0, q1 = cirq.LineQid.range(2, dimension=3) circuit = cirq.Circuit([ QutritPlusGate()(q0), QutritPlusGate()(q1), QutritPlusGate()(q1), cirq.measure(q0, q1, key="x") ]) # Sample from this circuit. result = cirq.sample(circuit, repetitions=3) # See that the results are all integers from 0 to 2. print(result) """ Explanation: Circuits on qudits are always assumed to start in the $|0\rangle$ computational basis state, and all the computational basis states of a qudit are assumed to be $|0\rangle$, $|1\rangle$, ..., $|d-1\rangle$. Correspondingly, measurements of qudits are assumed to be in the computational basis and for each qudit return an integer corresponding to these basis states. Thus measurement results for each qudit are assumed to run from $0$ to $d-1$. End of explanation """
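The bookkeeping behind the statements above — a state vector of size 2 * 3 * 3 = 18, and per-qudit measurement results running from 0 to d-1 — can be sketched without cirq. The helper names below are hypothetical illustrations, not part of the cirq API:

```python
def state_vector_size(qid_shape):
    # Product of the qudit dimensions, e.g. (2, 3, 3) -> 18.
    size = 1
    for d in qid_shape:
        size *= d
    return size

def index_to_measurements(index, qid_shape):
    # Decode a computational-basis index into one integer in [0, d) per
    # qudit, treating the shape as a mixed-radix number (last qid fastest).
    digits = []
    for d in reversed(qid_shape):
        digits.append(index % d)
        index //= d
    return tuple(reversed(digits))

print(state_vector_size((2, 3, 3)))          # 18
print(index_to_measurements(17, (2, 3, 3)))  # (1, 2, 2)
```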
CodeNeuro/notebooks
worker/notebooks/bolt/tutorials/stacking.ipynb
mit
from bolt import ones

a = ones((100, 5), sc)
"""
Explanation: Stacking (with an example using scikit-learn)
When we construct a distributed 2D array in Bolt, we by default represent the values as one-dimensional arrays. While this is useful and generic, for some applications it is preferable to stack the values into larger arrays that we can operate on in parallel.
How to stack
To do this kind of stacking, you can call stack on a distributed array and just provide the target size (as the number of records per stack).
End of explanation
"""

a.tordd().values().first().shape
"""
Explanation: Without stacking, each record is an array of shape (5,)
End of explanation
"""

a.stack(5).tordd().values().first().shape
"""
Explanation: But once we stack, each record has shape (5,5)
End of explanation
"""
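Conceptually, stacking just groups consecutive records into fixed-size blocks. A stdlib-only sketch of that grouping (hypothetical helpers, not the Bolt implementation, which also has to respect Spark partition boundaries):

```python
def stack_records(records, size):
    # Group consecutive records into blocks of `size` records each.
    return [records[i:i + size] for i in range(0, len(records), size)]

def unstack_records(blocks):
    # Flatten the blocks back into the original sequence of records.
    return [record for block in blocks for record in block]

records = [[i] * 5 for i in range(100)]    # 100 records of "shape (5,)"
blocks = stack_records(records, 5)
print(len(blocks), len(blocks[0]))         # 20 5  -> 20 blocks of "shape (5, 5)"
assert unstack_records(blocks) == records  # round-trips exactly
```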
First, make the data End of explanation """ from bolt import array b = array(X, sc) b.shape """ Explanation: Construct our Bolt array End of explanation """ stacked = b.stack(10) """ Explanation: Stack the array End of explanation """ from sklearn.cluster import MiniBatchKMeans km = MiniBatchKMeans(n_clusters=3) """ Explanation: Create our model End of explanation """ models = stacked.map(lambda x: km.partial_fit(x).cluster_centers_) average = models.unstack().mean(axis=0) centers = average.toarray() """ Explanation: And now call partial_fit on our stacked array End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_style('darkgrid') sns.set_context('notebook') plt.scatter(X[:, 0], X[:, 1], s=100, c=y, cmap='rainbow'); plt.scatter(centers[:, 0], centers[:, 1], s=200, c='black', cmap='rainbow'); """ Explanation: Let's plot the result End of explanation """ from sklearn.cluster import KMeans km = KMeans(n_clusters=3) model = km.fit(b.toarray()) centers = model.cluster_centers_ """ Explanation: NOTE: just because we're parallelizing doesn't mean we'll see a performance improvement! It will depend on the size of the data set, the size of the cluster, and the kind of operation, among other factors. One of the goals of Bolt is to make it easy to switch between local or distributed implementations, and use whichever one is faster with minimal code changes. In this case, we could easily switch to use a local array End of explanation """ plt.scatter(X[:, 0], X[:, 1], s=100, c=y, cmap='rainbow'); plt.scatter(centers[:, 0], centers[:, 1], s=200, c='black', cmap='rainbow'); """ Explanation: and get essentially the same answer End of explanation """
HCsoft-RD/shaolin
examples/Automatic-dashboard-creation.ipynb
agpl-3.0
from shaolin import KungFu dashb = KungFu(int_slider=4, text="moco", dropdown=['Hello','World'],float_slider=(2.,10.,1.),box='2r') dashb.widget """ Explanation: How to create dashboards using shaolin KungFu 1.1 Widgets defined as keyword arguments It is possible to instantiate widgets by instantiating a KungFu object. The widgets will be created from ts keyword arguments as shown below. End of explanation """ KungFu(int_slider=4, text="moco", dropdown=['Hello','World'],float_slider=(2.,10.,1.),box='1c')[0] """ Explanation: <img src="auto_dashboard_data/image_1.png"></img> You can use the box parameter to set the layout distribution of the widgets. It is possible to set the desired number of rows or columns that our dashboard will have: End of explanation """ kwargs = {'num_xbins':(1,40,2,10), 'num_ybins':(1,40,2,10), 'grid': True, 'link_bins':True, 'plot_density':True, 'reg_order':(1,10,1,1), 'ci':"(1,100,1,95)$N=ci&d=Confidence intervals", 'robust':[False], 'scatter':True, 'regression':True, 'marginals':(['None','Histogram','KDE','Both'],'Both'), 'bw':(0.001,10,0.05,0.5), 'Title':'#Combining plots$N=title&D=', 'dataset':['ALL','I','II','III','IV'], 'save':False, #'@kernel':['gau','cos','biw','epa','tri','triw'] } kf = KungFu(**kwargs,box='4c|')# using '|' in the box description sorts the widtgets by its value type kf[0]#indexing with any integer returns the widget """ Explanation: <img src="auto_dashboard_data/image_2.png"></img> 1.2 Instantiating from kwargs dict It is possible to create a dashboard containing widgets that control the value of the desired keyword arguments as shown in the following cells. It is possible to use the same syntax that is used in the ipywidgets interact function, shaolin object notation syntax, or shaoscript syntax to define the widgets that will be displayed in the dashboard. 
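The box strings '2r' and '1c' above appear to encode a count plus 'r' for rows or 'c' for columns. A minimal sketch of a parser under that assumption (an illustration only — not shaolin's actual implementation, which also accepts modifiers like the '|' used later):

```python
def parse_box(spec):
    # '2r' -> two rows, '1c' -> one column; assumption based on the
    # examples above, not on shaolin's real parser.
    count, axis = int(spec[:-1]), spec[-1]
    if axis not in ('r', 'c'):
        raise ValueError(f"unknown axis in box spec: {spec!r}")
    return count, 'rows' if axis == 'r' else 'columns'

print(parse_box('2r'))  # (2, 'rows')
print(parse_box('1c'))  # (1, 'columns')
```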
End of explanation """ kf() #calling the dashboard returns a kwargs dict """ Explanation: <img src="auto_dashboard_data/image_3.png"></img> End of explanation """ from IPython.core.display import display import seaborn as sns %matplotlib inline sns.set(style="ticks") data = sns.load_dataset("anscombe") #general title = '#Exploratory plots$N=title&D=' marginals = "@['Both','None','Histogram','KDE']$d=Marginals" dset = "@['ALL','I','II','III','IV']$D=Dataset" x_cols = '@dd$D=X column&o='+str(data.columns.values.tolist()) y_cols = '@dd$D=Y column&o='+str(data.columns.values.tolist())+'&v='+data.columns[1] save = "@[False]$D=Save plot&n=save" data_layout = ['c$N=data_layout',[title,dset,['r$N=sub_row',[x_cols,y_cols]],marginals,save]] #histograms h_title = '#Historam options$N=h_title&D=' num_xbins = '@(1,40,2,10)$D=Num xbins&n=num_xbins' num_ybins = '@(1,40,2,10)$D=Num ybins&n=num_ybins' lbins = '@True$D=link_bins' hist_layout = ['c$N=hist_layout',[h_title,num_xbins,num_ybins,lbins]] #scatter s_title = '#Scatterplot options$N=s_title&D=' scat = '@[True]$D=Plot Scatter&n=scatter' grid = '@True$D=Grid' scatter_layout = ['c$N=scatter_layout',[s_title,scat,grid]] #regression r_title = '#Regression options$N=r_title&D=' reg = '@[True]$D=Plot Regression&n=regression' robust = '@False$D=Robust' reg_order = "@(1,10,1,1)$D=Reg order" ci = "@(1,100,1,95)$N=ci&d=Confidence intervals" reg_layout = ['c$N=reg_layout',[r_title,reg,reg_order,ci,robust]] #kde k_title = '#KDE plot options$N=k_title&D=' kde = '@[True]$D=Plot KDE&n=plot_density' bw = "@(0.001,10,0.05,0.5)$D=Bandwidth&n=bw" kde_layout = ['@c$N=kde_layout',[k_title,kde,bw]] #accordion as top level layout dash = ['ac$N=dae_plot&t=General,Histogram,Scatter,KDE,Regression',[data_layout, hist_layout, scatter_layout, kde_layout, reg_layout, ] ] KungFu(dash=dash)[0] """ Explanation: 1.3 Instantiating from shaolin syntax KungFu can be used the same way the Dashboard was used in former version of shaolin to manually define the 
layout.
End of explanation
"""
import numpy as np
import pandas as pd
from ipywidgets import interact, interactive, fixed
from IPython.core.display import clear_output,display
import matplotlib.pyplot as plt
import ipywidgets as widgets
import seaborn as sns

def draw_example_dist(n=10000):
    a = np.random.normal(loc=-15.,scale=3,size=n)
    b = np.random.normal(loc=20,scale=3,size=n)
    c = np.random.exponential(size=n,scale=3)
    return np.hstack([a,b,c])

def draw_regions(distribution='normal', bins=7, bw=0.18, normed=False,
                 mean=False, std=False, percents=False, hist=True):
    x = draw_example_dist() if distribution=='example_2' else np.random.standard_normal(10000)
    #x = draw_regions_data if distribution=='custom' else x
    fig = plt.figure(figsize=(14,8))
    ax = sns.kdeplot(x, cut=0,color='b',shade=True,alpha=0.3,bw=bw)
    d_bins = np.linspace(x.min(),x.max(),num=bins)
    if hist:
        n, bins, patches = ax.hist(x,bins=d_bins,normed=normed,rwidth=0.8,alpha=0.7)
    else:
        n=[1]
    maxx = 1 if normed else max(n)
    if mean:
        ax.axvline(x=x.mean(), ymin=0, ymax = maxx, linewidth=6, color='r',
                   label='Mean: {:.3f}'.format(x.mean()),alpha=1.)
    if std:
        m = x.mean()
        ax.axvline(x=m+x.std(), ymin=0, ymax = maxx, linewidth=5, color='g',
                   label='Std: {:.3f}'.format(x.std()),alpha=0.8)
        ax.axvline(x=m-x.std(), ymin=0, ymax = maxx, linewidth=5, color='g',alpha=0.8)
    if percents:
        d = pd.Series(x).describe()
        ax.axvline(x=d.loc['min'], ymin=0, ymax = maxx, linewidth=5,
                   color=(0.19130826141258903, 0.13147472185630074, 0.09409307479747722),
                   label='min: {:.2f}'.format(d.loc['min']),alpha=0.8)
        ax.axvline(x=d.loc['25%'], ymin=0, ymax = maxx, linewidth=5,
                   color=(0.38717148143023966, 0.26607979423298955, 0.19042646089965626),
                   label='25%: {:.2f}'.format(d.loc['25%']),alpha=0.8)
        ax.axvline(x=d.loc['50%'], ymin=0, ymax = maxx, linewidth=5,
                   color=(0.5830347014478903, 0.4006848666096784, 0.2867598470018353),
                   label='50%: {:.2f}'.format(d.loc['50%']),alpha=0.8)
        ax.axvline(x=d.loc['75%'], ymin=0, ymax = maxx, linewidth=5,
                   color=(0.7743429628604792, 0.5321595884659791, 0.3808529217993126),
                   label='75%: {:.2f}'.format(d.loc['75%']),alpha=0.8)
        ax.axvline(x=d.loc['max'], ymin=0, ymax = maxx, linewidth=5,
                   color=(0.9415558823529412, 0.663581294117647, 0.47400294117647046),
                   label='max: {:.2f}'.format(d.loc['max']),alpha=0.8)
    # ax.plot((m-0.1, m+0.1), (0,max(n)), 'k-')
    plt.grid(linewidth=2)
    plt.title("Basic statistics",fontsize=20)
    plt.legend(loc='upper left',fontsize=18)
    plt.show()
    clear_output(True)
"""
Explanation: <img src="auto_dashboard_data/image_4.png"></img>
Mimicking the interact function from ipywidgets
Let's see how we can copy the functionality of the interact function from ipywidgets. It is possible to use KungFu on a single function the same way interact would work. We can also create a Dashboard from the default arguments of a function. Here we compare how we would use the ipywidgets interact function to build an interactive dashboard, and how we can achieve the same result with shaolin.
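The "Dashboard from the default arguments of a function" step can be sketched without any widget library: read the signature with `inspect` and collect one abbreviation per parameter, letting explicit keyword overrides win. The `widgets_from_function` name below is mine, not part of shaolin or ipywidgets:

```python
import inspect

def widgets_from_function(func, **overrides):
    """Collect one widget abbreviation per parameter of func:
    explicit overrides win, otherwise the default value is used,
    and parameters without defaults are skipped."""
    spec = {}
    for name, param in inspect.signature(func).parameters.items():
        if name in overrides:
            spec[name] = overrides[name]
        elif param.default is not inspect.Parameter.empty:
            spec[name] = param.default
    return spec

def draw(distribution='normal', bins=7, normed=False):
    pass

# override one default, keep the rest, as KungFu(func=..., bins=(1,100,2,50)) would
spec = widgets_from_function(draw, bins=(1, 100, 2, 50))
```

Each entry of `spec` can then be turned into a concrete widget with whatever abbreviation rules the library applies.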
End of explanation """ inter = interact(draw_regions, normed=False,distribution=['Normal','example_2','custom']) display(inter) """ Explanation: Using interact to create widgets End of explanation """ KungFu(func=draw_regions, interact=True, box='3r')[0] """ Explanation: Same thing using KungFu End of explanation """ KungFu(func=draw_regions, bins=(1,100,2,50), bw=(0.01,10.,0.025,0.18), normed=False, distribution=['Normal','example_2'], interact=True,box="2c")[0] """ Explanation: <img src="auto_dashboard_data/image_5.png"></img> It is also possible to overwrite the default values of the function parameters. End of explanation """ import numpy as np import pandas as pd import matplotlib.pylab as plt import seaborn as sns from IPython.core.display import clear_output from shaolin import KungFu,Dashboard class DensityPlot(KungFu): def __init__(self,data): #Kwargs expanding standard interact syntax with object notation and shaoscript d_kwargs= {'num_xbins':(1,40,2,10), 'num_ybins':(1,40,2,10), 'grid': True, 'link_bins':True, 'plot_density':True, 'reg_order':(1,10,1,1), 'ci':"(1,100,1,95)$N=ci&d=Confidence intervals", 'robust':[False], 'scatter':True, 'regression':True, 'marginals':(['None','Histogram','KDE','Both'],'Both'), 'bw':(0.001,10,0.05,0.5), 'Title':'#Combining plots$N=title&D=', 'dataset':['ALL','I','II','III','IV'], 'save':False, #'@kernel':['gau','cos','biw','epa','tri','triw'] } #general title = '#Exploratory plots$N=title&D=' marginals = "@['Both','None','Histogram','KDE']$d=Marginals" dset = "@['ALL','I','II','III','IV']$D=Dataset" x_cols = '@dd$D=X column&o='+str(data.columns.values.tolist()) y_cols = '@dd$D=Y column&o='+str(data.columns.values.tolist())+'&v='+data.columns[1] save = "@[False]$D=Save plot&n=save" data_layout = ['c$N=data_layout',[title,dset,['r$N=sub_row',[x_cols,y_cols]],marginals,save]] #histograms h_title = '#Historam options$N=h_title&D=' num_xbins = '@(1,40,2,10)$D=Num xbins&n=num_xbins' num_ybins = '@(1,40,2,10)$D=Num 
ybins&n=num_ybins' lbins = '@True$D=link_bins' hist_layout = ['c$N=hist_layout',[h_title,num_xbins,num_ybins,lbins]] #scatter s_title = '#Scatterplot options$N=s_title&D=' scat = '@[True]$D=Plot Scatter&n=scatter' grid = '@True$D=Grid' scatter_layout = ['c$N=scatter_layout',[s_title,scat,grid]] #regression r_title = '#Regression options$N=r_title&D=' reg = '@[True]$D=Plot Regression&n=regression' robust = '@False$D=Robust' reg_order = "@(1,10,1,1)$D=Reg order" ci = "@(1,100,1,95)$N=ci&d=Confidence intervals" reg_layout = ['c$N=reg_layout',[r_title,reg,reg_order,ci,robust]] #kde k_title = '#KDE plot options$N=k_title&D=' kde = '@[True]$D=Plot KDE&n=plot_density' bw = "@(0.001,10,0.05,0.5)$D=Bandwidth&n=bw" kde_layout = ['@c$N=kde_layout',[k_title,kde,bw]] dash = ['ac$N=dae_plot&t=General,Histogram,Scatter,KDE,Regression',[data_layout, hist_layout, scatter_layout, kde_layout, reg_layout, ] ] self.data = data KungFu.__init__(self,dash=dash,mode='interactive') self.num_xbins.widget.continuous_update=False self.num_ybins.widget.continuous_update=False self.density_plot() self.link_bins.observe(self.link_sliders) self.observe(self.density_plot) self.link_sliders() self._i_plot = 0 def link_sliders(self,_=None): if self.link_bins(): self.num_ybins.visible = False self.link('num_xbins','num_ybins') else: self.unlink('num_xbins','num_ybins') self.num_ybins.visible = True def density_plot(self,_=None): clear_output() if self.kwargs['dataset']=='ALL': self.x_column.value = 'x' self.y_column.value = 'y' subdf = self.data.copy() else: subdf = self.data[self.data['dataset']==self.dataset()].copy() x,y = subdf[self.x_column()],subdf[self.y_column()] x_regions = 10 y_regions = 10 x_bins = np.linspace(x.min(),x.max(),num=self.kwargs['num_xbins']) y_bins = np.linspace(y.min(),y.max(),num=self.kwargs['num_ybins']) g = sns.JointGrid(x=self.x_column(), y=self.y_column(), data=subdf) g.fig.set_figwidth(14) g.fig.set_figheight(9) if self.kwargs['plot_density']: g = 
g.plot_joint(sns.kdeplot, shade=True,alpha=0.5,legend=True,bw=self.kwargs['bw'], gridsize=int((len(x_bins)+len(y_bins))/2), clip=((x.min()*0.95,x.max()*1.05),(y.min()*0.95,y.max()*1.05))) if self.kwargs['scatter']: g = g.plot_joint(plt.scatter,s=80,alpha=0.8) if self.kwargs['marginals'] in ['Histogram','Both']: _ = g.ax_marg_x.hist(x, alpha=.6, bins=x_bins,normed=True) _ = g.ax_marg_y.hist(y, alpha=.6, orientation="horizontal", bins=y_bins,normed=True) if self.kwargs['marginals'] in ['KDE','Both']: clip = ((x.values.min()*0.95,x.values.max()*1.05),(y.values.min()*0.95,y.values.max()*1.05)) g = g.plot_marginals(sns.kdeplot, shade=True,alpha=0.5, gridsize=int((len(x_bins)+len(y_bins))/2)) if self.kwargs['regression']: g = g.plot_joint(sns.regplot, robust=self.kwargs['robust'], ci=self.kwargs['ci'], order=self.kwargs['reg_order'], truncate=True, scatter_kws={"s": 0}) if self.kwargs['grid']: plt.grid(linewidth=2) plt.xticks(x_bins) plt.yticks(y_bins) plt.xlim(x.values.min()*0.95,x.values.max()*1.05) plt.ylim(y.values.min()*0.95,y.values.max()*1.05) if self.kwargs['save']: _ =plt.savefig("density_plot_{}.png".format(self._i_plot), dpi=100) self._i_plot += 1 DensityPlot(data)[0] """ Explanation: <img src="auto_dashboard_data/image_6.png"></img> Shaolin KungFu for automatic dashboard creation Here it is an example on how to build a complex dashboard: End of explanation """
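Under the hood, the DensityPlot class above reacts to widget changes through `observe` and `link`. A stripped-down, library-free sketch of that observer mechanism — the `Observable` class and `link` helper here are illustrative, not shaolin's actual implementation:

```python
class Observable:
    """A value that notifies registered callbacks when it changes."""
    def __init__(self, value):
        self._value = value
        self._callbacks = []

    def observe(self, callback):
        self._callbacks.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:
            self._value = new
            for cb in self._callbacks:
                cb(new)

def link(a, b):
    """One-way link: keep b equal to a, as link('num_xbins', 'num_ybins') does above."""
    a.observe(lambda v: setattr(b, 'value', v))
    b.value = a.value

num_xbins = Observable(10)
num_ybins = Observable(5)
link(num_xbins, num_ybins)
num_xbins.value = 25  # num_ybins follows automatically
```

This is the same pattern `self.observe(self.density_plot)` exploits: the plot callback fires once per widget change instead of being polled.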
from cntk import load_model import findspark findspark.init('/root/spark-2.1.0-bin-hadoop2.6') import os import numpy as np import pandas as pd import pickle import sys from pyspark import SparkFiles from pyspark import SparkContext from pyspark.sql.session import SparkSession sc =SparkContext() spark = SparkSession(sc) import tarfile from urllib.request import urlretrieve import xml.etree.ElementTree cifar_uri = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz' # Location of test image dataset mean_image_uri = 'https://raw.githubusercontent.com/Azure-Samples/hdinsight-pyspark-cntk-integration/master/CIFAR-10_mean.xml' # Mean image for subtraction model_uri = 'https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration/raw/master/resnet20_meanimage_159.dnn' # Location of trained model local_tmp_dir = '/tmp/cifar' local_cifar_path = os.path.join(local_tmp_dir, os.path.basename(cifar_uri)) local_model_path = os.path.join(local_tmp_dir, 'model.dnn') local_mean_image_path = os.path.join(local_tmp_dir, 'mean_image.xml') os.makedirs(local_tmp_dir, exist_ok=True) """ Explanation: Walkthrough: Scoring a trained CNTK model with PySpark on a Microsoft Azure HDInsight cluster This notebook demonstrates how a trained Microsoft Cognitive Toolkit deep learning model can be applied to files in a distributed and scalable fashion using the Spark Python API (PySpark). An image classification model pretrained on the CIFAR-10 dataset is applied to 10,000 withheld images. A sample of the images is shown below along with their classes: <img src="https://cntk.ai/jup/201/cifar-10.png" width=500 height=500> To begin, follow the instructions below to set up a cluster and storage account. You will be prompted to upload a copy of this notebook to the cluster, where you can continue following the walkthrough by executing the PySpark code cells. 
Outline
Load sample images into a Spark Resilient Distributed Dataset or RDD
Load modules and define presets
Download the dataset locally on the Spark cluster
Convert the dataset into an RDD
Score the images using a trained CNTK model
Download the trained CNTK model to the Spark cluster
Define functions to be used by worker nodes
Score the images on worker nodes
Evaluate model accuracy
<a name="images"></a>
Load sample images into a Spark Resilient Distributed Dataset or RDD
We will now use Python to obtain the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. For more details on the dataset, see Alex Krizhevsky's Learning Multiple Layers of Features from Tiny Images (2009).
<a name="imports"></a>
Load modules and define presets
Execute the cell below by selecting it with the mouse or arrow keys, then pressing Shift+Enter.
End of explanation
"""
if not os.path.exists(local_cifar_path):
    urlretrieve(cifar_uri, filename=local_cifar_path)

with tarfile.open(local_cifar_path, 'r:gz') as f:
    test_dict = pickle.load(f.extractfile('cifar-10-batches-py/test_batch'), encoding='latin1')
"""
Explanation: <a name="tarball"></a>
Download the dataset locally on the Spark cluster
The image data are ndarrays stored in a Python dict which has been pickled and tarballed. The cell below downloads the tarball and extracts the dict containing the test image data.
End of explanation
"""
def reshape_image(record):
    image, label, filename = record
    return image.reshape(3,32,32).transpose(1,2,0), label, filename

image_rdd = sc.parallelize(zip(test_dict['data'], test_dict['labels'], test_dict['filenames']))
image_rdd = image_rdd.map(reshape_image)
"""
Explanation: <a name="rdd"></a>
Convert the dataset into an RDD
The following code cell illustrates how the collection of images can be distributed to create a Spark RDD.
The cell creates an RDD with one partition per worker to limit the number of times that the trained model must be reloaded during scoring. End of explanation """ sample_images = image_rdd.take(5) image_data = np.array([i[0].reshape((32*32*3)) for i in sample_images]).T image_labels = [i[2] for i in sample_images] image_df = pd.DataFrame(image_data, columns=image_labels) spark.createDataFrame(image_df).coalesce(1).write.mode("overwrite").csv("/tmp/cifar_image", header=True) import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from glob import glob image_df = pd.read_csv(glob('/tmp/cifar_image/*.csv')[0]) plt.figure(figsize=(15,1)) for i, col in enumerate(image_df.columns): plt.subplot(1, 5, i+1) image = image_df[col].values.reshape((32, 32, 3)) plt.imshow(image) plt.title(col) cur_axes = plt.gca() cur_axes.axes.get_xaxis().set_visible(False) cur_axes.axes.get_yaxis().set_visible(False) """ Explanation: To convince ourselves that the data has been properly loaded, let's visualize a few of these images. For plotting, we will need to transfer them to the local context by way of a Spark dataframe: End of explanation """ urlretrieve(model_uri, local_model_path) sc.addFile(local_model_path) urlretrieve(mean_image_uri, local_mean_image_path) mean_image = xml.etree.ElementTree.parse(local_mean_image_path).getroot() mean_image = [float(i) for i in mean_image.find('MeanImg').find('data').text.strip().split(' ')] mean_image = np.array(mean_image).reshape((32, 32, 3)).transpose((2, 0, 1)) mean_image_bc = sc.broadcast(mean_image) """ Explanation: <a name="score"></a> Score the images using a trained CNTK model Now that the cluster and sample dataset have been created, we can use PySpark to apply a trained model to the images. <a name="model"></a> Download the trained CNTK model and mean image to the Spark cluster We previously trained a twenty-layer ResNet model to classify CIFAR-10 images by following this tutorial from the CNTK git repo. 
The model expects input images to be preprocessed by subtracting the mean image defined in an OpenCV XML file. The following cell downloads both the trained model and the mean image, and ensures that data from both files can be accessed by worker nodes. End of explanation """ def get_preprocessed_image(my_image, mean_image): ''' Reshape and flip RGB order ''' my_image = my_image.astype(np.float32) bgr_image = my_image[:, :, ::-1] # RGB -> BGR image_data = np.ascontiguousarray(np.transpose(bgr_image, (2, 0, 1))) image_data -= mean_image return(image_data) def run_worker(records): ''' Scoring script run by each worker ''' loaded_model = load_model(SparkFiles.get('./model.dnn')) mean_image = mean_image_bc.value # Iterate through the records in the RDD. # record[0] is the image data # record[1] is the true label # record[2] is the file name for record in records: preprocessed_image = get_preprocessed_image(record[0], mean_image) dnn_output = loaded_model.eval({loaded_model.arguments[0]: [preprocessed_image]}) yield record[1], np.argmax(np.squeeze(dnn_output)) """ Explanation: <a name="functions"></a> Define functions to be used by worker nodes The following functions will be used during scoring to load, preprocess, and score images. A class label (integer in the range 0-9) will be returned for each image, along with its filename. End of explanation """ labelled_images = image_rdd.mapPartitions(run_worker) # Time how long it takes to score 10k test images start = pd.datetime.now() results = labelled_images.collect() print('Scored {} images'.format(len(results))) stop = pd.datetime.now() print(stop - start) """ Explanation: <a name="map"></a> Score the images on worker nodes The code cell below maps each partition of image_rdd to a worker node and collects the results. Runtimes of 1-3 minutes are typical. 
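The preprocessing chain — HWC→CHW layout change, RGB→BGR flip, mean subtraction — is easy to get wrong silently. Below is a small NumPy check of the same transform on a toy 2×2 image; the `preprocess` helper mirrors `get_preprocessed_image` above but is a standalone sketch with the shapes shrunk for readability:

```python
import numpy as np

def preprocess(rgb_hwc, mean_chw):
    """HWC RGB image -> CHW BGR float32 with a mean image subtracted,
    mirroring the get_preprocessed_image function above."""
    img = rgb_hwc.astype(np.float32)
    bgr = img[:, :, ::-1]                                # flip channel order RGB -> BGR
    chw = np.ascontiguousarray(bgr.transpose(2, 0, 1))   # HWC -> CHW
    return chw - mean_chw

# toy 2x2 image: every pixel is (R, G, B) = (10, 20, 30)
rgb = np.tile(np.array([10, 20, 30], dtype=np.uint8), (2, 2, 1))
mean = np.zeros((3, 2, 2), dtype=np.float32)
out = preprocess(rgb, mean)
```

After the transform, channel 0 should hold the blue values and channel 2 the red ones; checking that on a constant image catches a forgotten flip or transpose immediately.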
End of explanation """ df = pd.DataFrame(results, columns=['true_label', 'predicted_label']) num_correct = sum(df['true_label'] == df['predicted_label']) num_total = len(results) print('Correctly predicted {} of {} images ({:0.2f}%)'.format(num_correct, num_total, 100 * num_correct / num_total)) """ Explanation: <a name="evaluate"></a> Evaluate model accuracy The trained model assigns a class label (represented by an integer value 0-9) to each image. We now compare the true and predicted class labels to evaluate our model's accuracy. End of explanation """ spark.createDataFrame(df).coalesce(1).write.mode("overwrite").csv("/tmp/cifar_scores", header=True) import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix import os from glob import glob df = pd.read_csv(glob('/tmp/cifar_scores/*.csv')[0]) print('Constructing a confusion matrix with the first {} samples'.format(len(df.index))) label_to_name_dict = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'} labels = np.sort(df['true_label'].unique()) named_labels = [label_to_name_dict[i] for i in labels] cm = confusion_matrix(df['true_label'], df['predicted_label'], labels=labels) plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues) plt.colorbar() tick_marks = np.arange(len(labels)) plt.xticks(tick_marks, named_labels, rotation=90) plt.yticks(tick_marks, named_labels) plt.xlabel('Predicted label') plt.ylabel('True Label') plt.show() """ Explanation: We can construct a confusion matrix to visualize which classification errors are most common: End of explanation """
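The accuracy figure and confusion matrix can also be cross-checked without pandas or scikit-learn. A tiny NumPy sketch that tallies the matrix by hand, following scikit-learn's convention of rows for true labels and columns for predictions (`confusion_counts` is an illustrative helper, not a library function):

```python
import numpy as np

def confusion_counts(true_labels, predicted_labels, num_classes):
    """num_classes x num_classes matrix where cm[i, j] counts samples
    whose true class is i and predicted class is j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

true_y = [0, 0, 1, 2, 2, 2]
pred_y = [0, 1, 1, 2, 2, 0]
cm = confusion_counts(true_y, pred_y, num_classes=3)
accuracy = np.trace(cm) / cm.sum()  # correct predictions sit on the diagonal
```

On this toy example four of six predictions land on the diagonal, matching the `num_correct / num_total` computation above.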
import pp

yaml = """
instances:
  mmi_long:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 10
  mmi_short:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 5
"""
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)

c.instances

c.instances['mmi_long'].x = 100
pp.show(c)
pp.plotgds(c)

import pp

yaml = """
instances:
  mmi_long:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 10
  mmi_short:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 5
placements:
  mmi_long:
    x: 100
    y: 100
"""
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)

import pp

yaml = """
instances:
  mmi_long:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 10
  mmi_short:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 5
placements:
  mmi_long:
    x: 100
    y: 100
routes:
  mmi_short,E1: mmi_long,W0
"""
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
"""
Explanation: YAML component place and route
We can define a place and route component by a netlist in YAML format
End of explanation
"""
import pp

yaml = """
instances:
  mmi_long:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 10
  mmi_short:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 5
placements:
  mmi_long:
    rotation: 180
    x: 100
    y: 100
routes:
  mmi_short,E1: mmi_long,E0
"""
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
"""
Explanation: You can rotate an instance by specifying the angle in degrees
End of explanation
"""
import pp

yaml = """
instances:
  mmi_long:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 10
  mmi_short:
    component: mmi1x2
    settings:
      width_mmi: 4.5
      length_mmi: 5
placements:
  mmi_long:
    rotation: 180
    x: 100
    y: 100
routes:
  mmi_short,E1: mmi_long,E0
ports:
  E0: mmi_short,W0
  W0: mmi_long,W0
"""
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)

r = c.routes['mmi_short,E1:mmi_long,E0']
r

r.parent.length

c.instances

c.routes

pp.write_gds(c, add_ports_to_all_cells=True)
"""
Explanation: You can also define ports for the component
End of explanation
"""
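Each route or port endpoint in the netlist above is an `instance,port` string. As a hypothetical helper — not part of the pp API — here is how such entries can be split into their components:

```python
def parse_endpoint(endpoint):
    """Split an 'instance,port' string such as 'mmi_short,E1'
    into its (instance_name, port_name) pair."""
    instance, port = (part.strip() for part in endpoint.split(','))
    return instance, port

def parse_routes(routes):
    """Map each route of a netlist dict to a
    ((src_instance, src_port), (dst_instance, dst_port)) pair."""
    return [(parse_endpoint(src), parse_endpoint(dst)) for src, dst in routes.items()]

routes = {'mmi_short,E1': 'mmi_long,W0'}
parsed = parse_routes(routes)
```

A router would then look up each instance by name and draw a waveguide between the two resolved ports.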
from sklearn.datasets import load_iris from sklearn.decomposition import PCA import pylab as pl from itertools import cycle iris = load_iris() numSamples, numFeatures = iris.data.shape print(numSamples) print(numFeatures) print(list(iris.target_names)) """ Explanation: Principal Component Analysis PCA is a dimensionality reduction technique; it lets you distill multi-dimensional data down to fewer dimensions, selecting new dimensions that preserve variance in the data as best it can. We're not talking about Star Trek stuff here; let's make it real - a black & white image for example, contains three dimensions of data: X position, Y position, and brightness at each point. Distilling that down to two dimensions can be useful for things like image compression and facial recognition, because it distills out the information that contributes most to the variance in the data set. Let's do this with a simpler example: the Iris data set that comes with scikit-learn. It's just a small collection of data that has four dimensions of data for three different kinds of Iris flowers: The length and width of both the petals and sepals of many individual flowers from each species. Let's load it up and have a look: End of explanation """ X = iris.data pca = PCA(n_components=2, whiten=True).fit(X) X_pca = pca.transform(X) """ Explanation: So, this tells us our data set has 150 samples (individual flowers) in it. It has 4 dimensions - called features here, and three distinct Iris species that each flower is classified into. While we can visualize 2 or even 3 dimensions of data pretty easily, visualizing 4D data isn't something our brains can do. So let's distill this down to 2 dimensions, and see how well it works: End of explanation """ print(pca.components_) """ Explanation: What we have done is distill our 4D data set down to 2D, by projecting it down to two orthogonal 4D vectors that make up the basis of our new 2D projection. 
We can see what those 4D vectors are, although it's not something you can really wrap your head around: End of explanation """ print(pca.explained_variance_ratio_) print(sum(pca.explained_variance_ratio_)) """ Explanation: Let's see how much information we've managed to preserve: End of explanation """ %matplotlib inline from pylab import * colors = cycle('rgb') target_ids = range(len(iris.target_names)) pl.figure() for i, c, label in zip(target_ids, colors, iris.target_names): pl.scatter(X_pca[iris.target == i, 0], X_pca[iris.target == i, 1], c=c, label=label) pl.legend() pl.show() """ Explanation: That's pretty cool. Although we have thrown away two of our four dimensions, PCA has chosen the remaining two dimensions well enough that we've captured 92% of the variance in our data in a single dimension alone! The second dimension just gives us an additional 5%; altogether we've only really lost less than 3% of the variance in our data by projecting it down to two dimensions. As promised, now that we have a 2D representation of our data, we can plot it: End of explanation """
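For intuition, PCA amounts to an SVD of the centered data matrix. The `pca_fit` sketch below is illustrative (not scikit-learn's implementation) and reproduces the components and the explained variance ratios on synthetic data:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the centered data:
    returns (components, explained_variance_ratio)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (len(X) - 1)      # per-component variance
    ratio = var / var.sum()
    return Vt[:n_components], ratio[:n_components]

rng = np.random.RandomState(0)
# a correlated pair of columns embedded in 4D, so one direction dominates the variance
base = rng.normal(size=(200, 1))
X = np.hstack([base,
               2 * base + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])
components, ratio = pca_fit(X, n_components=2)
```

The rows of `components` are orthonormal directions in the original 4D space, exactly like `pca.components_`, and `ratio` plays the role of `explained_variance_ratio_` (whitening aside).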