dariox2/CADL
session-5/session-5-part-1[1-3].ipynb
apache-2.0
# First check the Python version import sys if sys.version_info < (3,4): print('You are running an older version of Python!\n\n', 'You should consider updating to Python 3.4.0 or', 'higher as the libraries built for this course', 'have only been tested in Python 3.4 and higher.\n') print('Try installing the Python 3.5 version of anaconda', 'and then restart `jupyter notebook`:\n', 'https://www.continuum.io/downloads\n\n') # Now get necessary libraries try: import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize from skimage import data from scipy.misc import imresize from scipy.ndimage.filters import gaussian_filter import IPython.display as ipyd import tensorflow as tf from libs import utils, gif, datasets, dataset_utils, nb_utils except ImportError as e: print("Make sure you have started notebook in the same directory", "as the provided zip file which includes the 'libs' folder", "and the file 'utils.py' inside of it. You will NOT be able", "to complete this assignment unless you restart jupyter", "notebook inside the directory created by extracting", "the zip file or cloning the github repo.") print(e) # We'll tell matplotlib to inline any drawn figures like so: %matplotlib inline plt.style.use('ggplot') # Bit of formatting because I don't like the default inline code style: from IPython.core.display import HTML HTML("""<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>""") """ Explanation: Session 5: Generative Networks Assignment: Generative Adversarial Networks and Recurrent Neural Networks <p class="lead"> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning with Google's Tensorflow</a><br /> <a href="http://pkmital.com">Parag K.
Mital</a><br /> <a href="https://www.kadenze.com">Kadenze, Inc.</a> </p> Table of Contents <!-- MarkdownTOC autolink="true" autoanchor="true" bracket="round" --> Overview Learning Goals Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN) Introduction Building the Encoder Building the Discriminator for the Training Samples Building the Decoder Building the Generator Building the Discriminator for the Generated Samples GAN Loss Functions Building the Optimizers w/ Regularization Loading a Dataset Training Equilibrium Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN) Batch Normalization Building the Encoder Building the Variational Layer Building the Decoder Building VAE/GAN Loss Functions Creating the Optimizers Loading the Dataset Training Part 3 - Latent-Space Arithmetic Loading the Pre-Trained Model Exploring the Celeb Net Attributes Find the Latent Encoding for an Attribute Latent Feature Arithmetic Extensions Part 4 - Character-Level Language Model Part 5 - Pretrained Char-RNN of Donald Trump Getting the Trump Data Basic Text Analysis Loading the Pre-trained Trump Model Inference: Keeping Track of the State Probabilistic Sampling Inference: Temperature Inference: Priming Assignment Submission <!-- /MarkdownTOC --> <a name="overview"></a> Overview This is certainly the hardest session and will require a lot of time and patience to complete. Many elements of this session may also require further investigation, including reading the original papers and additional resources, in order to fully grasp them. The models we cover are state of the art, and I've aimed to give you something between a practical and a mathematical understanding of the material, though it is a tricky balance. For those interested, I hope you delve deeper into the papers for more understanding; for those seeking just a practical understanding, these notebooks should suffice.
This session covered two of the most advanced generative networks: generative adversarial networks and recurrent neural networks. In the homework, we'll see how these work in more detail and try building our own. I am not asking you to train anything in this session, as both GANs and RNNs take many days to train. However, I have provided pre-trained networks which we'll be exploring. We'll also see how a Variational Autoencoder can be combined with a Generative Adversarial Network to allow you to also encode input data, and I've provided a pre-trained model of this type trained on the Celeb Faces dataset. We'll see what this means in more detail below. After this session, you are also required to submit your final project, which can combine any of the materials you have learned so far to produce a short 1 minute clip demonstrating any aspect of the course you want to investigate further, or combine with anything else you feel like doing. This is completely open-ended, meant to encourage you and your peers to share something that demonstrates creative thinking. Be sure to keep the final project in mind while browsing through this notebook! <a name="learning-goals"></a> Learning Goals Learn to build the components of a Generative Adversarial Network and how it is trained Learn to combine the Variational Autoencoder with a Generative Adversarial Network Learn to use latent space arithmetic with a pre-trained VAE/GAN network Learn to build the components of a Character Recurrent Neural Network and how it is trained Learn to sample from a pre-trained CharRNN model End of explanation """ # We'll keep a variable for the size of our image. n_pixels = 32 n_channels = 3 input_shape = [None, n_pixels, n_pixels, n_channels] # And then create the input image placeholder X = tf.placeholder(name='X'...
""" Explanation: <a name="part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan"></a> Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN) <a name="introduction"></a> Introduction Recall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The "generator" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as: 1, the image you gave me was real, or 0, the image you gave me was generated by the generator, it's a FAKE! So the discriminator is like an encoder which takes an image and then perfoms lie detection. Are you feeding me lies? Or is the image real? Consider the AE and VAE we trained in Session 3. The loss function operated partly on the input space. It said, per pixel, what is the difference between my reconstruction and the input image? The l2-loss per pixel. Recall at that time we suggested that this wasn't the best idea because per-pixel differences aren't representative of our own perception of the image. One way to consider this is if we had the same image, and translated it by a few pixels. We would not be able to tell the difference, but the per-pixel difference between the two images could be enormously high. The GAN does not use per-pixel difference. Instead, it trains a distance function: the discriminator. The discriminator takes in two images, the real image and the generated one, and learns what a similar image should look like! That is really the amazing part of this network and has opened up some very exciting potential future directions for unsupervised learning. Another network that also learns a distance function is known as the siamese network. 
We didn't get into this network in this course, but it is commonly used in facial verification, or asserting whether two faces are the same or not. The GAN network is notoriously a huge pain to train! For that reason, we won't actually be training it. Instead, we'll discuss an extension to this basic network called the VAEGAN which uses the VAE we created in Session 3 along with the GAN. We'll then train that network in Part 2. For now, let's stick with creating the GAN. Let's first create the two networks: the discriminator and the generator. We'll first begin by building a general purpose encoder which we'll use for our discriminator. Recall that we've already done this in Session 3. What we want is for the input placeholder to be encoded using a list of dimensions for each of our encoder's layers. In the case of a convolutional network, our list of dimensions should correspond to the number of output filters. We also need to specify the kernel heights and widths for each layer's convolutional network. We'll first need a placeholder. This will be the "real" image input to the discriminator, and the discriminator will encode this image into a single value, 0 or 1, saying, yes this is real, or no, this is not real. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ def encoder(x, channels, filter_sizes, activation=tf.nn.tanh, reuse=None): # Set the input to a common variable name, h, for hidden layer h = x # Now we'll loop over the list of dimensions defining the number # of output filters in each layer, and collect each hidden layer hs = [] for layer_i in range(len(channels)): with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse): # Convolve using the utility convolution function # This requires the number of output filters, # and the size of the kernel in `k_h` and `k_w`. # By default, this will use a stride of 2, meaning # each new layer will be downsampled by 2. h, W = utils.conv2d(...
# Now apply the activation function h = activation(h) # Store each hidden layer hs.append(h) # Finally, return the encoding. return h, hs """ Explanation: <a name="building-the-encoder"></a> Building the Encoder Let's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ def discriminator(X, channels=[50, 50, 50, 50], filter_sizes=[4, 4, 4, 4], activation=utils.lrelu, reuse=None): # We'll scope these variables to "discriminator" with tf.variable_scope('discriminator', reuse=reuse): # Encode X: H, Hs = encoder(X, channels, filter_sizes, activation, reuse) # Now make one last layer with just 1 output. We'll # have to reshape to 2-d so that we can create a fully # connected layer: shape = H.get_shape().as_list() H = tf.reshape(H, [-1, shape[1] * shape[2] * shape[3]]) # Now we can connect our 2D layer to a single neuron output w/ # a sigmoid activation: D, W = utils.linear(... return D """ Explanation: <a name="building-the-discriminator-for-the-training-samples"></a> Building the Discriminator for the Training Samples Finally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer. <h3><font color='red'>TODO!
COMPLETE THIS SECTION!</font></h3> End of explanation """ D_real = discriminator(X) """ Explanation: Now let's create the discriminator for the real training data coming from X: End of explanation """ graph = tf.get_default_graph() nb_utils.show_graph(graph.as_graph_def()) """ Explanation: And we can see what the network looks like now: End of explanation """ # We'll need some variables first. This will be how many # channels our generator's feature vector has. Experiment w/ # this if you are training your own network. n_code = 16 # And in total how many features it has, including the spatial dimensions. n_latent = (n_pixels // 16) * (n_pixels // 16) * n_code # Let's build the 2-D placeholder, which is the 1-d feature vector for every # element in our batch. We'll then reshape this to 4-D for the decoder. Z = tf.placeholder(name='Z', shape=[None, n_latent], dtype=tf.float32) # Now we can reshape it to input to the decoder. Here we have to # be mindful of the height and width as described before. We need # to make the height and width a factor of the final height and width # that we want. Since we are using strided convolutions of 2, then # we can say with 4 layers, that the first decoder layer should be: # n_pixels / 2 / 2 / 2 / 2, or n_pixels / 16: Z_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code]) """ Explanation: <a name="building-the-decoder"></a> Building the Decoder Now we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above. Let's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image.
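As a quick standalone sanity check on this arithmetic (plain NumPy mirroring the numbers above, not the TensorFlow graph): four stride-2 layers divide n_pixels by 16, so each 1-d feature vector of n_latent values reshapes cleanly into the 4-d tensor the decoder expects.

```python
import numpy as np

n_pixels, n_code, batch_size = 32, 16, 5

# 32 // 16 = 2 pixels per side, times n_code channels:
n_latent = (n_pixels // 16) * (n_pixels // 16) * n_code
print(n_latent)  # 64

# A batch of 1-d feature vectors, reshaped to 4-d for the decoder:
Z = np.random.uniform(0.0, 1.0, (batch_size, n_latent))
Z_tensor = Z.reshape(-1, n_pixels // 16, n_pixels // 16, n_code)
print(Z_tensor.shape)  # (5, 2, 2, 16)
```

The reshape works for any batch size precisely because n_latent was computed from the same spatial factors, which is why the height and width must divide evenly by 16.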
What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3. But how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent, to height x width x channels through a series of convolutional transpose layers. One way to approach this is to think of the reverse process. Starting from the final decoding of height x width x channels, I will use convolution with a stride of 2, so downsample by 2 with each new layer. So the second-to-last decoder layer would be height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want. Let's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator). End of explanation """ def decoder(z, dimensions, channels, filter_sizes, activation=tf.nn.relu, reuse=None): h = z hs = [] for layer_i in range(len(dimensions)): with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse): h, W = utils.deconv2d(x=h, n_output_h=dimensions[layer_i], n_output_w=dimensions[layer_i], n_output_ch=channels[layer_i], k_h=filter_sizes[layer_i], k_w=filter_sizes[layer_i], reuse=reuse) h = activation(h) hs.append(h) return h, hs """ Explanation: Now we'll build the decoder in much the same way as we built our encoder. And exactly as we've done in Session 3! This requires one additional parameter "channels" which is how many output filters we want for each net layer.
We'll interpret the dimensions as the height and width of the tensor in each new layer, the channels as the number of output filters in each layer, and the filter_sizes as the size of the kernels used for convolution. We'll default to using a stride of two, which in the decoder will upsample each layer (the transpose of the encoder's stride-two downsampling). We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network. End of explanation """ # Explore these parameters. def generator(Z, dimensions=[n_pixels//8, n_pixels//4, n_pixels//2, n_pixels], channels=[50, 50, 50, n_channels], filter_sizes=[4, 4, 4, 4], activation=utils.lrelu): with tf.variable_scope('generator'): G, Hs = decoder(tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code]), dimensions, channels, filter_sizes, activation) return G """ Explanation: <a name="building-the-generator"></a> Building the Generator Now we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. if we fed the discriminator a [None, 64, 64, 3] input, our generator would need to also output [None, 64, 64, 3] tensors. In other words, we have to ensure the last element in our dimensions list is n_pixels, and the last element in our channels list is n_channels. End of explanation """ G = generator(Z) graph = tf.get_default_graph() nb_utils.show_graph(graph.as_graph_def()) """ Explanation: Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image. End of explanation """ D_fake = discriminator(G, reuse=True) """ Explanation: <a name="building-the-discriminator-for-the-generated-samples"></a> Building the Discriminator for the Generated Samples Lastly, we need another discriminator which takes as input our generated images.
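Before moving on, here is a standalone check of the generator's shape bookkeeping (plain Python using the default values above; purely illustrative): each stride-2 deconvolution layer doubles the spatial size, so the dimensions list must walk up from n_pixels // 16 to n_pixels, and the final channels entry must match the discriminator's input depth.

```python
n_pixels, n_channels = 32, 3
dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels]
channels = [50, 50, 50, n_channels]

# Starting from the reshaped latent tensor (n_pixels // 16 per side),
# each stride-2 layer should exactly double the previous height/width:
size = n_pixels // 16
for d in dimensions:
    assert d == size * 2, (d, size)
    size = d

print(dimensions, channels[-1])  # [4, 8, 16, 32] 3
```

If any entry in dimensions failed to double the previous one, the deconvolution schedule would no longer line up with four stride-2 layers, and the generator's output would not match the discriminator's input shape.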
Recall that the discriminator we made only takes as input our placeholder X, which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We're just creating a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we created for our first discriminator, but they will use the exact same variables as our first discriminator, so that we optimize the same values. End of explanation """ nb_utils.show_graph(graph.as_graph_def()) """ Explanation: Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator. End of explanation """ with tf.variable_scope('loss/generator'): loss_G = tf.reduce_mean(utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake))) """ Explanation: <a name="gan-loss-functions"></a> GAN Loss Functions We now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pin them against each other and see which one wins! Exciting times! Recall from Session 3's Supervised Network, we created a binary classification task: music or speech. We again have a binary classification task: real or fake. So our loss metric will again use the binary cross entropy to measure the loss of our three different modules: the generator, the discriminator for our real images, and the discriminator for our generated images.
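Here is a standalone NumPy illustration of those three measures (this uses the textbook binary cross entropy formula, not the course's utils implementation, and the example prediction values are made up):

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    # Binary cross entropy: -(t*log(p) + (1-t)*log(1-p))
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred))

D_real = np.array([0.9, 0.8])   # discriminator's outputs on real images
D_fake = np.array([0.1, 0.3])   # discriminator's outputs on generated images

loss_G      = np.mean(bce(D_fake, np.ones_like(D_fake)))   # generator wants fakes called real
loss_D_real = np.mean(bce(D_real, np.ones_like(D_real)))   # discriminator wants reals called real
loss_D_fake = np.mean(bce(D_fake, np.zeros_like(D_fake)))  # ...and fakes called fake
loss_D = (loss_D_real + loss_D_fake) / 2

print(loss_G > loss_D)  # True: with these made-up outputs the discriminator is "winning"
```

Note that a chance-level prediction of 0.5 gives bce(0.5, 1) ≈ 0.693, which is exactly the equilibrium value discussed later in this notebook.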
To find out the loss function for our generator network, answer the question, what makes the generator successful? Successfully fooling the discriminator. When does that happen? When the discriminator for the fake samples produces all ones. So our binary cross entropy measure will measure the cross entropy with our predicted distribution and the true distribution which has all ones. End of explanation """ with tf.variable_scope('loss/discriminator/real'): loss_D_real = utils.binary_cross_entropy(D_real, ... with tf.variable_scope('loss/discriminator/fake'): loss_D_fake = utils.binary_cross_entropy(D_fake, ... with tf.variable_scope('loss/discriminator'): loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2) nb_utils.show_graph(graph.as_graph_def()) """ Explanation: What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # Grab just the variables corresponding to the discriminator # and just the generator: vars_d = [v for v in tf.trainable_variables() if ...] print('Training discriminator variables:') [print(v.name) for v in tf.trainable_variables() if v.name.startswith('discriminator')] vars_g = [v for v in tf.trainable_variables() if ...] 
print('Training generator variables:') [print(v.name) for v in tf.trainable_variables() if v.name.startswith('generator')] """ Explanation: With our loss functions, we can create an optimizer for the discriminator and generator: <a name="building-the-optimizers-w-regularization"></a> Building the Optimizers w/ Regularization We're almost ready to create our optimizers. We just need to do one extra thing. Recall that our loss for our generator has a flow from the generator through the discriminator. If we are training both the generator and the discriminator, we have two measures which both try to optimize the discriminator, but in opposite ways: the generator's loss would try to optimize the discriminator to be bad at its job, and the discriminator's loss would try to optimize it to be good at its job. This would be counter-productive, trying to optimize opposing losses. What we want is for the generator to get better, and the discriminator to get better. Not for the discriminator to get better, then get worse, then get better, etc... The way we do this is when we optimize our generator, we let the gradient flow through the discriminator, but we do not update the variables in the discriminator. Let's try and grab just the discriminator variables and just the generator variables below: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ d_reg = tf.contrib.layers.apply_regularization( tf.contrib.layers.l2_regularizer(1e-6), vars_d) g_reg = tf.contrib.layers.apply_regularization( tf.contrib.layers.l2_regularizer(1e-6), vars_g) """ Explanation: We can also apply regularization to our network. This will penalize weights in the network for growing too large. 
End of explanation """ learning_rate = 0.0001 lr_g = tf.placeholder(tf.float32, shape=[], name='learning_rate_g') lr_d = tf.placeholder(tf.float32, shape=[], name='learning_rate_d') """ Explanation: The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so: End of explanation """ opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(...) opt_d = tf.train.AdamOptimizer(learning_rate=lr_d).minimize(loss_D + d_reg, var_list=vars_d) """ Explanation: Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders. With your optimizers, you can now train the network by "running" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator and same for the generator's optimizer: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # You'll want to change this to your own data if you end up training your own GAN. batch_size = 64 n_epochs = 1 crop_shape = [n_pixels, n_pixels, 3] crop_factor = 0.8 input_shape = [218, 178, 3] files = datasets.CELEB() batch = dataset_utils.create_input_pipeline( files=files, batch_size=batch_size, n_epochs=n_epochs, crop_shape=crop_shape, crop_factor=crop_factor, shape=input_shape) """ Explanation: <a name="loading-a-dataset"></a> Loading a Dataset Let's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as we did in Session 3's homework with the VAE. <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> End of explanation """ ckpt_name = 'gan.ckpt' sess = tf.Session() saver = tf.train.Saver() sess.run(tf.initialize_all_variables()) coord = tf.train.Coordinator() tf.get_default_graph().finalize() threads = tf.train.start_queue_runners(sess=sess, coord=coord) if os.path.exists(ckpt_name): saver.restore(sess, ckpt_name) print("GAN model restored.") n_examples = 10 zs = np.random.uniform(0.0, 1.0, [4, n_latent]).astype(np.float32) zs = utils.make_latent_manifold(zs, n_examples) """ Explanation: <a name="training"></a> Training We'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train. End of explanation """ equilibrium = 0.693 margin = 0.2 """ Explanation: <a name="equilibrium"></a> Equilibrium Equilibrium is at 0.693. Why? Consider what the cost is measuring, the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s. And on average, we'll be 50% correct. The binary cross entropy is: \begin{align} -\sum_i \left( \text{X}_i * \text{log}(\tilde{\text{X}}_i) + (1 - \text{X}_i) * \text{log}(1 - \tilde{\text{X}}_i) \right) \end{align} Which is written out in tensorflow as: python (-(x * tf.log(z) + (1. - x) * tf.log(1. - z))) Where x is the true distribution (in the case of GANs, all 1s or all 0s, depending on which discriminator loss we are measuring), and z is the discriminator's prediction, corresponding to the mathematical notation of $\tilde{\text{X}}$. We sum over all features, but in the case of the discriminator, we have just 1 feature, the guess of whether it is a true image or not. If our discriminator guesses at chance, i.e.
0.5, then we'd have something like: \begin{align} 0.5 * \text{log}(0.5) + (1 - 0.5) * \text{log}(1 - 0.5) = -0.693 \end{align} So this is what we'd expect at the start of learning and from a game theoretic point of view, where we want things to remain. So unlike our previous networks, where our loss continues to drop closer and closer to 0, we want our loss to waver around this value as much as possible, and hope for the best. End of explanation """ t_i = 0 batch_i = 0 epoch_i = 0 n_files = len(files) while epoch_i < n_epochs: batch_i += 1 batch_xs = sess.run(batch) / 255.0 batch_zs = np.random.uniform( 0.0, 1.0, [batch_size, n_latent]).astype(np.float32) real_cost, fake_cost = sess.run([ loss_D_real, loss_D_fake], feed_dict={ X: batch_xs, Z: batch_zs}) real_cost = np.mean(real_cost) fake_cost = np.mean(fake_cost) if (batch_i % 20) == 0: print(batch_i, 'real:', real_cost, '/ fake:', fake_cost) gen_update = True dis_update = True if real_cost > (equilibrium + margin) or \ fake_cost > (equilibrium + margin): gen_update = False if real_cost < (equilibrium - margin) or \ fake_cost < (equilibrium - margin): dis_update = False if not (gen_update or dis_update): gen_update = True dis_update = True if gen_update: sess.run(opt_g, feed_dict={ Z: batch_zs, lr_g: learning_rate}) if dis_update: sess.run(opt_d, feed_dict={ X: batch_xs, Z: batch_zs, lr_d: learning_rate}) if batch_i % (n_files // batch_size) == 0: batch_i = 0 epoch_i += 1 print('---------- EPOCH:', epoch_i) # Plot example reconstructions from latent layer recon = sess.run(G, feed_dict={Z: zs}) recon = np.clip(recon, 0, 1) m1 = utils.montage(recon.reshape([-1] + crop_shape), 'imgs/manifold_%08d.png' % t_i) recon = sess.run(G, feed_dict={Z: batch_zs}) recon = np.clip(recon, 0, 1) m2 = utils.montage(recon.reshape([-1] + crop_shape), 'imgs/reconstructions_%08d.png' % t_i) fig, axs = plt.subplots(1, 2, figsize=(15, 10)) axs[0].imshow(m1) axs[1].imshow(m2) plt.show() t_i += 1 # Save the variables to disk. 
save_path = saver.save(sess, "./" + ckpt_name, global_step=batch_i, write_meta_graph=False) print("Model saved in file: %s" % save_path) # Tell all the threads to shutdown. coord.request_stop() # Wait until all threads have finished. coord.join(threads) # Clean up the session. sess.close() """ Explanation: When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders. Don't train this network for very long because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or one network can overtake the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. You could easily spend two weeks just trying to get the GAN to train and not have enough time for the rest of the assignment. GANs require a lot of memory/CPU and can take many days to train once you have settled on an architecture/training process/dataset. Just let it run for a short time and then interrupt the kernel (don't restart!), then continue to the next cell. From there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the "real" and "fake" cost will be printed out. See how this cost wavers around the equilibrium and how we enforce it to try and stay around there by including a margin and some simple logic for updates. This is highly experimental and the research does not have a good answer for the best practice on how to train a GAN.
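The margin logic described above can be distilled into a small standalone helper (plain Python mirroring the conditions in the training loop, called here with hypothetical cost values):

```python
def choose_updates(real_cost, fake_cost, equilibrium=0.693, margin=0.2):
    """Decide which network(s) to update, mirroring the training loop:
    pause the generator while the discriminator's costs are too high
    (it is losing), pause the discriminator while its costs are too low
    (it is winning), and update both if neither would be updated."""
    gen_update = dis_update = True
    if real_cost > (equilibrium + margin) or fake_cost > (equilibrium + margin):
        gen_update = False
    if real_cost < (equilibrium - margin) or fake_cost < (equilibrium - margin):
        dis_update = False
    if not (gen_update or dis_update):
        gen_update = dis_update = True
    return gen_update, dis_update

print(choose_updates(0.7, 0.7))   # near equilibrium: (True, True)
print(choose_updates(1.2, 0.7))   # discriminator struggling: (False, True)
print(choose_updates(0.3, 0.4))   # discriminator too good: (True, False)
```

This is one heuristic among many for keeping the two players near the 0.693 equilibrium; it is not a best practice, just the scheme used in the loop above.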
For instance, some people set the learning rate to some ratio of the performance between the fake/real networks; others use a fixed update schedule but train the generator twice for every discriminator update. End of explanation """ tf.reset_default_graph() """ Explanation: <a name="part-2---variational-auto-encoding-generative-adversarial-network-vaegan"></a> Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN) In our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to randomly create its n_latent values using a random number generator, over a range chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image, and find the feature vector that created it. In other words, it was not possible for us to encode an image. The closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN: https://arxiv.org/abs/1512.09300. It is just like the regular GAN, except we also use an encoder to create our feature vector Z. We then get the best of both worlds: a GAN that looks more or less the same, but uses the encoding from an encoder instead of an arbitrary feature vector; and an autoencoder that can model an input distribution using a trained distance function, the discriminator, leading to nicer encodings/decodings. Let's try to build it! Refer to the paper for the intricacies; it is a great read. Luckily, by building the encoder and decoder functions, we're almost there.
We just need a few more components and will change these slightly. Let's reset our graph and recompose our network as a VAEGAN:
End of explanation
"""
# placeholder for batch normalization
is_training = tf.placeholder(tf.bool, name='istraining')
"""
Explanation: <a name="batch-normalization"></a>
Batch Normalization
You may have noticed from the VAE code that I've used something called "batch normalization". This is a pretty effective technique for regularizing the training of networks by "reducing internal covariate shift". The basic idea is that given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1. In this case, we overfit our gradient to optimize the gradient of the single observation. If our minibatch is too large, say the size of the entire population, we aren't able to maneuver the loss manifold at all and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation), and a very peaky loss surface (i.e. a single observation). Up until now we only used mini-batches to help with this. But we can also approach it by "smoothing" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.
In order for us to use batch normalization, we need another placeholder which is a simple boolean: True or False, denoting when we are training. We'll use this placeholder to conditionally update batch normalization's statistics required for normalizing our minibatches. Let's create the placeholder and then I'll get into how to use this.
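To make the mechanics concrete before using the course's implementation, here is a minimal numpy sketch of what batch normalization computes — purely illustrative, not the libs/batch_norm code (which additionally has learned scale and shift parameters):

```python
import numpy as np

def batch_norm_sketch(x, running_mean, running_var,
                      is_training, decay=0.9, eps=1e-5):
    """Normalize a minibatch `x` of shape [batch, features].

    While training, normalize with the minibatch's own statistics and
    low-pass filter them into the running averages; at inference time,
    use the smoothed running statistics instead.
    """
    if is_training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # Exponential moving average: the "low-pass filter" on statistics.
        running_mean = decay * running_mean + (1 - decay) * mean
        running_var = decay * running_var + (1 - decay) * var
    else:
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return x_hat, running_mean, running_var

x = np.random.RandomState(0).randn(8, 3) * 5.0 + 2.0
x_hat, rm, rv = batch_norm_sketch(x, np.zeros(3), np.ones(3), is_training=True)
# x_hat now has (approximately) zero mean and unit variance per feature.
```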
End of explanation
"""
from libs.batch_norm import batch_norm
help(batch_norm)
"""
Explanation: The original paper that introduced the idea suggests using batch normalization "pre-activation", meaning after the weight multiplication or convolution, and before the nonlinearity. We can use the libs/batch_norm module to apply batch normalization to any input tensor given the tensor and the placeholder defining whether or not we are training. Let's use this module and you can inspect the code inside the module in your own time if it interests you.
End of explanation
"""
def encoder(x, is_training, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
    # Set the input to a common variable name, h, for hidden layer
    h = x

    print('encoder/input:', h.get_shape().as_list())
    # Now we'll loop over the list of dimensions defining the number
    # of output filters in each layer, and collect each hidden layer
    hs = []
    for layer_i in range(len(channels)):

        with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):

            # Convolve using the utility convolution function
            # This requires the number of output filters,
            # and the size of the kernel in `k_h` and `k_w`.
            # By default, this will use a stride of 2, meaning
            # each new layer will be downsampled by 2.
            h, W = utils.conv2d(h, channels[layer_i],
                                k_h=filter_sizes[layer_i],
                                k_w=filter_sizes[layer_i],
                                d_h=2, d_w=2,
                                reuse=reuse)

            h = batch_norm(h, is_training)

            # Now apply the activation function
            h = activation(h)
            print('layer:', layer_i, ', shape:', h.get_shape().as_list())

            # Store each hidden layer
            hs.append(h)

    # Finally, return the encoding.
    return h, hs
"""
Explanation: Note that Tensorflow also includes numerous batch normalization implementations now that it did not include at the time of filming (Tensorflow is evolving very quickly)! These exist in tf.contrib.layers.batch_norm, tf.contrib.learn.ops.batch_norm, and tf.contrib.slim.batch_norm.
They work slightly differently to the libs/batch_norm.py implementation in that they take a boolean for whether or not you are training, rather than a tf.Placeholder. This requires you to reconstruct the network when you are training/inferring, or create two networks, which is preferable for "deploying" a model. For instance, if you have trained a model and you want to hand it out, you don't necessarily want the batch norm operations for training the network in there. For the libraries in this course, we'll be using the libs/batch_norm implementation which means you will have to use feed_dict to denote when you are training or not. <a name="building-the-encoder-1"></a> Building the Encoder We can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied: End of explanation """ n_pixels = 64 n_channels = 3 input_shape = [None, n_pixels, n_pixels, n_channels] # placeholder for the input to the network X = tf.placeholder(...) """ Explanation: Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ channels = [64, 64, 64] filter_sizes = [5, 5, 5] activation = tf.nn.elu n_hidden = 128 with tf.variable_scope('encoder'): H, Hs = encoder(... Z = utils.linear(H, n_hidden)[0] """ Explanation: And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> End of explanation """ def variational_bayes(h, n_code): # Model mu and log(\sigma) z_mu = tf.nn.tanh(utils.linear(h, n_code, name='mu')[0]) z_log_sigma = 0.5 * tf.nn.tanh(utils.linear(h, n_code, name='log_sigma')[0]) # Sample from noise distribution p(eps) ~ N(0, 1) epsilon = tf.random_normal(tf.pack([tf.shape(h)[0], n_code])) # Sample from posterior z = z_mu + tf.mul(epsilon, tf.exp(z_log_sigma)) # Measure loss loss_z = -0.5 * tf.reduce_sum( 1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma), 1) return z, z_mu, z_log_sigma, loss_z """ Explanation: <a name="building-the-variational-layer"></a> Building the Variational Layer In Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational bayesian approach requires a richer understanding of probabilistic graphical models and bayesian methods which we we're not able to go over in this course (it requires a few courses all by itself!). For that reason, please treat this as a "black box" in this course. For those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using KL-Divergence, or $KL(Q(Z) || P(X))$. The distribution Q(Z) is the variational distribution, and attempts to model the lower-bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is the encoder of the network is trying to model the parameters of a known distribution, the Gaussian Distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. 
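In plain numpy, the two essential pieces of the variational_bayes function above — the reparameterized sample and the KL-divergence penalty — look like this (an illustration only; the tanh squashing used by the TF version is omitted here):

```python
import numpy as np

def reparameterize(z_mu, z_log_sigma, rng=np.random):
    # Sample z = mu + eps * sigma with eps ~ N(0, 1): the
    # "reparameterization trick" keeps the sampling differentiable.
    epsilon = rng.standard_normal(z_mu.shape)
    return z_mu + epsilon * np.exp(z_log_sigma)

def kl_divergence(z_mu, z_log_sigma):
    # KL(N(mu, sigma) || N(0, 1)), summed over the latent dimensions.
    return -0.5 * np.sum(
        1.0 + 2.0 * z_log_sigma - np.square(z_mu) - np.exp(2.0 * z_log_sigma),
        axis=1)

# When the posterior already matches the prior N(0, 1), the penalty is zero:
penalty = kl_divergence(np.zeros((1, 4)), np.zeros((1, 4)))
```

Any drift of the mean away from 0, or of the standard deviation away from 1, increases this penalty, which is what pulls the latent codes toward the prior.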
To learn more about this, I highly recommend picking up the book by Christopher Bishop called "Pattern Recognition and Machine Learning" and reading the original Kingma and Welling paper on Variational Bayes. Now back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z. End of explanation """ # Experiment w/ values between 2 - 100 # depending on how difficult the dataset is n_code = 32 with tf.variable_scope('encoder/variational'): Z, Z_mu, Z_log_sigma, loss_Z = variational_bayes(h=Z, n_code=n_code) """ Explanation: Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ def decoder(z, is_training, dimensions, channels, filter_sizes, activation=tf.nn.elu, reuse=None): h = z for layer_i in range(len(dimensions)): with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse): h, W = utils.deconv2d(x=h, n_output_h=dimensions[layer_i], n_output_w=dimensions[layer_i], n_output_ch=channels[layer_i], k_h=filter_sizes[layer_i], k_w=filter_sizes[layer_i], reuse=reuse) h = batch_norm(h, is_training) h = activation(h) return h """ Explanation: <a name="building-the-decoder-1"></a> Building the Decoder In the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. 
This is a simple fix: add a second argument for is_training and then apply batch normalization just after the deconv2d operation and just before the nonlinear activation. End of explanation """ dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels] channels = [30, 30, 30, n_channels] filter_sizes = [4, 4, 4, 4] activation = tf.nn.elu n_latent = n_code * (n_pixels // 16)**2 with tf.variable_scope('generator'): Z_decode = utils.linear( Z, n_output=n_latent, name='fc', activation=activation)[0] Z_decode_tensor = tf.reshape( Z_decode, [-1, n_pixels//16, n_pixels//16, n_code], name='reshape') G = decoder( Z_decode_tensor, is_training, dimensions, channels, filter_sizes, activation) """ Explanation: Now we'll build a decoder just like in Session 3, and just like our Generator network in Part 1. In Part 1, we created Z as a placeholder which we would have had to feed in as random values. However, now we have an explicit coding of an input image in X stored in Z by having created the encoder network. End of explanation """ def discriminator(X, is_training, channels=[50, 50, 50, 50], filter_sizes=[4, 4, 4, 4], activation=tf.nn.elu, reuse=None): # We'll scope these variables to "discriminator_real" with tf.variable_scope('discriminator', reuse=reuse): H, Hs = encoder( X, is_training, channels, filter_sizes, activation, reuse) shape = H.get_shape().as_list() H = tf.reshape( H, [-1, shape[1] * shape[2] * shape[3]]) D, W = utils.linear( x=H, n_output=1, activation=tf.nn.sigmoid, name='fc', reuse=reuse) return D, Hs """ Explanation: Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the correspond layer names. We're going to need these layers when building our costs. 
End of explanation """ D_real, Hs_real = discriminator(X, is_training) D_fake, Hs_fake = discriminator(G, is_training, reuse=True) """ Explanation: Recall the regular GAN and DCGAN required 2 discriminators: one for the generated samples in Z, and one for the input samples in X. We'll do the same thing here. One discriminator for the real input data, X, which the discriminator will try to predict as 1s, and another discriminator for the generated samples that go from X through the encoder to Z, and finally through the decoder to G. The discriminator will be trained to try and predict these as 0s, whereas the generator will be trained to try and predict these as 1s. End of explanation """ with tf.variable_scope('loss'): # Loss functions loss_D_llike = 0 for h_real, h_fake in zip(Hs_real, Hs_fake): loss_D_llike += tf.reduce_sum(tf.squared_difference( utils.flatten(h_fake), utils.flatten(h_real)), 1) eps = 1e-12 loss_real = tf.log(D_real + eps) loss_fake = tf.log(1 - D_fake + eps) loss_GAN = tf.reduce_sum(loss_real + loss_fake, 1) gamma = 0.75 loss_enc = tf.reduce_mean(loss_Z + loss_D_llike) loss_dec = tf.reduce_mean(gamma * loss_D_llike - loss_GAN) loss_dis = -tf.reduce_mean(loss_GAN) nb_utils.show_graph(tf.get_default_graph().as_graph_def()) """ Explanation: <a name="building-vaegan-loss-functions"></a> Building VAE/GAN Loss Functions Let's now see how we can compose our loss. We have 3 losses for our discriminator. Along with measuring the binary cross entropy between each of them, we're going to also measure each layer's loss from our two discriminators using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more details in the paper: https://arxiv.org/abs/1512.09300 - please refer to this paper for more details that are way beyond the scope of this course! 
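In numpy terms, the layer-wise log-likelihood term computed above is just a sum of squared distances between corresponding (flattened) discriminator activations for real and generated images — a sketch of the idea, not the graph code:

```python
import numpy as np

def feature_matching_loss(hs_real, hs_fake):
    """Sum of per-layer squared differences between the hidden
    activations the discriminator computed for real and fake images."""
    loss = 0.0
    for h_real, h_fake in zip(hs_real, hs_fake):
        diff = (h_real.reshape(h_real.shape[0], -1) -
                h_fake.reshape(h_fake.shape[0], -1))
        loss = loss + np.sum(np.square(diff), axis=1)
    return loss

hs_real = [np.ones((2, 4, 4, 3)), np.ones((2, 2, 2, 8))]
hs_fake = [np.zeros((2, 4, 4, 3)), np.ones((2, 2, 2, 8))]
# First layer differs by 1 at each of 4*4*3 = 48 positions; second matches,
# so each of the 2 samples accumulates a loss of 48.
per_sample = feature_matching_loss(hs_real, hs_fake)
```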
One parameter within this to pay attention to is gamma, which the authors of the paper suggest control the weighting between content and style, just like in Session 4's Style Net implementation. End of explanation """ learning_rate = 0.0001 opt_enc = tf.train.AdamOptimizer( learning_rate=learning_rate).minimize( loss_enc, var_list=[var_i for var_i in tf.trainable_variables() if ...]) opt_gen = tf.train.AdamOptimizer( learning_rate=learning_rate).minimize( loss_dec, var_list=[var_i for var_i in tf.trainable_variables() if ...]) opt_dis = tf.train.AdamOptimizer( learning_rate=learning_rate).minimize( loss_dis, var_list=[var_i for var_i in tf.trainable_variables() if var_i.name.startswith('discriminator')]) """ Explanation: <a name="creating-the-optimizers"></a> Creating the Optimizers We now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network: the encoder's optimizer will only update the encoder variables, the generator's optimizer will only update the generator variables, and the discriminator's optimizer will only update the discriminator variables. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ from libs import datasets, dataset_utils batch_size = 64 n_epochs = 100 crop_shape = [n_pixels, n_pixels, n_channels] crop_factor = 0.8 input_shape = [218, 178, 3] # Try w/ CELEB first to make sure it works, then explore w/ your own dataset. files = datasets.CELEB() batch = dataset_utils.create_input_pipeline( files=files, batch_size=batch_size, n_epochs=n_epochs, crop_shape=crop_shape, crop_factor=crop_factor, shape=input_shape) """ Explanation: <a name="loading-the-dataset"></a> Loading the Dataset We'll now load our dataset just like in Part 1. Here is where you should explore with your own data! <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
n_samples = 10
zs = np.random.uniform(
    -1.0, 1.0, [4, n_code]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_samples)
"""
Explanation: We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a "hyper-plane" and show the decoding of 10 x 10 points on that hyperplane.
End of explanation
"""
# We create a session to use the graph
sess = tf.Session()
init_op = tf.initialize_all_variables()
saver = tf.train.Saver()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
sess.run(init_op)
"""
Explanation: Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners:
End of explanation
"""
if os.path.exists("vaegan.ckpt"):
    saver.restore(sess, "vaegan.ckpt")
    print("GAN model restored.")
"""
Explanation: Load an existing checkpoint if it exists to continue training.
End of explanation
"""
n_files = len(files)
test_xs = sess.run(batch) / 255.0
if not os.path.exists('imgs'):
    os.mkdir('imgs')
m = utils.montage(test_xs, 'imgs/test_xs.png')
plt.imshow(m)
"""
Explanation: We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing:
End of explanation
"""
t_i = 0
batch_i = 0
epoch_i = 0
"""
Explanation: <a name="training-1"></a>
Training
Almost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number.
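One aside before the loop: the latent manifold zs created above comes from utils.make_latent_manifold in the course libs. To picture what such a function plausibly does, here is a hedged numpy sketch that bilinearly interpolates a grid of codes between 4 corner vectors (the function and its name are my own illustration, not the libs implementation):

```python
import numpy as np

def latent_manifold_sketch(corners, n):
    """corners: array of 4 latent vectors (top-left, top-right,
    bottom-left, bottom-right); returns an [n*n, dim] grid of codes."""
    tl, tr, bl, br = corners
    grid = []
    for i in np.linspace(0.0, 1.0, n):      # rows
        left = (1 - i) * tl + i * bl
        right = (1 - i) * tr + i * br
        for j in np.linspace(0.0, 1.0, n):  # columns
            grid.append((1 - j) * left + j * right)
    return np.array(grid)

corners = np.random.RandomState(0).uniform(-1, 1, (4, 32))
zs_grid = latent_manifold_sketch(corners, 10)
# 100 codes; the grid's corners coincide with the 4 input vectors.
```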
End of explanation
"""
equilibrium = 0.693
margin = 0.4
"""
Explanation: Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds.
End of explanation
"""
while epoch_i < n_epochs:

    if batch_i % (n_files // batch_size) == 0:
        batch_i = 0
        epoch_i += 1
        print('---------- EPOCH:', epoch_i)

    batch_i += 1
    batch_xs = sess.run(batch) / 255.0
    real_cost, fake_cost, _ = sess.run([
        loss_real, loss_fake, opt_enc],
        feed_dict={
            X: batch_xs,
            is_training: True})
    real_cost = -np.mean(real_cost)
    fake_cost = -np.mean(fake_cost)

    gen_update = True
    dis_update = True
    if real_cost > (equilibrium + margin) or \
       fake_cost > (equilibrium + margin):
        gen_update = False

    if real_cost < (equilibrium - margin) or \
       fake_cost < (equilibrium - margin):
        dis_update = False

    if not (gen_update or dis_update):
        gen_update = True
        dis_update = True

    if gen_update:
        sess.run(opt_gen, feed_dict={
            X: batch_xs,
            is_training: True})
    if dis_update:
        sess.run(opt_dis, feed_dict={
            X: batch_xs,
            is_training: True})

    if batch_i % 50 == 0:
        print('real:', real_cost, '/ fake:', fake_cost)

        # Plot example reconstructions from latent layer
        recon = sess.run(G, feed_dict={
                Z: zs,
                is_training: False})
        recon = np.clip(recon, 0, 1)
        m1 = utils.montage(recon.reshape([-1] + crop_shape),
                'imgs/manifold_%08d.png' % t_i)

        # Plot example reconstructions
        recon = sess.run(G, feed_dict={
                X: test_xs,
                is_training: False})
        recon = np.clip(recon, 0, 1)
        m2 = utils.montage(recon.reshape([-1] + crop_shape),
                'imgs/reconstruction_%08d.png' % t_i)

        fig, axs = plt.subplots(1, 2, figsize=(15, 10))
        axs[0].imshow(m1)
        axs[1].imshow(m2)
        plt.show()
        t_i += 1

    if batch_i % 200 == 0:
        # Save the variables to disk.
        save_path = saver.save(sess, "./" + ckpt_name,
                               global_step=batch_i,
                               write_meta_graph=False)
        print("Model saved in file: %s" % save_path)
# Training is done. Tell all the threads to shut down.
coord.request_stop()

# Wait until all threads have finished.
coord.join(threads)

# Clean up the session.
sess.close()
"""
Explanation: Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the performance of the real/fake costs, we'll then update the generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks using as input a 100 x 100 image, and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU.
End of explanation
"""
tf.reset_default_graph()

from libs import celeb_vaegan as CV
net = CV.get_celeb_vaegan_model()
"""
Explanation: <a name="part-3---latent-space-arithmetic"></a>
Part 3 - Latent-Space Arithmetic
<a name="loading-the-pre-trained-model"></a>
Loading the Pre-Trained Model
We're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model:
End of explanation
"""
sess = tf.Session()
g = tf.get_default_graph()
tf.import_graph_def(net['graph_def'], name='net', input_map={
        'encoder/variational/random_normal:0': np.zeros(512, dtype=np.float32)})
names = [op.name for op in g.get_operations()]
print(names)
"""
Explanation: We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks. It is a dictionary with the key graph_def defined, with the graph's pretrained network. It also includes labels and a preprocess key. We'll have to do one additional thing which is to turn off the random sampling from the variational layer.
This isn't really necessary but will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes). End of explanation """ X = g.get_tensor_by_name('net/x:0') Z = g.get_tensor_by_name('net/encoder/variational/z:0') G = g.get_tensor_by_name('net/generator/x_tilde:0') """ Explanation: Now let's get the relevant parts of the network: X, the input image to the network, Z, the input image's encoding, and G, the decoded image. In many ways, this is just like the Autoencoders we learned about in Session 3, except instead of Y being the output, we have G from our generator! And the way we train it is very different: we use an adversarial process between the generator and discriminator, and use the discriminator's own distance measure to help train the network, rather than pixel-to-pixel differences. End of explanation """ files = datasets.CELEB() img_i = 50 img = plt.imread(files[img_i]) plt.imshow(img) """ Explanation: Let's get some data to play with: End of explanation """ p = CV.preprocess(img) synth = sess.run(G, feed_dict={X: p[np.newaxis]}) fig, axs = plt.subplots(1, 2, figsize=(10, 5)) axs[0].imshow(p) axs[1].imshow(synth[0] / synth.max()) """ Explanation: Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding). End of explanation """ net.keys() len(net['labels']) net['labels'] """ Explanation: So we lost a lot of details but it seems to be able to express quite a bit about the image. 
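The compression at work here is easy to quantify — each 64 x 64 x 3 image holds 12,288 values, and the network squeezes it into a 512-value code:

```python
n_values_per_image = 64 * 64 * 3   # 12288 input values
n_latent_values = 512              # size of the code Z
compression = n_values_per_image / n_latent_values
print(compression)  # 24.0
```

So the code is 24 times smaller than the image it has to describe, which explains the loss of fine detail.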
Our innermost layer, Z, is only 512 values, yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express nearly 2.3 GB of information with only 512 values! Having some loss of detail is certainly expected!
<a name="exploring-the-celeb-net-attributes"></a>
Exploring the Celeb Net Attributes
Let's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary:
End of explanation
"""
plt.imshow(img)
[net['labels'][i] for i, attr_i in enumerate(net['attributes'][img_i]) if attr_i]
"""
Explanation: Let's see what attributes exist for one of the celeb images:
End of explanation
"""
Z.get_shape()
"""
Explanation: <a name="find-the-latent-encoding-for-an-attribute"></a>
Find the Latent Encoding for an Attribute
The Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. "smiling". We store what their encoding is and retain this distribution of encoded values. We can then look at any other image and see how it is encoded, and slightly change the encoding by adding the encoding of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First let's inspect our latent space:
End of explanation
"""
bald_label = net['labels'].index('Bald')
bald_label
"""
Explanation: We have 512 features that we can encode any image with.
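The recipe the rest of this section follows can be sketched with stand-in encodings — average the codes of images that have an attribute, subtract the average of those that don't, and add the resulting direction to any other code (random arrays stand in for real encodings here, purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
# Pretend encodings: 100 images with the attribute, 100 without,
# shifted apart so the attribute direction is recoverable.
z_with = rng.randn(100, 512) + 0.5
z_without = rng.randn(100, 512) - 0.5

# The attribute direction is the difference of the two means.
attr_vector = z_with.mean(axis=0) - z_without.mean(axis=0)

# "Add more of the attribute" to some other encoding:
z = rng.randn(512)
z_more = z + 1.0 * attr_vector
```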
Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute: End of explanation """ bald_img_idxs = np.where(net['attributes'][:, bald_label])[0] bald_img_idxs """ Explanation: Let's get all the bald image indexes: End of explanation """ bald_imgs = [plt.imread(files[bald_img_i])[..., :3] for bald_img_i in bald_img_idxs[:100]] """ Explanation: Now let's just load 100 of their images: End of explanation """ plt.imshow(np.mean(bald_imgs, 0).astype(np.uint8)) """ Explanation: Let's see if the mean image looks like a good bald person or not: End of explanation """ bald_p = np.array([CV.preprocess(bald_img_i) for bald_img_i in bald_imgs]) """ Explanation: Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the "bald" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness. Let's first preprocess the images: End of explanation """ bald_zs = sess.run(Z, feed_dict=... """ Explanation: Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ bald_feature = np.mean(bald_zs, 0, keepdims=True) bald_feature.shape """ Explanation: Now let's calculate the mean encoding: End of explanation """ bald_generated = sess.run(G, feed_dict=... plt.imshow(bald_generated[0] / bald_generated.max()) """ Explanation: Let's try and synthesize from the mean bald feature now and see how it looks: <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> End of explanation """ def get_features_for(label='Bald', has_label=True, n_imgs=50): label_i = net['labels'].index(label) label_idxs = np.where(net['attributes'][:, label_i] == has_label)[0] label_idxs = np.random.permutation(label_idxs)[:n_imgs] imgs = [plt.imread(files[img_i])[..., :3] for img_i in label_idxs] preprocessed = np.array([CV.preprocess(img_i) for img_i in imgs]) zs = sess.run(Z, feed_dict={X: preprocessed}) return np.mean(zs, 0) """ Explanation: <a name="latent-feature-arithmetic"></a> Latent Feature Arithmetic Let's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have... End of explanation """ # Explore different attributes z1 = get_features_for('Male', True, n_imgs=10) z2 = get_features_for('Male', False, n_imgs=10) z3 = get_features_for('Smiling', True, n_imgs=10) z4 = get_features_for('Smiling', False, n_imgs=10) b1 = sess.run(G, feed_dict={Z: z1[np.newaxis]}) b2 = sess.run(G, feed_dict={Z: z2[np.newaxis]}) b3 = sess.run(G, feed_dict={Z: z3[np.newaxis]}) b4 = sess.run(G, feed_dict={Z: z4[np.newaxis]}) fig, axs = plt.subplots(1, 4, figsize=(15, 6)) axs[0].imshow(b1[0] / b1.max()), axs[0].set_title('Male'), axs[0].grid('off'), axs[0].axis('off') axs[1].imshow(b2[0] / b2.max()), axs[1].set_title('Not Male'), axs[1].grid('off'), axs[1].axis('off') axs[2].imshow(b3[0] / b3.max()), axs[2].set_title('Smiling'), axs[2].grid('off'), axs[2].axis('off') axs[3].imshow(b4[0] / b4.max()), axs[3].set_title('Not Smiling'), axs[3].grid('off'), axs[3].axis('off') """ Explanation: Let's try getting some attributes positive and negative features. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values? <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3> End of explanation """ notmale_vector = z2 - z1 n_imgs = 5 amt = np.linspace(0, 1, n_imgs) zs = np.array([z1 + notmale_vector*amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: Now let's interpolate between the "Male" and "Not Male" categories: End of explanation """ smiling_vector = z3 - z4 amt = np.linspace(0, 1, n_imgs) zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1)) ax_i.grid('off') """ Explanation: And the same for smiling: End of explanation """ n_imgs = 5 amt = np.linspace(-1.5, 2.5, n_imgs) zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space. End of explanation """ def slerp(val, low, high): """Spherical interpolation. 
val has a range of 0 to 1.""" if val <= 0: return low elif val >= 1: return high omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high))) so = np.sin(omega) return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high amt = np.linspace(0, 1, n_imgs) zs = np.array([slerp(amt_i, z1, z2) for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: <a name="extensions"></a> Extensions Tom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint: https://arxiv.org/abs/1609.04468 - as well, be sure to check out his twitter bot, https://twitter.com/smilevector - which adds smiles to people :) - Note that the network we're using is only trained on aligned faces that are frontally facing, though this twitter bot is capable of adding smiles to any face. I suspect that he is running a face detection algorithm such as AAM, CLM, or ASM, cropping the face, aligning it, and then running a similar algorithm to what we've done above. Or else, perhaps he has trained a new model on faces that are not aligned. In any case, it is well worth checking out! Let's now try and use sinusoidal interpolation using his implementation in plat which I've copied below: End of explanation """ img = plt.imread('parag.png')[..., :3] img = CV.preprocess(img, crop_factor=1.0)[np.newaxis] """ Explanation: It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways. Let's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. 
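Before moving on, a quick numerical check of why spherical interpolation can be preferable for codes that live near a hypersphere: the midpoint of a straight line between two high-dimensional unit vectors has a much smaller norm, while the spherical midpoint stays on the sphere (slerp is redefined here so the snippet is self-contained):

```python
import numpy as np

def slerp(val, low, high):
    """Spherical interpolation, val in [0, 1]."""
    omega = np.arccos(np.dot(low / np.linalg.norm(low),
                             high / np.linalg.norm(high)))
    so = np.sin(omega)
    return np.sin((1.0 - val) * omega) / so * low + \
        np.sin(val * omega) / so * high

rng = np.random.RandomState(0)
a = rng.randn(512); a /= np.linalg.norm(a)
b = rng.randn(512); b /= np.linalg.norm(b)

lerp_mid = 0.5 * a + 0.5 * b
slerp_mid = slerp(0.5, a, b)
print(np.linalg.norm(lerp_mid))   # ~0.7: falls inside the sphere
print(np.linalg.norm(slerp_mid))  # ~1.0: stays on it
```

Random high-dimensional vectors are nearly orthogonal, so linear interpolation passes through a region of the latent space with unusually small magnitude, which slerp avoids.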
Unfortunately, we don't have access to the algorithm they used to "align" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face... End of explanation """ img_ = sess.run(G, feed_dict={X: img}) fig, axs = plt.subplots(1, 2, figsize=(10, 5)) axs[0].imshow(img[0]), axs[0].grid('off') axs[1].imshow(np.clip(img_[0] / np.max(img_), 0, 1)), axs[1].grid('off') """ Explanation: Let's see how the network encodes it: End of explanation """ z1 = get_features_for('Blurry', True, n_imgs=25) z2 = get_features_for('Blurry', False, n_imgs=25) unblur_vector = z2 - z1 z = sess.run(Z, feed_dict={X: img}) n_imgs = 5 amt = np.linspace(0, 1, n_imgs) zs = np.array([z[0] + unblur_vector * amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: Notice how blurry the image is. 
Tom White's preprint suggests one way to sharpen the image is to find the "Blurry" attribute vector: End of explanation """ from scipy.ndimage import gaussian_filter idxs = np.random.permutation(range(len(files))) imgs = [plt.imread(files[idx_i]) for idx_i in idxs[:100]] blurred = [] for img_i in imgs: img_copy = np.zeros_like(img_i) for ch_i in range(3): img_copy[..., ch_i] = gaussian_filter(img_i[..., ch_i], sigma=3.0) blurred.append(img_copy) # Now let's preprocess the original images and the blurred ones imgs_p = np.array([CV.preprocess(img_i) for img_i in imgs]) blur_p = np.array([CV.preprocess(img_i) for img_i in blurred]) # And then compute each of their latent features noblur = sess.run(Z, feed_dict={X: imgs_p}) blur = sess.run(Z, feed_dict={X: blur_p}) synthetic_unblur_vector = np.mean(noblur - blur, 0) n_imgs = 5 amt = np.linspace(0, 1, n_imgs) zs = np.array([z[0] + synthetic_unblur_vector * amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: Notice that the image also gets brighter, and perhaps other features besides the blurriness of the image change. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due to biases in labeling or in how photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. 
We can try it like so: End of explanation """ z1 = get_features_for('Eyeglasses', True) z2 = get_features_for('Eyeglasses', False) glass_vector = z1 - z2 z = sess.run(Z, feed_dict={X: img}) n_imgs = 5 amt = np.linspace(0, 1, n_imgs) zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: For some reason, it also doesn't like my glasses very much. Let's try and add them back. End of explanation """ n_imgs = 5 amt = np.linspace(0, 1.0, n_imgs) zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i + amt_i * smiling_vector for amt_i in amt]) g = sess.run(G, feed_dict={Z: zs}) fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4)) for i, ax_i in enumerate(axs): ax_i.imshow(np.clip(g[i], 0, 1)) ax_i.grid('off') ax_i.axis('off') """ Explanation: Well, more like sunglasses then. Let's try adding everything in there now! End of explanation """ n_imgs = 5 amt = np.linspace(0, 1.5, n_imgs) z = sess.run(Z, feed_dict={X: imgs_p}) imgs = [] for amt_i in amt: zs = z + synthetic_unblur_vector * amt_i + amt_i * smiling_vector g = sess.run(G, feed_dict={Z: zs}) m = utils.montage(np.clip(g, 0, 1)) imgs.append(m) gif.build_gif(imgs, saveto='celeb.gif') ipyd.Image(url='celeb.gif?i={}'.format( np.random.rand()), height=1000, width=1000) """ Explanation: Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result: End of explanation """ imgs = [] ... DO SOMETHING AWESOME ! ... gif.build_gif(imgs=imgs, saveto='vaegan.gif') """ Explanation: Exploring multiple feature vectors and applying them to images from the celeb dataset to produce animations of a face, saving it as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. 
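The frame-collection pattern described above can be sketched without the trained model. Here random arrays stand in for the generator's output, and direction is a made-up placeholder for something like the smiling or unblur vector:

```python
import numpy as np

# Stand-ins: a fake latent code and a fake attribute direction
rng = np.random.RandomState(0)
z = rng.randn(64)
direction = rng.randn(64)

frames = []
for amt in np.linspace(0, 1.5, 10):
    z_i = z + amt * direction                  # move through latent space
    frame = np.clip(z_i.reshape(8, 8), 0, 1)   # pretend this is a decoded image
    frames.append(frame)

# A call like gif.build_gif(frames, saveto='celeb.gif') would consume this list
print(len(frames), frames[0].shape)
```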
Explore your own syntheses and then include a gif of the different images you create as "celeb.gif" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance: http://www.scipy-lectures.org/advanced/image_processing/ - and see if you can find the encoding of another attribute that you then apply to your own images. You can even try it with many images and use the utils.montage function to create a large grid of images that evolves over your attributes. Or create a set of expressions perhaps. It's up to you, just explore! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """
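The attribute-vector trick used throughout this section reduces to a mean difference of encodings. A self-contained sketch with synthetic encodings (every number here is made up, and the planted attribute dimension is purely illustrative):

```python
import numpy as np

rng = np.random.RandomState(42)
n_latent = 16

# Synthetic encodings: one group "with" an attribute, one "without".
# The attribute is planted along the first latent dimension.
base = rng.randn(100, n_latent)
with_attr = base + np.eye(n_latent)[0] * 2.0   # shift along dimension 0
without_attr = rng.randn(100, n_latent)

# The attribute vector is the difference of the group means
attr_vector = with_attr.mean(axis=0) - without_attr.mean(axis=0)

# Applying it to a new code moves that code toward the attribute
z = rng.randn(n_latent)
z_more = z + 1.0 * attr_vector
print(z_more[0] - z[0])  # moves mostly along the planted dimension
```

This is exactly the shape of the computation behind the unblur, glasses, and smiling vectors; the only difference in the real notebook is that the encodings come from the trained VAE-GAN rather than a random number generator.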
cwharland/data-science-from-scratch
Clustering.ipynb
mit
class KMeans: """k-means algo""" def __init__(self, k): self.k = k # number of clusters self.means = None # means of clusters def classify(self, input): """return the index of the cluster closest to the input""" return min(range(self.k), key = lambda i: squared_distance(input, self.means[i])) def train(self, inputs): # choose k random points as the initial means self.means = random.sample(inputs, self.k) assignments = None while True: # Find new assignments new_assignments = list(map(self.classify, inputs)) # If nothing changed we're good to go if assignments == new_assignments: return # otherwise keep the new assignments assignments = new_assignments # And compute new means based on assignments for i in range(self.k): # get points in cluster i_points = [p for p,a in zip(inputs, assignments) if a == i] # only update if the cluster is non-empty (avoids divide-by-zero) if i_points: self.means[i] = vector_mean(i_points) inputs = [[-14,-5],[13,13],[20,23],[-19,-11],[-9,-16],[21,27],[-49,15],[26,13],[-46,5],[-34,-1],[11,15],[-49,0],[-22,-16],[19,28],[-12,-8],[-13,-19],[-41,8],[-11,-6],[-25,-9],[-18,-3]] random.seed(0) clusterer = KMeans(2) clusterer.train(inputs) clusterer.means """ Explanation: k-means Randomly select starting locations for k points. Assign each data point to the closest of the k points. If no data point changed its cluster membership, stop. If there was a change, compute new means and repeat. End of explanation """ def squared_clustering_errors(inputs, k): """finds the total squared error for a given k""" clusterer = KMeans(k) clusterer.train(inputs) means = clusterer.means assignments = map(clusterer.classify, inputs) return sum(squared_distance(input, means[cluster]) for input, cluster in zip(inputs, assignments)) ks = range(1, len(inputs) + 1) errors = [squared_clustering_errors(inputs, k) for k in ks] %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_context('talk') plt.plot(ks, errors, '.') """ Explanation: Choosing k End of explanation """ def is_leaf(cluster): """a cluster is a leaf if it has len 1""" return len(cluster) == 1 def 
get_children(cluster): """returns children of the cluster if merged else exception""" if is_leaf(cluster): raise TypeError("a leaf cluster has no children") else: return cluster[1] def get_values(cluster): """returns the value in the cluster (if leaf) or all values in leaf clusters below""" if is_leaf(cluster): return cluster else: return [value for child in get_children(cluster) for value in get_values(child)] def cluster_distance(cluster1, cluster2, distance_agg = min): """compute all pairwise distances btw clusters and apply distance_agg to the list""" return distance_agg([distance(input1, input2) for input1 in get_values(cluster1) for input2 in get_values(cluster2)]) def get_merge_order(cluster): if is_leaf(cluster): return float('inf') else: return cluster[0] def bottom_up_cluster(inputs, distance_agg = min): # we start with all leaf clusters (this is bottom up after all) clusters = [(input,) for input in inputs] # Don't stop until we have one cluster while len(clusters) > 1: # the two clusters we want to merge # are the clusters that are closest without touching c1, c2 = min([(cluster1, cluster2) for i, cluster1 in enumerate(clusters) for cluster2 in clusters[:i]], key = lambda (x,y): cluster_distance(x, y, distance_agg)) # the above is really inefficient in distance calc # we should instead "look up" the distance # once we merge them we remove them from the list clusters = [c for c in clusters if c != c1 and c != c2] # merge them with order = # of clusters left (so that last merge is "0") merged_cluster = (len(clusters), [c1, c2]) # append the merge clusters.append(merged_cluster) return clusters[0] base_cluster = bottom_up_cluster(inputs) base_cluster def generate_clusters(base_cluster, num_clusters): clusters = [base_cluster] # keep going till we have the desired number of clusters while len(clusters) < num_clusters: # choose the last-merge next_cluster = min(clusters, key = get_merge_order) # remove it from the list clusters = [c for c in clusters if c != 
next_cluster] # add its children to the list (this is an unmerge) clusters.extend(get_children(next_cluster)) return clusters three_clusters = [get_values(cluster) for cluster in generate_clusters(base_cluster, 3)] three_clusters """ Explanation: Hierarchical Clustering End of explanation """
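The same bottom-up idea can be checked on a toy example in modern Python. This sketch uses plain lists of points rather than the book's (merge_order, children) representation, and minimum linkage as the cluster distance:

```python
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def min_link(c1, c2):
    """Minimum pairwise distance between two clusters (lists of points)."""
    return min(dist(p, q) for p in c1 for q in c2)

def agglomerate(points, num_clusters):
    """Greedy bottom-up merging until num_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > num_clusters:
        # find the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i)),
            key=lambda ij: min_link(clusters[ij[0]], clusters[ij[1]]),
        )
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

toy = [(0, 0), (0, 1), (10, 10), (10, 11)]
clusters2 = agglomerate(toy, 2)
print(sorted(len(c) for c in clusters2))  # [2, 2]
```

Like the book's version, this recomputes pairwise distances on every merge, so it is quadratic-times-quadratic in the worst case; a distance cache is the standard fix for real data.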
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/text_classification/solutions/rnn_encoder_decoder.ipynb
apache-2.0
import os import pickle import sys import nltk import numpy as np import pandas as pd from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow.keras.layers import ( Dense, Embedding, GRU, Input, ) from tensorflow.keras.models import ( load_model, Model, ) import utils_preproc print(tf.__version__) SEED = 0 MODEL_PATH = 'translate_models/baseline' DATA_URL = 'http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip' LOAD_CHECKPOINT = False tf.random.set_seed(SEED) """ Explanation: Simple RNN Encoder-Decoder for Translation Learning Objectives 1. Learn how to create a tf.data.Dataset for seq2seq problems 1. Learn how to train an encoder-decoder model in Keras 1. Learn how to save the encoder and the decoder as separate models 1. Learn how to piece together the trained encoder and decoder into a translation function 1. Learn how to use the BLEU score to evaluate a translation model Introduction In this lab we'll build a translation model from Spanish to English using an RNN encoder-decoder model architecture. We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which we will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function. Finally, we'll benchmark our results using the industry standard BLEU score. End of explanation """ path_to_zip = tf.keras.utils.get_file( 'spa-eng.zip', origin=DATA_URL, extract=True) path_to_file = os.path.join( os.path.dirname(path_to_zip), "spa-eng/spa.txt" ) print("Translation data stored at:", path_to_file) data = pd.read_csv( path_to_file, sep='\t', header=None, names=['english', 'spanish']) data.sample(3) """ Explanation: Downloading the Data We'll use a language dataset provided by http://www.manythings.org/anki/. 
The dataset contains Spanish-English translation pairs in the format: May I borrow this book? ¿Puedo tomar prestado este libro? The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers. End of explanation """ raw = [ "No estamos comiendo.", "Está llegando el invierno.", "El invierno se acerca.", "Tom no comio nada.", "Su pierna mala le impidió ganar la carrera.", "Su respuesta es erronea.", "¿Qué tal si damos un paseo después del almuerzo?" ] processed = [utils_preproc.preprocess_sentence(s) for s in raw] processed """ Explanation: From the utils_preproc package we have written for you, we will use the following functions to pre-process our dataset of sentence pairs. Sentence Preprocessing The utils_preproc.preprocess_sentence() method does the following: 1. Converts sentence to lower case 2. Adds a space between punctuation and words 3. Replaces tokens that aren't a-z or punctuation with space 4. 
Adds &lt;start&gt; and &lt;end&gt; tokens For example: End of explanation """ integerized, tokenizer = utils_preproc.tokenize(processed) integerized """ Explanation: Sentence Integerizing The utils_preproc.tokenize() method does the following: Splits each sentence into a token list Maps each token to an integer Pads to length of longest sentence It returns an instance of a Keras Tokenizer containing the token-integer mapping along with the integerized sentences: End of explanation """ tokenizer.sequences_to_texts(integerized) """ Explanation: The outputted tokenizer can be used to get back the actual words from the integers representing them: End of explanation """ def load_and_preprocess(path, num_examples): with open(path, 'r') as fp: lines = fp.read().strip().split('\n') # TODO 1a sentence_pairs = [ [utils_preproc.preprocess_sentence(sent) for sent in line.split('\t')] for line in lines[:num_examples] ] return zip(*sentence_pairs) en, sp = load_and_preprocess(path_to_file, num_examples=10) print(en[-1]) print(sp[-1]) """ Explanation: Creating the tf.data.Dataset load_and_preprocess Let's first implement a function that will read the raw sentence-pair file and preprocess the sentences with utils_preproc.preprocess_sentence. 
The load_and_preprocess function takes as input - the path where the sentence-pair file is located - the number of examples one wants to read in. It returns a tuple whose first component contains the English preprocessed sentences, while the second component contains the Spanish ones: End of explanation """ def load_and_integerize(path, num_examples=None): targ_lang, inp_lang = load_and_preprocess(path, num_examples) # TODO 1b input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang) target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang) return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer """ Explanation: load_and_integerize Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple: python (input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer) where input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences inp_lang_tokenizer is the source language tokenizer targ_lang_tokenizer is the target language tokenizer End of explanation """ TEST_PROP = 0.2 NUM_EXAMPLES = 30000 """ Explanation: Train and eval splits We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU. 
Let us set variables for that: End of explanation """ input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize( path_to_file, NUM_EXAMPLES) """ Explanation: Now let's load and integerize the sentence pairs and store the tokenizer for the source and the target language into the inp_lang and targ_lang variables, respectively: End of explanation """ max_length_targ = target_tensor.shape[1] max_length_inp = input_tensor.shape[1] """ Explanation: Let us store the maximal sentence length of both languages into two variables: End of explanation """ splits = train_test_split( input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED) input_tensor_train = splits[0] input_tensor_val = splits[1] target_tensor_train = splits[2] target_tensor_val = splits[3] """ Explanation: We are now using scikit-learn train_test_split to create our splits: End of explanation """ (len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)) """ Explanation: Let's make sure the number of examples in each split looks good: End of explanation """ print("Input Language; int to word mapping") print(input_tensor_train[0]) print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), '\n') print("Target Language; int to word mapping") print(target_tensor_train[0]) print(utils_preproc.int2word(targ_lang, target_tensor_train[0])) """ Explanation: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. 
Note that the &lt;start&gt; token is always encoded as 1, while the &lt;end&gt; token is always encoded as 0: End of explanation """ def create_dataset(encoder_input, decoder_input): # TODO 1c # shift ahead by 1 target = tf.roll(decoder_input, -1, 1) # replace last column with 0s zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32) target = tf.concat((target[:, :-1], zeros), axis=-1) dataset = tf.data.Dataset.from_tensor_slices( ((encoder_input, decoder_input), target)) return dataset """ Explanation: Create tf.data dataset for train and eval Below we implement the create_dataset function that takes as input * encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences * decoder_input which is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences It returns a tf.data.Dataset containing examples of the form python ((source_sentence, target_sentence), shifted_target_sentence) where source_sentence and target_sentence are the integer version of source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1. Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences. 
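The shift that create_dataset performs can be checked on a toy batch with NumPy alone; np.roll plays the role of tf.roll here:

```python
import numpy as np

# A toy batch of two integerized target sentences (1 = <start>, 0 = padding)
decoder_input = np.array([
    [1, 5, 6, 7, 0],
    [1, 8, 9, 0, 0],
])

# Shift everything one step to the left...
target = np.roll(decoder_input, -1, axis=1)
# ...and zero out the last column, where the <start> token wrapped around
target[:, -1] = 0

print(target)  # rows shift left by one; the wrapped first token is zeroed
```

At every position i, target[:, i] now holds the token the decoder should predict after seeing decoder_input[:, i], which is exactly what the cross-entropy loss compares against.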
End of explanation """ BUFFER_SIZE = len(input_tensor_train) BATCH_SIZE = 64 train_dataset = create_dataset( input_tensor_train, target_tensor_train).shuffle( BUFFER_SIZE).repeat().batch(BATCH_SIZE, drop_remainder=True) eval_dataset = create_dataset( input_tensor_val, target_tensor_val).batch( BATCH_SIZE, drop_remainder=True) """ Explanation: Let's now create the actual train and eval dataset using the function above: End of explanation """ EMBEDDING_DIM = 256 HIDDEN_UNITS = 1024 INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1 TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1 """ Explanation: Training the RNN encoder-decoder model We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN. End of explanation """ encoder_inputs = Input(shape=(None,), name="encoder_input") # TODO 2a encoder_inputs_embedded = Embedding( input_dim=INPUT_VOCAB_SIZE, output_dim=EMBEDDING_DIM, input_length=max_length_inp)(encoder_inputs) encoder_rnn = GRU( units=HIDDEN_UNITS, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded) """ Explanation: Let's implement the encoder network with Keras functional API. It will * start with an Input layer that will consume the source language integerized sentences * then feed them to an Embedding layer of EMBEDDING_DIM dimensions * which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS The output of the encoder will be the encoder_outputs and the encoder_state. 
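What the GRU layer computes at each step can be sketched in NumPy. The weights below are random placeholders and the gate equations follow the common textbook formulation; Keras packs its kernels differently, so this is an illustration of the recurrence, not a re-implementation of the layer:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U, b):
    """One GRU step: x is the input embedding, h the previous state.
    W, U, b hold the update (z), reset (r) and candidate (n) parameters."""
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])        # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])        # reset gate
    n = np.tanh(x @ W["n"] + (r * h) @ U["n"] + b["n"])  # candidate state
    return (1 - z) * n + z * h                           # blend old and new

rng = np.random.RandomState(0)
emb, units = 4, 8
W = {k: rng.randn(emb, units) * 0.1 for k in "zrn"}
U = {k: rng.randn(units, units) * 0.1 for k in "zrn"}
b = {k: np.zeros(units) for k in "zrn"}

h = np.zeros(units)
for x in rng.randn(3, emb):   # a three-token "sentence"
    h = gru_step(x, h, W, U, b)
print(h.shape)  # the final state is what the encoder hands to the decoder
```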
End of explanation """ decoder_inputs = Input(shape=(None,), name="decoder_input") # TODO 2b decoder_inputs_embedded = Embedding( input_dim=TARGET_VOCAB_SIZE, output_dim=EMBEDDING_DIM, input_length=max_length_targ)(decoder_inputs) decoder_rnn = GRU( units=HIDDEN_UNITS, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') decoder_outputs, decoder_state = decoder_rnn( decoder_inputs_embedded, initial_state=encoder_state) """ Explanation: We now implement the decoder network, which is very similar to the encoder network. It will * start with an Input layer that will consume the target language integerized sentences * then feed that input to an Embedding layer of EMBEDDING_DIM dimensions * which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS Important: The main difference from the encoder is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked! The output of the decoder will be the decoder_outputs and the decoder_state. 
They should correspond exactly to the types of inputs/outputs in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model. While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder: End of explanation """ STEPS_PER_EPOCH = len(input_tensor_train)//BATCH_SIZE EPOCHS = 1 history = model.fit( train_dataset, steps_per_epoch=STEPS_PER_EPOCH, validation_data=eval_dataset, epochs=EPOCHS ) """ Explanation: Let's now train the model! End of explanation """ if LOAD_CHECKPOINT: encoder_model = load_model(os.path.join(MODEL_PATH, 'encoder_model.h5')) decoder_model = load_model(os.path.join(MODEL_PATH, 'decoder_model.h5')) else: # TODO 3a encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state) decoder_state_input = Input(shape=(HIDDEN_UNITS,), name="decoder_state_input") # Reuses weights from the decoder_rnn layer decoder_outputs, decoder_state = decoder_rnn( decoder_inputs_embedded, initial_state=decoder_state_input) # Reuses weights from the decoder_dense layer predictions = decoder_dense(decoder_outputs) decoder_model = Model( inputs=[decoder_inputs, decoder_state_input], outputs=[predictions, decoder_state] ) """ Explanation: Implementing the translation (or decoding) function We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)! We do however know the first token of the decoder input, which is the &lt;start&gt; token. So using this plus the state of the encoder RNN, we can predict the next token. 
We will then use that token to be the second token of decoder input, and continue like this until we predict the &lt;end&gt; token, or we reach some defined max length. So, the strategy now is to split our trained network into two independent Keras models: an encoder model with signature encoder_inputs -&gt; encoder_state a decoder model with signature [decoder_inputs, decoder_state_input] -&gt; [predictions, decoder_state] This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the &lt;start&gt; token at step 1. Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state. At this point, we can feed the predicted first word as well as the new decoder_state to the decoder again to predict the second word of the translation. This process can be continued until the decoder produces the &lt;end&gt; token. This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model. Remark: If we have already trained and saved the models (i.e., LOAD_CHECKPOINT is True) we will just load the models, otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signature we want. End of explanation """ def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50): """ Arguments: input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN) output_tokenizer: Tokenizer used to convert from int to words Returns translated sentences """ # Encode the input as state vectors. 
states_value = encoder_model.predict(input_seqs) # Populate the first character of target sequence with the start character. batch_size = input_seqs.shape[0] target_seq = tf.ones([batch_size, 1]) decoded_sentences = [[] for _ in range(batch_size)] # TODO 4: Sampling loop for i in range(max_decode_length): output_tokens, decoder_state = decoder_model.predict( [target_seq, states_value]) # Sample a token sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1) tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index) for j in range(batch_size): decoded_sentences[j].append(tokens[j]) # Update the target sequence (of length 1). target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1) # Update states states_value = decoder_state return decoded_sentences """ Explanation: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems). decode_sequences will take as input * input_seqs which is the integerized source language sentence tensor that the encoder can consume * output_tokenizer which is the target language tokenizer we will need to extract back words from predicted word integers * max_decode_length which is the length after which we stop decoding if the &lt;end&gt; token has not been predicted Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method. End of explanation """ sentences = [ "No estamos comiendo.", "Está llegando el invierno.", "El invierno se acerca.", "Tom no comio nada.", "Su pierna mala le impidió ganar la carrera.", "Su respuesta es erronea.", "¿Qué tal si damos un paseo después del almuerzo?" 
] reference_translations = [ "We're not eating.", "Winter is coming.", "Winter is coming.", "Tom ate nothing.", "His bad leg prevented him from winning the race.", "Your answer is wrong.", "How about going for a walk after lunch?" ] machine_translations = decode_sequences( utils_preproc.preprocess(sentences, inp_lang), targ_lang, max_length_targ ) for i in range(len(sentences)): print('-') print('INPUT:') print(sentences[i]) print('REFERENCE TRANSLATION:') print(reference_translations[i]) print('MACHINE TRANSLATION:') print(machine_translations[i]) """ Explanation: Now we're ready to predict! End of explanation """ if not LOAD_CHECKPOINT: os.makedirs(MODEL_PATH, exist_ok=True) # TODO 3b model.save(os.path.join(MODEL_PATH, 'model.h5')) encoder_model.save(os.path.join(MODEL_PATH, 'encoder_model.h5')) decoder_model.save(os.path.join(MODEL_PATH, 'decoder_model.h5')) with open(os.path.join(MODEL_PATH, 'encoder_tokenizer.pkl'), 'wb') as fp: pickle.dump(inp_lang, fp) with open(os.path.join(MODEL_PATH, 'decoder_tokenizer.pkl'), 'wb') as fp: pickle.dump(targ_lang, fp) """ Explanation: Checkpoint Model Now let us save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse: End of explanation """
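The pickle round trip used for the tokenizers can be sanity-checked with any picklable object; here a plain dictionary stands in for the Keras Tokenizer:

```python
import os
import pickle
import tempfile

# A stand-in for a tokenizer's word index
word_index = {"<start>": 1, "hello": 2, "world": 3}

path = os.path.join(tempfile.mkdtemp(), "encoder_tokenizer.pkl")
with open(path, "wb") as fp:
    pickle.dump(word_index, fp)

with open(path, "rb") as fp:
    restored = pickle.load(fp)

print(restored == word_index)  # True: the object survives the round trip
```

Pickling the tokenizers separately from the .h5 model files matters because the integer-to-word mapping is not stored inside the Keras models, yet it is required to interpret their outputs at serving time.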
Evaluation Metric (BLEU) Unlike, say, image classification, there is no one right answer for a machine translation. However, our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation. Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU). It is quick and inexpensive to calculate. It allows flexibility for the ordering of words and phrases. It is easy to understand. It is language independent. It correlates highly with human evaluation. It has been widely adopted. The score is from 0 to 1, where 1 is an exact match. It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4. It is still imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However, BLEU is commonly considered the best among bad options for an automated metric. The NLTK framework has an implementation that we will use. We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now. 
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/ End of explanation """ %%time num_examples = len(input_tensor_val) bleu_1_total = 0 bleu_4_total = 0 for idx in range(num_examples): # TODO 5 reference_sentence = utils_preproc.int2word( targ_lang, target_tensor_val[idx][1:]) decoded_sentence = decode_sequences( input_tensor_val[idx:idx+1], targ_lang, max_length_targ)[0] bleu_1_total += bleu_1(reference_sentence, decoded_sentence) bleu_4_total += bleu_4(reference_sentence, decoded_sentence) print('BLEU 1: {}'.format(bleu_1_total/num_examples)) print('BLEU 4: {}'.format(bleu_4_total/num_examples)) """ Explanation: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes. End of explanation """
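The core of BLEU-1, clipped unigram precision, is easy to compute by hand with collections.Counter. This sketch covers only the precision part; NLTK's sentence_bleu additionally applies a brevity penalty and, here, smoothing:

```python
from collections import Counter

def unigram_precision(reference, candidate):
    """Clipped unigram precision: candidate words get credit up to the
    number of times they appear in the reference."""
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    clipped = sum(min(count, ref_counts[word])
                  for word, count in cand_counts.items())
    return clipped / max(len(candidate), 1)

reference = "winter is coming".split()
candidate = "winter is is here".split()
print(unigram_precision(reference, candidate))  # 0.5: "winter" and one "is" get credit
```

BLEU-4 does the same computation for 2-grams, 3-grams and 4-grams as well, then combines the four precisions as a weighted geometric mean, which is what the (.25, .25, .25, .25) weights tuple passed to nltk above specifies.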
Jackporter415/phys202-2015-work
assignments/assignment05/InteractEx03.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display """ Explanation: Interact Exercise 3 Imports End of explanation """ def soliton(x, t, c, a): """Return phi(x, t) for a soliton wave with constants c and a.""" # np.sqrt and np.cosh broadcast, so a single vectorized expression # handles scalars and NumPy arrays alike return 0.5 * c / np.cosh(np.sqrt(c) / 2 * (x - c*t - a))**2 assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5])) """ Explanation: Using interact for animation with data A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution: $$ \phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right] $$ The constant c is the velocity and the constant a is the initial location of the soliton. Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t are NumPy arrays, in which case it should return a NumPy array itself. End of explanation """ tmin = 0.0 tmax = 10.0 tpoints = 100 t = np.linspace(tmin, tmax, tpoints) xmin = 0.0 xmax = 10.0 xpoints = 200 x = np.linspace(xmin, xmax, xpoints) c = 1.0 a = 0.0 """ Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. 
To set this up, we create the following variables and arrays:
End of explanation
"""

phi = np.ndarray(shape=(xpoints, tpoints), dtype=float)
# Loop over the array indices (not the array values) so that
# phi[i, j] holds phi(x[i], t[j]).
for i in range(xpoints):
    for j in range(tpoints):
        phi[i, j] = soliton(x[i], t[j], c, a)

assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
"""
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
"""

def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, soliton(x, t[i], c, a))
    ax = plt.gca()
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.get_xaxis().tick_bottom()
    ax.axes.get_yaxis().tick_left()
    plt.title('Soliton Wave')
    plt.xlabel('X')
    plt.ylabel('Psi(x,t)')

plot_soliton_data(0)

assert True # leave this for grading the plot_soliton_data function
"""
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
"""

interact(plot_soliton_data, i=(0, 50))

assert True # leave this for grading the interact with plot_soliton_data cell
"""
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation
"""
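The element-by-element loop can also be written as a single broadcast expression; this standalone sketch (same constants as above, fresh arrays) fills the whole grid at once:

```python
import numpy as np

# Vectorized alternative to the double loop: broadcasting an (xpoints, 1)
# column of positions against a (tpoints,) row of times evaluates phi at
# every (x, t) pair in one expression.
c, a = 1.0, 0.0
x = np.linspace(0.0, 10.0, 200)
t = np.linspace(0.0, 10.0, 100)

phi_fast = 0.5 * c / np.cosh(np.sqrt(c) / 2 * (x[:, np.newaxis] - c * t - a))**2
print(phi_fast.shape)  # (200, 100)
```

Broadcasting avoids the Python-level loop entirely, which matters once the grid gets large.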
CUBoulder-ASTR2600/lectures
lecture_11_arrays_plotting.ipynb
isc
%matplotlib inline
"""
Explanation: Plotting Arrays
Using matplotlib
End of explanation
"""

import numpy as np
import matplotlib.pyplot as pl  # import this for plotting routines
"""
Explanation: The argument after the ipython magic is called the backend for plotting. There are several available, also for creating their own zoomable windows. But we also can zoom within the notebook, see below.
End of explanation
"""

a = 9.8      # Acceleration m s^{-2}
count = 101  # Number of numbers
timeArray = np.linspace(0, 10, count)  # Create an array of 101 times between 0 and 10 (inclusive)
distArray = 0.5 * a * timeArray**2     # Create an array of distances calculated from the times
"""
Explanation: Refresher -- acceleration with no initial velocity or displacement
End of explanation
"""

print(timeArray)
print()
print(distArray)
"""
Explanation: Q. What do these arrays (distArray and timeArray) contain?
End of explanation
"""

pl.scatter(timeArray, distArray, color = 'k')
"""
Explanation: To plot distArray vs. timeArray with a scatter plot:
End of explanation
"""

pl.scatter(timeArray, distArray, color = 'k')
pl.xlim(4, 6)
pl.ylim(50, 200)
pl.xlabel('time (s)')
pl.ylabel('distance (m)')
"""
Explanation: To plot just a section to see the discrete nature (and add labels):
End of explanation
"""

%matplotlib notebook
pl.scatter(timeArray, distArray, color = 'k')
pl.xlim(4, 6)
pl.ylim(50, 200)
pl.xlabel('time (s)')
pl.ylabel('distance (m)')
%matplotlib inline
"""
Explanation: Now with the notebook backend:
End of explanation
"""

pl.plot(timeArray, distArray, color='b', ls='-')
pl.xlabel('time (s)')      # xlabel is the abscissa
pl.ylabel('distance (m)')  # ylabel is the ordinate
"""
Explanation: To plot distArray vs. timeArray with a blue solid line:
End of explanation
"""

pl.xlabel('time1 (s)')
pl.plot(timeArray, distArray, color='b', ls='-')
pl.ylabel('distance (m)')
pl.title('Position vs. Time')

pl.savefig('position_v_time.pdf')  # In the same cell as pl.plot
pl.savefig('position_v_time.eps')
pl.savefig('position_v_time.png')
"""
Explanation: To save the figure, use savefig('filename') and the .pdf, or .eps, or .png, or ... extension (which Python interprets for you!):
End of explanation
"""

ls position_v_time*
"""
Explanation: Q. Where will these files be saved on our computer?
End of explanation
"""

yArray = np.linspace(0, 5, 6)  # note: linspace includes both endpoints, so 6 points span 0 to 5
zArray = yArray[1:4]
print(yArray, zArray)
# Q. What will y and z contain?

yArray[3] = 10
"""
Explanation: More array methods
Three topics today:
Array slicing vs. copying
"Allocating" or "initializing" arrays
Boolean logic on arrays
Making copies of arrays
End of explanation
"""

print(yArray, zArray)
"""
Explanation: Q. What does the next command yield?
End of explanation
"""

yArray = np.linspace(0, 5, 6)
zArray = yArray.copy()
print(yArray, zArray)

zArray = yArray.copy()[1:4]  # you only `catch` the slice into the new variable, not the rest of the copy
print(yArray, zArray)

yArray[3] = 10
print(yArray, zArray)
"""
Explanation: zArray is not a copy of yArray, it is a slice of yArray!
AND: All arrays generated by basic slicing are always views of the original array.
In other words, the variable zArray is a reference to three elements within yArray, elements 1, 2, and 3.
If this is not the desired behavior, copy arrays:
End of explanation
"""

xArray = np.array([1, 2, 3])
aArray = xArray.copy()
aArray
"""
Explanation: "copy" is an attribute of every numpy array, as are "shape", "size", "min", "max", etc.
Allocating Arrays
If we want an array with the same "shape" as another array, we've seen that we can copy an array with:
End of explanation
"""

print(xArray.shape)  # this is a 1D vector
print(xArray.ndim)
xArray

xArray.shape = (3,1)
print(xArray.shape)  # Now it's a 3x1 2D matrix!
print(xArray.ndim) xArray xArray.shape = (3,) aArray = np.zeros(xArray.shape, xArray.dtype) print(aArray.shape) aArray """ Explanation: then fill the array with the appropriate values. However, we could also use numpy.zeros with the attributes xArray.shape and xArray.dtype: End of explanation """ np.zeros((2,3,4)) """ Explanation: Which gives aArray the same "shape" and data type as xArray. Q. What do I mean by the "shape" of the array? End of explanation """ aArray = np.zeros_like(xArray) np.zeros?? bArray = np.ones_like(xArray) print(aArray, bArray) """ Explanation: Q. And what is the data type (dtype)? Alternatively we could do: End of explanation """ # remember, we already imported numpy (as np)! xArray = np.linspace(1, 10, 10) xArray """ Explanation: Generalized Indexing Subarrays can be sliced too, with or without range: End of explanation """ # Note the double brackets indicating a subarray xArray[[1, 5, 6]] = -1 xArray # Using range instead: xArray = np.linspace(1, 10, 10) xArray[range(3, 10, 3)] = -1 xArray """ Explanation: Q. What will xArray contain? End of explanation """ # Compare xArray = np.linspace(1, 10, 10) xArray[[3, 6, 9]] = -1 xArray """ Explanation: Q. What will xArray contain? End of explanation """ xArray myArray = xArray < 0 myArray xArray[xArray < 0] """ Explanation: Boolean Logic When do I use that? * missing or invalid data * investigating subset of a dataset * masking/filtering etc. 
Complementary methods for dealing with missing or invalid data: numpy masked arrays http://docs.scipy.org/doc/numpy/reference/maskedarray.html (masked arrays are a bit harder to use, but offer more powerful features) For example, return a slice of the array consisting of negative elements only: End of explanation """ xArray = np.arange(-5, 5) xArray xArray[xArray < 0] = xArray.max() xArray """ Explanation: This will replace the elements of a new xArray with values less than zero with the maximum of xArray: End of explanation """ xArray = np.arange(-5, 5) xArray """ Explanation: Compound Conditionals & Arrays numpy has routines for doing boolean logic: End of explanation """ np.logical_and(xArray > 0, xArray % 2 == 1) # % is the modulus: x % 2 == 1 means the remainder of x/2 is 1 # Q. So, what should running this cell give us? """ Explanation: "and" End of explanation """ np.logical_or(xArray == xArray.min(), xArray == xArray.max()) np.logical_not(xArray == xArray.min()) print(np.any(xArray > 10)) print(np.any(xArray < -2)) print(np.all(xArray > -10)) print(np.all(xArray > -2)) """ Explanation: "or" End of explanation """
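The masking ideas above combine naturally; this short sketch builds a compound mask and uses np.where, which selects between two values per element without mutating the input array:

```python
import numpy as np

# Build a compound boolean mask, and use np.where to act on a condition
# without modifying the original array.
xArray = np.arange(-5, 5)

mask = np.logical_and(xArray > 0, xArray % 2 == 1)  # positive AND odd
clipped = np.where(xArray < 0, 0, xArray)           # negatives -> 0, original untouched

print(xArray[mask])  # [1 3]
print(clipped)
```

Unlike the in-place assignment `xArray[xArray < 0] = ...` shown above, np.where returns a new array and leaves xArray unchanged.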
hpparvi/PyTransit
notebooks/roadrunner/roadrunner_model_example_3.ipynb
gpl-2.0
%pylab inline
rc('figure', figsize=(13,6))

def plot_lc(time, flux, c=None, ylim=(0.9865, 1.0025), ax=None, alpha=1):
    if ax is None:
        fig, ax = subplots()
    else:
        fig, ax = None, ax
    ax.plot(time, flux, c=c, alpha=alpha)
    ax.autoscale(axis='x', tight=True)
    setp(ax, xlabel='Time [d]', ylabel='Flux', xlim=time[[0,-1]], ylim=ylim)
    if fig is not None:
        fig.tight_layout()
    return ax
"""
Explanation: RoadRunner transit model example III - LDTk-based limb darkening
Author: Hannu Parviainen<br>
Last modified: 16.9.2020
The LDTk limb darkening model, pytransit.LDTkLDModel, works as an example of a more complex limb darkening model that is best implemented as a subclass of pytransit.LDModel. The LDTk limb darkening model uses LDTk to create a set of stellar limb darkening profile samples given the stellar $T_\mathrm{Eff}$, $\log g$, and $z$ with their uncertainties, and uses the profiles directly to calculate the transit. The profiles are created from the PHOENIX-calculated specific intensity spectra by Husser (2013), and the model completely avoids approximating the limb darkening profile with an analytical function. The model is parameter free after the stellar parameters have been given.
The model can be frozen for model optimisation, and thawed for MCMC posterior estimation. When frozen, the model returns the average limb darkening profile interpolated from the profile at the given $\mu$ locations. When thawed, each model evaluation chooses a random limb darkening profile from the sample and uses interpolation to evaluate the model at the wanted $\mu$ values.
End of explanation
"""

from pytransit import RoadRunnerModel, LDTkLDModel
from ldtk import sdss_g, sdss_i, sdss_z

time = linspace(-0.05, 0.05, 1500)
"""
Explanation: Import the model
First, we import the RoadRunnerModel and LDTkLDModel and some simple transmission functions from LDTk.
End of explanation
"""

ldm = LDTkLDModel(teff=(5500, 150), logg=(4.5, 0.1), z=(0.0, 0.1),
                  pbs=[sdss_i], frozen=True)
"""
Explanation: Example 1: single passband
The LDTkLDModel is initialised by giving it the stellar parameters and passband transmission functions,
End of explanation
"""

tm = RoadRunnerModel(ldm)
tm.set_data(time)
"""
Explanation: and given to the RoadRunnerModel as any other limb darkening model.
End of explanation
"""

flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1);
"""
Explanation: after which the transit model evaluation goes as usual
End of explanation
"""

ldm = LDTkLDModel([sdss_g, sdss_z], (5500, 150), (4.5, 0.1), (0.0, 0.1), frozen=True)

lcids = zeros(time.size, int)
lcids[time.size//2:] = 1

tm = RoadRunnerModel(ldm)
tm.set_data(time, lcids=lcids, pbids=[0,1])

flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1, ylim=(0.986, 1.0025));
"""
Explanation: Example 2: multiple passbands
End of explanation
"""

ldm.frozen = False

flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1);

for i in range(10):
    flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
    ax = plot_lc(time, flux1, ax=ax, c='C0', alpha=0.25);

setp(ax, ylim=(0.986, 1.0025))
"""
Explanation: Thawing the model
After thawing, the model takes a random sample from the limb darkening profile sample set every time it is evaluated. We don't want this behaviour when fitting a model to observations, since this sort of randomness can easily confuse even the best optimiser, but it is exactly what we want when doing MCMC for parameter posterior estimation.
End of explanation
"""
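As an aside, the frozen/thawed behaviour described above amounts to interpolating tabulated profiles at requested $\mu$ values. The sketch below mimics that with a synthetic profile table in plain NumPy; the profile values here are made up, and this is not PyTransit's actual code (the real LDTk profiles come from the PHOENIX spectra):

```python
import numpy as np

# Synthetic stand-in for a set of sampled limb darkening profiles:
# a noisy linear law tabulated on a mu grid.
rng = np.random.default_rng(1)
mu_grid = np.linspace(0.0, 1.0, 50)
profiles = 1.0 - 0.6 * (1.0 - mu_grid) + 0.01 * rng.normal(size=(20, 50))

mu_wanted = np.array([0.1, 0.5, 0.9])

# "Frozen": deterministic -- interpolate the sample-averaged profile.
frozen = np.interp(mu_wanted, mu_grid, profiles.mean(axis=0))

# "Thawed": stochastic -- interpolate one randomly chosen profile per call.
thawed = np.interp(mu_wanted, mu_grid, profiles[rng.integers(len(profiles))])

print(frozen)
print(thawed)
```

Repeated "thawed" calls scatter around the "frozen" curve, which is what propagates the stellar-parameter uncertainty into the posterior.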
dereneaton/ipyrad
tests/API_user-guide.ipynb
gpl-3.0
import ipyrad as ip """ Explanation: User guide to the ipyrad API Welcome! This tutorial will introduce you to the basic and advanced features of working with the ipyrad API to assemble RADseq data in Python. The API offers many advantages over the command-line interface, but requires a little more work up front to learn the necessary tools for using it. This includes knowing some very rudimentary Python, and setting up a Jupyter notebook. End of explanation """ ## this is a comment, it is not executed, but the code below it is. import ipyrad as ip ## here we print the version print ip.__version__ """ Explanation: Getting started with Jupyter notebooks This tutorial is an example of a Jupyter Notebook. If you've installed ipyrad then you already have jupyter installed as well, which you can start from the command-line (type jupyter-notebook) to launch an interactive notebook like this one. For some background on how jupyter notebooks work I would recommend searching on google, or watching this YouTube video. Once you have the hang of it, follow along with this code in your own notebook. Connecting your notebook to a cluster We have a separate tutorial about using Jupyter notebooks and connecting Jupyter notebooks to a computing cluster (see here). For this notebook I will assume that you are running this code in a Jupyter notebook, and that you have an ipcluster instance running either locally or remotely on a cluster. If an ipcluster instance is running on its default settings (default profile) then ipyrad will automatically use all available cores started on that cluster instance. Import Python libraries The only library we need to import is ipyrad. The import command is usually the first code called in a Python document to load any necessary packages. In the code below, we use a convenient trick in Python to tell it that we want to refer to ipyrad simply as ip. This saves us a little space since we might type the name many times. 
Below that, we use the print statement to print the version number of ipyrad. This is good practice to keep a record of which software version we are using.
End of explanation
"""

## create an Assembly object named data1.
data1 = ip.Assembly("data1")
"""
Explanation: The ipyrad API data structures
There are two main objects in ipyrad: Assembly class objects and Sample class objects. And in fact, most users will only ever interact with the Assembly class objects, since Sample objects are stored inside of the Assembly objects, and the Assembly objects have functions, such as merge, and branch, that are designed for manipulating and exchanging Samples between different Assemblies.
Assembly Class objects
Assembly objects are a unique data structure that ipyrad uses to store and organize information about how to Assemble RAD-seq data. It contains functions that can be applied to data, such as clustering, and aligning sequences. And it stores information about which settings (parameters) to use for assembly functions, and which Samples the functions should be applied to. You can think of it mostly as a container that has a set of rules associated with it.
To create a new Assembly object use the ip.Assembly() function and pass it the name of your new Assembly. Creating an object in this way has exactly the same effect as using the -n {name} argument in the ipyrad command line tool, except in the API instead of creating a params.txt file, we store the new Assembly information in a Python variable. This can be named anything you want. Below I name the variable data1 so it is easy to remember that the Assembly name is also data1.
End of explanation
"""

## setting/modifying parameters for this Assembly object
data1.set_params('project_dir', "pedicularis")
data1.set_params('sorted_fastq_path', "./example_empirical_rad/*.gz")
data1.set_params('filter_adapters', 2)
data1.set_params('datatype', 'rad')

## prints the parameters to the screen
data1.get_params()
"""
Explanation: Setting parameters
You now have an Assembly object with a default set of parameters associated with it, analogous to the params file in the command line tool. You can view and modify these parameters using two functions of the Assembly object, set_params() and get_params().
End of explanation
"""

## this should raise an error, since clust_threshold cannot be 2.0
data1.set_params("clust_threshold", 2.0)
"""
Explanation: Instantaneous parameter (and error) checking
A nice feature of the set_params() function in the ipyrad API is that it checks your parameter settings at the time that you change them to make sure that they are compatible. By contrast, the ipyrad CLI does not check params until you try to run a step function. Below you can see that an error is raised when we try to set "clust_threshold" to 2.0, since this parameter requires a decimal value between 0 and 1. It's hard to catch every possible error, but we've tried to catch many of the most common errors in parameter settings.
End of explanation
"""

print data1.name

## another example attribute listing directories
## associated with this object. Most are empty b/c
## we haven't started creating files yet. But you
## can see that it shows the fastq directory.
print data1.dirs
"""
Explanation: Attributes of Assembly objects
Assembly objects have many attributes which you can access to learn more about your Assembly. To see the full list of options you can type the name of your Assembly variable, followed by a '.', and then press <tab>. This will use tab-completion to list all of the available options. Below I print a few examples.
End of explanation
"""

## run step 1 to create Samples objects
data1.run("1")
"""
Explanation: Sample Class objects
Sample Class objects correspond to individual samples in your study. They store the file paths pointing to the data that is saved on disk, and they store statistics about the results of each step of the Assembly. Sample class objects are stored inside Assembly class objects, and can be added, removed, or merged with other Sample class objects between different Assemblies.
Creating Samples
Samples are created during step 1 of the ipyrad Assembly. This involves either demultiplexing raw data files or loading data files that are already demultiplexed. For this example we are loading demultiplexed data files. Because we've already entered the path to our data files in sorted_fastq_path of our Assembly object, we can go ahead and run step 1 to create Sample objects that are linked to the data files.
End of explanation
"""

## The force flag allows you to re-run a step that is already finished
data1.run("1", force=True)
"""
Explanation: The .run() command
The run function is equivalent to the -s argument in the ipyrad command line tool, and tells ipyrad which steps (1-7) of the assembly to run. If a step has already been run on your samples they will be skipped and it will print a warning. You can enforce overwriting the existing data using the force flag.
End of explanation
"""

## this is the explicit way to connect to ipcluster
import ipyparallel

## connect to a running ipcluster instance
ipyclient = ipyparallel.Client()

## or, if you used a named profile then enter that
ipyclient = ipyparallel.Client(profile="default")

## call the run function of ipyrad and pass it the ipyclient
## process that you want the work distributed on.
data1.run("1", ipyclient=ipyclient, force=True)
"""
Explanation: The run command will automatically parallelize work across all cores of a running ipcluster instance (remember, you should have started this outside of the notebook.
Or you can start it now.) If ipcluster is running on the default profile then ipyrad will detect and use it when the run command is called. However, if you start an ipcluster instance with a specific profile name then you will need to connect to it using the ipyparallel library and then pass the connection client object to ipyrad. I'll show an example of that here.
End of explanation
"""

## Sample objects stored as a dictionary
data1.samples
"""
Explanation: Samples stored in an Assembly
You can see below that after step 1 has been run there will be a collection of Sample objects stored in an Assembly that can be accessed from the attribute .samples. They are stored as a dictionary in which the keys are Sample names and the values of the dictionary are the Sample objects.
End of explanation
"""

## run step 1 to create Samples objects
data1.run("1", show_cluster=True, force=True)
"""
Explanation: The progress bar
As you can see, running a step of the analysis prints a progress bar similar to what you would see in the ipyrad command line tool. There are some differences, however. It shows on the far right "s1" to indicate that this was step 1 of the assembly, and it does not print information about our cluster setup (e.g., number of nodes and cores). This was a stylistic choice to provide a cleaner output for analyses inside Jupyter notebooks. You can view the cluster information when running the step functions by adding the argument show_cluster=True.
End of explanation
"""

## print full stats summary
print data1.stats

## print full stats for step 1 (in this case it's the same but for other
## steps the stats_dfs often contains more information.)
print data1.stats_dfs.s1
"""
Explanation: Viewing results of Assembly steps
Results for each step are stored in Sample class objects; however, Assembly class objects have functions available for summarizing the stats of all Sample class objects that they contain, which provides an easy way to view results.
This includes the .stats attribute and the .stats_dfs attributes for each step.
End of explanation
"""

## access all Sample names in data1
allsamples = data1.samples.keys()
print "Samples in data1:\n", "\n".join(allsamples)

## Drop the two samples from this list that have "prz" in their names.
## This is a programmatic way to remove the outgroup samples.
subs = [i for i in allsamples if "prz" not in i]

## use branching to create new Assembly named 'data2'
## with only Samples whose name is in the subs list
data2 = data1.branch("data2", subsamples=subs)
print "Samples in data2:\n", "\n".join(data2.samples)
"""
Explanation: Branching to subsample taxa
Branching in the ipyrad API works the same as in the CLI, but in many ways is easier to use because you can access attributes of the Assembly objects much more easily, such as when you want to provide a list of Sample names in order to subsample (exclude samples) during the branching process. Below is an example.
End of explanation
"""
data = ip.Assembly("base") data.set_params("project_dir", "branch-test") data.set_params("raw_fastq_path", "./ipsimdata/rad_example_R1_.fastq.gz") data.set_params("barcodes_path", "./ipsimdata/rad_example_barcodes.txt") ## step 1: load in the data data.run('1') ## let's create a dictionary to hold the finished assemblies adict = {} ## iterate over parameters settings creating a new named assembly for filter_setting in [1, 2]: ## create a new name for the assembly and branch newname = data.name + "_f{}".format(filter_setting) child1 = data.branch(newname) child1.set_params("filter_adapters", filter_setting) child1.run("2") ## iterate over clust thresholds for clust_threshold in ['0.85', '0.90']: newname = child1.name + "_c{}".format(clust_threshold[2:]) child2 = child1.branch(newname) child2.set_params("clust_threshold", clust_threshold) child2.run("3456") ## iterate over min_sample coverage for min_samples_locus in [4, 12]: newname = child2.name + "_m{}".format(min_samples_locus) child3 = child2.branch(newname) child3.set_params("min_samples_locus", min_samples_locus) child3.run("7") ## store the complete assembly in the dictionary by its name ## so it is easy for us to access and retrieve, since we wrote ## over the variable name 'child' during the loop. You can do ## this using dictionaries, lists, etc., or, as you'll see below, ## we can use the 'load_json()' command to load a finished assembly ## from its saved file object. adict[newname] = child3 """ Explanation: Branching to iterate over parameter settings This is the real bread and butter of the ipyrad API. You can write simple for-loops using Python code to apply a range of parameter settings to different branched assemblies. Furthermore, using branching this can be done in a way that greatly reduces the amount of computation needed to produce multiple data sets. Essentially, branching allows you to recycle intermediate states that are shared between branched Assemblies. 
This is particularly useful when assemblies differ by only one or a few parameters that are applied late in the assembly process. To set up efficient branching code in this way requires some prior knowledge about when (which step) each parameter is applied in ipyrad. That information is available in the documentation (http://ipyrad.readthedocs.io/parameters.html). When setting up for-loop routines like the one below it may be helpful to break the script up among multiple cells of a Jupyter notebook so that you can easily restart from one step or another. It may also be useful to subsample your data set to a small number of samples to test the code first, and if all goes well, then proceed with your full data set.
An example to create many assemblies
In the example below we will create 8 complete Assemblies which vary in three different parameter combinations (filter_setting, clust_threshold, and min_sample).
End of explanation
"""

## run an assembly up to step 3
data.run("123", force=True)

## select clade 1 from the sample names
subs = [i for i in data.samples if "1" in i]

## branch selecting only those samples
data1 = data.branch("data1", subs)

## select clade 2 from the sample names
subs = [i for i in data.samples if "2" in i]

## branch selecting only those samples
data2 = data.branch("data2", subs)

## make diploid base calls on 'data1' samples
data1.set_params("max_alleles_consens", 2)

## make haploid base calls on 'data2' samples
data2.set_params("max_alleles_consens", 1)

## run both assemblies through base-calling steps
data1.run("45", force=True)
data2.run("45", force=True)

## merge assemblies back together for across-sample steps
data3 = ip.merge("data3", [data1, data2])
data3.run("67")
"""
Explanation: Working with your data programmatically
A key benefit of using the ipyrad API is that all of the statistics of your analysis are more-or-less accessible through the Assembly and Sample objects.
For example, if you want to examine how different minimum depth settings affect your heterozygosity estimates, then you can create two separate branches with different parameter values and access the heterozygosity estimates from the .stats attributes.
Advanced branching and merging
If you wanted to apply a set of parameters to only a subset of your Samples during part of the assembly you can do so easily with branching and merging. In the example below I create two new branches from the Assembly before the base-calling steps, where each Assembly selects a different subset of the samples. Then I run steps 4 and 5 with a different set of parameters applied to each, so that one makes haploid base calls and the other makes diploid base calls. Then I merge the Assemblies back together so that all Samples are assembled together in steps 6 and 7.
End of explanation
"""

## create a branch for a population-filtered assembly
pops = data3.branch("populations")

## assign samples to populations
pops.populations = {
    "clade1": (1, [i for i in pops.samples if "1" in i]),
    "clade2": (1, [i for i in pops.samples if "2" in i]),
}

## print the population dictionary
pops.populations

## run assembly
pops.run("7")
"""
Explanation: Population assignments
You can easily make population assignments in ipyrad using Python dictionaries. This is useful for applying min_sample_locus filters to different groups of samples to maximize data that is shared across all of your samples. For example, if we wanted to ensure that every locus had data that was shared across the clades in our data set then we would set a min_samples_locus value of 1 for each clade. You can see below that I use list-comprehension to select all samples in each clade based on the presence of characters in their names that define them (i.e., the presence of "1" for all samples in clade 1). When possible, this makes group assignments much easier than having to write every sample name by hand.
End of explanation """ ## save assembly object (also auto-saves after every run() command) data1.save() ## load assembly object data1 = ip.load_json("pedicularis/data1.json") ## write params file for use by the CLI data1.write_params(force=True) """ Explanation: Saving Assembly objects Assembly objects (and the Sample objects they contain) are automatically saved each time that you use the .run() function. However, you can also save by calling the .save() function of an Assembly object. This updates the JSON file. Additionally, Assembly objects have a function called .write_params() which can be invoked to create a params file for use by the ipyrad command line tool. End of explanation """
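The populations mapping used by ipyrad above is an ordinary Python dict of group name -> (min_samples, [sample names]), so it can be built programmatically. A standalone sketch with invented sample names:

```python
# Build a populations-style dict programmatically. The sample names
# below are made up for illustration; with a real Assembly you would
# iterate over its .samples keys instead.
samples = ["1A_0", "1B_0", "1C_0", "2E_0", "2F_0", "2G_0"]

populations = {
    "clade{}".format(n): (1, [s for s in samples if str(n) in s])
    for n in (1, 2)
}
print(populations["clade1"])  # (1, ['1A_0', '1B_0', '1C_0'])
```

The dict comprehension mirrors the per-clade list comprehensions shown earlier, and scales to any number of groups.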
sz2472/foundations-homework
data and database/Homework_4_database_shengyingzhao.ipynb
mit
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120' """ Explanation: Homework #4 These problem sets focus on list comprehensions, string operations and regular expressions. Problem set #1: List slices and list comprehensions Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str: End of explanation """ raw_numbers = numbers_str.split(",") numbers_list=[int(x) for x in raw_numbers] max(numbers_list) """ Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985'). End of explanation """ sorted(numbers_list)[-10:] """ Explanation: Great! We'll be using the numbers list you created above in the next few problems. In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output: [506, 528, 550, 581, 699, 721, 736, 804, 855, 985] (Hint: use a slice.) End of explanation """ sorted(x for x in numbers_list if x%3==0) """ Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output: [120, 171, 258, 279, 528, 699, 804, 855] End of explanation """ from math import sqrt [sqrt(x) for x in numbers_list if x < 100] """ Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. 
Expected output: [2.6457513110645907, 8.06225774829855, 8.246211251235321] (These outputs might vary slightly depending on your platform.) End of explanation """ planets = [ {'diameter': 0.382, 'mass': 0.06, 'moons': 0, 'name': 'Mercury', 'orbital_period': 0.24, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 0.949, 'mass': 0.82, 'moons': 0, 'name': 'Venus', 'orbital_period': 0.62, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 1.00, 'mass': 1.00, 'moons': 1, 'name': 'Earth', 'orbital_period': 1.00, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 0.532, 'mass': 0.11, 'moons': 2, 'name': 'Mars', 'orbital_period': 1.88, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 11.209, 'mass': 317.8, 'moons': 67, 'name': 'Jupiter', 'orbital_period': 11.86, 'rings': 'yes', 'type': 'gas giant'}, {'diameter': 9.449, 'mass': 95.2, 'moons': 62, 'name': 'Saturn', 'orbital_period': 29.46, 'rings': 'yes', 'type': 'gas giant'}, {'diameter': 4.007, 'mass': 14.6, 'moons': 27, 'name': 'Uranus', 'orbital_period': 84.01, 'rings': 'yes', 'type': 'ice giant'}, {'diameter': 3.883, 'mass': 17.2, 'moons': 14, 'name': 'Neptune', 'orbital_period': 164.8, 'rings': 'yes', 'type': 'ice giant'}] """ Explanation: Problem set #2: Still more list comprehensions Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed. End of explanation """ [x['name'] for x in planets if x['diameter']>4] """ Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. 
Expected output: ['Jupiter', 'Saturn', 'Uranus'] End of explanation """ sum(x['mass'] for x in planets) """ Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79 End of explanation """ [x['name'] for x in planets if 'giant' in x['type']] """ Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output: ['Jupiter', 'Saturn', 'Uranus', 'Neptune'] End of explanation """ [x['name'] for x in sorted(planets, key=lambda x:x['moons'])] # dicts aren't directly comparable, so sort the list of planet dicts by their number of moons def get_moon_count(d): return d['moons'] [x['name'] for x in sorted(planets, key=get_moon_count)] # sort the list by diameter in descending order: [x['name'] for x in sorted(planets, key=lambda d:d['diameter'],reverse=True)] [x['name'] for x in \ sorted(planets, key=lambda d:d['diameter'], reverse=True) \ if x['diameter'] >4] """ Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.)
Expected output: ['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter'] End of explanation """ import re poem_lines = ['Two roads diverged in a yellow wood,', 'And sorry I could not travel both', 'And be one traveler, long I stood', 'And looked down one as far as I could', 'To where it bent in the undergrowth;', '', 'Then took the other, as just as fair,', 'And having perhaps the better claim,', 'Because it was grassy and wanted wear;', 'Though as for that the passing there', 'Had worn them really about the same,', '', 'And both that morning equally lay', 'In leaves no step had trodden black.', 'Oh, I kept the first for another day!', 'Yet knowing how way leads on to way,', 'I doubted if I should ever come back.', '', 'I shall be telling this with a sigh', 'Somewhere ages and ages hence:', 'Two roads diverged in a wood, and I---', 'I took the one less travelled by,', 'And that has made all the difference.'] """ Explanation: Problem set #3: Regular expressions In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed. End of explanation """ [line for line in poem_lines if re.search(r"\b\w{4}\s\w{4}\b",line)] """ Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library. In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.) 
Expected result: ['Then took the other, as just as fair,', 'Had worn them really about the same,', 'And both that morning equally lay', 'I doubted if I should ever come back.', 'I shall be telling this with a sigh'] End of explanation """ [line for line in poem_lines if re.search(r"\b\w{5}\b.?$",line)] """ Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output: ['And be one traveler, long I stood', 'And looked down one as far as I could', 'And having perhaps the better claim,', 'Though as for that the passing there', 'In leaves no step had trodden black.', 'Somewhere ages and ages hence:'] End of explanation """ all_lines = " ".join(poem_lines) all_lines """ Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell. End of explanation """ match = re.findall(r"I \w+", all_lines) # the group () captures only the word after 'I'; re.search() returns a match object, # while re.findall() returns a list of the captured groups match = re.findall(r"I (\w+)", all_lines) match """ Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping!
Expected output: ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took'] End of explanation """ entrees = [ "Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95", "Lavender and Pepperoni Sandwich $8.49", "Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v", "Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v", "Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95", "Rutabaga And Cucumber Wrap $8.49 - v" ] """ Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries. End of explanation """ menu = [] for item in entrees: pass # replace 'pass' with your code menu """ Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop. Expected output: [{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ', 'price': 10.95, 'vegetarian': False}, {'name': 'Lavender and Pepperoni Sandwich ', 'price': 8.49, 'vegetarian': False}, {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ', 'price': 12.95, 'vegetarian': True}, {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ', 'price': 9.95, 'vegetarian': True}, {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ', 'price': 19.95, 'vegetarian': False}, {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}] End of explanation """
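The menu-parsing loop above is deliberately left as `pass` for the reader. One possible completion — a sketch, not the only solution, assuming each entree line puts its price after the only `$` sign and flags vegetarian dishes with a trailing `- v` (the regex and group layout here are one choice among several):

```python
import re

entrees = [
    "Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
    "Lavender and Pepperoni Sandwich $8.49",
    "Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
    "Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
    "Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
    "Rutabaga And Cucumber Wrap $8.49 - v"
]

menu = []
for item in entrees:
    # group 1: everything before the dollar sign (the trailing space is
    # kept, matching the expected output); group 2: the decimal price;
    # group 3: an optional " - v" suffix flagging vegetarian dishes
    match = re.search(r"^(.*)\$(\d+\.\d+)( - v)?$", item)
    menu.append({
        'name': match.group(1),
        'price': float(match.group(2)),
        'vegetarian': match.group(3) is not None,
    })

menu
```

Run against the full entrees list, this produces the six dictionaries shown in the expected output, e.g. the first entry is {'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ', 'price': 10.95, 'vegetarian': False}.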
johnnyliu27/openmc
examples/jupyter/pandas-dataframes.ipynb
mit
import glob from IPython.display import Image import matplotlib.pyplot as plt import scipy.stats import numpy as np import pandas as pd import openmc %matplotlib inline """ Explanation: This notebook demonstrates how systematic analysis of tally scores is possible using Pandas dataframes. A dataframe can be automatically generated using the Tally.get_pandas_dataframe(...) method. Furthermore, by linking the tally data in a statepoint file with geometry and material information from a summary file, the dataframe can be shown with user-supplied labels. End of explanation """ # 1.6 enriched fuel fuel = openmc.Material(name='1.6% Fuel') fuel.set_density('g/cm3', 10.31341) fuel.add_nuclide('U235', 3.7503e-4) fuel.add_nuclide('U238', 2.2625e-2) fuel.add_nuclide('O16', 4.6007e-2) # borated water water = openmc.Material(name='Borated Water') water.set_density('g/cm3', 0.740582) water.add_nuclide('H1', 4.9457e-2) water.add_nuclide('O16', 2.4732e-2) water.add_nuclide('B10', 8.0042e-6) # zircaloy zircaloy = openmc.Material(name='Zircaloy') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_nuclide('Zr90', 7.2758e-3) """ Explanation: Generate Input Files First we need to define materials that will be used in the problem. We will create three materials for the fuel, water, and cladding of the fuel pin. End of explanation """ # Instantiate a Materials collection materials_file = openmc.Materials([fuel, water, zircaloy]) # Export to "materials.xml" materials_file.export_to_xml() """ Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file. 
End of explanation """ # Create cylinders for the fuel and clad fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218) clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720) # Create boundary planes to surround the geometry # Use both reflective and vacuum boundaries to make life interesting min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective') max_x = openmc.XPlane(x0=+10.71, boundary_type='vacuum') min_y = openmc.YPlane(y0=-10.71, boundary_type='vacuum') max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective') min_z = openmc.ZPlane(z0=-10.71, boundary_type='reflective') max_z = openmc.ZPlane(z0=+10.71, boundary_type='reflective') """ Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem. End of explanation """ # Create fuel Cell fuel_cell = openmc.Cell(name='1.6% Fuel', fill=fuel, region=-fuel_outer_radius) # Create a clad Cell clad_cell = openmc.Cell(name='1.6% Clad', fill=zircaloy) clad_cell.region = +fuel_outer_radius & -clad_outer_radius # Create a moderator Cell moderator_cell = openmc.Cell(name='1.6% Moderator', fill=water, region=+clad_outer_radius) # Create a Universe to encapsulate a fuel pin pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin', cells=[ fuel_cell, clad_cell, moderator_cell ]) """ Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces. End of explanation """ # Create fuel assembly Lattice assembly = openmc.RectLattice(name='1.6% Fuel - 0BA') assembly.pitch = (1.26, 1.26) assembly.lower_left = [-1.26 * 17. 
/ 2.0] * 2 assembly.universes = [[pin_cell_universe] * 17] * 17 """ Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch. End of explanation """ # Create root Cell root_cell = openmc.Cell(name='root cell', fill=assembly) # Add boundary planes root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z # Create root Universe root_universe = openmc.Universe(name='root universe') root_universe.add_cell(root_cell) """ Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe. End of explanation """ # Create Geometry and export to "geometry.xml" geometry = openmc.Geometry(root_universe) geometry.export_to_xml() """ Explanation: We now must create a geometry that is assigned a root universe and export it to XML. End of explanation """ # OpenMC simulation parameters min_batches = 20 max_batches = 200 inactive = 5 particles = 2500 # Instantiate a Settings object settings = openmc.Settings() settings.batches = min_batches settings.inactive = inactive settings.particles = particles settings.output = {'tallies': False} settings.trigger_active = True settings.trigger_max_batches = max_batches # Create an initial uniform spatial source distribution over fissionable zones bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.] uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True) settings.source = openmc.source.Source(space=uniform_dist) # Export to "settings.xml" settings.export_to_xml() """ Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 minimum active batches each with 2500 particles. We also tell OpenMC to turn tally triggers on, which means it will keep running until some criterion on the uncertainty of tallies is reached. 
End of explanation """ # Instantiate a Plot plot = openmc.Plot(plot_id=1) plot.filename = 'materials-xy' plot.origin = [0, 0, 0] plot.width = [21.5, 21.5] plot.pixels = [250, 250] plot.color_by = 'material' # Instantiate a Plots collection and export to "plots.xml" plot_file = openmc.Plots([plot]) plot_file.export_to_xml() """ Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully. End of explanation """ # Run openmc in plotting mode openmc.plot_geometry(output=False) # Convert OpenMC's funky ppm to png !convert materials-xy.ppm materials-xy.png # Display the materials plot inline Image(filename='materials-xy.png') """ Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility. End of explanation """ # Instantiate an empty Tallies object tallies = openmc.Tallies() """ Explanation: As we can see from the plot, we have a nice array of pin cells with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies. 
End of explanation """ # Instantiate a tally Mesh mesh = openmc.Mesh(mesh_id=1) mesh.type = 'regular' mesh.dimension = [17, 17] mesh.lower_left = [-10.71, -10.71] mesh.width = [1.26, 1.26] # Instantiate tally Filter mesh_filter = openmc.MeshFilter(mesh) # Instantiate energy Filter energy_filter = openmc.EnergyFilter([0, 0.625, 20.0e6]) # Instantiate the Tally tally = openmc.Tally(name='mesh tally') tally.filters = [mesh_filter, energy_filter] tally.scores = ['fission', 'nu-fission'] # Add mesh and Tally to Tallies tallies.append(tally) """ Explanation: Instantiate a fission rate mesh Tally End of explanation """ # Instantiate tally Filter cell_filter = openmc.CellFilter(fuel_cell) # Instantiate the tally tally = openmc.Tally(name='cell tally') tally.filters = [cell_filter] tally.scores = ['scatter'] tally.nuclides = ['U235', 'U238'] # Add mesh and tally to Tallies tallies.append(tally) """ Explanation: Instantiate a cell Tally with nuclides End of explanation """ # Instantiate tally Filter distribcell_filter = openmc.DistribcellFilter(moderator_cell) # Instantiate tally Trigger for kicks trigger = openmc.Trigger(trigger_type='std_dev', threshold=5e-5) trigger.scores = ['absorption'] # Instantiate the Tally tally = openmc.Tally(name='distribcell tally') tally.filters = [distribcell_filter] tally.scores = ['absorption', 'scatter'] tally.triggers = [trigger] # Add mesh and tally to Tallies tallies.append(tally) # Export to "tallies.xml" tallies.export_to_xml() """ Explanation: Create a "distribcell" Tally. The distribcell filter allows us to tally multiple repeated instances of the same cell throughout the geometry. End of explanation """ # Remove old HDF5 (summary, statepoint) files !rm statepoint.* # Run OpenMC! openmc.run() """ Explanation: Now we a have a complete set of inputs, so we can go ahead and run our simulation. 
End of explanation """ # We do not know how many batches were needed to satisfy the # tally trigger(s), so find the statepoint file(s) statepoints = glob.glob('statepoint.*.h5') # Load the last statepoint file sp = openmc.StatePoint(statepoints[-1]) """ Explanation: Tally Data Processing End of explanation """ # Find the mesh tally with the StatePoint API tally = sp.get_tally(name='mesh tally') # Print a little info about the mesh tally to the screen print(tally) """ Explanation: Analyze the mesh fission rate tally End of explanation """ # Get the relative error for the thermal fission reaction # rates in the four corner pins data = tally.get_values(scores=['fission'], filters=[openmc.MeshFilter, openmc.EnergyFilter], \ filter_bins=[((1,1),(1,17), (17,1), (17,17)), \ ((0., 0.625),)], value='rel_err') print(data) # Get a pandas dataframe for the mesh tally data df = tally.get_pandas_dataframe(nuclides=False) # Set the Pandas float display settings pd.options.display.float_format = '{:.2e}'.format # Print the first twenty rows in the dataframe df.head(20) # Create a boxplot to view the distribution of # fission and nu-fission rates in the pins bp = df.boxplot(column='mean', by='score') # Extract thermal nu-fission rates from pandas fiss = df[df['score'] == 'nu-fission'] fiss = fiss[fiss['energy low [eV]'] == 0.0] # Extract mean and reshape as 2D NumPy arrays mean = fiss['mean'].values.reshape((17,17)) plt.imshow(mean, interpolation='nearest') plt.title('fission rate') plt.xlabel('x') plt.ylabel('y') plt.colorbar() """ Explanation: Use the new Tally data retrieval API with pure NumPy End of explanation """ # Find the cell Tally with the StatePoint API tally = sp.get_tally(name='cell tally') # Print a little info about the cell tally to the screen print(tally) # Get a pandas dataframe for the cell tally data df = tally.get_pandas_dataframe() # Print the first twenty rows in the dataframe df.head(20) """ Explanation: Analyze the cell+nuclides scatter-y2 rate tally End 
of explanation """ # Get the standard deviations of the total scattering rate data = tally.get_values(scores=['scatter'], nuclides=['U238', 'U235'], value='std_dev') print(data) """ Explanation: Use the new Tally data retrieval API with pure NumPy End of explanation """ # Find the distribcell Tally with the StatePoint API tally = sp.get_tally(name='distribcell tally') # Print a little info about the distribcell tally to the screen print(tally) """ Explanation: Analyze the distribcell tally End of explanation """ # Get the relative error for the scattering reaction rates in # the first 10 distribcell instances data = tally.get_values(scores=['scatter'], filters=[openmc.DistribcellFilter], filter_bins=[tuple(range(10))], value='rel_err') print(data) """ Explanation: Use the new Tally data retrieval API with pure NumPy End of explanation """ # Get a pandas dataframe for the distribcell tally data df = tally.get_pandas_dataframe(nuclides=False) # Print the last twenty rows in the dataframe df.tail(20) # Show summary statistics for absorption distribcell tally data absorption = df[df['score'] == 'absorption'] absorption[['mean', 'std. dev.']].dropna().describe() # Note that the maximum standard deviation does indeed # meet the 5e-5 threshold set by the tally trigger """ Explanation: Print the distribcell tally dataframe End of explanation """ # Extract tally data from the pins divided along the y=-x diagonal multi_index = ('level 2', 'lat',) lower = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] < 16] upper = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] > 16] lower = lower[lower['score'] == 'absorption'] upper = upper[upper['score'] == 'absorption'] # Perform non-parametric Mann-Whitney U Test to see if the # absorption rates may come from the same sampling distribution u, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean']) print('Mann-Whitney Test p-value: {0}'.format(p)) """ Explanation: Perform a statistical test comparing the tally sample distributions for two categories of fuel pins. End of explanation """ # Extract tally data from the pins divided along the y=x diagonal multi_index = ('level 2', 'lat',) lower = df[df[multi_index + ('x',)] > df[multi_index + ('y',)]] upper = df[df[multi_index + ('x',)] < df[multi_index + ('y',)]] lower = lower[lower['score'] == 'absorption'] upper = upper[upper['score'] == 'absorption'] # Perform non-parametric Mann-Whitney U Test to see if the # absorption rates may come from the same sampling distribution u, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean']) print('Mann-Whitney Test p-value: {0}'.format(p)) """ Explanation: Note that the symmetry implied by the y=-x diagonal ensures that the two sampling distributions are identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would not reject the null hypothesis that the two sampling distributions are identical. Next, perform the same test but with two groupings of pins which are not symmetrically identical to one another.
End of explanation """ # Extract the scatter tally data from pandas scatter = df[df['score'] == 'scatter'] scatter['rel. err.'] = scatter['std. dev.'] / scatter['mean'] # Show a scatter plot of the mean vs. the std. dev. scatter.plot(kind='scatter', x='mean', y='rel. err.', title='Scattering Rates') # Plot a histogram and kernel density estimate for the scattering rates scatter['mean'].plot(kind='hist', bins=25) scatter['mean'].plot(kind='kde') plt.title('Scattering Rates') plt.xlabel('Mean') plt.legend(['KDE', 'Histogram']) """ Explanation: Note that the asymmetry implied by the y=x diagonal ensures that the two sampling distributions are not identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would reject the null hypothesis that the two sampling distributions are identical. End of explanation """
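The logic of the two Mann-Whitney comparisons above can be sanity-checked on synthetic data. This sketch is not part of the original notebook — the normal distributions, seed, and sample sizes are illustrative assumptions — but it shows the same scipy.stats.mannwhitneyu call distinguishing identical samples (the y=-x case) from systematically shifted ones (the y=x case):

```python
import numpy as np
import scipy.stats

rng = np.random.RandomState(2016)  # fixed seed so the run is reproducible

# Analogue of the symmetric y=-x split: both halves are drawn from the
# same distribution, so the null hypothesis should typically survive
lower = rng.normal(loc=1.0, scale=0.1, size=100)
upper = rng.normal(loc=1.0, scale=0.1, size=100)
u, p_same = scipy.stats.mannwhitneyu(lower, upper)
print('identical distributions p-value: {0}'.format(p_same))

# Analogue of the asymmetric y=x split: one half is clearly shifted,
# so the null hypothesis should be rejected at any reasonable level
shifted = rng.normal(loc=1.5, scale=0.1, size=100)
u, p_diff = scipy.stats.mannwhitneyu(lower, shifted)
print('shifted distribution p-value: {0}'.format(p_diff))
```

With the two groups separated by five standard deviations, p_diff comes out vanishingly small, while p_same ordinarily sits far above a 0.05 significance level.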
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/sandbox-2/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: PCMDI Source ID: SANDBOX-2 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:36 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. 
Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. 
Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. 
Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
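ENUM properties with cardinality 1.N, such as `dynamical_core.prognostic_variables` above, accept one or more entries drawn from the cell's "Valid Choices" comments. The sketch below illustrates one plausible convention — calling `set_value` once per selected item, with validation against the allowed list; both the per-item convention and the validation step are assumptions for illustration, not the real pyesdoc API:

```python
# Sketch of filling an ENUM property with cardinality 1.N (a list).
# The per-item set_value convention and the validation step are
# assumptions for illustration, not the real pyesdoc API.
VALID_CHOICES = {
    "surface pressure", "wind components", "temperature",
    "water vapour",  # abbreviated subset of the full "Valid Choices" list
}

class EnumPropertySketch:
    def __init__(self, valid_choices):
        self.valid_choices = valid_choices
        self.values = []  # cardinality 1.N: one or more selections

    def set_value(self, value):
        # Reject anything outside the cell's "Valid Choices" comments.
        if value not in self.valid_choices:
            raise ValueError("not a valid choice: %r" % (value,))
        self.values.append(value)

prop = EnumPropertySketch(VALID_CHOICES)
prop.set_value("surface pressure")
prop.set_value("temperature")
```

Validating against the listed choices up front catches spelling mismatches before the document is published.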
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4.
Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
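Note that the INTEGER property above (`spectral_intervals`) is set as `DOC.set_value(value)` with no quotes, unlike the quoted values in the STRING and ENUM cells. A small sketch of that distinction — again a stand-in helper, not the real pyesdoc API, with a hypothetical interval count:

```python
# Sketch (assumption, not the real pyesdoc API) highlighting that INTEGER
# properties such as spectral_intervals are set with an unquoted number,
# in contrast to the quoted values in the STRING and ENUM cells.
def set_integer_value(store, prop_id, value):
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("INTEGER property expects an int, got %r" % (value,))
    store[prop_id] = value

store = {}
set_integer_value(
    store,
    'cmip6.atmos.radiation.shortwave_radiation.spectral_intervals',
    6,  # hypothetical number of spectral intervals, illustration only
)
```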
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. 
General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Fluorinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5.
Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. 
Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --&gt; Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --&gt; Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1.
Run Configuration
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --&gt; Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
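Non-ENUM properties (BOOLEAN, INTEGER, FLOAT, STRING) take a bare Python literal rather than a quoted choice, which is why the cells above use `DOC.set_value(value)` without quotation marks. A small sketch of that mapping follows; the example literals are made-up placeholders for illustration, not settings for any real model:

```python
# Illustrative only -- the example literals below are placeholders,
# not real model settings. Each non-ENUM property type maps onto a
# Python type:
EXPECTED_TYPES = {
    "BOOLEAN": bool,   # e.g. DOC.set_value(True)
    "INTEGER": int,    # e.g. DOC.set_value(40)      (number_of_levels)
    "FLOAT": float,    # e.g. DOC.set_value(94.0e9)  (radar frequency, Hz)
    "STRING": str,     # e.g. DOC.set_value("free-text description")
}

def matches(property_type, value):
    """True when a candidate value has the Python type the property expects."""
    if property_type in ("INTEGER", "FLOAT") and isinstance(value, bool):
        return False  # bool is a subclass of int in Python; reject it here
    return isinstance(value, EXPECTED_TYPES[property_type])
```

The explicit `bool` guard matters because `isinstance(True, int)` is `True` in Python, so an accidental `True` would otherwise pass as an INTEGER answer.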
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. 
Calculation Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5. 
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. 
Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
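Each ENUM property above accepts only its listed valid choices. The toy stand-in below (a hypothetical ToyDoc class — not the real pre-configured pyesdoc DOC object) sketches how that kind of choice validation behaves when a value is filled in:

```python
class ToyDoc(object):
    """Toy stand-in for the notebook's DOC object (illustration only;
    the real object is provided by pyesdoc and is pre-configured)."""

    # Hypothetical subset of valid choices, copied from the property above
    VALID_CHOICES = {
        'cmip6.atmos.solar.solar_constant.type': {"fixed", "transient"},
    }

    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        choices = self.VALID_CHOICES.get(self._current_id)
        if choices is not None and value not in choices:
            raise ValueError("invalid choice for %s: %r" % (self._current_id, value))
        self.properties[self._current_id] = value

doc = ToyDoc()
doc.set_id('cmip6.atmos.solar.solar_constant.type')
doc.set_value("fixed")
```

The real DOC object performs equivalent validation against the controlled vocabulary listed in each cell's comments.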
jjdblast/RoadTrafficSimulator
experiments/report.ipynb
mit
data = pd.read_table("./1.data", sep=" ") plt.plot(data['multiplier'], data['avg_speed'], '-o') """ Explanation: Let's run the simulator with fixed values for the traffic light switching times. End of explanation """ data = pd.read_table("./2.data", sep=" ") plt.plot(data['it'], data['avg_speed'], '-o') """ Explanation: Now let's consider random values for the time between traffic light switches. End of explanation """ data = pd.read_table("./3.data", sep=" ") data = data.sort_values(by='it')  # DataFrame.sort was removed in newer pandas plt.plot(data['it'], data['avg_speed'], '-o') """ Explanation: Let's look at the effect of phaseOffset on the average speed. End of explanation """
buruzaemon/natto-py
notebooks/02_わかち書き.ipynb
bsd-2-clause
from natto import MeCab text = "卓球に人生かけるなんて、気味悪いです。" wakati = MeCab("-Owakati") """ Explanation: Wakati-gaki (わかち書き) Parsing with the -O option Using natto-py, you can produce wakati output — the words of a sentence separated by spaces at their boundaries. End of explanation """ wakati.parse(text) """ Explanation: Output as a string This is how to get the return value as a string by specifying wakati output with MeCab's -O option. Specify the output format as shown below when creating the MeCab instance. End of explanation """ [n.surface for n in wakati.parse(text, as_nodes=True) if n.is_nor()] """ Explanation: Output as a list of words A list comprehension holding each word is easier to handle and more convenient than a string. The example below passes as_nodes=True to parse() on the MeCab instance. Here, unknown morphemes and nodes such as the header (BOS) and footer (EOS) that sit at the start and end of the morphological analysis result are excluded from the output. End of explanation """
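For readers without a local MeCab installation, the node-filtering pattern above can be imitated with a small stand-in (the MockNode class below is hypothetical; real natto-py nodes expose the same surface attribute and is_nor() check):

```python
class MockNode(object):
    """Stand-in for a natto-py MeCabNode (illustration only)."""

    def __init__(self, surface, is_normal):
        self.surface = surface
        self._is_normal = is_normal

    def is_nor(self):
        # Real nodes return True only for normal morphemes,
        # excluding the BOS/EOS (header/footer) nodes.
        return self._is_normal

# BOS and EOS nodes carry empty surfaces and are filtered out
nodes = [MockNode('', False), MockNode('卓球', True),
         MockNode('に', True), MockNode('', False)]
words = [n.surface for n in nodes if n.is_nor()]
```

The list comprehension is identical in shape to the real `[n.surface for n in wakati.parse(text, as_nodes=True) if n.is_nor()]` call above.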
tensorflow/docs-l10n
site/ko/agents/tutorials/9_c51_tutorial.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TF-Agents Authors. End of explanation """ !sudo apt-get install -y xvfb ffmpeg !pip install 'gym==0.10.11' !pip install 'imageio==2.4.0' !pip install PILLOW !pip install 'pyglet==1.3.2' !pip install pyvirtualdisplay !pip install tf-agents from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.categorical_dqn import categorical_dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import categorical_q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common tf.compat.v1.enable_v2_behavior() # Set up a virtual display for rendering OpenAI gym environments. 
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() """ Explanation: DQN C51/Rainbow <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/agents/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> Introduction This example shows how to train a Categorical DQN (C51) agent on the Cartpole environment using the TF-Agents library. You should look through the DQN tutorial as a prerequisite; this tutorial assumes familiarity with the DQN tutorial and focuses mainly on the differences between DQN and C51. Setup If you haven't installed tf-agents yet, run: 
End of explanation """ env_name = "CartPole-v1" # @param {type:"string"} num_iterations = 15000 # @param {type:"integer"} initial_collect_steps = 1000 # @param {type:"integer"} collect_steps_per_iteration = 1 # @param {type:"integer"} replay_buffer_capacity = 100000 # @param {type:"integer"} fc_layer_params = (100,) batch_size = 64 # @param {type:"integer"} learning_rate = 1e-3 # @param {type:"number"} gamma = 0.99 log_interval = 200 # @param {type:"integer"} num_atoms = 51 # @param {type:"integer"} min_q_value = -20 # @param {type:"integer"} max_q_value = 20 # @param {type:"integer"} n_step_update = 2 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 1000 # @param {type:"integer"} """ Explanation: Hyperparameters End of explanation """ train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) """ Explanation: Environment As before, load one environment for training and one for evaluation. Here we use CartPole-v1 (the DQN tutorial used CartPole-v0), which has a larger maximum reward of 500 rather than 200. End of explanation """ categorical_q_net = categorical_q_network.CategoricalQNetwork( train_env.observation_spec(), train_env.action_spec(), num_atoms=num_atoms, fc_layer_params=fc_layer_params) """ Explanation: Agent C51 is a Q-learning algorithm based on DQN, and like DQN it can be used on any environment with a discrete action space. The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value. By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, which can improve final performance. This is particularly true in situations with bimodal or multimodal value distributions, where a single mean does not give an accurate picture. In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations to calculate its loss function. But don't worry — all of this is taken care of in TF-Agents! To create a C51 agent, we first need to create a CategoricalQNetwork. The API of CategoricalQNetwork is the same as that of QNetwork, except for the additional argument num_atoms. This represents the number of support points in our probability distribution estimates. (The image above includes 10 support points, each shown as a vertical blue bar.) As the name implies, the default number of atoms is 51. 
End of explanation """ optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.compat.v2.Variable(0) agent = categorical_dqn_agent.CategoricalDqnAgent( train_env.time_step_spec(), train_env.action_spec(), categorical_q_network=categorical_q_net, optimizer=optimizer, min_q_value=min_q_value, max_q_value=max_q_value, n_step_update=n_step_update, td_errors_loss_fn=common.element_wise_squared_loss, gamma=gamma, train_step_counter=train_step_counter) agent.initialize() """ Explanation: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network has been updated. Another significant difference from the vanilla DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). They should be chosen appropriately for the particular environment. Here we use -20 and 20. End of explanation """ #@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) compute_avg_return(eval_env, random_policy, num_eval_episodes) # Please also see the metrics module for standard implementations of different # metrics. """ Explanation: One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as: $G_t = R_{t + 1} + \gamma V(s_{t + 1})$ where we define $V(s) = \max_a{Q(s, a)}$. N-step updates involve expanding the standard single-step return function $n$ times: 
$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$ N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning. Although C51 and n-step updates are often combined with prioritized replay to form the core of the Rainbow agent, we saw no measurable improvement from implementing prioritized replay. Moreover, when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested. Metrics and Evaluation The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows. End of explanation """ #@test {"skip": true} replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity) def collect_step(environment, policy): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer replay_buffer.add_batch(traj) for _ in range(initial_collect_steps): collect_step(train_env, random_policy) # This loop is so common in RL, that we provide standard implementations of # these. For more details see the drivers module. # Dataset generates trajectories with shape [BxTx...] where # T = n_step_update + 1. dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=n_step_update + 1).prefetch(3) iterator = iter(dataset) """ Explanation: Data Collection As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy. End of explanation """ #@test {"skip": true} try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. agent.train = common.function(agent.train) # Reset the train step agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. 
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few steps using collect_policy and save to the replay buffer. for _ in range(collect_steps_per_iteration): collect_step(train_env, agent.collect_policy) # Sample a batch of data from the buffer and update the agent's network. experience, unused_info = next(iterator) train_loss = agent.train(experience) step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return)) returns.append(avg_return) """ Explanation: Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we evaluate the agent's policy to see how we are doing. The following takes about 7 minutes to run. End of explanation """ #@test {"skip": true} steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=550) """ Explanation: Visualization Plots We can plot return vs. global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500. End of explanation """ def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) """ Explanation: Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this Colab. 
End of explanation """ num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename) """ Explanation: The following code visualizes the agent's policy for a few episodes. End of explanation """
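The n-step return used by the agent above can be sketched in plain Python (a minimal illustration independent of TF-Agents; the function name is ours):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """G_t^n = R_{t+1} + gamma*R_{t+2} + ... + gamma^n * V(s_{t+n}),
    computed with the usual backward recursion."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Two rewards of 1.0, bootstrapping from V(s_{t+2}) = 10.0 with gamma = 0.5:
# 1.0 + 0.5 * 1.0 + 0.25 * 10.0 = 4.0
print(n_step_return([1.0, 1.0], 10.0, gamma=0.5))
```

Inside the CategoricalDqnAgent this computation is performed distributionally over the 51 atoms rather than on scalar values.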
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Revisiting alexnet based experiment (small).ipynb
mit
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600. fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(111) ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record) ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record) ax1.set_xlabel('Epochs') ax1.legend(['Valid', 'Train']) ax1.set_ylabel('NLL') ax1.set_ylim(0., 5.) ax1.grid(True) ax2 = ax1.twiny() ax2.set_xticks(np.arange(0,tr.shape[0],20)) ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]]) ax2.set_xlabel('Hours') print(model.yaml_src) pv = get_weights_report(model=model) img = pv.get_img() img = img.resize((4*img.size[0], 4*img.size[1])) img_data = io.BytesIO() img.save(img_data, format='png') display(Image(data=img_data.getvalue(), format='png')) plt.plot(model.monitor.channels['learning_rate'].val_record) """ Explanation: Plot train and valid set NLL End of explanation """ h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record]) h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record]) plt.plot(h1_W_norms / h1_W_up_norms) plt.ylim(0,1000) plt.show() plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record) h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record]) h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record]) plt.plot(h2_W_norms / h2_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record) h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record]) h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record]) plt.plot(h3_W_norms / 
h3_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record) h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_kernel_norm_mean'].val_record]) h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_kernel_norms_mean'].val_record]) plt.plot(h4_W_norms / h4_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h4_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h4_kernel_norms_max'].val_record) h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_kernel_norm_mean'].val_record]) h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_kernel_norms_mean'].val_record]) plt.plot(h5_W_norms / h5_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h5_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h5_kernel_norms_max'].val_record) h6_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h6_W_col_norm_mean'].val_record]) h6_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h6_col_norms_mean'].val_record]) plt.plot(h6_W_norms / h6_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h6_col_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h6_col_norms_max'].val_record) y_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_softmax_W_col_norm_mean'].val_record]) y_W_norms = np.array([float(v) for v in model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record]) plt.plot(y_W_norms / y_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_y_y_1_col_norms_max'].val_record) """ Explanation: Plot ratio of update norms to parameter norms across epochs for different layers End of explanation """
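The quantity plotted throughout this notebook — the parameter norm divided by the mean update norm — can be computed for any pair of monitor channels; here is a minimal NumPy sketch with made-up values (the function name is ours):

```python
import numpy as np

def norm_ratio(param_norms, update_norms):
    """Per-epoch ratio of parameter norm to mean update norm;
    large values mean each update barely moves the weights."""
    p = np.asarray(param_norms, dtype=float)
    u = np.asarray(update_norms, dtype=float)
    return p / u

ratios = norm_ratio([2.0, 4.0], [0.01, 0.02])  # -> [200., 200.]
```

A ratio in the hundreds-to-thousands range is commonly cited as a sign of a reasonable learning rate; very large ratios suggest learning has effectively stalled.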
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Object Oriented Programming-checkpoint.ipynb
apache-2.0
l = [1,2,3] """ Explanation: Object Oriented Programming Object Oriented Programming (OOP) tends to be one of the major obstacles for beginners when they are first starting to learn Python. There are many, many tutorials and lessons covering OOP so feel free to Google search other lessons, and I have also put some links to other useful tutorials online at the bottom of this Notebook. For this lesson we will construct our knowledge of OOP in Python by building on the following topics: Objects Using the class keyword Creating class attributes Creating methods in a class Learning about Inheritance Learning about Special Methods for classes Let's start the lesson by reviewing the basic Python objects. For example: End of explanation """ l.count(2) """ Explanation: Remember how we could call methods on a list? End of explanation """ print type(1) print type([]) print type(()) print type({}) """ Explanation: What we will basically be doing in this lecture is exploring how we could create an Object type like a list. We've already learned about how to create functions. So let's explore Objects in general: Objects In Python, everything is an object. Remember from previous lectures we can use type() to check the type of object something is: End of explanation """ # Create a new object type called Sample class Sample(object): pass # Instance of Sample x = Sample() print type(x) """ Explanation: So we know all these things are objects, so how can we create our own Object types? That is where the class keyword comes in. class The user defined objects are created using the class keyword. The class is a blueprint that defines the nature of a future object. From classes we can construct instances. An instance is a specific object created from a particular class. For example, above we created the object 'l' which was an instance of a list object. 
Let's see how we can use class: End of explanation """ class Dog(object): def __init__(self,breed): self.breed = breed sam = Dog(breed='Lab') frank = Dog(breed='Huskie') """ Explanation: By convention we give classes a name that starts with a capital letter. Note how x is now the reference to our new instance of a Sample class. In other words, we instantiate the Sample class. Inside of the class we currently just have pass. But we can define class attributes and methods. An attribute is a characteristic of an object. A method is an operation we can perform with the object. For example, we can create a class called Dog. An attribute of a dog may be its breed or its name, while a method of a dog may be defined by a .bark() method which returns a sound. Let's get a better understanding of attributes through an example. Attributes The syntax for creating an attribute is: self.attribute = something There is a special method called: __init__() This method is used to initialize the attributes of an object. For example: End of explanation """ sam.breed frank.breed """ Explanation: Let's break down what we have above. The special method __init__() is called automatically right after the object has been created: def __init__(self, breed): Each attribute in a class definition begins with a reference to the instance object. It is by convention named self. The breed is the argument. The value is passed during the class instantiation. self.breed = breed Now we have created two instances of the Dog class. With two breed types, we can then access these attributes like this: End of explanation """ 
These Class Object Attributes are the same for any instance of the class. For example, we could create the attribute species for the Dog class. Dogs (regardless of their breed, name, or other attributes) will always be mammals. We apply this logic in the following manner: End of explanation """ class Dog(object): # Class Object Attribute species = 'mammal' def __init__(self,breed,name): self.breed = breed self.name = name sam = Dog('Lab','Sam') sam.name """ Explanation: Note that the Class Object Attribute is defined outside of any methods in the class. Also by convention, we place them first before the init. End of explanation """ sam.species """ Explanation: Methods Methods are functions defined inside the body of a class. They are used to perform operations with the attributes of our objects. Methods are essential to the encapsulation concept of the OOP paradigm. This is essential in dividing responsibilities in programming, especially in large applications. You can basically think of methods as functions acting on an Object that take the Object itself into account through its self argument. Let's go through an example of creating a Circle class: End of explanation """ class Circle(object): pi = 3.14 # Circle gets instantiated with a radius (default is 1) def __init__(self, radius=1): self.radius = radius # Area method calculates the area. Note the use of self. def area(self): return self.radius * self.radius * Circle.pi # Method for resetting Radius def setRadius(self, radius): self.radius = radius # Method for getting radius (Same as just calling .radius) def getRadius(self): return self.radius c = Circle() c.setRadius(2) print 'Radius is: ',c.getRadius() print 'Area is: ',c.area() """ Explanation: Great! Notice how we used self. notation to reference attributes of the class within the method calls. 
Review how the code above works, and try creating your own method. Inheritance Inheritance is a way to form new classes using classes that have already been defined. The newly formed classes are called derived classes; the classes that we derive from are called base classes. Important benefits of inheritance are code reuse and reduction of complexity of a program. The derived classes (descendants) override or extend the functionality of base classes (ancestors). Let's see an example by incorporating our previous work on the Dog class: End of explanation """ class Animal(object): def __init__(self): print "Animal created" def whoAmI(self): print "Animal" def eat(self): print "Eating" class Dog(Animal): def __init__(self): Animal.__init__(self) print "Dog created" def whoAmI(self): print "Dog" def bark(self): print "Woof!" d = Dog() d.whoAmI() d.eat() d.bark() """ Explanation: In this example, we have two classes: Animal and Dog. The Animal is the base class, the Dog is the derived class. The derived class inherits the functionality of the base class. This is shown by the eat() method. The derived class modifies existing behaviour of the base class, as shown by the whoAmI() method. Finally, the derived class extends the functionality of the base class by defining a new bark() method. Special Methods Finally, let's go over special methods. Classes in Python can implement certain operations with special method names. These methods are not actually called directly but are invoked by Python-specific language syntax. For example, let's create a Book class: End of explanation """ class Book(object): def __init__(self, title, author, pages): print "A book is created" self.title = title self.author = author self.pages = pages def __str__(self): return "Title:%s , author:%s, pages:%s " %(self.title, self.author, self.pages) def __len__(self): return self.pages def __del__(self): print "A book is destroyed" book = Book("Python Rocks!", "Jose Portilla", 159) #Special Methods print book print len(book) del book """ Explanation: ...
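As one more illustration of special methods (a hypothetical Vector class, not part of the lesson above), operators such as + and == can be defined the same way:

```python
class Vector(object):
    """Toy 2D vector demonstrating operator-overloading special methods."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # Called by the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __eq__(self, other):
        # Called by the == operator
        return self.x == other.x and self.y == other.y

    def __repr__(self):
        return "Vector(%s, %s)" % (self.x, self.y)

v = Vector(1, 2) + Vector(3, 4)
print(v)  # Vector(4, 6)
```

Just like __str__, __len__, and __del__ in the Book class, these methods are never called directly; Python invokes them when the corresponding syntax is used.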
weleen/mxnet
example/notebooks/moved-from-mxnet/cifar10-recipe.ipynb
apache-2.0
import mxnet as mx import logging import numpy as np # setup logging logger = logging.getLogger() logger.setLevel(logging.DEBUG) """ Explanation: CIFAR-10 Recipe In this notebook, we will show how to train a state-of-the-art CIFAR-10 network with MXNet and extract features from the network. This example will cover Network/Data definition Multi GPU training Model saving and loading Prediction/Extracting Feature End of explanation """ # Basic Conv + BN + ReLU factory def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), act_type="relu"): # there is an optional parameter ```workspace``` that may influence convolution performance # by default, the workspace is set to 256(MB) # you may set a larger value, but the convolution layer will only use as much as it needs # MXNet will handle reuse of workspace without parallelism conflict conv = mx.symbol.Convolution(data=data, workspace=256, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad) bn = mx.symbol.BatchNorm(data=conv) act = mx.symbol.Activation(data = bn, act_type=act_type) return act # A Simple Downsampling Factory def DownsampleFactory(data, ch_3x3): # conv 3x3 conv = ConvFactory(data=data, kernel=(3, 3), stride=(2, 2), num_filter=ch_3x3, pad=(1, 1)) # pool pool = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pad=(1,1), pool_type='max') # concat concat = mx.symbol.Concat(*[conv, pool]) return concat # A Simple module def SimpleFactory(data, ch_1x1, ch_3x3): # 1x1 conv1x1 = ConvFactory(data=data, kernel=(1, 1), pad=(0, 0), num_filter=ch_1x1) # 3x3 conv3x3 = ConvFactory(data=data, kernel=(3, 3), pad=(1, 1), num_filter=ch_3x3) #concat concat = mx.symbol.Concat(*[conv1x1, conv3x3]) return concat """ Explanation: First, let's make some helper functions to let us build a simplified Inception Network. 
More details about how to composite symbols into components can be found at composite_symbol
End of explanation
"""

data = mx.symbol.Variable(name="data")
conv1 = ConvFactory(data=data, kernel=(3,3), pad=(1,1), num_filter=96, act_type="relu")
in3a = SimpleFactory(conv1, 32, 32)
in3b = SimpleFactory(in3a, 32, 48)
in3c = DownsampleFactory(in3b, 80)
in4a = SimpleFactory(in3c, 112, 48)
in4b = SimpleFactory(in4a, 96, 64)
in4c = SimpleFactory(in4b, 80, 80)
in4d = SimpleFactory(in4c, 48, 96)
in4e = DownsampleFactory(in4d, 96)
in5a = SimpleFactory(in4e, 176, 160)
in5b = SimpleFactory(in5a, 176, 160)
pool = mx.symbol.Pooling(data=in5b, pool_type="avg", kernel=(7,7), name="global_avg")
flatten = mx.symbol.Flatten(data=pool)
fc = mx.symbol.FullyConnected(data=flatten, num_hidden=10)
softmax = mx.symbol.SoftmaxOutput(name='softmax',data=fc)

# If you'd like to see the network structure, run the plot_network function
#mx.viz.plot_network(symbol=softmax,node_attrs={'shape':'oval','fixedsize':'false'})

# We will make a model with the current symbol
# For demo purposes, this model only trains for 1 epoch
# We will use the first GPU to do training
num_epoch = 1
model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
                             learning_rate=0.05, momentum=0.9, wd=0.00001)

# we can add a learning rate scheduler to the model
# model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
#                              learning_rate=0.05, momentum=0.9, wd=0.00001,
#                              lr_scheduler=mx.misc.FactorScheduler(2))
# In this example.
learning rate will be reduced to 0.1 * the previous learning rate every 2 epochs
"""
Explanation: Now we can build a network with these component factories
End of explanation
"""

# num_devs = 4
# model = mx.model.FeedForward(ctx=[mx.gpu(i) for i in range(num_devs)], symbol=softmax, num_epoch = 1,
#                              learning_rate=0.05, momentum=0.9, wd=0.00001)
"""
Explanation: If we have multiple GPUs, for example, 4 GPUs, we can utilize them without any difficulty
End of explanation
"""

# Use the utility function in tests to download the data
# or manually prepare it
import sys
sys.path.append("../../tests/python/common") # change the path to mxnet's tests/
import get_data
get_data.GetCifar10()

# After we get the data, we can declare our data iterator
# The iterator will automatically create the mean image file if it doesn't exist
batch_size = 128
total_batch = 50000 / 128 + 1

# The train iterator makes batches of 128 images, and randomly crops each image into 3x28x28 from the original 3x32x32
train_dataiter = mx.io.ImageRecordIter(
        shuffle=True,
        path_imgrec="data/cifar/train.rec",
        mean_img="data/cifar/cifar_mean.bin",
        rand_crop=True,
        rand_mirror=True,
        data_shape=(3,28,28),
        batch_size=batch_size,
        preprocess_threads=1)
# The test iterator makes batches of 128 images, and center crops each image into 3x28x28 from the original 3x32x32
# Note: We don't need round_batch in test because we only test once at a time
test_dataiter = mx.io.ImageRecordIter(
        path_imgrec="data/cifar/test.rec",
        mean_img="data/cifar/cifar_mean.bin",
        rand_crop=False,
        rand_mirror=False,
        data_shape=(3,28,28),
        batch_size=batch_size,
        round_batch=False,
        preprocess_threads=1)

"""
Explanation: The next step is declaring the data iterator. The original CIFAR-10 data is 3x32x32 in binary format; we provide a RecordIO format, so we can use the Image RecordIO iterator. For more information about the Image RecordIO iterator, check the document.
End of explanation
"""

model.fit(X=train_dataiter,
          eval_data=test_dataiter,
          eval_metric="accuracy",
          batch_end_callback=mx.callback.Speedometer(batch_size))

# if we want to save the model after every epoch, we can add a checkpoint callback
# model_prefix = './cifar_'
# model.fit(X=train_dataiter,
#           eval_data=test_dataiter,
#           eval_metric="accuracy",
#           batch_end_callback=mx.helper.Speedometer(batch_size),
#           epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
"""
Explanation: Now we can fit the model with data.
End of explanation
"""

# using pickle
import pickle
smodel = pickle.dumps(model)
# using save (recommended)
# We get the benefit of being able to directly load/save from cloud storage (S3, HDFS)
prefix = "cifar10"
model.save(prefix)

"""
Explanation: After only 1 epoch, our model is able to achieve about 65% accuracy on the test set (if not, try a few more times). We can save our model by calling either save or using pickle.
End of explanation
"""

# use pickle
model2 = pickle.loads(smodel)
# using the load method (able to load from S3/HDFS directly)
model3 = mx.model.FeedForward.load(prefix, num_epoch, ctx=mx.gpu())

"""
Explanation: To load a saved model, you can use pickle if the model was generated by pickle, or use load if it was generated by save
End of explanation
"""

prob = model3.predict(test_dataiter)
logging.info('Finish predict...')
# Check the accuracy from prediction
test_dataiter.reset()

# get label
# Because the iterator pads each batch to the same shape, we want to remove the padded samples here
y_batch = []
for dbatch in test_dataiter:
    label = dbatch.label[0].asnumpy()
    pad = test_dataiter.getpad()
    real_size = label.shape[0] - pad
    y_batch.append(label[0:real_size])
y = np.concatenate(y_batch)

# get the predicted label from prob
py = np.argmax(prob, axis=1)
acc1 = float(np.sum(py == y)) / len(y)
logging.info('final accuracy = %f', acc1)

"""
Explanation: We can use the model to do prediction
End of explanation
"""

# Predict internal featuremaps
# From a symbol, we are able to get all
internals. Note it is still a symbol
internals = softmax.get_internals()

# We can get an internal symbol for the feature.
# By default, the symbol is named as "symbol_name + _output"
# in this case we'd like to get the "global_avg" layer's output as the feature, so it's "global_avg_output"
# You may call ```internals.list_outputs()``` to find the target
# but we strongly suggest setting a special name for important symbols
fea_symbol = internals["global_avg_output"]

# Make a new model by using an internal symbol. We can reuse all parameters from the model we trained before
# In this case, we must set ```allow_extra_params``` to True
# Because we don't need the params of the FullyConnected layer
feature_extractor = mx.model.FeedForward(ctx=mx.gpu(), symbol=fea_symbol,
                                         arg_params=model.arg_params,
                                         aux_params=model.aux_params,
                                         allow_extra_params=True)
# Predict as normal
global_pooling_feature = feature_extractor.predict(test_dataiter)
print(global_pooling_feature.shape)

"""
Explanation: From any symbol, we are able to know its internal feature maps and bind a new model to extract that feature map
End of explanation
"""
AaronCWong/phys202-2015-work
assignments/assignment04/MatplotlibExercises.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

"""
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
"""

x = np.random.randn(100)
y = np.random.randn(100)
plt.scatter(x,y, s = 20, c = 'b')
plt.xlabel('Random Number 2')
plt.ylabel('Random Number')
plt.title('Random 2d Scatter Plot')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
plt.tight_layout()

"""
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
"""

x = np.random.randn(10)
plt.hist(x,4)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title('Random Histogram Bins')

"""
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation
"""
locie/locie_notebook
base_python/multiprocessing.ipynb
lgpl-3.0
import multiprocessing as mp
from time import sleep

def a_long_running_function(time):
    sleep(time)
    return time

# These lines are not blocking
process = mp.Process(target=a_long_running_function, args=(10, ))
process.start()
print(f"before join, process.is_alive: {process.is_alive()}")
# This one will block until `a_long_running_function` is over
process.join()
print(f"after join, process.is_alive: {process.is_alive()}")

# These lines are not blocking
class ALongRunningProcess(mp.Process):
    def __init__(self, *args):
        super().__init__()
        self._args = args

    def run(self):
        a_long_running_function(*self._args)

process = ALongRunningProcess(10)
process.start()
print(f"before join, process.is_alive: {process.is_alive()}")
# This one will block until `a_long_running_function` is over
process.join()
print(f"after join, process.is_alive: {process.is_alive()}")

"""
Explanation: Unlock the power of your computer with multiprocessing computation
Process, threads?
Threads are sequences of programmed instructions that can be managed independently by the OS. They share the memory space (so we have to be careful and ensure thread safety, in order to avoid two threads writing to the same location at the same time), and they are the common way to deal with asynchronous computation and to improve performance when the machine has more than one CPU.
Processes are instances of a computer program being executed. They do not share memory and require special objects to share information, such as queues, shared objects, pipes, semaphores... They are heavier than threads, but safer due to the lack of a common memory space.
Multiprocessing / threading in python
Due to the GIL (Global Interpreter Lock), multiple threads cannot execute Python bytecode at once. This reduces the usefulness of threading: only functions that release the GIL can run at the same time.
This is the case for I/O operations (web protocol requests such as HTTP/FTP, on-disk reading / writing), and for most numpy operations that rely on C routines. That's why, in the Python ecosystem, multiprocessing is preferred over threading.
NB: some attempts have been made to take the GIL out of CPython, and they led to drastic performance regressions (more info).
Python has a nice standard library that allows multiprocessing computation. It's called [multiprocessing].
Good to know: the library [threading] is for multi-threaded computation and has a very similar API, even if the tendency is to use asyncio-based libraries to deal with I/O operations.
Some libraries provide abstractions that help with multiprocessing computation, such as joblib (part of the sklearn ecosystem), [concurrent.futures] (in the stdlib, allows a future-based API), [distributed] (part of the dask ecosystem, allows local and distributed computation that can live on other computers) and so on... I will focus on the stdlib [multiprocessing] first, then say a few words about the other tools.
NB: a nice tool called [joblib] can be used to provide a unified way to do embarrassingly parallel computation. I will also have a word on this tool at the end.
Multiprocessing usage
Process object
A Process object represents a process that can be started and will run a function. It can be initialized in two different ways:
directly, by passing a target function that will be run by the process
by writing a child class with a run method.
The latter is useful for complex cases.
Process objects have some useful methods. Some of them:
start() will start the process. This is a non-blocking method
join() will wait until the process finishes its job.
terminate() will send a SIGTERM to the process: it will be gently terminated.
is_alive() returns True if the process is alive, False otherwise
For example, these two snippets do exactly the same thing:
End of explanation
"""

with mp.Pool() as p:
    # A future is a result that we expect.
    future = p.apply_async(a_long_running_function, args=(5, ))
    print(f"future object: {future}")

    # We have to use the get method: otherwise,
    # the pool will be closed before we obtain the result
    # We can use the wait method too: in that case, the result is not returned
    result = future.get()
    print(f"future.get(): {result}")

    # map allows running the function multiple times over a range of inputs,
    # then returns the results as a list. It can be blocking or not.
    # If it's async, it will return a MapResult, an equivalent of the future for
    # multiple results.
    results = p.map(a_long_running_function, [5] * mp.cpu_count())
    print(f"results: {results}")

    futures = p.map_async(a_long_running_function, [5] * mp.cpu_count())
    print(f"futures: {futures}")
    print(f"futures.get: {futures.get()}")

"""
Explanation: Pool object
Often, we do not want a complex workflow with a lot of different processes sharing information. We just want N independent computations of the same function with different inputs. In that case, managing the processes ourselves can be painful, even more so considering that we should avoid restarting a process each time because it adds some overhead. Enter the Pool object: it's a pool of N processes (often the same number as the machine's CPUs) that can be fed with tasks (function and input), one by one or with a range of parameters. That way:
End of explanation
"""
ocean-color-ac-challenge/evaluate-pearson
evaluation-participant-c.ipynb
apache-2.0
w_412 = 0.56 w_443 = 0.73 w_490 = 0.71 w_510 = 0.36 w_560 = 0.01 """ Explanation: E-CEO Challenge #3 Evaluation Weights Define the weight of each wavelength End of explanation """ run_id = '0000000-150630000034908-oozie-oozi-W' run_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-c/0000000-150630000034908-oozie-oozi-W/results.metalink?' participant = 'participant-c' """ Explanation: Run Provide the run information: * run id * run metalink containing the 3 by 3 kernel extractions * participant End of explanation """ import glob import pandas as pd from scipy.stats.stats import pearsonr import numpy import math """ Explanation: Define all imports in a single cell End of explanation """ !curl $run_meta | aria2c -d $participant -M - path = participant # use your path allFiles = glob.glob(path + "/*.txt") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) frame = pd.concat(list_) """ Explanation: Manage run results Download the results and aggregate them in a single Pandas dataframe End of explanation """ len(frame.index) """ Explanation: Number of points extracted from MERIS level 2 products End of explanation """ insitu_path = './insitu/AAOT.csv' insitu = pd.read_csv(insitu_path) frame_full = pd.DataFrame.merge(frame.query('Name == "AAOT"'), insitu, how='inner', on = ['Date', 'ORBIT']) frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna() r_aaot_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @412") frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna() r_aaot_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @443") frame_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna() r_aaot_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @490") 
r_aaot_510 = 0
print("0 observations for band @510")

frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_aaot_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")

insitu_path = './insitu/BOUSS.csv'
insitu = pd.read_csv(insitu_path)

frame_full = pd.DataFrame.merge(frame.query('Name == "BOUS"'), insitu, how='inner', on = ['Date', 'ORBIT'])

frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_bous_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")

frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_bous_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")

frame_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_bous_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")

frame_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_bous_510 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @510")

frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_bous_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")

insitu_path = './insitu/MOBY.csv'
insitu = pd.read_csv(insitu_path)

frame_full = pd.DataFrame.merge(frame.query('Name == "MOBY"'), insitu, how='inner', on = ['Date', 'ORBIT'])

frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_moby_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")

frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_moby_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")

frame_xxx=
frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna() r_moby_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @490") frame_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna() r_moby_510 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @510") frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna() r_moby_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] print(str(len(frame_xxx.index)) + " observations for band @560") [r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560] [r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560] [r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560] r_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \ + numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \ + numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \ + numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \ + numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \ / (w_412 + w_443 + w_490 + w_510 + w_560) r_final """ Explanation: Calculate Pearson For all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band. Note AAOT does not have measurements for band @510 AAOT site End of explanation """
DJCordhose/ai
notebooks/workshops/d2d/nn-intro.ipynb
mit
import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
%pylab inline

import matplotlib.pylab as plt
import numpy as np

from distutils.version import StrictVersion

import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')

import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')

import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')

"""
Explanation: Introduction to Neural Networks
End of explanation
"""

%load https://djcordhose.github.io/ai/fragments/neuron.py

"""
Explanation: Iris with Neural Networks
The Artificial Neuron
Hands-On
Create a Python implementation of a neuron with two input variables and no activation function
Make up values for w1, w2 and the bias
Can you sketch the graph of the function with x1 and x2 on the axes? What kind of function is it?
End of explanation
"""

from sklearn.datasets import load_iris
iris = load_iris()

iris.data[0]

neuron_no_activation(5.1, 3.5)

"""
Explanation: We try our model with the Iris dataset
End of explanation
"""

def centerAxis(uses_negative=False):
    # http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
    ax = plt.gca()
    ax.spines['left'].set_position('center')
    if uses_negative:
        ax.spines['bottom'].set_position('center')
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')

"""
Explanation: How should we interpret this?
We can't do much with this
Activation Functions
End of explanation
"""

def np_sigmoid(X):
    return 1 / (1 + np.exp(X * -1))

x = np.arange(-10,10,0.01)
y = np_sigmoid(x)
centerAxis()
plt.plot(x,y,lw=3)

"""
Explanation: Sigmoid
End of explanation
"""

def np_relu(x):
    return np.maximum(0, x)

x = np.arange(-10, 10, 0.01)
y = np_relu(x)
centerAxis()
plt.plot(x,y,lw=3)

"""
Explanation: Relu
End of explanation
"""

w0 = 3
w1 = -4
w2 = 2

import math as math

def sigmoid(x):
    return 1 / (1 + math.exp(x * -1))

def neuron(x1, x2):
    sum = w0 + x1 * w1 + x2 * w2
    return sigmoid(sum)

neuron(5.1, 3.5)

"""
Explanation: The complete neuron
End of explanation
"""

from keras.layers import Input
inputs = Input(shape=(4, ))

from keras.layers import Dense
fc = Dense(3)(inputs)

from keras.models import Model
model = Model(input=inputs, output=fc)
model.summary()

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))

"""
Explanation: Our first neural network with Keras
End of explanation
"""

inputs = Input(shape=(4, ))
fc = Dense(3)(inputs)
predictions = Dense(3, activation='softmax')(fc)

model = Model(input=inputs, output=predictions)
model.summary()

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))

"""
Explanation: 
End of explanation
"""

X = np.array(iris.data)
y = np.array(iris.target)
X.shape, y.shape

y[100]

# tiny little pieces of feature engineering
from keras.utils.np_utils import to_categorical

num_categories = 3

y = to_categorical(y, num_categories)

y[100]

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y)

X_train.shape, X_test.shape, y_train.shape, y_test.shape

# !rm -r tf_log
# tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# https://keras.io/callbacks/#tensorboard
# To start tensorboard
#
tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log
# open http://localhost:6006

# %time model.fit(X_train, y_train, epochs=500, validation_split=0.3, callbacks=[tb_callback])
%time model.fit(X_train, y_train, epochs=500, validation_split=0.3)

"""
Explanation: Training
End of explanation
"""

model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))

X[0], y[0]

train_loss, train_accuracy = model.evaluate(X_train, y_train)
train_loss, train_accuracy

test_loss, test_accuracy = model.evaluate(X_test, y_test)
test_loss, test_accuracy

"""
Explanation: Evaluation
End of explanation
"""

model.save('nn-iris.hdf5')

"""
Explanation: Hands-On
Work through the notebook up to this point and play with some of the parameters
Vary the number of neurons in the hidden layer. Why does this work at all with 3 neurons?
Add another layer
Can you sketch the graph of the function with x1 and x2 on the axes? What kind of function is it?
Stop Here
Optional part
Saving the model in Keras and TensorFlow formats
End of explanation
"""

import os
from keras import backend as K

K.set_learning_phase(0)
sess = K.get_session()

!rm -r tf

tf.app.flags.DEFINE_integer('model_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_string('work_dir', '/tmp', 'Working directory.')
FLAGS = tf.app.flags.FLAGS

export_path_base = 'tf'
export_path = os.path.join(
      tf.compat.as_bytes(export_path_base),
      tf.compat.as_bytes(str(FLAGS.model_version)))

classification_inputs = tf.saved_model.utils.build_tensor_info(model.input)
classification_outputs_scores = tf.saved_model.utils.build_tensor_info(model.output)

from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def

signature = predict_signature_def(inputs={'inputs': model.input},
                                  outputs={'scores': model.output})

builder = tf.saved_model.builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess, tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    })
builder.save()

!ls -lhR tf

"""
Explanation: Export as raw tf model
https://tensorflow.github.io/serving/serving_basic.html
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py
End of explanation
"""

# cd tf
# gsutil cp -R 1 gs://irisnn
# create model and version at https://console.cloud.google.com/mlengine
# gcloud ml-engine predict --model=irisnn --json-instances=./sample_iris.json
# SCORES
# [0.9954029321670532, 0.004596732556819916, 3.3544753819114703e-07]

"""
Explanation: This TensorFlow model can be uploaded to Google Cloud ML and used for predictions
End of explanation
"""
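A quick check on the SCORES line above: the scores are the softmax class probabilities for the sample, so the predicted class is their argmax (values copied from the output above; mapping class 0 to setosa follows sklearn's iris label encoding):

```python
import numpy as np

# class probabilities returned by the Cloud ML prediction above
scores = np.array([0.9954029321670532, 0.004596732556819916, 3.3544753819114703e-07])

# softmax outputs sum to ~1; the predicted class is the index of the largest score
predicted_class = int(np.argmax(scores))
print(predicted_class)  # 0 -> iris setosa, matching the sample [5.1, 3.5, 1.4, 0.2]
```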
kaysg/NLPatelier
libexp_spaCy/libexp_spaCy.ipynb
gpl-3.0
import spacy nlp = spacy.load('en') text = u"We are living in Singapore.\nIt's blazing outside today!\n" doc = nlp(text) for token in doc: print((token.text, token.lemma, token.tag, token.pos)) for token in doc: print((token.text, token.lemma_, token.tag_, token.pos_)) # lemma means *root form* """ Explanation: Library Exploration: spaCy Parsing End of explanation """ #https://spacy.io/docs/api/token doc_ps = nlp("Mr.Sakamoto told us the Dragon Fruits was very yummy!") #for t in doc: t = doc_ps[2] print("token:",t) print("vocab (The vocab object of the parent Doc):", t.vocab) print("doc (The parent document.):", t.doc) print("i (The index of the token within the parent document.):", t.i) print("ent_type_ (Named entity type.):", t.ent_type_) print("ent_iob_ (IOB code of named entity tag):", t.ent_iob_) print("ent_id_ (ID of the entity the token is an instance of):", t.ent_id_) print("lemma_ (Base form of the word, with no inflectional suffixes.):", t.lemma_) print("lower_ (Lower-case form of the word.):", t.lower_) print("shape_ (A transform of the word's string, to show orthographic features.):", t.shape_) print("prefix_ (Integer ID of a length-N substring from the start of the word):", t.prefix_) print("suffix_ (Length-N substring from the end of the word):", t.suffix_) print("like_url (Does the word resemble a URL?):", t.like_url) print("like_num (Does the word represent a number? 
):", t.like_num) print("like_email (Does the word resemble an email address?):", t.like_email) print("is_oov (Is the word out-of-vocabulary?):", t.is_oov) print("is_stop (Is the word part of a stop list?):", t.is_stop) print("pos_ (Coarse-grained part-of-speech.):", t.pos_) print("tag_ (Fine-grained part-of-speech.):", t.tag_) print("dep_ (Syntactic dependency relation.):", t.dep_) print("lang_ (Language of the parent document's vocabulary.):", t.lang_) print("prob: (Smoothed log probability estimate of token's type.)", t.prob) print("idx (The character offset of the token within the parent document.):", t.idx) print("sentiment (A scalar value indicating the positivity or negativity of the token):", t.sentiment) print("lex_id (ID of the token's lexical type.):", t.lex_id) print("text (Verbatim text content.):", t.text) print("text_with_ws (Text content, with trailing space character if present.):", t.text_with_ws) print("whitespace_ (Trailing space character if present.):", t.whitespace_) """ Explanation: Corresponded Tag-POStag Table <table class="c-table o-block"><tr class="c-table__row"><th class="c-table__head-cell u-text-label">Tag</th><th class="c-table__head-cell u-text-label">POS</th><th class="c-table__head-cell u-text-label">Morphology</th></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-LRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=ini</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-PRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>,</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=comm</code></td></tr><tr class="c-table__row"><td 
class="c-table__cell u-text"> <code>:</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>.</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=peri</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>''</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>&quot;&quot;</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>#</code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SymType=numbersign</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>``</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=ini</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code></code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SymType=currency</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>ADD</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>AFX</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Hyph=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>BES</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td 
class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>CC</code></td><td class="c-table__cell u-text"> <code>CONJ</code></td><td class="c-table__cell u-text"> <code>ConjType=coor</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>CD</code></td><td class="c-table__cell u-text"> <code>NUM</code></td><td class="c-table__cell u-text"> <code>NumType=card</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>DT</code></td><td class="c-table__cell u-text"> <code>DET</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>EX</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>AdvType=ex</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>FW</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"> <code>Foreign=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>GW</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>HVS</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>HYPH</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=dash</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>IN</code></td><td class="c-table__cell u-text"> <code>ADP</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>JJ</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=pos</code></td></tr><tr class="c-table__row"><td 
class="c-table__cell u-text"> <code>JJR</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=comp</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>JJS</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=sup</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>LS</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>NumType=ord</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>MD</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbType=mod</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NFP</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NIL</code></td><td class="c-table__cell u-text"></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NN</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> <code>Number=sing</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNP</code></td><td class="c-table__cell u-text"> <code>PROPN</code></td><td class="c-table__cell u-text"> <code>NounType=prop</code> <code>Number=sign</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNPS</code></td><td class="c-table__cell u-text"> <code>PROPN</code></td><td class="c-table__cell u-text"> <code>NounType=prop</code> <code>Number=plur</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNS</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> 
<code>Number=plur</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PDT</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>AdjType=pdt</code> <code>PronType=prn</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>POS</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"> <code>Poss=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PRP</code></td><td class="c-table__cell u-text"> <code>PRON</code></td><td class="c-table__cell u-text"> <code>PronType=prs</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PRP$</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>PronType=prs</code> <code>Poss=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RB</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>Degree=pos</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RBR</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>Degree=comp</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RBS</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>Degree=sup</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RP</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>SP</code></td><td class="c-table__cell u-text"> <code>SPACE</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td
class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>TO</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"> <code>PartType=inf</code> <code>VerbForm=inf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>UH</code></td><td class="c-table__cell u-text"> <code>INTJ</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VB</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=inf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBD</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=past</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBG</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=part</code> <code>Tense=pres</code> <code>Aspect=prog</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBN</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=part</code> <code>Tense=past</code> <code>Aspect=perf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBP</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=pres</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBZ</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=pres</code> <code>Number=sing</code> <code>Person=3</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WDT</code></td><td class="c-table__cell 
u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WP</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WP</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Poss=yes</code> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WRB</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>XX</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr></table> Definition of Tags <table cellpadding="2" cellspacing="2" border="0"> <tr bgcolor="#DFDFFF" align="none"> <td align="none"> <div align="left">Number</div> </td> <td> <div align="left">Tag</div> </td> <td> <div align="left">Description</div> </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 1. </td> <td>CC </td> <td>Coordinating conjunction </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 2. </td> <td>CD </td> <td>Cardinal number </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 3. </td> <td>DT </td> <td>Determiner </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 4. </td> <td>EX </td> <td>Existential <i>there<i> </i></i></td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 5. </td> <td>FW </td> <td>Foreign word </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 6. </td> <td>IN </td> <td>Preposition or subordinating conjunction </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 7. </td> <td>JJ </td> <td>Adjective </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 8. 
</td> <td>JJR </td> <td>Adjective, comparative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 9. </td> <td>JJS </td> <td>Adjective, superlative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 10. </td> <td>LS </td> <td>List item marker </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 11. </td> <td>MD </td> <td>Modal </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 12. </td> <td>NN </td> <td>Noun, singular or mass </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 13. </td> <td>NNS </td> <td>Noun, plural </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 14. </td> <td>NNP </td> <td>Proper noun, singular </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 15. </td> <td>NNPS </td> <td>Proper noun, plural </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 16. </td> <td>PDT </td> <td>Predeterminer </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 17. </td> <td>POS </td> <td>Possessive ending </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 18. </td> <td>PRP </td> <td>Personal pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 19. </td> <td>PRP$ </td> <td>Possessive pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 20. </td> <td>RB </td> <td>Adverb </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 21. </td> <td>RBR </td> <td>Adverb, comparative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 22. </td> <td>RBS </td> <td>Adverb, superlative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 23. </td> <td>RP </td> <td>Particle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 24. </td> <td>SYM </td> <td>Symbol </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 25. </td> <td>TO </td> <td><i>to</i> </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 26. </td> <td>UH </td> <td>Interjection </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 27. </td> <td>VB </td> <td>Verb, base form </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 28.
</td> <td>VBD </td> <td>Verb, past tense </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 29. </td> <td>VBG </td> <td>Verb, gerund or present participle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 30. </td> <td>VBN </td> <td>Verb, past participle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 31. </td> <td>VBP </td> <td>Verb, non-3rd person singular present </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 32. </td> <td>VBZ </td> <td>Verb, 3rd person singular present </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 33. </td> <td>WDT </td> <td>Wh-determiner </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 34. </td> <td>WP </td> <td>Wh-pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 35. </td> <td>WP$ </td> <td>Possessive wh-pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 36. </td> <td>WRB </td> <td>Wh-adverb </td> </tr> </table> End of explanation """ doc_dep = nlp(u'I like chicken rice and Laksa.') for np in doc_dep.noun_chunks: print((np.text, np.root.text, np.root.dep_, np.root.head.text)) for t in doc_dep: print((t.text, t.dep_, t.tag_)) """ Explanation: Dependency Analysis End of explanation """ for token in doc_dep: # Orth: Original, Head: head of subtree print((token.text, token.dep_, token.n_lefts, token.n_rights, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])) dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------' for token in doc_dep: print (dependency_pattern.format(word=token.orth_, w_type=token.dep_, left=[t.orth_ for t in token.lefts], right=[t.orth_ for t in token.rights])) """ Explanation: Visualization using displaCy (https://demos.explosion.ai/displacy/) <img src="spacy_dependency01.png"> End of explanation """ for t in doc_dep: print((t.text, t.dep_,t.tag_,t.pos_),(t.head.text, t.head.dep_,t.head.tag_,t.head.pos_)) """ Explanation: Head and Child in dependency tree spaCy uses the terms head and child to describe the words connected by a single arc in the
dependency tree. The term dep is used for the arc label, which describes the type of syntactic relation that connects the child to the head. https://spacy.io/docs/usage/dependency-parse End of explanation """ # Load symbols from spacy.symbols import nsubj, VERB verbs = set() for token in doc: print ((token, token.dep, token.head, token.head.pos)) if token.dep == nsubj and token.head.pos == VERB: verbs.add(token.head) verbs """ Explanation: Verb extraction End of explanation """ from numpy import dot from numpy.linalg import norm # cosine similarity cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2)) target_word = 'Singapore' sing = nlp.vocab[target_word] sing # gather all known words except for target word all_words = list({w for w in nlp.vocab if w.has_vector and w.orth_.islower() and w.lower_ != target_word.lower()}) len(all_words) # sort by similarity #all_words.sort(key=lambda w: cosine(w.vector, sing.vector)) #all_words.reverse() #print("Top 10 most similar words to",target_word) #for word in all_words[:10]: # print(word.orth_) """ Explanation: Extract similar words End of explanation """ country1 = nlp.vocab['china'] race1 = nlp.vocab['chinese'] country2 = nlp.vocab['japan'] result = country1.vector - race1.vector + country2.vector # sort most-similar first all_words = list({w for w in nlp.vocab if w.has_vector and w.orth_.islower() and w.lower_ != "china" and w.lower_ != "chinese" and w.lower_ != "japan"}) all_words.sort(key=lambda w: cosine(w.vector, result), reverse=True) all_words[0].orth_ # Top 3 results for word in all_words[:3]: print(word.orth_) """ Explanation: Vector representation End of explanation """ example_sent = "NTUC has raised S$25 million to help workers re-skill and upgrade their skills, secretary-general Chan Chun Sing said at the May Day Rally on Monday " parsed = nlp(example_sent) for token in parsed: print((token.orth_, token.ent_type_ if token.ent_type_ != "" else "(not an entity)")) """ Explanation: Entity Recognition End of explanation """ import random from
spacy.gold import GoldParse from spacy.language import EntityRecognizer train_data = [ ('Who is Chaka Khan?', [(7, 17, 'PERSON')]), ('I like Bangkok and Buangkok.', [(7, 14, 'LOC'), (19, 27, 'LOC')]) ] nlp2 = spacy.load('en', entity=False, parser=False) ner = EntityRecognizer(nlp2.vocab, entity_types=['PERSON', 'LOC']) for itn in range(5): random.shuffle(train_data) for raw_text, entity_offsets in train_data: doc2 = nlp2.make_doc(raw_text) gold = GoldParse(doc2, entities=entity_offsets) nlp.tagger(doc2) ner.update(doc2, gold) ner.model.end_training() nlp.save_to_directory('./sample_ner/') nlp3 = spacy.load('en', path='./sample_ner/') example_sent = "Who is Tai Seng Tan?" doc3 = nlp3(example_sent) for ent in doc3.ents: print(ent.label_, ent.text) """ Explanation: Visualization using displaCy Named Entity Visualizer (https://demos.explosion.ai/displacy-ent/) <img src="spacy_ner01.png"> List of entity types https://spacy.io/docs/usage/entity-recognition <table class="c-table o-block"><tr class="c-table__row"><th class="c-table__head-cell u-text-label">Type</th><th class="c-table__head-cell u-text-label">Description</th></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>PERSON</code></td><td class="c-table__cell u-text">People, including fictional.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>NORP</code></td><td class="c-table__cell u-text">Nationalities or religious or political groups.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>FACILITY</code></td><td class="c-table__cell u-text">Buildings, airports, highways, bridges, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>ORG</code></td><td class="c-table__cell u-text">Companies, agencies, institutions, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>GPE</code></td><td class="c-table__cell u-text">Countries, cities, states.</td></tr><tr class="c-table__row"><td class="c-table__cell 
u-text"><code>LOC</code></td><td class="c-table__cell u-text">Non-GPE locations, mountain ranges, bodies of water.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>PRODUCT</code></td><td class="c-table__cell u-text">Objects, vehicles, foods, etc. (Not services.)</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>EVENT</code></td><td class="c-table__cell u-text">Named hurricanes, battles, wars, sports events, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>WORK_OF_ART</code></td><td class="c-table__cell u-text">Titles of books, songs, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>LANGUAGE</code></td><td class="c-table__cell u-text">Any named language.</td></tr></table> Build own entity recognizer End of explanation """
jjonte/udacity-deeplearning-nd
py3/project-4/dlnd_language_translation.ipynb
unlicense
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_id_text = [[source_vocab_to_int[y] for y in x] for x in [sentence.split() for sentence in source_text.split('\n')]] target_id_text = [[target_vocab_to_int[y] for y in x] for x in [sentence.split() for sentence in target_text.split('\n')]] for l in target_id_text: l.append(target_vocab_to_int['<EOS>']) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
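Before moving on, here is a toy pure-Python version of the `text_to_ids` conversion saved above, so you can see exactly what landed on disk. The two-word vocabularies are invented for illustration.

```python
# Toy version of the text_to_ids preprocessing (vocabularies invented).
source_vocab = {'new': 0, 'jersey': 1, 'is': 2, 'nice': 3}
target_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'agreable': 4}

def to_ids(text, vocab, append_eos=False):
    ids = []
    for sentence in text.split('\n'):
        sent_ids = [vocab[word] for word in sentence.split()]
        if append_eos:
            sent_ids.append(vocab['<EOS>'])  # mark where the sentence ends
        ids.append(sent_ids)
    return ids

source_ids = to_ids('new jersey is nice', source_vocab)
target_ids = to_ids('new jersey est agreable', target_vocab, append_eos=True)
print(source_ids)  # [[0, 1, 2, 3]]
print(target_ids)  # [[1, 2, 3, 4, 0]]  <- note the trailing <EOS> id
```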
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. 
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) End of explanation """ def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) decoding_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return decoding_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) """ Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob) _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32) return enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
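The key point about the encoder is that only its final state is kept; the per-step outputs are thrown away. A toy pure-Python recurrence (fixed invented weights, not the LSTM stack used above) illustrates what "return only the state" means:

```python
import math

def toy_encoder(inputs, state_size=2):
    """Minimal tanh recurrence: consume a sequence of scalars and return
    only the final hidden state, analogous to enc_state above.
    Weights (0.5 and 0.1) are arbitrary constants for the demo."""
    state = [0.0] * state_size
    for x in inputs:
        state = [math.tanh(0.5 * s + 0.1 * x) for s in state]
    return state

final_state = toy_encoder([1.0, 2.0, 3.0])
print(len(final_state))  # 2
```

Whatever the sequence length, the decoder sees only this fixed-size summary vector.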
End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ decoder = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) prediction, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder, dec_embed_input, sequence_length, scope=decoding_scope) logits = output_fn(prediction) return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. 
End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ decoder = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder, scope=decoding_scope) return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
End of explanation """ def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ with tf.variable_scope("decoding") as decoding_scope: dec_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([dec_cell] * num_layers) _, dec_state = tf.nn.dynamic_rnn(dec_cell, dec_embed_input, dtype=tf.float32) output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) t_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: i_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return t_logits, i_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. 
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ rnn_inputs = tf.contrib.layers.embed_sequence(input_data, vocab_size=source_vocab_size, embed_dim=enc_embedding_size) encoder_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) t_logits, i_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return t_logits, i_logits """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation """ # Number of Epochs epochs = 4 # Batch Size batch_size = 128 # RNN Size rnn_size = 384 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.6 """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. 
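The gradient clipping used in the optimization section above caps each gradient component to [-1, 1] before the update. As a plain-Python illustration (not part of the graph):

```python
# what tf.clip_by_value(grad, -1., 1.) does above, applied per component
def clip_by_value(grads, lo=-1.0, hi=1.0):
    return [max(lo, min(hi, g)) for g in grads]

print(clip_by_value([0.3, -2.5, 1.7]))  # -> [0.3, -1.0, 1.0]
```

Components already inside the range pass through unchanged; only the outliers are capped.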
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time

def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1]), (0,0)],
            'constant')
    return np.mean(np.equal(target, np.argmax(logits, 2)))

train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch) in enumerate(
                helper.batch_data(train_source, train_target, batch_size)):
            start_time = time.time()
            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch, targets: target_batch,
                 lr: learning_rate, sequence_length: target_batch.shape[1],
                 keep_prob: keep_probability})
            batch_train_logits = sess.run(
                inference_logits,
                {input_data: source_batch, keep_prob: 1.0})
            batch_valid_logits = sess.run(
                inference_logits,
                {input_data: valid_source, keep_prob: 1.0})
            train_acc = get_accuracy(target_batch, batch_train_logits)
            valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
            end_time = time.time()
            print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
                  .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
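The padding logic inside get_accuracy() above can be exercised on its own. A small numpy check with illustrative values:

```python
import numpy as np

# pad the shorter sequence batch with zeros so the shapes match,
# then compare token ids - the same idea get_accuracy() uses
target = np.array([[4, 7, 9]])
pred = np.array([[4, 7, 9, 0, 0]])  # e.g. the result of np.argmax(logits, 2)
max_seq = max(target.shape[1], pred.shape[1])
target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
acc = np.mean(np.equal(target, pred))
print(acc)  # -> 1.0
```

Because the real prediction batches are padded with the id 0, padding the target the same way keeps the comparison aligned.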
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
AlbanoCastroSousa/RESSPyLab
examples/UVC_Calibration_Example_1.ipynb
mit
import RESSPyLab as rpl import numpy as np """ Explanation: Updated Voce-Chaboche Model Fitting Example 1 An example of fitting the updated Voce-Chaboche (UVC) model to a set of test data is provided. Documentation for all the functions used in this example can be found by either looking at docstrings for any of the functions. End of explanation """ # Specify the true stress-strain to be used in the calibration # Only one test used, see the VC_Calibration_Example_1 example for multiple tests data_files = ['example_1.csv'] # Set initial parameters for the UVC model with one backstresses # [E, \sigma_{y0}, Q_\infty, b, D_\infty, a, C_1, \gamma_1] x_0 = np.array([200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) # Log files for the parameters at each step, and values of the objective function at each step # The logs are only kept for step 4b, the result of 4a will be the first entry of the log file x_log = './output/x_log_upd.txt' fxn_log = './output/fxn_log_upd.txt' # (Optional) Set the number of iterations to run in step 4b # The recommended number of iterations is its = [300, 1000, 3000] # For the purpose of this example less iterations are run its = [30, 30, 40] # Run the calibration # Set filter_data=True if you have NOT already filtered/reduced the data # We recommend that you filter/reduce the data beforehand x_sol = rpl.uvc_param_opt(x_0, data_files, x_log, fxn_log, find_initial_point=True, filter_data=False, step_iterations=its) """ Explanation: Run optimization with single test data set This is a simple example for fitting the UVC model to a set of test data. We only use one backstresses in this model, additional backstresses can be specified by adding pairs of 0.1's to the list of x_0. E.g., three backstresses would be x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] Likewise, two backstresses can be specified by removing two pairs of 0.1's from the list below. The overall steps to calibrate the model parameters are as follows: 1. 
Load the set of test data 2. Choose a starting point 3. Set the location to save the analysis history 4. Run the analysis Step 4. from above is slightly more complicated for the updated model than it is for the original model. This step is divided into two parts: a) Run the original model with the same number of backstresses to obtain an initial set of parameters (without the updated parameters) b) Run the updated model from the point found in 4a. If you already have an initial set of parameters you can skip substep 4a by setting find_initial_point=False. End of explanation """ data = rpl.load_data_set(data_files) rpl.uvc_data_plotter(x_sol[0], data, output_dir='', file_name='uvc_example_plots', plot_label='Fitted-UVC') """ Explanation: The minimization problem in 4b above is solved in multiple steps because it is typically difficult to find a minimum to the UVC problem with a strict tolerance. Each step successively relaxes the tolerance on the norm of the gradient of the Lagrangian. The first step is 30 iterations at 1e-8, then 30 iterations at 1e-2, then a maximum of 50 iterations at 5e-2. Confidence in the solution point can be gained using the visualization tools shown in the Visualization_Example_1 Notebook. In the case shown above, the analysis exits during the third step. Plot results After the analysis is finished we can plot the test data versus the fitted model. If we set output_dir='./output/' instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below. End of explanation """
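Since backstresses enter the starting point as (C_i, gamma_i) pairs of 0.1's appended to the base parameters, the x_0 vectors described above can be generated programmatically. uvc_x0 below is a hypothetical helper written for this notebook, not part of RESSPyLab:

```python
# hypothetical helper (not in RESSPyLab): base [E, sy0, Q, b, D, a]
# plus one (C_i, gamma_i) pair of 0.1's per backstress
def uvc_x0(n_backstresses, E=200000., sy0=355.):
    return [E, sy0, 0.1, 0.1, 0.1, 0.1] + [0.1, 0.1] * n_backstresses

print(len(uvc_x0(1)))  # -> 8, the single-backstress x_0 used above
print(len(uvc_x0(3)))  # -> 12, matching the three-backstress example
```

The returned list can be wrapped in np.array() before being passed to rpl.uvc_param_opt.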
napjon/krisk
notebooks/declarative-visualization.ipynb
bsd-3-clause
# Use this when you want to nbconvert the notebook (used by nbviewer)
from krisk import init_notebook; init_notebook()

from krisk import Chart
chart = Chart()
chart
"""
Explanation: You can use krisk for declarative visualization. You don't have to use the krisk.plot package; you can use the Chart class directly to make any chart that is supported by ECharts.
End of explanation
"""
chart.option
"""
Explanation: Here you see that there is a blank figure for the chart that you want to build. You can inspect the characteristics of the chart by using its option member.
End of explanation
"""
chart.set_title('This is a blank visualization', x_pos='center')
chart.set_theme('vintage')
"""
Explanation: As soon as a Chart object is built, its option will be this minimal Python dictionary. This template translates to the JSON object that represents the ECharts option. Even though there is no visualization in the plot yet, you can still apply other kinds of customization. Here is an example of adding a title and a theme to the figure.
End of explanation
"""
chart.option['series'] = [{'data': [10, 3, 7, 4, 5], 'name': 'continent', 'type': 'bar'}]
chart.option['xAxis'] = {'data': ['Americas', 'Asia', 'Africa', 'Oceania', 'Europe']}
chart
"""
Explanation: There is no plot visualization yet since there is no data, but we can still customize things like the theme and title. This also lets you save the figure later to be used as a basis for other plots. With minimal effort, you can insert any type of chart. Below we set the x-axis name for each bar; the value of every bar is what we insert as the series data.
End of explanation
"""
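Under the hood, everything above boils down to editing a nested mapping that is later serialized to JSON. A plain-Python sketch of such an option mapping; the key names here mirror the assignments above but are otherwise illustrative, since the full schema is defined by ECharts:

```python
# illustrative ECharts-style option mapping, mirroring the code above
option = {
    'series': [{'data': [10, 3, 7, 4, 5], 'name': 'continent', 'type': 'bar'}],
    'xAxis': {'data': ['Americas', 'Asia', 'Africa', 'Oceania', 'Europe']},
}
print(option['series'][0]['type'])  # -> bar
```

Any chart ECharts supports can in principle be described this way, which is what makes the declarative approach so flexible.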
bjsmith/motivation-simulation
test-jupyter-widgets-clone2.ipynb
gpl-3.0
from matplotlib.pyplot import figure, plot, xlabel, ylabel, title, show
from IPython.display import display
import ipywidgets as widgets

text = widgets.FloatText()
floatText = widgets.FloatText(description='MyField', min=-5, max=5)
floatSlider = widgets.FloatSlider(description='MyField', min=-5, max=5)

# https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html
float_link = widgets.jslink((floatText, 'value'), (floatSlider, 'value'))
"""
Explanation: Basic plot example
End of explanation
"""
floatSlider.value = 1

txtArea = widgets.Text()
display(txtArea)

myb = widgets.Button(description="234")
def add_text(b):
    # doubles the current text each time the button is clicked
    txtArea.value = txtArea.value + txtArea.value
myb.on_click(add_text)
display(myb)
"""
Explanation: Here we will set the fields to one of several values so that we can see pre-configured examples.
End of explanation
"""
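The linked widgets above stay in sync because each value change notifies the other side. A plain-Python sketch of that observer idea (this is not the actual ipywidgets/traitlets implementation, just the concept):

```python
# minimal observer pattern: setting one value notifies the listeners
class SyncedValue:
    def __init__(self):
        self._value = 0.0
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        for fn in self._observers:
            fn(v)

    def observe(self, fn):
        self._observers.append(fn)

a, b = SyncedValue(), SyncedValue()
a.observe(lambda v: setattr(b, '_value', v))  # write _value directly to avoid recursion
a.value = 1.0
print(b.value)  # -> 1.0
```

jslink performs this synchronization in the browser, so the two widgets track each other even without a running kernel.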
marius311/cosmoslik
cosmoslik_plugins/likelihoods/spt_lowl/spt_lowl.ipynb
gpl-3.0
%pylab inline from cosmoslik import * """ Explanation: South Pole Telescope low-$\ell$ This plugin implements the South Pole Telescope likelihood from Story et al. (2012) and Keisler et al. (2011). The data comes included with this plugin and was downloaded from here and here, respectively. You can choose which likelihood to use by specifying which='s12' or which='k11' when you initialize the plugin. The plugin also supports specifying an $\ell_{\rm min}$, $\ell_{\rm max}$, or dropping certain data bins. API End of explanation """ class spt(SlikPlugin): def __init__(self, **kwargs): super().__init__() self.cosmo = models.cosmology("lcdm") self.spt_lowl = likelihoods.spt_lowl(**kwargs) self.cmb = models.camb(lmax=4000) self.sampler = samplers.metropolis_hastings(self) def __call__(self): return self.spt_lowl(self.cmb(**self.cosmo)) """ Explanation: Basic Script Here's a script which runs a basic SPT-only chain: End of explanation """ spt().spt_lowl.find_sampled().keys() """ Explanation: Four sampled parameters, one for calibration and three for foregrounds, come by default with the SPT plugin: End of explanation """ s = Slik(spt(which='s12')) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot() """ Explanation: The plugin has a convenience method for plotting the data and current model: End of explanation """ s = Slik(spt(which='k11')) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot() """ Explanation: Or you can plot the "k11" bandpowers. You can see they're less constraining. 
End of explanation """ s = Slik(spt(which='s12',lmin=1000,lmax=1500)) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot() """ Explanation: Choosing subsets of data You can also set some $\ell$-limits, End of explanation """ s = Slik(spt(which='s12',drop=range(10,15))) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot() """ Explanation: Or drop some individual data points (by bin index), End of explanation """ s = Slik(spt(which='s12',cal=2)) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot() """ Explanation: Calibration parameter The calibration parameter is called cal and defined so it multiplies the data bandpowers. For "s12" it comes by default with a prior $1 \pm 0.026$. You can't use it for "k11" because this likelihood has the calibration pre-folded into the covariance. End of explanation """ s = Slik(spt(which='s12')) lnl, e = s.evaluate(**s.get_start()) e.spt_lowl.plot(show_comps=True) yscale('log') """ Explanation: Foreground model By default the foreground model is the one used in the Story/Keisler et al. papers (same for both). There's an option to plot which shows you the CMB and foreground components separately so you can see it. 
End of explanation
"""
s = Slik(spt(which='s12'))
lnl, e = s.evaluate(**s.get_start())
e.spt_lowl.plot(show_comps=True)
yscale('log')
"""
Explanation: The foreground model is taken from the spt_lowl.egfs attribute which is expected to be a function which can be called with parameters lmax to specify the length of the array returned, and egfs_specs which provides some info about frequencies/fluxcut of the SPT bandpowers for more advanced foreground models. You can customize this by attaching your own callable function to spt_lowl.egfs when you call __init__, or passing something in during __call__. For example, say we wanted a Poisson-only foreground model, we could write the script like so:
End of explanation
"""
class spt_myfgs(SlikPlugin):

    def __init__(self):
        super().__init__()
        self.cosmo = models.cosmology("lcdm")
        self.spt_lowl = likelihoods.spt_lowl(egfs=None)  # turn off default model & params
        self.Aps = param(start=30, scale=10, min=0)  # add our own sampled parameter here
        self.cmb = models.camb(lmax=4000)
        self.sampler = samplers.metropolis_hastings(self)

    def __call__(self):
        return self.spt_lowl(self.cmb(**self.cosmo),
                             egfs=lambda lmax, **_: self.Aps * (arange(lmax)/3000.)**2)  # compute our foreground model

s = Slik(spt_myfgs())
lnl, e = s.evaluate(**s.get_start())
e.spt_lowl.plot(show_comps=True)
yscale("log")
"""
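The Poisson template above is normalized so that D_ell equals Aps at ell = 3000. A quick numpy check of that convention:

```python
import numpy as np

# D_ell = Aps * (ell/3000)^2, the template used in spt_myfgs above
Aps = 30.0
fg = Aps * (np.arange(4000) / 3000.) ** 2
print(fg[3000])  # -> 30.0
```

So the sampled Aps parameter has a direct interpretation: it is the Poisson power at the pivot multipole ell = 3000.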
jmhsi/justin_tinker
data_science/courses/Transforms with Pytorch and Torchsample.ipynb
apache-2.0
# some imports we will need
import os
import numpy as np
import matplotlib.pyplot as plt
import torch as th
from torchvision import datasets
%matplotlib inline
"""
Explanation: Overview
I will go over the following topics using the pytorch and torchsample packages:
Dataset Creation and Loading
How you create pytorch-suitable datasets from arrays or from file in a variety of data formats, including from numpy arrays, from arbitrary data formats stored in folders, and from a list of files in a single CSV or TXT file.
Dataset Sampling and Feeding
How you create pytorch-suitable dataset iterators and actually sample from these iterators in a variety of ways.
Data Transforms and Augmentation
How you alter the input and/or target samples in real-time to ensure your model is robust, including how to do augmentation directly on the GPU.
This tutorial will be almost solely about Transforms.
I will be using 4 different datasets in this tutorial. They are all unique and show a different side of the process. You can skip any of the datasets if you wish - all code for each dataset will be contained to isolated code cells:
1. MNIST for 2D-Grayscale processing
2. CIFAR-10 for 2D-Color processing
3. Arrays saved in arbitrary file paths, with the file paths and labels stored in a CSV file, for kaggle-like processing.
4. A Brain Image and its Segmented Brain Mask for 3D-Image + Segmentation processing (NOTE: requires the nilearn package).
Understanding Datasets, DataLoaders, and Transforms in Pytorch
When it comes to loading and feeding data in pytorch, there are three main concepts: Datasets, DataLoaders, and Transforms.
Transforms are small classes which take in one or more arrays, perform some operation, then return the altered version of the array(s). They almost always belong to a Dataset which carries out those transforms, but can belong to more than one dataset or can actually stand on their own. This is where a lot of the custom user-specific code happens.
Thankfully, Transforms are very easy to build as I will show soon.
Datasets actually store your data arrays, or the filepaths to your data if loading from file. If you can load your data completely into memory - such as with MNIST - you should use the TensorDataset class. If you can't load your complete data into memory - such as with Imagenet - you should use the FolderDataset. I will describe these later, including how to create your own dataset (the FileDataset class to load from a CSV).
DataLoaders are used to actually sample and iterate through your data. This is where all the multi-processing magic happens to load your data in multiple threads and avoid starving your model. A DataLoader always takes a Dataset as input (this is object composition), along with a few other parameters such as the batch size. You will basically NEVER need to alter the DataLoaders - just use the built-in ones.
The order in which I presented these topics above is usually the order in which you will create the objects! First, make your transforms. Next, make your Dataset and pass in the transforms. Next, make your DataLoader and pass in your Dataset. Here is a small pseudo-code example of the process:
```python
# Create the transforms
my_transform = Compose([SomeTransform(), SomeOtherTransform()])

# Create the dataset - pass in your arrays and the transforms
my_dataset = Dataset(x_array, y_array, transform=my_transform)

# Create the Dataloader - pass in your dataset and some other args
my_loader = DataLoader(my_dataset, batch_size=32)

# Iterate through the loader
for x, y in my_loader:
    do_something(x, y)
```
0. Loading our test data
End of explanation
"""
# Change this to where you want to save the data
SAVE_DIR = os.path.expanduser('~/desktop/data/MNIST/')

# train data
mnist_train = datasets.MNIST(SAVE_DIR, train=True, download=True)
x_train_mnist, y_train_mnist = mnist_train.train_data.type(th.FloatTensor), mnist_train.train_labels

# test data
mnist_test = datasets.MNIST(SAVE_DIR, train=False, download=True)
x_test_mnist, y_test_mnist = mnist_test.test_data.type(th.FloatTensor), mnist_test.test_labels

print('Training Data Size: ', x_train_mnist.size(), '-', y_train_mnist.size())
print('Testing Data Size: ', x_test_mnist.size(), '-', y_test_mnist.size())

plt.imshow(x_train_mnist[0].numpy(), cmap='gray')
plt.title('DIGIT: %i' % y_train_mnist[0])
plt.show()
"""
Explanation: 0a. Load MNIST
MNIST is a collection of 28x28 images of digits between 0-9, with 60k training images and 10k testing images. The images are grayscale, so there is only a single channel dimension (1x28x28).
End of explanation
"""
import numpy as np

# Change this to where you want to save the data
SAVE_DIR = os.path.expanduser('~/desktop/data/CIFAR/')

# train data
cifar_train = datasets.CIFAR10(SAVE_DIR, train=True, download=True)
x_train_cifar, y_train_cifar = cifar_train.train_data, np.array(cifar_train.train_labels)

# test data
cifar_test = datasets.CIFAR10(SAVE_DIR, train=False, download=True)
x_test_cifar, y_test_cifar = cifar_test.test_data, np.array(cifar_test.test_labels)

print('Training Data Size: ', x_train_cifar.shape, '-', y_train_cifar.shape)
print('Testing Data Size: ', x_test_cifar.shape, '-', y_test_cifar.shape)

plt.imshow(x_train_cifar[0], cmap='gray')
plt.title('Class: %i' % y_train_cifar[0])
plt.show()
"""
Explanation: 0b. CIFAR-10
CIFAR10 is an image recognition dataset, with color images of 3x32x32 size.
End of explanation """ import numpy as np import pandas as pd import os import random import string # create data X = np.zeros((10,1,30,30)) for i in range(10): X[i,:,5:25,5:25] = i+1 Y = [i for i in range(10)] plt.imshow(X[0,0,:,:]) plt.show() # save to file SAVE_DIR = os.path.expanduser('~/desktop/data/CSV/') if not os.path.exists(SAVE_DIR): os.mkdir(SAVE_DIR) else: import shutil shutil.rmtree(SAVE_DIR) os.mkdir(SAVE_DIR) paths = [] for x in X: file_path = os.path.join(SAVE_DIR,''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6)) ) #print(file_path+'.npy') np.save('%s.npy' % file_path, x) paths.append(file_path+'.npy') # create data frame from file paths and labels df = pd.DataFrame(data={'files':paths, 'labels':Y}) print(df.head()) # save data frame as CSV file df.to_csv(os.path.join(SAVE_DIR, '_DATA.csv'), index=False) """ Explanation: 0c. A CSV File of arbitrary Filepaths to 2D arrays For the third dataset, I will create some random 2D arrays and save them to disk without any real order. I will then write the file-paths to each of these images to a CSV file and create a dataset from that CSV file. This is a common feature request in the pytorch community, I think because many Kaggle competitions and the like provide input data in this format. Here, I will just generate a random string for each of the file names to show just how arbitrary this is. 
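Reading the file list back is then ordinary CSV parsing. A self-contained sketch with made-up paths standing in for the random names generated above:

```python
import csv
import io

# stand-in for open(os.path.join(SAVE_DIR, '_DATA.csv')); the paths are made up
csv_text = "files,labels\n/tmp/AB12CD.npy,0\n/tmp/EF34GH.npy,1\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
# each row's array would then be loaded with np.load(row['files'])
print(rows[0]['files'], rows[0]['labels'])
```

This is exactly the structure the FileDataset-style loading in the next section has to deal with: one column of arbitrary file paths and one column of labels.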
End of explanation """ import nilearn.datasets as nidata import nibabel as nib icbm = nidata.fetch_icbm152_2009() #print(icbm.keys()) t1 = nib.load(icbm['t1']).get_data() mask = nib.load(icbm['mask']).get_data() print('Image Sizes: ' , t1.shape, ',' , mask.shape) plt.figure(1) plt.subplot(131) plt.imshow(t1[120,:,:].T[::-1], cmap='gray') plt.subplot(132) plt.imshow(t1[:,120,:].T[::-1],cmap='gray') plt.subplot(133) plt.imshow(t1[:,:,100],cmap='gray') plt.show() plt.figure(2) plt.subplot(131) plt.imshow(mask[120,:,:].T[::-1], cmap='gray') plt.subplot(132) plt.imshow(mask[:,120,:].T[::-1],cmap='gray') plt.subplot(133) plt.imshow(mask[:,:,100],cmap='gray') plt.show() """ Explanation: 0d. 3D Brain Images Finally, I'll grab a standard structural MRI scan and the brain binary mask from the nilearn package to show how the processing you can do with 3D images using pytorch and torchsample. The MRI scan will include the skull and head, and the mask will be for just the brain. This is a common task in processing neuroimages -- to segment just the brain from the head. This data will also be useful to show the processing step involved when BOTH the input and target tensors are images. End of explanation """ from torchsample.transforms import AddChannel # add channel to 0th dim - remember the transform will only get individual samples add_channel = AddChannel(axis=0) """ Explanation: 2. Creating a Pytorch-compatible Dataset Now that we have our transforms, we will create a Dataset for them! There are three main datasets in torchsample : - torchsample.TensorDataset - torchsample.FolderDataset - torchsample.FileDataset The first two are extensions of the pytorch equivalent classes: - torch.utils.data.TensorDataset - torch.utils.data.FolderDataset The last one (torchsample.FileDataset) is unique to torchsample, and allows you to read data from a CSV file containing a list of arbitrary filepaths to data. 
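What AddChannel(axis=0) does can be shown with numpy alone: insert a size-1 dimension in front of the image:

```python
import numpy as np

img = np.zeros((28, 28))        # a single grayscale MNIST-sized image
img_c = np.expand_dims(img, 0)  # same idea as AddChannel(axis=0)
print(img_c.shape)  # -> (1, 28, 28)
```

The pixel values are untouched; only the shape changes so downstream layers see a channel dimension.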
You should feel free to use the official classes instead if you don't need the extra functionality, but there is really no difference between them internally. Also, you may find that you actually need the torchsample versions of these classes to do many of the transforms presented below. The extra functionality in the torchsample versions includes the following: - support for target transforms - support for co-transforms (same transform applied to both input and target) 1. Creating Transforms Now that we have our tutorial datasets stored in files or in memory, we will create our transforms! Transforms generally are classes and have the following structure: ```python class MyTransform(object): def __init__(self, some_args): self.some_args = some_args def __call__(self, x): x_transform = do_something(x, self.some_args) return x_transform ``` So you see any arguments for the transform should be passed into the initializer, then the transform should implement the __call__ function. You simply instantiate the transform class then use the transform exactly as you would use a function, with your array to be transformed as the function argument. Here's some pseudo-code for how to use a transform: python tform = MyTransform(some_args=some_value) x_transformed = tform(x) It's also important to note that TRANSFORMS ACT ON INDIVIDUAL SAMPLES - NOT BATCHES. Therefore, if you have a dataset of size (10000, 1, 32, 32) then your transform's __call__ function should assume it only receives individual samples of size (1, 32, 32). There will be no sample dimension. 1a. Creating Transforms for MNIST Here, I will create some transforms for MNIST. I will use the following transforms, available in the torchsample package: AddChannel RangeNormalize RandomCrop RandomRotate If you remember, the MNIST data was of size (60000, 28, 28). We will need to add a channel dimension, so we will use the AddChannel transform to add a channel to the first dimension. 
End of explanation """ print('Before Tform: ' , x_train_mnist[0].size()) x_with_channel = add_channel(x_train_mnist[0]) print('After Tform: ' , x_with_channel.size()) """ Explanation: Because our MNIST is already in-memory, we can actually test the transform on one of the images. Note, however, that we couldn't do this if we were loading data from file. End of explanation """ from torchsample.transforms import RangeNormalize norm_01 = RangeNormalize(0, 1) """ Explanation: Now, it would be kind of wasteful to have to add a channel every time we draw a sample. In reality, we would just do this transform once on the entire dataset since it's already in memory: python x_train_mnist = AddChannel(axis=0)(x_train_mnist) Next, we know that the MNIST data is valued between 0 and 255, so we will use the RangeNormalize transform to normalize the data between 0 and 1. We will pass in the min and max value of the normalized range, along with the values for fixed_min and fixed_max since we already know that value so the transform doesnt have to calculate the min and max each sample. End of explanation """ print('Before Tform: ' , x_train_mnist[0].min(), ' - ', x_train_mnist[0].max()) x_norm = norm_01(x_train_mnist[0]) print('After Tform: ' , x_norm.min(), ' - ', x_norm.max()) """ Explanation: Again, we can test this: End of explanation """ from torchsample.transforms import RandomCrop # note that we DONT add the channel dim to transform - the same crop will be applied to each channel rand_crop = RandomCrop((20,20)) x_example = add_channel(x_train_mnist[0]) print('Before TFORM: ' , x_example.size()) x_crop = rand_crop(x_example) print('After TFORM: ' , x_crop.size()) plt.imshow(x_crop[0].numpy()) plt.show() """ Explanation: Next, we will add a transform to randomly crop the MNIST image. 
Suppose our network takes in images of size (1, 20, 20), then we will randomly crop our (1, 28, 28) images to this size: End of explanation """ from torchsample.transforms import RandomRotate x_example = add_channel(x_train_mnist[0]) rotation = RandomRotate(30) x_rotated = rotation(x_example) plt.imshow(x_rotated[0].numpy()) plt.show() """ Explanation: Finally, we will add a RandomRotate transform from the torchsample package to randomly rotate the image some number of degrees: End of explanation """ from torchsample.transforms import Compose tform_chain = Compose([add_channel, norm_01, rand_crop, rotation]) """ Explanation: Now, we will chain all of these above transforms into a single pipeline using the Compose class. This class is necessary for Datasets because they only take in a single transform. You can chain multiple Compose classes if you want. End of explanation """ x_example = x_train_mnist[5] x_tformed = tform_chain(x_example) plt.imshow(x_tformed[0].numpy()) plt.show() """ Explanation: Now let's test the entire pipeline: End of explanation """ from torchsample.transforms import ToTensor x_cifar_tensor = ToTensor()(x_train_cifar[0]) print(type(x_cifar_tensor)) """ Explanation: There you have it - an MNIST digit for which we 1) added a channel dimension, 2) normalized between 0-1, 3) made a random 20x20 crop, then 4) randomly rotated between -30 and 30 degrees. 1b. Creating Transforms for CIFAR-10 Here, I will create some transforms for CIFAR-10. Remember, this data is 2D color images so there will be 3 channel dimensions. Because we have color images, we can use a lot of cool image transforms to mess with the color, saturation, I will use the following transforms, available in the torchsample package: ToTensor TypeCast RangeNormalize RandomAdjustGamma AdjustBrightness RandomAdjustSaturation You'll note that one of the transforms AdjustBrightness doesn't have "Random" in front of it. 
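What Compose does is ordinary function chaining. A minimal stand-in makes the behavior concrete:

```python
# apply each transform in order, feeding each output to the next
def compose(transforms):
    def chained(x):
        for t in transforms:
            x = t(x)
        return x
    return chained

pipeline = compose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # -> 8
```

Order matters: the first transform in the list runs first, which is why AddChannel has to precede the crop and rotation above.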
Just like with the Affine transforms, you can either specify a specific value for the transform or simply specific a range from which a uniform random selection will be made. First, you'll note that the CIFAR data was in NUMPY format. All of these transforms I'm showing only work on torch tensors. For that reason, we will first use the ToTensor transform to convert the data into a torch tensor. Again, it might be best to simply do this on the entire dataset as a pre-processing step instead of during real-time sampling. End of explanation """ from torchsample.transforms import TypeCast x_cifar_tensor = TypeCast('float')(x_cifar_tensor) print(type(x_cifar_tensor)) """ Explanation: Oh No.. This data is still in ByteTensor format! We should be smart and simply cast the entire dataset to torch.FloatTensor, but for the sake of demonstration let's use the TypeCast transform: End of explanation """ print(x_cifar_tensor.min() , ' - ' , x_cifar_tensor.max()) x_cifar_tensor = RangeNormalize(0,1)(x_cifar_tensor) print(x_cifar_tensor.min() , ' - ' , x_cifar_tensor.max()) """ Explanation: Great! Now, we will perform some actual image transforms. But first, we should RangeNormalize because these transforms assume the image is valued between 0 and 1: End of explanation """ from torchsample.transforms import RandomAdjustGamma, AdjustGamma gamma_tform = RandomAdjustGamma(0.2,1.8) x_cifar_gamma = gamma_tform(x_cifar_tensor) """ Explanation: For the RandomAdjustGamma transform, a value less than 1 will tend to make the image lighter, and a value greater than 1 will tend to make the image lighter. Therefore, we will make our range between 0.5 and 1.5. 
End of explanation
"""
plt.imshow(x_train_cifar[0])
plt.show()

plt.imshow(x_cifar_gamma.numpy())
plt.show()
"""
Explanation: Ok, now let's plot the difference:
End of explanation
"""
from torchsample.transforms import AdjustBrightness

# make our image a little brighter
bright_tform = AdjustBrightness(0.2)
x_cifar_bright = bright_tform(x_cifar_gamma)

plt.imshow(x_cifar_bright.numpy())
plt.show()

from torchsample.transforms import RandomAdjustSaturation, ChannelsFirst, ChannelsLast

sat_tform = RandomAdjustSaturation(0.5,0.9)
x_cifar_sat = sat_tform(ChannelsFirst()(x_cifar_bright))

plt.imshow(ChannelsLast()(x_cifar_sat).numpy())
plt.show()
"""
Explanation: Cool, the sampled Gamma value was greater than 1, so the image became a little darker. It's important to note that the gamma value will be randomly sampled every time you call the transform. This means every sample will be different. This is a good transform to make your classifier robust to different inputs. Let's do the other transforms:
End of explanation
"""
cifar_compose = Compose([ToTensor(),
                         TypeCast('float'),
                         ChannelsFirst(),
                         RangeNormalize(0,1),
                         RandomAdjustGamma(0.2,1.8),
                         AdjustBrightness(0.2),
                         RandomAdjustSaturation(0.5,0.9)])
"""
Explanation: Now the image is a little more saturated. However, you'll notice we had to do a little trick. The pytorch and torchsample packages assume the tensors are in CHW format - that is, the channels are first. Our CIFAR data was naturally in HWC format, which Matplotlib likes. Therefore, we had to apply the ChannelsFirst transform and then the ChannelsLast transform to go between the two. We will add the ChannelsFirst transform to our pipeline, although it might be best to do that first!
Let's make our final pipeline for cifar: End of explanation """ x_cifar_example = x_train_cifar[20] x_cifar_tformed = cifar_compose(x_cifar_example) plt.imshow(x_cifar_example) plt.show() plt.imshow(ChannelsLast()(x_cifar_tformed).numpy()) plt.show() """ Explanation: Again, let's test this on a single example to make sure it works: End of explanation """ # grab a 2D slice from the data t1_slice = np.expand_dims(t1[100,:,:],0) mask_slice = np.expand_dims(mask[100,:,:],0) plt.imshow(t1_slice[0].T[::-1], cmap='gray') plt.show() plt.imshow(mask_slice[0].T[::-1],cmap='gray') plt.show() """ Explanation: So Awesome! We will skip transforms for the data saved to random image files, because the point of that data is to show how to make a custom Dataset which will be in the next section. Transforms for the Segmentation Data For the 3D brain images, we had a brain image and its segmentation. I will quickly show now how you can perform the same transform on both input and target images. It's pretty simple. End of explanation """ from torchsample.transforms import RandomAffine t1_slice, mask_slice = TypeCast('float')(*ToTensor()(t1_slice, mask_slice)) tform = RandomAffine(rotation_range=30, translation_range=0.2, zoom_range=(0.8,1.2)) t1_slice_tform, mask_slice_tform = tform(t1_slice, mask_slice) plt.imshow(t1_slice_tform[0].numpy().T[::-1]) plt.show() plt.imshow(mask_slice_tform[0].numpy().T[::-1]) plt.show() """ Explanation: Ok, now let's do a random Affine transform and show how it correctly performs the same transform on both images: End of explanation """
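As an aside, the chaining that Compose performs can be sketched as a tiny callable that threads a sample through each transform in turn. This is a simplified illustration of the idea, not the actual torchsample implementation:

```python
class SimpleCompose(object):
    """Minimal sketch of a transform pipeline: applies each
    transform to the output of the previous one."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

# chain two toy "transforms"
pipeline = SimpleCompose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # (3 + 1) * 2 == 8
```

This also makes clear why order matters in the pipeline: each transform sees the output of the one before it.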
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160517화_4일차_시각화 Visualization/3.seaborn 시각화 패키지 소개.ipynb
mit
sns.set() # sets the seaborn style
sns.set_color_codes()
x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(x, y)
axarr[0].set_title('Sharing X axis')
axarr[1].scatter(x, y);
"""
Explanation: Introduction to the seaborn visualization package
seaborn is a visualization package built on top of matplotlib that adds features such as a variety of color themes and statistical charts. Naturally it depends on the matplotlib package, and for its statistical features it depends on the statsmodels package.
The kinds of plots seaborn provides are as follows.
distribution plots, regression plots, categorical plots, matrix plots, time series plots
For more details on seaborn, see the following website.
http://stanford.edu/~mwaskom/software/seaborn/index.html
A style like the one used in R; it is reinforced mainly with statistics-oriented charts and plots. For classical statistics we will use statsmodels, and for machine learning we will use scikit-learn.
- In pandas, the datasets used by R libraries are used as-is?
NA means the data could not be obtained. Not a Number: when pandas receives an NA, it silently converts integers to floats.
- A time series plot and a line plot are different.
- Each line is a different sample.
Color themes
When you import seaborn, the background, axes, color palette, and so on are changed from matplotlib's default style to the default style set defined by seaborn. Therefore, even when the same matplotlib commands are executed, a plot with seaborn imported differs in style from one without it, as shown below.
<img src="https://datascienceschool.net/upfiles/1c0306b736904f3d8dbed00e646565cd.png" style="width: 70%; margin-left: 0px;">
End of explanation
"""
np.random.seed(0)
x = np.random.randn(100)
sns.rugplot(x);
"""
Explanation: Distribution plots
Distribution plots describe the distribution of the data; unlike matplotlib's simple histogram, they provide kernel density and rug displays as well as multidimensional joint distribution features. The distribution plot commands include the following.
rugplot kdeplot distplot jointplot pairplot
The term "kernel" is used in many contexts; as used here, it is the unit plot drawn for a single value. A rug marks the position of each data point as a small line segment on the x-axis, showing the actual distribution of the data.
End of explanation
"""
sns.kdeplot(x);
"""
Explanation: A kernel density plot overlaps unit plots called kernels to show a smoother distribution curve than a histogram. For details on kernel density estimation, see the scikit-learn package.
http://scikit-learn.org/stable/modules/density.html
End of explanation
"""
sns.distplot(x, kde=True, rug=True);
"""
Explanation: seaborn's distplot command is widely used as a replacement for matplotlib's histogram command. It includes both the rug and kernel density features.
End of explanation
"""
tips = sns.load_dataset("tips")
sns.jointplot(x='total_bill', y='tip', data=tips);

iris = sns.load_dataset("iris")
sns.jointplot("sepal_width", "petal_length", data=iris, kind="kde", space=0, color="g"); # draws a 2D kernel and renders it as contours
"""
Explanation: To visualize the distribution of two or more data sets, use jointplot. The relationship between the two data sets can be seen as a scatter plot or a contour plot, and histograms of each individual variable are shown in addition.
End of explanation
"""
sns.regplot(x="total_bill", y="tip", data=tips);
"""
Explanation: Regression plots
Regression plots describe the results of a linear regression analysis. The regression plot commands include the following.
regplot residplot lmplot
The regplot command performs a regression analysis internally and visualizes the result. The data itself is drawn as a scatter plot, the regression result as a line plot, and the confidence interval as a fill plot.
End of explanation
"""
sns.residplot(x="total_bill", y="tip", data=tips);
"""
Explanation: residplot draws the residuals as a scatter plot.
End of explanation
"""
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips);

sns.lmplot(x="total_bill", y="tip", col="smoker", data=tips);
"""
Explanation: With the lmplot command, several regression results can be shown in a single figure.
End of explanation
"""
sns.barplot(x="day", y="total_bill", hue="sex", data=tips);
"""
Explanation: Categorical plots
Categorical plots describe the distribution of one-dimensional categorical data. The categorical plot commands include the following.
barplot countplot boxplot pointplot violinplot stripplot swarmplot
barplot creates a basic bar chart.
End of explanation
"""
titanic = sns.load_dataset("titanic")
sns.countplot(x="class", hue="who", data=titanic);
"""
Explanation: countplot visualizes the number of data points in each category.
End of explanation
"""
sns.boxplot(x="day", y="total_bill", hue="smoker", data=tips);

sns.pointplot(x="time", y="total_bill", hue="smoker", data=tips, dodge=True); # dodge offsets the points slightly
"""
Explanation: boxplot and pointplot show how the characteristics of the distribution change with the categorical factor.
End of explanation
"""
sns.violinplot(x="day", y="total_bill", hue="smoker", data=tips, palette="muted");

sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, palette="Set2", split=True, scale="count", inner="quartile");

sns.stripplot(x="day", y="total_bill", hue="smoker", data=tips, jitter=True, palette="Set2", split=True);

sns.boxplot(x="tip", y="day", data=tips, whis=np.inf)
sns.stripplot(x="tip", y="day", data=tips, jitter=True, color=".3"); # jitter moves the points back and forth slightly so they stand out

sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips);

sns.violinplot(x="day", y="total_bill", data=tips, inner=None)
sns.swarmplot(x="day", y="total_bill", data=tips, color="white", edgecolor="gray");
"""
Explanation: Whereas boxplot and pointplot show only summary characteristics of the distribution, such as the median and standard deviation, violinplot, stripplot, and swarmplot have the advantage of showing the full shape of each distribution by category value. stripplot and swarmplot are usually used together with boxplot or violinplot.
End of explanation
"""
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
sns.heatmap(flights, annot=True, fmt="d");
"""
Explanation: Matrix plots
Matrix plots describe the distribution of two-dimensional categorical data. The matrix plot commands include the following.
heatmap clustermap
The heatmap command counts the data for each category value and shows the result in a form similar to imshow from the matplotlib package.
End of explanation
"""
sns.clustermap(flights); # arranges the most similar rows and columns next to each other
"""
Explanation: The clustermap command adds the result of hierarchical clustering to the heatmap as a dendrogram. For hierarchical clustering, see the following website.
http://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html
End of explanation
"""
np.random.seed(22)
x = np.linspace(0, 15, 31)
data = np.sin(x) + np.random.rand(10, 31) + np.random.randn(10, 1)
sns.tsplot(data=data)

sns.tsplot(data=data, err_style="boot_traces", n_boot=500);

sns.tsplot(data=data, ci=[68, 95], color="m");
"""
Explanation: Time series plots
Time series plots describe time series data. They add features such as displaying an uncertainty range to matplotlib's simple line plot, and are mainly used to describe multiple time series sampled from the same time series model.
tsplot
End of explanation
"""
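To see what the kernel density estimate behind kdeplot is doing, it can be sketched directly with NumPy — a simplified illustration with a fixed Gaussian kernel and bandwidth, not seaborn's actual implementation:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=0.5):
    """Sum one Gaussian 'kernel' per data point and normalize,
    giving a smooth density estimate evaluated on the grid."""
    samples = np.asarray(samples, dtype=float)
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * bandwidth)

grid = np.linspace(-6.0, 6.0, 401)
density = gaussian_kde(np.random.randn(200), grid)

# a sanity check: the estimated density should integrate to ~1
print(density.sum() * (grid[1] - grid[0]))
```

Overlapping one small Gaussian per observation is exactly the "overlapping unit plots" idea described above; the bandwidth controls how smooth the resulting curve is.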
farfan92/SpringBoard-
statistics project 1/.ipynb_checkpoints/cfarfan_statistics_exercise_1-checkpoint.ipynb
mit
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
from scipy.stats import norm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)

from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))

bodytemp_df = pd.read_csv('data/human_body_temperature.csv')
bodytemp_df
"""
Explanation: What is the true normal human body temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. In 1992, this value was revised to 36.8$^{\circ}$C or 98.2$^{\circ}$F.
Exercise
In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.
Answer the following questions in this notebook below and submit to your Github account.

Is the distribution of body temperatures normal?
Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.

Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it appropriate to apply a z-test or a t-test? How will the result be different?

At what temperature should we consider someone's temperature to be "abnormal"?
Start by computing the margin of error and confidence interval.

Is there a significant difference between males and females in normal temperature?
Set up and solve a two sample hypothesis test. 
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
sns.distplot(bodytemp_df.temperature, bins = 25)
st.normaltest(bodytemp_df['temperature'])
"""
Explanation: So, we see that we have 130 data points to work with. First, we want to take a look at the overall distribution.
End of explanation
"""
hyp_mean = 98.6
sample_meantemp = bodytemp_df['temperature'].mean()
sample_std = bodytemp_df['temperature'].std()
print('The sample mean is : ' , bodytemp_df['temperature'].mean(), ' degrees Fahrenheit')
print('The sample standard deviation is : ' , bodytemp_df['temperature'].std(), ' degrees Fahrenheit')
"""
Explanation: We see that our sample distribution does look like a normal distribution, albeit slightly left skewed. Nonetheless, we feel that it is reasonable to assume the CLT holds for this data. We see from our normaltest that the p-value returned is quite high, 25%. So we cannot reject the null hypothesis of this sample coming from a normal distribution. Thus both a visual inspection and a more rigorous computational test let us conclude that the population is normally distributed in this case.
Now, we put forth the hypothesis that the true population mean is 98.6. To check this, we first require the sample mean and sample standard deviation. Note that the pandas DataFrame.std method normalizes by N-1 by default.
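The N-1 normalization mentioned above is easy to verify by hand: NumPy divides by N by default (`ddof=0`), while pandas divides by N-1 (`ddof=1`). A quick illustrative check:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
n = len(data)

pop_std = np.std(data)              # divides by N   (ddof=0)
sample_std = np.std(data, ddof=1)   # divides by N-1, matching pandas

# the two estimates differ by a simple, predictable factor
assert np.isclose(sample_std, pop_std * np.sqrt(n / (n - 1)))
print(pop_std, sample_std)
```

For N=130 the difference is tiny, but it matters for small samples and for matching results across libraries.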
End of explanation
"""
sem_temp = sample_std/np.sqrt(len(bodytemp_df))
sem_temp

sample_std/np.sqrt(130)

z_score = (sample_meantemp - hyp_mean) / (sem_temp)
z_score

p_value=st.norm.cdf(z_score)
p_value

new_hyp = 98.2
z_score_new = (sample_meantemp - new_hyp)/ (sem_temp)
print(z_score_new)
p_value_new = 1-st.norm.cdf(z_score_new)
p_value_new
"""
Explanation: Thus our hypothesis that the sample mean is off and that the real population mean is 98.6 degrees seems decidedly unlikely. We are confident that the probability of finding a sample mean at least this low (more than 5 standard errors below 98.6!) is only $2.45\times10^{-8}$. Thus we choose to reject the original hypothesis that the population mean is 98.6 degrees, based on the available data. Instead we shall accept the new value of 98.2, which, as seen above, is well within one standard deviation of the sample mean.
To find the normal range of human body temperatures, we need a confidence interval. Let us use the usual 95% confidence interval as our threshold. That is, we will be reasonably confident that there is a 95% chance that the true population mean is within our confidence interval.
End of explanation """ female_df = bodytemp_df[bodytemp_df.gender == 'F'].copy() male_df = bodytemp_df[bodytemp_df.gender == 'M'].copy() male_mean = male_df['temperature'].mean() print('Male mean is: ', male_mean) female_mean = female_df['temperature'].mean() print('Female mean is: ', female_mean) male_std = male_df['temperature'].std() print('Male standard deviation is: ', male_std) female_std = female_df['temperature'].std() print('Female standard deviation is: ', female_std) difference_mean = female_mean - male_mean print('Mean difference between two populations: ', difference_mean) difference_sem = np.sqrt(male_std**2/len(male_df) + female_std**2/len(female_df)) print( 'Standard error of the mean: ', difference_sem) """ Explanation: So if human body temperature is outside of the range given above, then we are reasonably sure that the temperature is abnormal, as our range should encompass 95% of the population. Now we move on to testing if there is a significant differnce between males and females. End of explanation """ z_score_diff = (difference_mean - 0)/difference_sem z_score_diff p_value_diff = 1-st.norm.cdf(z_score_diff) p_value_diff """ Explanation: Now that we have our population mean difference, as well as the standard error of this mean, we can go ahead and look up p-values for our data, and compare it to some threshold. Let's use the standard 5% threshold. End of explanation """
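The two-sample comparison above can be wrapped into a small reusable function. This is a sketch of the same z-statistic computation (here with a two-sided p-value via the standard-normal survival function, whereas the notebook reports a one-sided value); it is illustrative, not part of the original analysis:

```python
import math

def two_sample_z(mean1, mean2, std1, std2, n1, n2):
    """z-statistic for the difference of two sample means,
    using the combined standard error shown above."""
    se = math.sqrt(std1 ** 2 / n1 + std2 ** 2 / n2)
    z = (mean1 - mean2) / se
    # two-sided p-value: P(|Z| >= |z|) for a standard normal
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# identical groups -> z == 0 and p == 1
z, p = two_sample_z(98.2, 98.2, 0.7, 0.7, 65, 65)
print(z, p)
```

With the female and male means, standard deviations, and group sizes plugged in, this reproduces the z-score computed above and gives the corresponding two-sided p-value.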
sraejones/phys202-2015-work
days/day19/FittingModels.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy import optimize as opt from IPython.html.widgets import interact """ Explanation: Fitting Models Learning Objectives: learn to fit models to data using linear and non-linear regression. This material is licensed under the MIT license and was developed by Brian Granger. It was adapted from material from Jake VanderPlas and Jennifer Klay. End of explanation """ N = 50 m_true = 2 b_true = -1 dy = 2.0 # uncertainty of each point np.random.seed(0) xdata = 10 * np.random.random(N) # don't use regularly spaced data ydata = b_true + m_true * xdata + np.random.normal(0.0, dy, size=N) # our errors are additive plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray') plt.xlabel('x') plt.ylabel('y'); """ Explanation: Introduction In Data Science it is common to start with data and develop a model of that data. Such models can help to explain the data and make predictions about future observations. In fields like Physics, these models are often given in the form of differential equations, whose solutions explain and predict the data. In most other fields, such differential equations are not known. Often, models have to include sources of uncertainty and randomness. Given a set of data, fitting a model to the data is the process of tuning the parameters of the model to best explain the data. When a model has a linear dependence on its parameters, such as $a x^2 + b x + c$, this process is known as linear regression. When a model has a non-linear dependence on its parameters, such as $ a e^{bx} $, this process in known as non-linear regression. Thus, fitting data to a straight line model of $m x + b $ is linear regression, because of its linear dependence on $m$ and $b$ (rather than $x$). Fitting a straight line A classical example of fitting a model is finding the slope and intercept of a straight line that goes through a set of data points ${x_i,y_i}$. 
For a straight line the model is: $$ y_{model}(x) = mx + b $$ Given this model, we can define a metric, or cost function, that quantifies the error the model makes. One commonly used metric is $\chi^2$, which depends on the deviation of the model from each data point ($y_i - y_{model}(x_i)$) and the measured uncertainty of each data point $ \sigma_i$: $$ \chi^2 = \sum_{i=1}^N \left(\frac{y_i - y_{model}(x)}{\sigma_i}\right)^2 $$ When $\chi^2$ is small, the model's predictions will be close the data points. Likewise, when $\chi^2$ is large, the model's predictions will be far from the data points. Given this, our task is to minimize $\chi^2$ with respect to the model parameters $\theta = [m, b]$ in order to find the best fit. To illustrate linear regression, let's create a synthetic data set with a known slope and intercept, but random noise that is additive and normally distributed. End of explanation """ def chi2(theta, x, y, dy): # theta = [b, m] return np.sum(((y - theta[0] - theta[1] * x) / dy) ** 2) def manual_fit(b, m): modely = m*xdata + b plt.plot(xdata, modely) plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray') plt.xlabel('x') plt.ylabel('y') plt.text(1, 15, 'b={0:.2f}'.format(b)) plt.text(1, 12.5, 'm={0:.2f}'.format(m)) plt.text(1, 10.0, '$\chi^2$={0:.2f}'.format(chi2([b,m],xdata,ydata, dy))) interact(manual_fit, b=(-3.0,3.0,0.01), m=(0.0,4.0,0.01)); """ Explanation: Fitting by hand It is useful to see visually how changing the model parameters changes the value of $\chi^2$. By using IPython's interact function, we can create a user interface that allows us to pick a slope and intercept interactively and see the resulting line and $\chi^2$ value. Here is the function we want to minimize. 
Note how we have combined the two parameters into a single parameter vector $\theta = [b, m]$, which is the first argument of the function:
End of explanation
"""
theta_guess = [0.0,1.0]
result = opt.minimize(chi2, theta_guess, args=(xdata,ydata,dy))
"""
Explanation: Go ahead and play with the sliders and try to:

Find the lowest value of $\chi^2$
Find the "best" line through the data points.

You should see that these two conditions coincide.
Minimize $\chi^2$ using scipy.optimize.minimize
Now that we have seen how minimizing $\chi^2$ gives the best parameters in a model, let's perform this minimization numerically using scipy.optimize.minimize. We have already defined the function we want to minimize, chi2, so we only have to pass it to minimize along with an initial guess and the additional arguments (the raw data):
End of explanation
"""
theta_best = result.x
print(theta_best)
"""
Explanation: Here are the values of $b$ and $m$ that minimize $\chi^2$:
End of explanation
"""
xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: These values are close to the true values of $b=-1$ and $m=2$. The reason our values are different is that our data set has a limited number of points. In general, we expect that as the number of points in our data set increases, the model parameters will converge to the true values. But having a limited number of data points is not a problem - it is a reality of most data collection processes. 
We can plot the raw data and the best fit line: End of explanation """ def deviations(theta, x, y, dy): return (y - theta[0] - theta[1] * x) / dy result = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True) """ Explanation: Minimize $\chi^2$ using scipy.optimize.leastsq Performing regression by minimizing $\chi^2$ is known as least squares regression, because we are minimizing the sum of squares of the deviations. The linear version of this is known as linear least squares. For this case, SciPy provides a purpose built function, scipy.optimize.leastsq. Instead of taking the $\chi^2$ function to minimize, leastsq takes a function that computes the deviations: End of explanation """ theta_best = result[0] theta_cov = result[1] print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0]))) print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) """ Explanation: Here we have passed the full_output=True option. When this is passed the covariance matrix $\Sigma_{ij}$ of the model parameters is also returned. The uncertainties (as standard deviations) in the parameters are the square roots of the diagonal elements of the covariance matrix: $$ \sigma_i = \sqrt{\Sigma_{ii}} $$ A proof of this is beyond the scope of the current notebook. End of explanation """ yfit = theta_best[0] + theta_best[1] * xfit plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray'); plt.plot(xfit, yfit, '-b'); """ Explanation: We can again plot the raw data and best fit line: End of explanation """ def model(x, b, m): return m*x+b """ Explanation: Fitting using scipy.optimize.curve_fit SciPy also provides a general curve fitting function, curve_fit, that can handle both linear and non-linear models. This function: Allows you to directly specify the model as a function, rather than the cost function (it assumes $\chi^2$). 
Returns the covariance matrix for the parameters, which provides estimates of the errors in each of the parameters.

Let's apply curve_fit to the above data. First we define a model function. The first argument should be the independent variable of the model.
End of explanation
"""
theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy)
"""
Explanation: Then call curve_fit passing the model function and the raw data. The uncertainties of each data point are provided with the sigma keyword argument. If there are no uncertainties, this can be omitted. By default the uncertainties are treated as relative. To treat them as absolute, pass the absolute_sigma=True argument.
End of explanation
"""
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
"""
Explanation: Again, display the optimal values of $b$ and $m$ along with their uncertainties:
End of explanation
"""
xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: We can again plot the raw data and best fit line:
End of explanation
"""
npoints = 20
Atrue = 10.0
Btrue = -0.2
xdata = np.linspace(0.0, 20.0, npoints)
dy = np.random.normal(0.0, 0.1, size=npoints)
ydata = Atrue*np.exp(Btrue*xdata) + dy
"""
Explanation: Non-linear models
So far we have been using a linear model $y_{model}(x) = m x +b$. Remember this model was linear, not because of its dependence on $x$, but on $b$ and $m$. A non-linear model will have a non-linear dependence on the model parameters. Examples are $A e^{B x}$, $A \cos{B x}$, etc. In this section we will generate data for the following non-linear model:
$$y_{model}(x) = Ae^{Bx}$$
and fit that data using curve_fit. 
Let's start out by using this model to generate a data set to use for our fitting: End of explanation """ plt.plot(xdata, ydata, 'k.') plt.xlabel('x') plt.ylabel('y'); """ Explanation: Plot the raw data: End of explanation """ def exp_model(x, A, B): return A*np.exp(x*B) """ Explanation: Let's see if we can use non-linear regression to recover the true values of our model parameters. First define the model: End of explanation """ theta_best, theta_cov = opt.curve_fit(exp_model, xdata, ydata) """ Explanation: Then use curve_fit to fit the model: End of explanation """ print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0]))) print('B = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) """ Explanation: Our optimized parameters are close to the true values of $A=10$ and $B=-0.2$: End of explanation """ xfit = np.linspace(0,20) yfit = exp_model(xfit, theta_best[0], theta_best[1]) plt.plot(xfit, yfit) plt.plot(xdata, ydata, 'k.') plt.xlabel('x') plt.ylabel('y'); """ Explanation: Plot the raw data and fitted model: End of explanation """
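A common alternative for this particular model is worth noting: since $\ln y = \ln A + B x$, positive exponential data can also be fit with ordinary linear regression on $\ln y$. The sketch below assumes positive, low-noise data; with additive noise like ours, the log transform distorts the error structure, so curve_fit on the original model remains the better tool:

```python
import numpy as np

def fit_exponential_loglinear(x, y):
    """Estimate A, B in y = A * exp(B * x) by fitting a
    straight line to (x, log(y)) with polyfit."""
    B, logA = np.polyfit(x, np.log(y), 1)
    return np.exp(logA), B

x = np.linspace(0.0, 20.0, 20)
y = 10.0 * np.exp(-0.2 * x)            # noise-free synthetic data
A_est, B_est = fit_exponential_loglinear(x, y)
print(A_est, B_est)                    # recovers A=10, B=-0.2
```

On noise-free data this recovers the true parameters exactly (up to floating-point error), which makes it a handy way to generate an initial guess for a non-linear fitter.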
mne-tools/mne-tools.github.io
0.13/_downloads/plot_python_intro.ipynb
bsd-3-clause
a = 3 print(type(a)) b = [1, 2.5, 'This is a string'] print(type(b)) c = 'Hello world!' print(type(c)) """ Explanation: Introduction to Python Python is a modern, general-purpose, object-oriented, high-level programming language. First make sure you have a working python environment and dependencies (see install_python_and_mne_python). If you are completely new to python, don't worry, it's just like any other programming language, only easier. Here are a few great resources to get you started: SciPy lectures &lt;http://scipy-lectures.github.io&gt;_ Learn X in Y minutes: Python &lt;https://learnxinyminutes.com/docs/python/&gt;_ NumPy for MATLAB users &lt;https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html&gt;_ # noqa We highly recommend watching the Scipy videos and reading through these sites to get a sense of how scientific computing is done in Python. Here are few bulletin points to familiarise yourself with python: Everything is dynamically typed. No need to declare simple data structures or variables separately. End of explanation """ a = [1, 2, 3, 4] print('This is the zeroth value in the list: {}'.format(a[0])) """ Explanation: If you come from a background of matlab, remember that indexing in python starts from zero: End of explanation """
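A couple more indexing idioms are worth knowing alongside zero-based indexing: negative indices count from the end, and slices are half-open. These are core language features, shown here as a quick illustration:

```python
a = [10, 20, 30, 40, 50]

print(a[0])    # first element: 10
print(a[-1])   # last element: 50
print(a[1:3])  # half-open slice: [20, 30] (index 3 is excluded)
print(a[::2])  # every second element: [10, 30, 50]
```

The half-open convention means `a[:k] + a[k:] == a` for any `k`, which makes splitting sequences painless.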
jupyter/nbgrader
nbgrader/docs/source/user_guide/submitted/hacker/ps1/problem1.ipynb
bsd-3-clause
NAME = "Alyssa P. Hacker" COLLABORATORS = "Ben Bitdiddle" """ Explanation: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: End of explanation """ def squares(n): """Compute the squares of numbers from 1 to n, such that the ith element of the returned list equals i^2. """ if n < 1: raise ValueError return [i ** 2 for i in range(1, n + 1)] """ Explanation: For this problem set, we'll be using the Jupyter notebook: Part A (2 points) Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError. End of explanation """ squares(10) """Check that squares returns the correct output for several inputs""" assert squares(1) == [1] assert squares(2) == [1, 4] assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121] """Check that squares raises an error for invalid inputs""" try: squares(0) except ValueError: pass else: raise AssertionError("did not raise") try: squares(-4) except ValueError: pass else: raise AssertionError("did not raise") """ Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does: End of explanation """ def sum_of_squares(n): """Compute the sum of the squares of numbers from 1 to n.""" return sum(squares(n)) """ Explanation: Part B (1 point) Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality. 
End of explanation """ sum_of_squares(10) """Check that sum_of_squares returns the correct answer for various inputs.""" assert sum_of_squares(1) == 1 assert sum_of_squares(2) == 5 assert sum_of_squares(10) == 385 assert sum_of_squares(11) == 506 """Check that sum_of_squares relies on squares.""" orig_squares = squares del squares try: sum_of_squares(1) except NameError: pass else: raise AssertionError("sum_of_squares does not use squares") finally: squares = orig_squares """ Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get: End of explanation """ import math def hypotenuse(n): """Finds the hypotenuse of a right triangle with one side of length n and the other side of length n-1.""" # find (n-1)**2 + n**2 if (n < 2): raise ValueError("n must be >= 2") elif n == 2: sum1 = 5 sum2 = 0 else: sum1 = sum_of_squares(n) sum2 = sum_of_squares(n-2) return math.sqrt(sum1 - sum2) print(hypotenuse(2)) print(math.sqrt(2**2 + 1**2)) print(hypotenuse(10)) print(math.sqrt(10**2 + 9**2)) """ Explanation: Part C (1 point) Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function. $\sum_{i=1}^n i^2$ Part D (2 points) Find a usecase for your sum_of_squares function and implement that usecase in the cell below. End of explanation """
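As a sanity check on sum_of_squares, the well-known closed form $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$ can be compared against the direct computation. This is a quick illustrative check, not part of the graded assignment:

```python
def sum_of_squares_closed_form(n):
    """Closed-form value of 1^2 + 2^2 + ... + n^2."""
    return n * (n + 1) * (2 * n + 1) // 6

# agrees with the brute-force sum for many n
for n in range(1, 50):
    assert sum_of_squares_closed_form(n) == sum(i ** 2 for i in range(1, n + 1))

print(sum_of_squares_closed_form(10))  # 385, matching Part B
```

The closed form is O(1), so it is also a useful drop-in when n gets large.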
clubmliimas/cancer
notebooks/sentdex.ipynb
mit
import dicom # for reading dicom files
import os # for doing directory operations 
import pandas as pd # for some simple data analysis (right now, just to load in the labels data and quickly reference it)

# Change this to wherever you are storing your data:
# IF YOU ARE FOLLOWING ON KAGGLE, YOU CAN ONLY PLAY WITH THE SAMPLE DATA, WHICH IS MUCH SMALLER

data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0)

labels_df.head()
"""
Explanation: Applying a 3D convolutional neural network to the data
Welcome everyone to my coverage of the Kaggle Data Science Bowl 2017. My goal here is that anyone, even people new to kaggle, can follow along. If you are completely new to data science, I will do my best to link to tutorials and provide information on everything you need to take part. This notebook is my actual personal initial run through this data and my notes along the way. I am by no means an expert data analyst, statistician, and certainly not a doctor. This initial pass is not going to win the competition, but hopefully it can serve as a starting point or, at the very least, you can learn something new along with me. This is a "raw" look into the actual code I used on my first pass; there's a ton of room for improvement. If you see something that you could improve, share it with me!
Quick introduction to Kaggle
<iframe width="560" height="315" src="https://www.youtube.com/embed/ulq9DjCJPDU?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
If you are new to kaggle, create an account, and start downloading the data. It's going to take a while. I found the torrent to download the fastest, so I'd suggest you go that route. When you create an account, head to competitions in the nav bar, choose the Data Science Bowl, then head to the "data" tab. You will need to accept the terms of the competition to proceed with downloading the data. 
Just in case you are new, how does all this work? In general, Kaggle competitions will come with training and testing data for you to build a model on, where both the training and testing data comes with labels so you can fit a model. Then there will be actual "blind" or "out of sample" testing data that you will actually use your model on, which will spit out an output CSV file with your predictions based on the input data. This is what you will upload to kaggle, and your score here is what you compete with. There's always a sample submission file in the dataset, so you can see how to exactly format your output predictions. In this case, the submission file should have two columns, one for the patient's id and another for the prediction of the likelihood that this patient has cancer, like: id,cancer 01e349d34c02410e1da273add27be25c,0.5 05a20caf6ab6df4643644c923f06a5eb,0.5 0d12f1c627df49eb223771c28548350e,0.5 ... You can submit up to 3 entries a day, so you want to be very happy with your model, and you are at least slightly disincentivised from trying to simply fit the answer key over time. It's still possible to cheat. If you do cheat, you won't win anything, since you will have to disclose your model for any prizes. At the end, you can submit 2 final submissions (allowing you to compete with 2 models if you like). This current competition is a 2 stage competition, where you have to participate in both stages to win. Stage one has you competing based on a validation dataset. At the release of stage 2, the validation set answers are released and then you make predictions on a new test set that comes out at the release of this second stage. About this specific competition At its core, the aim here is to take the sample data, consisting of low-dose CT scan information, and predict what the likelihood of a patient having lung cancer is. Your submission is scored based on the log loss of your predictions.
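Since scoring is log loss, it is worth seeing what that metric actually computes. Here is a minimal toy implementation of my own (not Kaggle's exact scoring code) that makes the behavior concrete:

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Average negative log-likelihood of the true labels under the predictions."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip so log() never sees exactly 0 or 1
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# Predicting 0.5 for everything scores ln(2) ~= 0.693, no matter the labels.
print(round(log_loss([1, 0, 1], [0.5, 0.5, 0.5]), 3))  # 0.693
```

The practical takeaway: a confident wrong answer (say, 0.99 on a patient who doesn't have cancer) is punished far harder than a timid 0.5, which is why the hedged 0.5 values show up in the sample submission.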
The dataset is pretty large at ~140GB just in initial training data, so this can be somewhat restrictive right out of the gate. I am going to do my best to make this tutorial one that anyone can follow within the built-in Kaggle kernels. Requirements and suggestions for following along I will be using Python 3, and you should at least know the basics of Python 3. We will also be making use of: Pandas for some data analysis Matplotlib for data visualization You do not need to go through all of those tutorials to follow here, but, if you are confused, it might be useful to poke around those. For the actual dependency installs and such, I will link to them as we go. Alright, let's get started! Section 1: Handling Data Assuming you've downloaded the data, what exactly are we working with here? The data consists of many 2D "slices," which, when combined, produce a 3-dimensional rendering of whatever was scanned. In this case, that's the chest cavity of the patient. We've got CT scans of about 1500 patients, and then we've got another file that contains the labels for this data. There are numerous ways that we could go about creating a classifier. Being a realistic data science problem, we actually don't really know what the best path is going to be. That's why this is a competition. Thus, we have to begin by simply trying things and seeing what happens! I have a few theories about what might work, but my first interest was to try a 3D Convolutional Neural Network. I've never had data to try one on before, so I was excited to try my hand at it! Before we can feed the data through any model, however, we need to at least understand the data we're working with. We know the scans are in this "dicom" format, but what is that? If you're like me, you have no idea what that is, or how it will look in Python! You can learn more about DICOM from Wikipedia if you like, but our main focus is what this will actually be in Python terms.
Luckily for us, there already exists a Python package for reading dicom files: Pydicom. Do a pip install pydicom and pip install pandas and let's see what we've got! <iframe width="560" height="315" src="https://www.youtube.com/embed/KlffppN47lc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe> End of explanation """ for patient in patients[:1]: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient # a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) print(len(slices),label) print(slices[0]) """ Explanation: At this point, we've got the list of patients by their IDs, and their associated labels stored in a dataframe. Now, we can begin to iterate through the patients and gather their respective data. We're almost certainly going to need to do some preprocessing of this data, but we'll see. End of explanation """ for patient in patients[:3]: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient # a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) print(slices[0].pixel_array.shape, len(slices)) """ Explanation: Above, we iterate through each patient, we grab their label, and we get the full path to that specific patient (inside THAT path are ~200ish scans, which we also iterate over, BUT also want to sort, since they won't necessarily be in proper order). Do note here that the actual scan, when loaded by dicom, is clearly not JUST some sort of array of values, instead it's got attributes. There are a few attributes here of arrays, but not all of them. We're sorting by the actual image position in the scan.
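That sort-by-attribute trick is worth internalizing, since it works on any object. A hardware-free toy stand-in (FakeSlice is hypothetical and mimics only the one attribute the sort cares about; real dicom objects carry many more):

```python
class FakeSlice:
    """Mimics just the ImagePositionPatient attribute used by the sort."""
    def __init__(self, z):
        self.ImagePositionPatient = [0.0, 0.0, z]

slices = [FakeSlice(z) for z in (7.5, -2.5, 2.5)]
# Same key as in the real code: sort on the z component of the position.
slices.sort(key=lambda x: int(x.ImagePositionPatient[2]))
print([s.ImagePositionPatient[2] for s in slices])  # [-2.5, 2.5, 7.5]
```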
Later, we could actually put these together to get a full 3D rendering of the scan. That's not in my plans here, since that's already been something covered very well, see this kernel: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial One immediate thing to note here is those rows and columns...holy moly, 512 x 512! This means, our 3D rendering is a 195 x 512 x 512 right now. That's huge! Alright, so we already know that we're going to absolutely need to resize this data. Being 512 x 512, I am already expecting all this data to be the same size, but let's see what we have from other patients too: End of explanation """ len(patients) """ Explanation: Alright, so above we just went ahead and grabbed the pixel_array attribute, which is what I assume to be the scan slice itself (we will confirm this soon), but immediately I am surprised by this non-uniformity of slices. This isn't quite ideal and will cause a problem later. All of our images are the same size, but the slices aren't. In terms of a 3D rendering, these actually are not the same size. We've got to actually figure out a way to solve that uniformity problem, but also...these images are just WAY too big for a convolutional neural network to handle without some serious computing power. Thus, we already know out of the gate that we're going to need to downsample this data quite a bit, AND somehow make the depth uniform. Welcome to data science! Okay, next question is...just how much data do we have here? End of explanation """ import matplotlib.pyplot as plt for patient in patients[:1]: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) # the first slice plt.imshow(slices[0].pixel_array) plt.show() """ Explanation: Oh.
(1595 in real data, 20 if you're in the Kaggle sample dataset) Well, that's also going to be a challenge for the convnet to figure out, but we're going to try! Also, there are outside data sources for more lung scans. For example, you can grab data from the LUNA2016 challenge: https://luna16.grand-challenge.org/data/ for another 888 scans. Do note that, if you do wish to compete, you can only use free datasets that are available to anyone who bothers to look. I'll have us stick to just the base dataset, again mainly so anyone can poke around this code in the kernel environment. Now, let's see what an actual slice looks like. If you do not have matplotlib, do pip install matplotlib <iframe width="560" height="315" src="https://www.youtube.com/embed/MqcZYw8Tgpc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe> Want to learn more about Matplotlib? Check out the Data Visualization with Python and Matplotlib tutorial. End of explanation """ import cv2 import numpy as np IMG_PX_SIZE = 150 for patient in patients[:1]: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) fig = plt.figure() for num,each_slice in enumerate(slices[:12]): y = fig.add_subplot(3,4,num+1) new_img = cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) y.imshow(new_img) plt.show() """ Explanation: Now, I am not a doctor, but I'm going to claim a mini-victory and say that's our first CT scan slice. We have about 200 slices though, I'd feel more comfortable if I saw a few more. Let's look at the first 12, and resize them with opencv. If you do not have opencv, do a pip install opencv-python (the package is imported as cv2). Want to learn more about what you can do with Open CV? Check out the Image analysis and manipulation with OpenCV and Python tutorial. You will also need numpy here.
You probably already have numpy if you installed pandas, but, just in case, numpy is pip install numpy Section 2: Processing and viewing our Data <iframe width="560" height="315" src="https://www.youtube.com/embed/lqhMTkouBx0?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe> End of explanation """ import math def chunks(l, n): # Credit: Ned Batchelder # Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] def mean(l): return sum(l) / len(l) IMG_PX_SIZE = 150 HM_SLICES = 20 data_dir = '../input/sample_images/' patients = os.listdir(data_dir) labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0) for patient in patients[:10]: try: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) new_slices = [] slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices] chunk_sizes = math.ceil(len(slices) / HM_SLICES) for slice_chunk in chunks(slices, chunk_sizes): slice_chunk = list(map(mean, zip(*slice_chunk))) new_slices.append(slice_chunk) print(len(slices), len(new_slices)) except: # some patients don't have labels, so we'll just pass on this for now pass """ Explanation: Alright, so we're resizing our images from 512x512 to 150x150. 150 is still going to wind up likely being waaaaaaay too big. That's fine, we can play with that constant more later, we just want to know how to do it. Okay, so now what? I think we need to address the whole non-uniformity of depth next. To be honest, I don't know of any super smooth way of doing this, but that's fine. I can at least think of A way, and that's all we need. My thought is that, what we have is really a big list of slices.
What we need is to be able to just take any list of images, whether it's got 200 scans, 150 scans, or 300 scans, and set it to be some fixed number. Let's say we want to have 20 scans instead. How can we do this? Well, first, we need something that will take our current list of scans, and chunk it into a list of lists of scans. I couldn't think of anything off the top of my head for this, so I Googled "how to chunk a list into a list of lists." This is how real programming happens. As per Ned Batchelder via Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks, we've got ourselves a nice chunker generator. Awesome! Thanks Ned! Okay, once we've got these chunks of these scans, what are we going to do? Well, we can just average them together. My theory is that a scan is a few millimeters of actual tissue at most. Thus, we can hopefully just average these slices together, and maybe we're now working with a centimeter or so. If there's a growth there, it should still show up on scan. This is just a theory, it has to be tested. As we continue through this, however, you're hopefully going to see just how many theories we come up with, and how many variables we can tweak and change to possibly get better results.
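The chunk-and-average plan can be dry-run on toy data before touching real scans. Here, two-element lists stand in for the 150x150 pixel arrays (my simplification, not the real data), and zip(*chunk) transposes each chunk so every pixel position gets averaged across the slices in it:

```python
import math

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

def mean(l):
    return sum(l) / len(l)

# Four tiny "slices" of 2 pixels each; squash them down to a depth of 2.
slices = [[0, 10], [2, 10], [4, 20], [6, 20]]
chunk_size = math.ceil(len(slices) / 2)
new_slices = [list(map(mean, zip(*chunk))) for chunk in chunks(slices, chunk_size)]
print(new_slices)  # [[1.0, 10.0], [5.0, 20.0]]
```

Note that with depths that don't divide evenly, math.ceil can leave you with one chunk more or fewer than the target, which is exactly the off-by-one headache the hacky fix-up code deals with.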
End of explanation """ for patient in patients[:10]: try: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) new_slices = [] slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices] chunk_sizes = math.ceil(len(slices) / HM_SLICES) for slice_chunk in chunks(slices, chunk_sizes): slice_chunk = list(map(mean, zip(*slice_chunk))) new_slices.append(slice_chunk) if len(new_slices) == HM_SLICES-1: new_slices.append(new_slices[-1]) if len(new_slices) == HM_SLICES-2: new_slices.append(new_slices[-1]) new_slices.append(new_slices[-1]) if len(new_slices) == HM_SLICES+2: new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],]))) del new_slices[HM_SLICES] new_slices[HM_SLICES-1] = new_val if len(new_slices) == HM_SLICES+1: new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],]))) del new_slices[HM_SLICES] new_slices[HM_SLICES-1] = new_val print(len(slices), len(new_slices)) except Exception as e: # again, some patients are not labeled, but JIC we still want the error if something # else is wrong with our code print(str(e)) """ Explanation: The struggle is real. Okay, what you're about to see you shouldn't attempt if anyone else is watching, like if you're going to show your code to the public... 
End of explanation """ for patient in patients[:1]: label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) new_slices = [] slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices] chunk_sizes = math.ceil(len(slices) / HM_SLICES) for slice_chunk in chunks(slices, chunk_sizes): slice_chunk = list(map(mean, zip(*slice_chunk))) new_slices.append(slice_chunk) if len(new_slices) == HM_SLICES-1: new_slices.append(new_slices[-1]) if len(new_slices) == HM_SLICES-2: new_slices.append(new_slices[-1]) new_slices.append(new_slices[-1]) if len(new_slices) == HM_SLICES+2: new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],]))) del new_slices[HM_SLICES] new_slices[HM_SLICES-1] = new_val if len(new_slices) == HM_SLICES+1: new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],]))) del new_slices[HM_SLICES] new_slices[HM_SLICES-1] = new_val fig = plt.figure() for num,each_slice in enumerate(new_slices): y = fig.add_subplot(4,5,num+1) y.imshow(each_slice, cmap='gray') plt.show() """ Explanation: Okay, the Python gods are really not happy with me for that hacky solution. If any of you would like to improve this chunking/averaging code, feel free. Really, any of this code...if you have improvements, share them! This is going to stay pretty messy. But hey, we did it! We figured out a way to make sure our 3 dimensional data can be at any resolution we want or need. Awesome! That's actually a decently large hurdle. Are we totally done? ...maybe not. One major issue is these colors and ranges of data. It's unclear to me whether or not a model would appreciate that. Even if we do a grayscale colormap in the imshow, you'll see that some scans are just darker overall than others. 
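One cheap way to level out those brightness differences, if we went down that road (a hedged sketch of plain per-slice min-max scaling, my assumption rather than something this notebook actually applies), would be:

```python
def normalize(img):
    """Rescale a 2D list of pixel values to the [0, 1] range."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    return [[(p - lo) / (hi - lo) for p in row] for row in img]

dark = [[100, 110], [120, 130]]
bright = [[1000, 1100], [1200, 1300]]
# After scaling, the overall-darker and overall-brighter slices look identical.
print(normalize(dark) == normalize(bright))  # True
```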
This might be problematic and we might need to actually normalize this dataset. I expect that, with a large enough dataset, this wouldn't be an actual issue, but, with this size of data, it might be of huge importance. In effort to not turn this notebook into an actual book, however, we're going to move forward! We can now see our new data by doing: End of explanation """ import numpy as np import pandas as pd import dicom import os import matplotlib.pyplot as plt import cv2 import math IMG_SIZE_PX = 50 SLICE_COUNT = 20 def chunks(l, n): # Credit: Ned Batchelder # Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] def mean(a): return sum(a) / len(a) def process_data(patient,labels_df,img_px_size=50, hm_slices=20, visualize=False): label = labels_df.get_value(patient, 'cancer') path = data_dir + patient slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)] slices.sort(key = lambda x: int(x.ImagePositionPatient[2])) new_slices = [] slices = [cv2.resize(np.array(each_slice.pixel_array),(img_px_size,img_px_size)) for each_slice in slices] chunk_sizes = math.ceil(len(slices) / hm_slices) for slice_chunk in chunks(slices, chunk_sizes): slice_chunk = list(map(mean, zip(*slice_chunk))) new_slices.append(slice_chunk) if len(new_slices) == hm_slices-1: new_slices.append(new_slices[-1]) if len(new_slices) == hm_slices-2: new_slices.append(new_slices[-1]) new_slices.append(new_slices[-1]) if len(new_slices) == hm_slices+2: new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],]))) del new_slices[hm_slices] new_slices[hm_slices-1] = new_val if len(new_slices) == hm_slices+1: new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],]))) del new_slices[hm_slices] new_slices[hm_slices-1] = new_val if visualize: fig = plt.figure() for num,each_slice in enumerate(new_slices): y = 
fig.add_subplot(4,5,num+1) y.imshow(each_slice, cmap='gray') plt.show() if label == 1: label=np.array([0,1]) elif label == 0: label=np.array([1,0]) return np.array(new_slices),label # stage 1 for real. data_dir = '../input/sample_images/' patients = os.listdir(data_dir) labels = pd.read_csv('../input/stage1_labels.csv', index_col=0) much_data = [] for num,patient in enumerate(patients): if num % 100 == 0: print(num) try: img_data,label = process_data(patient,labels,img_px_size=IMG_SIZE_PX, hm_slices=SLICE_COUNT) #print(img_data.shape,label) much_data.append([img_data,label]) except KeyError as e: print('This is unlabeled data!') np.save('muchdata-{}-{}-{}.npy'.format(IMG_SIZE_PX,IMG_SIZE_PX,SLICE_COUNT), much_data) """ Explanation: Section 3: Preprocessing our Data <iframe width="560" height="315" src="https://www.youtube.com/embed/_DAeMDMHgtY?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe> Okay, so we know what we've got, and what we need to do with it. We have a few options at this point, we could take the code that we have already and do the processing "online." By this, I mean, while training the network, we can actually just loop over our patients, resize the data, then feed it through our neural network. We actually don't have to have all of the data prepared before we go through the network. If you can preprocess all of the data into one file, and that one file doesn't exceed your available memory, then training should likely be faster, so you can more easily tweak your neural network and not be processing your data the same way over and over. In many more realistic examples in the world, however, your dataset will be so large, that you wouldn't be able to read it all into memory at once anyway, but you could still maintain one big database or something. Bottom line: There are tons of options here. 
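The "preprocess once, save one file" route boils down to an np.save/np.load round trip. A toy sketch with made-up shapes (two fake preprocessed patients of 20 slices at 50x50, not the real scans; note the real much_data holds [image, label] pairs, which numpy stores as an object array):

```python
import os
import tempfile
import numpy as np

much_data = np.stack([np.zeros((20, 50, 50)), np.ones((20, 50, 50))])
path = os.path.join(tempfile.mkdtemp(), 'muchdata-50-50-20.npy')
np.save(path, much_data)    # one file on disk, written once
loaded = np.load(path)      # later runs skip all the dicom/resize work
print(loaded.shape)  # (2, 20, 50, 50)
```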
Our dataset is only 1500 (even less if you are following in the Kaggle kernel) patients, and will be, for example, 20 slices of 150x150 image data if we went off the numbers we have now, but this will need to be even smaller for a typical computer most likely. Regardless, this much data won't be an issue to keep in memory or do whatever the heck we want. If at all possible, I prefer to separate out steps in any big process like this, so I am going to go ahead and pre-process the data, so our neural network code is much simpler. Also, there's no good reason to maintain a network in GPU memory while we're wasting time processing the data which can be easily done on a CPU. Now, I will just make a slight modification to all of the code up to this point, and add some new final lines to preprocess this data and save the array of arrays to a file: End of explanation """ import tensorflow as tf import numpy as np IMG_SIZE_PX = 50 SLICE_COUNT = 20 n_classes = 2 batch_size = 10 x = tf.placeholder('float') y = tf.placeholder('float') keep_rate = 0.8 def conv3d(x, W): return tf.nn.conv3d(x, W, strides=[1,1,1,1,1], padding='SAME') def maxpool3d(x): # ksize is the size of the window; strides is the movement of the window as you slide about return tf.nn.max_pool3d(x, ksize=[1,2,2,2,1], strides=[1,2,2,2,1], padding='SAME') """ Explanation: Section 4: 3D Convolutional Neural Network Moment-o-truth <iframe width="560" height="315" src="https://www.youtube.com/embed/CPZ5ihaNfJc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe> Okay, we've got preprocessed, normalized, data. Now we're ready to feed it through our 3D convnet and...see what happens! Now, I am not about to stuff a neural networks tutorial into this one. If you're already familiar with neural networks and TensorFlow, great! If not, as you might guess, I have a tutorial...or tutorials... for you!
To install the CPU version of TensorFlow, just do pip install tensorflow To install the GPU version of TensorFlow, you need to get alllll the dependencies and such. Installation tutorials: Installing the GPU version of TensorFlow in Ubuntu Installing the GPU version of TensorFlow on a Windows machine Using TensorFlow and concept tutorials: Introduction to deep learning with neural networks Introduction to TensorFlow Intro to Convolutional Neural Networks Convolutional Neural Network in TensorFlow tutorial Now, the data we have is actually 3D data, not 2D data that's covered in most convnet tutorials, including mine above. So what changes? EVERYTHING! OMG IT'S THE END OF THE WORLD AS WE KNOW IT!! It's not really all too bad. Your convolutional window/padding/strides need to change. Do note that, now, to have a bigger window, your processing penalty increases significantly as we increase in size, obviously much more than with 2D windows. Okay, let's begin. End of explanation """ def convolutional_neural_network(x): # 3 x 3 x 3 patches, 1 channel, 32 features to compute. weights = {'W_conv1':tf.Variable(tf.random_normal([3,3,3,1,32])), # 3 x 3 x 3 patches, 32 channels, 64 features to compute.
'W_conv2':tf.Variable(tf.random_normal([3,3,3,32,64])), # 64 features 'W_fc':tf.Variable(tf.random_normal([54080,1024])), 'out':tf.Variable(tf.random_normal([1024, n_classes]))} biases = {'b_conv1':tf.Variable(tf.random_normal([32])), 'b_conv2':tf.Variable(tf.random_normal([64])), 'b_fc':tf.Variable(tf.random_normal([1024])), 'out':tf.Variable(tf.random_normal([n_classes]))} # image X image Y image Z x = tf.reshape(x, shape=[-1, IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT, 1]) conv1 = tf.nn.relu(conv3d(x, weights['W_conv1']) + biases['b_conv1']) conv1 = maxpool3d(conv1) conv2 = tf.nn.relu(conv3d(conv1, weights['W_conv2']) + biases['b_conv2']) conv2 = maxpool3d(conv2) fc = tf.reshape(conv2,[-1, 54080]) fc = tf.nn.relu(tf.matmul(fc, weights['W_fc'])+biases['b_fc']) fc = tf.nn.dropout(fc, keep_rate) output = tf.matmul(fc, weights['out'])+biases['out'] return output """ Explanation: Now we're ready for the network itself: End of explanation """ much_data = np.load('muchdata-50-50-20.npy') # If you are working with the basic sample data, use maybe 2 instead of 100 here... you don't have enough data to really do this train_data = much_data[:-100] validation_data = much_data[-100:] def train_neural_network(x): prediction = convolutional_neural_network(x) cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y) ) optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost) hm_epochs = 10 with tf.Session() as sess: sess.run(tf.initialize_all_variables()) # saver.restore(sess, MODEL_PATH) # commented out: saver and MODEL_PATH are never defined here, so this line would raise a NameError successful_runs = 0 total_runs = 0 for epoch in range(hm_epochs): epoch_loss = 0 for data in train_data: total_runs += 1 try: X = data[0] Y = data[1] _, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y}) epoch_loss += c successful_runs += 1 except Exception as e: # I am passing for the sake of notebook space, but we are getting 1 shaping issue from one # input tensor. Not sure why, will have to look into it.
Guessing it's # one of the depths that doesn't come to 20. pass #print(str(e)) print('Epoch', epoch+1, 'completed out of',hm_epochs,'loss:',epoch_loss) correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct, 'float')) print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]})) print('Done. Finishing accuracy:') print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]})) print('fitment percent:',successful_runs/total_runs) # Run this locally: # train_neural_network(x) """ Explanation: Why 54080 magic number? To get this, I simply run the script once, and see what the error yells at me for the expected size multiple. This is certainly not the right way to go about it, but that's my 100% honest method, and my first time working in a 3D convnet. AFAIK, it's the padding that causes this to not be EXACTLY 50,000, (50 x 50 x 20 is the size of our actual input data, which is 50,000 total). Someone feel free to enlighten me how one could actually calculate this number beforehand. Now we're set to train the network. I am not going to ask the Kaggle online kernel to even bother building this computation graph, so I will comment out the line to actually run this. Just uncomment it locally and it will run. When running locally, make sure your training data is NOT the sample images, it should be the stage1 images. Your training file should be ~700mb with ~1400 total labeled samples. 
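For what it's worth, that 54080 can be derived ahead of time: with 'SAME' padding, each stride-2 max pool just ceil-divides every spatial dimension by 2, and the flattened size is whatever survives times the 64 output channels. A quick check, assuming the 50x50x20 input and the two pooling layers above:

```python
import math

def same_pool(dim, stride=2):
    """Output size of one 'SAME'-padded, stride-2 pooling step."""
    return math.ceil(dim / stride)

x, y, z = 50, 50, 20              # IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT
for _ in range(2):                # two maxpool3d layers
    x, y, z = same_pool(x), same_pool(y), same_pool(z)
n_features = 64                   # channels out of the second conv layer
print(x * y * z * n_features)     # 13 * 13 * 5 * 64 = 54080
```

The mismatch with 50,000 is exactly the ceiling: 50 -> 25 -> 13 rather than 12.5, and 20 -> 10 -> 5.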
End of explanation """ labels_df.cancer.value_counts() """ Explanation: Example output that I got: Epoch 1 completed out of 10 loss: 195148607547.0 Accuracy: 0.63 Epoch 2 completed out of 10 loss: 14236109414.9 Accuracy: 0.6 Epoch 3 completed out of 10 loss: 5744945978.94 Accuracy: 0.7 Epoch 4 completed out of 10 loss: 3268944715.44 Accuracy: 0.6 Epoch 5 completed out of 10 loss: 1916325681.66 Accuracy: 0.6 Epoch 6 completed out of 10 loss: 1014763813.3 Accuracy: 0.46 Epoch 7 completed out of 10 loss: 680146186.953 Accuracy: 0.54 Epoch 8 completed out of 10 loss: 289082075.259 Accuracy: 0.62 Epoch 9 completed out of 10 loss: 122785997.913 Accuracy: 0.57 Epoch 10 completed out of 10 loss: 96427552.5371 Accuracy: 0.51 Done. Finishing accuracy: Accuracy: 0.69 fitment percent: 0.9992289899768697 Section 5: Concluding Remarks So how did we do? Well, we overfit almost certainly. How about our accuracy? Due to the lower amount of data on Kaggle, I have no idea what number you're seeing, just know it's probably not all that great. Even if it was, what was the number to beat? Was it 50%, since it's either cancer or not? Not quite. The real number we need to beat is if our network was to always predict a single class. Let's see what the best score our classifier could get is if it just always picked the most common class: End of explanation """ labels_df.ix[-100:].cancer.value_counts() """ Explanation: So, actually, our dataset has 1035 non-cancer examples and 362 cancerous examples. Thus, an algorithm that always predicted no-cancer with our model would be ~ 74% accurate (1035/1397). We'd definitely want to confirm our testing set actually has this ratio before assuming anything. It might be the case our testing set has more cancerous examples, or maybe fewer, we really don't know. We can though: End of explanation """
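The arithmetic behind that ~74% majority-class baseline, spelled out from the counts above:

```python
non_cancer, cancer = 1035, 362
baseline_accuracy = non_cancer / (non_cancer + cancer)
print(round(baseline_accuracy, 4))  # 0.7409
```

Any model scoring below this number is doing worse than a constant "no cancer" guess, which is why 74%, not 50%, is the bar to clear.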
AEW2015/PYNQ_PR_Overlay
Pynq-Z1/notebooks/examples/pmod_grove_tmp.ipynb
bsd-3-clause
from pynq.pl import Overlay Overlay("base.bit").download() """ Explanation: Grove Temperature Sensor 1.2 This example shows how to use the Grove Temperature Sensor v1.2 on the Pynq-Z1 board. You will also see how to plot a graph using matplotlib. The Grove Temperature sensor produces an analog signal, and requires an ADC. A Grove Temperature sensor and Pynq Grove Adapter, or Pynq Shield is required. The Grove Temperature Sensor, Pynq Grove Adapter, and Grove I2C ADC are used for this example. You can read a single value of temperature or read multiple values at regular intervals for a desired duration. At the end of this notebook, a Python only solution with single-sample read functionality is provided. 1. Load overlay End of explanation """ import math from pynq.iop import Grove_TMP from pynq.iop import PMODB from pynq.iop import PMOD_GROVE_G4 tmp = Grove_TMP(PMODB, PMOD_GROVE_G4) temperature = tmp.read() print(float("{0:.2f}".format(temperature)),'degree Celsius') """ Explanation: 2. Read single temperature This example shows on how to get a single temperature sample from the Grove TMP sensor. The Grove ADC is assumed to be attached to the GR4 connector of the StickIt. The StickIt module is assumed to be plugged in the 1st PMOD labeled JB. The Grove TMP sensor is connected to the other connector of the Grove ADC. Grove ADC provides a raw sample which is converted into resistance first and then converted into temperature. End of explanation """ import time %matplotlib inline import matplotlib.pyplot as plt tmp.set_log_interval_ms(100) tmp.start_log() # Change input during this time time.sleep(10) tmp_log = tmp.get_log() plt.plot(range(len(tmp_log)), tmp_log, 'ro') plt.title('Grove Temperature Plot') min_tmp_log = min(tmp_log) max_tmp_log = max(tmp_log) plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log]) plt.show() """ Explanation: 3. 
Start logging once every 100ms for 10 seconds Executing the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touching/holding the temperature sensor to vary the measured temperature. You can vary the logging interval and the duration by changing the values 100 and 10 in the cell below. The raw samples are stored in the internal memory, and converted into temperature values. End of explanation """ from time import sleep from math import log from pynq.iop import PMOD_GROVE_G3 from pynq.iop import PMOD_GROVE_G4 from pynq.iop.pmod_iic import Pmod_IIC class Python_Grove_TMP(Pmod_IIC): """This class controls the grove temperature sensor. This class inherits from the PMODIIC class. Attributes ---------- iop : _IOP The _IOP object returned from the DevMode. scl_pin : int The SCL pin number. sda_pin : int The SDA pin number. iic_addr : int The IIC device address. """ def __init__(self, pmod_id, gr_pins, model = 'v1.2'): """Return a new instance of a grove temperature sensor object. Parameters ---------- pmod_id : int The PMOD ID (1, 2) corresponding to (PMODA, PMODB). gr_pins: list The group pins on Grove Adapter. G3 or G4 is valid. model : string Temperature sensor model (can be found on the device). """ if gr_pins in [PMOD_GROVE_G3, PMOD_GROVE_G4]: [scl_pin,sda_pin] = gr_pins else: raise ValueError("Valid group numbers are G3 and G4.") # Each revision has its own B value if model == 'v1.2': # v1.2 uses thermistor NCP18WF104F03RC self.bValue = 4250 elif model == 'v1.1': # v1.1 uses thermistor NCP18WF104F03RC self.bValue = 4250 else: # v1.0 uses thermistor TTC3A103*39H self.bValue = 3975 super().__init__(pmod_id, scl_pin, sda_pin, 0x50) # Initialize the Grove ADC self.send([0x2,0x20]) def read(self): """Read temperature in Celsius from grove temperature sensor. Parameters ---------- None Returns ------- float Temperature reading in Celsius.
""" val = self._read_grove_adc() R = 4095.0/val - 1.0 temp = 1.0/(log(R)/self.bValue + 1/298.15)-273.15 return temp def _read_grove_adc(self): self.send([0]) bytes = self.receive(2) return 2*(((bytes[0] & 0x0f) << 8) | bytes[1]) from pynq import PL # Flush IOP state PL.reset_ip_dict() py_tmp = Python_Grove_TMP(PMODB, PMOD_GROVE_G4) temperature = py_tmp.read() print(float("{0:.2f}".format(temperature)),'degree Celsius') """ Explanation: 4. A Pure Python class to exercise the AXI IIC Controller inheriting from PMOD_IIC This class is ported from http://www.seeedstudio.com/wiki/Grove_-_Temperature_Sensor. End of explanation """
lilleswing/deepchem
examples/tutorials/27_Using_Reinforcement_Learning_to_Play_Pong.ipynb
mit
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ !pip install 'gym[atari]' """ Explanation: Tutorial Part 27: Using Reinforcement Learning to Play Pong This tutorial demonstrates using reinforcement learning to train an agent to play Pong. This task isn't directly related to chemistry, but video games make an excellent demonstration of reinforcement learning techniques. Colab This tutorial and the rest in this sequence can be done in Google Colab (although the visualization at the end doesn't work correctly on Colab, so you might prefer to run this tutorial locally). If you'd like to open this notebook in colab, you can use the following link. Setup To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. To install gym you should also use pip install 'gym[atari]' (We need the extra modifier since we'll be using an atari game). We'll add this command onto our usual Colab installation commands for you End of explanation """ import deepchem as dc import numpy as np class PongEnv(dc.rl.GymEnvironment): def __init__(self): super(PongEnv, self).__init__('Pong-v0') self._state_shape = (80, 80) @property def state(self): # Crop everything outside the play area, reduce the image size, # and convert it to black and white. cropped = np.array(self._state)[34:194, :, :] reduced = cropped[0:-1:2, 0:-1:2] grayscale = np.sum(reduced, axis=2) bw = np.zeros(grayscale.shape) bw[grayscale != 233] = 1 return bw def __deepcopy__(self, memo): return PongEnv() env = PongEnv() """ Explanation: Reinforcement Learning Reinforcement learning involves an agent that interacts with an environment. 
In this case, the environment is the video game and the agent is the player. By trial and error, the agent learns a policy that it follows to perform some task (winning the game). As it plays, it receives rewards that give it feedback on how well it is doing. In this case, it receives a positive reward every time it scores a point and a negative reward every time the other player scores a point. The first step is to create an Environment that implements this task. Fortunately, OpenAI Gym already provides an implementation of Pong (and many other tasks appropriate for reinforcement learning). DeepChem's GymEnvironment class provides an easy way to use environments from OpenAI Gym. We could just use it directly, but in this case we subclass it and preprocess the screen image a little bit to make learning easier. End of explanation """ import tensorflow as tf from tensorflow.keras.layers import Input, Concatenate, Conv2D, Dense, Flatten, GRU, Reshape class PongPolicy(dc.rl.Policy): def __init__(self): super(PongPolicy, self).__init__(['action_prob', 'value', 'rnn_state'], [np.zeros(16)]) def create_model(self, **kwargs): state = Input(shape=(80, 80)) rnn_state = Input(shape=(16,)) conv1 = Conv2D(16, kernel_size=8, strides=4, activation=tf.nn.relu)(Reshape((80, 80, 1))(state)) conv2 = Conv2D(32, kernel_size=4, strides=2, activation=tf.nn.relu)(conv1) dense = Dense(256, activation=tf.nn.relu)(Flatten()(conv2)) gru, rnn_final_state = GRU(16, return_state=True, return_sequences=True, time_major=True)( Reshape((-1, 256))(dense), initial_state=rnn_state) concat = Concatenate()([dense, Reshape((16,))(gru)]) action_prob = Dense(env.n_actions, activation=tf.nn.softmax)(concat) value = Dense(1)(concat) return tf.keras.Model(inputs=[state, rnn_state], outputs=[action_prob, value, rnn_final_state]) policy = PongPolicy() """ Explanation: Next we create a model to implement our policy. 
This model receives the current state of the environment (the pixels being displayed on the screen at this moment) as its input. Given that input, it decides what action to perform. In Pong there are three possible actions at any moment: move the paddle up, move it down, or leave it where it is. The policy model produces a probability distribution over these actions. It also produces a value output, which is interpreted as an estimate of how good the current state is. This turns out to be important for efficient learning. The model begins with two convolutional layers to process the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game logic. We also add a small Gated Recurrent Unit (GRU). That gives the network a little bit of memory, so it can keep track of which way the ball is moving. Just from the screen image, you cannot tell whether the ball is moving to the left or to the right, so having memory is important. We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the network's outputs. One computes the action probabilities, and the other computes an estimate of the state value function. We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm. End of explanation """ from deepchem.models.optimizers import Adam a2c = dc.rl.A2C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002)) """ Explanation: We will optimize the policy using the Advantage Actor Critic (A2C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate. End of explanation """ # Change this to train as many steps as you have patience for. a2c.fit(1000) """ Explanation: Optimize for as long as you have patience to. 
By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps. End of explanation """ # This code doesn't work well on Colab env.reset() while not env.terminated: env.env.render() env.step(a2c.select_action(env.state)) """ Explanation: Let's watch it play and see how it does! End of explanation """
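Under the hood, A2C builds its value targets and advantage estimates from discounted returns: each reward is propagated backwards through the episode with a discount factor gamma. A toy, framework-free illustration of that backward recursion (the gamma values here are arbitrary; DeepChem computes this bookkeeping internally):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute the discounted return G_t = r_t + gamma * G_{t+1}
    for every timestep, working backwards from the episode end."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# A short episode: no reward until the final step scores a point.
g = discounted_returns([0.0, 0.0, 1.0], gamma=0.5)  # -> [0.25, 0.5, 1.0]
```

Early actions get credit for the eventual point, scaled down by how far away it was, which is exactly what lets the policy learn from Pong's sparse scoring.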
alexandrejaguar/strata-sv-2015-tutorial
resources/Vis1.ipynb
bsd-3-clause
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Visualization 1: Matplotlib Basics Imports The following imports should be used in all of your notebooks where Matplotlib is used: End of explanation """ t = np.linspace(0,4*np.pi,100) plt.plot(t, np.sin(t)) plt.xlabel('Time') plt.ylabel('Signal') plt.title('My Plot') """ Explanation: Basic plotting For now, we will work with basic x, y plots to show how the Matplotlib plotting API works. End of explanation """ plt.plot(t, np.sin(t), 'm-.*') """ Explanation: Quick series styling With a third argument you can provide the series color and line/marker style: End of explanation """ from matplotlib import lines lines.lineStyles.keys() from matplotlib import markers markers.MarkerStyle.markers.keys() """ Explanation: Here is a list of the single character color strings: b: blue g: green r: red c: cyan m: magenta y: yellow k: black w: white The following will show all of the line and marker styles: End of explanation """ plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo') plt.axis([-1,3,0,1.]) """ Explanation: To change the plot's viewport, use plt.axis([xmin,xmax,ymin,ymax]): End of explanation """ plt.plot(t, np.sin(t), 'r.', t, np.cos(t), 'g-') """ Explanation: Multiple series You can provide multiple series in a single call to plot: End of explanation """ plt.plot(t, np.sin(t)) plt.plot(t, np.cos(t)) """ Explanation: Or you can make multiple calls to plot: End of explanation """ plt.subplot(1,2,1) plt.plot(t, np.exp(0.1*t)) plt.ylabel('Exponential') plt.subplot(1,2,2) plt.plot(t, np.sin(t)) plt.ylabel('Sine') plt.xlabel('x') """ Explanation: Subplots You can use the subplot function to create a grid of plots in a single figure.
End of explanation """ plt.plot(t, np.sin(t), marker='o', color='darkblue', linestyle='--', alpha=0.3, markersize=10) """ Explanation: More line styling All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. See Controlling line properties for more details: End of explanation """
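The index passed to plt.subplot(rows, cols, index) is 1-based and counts left-to-right, then top-to-bottom across the grid. A small helper (not part of Matplotlib, purely illustrative) that makes the mapping explicit:

```python
def subplot_position(rows, cols, index):
    """Return the 0-based (row, col) grid position for a 1-based
    subplot index, counted left-to-right then top-to-bottom,
    matching the convention of plt.subplot(rows, cols, index)."""
    if not 1 <= index <= rows * cols:
        raise ValueError("index out of range for this grid")
    return (index - 1) // cols, (index - 1) % cols

# In the 1x2 grid used above, plt.subplot(1, 2, 2) addresses
# row 0, column 1 (the right-hand axes).
pos = subplot_position(1, 2, 2)  # -> (0, 1)
```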
jupyter/nbgrader
nbgrader/docs/source/user_guide/managing_the_database.ipynb
bsd-3-clause
%%bash # remove the existing database, to start fresh rm gradebook.db """ Explanation: Managing the database Most of the important information that nbgrader has access to---information about students, assignments, grades, etc.---is stored in the nbgrader database. Much of this is added to the database automatically by nbgrader, with the exception of two types of information: which students are in your class, and which assignments you have. There are two methods for adding students and assignments to the database. End of explanation """ %%bash nbgrader db assignment add ps1 --duedate="2015-02-02 17:00:00 UTC" """ Explanation: Managing assignments To add assignments, we can use the nbgrader db assignment add command, which takes the name of the assignment as well as optional arguments (such as its due date): End of explanation """ %%bash nbgrader db assignment list """ Explanation: After we have added the assignment, we can view what assignments exist in the database with nbgrader db assignment list: End of explanation """ %%file assignments.csv name,duedate ps1,2015-02-02 17:00:00 UTC ps2,2015-02-09 17:00:00 UTC """ Explanation: An alternate way to add assignments is a batch method of importing a CSV file. The file must have a column called name, and may optionally have columns for other assignment properties (such as the due date): End of explanation """ %%bash nbgrader db assignment import assignments.csv """ Explanation: Then, to import this file, we use the nbgrader db assignment import command: End of explanation """ %%bash nbgrader db assignment remove ps1 """ Explanation: We can also remove assignments from the database with nbgrader db assignment remove. Be very careful using this command, as it is possible you could lose data! 
End of explanation """ %%bash nbgrader db student add bitdiddle --last-name=Bitdiddle --first-name=Ben nbgrader db student add hacker --last-name=Hacker --first-name=Alyssa """ Explanation: Managing students Managing students in the database works almost exactly the same as managing assignments. To add students, we use the nbgrader db student add command: End of explanation """ %%bash nbgrader db student list """ Explanation: And to list the students in the database, we use the nbgrader db student list command: End of explanation """ %%file students.csv id,last_name,first_name,email bitdiddle,Bitdiddle,Ben, hacker,Hacker,Alyssa, %%bash nbgrader db student import students.csv """ Explanation: Like with the assignments, we can also batch add students to the database using the nbgrader db student import command. We first have to create a CSV file, which is required to have a column for id, and optionally may have columns for other student information (such as their name): End of explanation """ %%bash nbgrader db student remove bitdiddle """ Explanation: We can also remove students from the database with nbgrader db student remove. Be very careful using this command, as it is possible you could lose data! End of explanation """
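Because the import commands consume plain CSV, roster files can also be generated programmatically, for example from a student information system export. A sketch of writing a students.csv in the shape that nbgrader db student import expects (id required, other columns optional; the roster entries reuse the demo students from above):

```python
import csv

def write_student_csv(path, students):
    """Write a roster to CSV with the columns nbgrader's student
    import expects: 'id' is required, the rest are optional."""
    fieldnames = ["id", "last_name", "first_name", "email"]
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        writer.writeheader()
        for student in students:
            writer.writerow(student)

roster = [
    {"id": "bitdiddle", "last_name": "Bitdiddle", "first_name": "Ben", "email": ""},
    {"id": "hacker", "last_name": "Hacker", "first_name": "Alyssa", "email": ""},
]
write_student_csv("students.csv", roster)
```

The resulting file is equivalent to the %%file cell above and can be fed straight to the import command.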
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
mit
import numpy as np import matplotlib.pyplot as plt %matplotlib notebook """ Explanation: (Re)Introduction to Image Processing Version 0.1 During Session 1 of the DSFP, Robert Lupton provided a problem that brilliantly introduced some of the basic challenges associated with measuring the flux of a point source. As such, we will revisit that problem as a review/introduction to the remainder of the week. By AA Miller (CIERA/Northwestern & Adler) <br> [But please note that this is essentially a copy of Robert's lecture.] End of explanation """ def phi(x, mu, fwhm): # complete """ Explanation: Problem 1) An (oversimplified) 1-D Model For this introductory problem we are going to simulate a 1 dimensional detector (the more complex issues associated will real stars on 2D detectors will be covered tomorrow by Dora). We will generate stars as Gaussians $N(\mu, \sigma^2)$, with mean $\mu$ and variance $\sigma^2$. As observed by LSST, all stars are point sources that reflect the point spread function (PSF), which is produced by a combination of the atmosphere, telescope, and detector. A standard measure of the PSF's width is the Full Width Half Maximum (FWHM). There is also a smooth background of light from several sources that I previously mentioned (the atmosphere, the detector, etc). We will refer to this background simply as "The Sky". Problem 1a Write a function phi() to simulate a (noise-free) 1D Gaussian PSF. The function should take mu and fwhm as arguments, and evaluate the PSF along a user-supplied array x. Hint - for a Gaussian $N(0, \sigma^2)$, the FWHM is $2\sqrt{2\ln(2)}\,\sigma \approx 2.3548\sigma$. End of explanation """ x = # complete plt.plot( # complete print("The flux of the star is: {:.3f}".format( # complete """ Explanation: Problem 1b Plot the noise-free PSF for a star with $\mu = 10$ and $\mathrm{FWHM} = 3$. What is the flux of this star? 
End of explanation """ plt.plot( # complete """ Explanation: Problem 1c Add Sky noise (a constant in this case) to your model. Define the sky as S, with total stellar flux F. Plot the model for S = 100 and F = 500. End of explanation """ # complete noisy_flux = # complete """ Explanation: Problem 2) Add Noise We will add noise to this simulation assuming that photon counting contributes the only source of uncertainty (this assumption is far from sufficient in real life). Within each pixel, $n$ photons are detected with an uncertainty that follows a Poisson distribution, which has the property that the mean $\mu$ is equal to the variance $\mu$. If $n \gg 1$ then $P(\mu) \approx N(\mu, \mu)$ [you can safely assume we will be in this regime for the remainder of this problem]. Problem 2a Calculate the noisy flux for the simulated star in Problem 1c. Hint - you may find the function np.random.normal() helpful. End of explanation """ plt.plot( # complete plt.errorbar( # complete """ Explanation: Problem 2b Overplot the noisy signal, with the associated uncertainties, on top of the noise-free signal. End of explanation """ def simulate(# complete # complete # complete """ Explanation: Problem 3) Flux Measurement We will now attempt to measure the flux from a simulated star. Problem 3a Write a function simulate() to simulate the noisy flux measurements of a star with centroid mu, FWHM fwhm, sky background S, and flux F. Hint - it may be helpful to plot the output of your function. End of explanation """ # complete sim_star = simulate( # complete ap_flux = # complete print("The star has flux = {:.3f}".format( # complete """ Explanation: Problem 3b Using an aperture with radius of 5 pixels centered on the source, measure the flux from a star centered at mu = 0, with fwhm = 5, S = 100, and F = 1000. Hint - assume you can perfectly measure the background, and subtract this prior to the measurement. 
End of explanation """ sim_fluxes = # complete for # complete print("The mean flux = {:.3f} with variance = {:.3f}".format( # complete """ Explanation: Problem 3c Write a Monte Carlo simulator to estimate the mean and standard deviation of the flux from the simulated star. Food for thought - what do you notice if you run your simulator many times? End of explanation """ psf = # complete """ Explanation: Problem 4) PSF Flux measurement In this problem we are going to use our knowledge of the PSF to estimate the flux of the star. We will compare these measurements to the aperture flux measurements above. Problem 4a Create the psf model, psf, which is equivalent to a noise-free star with fwhm = 5. End of explanation """ sim_star = simulate( # complete psf_flux = # complete print("The PSF flux is {:.3f}".format( # complete """ Explanation: Problem 4b Using the same parameters as problem 3, simulate a star and measure its PSF flux. End of explanation """ sim_fluxes = # complete for # complete print("The mean flux = {:.3f} with variance = {:.3f}".format( # complete """ Explanation: Problem 4c As before, write a Monte Carlo simulator to estimate the PSF flux of the star. How do your results compare to above? End of explanation """
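For reference, one possible sketch of the Problem 1a PSF (a worked suggestion, not an official answer key): the only subtle step is recovering sigma from the FWHM via the 2*sqrt(2*ln 2) factor quoted in the hint, then evaluating a unit-area Gaussian so later flux estimates come out in units of F:

```python
import numpy as np

def phi(x, mu, fwhm):
    """Evaluate a noise-free 1D Gaussian PSF on the grid x.

    The FWHM of N(mu, sigma^2) is 2*sqrt(2*ln 2)*sigma ~ 2.3548*sigma,
    so sigma is recovered first, then the unit-area Gaussian is evaluated.
    """
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-20, 20, 801)
psf = phi(x, mu=0, fwhm=3)
# On this grid np.sum(psf) * dx approximates the integral, i.e. ~1.
```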
CAChemE/curso-python-datos
notebooks/005_SWC_defensive_programming.ipynb
bsd-3-clause
# This code has an intentional error. You can type it directly or # use it for reference to understand the error message below. def favorite_ice_cream(): ice_creams = [ "chocolate", "vanilla", "strawberry" ] print(ice_creams[3]) favorite_ice_cream() # Syntax error def some_function() msg = "hello, world!" print(msg) return msg # Indentation Error def some_function(): msg = "hello, world!" print(msg) return msg # Tab Error def some_function(): msg = "hello, world!" print(msg) return msg # Not defined for number in range(10): count = count + number print("The count is:", count) # File Error file_handle = open('myfile.txt', 'r') """ Explanation: Errors and Exceptions Every programmer encounters errors, both those who are just beginning, and those who have been programming for years. Encountering errors and exceptions can be very frustrating at times, and can make coding feel like a hopeless endeavour. However, understanding what the different types of errors are and when you are likely to encounter them can help a lot. Once you know why you get certain types of errors, they become much easier to fix. Errors in Python have a very specific form, called a traceback. Let’s examine some: End of explanation """ # This code has an intentional error. Do not type it directly; # use it for reference to understand the error message below. def print_message(day): messages = { "monday": "Hello, world!", "tuesday": "Today is tuesday!", "wednesday": "It is the middle of the week.", "thursday": "Today is Donnerstag in German!", "friday": "Last day of the week!", "saturday": "Hooray for the weekend!", "sunday": "Aw, the weekend is almost over." } print(messages[day]) def print_friday_message(): print_message("Friday") print_friday_message() """ Explanation: Excercise Read the python code and the resulting traceback below, and answer the following questions: How many levels does the traceback have? What is the function name where the error occurred? 
On which line number in this function did the error occurr? What is the type of error? What is the error message? End of explanation """ seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[4]) """ Explanation: Excercise: Fix the following code: End of explanation """ numbers = [1.5, 2.3, 0.7, -0.001, 4.4] total = 0.0 for n in numbers: # Data should only contain positive values total += n print('total is:', total) def normalize_rectangle(rect): '''Normalizes a rectangle so that it is at the origin and 1.0 units long on its longest axis.''' assert len(rect) == 4, 'Rectangles must contain 4 coordinates' x0, y0, x1, y1 = rect assert x0 < x1, 'Invalid X coordinates' assert y0 < y1, 'Invalid Y coordinates' dx = x1 - x0 dy = y1 - y0 if dx > dy: scaled = float(dx) / dy upper_x, upper_y = 1.0, scaled else: scaled = float(dx) / dy upper_x, upper_y = scaled, 1.0 assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid' assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid' return (0, 0, upper_x, upper_y) """ Explanation: Excercise: Read the code below, and (without running it) try to identify what the errors are. * Run the code, and read the error message. Is it a SyntaxError or an IndentationError? * Fix the error. * Repeat, until you have fixed all the errors. Excercise: Read the code below, and (without running it) try to identify what the errors are. Run the code, and read the error message. What type of NameError do you think this is? In other words, is it a string with no quotes, a misspelled variable, or a variable that should have been defined but was not? Fix the error. Repeat steps 2 and 3, until you have fixed all the errors. 
Assertions End of explanation """ patients = [[70, 1.8], [80, 1.9], [150, 1.7]] def calculate_bmi(weight, height): return weight / (height ** 2) for patient in patients: weight, height = patients[0] bmi = calculate_bmi(height, weight) print("Patient's BMI is: %f" % bmi) """ Explanation: Debugging You are assisting a researcher with Python code that computes the Body Mass Index (BMI) of patients. The researcher is concerned because all patients seemingly have unusual and identical BMIs, despite having different physiques. BMI is calculated as weight in kilograms divided by the square of height in metres. Use the debugging principles in this exercise and locate problems with the code. What suggestions would you give the researcher for ensuring any later changes they make work correctly? End of explanation """
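Once the bugs in the BMI code are located, the debugging observations can be turned into defensive assertions so that later changes fail loudly instead of silently producing nonsense. One possible hardened version (a sketch, not the canonical lesson solution; the plausibility bounds are illustrative choices):

```python
def calculate_bmi(weight, height):
    """BMI = weight (kg) / height (m) squared, with defensive checks.

    The assertions catch the most likely regression: swapping the
    argument order, which puts a height-sized number in weight's slot.
    """
    assert 0 < weight < 500, "weight in kg should be positive and plausible"
    assert 0 < height < 3, "height in metres should be positive and plausible"
    return weight / (height ** 2)

patients = [[70, 1.8], [80, 1.9], [150, 1.7]]
# Iterate over each patient (not patients[0]) and pass weight first.
bmis = [calculate_bmi(weight, height) for weight, height in patients]
```

With the loop variable and argument order corrected, the three patients now get three distinct BMIs, and a swapped call such as calculate_bmi(1.8, 70) raises an AssertionError immediately.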
briennakh/BIOF509
Wk07/Wk07_solutions.ipynb
mit
def plot_arm_frequency(simulation, ax, marker='.', linestyle='', color='k', label=''): """Plot the frequency with which the second arm is chosen NOTE: Currently only works for two arms""" ax.plot(simulation.arm_choice.mean(axis=0), marker=marker, linestyle=linestyle, color=color, label=label) ax.set_title('Frequency of arm choice') ax.set_xlabel('Trial') ax.set_ylabel('Frequency') return ax def plot_reward(simulation, ax, marker='.', linestyle='-', color='k', label=''): """Plot the average reward for each trial across all simulations""" ax.plot(simulation.reward.mean(axis=0), marker=marker, linestyle=linestyle, color=color, label=label) ax.set_title('Reward') ax.set_xlabel('Trial') ax.set_ylabel('Reward') return ax def plot_cumulative_reward(simulation, ax, marker='', linestyle='-', color='k', label=''): """Plot the cumulative reward across all simulations""" ax.plot(np.cumsum(simulation.reward, axis=1).mean(axis=0), marker=marker, linestyle=linestyle, color=color, label=label) ax.set_title('Cumulative Reward') ax.set_xlabel('Trial') ax.set_ylabel('Cumulative Reward') return ax def plot_summary(model, axes, color='', label=''): plot_arm_frequency(model, ax=axes[0], color=color, label=label) plot_reward(model, ax=axes[1], color=color, label=label) plot_cumulative_reward(model, ax=axes[2], color=color, label=label) for ax in axes: ax.legend(loc=4) return axes fig, axes = plt.subplots(1,3, figsize=(18,6)) model = Model(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.05}, weights=[0.1, 0.2]) model.repeat_simulation() plot_summary(model, axes, color='k', label='epsilon=0.05') model = Model(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.25}, weights=[0.1, 0.2]) model.repeat_simulation() plot_summary(model, axes, color='b', label='epsilon=0.25') model = Model(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.5}, weights=[0.1, 0.2]) model.repeat_simulation() plot_summary(model, axes, color='c', label='epsilon=0.5') plt.show() """ Explanation: Assignments Send in a rough outline of your project 
idea. This is not graded, I will ask for a more complete description later for inclusion in grading. Plot the performance of the RCT and EpsilonGreedy algorithms on the same plots so that they can more easily be compared. Investigate how changing the value of epsilon changes the performance of the EpsilonGreedy algorithm. Reuse your approach from assignment #2 to plot the performance for different epsilon values. When we have very little information on the relative performance of the two arms a high exploration rate quickly provides us with additional information. However, after several hundred trials we are relatively confident in the performance of each arm and a high exploration rate is detrimental as we will be choosing an arm we know to be inferior at a high rate. A better approach would be to reduce the exploration rate as we acquire more information. This is a very similar approach to the simulated annealing optimizer we looked at in week 2. Create a new class that inherits from EpsilonGreedy and gradually reduces the value of epsilon over time. Due for next week Rough outline of project idea Plots showing both EpsilonGreedy and RCT performance Plots showing EpsilonGreedy performance with different epsilon values Class implementing EpsilonGreedy with an adaptive epsilon value. The same code can be used for both #2 and #3 so only #3 will be covered: The three plotting methods have been moved into standalone functions. This isn't strictly necessary, but is one approach. An extra function has been created combining the three plotting functions. Labels and plotting options are used to differentiate between each of the algorithms being plotted. 
End of explanation """ t = np.arange(1000) plt.plot(t, (1+t)**-0.5, label='1/sqrt') plt.plot(t, (1+t)**-0.2, label='1/5th-root') plt.plot(t, np.exp(-(t/200.)), label='exp^-t/200') u = np.concatenate((np.ones(100), np.ones(200) * 0.75, np.ones(200) * 0.5, np.ones(200) * 0.25, np.ones(300) * 0.05)) plt.plot(t,u, label='Steps') plt.legend() plt.show() class AdaptiveEpsilonGreedy(RCT): @property def epsilon(self): return self._epsilon * np.exp(-(sum(self.counts)/200.)) fig, axes = plt.subplots(1,3, figsize=(18,6)) model = Model(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.05}, weights=[0.1, 0.2], size=1000) model.repeat_simulation() plot_summary(model, axes, color='k', label='epsilon=0.05') model = Model(AdaptiveEpsilonGreedy, {'n_arms': 2, 'epsilon':1.00}, weights=[0.1, 0.2], size=1000) model.repeat_simulation() plot_summary(model, axes, color='c', label='AdaptiveEpsilonGreedy') plt.show() """ Explanation: The Adaptive Epsilon Greedy algorithm for #4 should change the value of epsilon (and the likelihood of choosing randomly) as the number of trials increase. 
There are many ways this could be implemented: End of explanation """ class DynamicModel(object): def __init__(self, algo, algo_kwargs, weights=[0.1, 0.1], size=100, repeats=200): self.algo = algo self.algo_kwargs = algo_kwargs self.weights = weights self.size = size self.repeats = repeats def run_simulation(self): """Run a single simulation, recording the performance""" algo = self.algo(**self.algo_kwargs) arm_choice_record = [] reward_record = [] weights = self.weights[:] for i in range(self.size): arm = algo.choose_arm() arm_choice_record.append(arm) reward = np.random.random() < weights[arm] reward_record.append(reward) algo.update(arm, reward) if i == self.size / 2: #print('Switching rewards') weights[0], weights[1] = weights[1], weights[0] return arm_choice_record, reward_record def repeat_simulation(self): """Run multiple simulations, recording the performance of each""" arm_choice = [] reward = [] for i in range(self.repeats): arm_choice_record, reward_record = self.run_simulation() arm_choice.append(arm_choice_record) reward.append(reward_record) self.arm_choice = np.array(arm_choice) self.reward = np.array(reward) fig, axes = plt.subplots(1,3, figsize=(18,6)) model = DynamicModel(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.05}, weights=[0.1, 0.2], size=2000) model.repeat_simulation() plot_summary(model, axes, color='k', label='epsilon=0.05') model = DynamicModel(AdaptiveEpsilonGreedy, {'n_arms': 2, 'epsilon':1.00}, weights=[0.1, 0.2], size=2000) model.repeat_simulation() plot_summary(model, axes, color='c', label='AdaptiveEpsilonGreedy') plt.show() """ Explanation: The falling epsilon value means the AdaptiveEpsilonGreedy algorithm will explore less and less of the time, instead exploiting the arm it knows to be best. This has advantages and disadvantages. If the environment is stable the cumulative reward will be higher but it will be very slow to respond to any changes. 
For example, if the rewards for the two arms were to switch: End of explanation """ class DynamicEpsilonGreedy(AdaptiveEpsilonGreedy): def update(self, arm, reward): """Update an arm with the reward""" self.counts[arm] = self.counts[arm] + 1 n = self.counts[arm] # New experiences will represent at least 100th of the running value estimation if n > 100: n = 100 value = self.values[arm] self.values[arm] = ((n - 1) / n) * self.values[arm] + (1/n) * reward fig, axes = plt.subplots(1,3, figsize=(18,6)) model = DynamicModel(EpsilonGreedy, {'n_arms': 2, 'epsilon':0.05}, weights=[0.1, 0.2], size=2000) model.repeat_simulation() plot_summary(model, axes, color='k', label='epsilon=0.05') model = DynamicModel(AdaptiveEpsilonGreedy, {'n_arms': 2, 'epsilon':1.00}, weights=[0.1, 0.2], size=2000) model.repeat_simulation() plot_summary(model, axes, color='c', label='AdaptiveEpsilonGreedy') model = DynamicModel(DynamicEpsilonGreedy, {'n_arms': 2, 'epsilon':1.00}, weights=[0.1, 0.2], size=2000) model.repeat_simulation() plot_summary(model, axes, color='r', label='DynamicEpsilonGreedy') plt.show() """ Explanation: There are two reasons why these algorithms are so slow to respond: The low exploration rate The use of the mean reward values. As the number of trials gets large the ability of any new experiences to alter the mean value falls. This second issue can be addressed by giving recent experience greater value than experience from further in the past. End of explanation """
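The recency weighting used by DynamicEpsilonGreedy amounts to a constant-step-size (exponentially weighted) value update. Stripped of the bandit machinery, the estimator looks like this (the step size is an arbitrary illustrative choice):

```python
def running_value(rewards, step_size=0.1, initial=0.0):
    """Exponentially weighted running estimate:
    value <- value + step_size * (reward - value).

    Old rewards decay geometrically, so the estimate can track
    a reward distribution that changes mid-stream, unlike the
    plain sample mean whose responsiveness shrinks as 1/n."""
    value = initial
    for reward in rewards:
        value += step_size * (reward - value)
    return value

# 100 rewards at 0.1 followed by 100 rewards at 0.2: the estimate
# settles near the new level rather than the overall mean of 0.15.
estimate = running_value([0.1] * 100 + [0.2] * 100)
```

This is the same capped-n trick as in DynamicEpsilonGreedy.update, written as an explicit fixed step.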
jlandmann/oggm
docs/notebooks/getting_started.ipynb
gpl-3.0
import oggm from oggm import cfg from oggm.utils import get_demo_file cfg.initialize() srtm_f = get_demo_file('srtm_oetztal.tif') rgi_f = get_demo_file('rgi_oetztal.shp') print(srtm_f) """ Explanation: <img src="https://raw.githubusercontent.com/OGGM/oggm/master/docs/_static/logo.png" width="40%" align="left"> Getting started with OGGM: Ötztal case study The OGGM workflow is best explained with an example. We are going to use the case we use for testing the oggm codebase. The test files are located in a dedicated online repository, oggm-sample-data. Input data In the test-workflow directory you can have a look at the various files we will need. oggm also needs them for testing, so they are automatically available to everybody with a simple mechanism: End of explanation """ import salem # https://github.com/fmaussion/salem rgi_shp = salem.read_shapefile(rgi_f).set_index('RGIId') """ Explanation: The very first time that you make a call to get_demo_file(), oggm will create a hidden .oggm directory in your home folder$^*$ and download the demo files in it. 
<sub>*: this path might vary depending on your platform, see python's expanduser</sub> DEM and glacier outlines The data directory contains a subset of the RGI V4 for the Ötztal: End of explanation """ # Plot defaults %matplotlib inline import matplotlib.pyplot as plt # Packages import os import numpy as np import xarray as xr import shapely.geometry as shpg plt.rcParams['figure.figsize'] = (8, 8) # Default plot size """ Explanation: We'll have a look at it, but first we will need to make some imports and set some defaults: End of explanation """ rgi_shp.plot(); """ Explanation: Plot the glaciers of the Ötztal case study: End of explanation """ fig = plt.figure(figsize=(9, 3)) ds = xr.open_dataset(get_demo_file('HISTALP_oetztal.nc')) ds.temp[:, 3, 3].resample('AS', dim='time').plot() plt.title('HISTALP annual temperature (°C)'); """ Explanation: Calibration / validation data These 19 glaciers were selected because they have either mass-balance data (WGMS) or total volume information (GlaThiDa). These data are required for calibration/validation and are available automatically in OGGM. Climate data For this test case we use HISTALP data (which goes back further in time than CRU), stored in the NetCDF format. The resolution of HISTALP (5 minutes of arc) is relatively high, but some kind of downscaling will be necessary to compute the mass-balance at the glacier scale. We can plot a timeseries of the data, for example for the grid point (3, 3): End of explanation """ from oggm import cfg from oggm import workflow cfg.initialize() # read the default parameter file """ Explanation: Setting up an OGGM run OGGM parameters are gathered in a configuration file. The default file is shipped with the code. 
It is used to initialize the configuration module:
End of explanation
"""
from oggm import cfg
from oggm import workflow
cfg.initialize()  # read the default parameter file
"""
Explanation: For example, the cfg module has a global variable PATHS (a dictionary) storing the file paths to the data and working directories:
End of explanation
"""
cfg.PATHS
"""
Explanation: The paths to the input data files are missing. Let's set them so that the oggm modules know where to look for them (the default would be to download them automatically, which we would like to avoid for this example):
End of explanation
"""
cfg.PATHS['dem_file'] = get_demo_file('srtm_oetztal.tif')
cfg.PATHS['climate_file'] = get_demo_file('HISTALP_oetztal.nc')
"""
Explanation: We will set the "border" option to a larger value, since we will do some dynamical simulations ("border" decides on the number of DEM grid points we'd like to add to each side of the glacier for the local map: the larger the glacier will grow, the larger border should be):
End of explanation
"""
cfg.PARAMS['border'] = 80
"""
Explanation: We keep the other parameters to their default values, for example the precipitation scaling factor:
End of explanation
"""
cfg.PARAMS['prcp_scaling_factor']
"""
Explanation: Glacier working directories
An OGGM "run" is made of several successive tasks to be applied on each glacier. Because these tasks can be computationally expensive they are split into smaller tasks, each of them storing their results in a glacier directory.
The very first task of an OGGM run is always init_glacier_regions:
End of explanation
"""
# Read in the RGI file
import geopandas as gpd
rgi_file = get_demo_file('rgi_oetztal.shp')
rgidf = gpd.GeoDataFrame.from_file(rgi_file)

# Initialise directories
# reset=True will ask for confirmation if the directories are already present:
# this is very useful if you don't want to lose hours of computations because of a command gone wrong
gdirs = oggm.workflow.init_glacier_regions(rgidf, reset=True)
"""
Explanation: Note that if I run init_glacier_regions a second time without reset=True, nothing special happens. The directories will not be overwritten, just "re-opened":
End of explanation
"""
gdirs = workflow.init_glacier_regions(rgidf)
"""
Explanation: Now what is the variable gdirs? It is a list of 19 GlacierDirectory objects. They are here to help us handle data input/output and to store several glacier properties. Here are some examples:
End of explanation
"""
gdir = gdirs[13]
gdir
"""
Explanation: gdir provides a get_filepath function which gives access to the data files present in the directory:
End of explanation
"""
gdir.get_filepath('dem')
"""
Explanation: dem.tif is a local digital elevation map with a spatial resolution chosen by OGGM as a function of the glacier size. These GlacierDirectory objects are going to be the input of almost every OGGM task. This data model has been chosen so that even complex functions requiring several input data can be called with a single argument:
End of explanation
"""
from oggm import graphics
graphics.plot_googlemap(gdir)
"""
Explanation: OGGM tasks
The workflow of OGGM is oriented around the concept of "tasks". There are two different types:
Entity Task: Standalone operations to be realized on one single glacier entity, independently from the others. The majority of OGGM tasks are entity tasks. They are parallelisable.
Global Task: tasks which require to work on several glacier entities at the same time. Model parameter calibration or interpolation of degree day factors belong to this type of task. They are not parallelisable.
OGGM implements a simple mechanism to run a specific task on a list of GlacierDir objects (here, the function glacier_masks() from the module oggm.prepro.gis):
End of explanation
"""
from oggm import tasks

# run the glacier_masks task on all gdirs
workflow.execute_entity_task(tasks.glacier_masks, gdirs)
"""
Explanation: We just computed gridded boolean masks out of the RGI outlines. It is also possible to apply several tasks sequentially:
End of explanation
"""
list_tasks = [
         tasks.compute_centerlines,
         tasks.compute_downstream_lines,
         tasks.catchment_area,
         tasks.initialize_flowlines,
         tasks.catchment_width_geom,
         tasks.catchment_width_correction,
         tasks.compute_downstream_bedshape
         ]
for task in list_tasks:
    workflow.execute_entity_task(task, gdirs)
"""
Explanation: The function execute_entity_task can run a task on different glaciers at the same time, if the use_multiprocessing option is set to True in the configuration file. With all these tasks we just computed the glacier flowlines and their width:
End of explanation
"""
graphics.plot_catchment_width(gdir, corrected=True)
"""
Explanation: Global tasks, climate tasks
We will go into more detail about tasks in the documentation. For now, we will use the helper function:
End of explanation
"""
workflow.climate_tasks(gdirs)
"""
Explanation: We just read the climate data, "downscaled" it to each glacier, computed possible $\mu^*$ for the reference glaciers, picked the best one, interpolated the corresponding $t^*$ to glaciers without mass-balance observations, computed the mass-balance sensitivity $\mu$ for all glaciers and finally computed the mass-balance at equilibrium (the "apparent mb" in Farinotti et al., 2009). Finally, we will prepare the data for the inversion, which is an easy step:
End of explanation
"""
workflow.execute_entity_task(tasks.prepare_for_inversion, gdirs)
"""
Explanation: Inversion
This is where things become a bit more complicated. The inversion is already fully automated in OGGM, but for this tutorial we will try to explain in more detail what is happening.
Let's start with the function mass_conservation_inversion:
End of explanation
"""
from oggm.core.preprocessing.inversion import mass_conservation_inversion
"""
Explanation: This function will compute the ice thickness along the flowline. It has one free parameter (or two, if you also want to consider the basal sliding term in the inversion): Glen's deformation parameter A. Let's compute the bed inversion for Hintereisferner and the default A:
End of explanation
"""
# Select HEF out of all glaciers
gdir_hef = [gd for gd in gdirs if (gd.rgi_id == 'RGI50-11.00897')][0]
glen_a = cfg.A
vol_m3, area_m3 = mass_conservation_inversion(gdir_hef, glen_a=glen_a)
print('With A={}, the mean thickness of HEF is {:.1f} m'.format(glen_a, vol_m3/area_m3))
graphics.plot_inversion(gdir_hef)
"""
Explanation: We know from the literature (Fisher et al., 2013) that the HEF should have an average thickness of 67$\pm$7 m. How sensitive is the inversion to changes in the A parameter?
End of explanation
"""
factor = np.linspace(0.1, 10, 30)
thick = factor*0
for i, f in enumerate(factor):
    vol_m3, area_m3 = mass_conservation_inversion(gdir_hef, glen_a=glen_a*f)
    thick[i] = vol_m3/area_m3
plt.figure(figsize=(6, 4))
plt.plot(factor, thick);
plt.ylabel('Mean thickness (m)');
plt.xlabel('Multiplier');
"""
Explanation: The A parameter controls the deformation of the ice, and therefore the thickness. It is always possible to find a "perfect" A for each glacier with measurements, for example by using an optimisation function. The current way to deal with this in OGGM is to use all glaciers with a volume estimate from the GLaThiDa database, and define A so that the volume RMSD is minimized. The reason for choosing the volume (which is strongly affected by the area) over the thickness is that with this method, larger glaciers will have more influence on the final results.
End of explanation
"""
optim_results = tasks.optimize_inversion_params(gdirs)
"""
Explanation: The optimize_inversion_params task also writes some statistics in the working directory:
End of explanation
"""
import pandas as pd
fpath = os.path.join(cfg.PATHS['working_dir'], 'inversion_optim_results.csv')
df = pd.read_csv(fpath, index_col=0)
df['ref_thick'] = df['ref_volume_km3'] / df['ref_area_km2'] * 1e3
df['oggm_thick'] = df['oggm_volume_km3'] / df['ref_area_km2'] * 1e3
df['vas_thick'] = df['vas_volume_km3'] / df['ref_area_km2'] * 1e3

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
ax1.scatter(df['ref_thick'], df['oggm_thick'], s=100)
ax1.set_title('OGGM RMSD: {:.2f}'.format(oggm.utils.rmsd(df['ref_thick'], df['oggm_thick'])))
ax1.set_xlabel('Ref thickness')
ax1.set_ylabel('OGGM thickness')
ax1.plot([0, 100], [0, 100], '.:k', zorder=0);
ax1.set_xlim([0, 100]), ax1.set_ylim([0, 100]);
ax2.scatter(df['ref_thick'], df['vas_thick'], s=100)
ax2.set_title('Volume-Area RMSD: {:.2f}'.format(oggm.utils.rmsd(df['ref_thick'], df['vas_thick'])))
ax2.set_xlabel('Ref thickness')
ax2.set_ylabel('VAS thickness')
ax2.plot([0, 100], [0, 100], '.:k', zorder=0);
ax2.set_xlim([0, 100]), ax2.set_ylim([0, 100]);
"""
Explanation: Finalize the inversion
End of explanation
"""
# Use the optimal parameters for inverting all glaciers and apply a simple correction filter
workflow.execute_entity_task(tasks.volume_inversion, gdirs)
workflow.execute_entity_task(tasks.filter_inversion_output, gdirs)
"""
Explanation: Flowline model
All the previous steps are necessary to run the flowline model: the computation of the flowline(s) and their width, the interpolation of the climate data, the mass-balance sensitivity $\mu$, an estimate of the glacier bed... All these data are stored in the glacier directories. For example for HEF the data should be approx 2.2 Mb.
You can explore the various files available in the directory printed below: End of explanation """ tasks.init_present_time_glacier(gdir_hef) """ Explanation: The files are partly documented here. The first task to apply before using the model is the init_present_time_glacier function: End of explanation """ from oggm.core.models.flowline import FluxBasedModel # the flowlines alone fls = gdir_hef.read_pickle('model_flowlines') model = FluxBasedModel(fls) graphics.plot_modeloutput_map(gdir_hef, model=model); """ Explanation: This task is required to merge the various glacier divides back together and to allow the glacier to grow by adding the downstream flowlines. This function also decides on the shape of the glacier bed along the flowlines and downstream (currently an "average" parabolic shape is chosen). Let's initialize our model with this geometry: End of explanation """ graphics.plot_modeloutput_section(gdir, model=model); """ Explanation: A cross-section along the glacier can be visualized with the following function: End of explanation """ from oggm.core.models.massbalance import ConstantMassBalanceModel from oggm.core.models.massbalance import PastMassBalanceModel """ Explanation: Mass balance To run the model, one has to define a mass-balance function. They are implemented in the massbalance module: End of explanation """ today_model = ConstantMassBalanceModel(gdir_hef, y0=1985) tstar_model = ConstantMassBalanceModel(gdir_hef) hist_model = PastMassBalanceModel(gdir_hef) # Altitude of the main flowline: z = model.fls[-1].surface_h # Get the mass balance and convert to m per year mb_today = today_model.get_annual_mb(z) * cfg.SEC_IN_YEAR * cfg.RHO / 1000. mb_tstar = tstar_model.get_annual_mb(z) * cfg.SEC_IN_YEAR * cfg.RHO / 1000. mb_2003 = hist_model.get_annual_mb(z, 2003) * cfg.SEC_IN_YEAR * cfg.RHO / 1000. 
# Plot plt.figure(figsize=(8, 5)) plt.plot(mb_today, z, '*', label='1970-2000'); plt.plot(mb_tstar, z, '*', label='t*'); plt.plot(mb_2003, z, '*', label='2003'); plt.ylabel('Altitude (m)'); plt.xlabel('Annual MB (m we)'); plt.legend(loc='best'); """ Explanation: For example, let's have a look at the mass-balance profile of HEF for the period 1970-2000, for the period $t^*$, and for the year 2003: End of explanation """ fls = gdir_hef.read_pickle('model_flowlines') commit_model = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A) """ Explanation: Define a model run For a complete run you need to specify an initial state, a mass-balance model and the ice-flow parameter(s): End of explanation """ # Run for 50 years commit_model.run_until(50) graphics.plot_modeloutput_section(gdir_hef, model=commit_model) """ Explanation: It is now possible to run the model for any period of time: End of explanation """ commit_model.run_until_equilibrium() graphics.plot_modeloutput_section(gdir_hef, model=commit_model) graphics.plot_modeloutput_map(gdir_hef, model=commit_model) """ Explanation: Or until an equilibrium is reached (in this case it is possible because the mass-balance is constant in time): End of explanation """ # Reinitialize the model (important!) fls = gdir_hef.read_pickle('model_flowlines') commit_model = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A) # Run and store years = np.arange(200) * 2 volume = np.array([]) for y in years: commit_model.run_until(y) volume = np.append(volume, commit_model.volume_m3) # Plot plt.figure(figsize=(8, 5)) plt.plot(years, volume) plt.ylabel('Volume (m3)'); plt.xlabel('Time (years)'); """ Explanation: This is a very good example of how surprising glaciers can be. Let's redo this run and store the glacier evolution with time: End of explanation """ # Reinitialize the model (important!) 
fls = gdir_hef.read_pickle('model_flowlines')
commit_model_1 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*1)
fls = gdir_hef.read_pickle('model_flowlines')
commit_model_2 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*2)
fls = gdir_hef.read_pickle('model_flowlines')
commit_model_3 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*3)

# Run and store
years = np.arange(200) * 2
volume_1 = np.array([])
volume_2 = np.array([])
volume_3 = np.array([])
for y in years:
    commit_model_1.run_until(y)
    volume_1 = np.append(volume_1, commit_model_1.volume_m3)
    commit_model_2.run_until(y)
    volume_2 = np.append(volume_2, commit_model_2.volume_m3)
    commit_model_3.run_until(y)
    volume_3 = np.append(volume_3, commit_model_3.volume_m3)

# Plot
plt.figure(figsize=(8, 5))
plt.plot(years, volume_1, label='1.0 A')
plt.plot(years, volume_2, label='2.0 A')
plt.plot(years, volume_3, label='3.0 A')
plt.ylabel('Volume (m3)');
plt.xlabel('Time (years)');
plt.legend(loc='best');
"""
Explanation: How important is the A parameter for the equilibrium volume?
End of explanation
"""
from oggm.core.models.massbalance import RandomMassBalanceModel

# Define the mass balance model
random_today = RandomMassBalanceModel(gdir, y0=1985, seed=0)

# Plot the specific mass-balance
h, w = gdir.get_inversion_flowline_hw()
years = np.arange(1000)
mb_ts = random_today.get_specific_mb(h, w, year=years)
plt.figure(figsize=(10, 4))
plt.plot(years, mb_ts);
"""
Explanation: Random runs
Equilibrium runs of course are not realistic. The normal variability of climate can lead to retreats and advances without any external forcing. OGGM therefore implements a random mass-balance model, which simply shuffles the observed years during a selected period of time.
End of explanation
"""
fls = gdir_hef.read_pickle('model_flowlines')
random_model = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)

# Run and store
years = np.arange(500) * 2
volume = np.array([])
for y in years:
    random_model.run_until(y)
    volume = np.append(volume, random_model.volume_m3)

# Plot
plt.figure(figsize=(8, 5))
plt.plot(years, volume)
plt.ylabel('Volume (m3)');
plt.xlabel('Time (years)');
"""
Explanation: As you can see, the mass-balance has no visible trend. The time-series are not strictly Gaussian, since only "observed" years can happen: the randomness occurs in the sequence of the events. Let's make a run with this mass-balance:
End of explanation
"""
# Reinitialize the model (important!)
fls = gdir_hef.read_pickle('model_flowlines')
random_model_1 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*1)
fls = gdir_hef.read_pickle('model_flowlines')
random_model_2 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*2)
fls = gdir_hef.read_pickle('model_flowlines')
random_model_3 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)

# Run and store
years = np.arange(250) * 2
volume_1 = np.array([])
volume_2 = np.array([])
volume_3 = np.array([])
for y in years:
    random_model_1.run_until(y)
    volume_1 = np.append(volume_1, random_model_1.volume_m3)
    random_model_2.run_until(y)
    volume_2 = np.append(volume_2, random_model_2.volume_m3)
    random_model_3.run_until(y)
    volume_3 = np.append(volume_3, random_model_3.volume_m3)

# Plot
plt.figure(figsize=(8, 5))
plt.plot(years, volume_1, label='1.0 A')
plt.plot(years, volume_2, label='2.0 A')
plt.plot(years, volume_3, label='3.0 A')
plt.ylabel('Volume (m3)');
plt.xlabel('Time (years)');
plt.legend(loc='best');
"""
Explanation: Let's see what influence Glen's parameter A has on the glacier evolution. Note that if we use the same mass-balance model for all runs they will all have the same random climate sequence! This is very useful for various reasons.
End of explanation """ # Reinitialize the model (important!) fls = gdir_hef.read_pickle('model_flowlines') random_today = RandomMassBalanceModel(gdir, y0=1985, seed=1) random_model_1 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3) fls = gdir_hef.read_pickle('model_flowlines') random_today = RandomMassBalanceModel(gdir, y0=1985, seed=2) random_model_2 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3) fls = gdir_hef.read_pickle('model_flowlines') random_today = RandomMassBalanceModel(gdir, y0=1985, seed=3) random_model_3 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3) # Run and store years = np.arange(250) * 2 volume_1 = np.array([]) volume_2 = np.array([]) volume_3 = np.array([]) for y in years: random_model_1.run_until(y) volume_1 = np.append(volume_1, random_model_1.volume_m3) random_model_2.run_until(y) volume_2 = np.append(volume_2, random_model_2.volume_m3) random_model_3.run_until(y) volume_3 = np.append(volume_3, random_model_3.volume_m3) # Plot plt.figure(figsize=(8, 5)) plt.plot(years, volume_1, label='Seed 1') plt.plot(years, volume_2, label='Seed 2') plt.plot(years, volume_3, label='Seed 3') plt.ylabel('Volume (m3)'); plt.xlabel('Time (years)'); plt.legend(loc='best'); """ Explanation: After the spin-up time, the three models have quite similar evolutions but quite different volumes! Let's use different random series this time, keeping A constant: End of explanation """ # Reinitialize the model (important!) 
fls = gdir_hef.read_pickle('model_flowlines') # Same as before with another mass-balance model and a real starting year y0: model = FluxBasedModel(fls, mb_model=hist_model, glen_a=cfg.A*3, y0=1850) # Run and store years = np.arange(153) + 1851 volume = np.array([]) for y in years: model.run_until(y) volume = np.append(volume, model.volume_m3) # Plot plt.figure(figsize=(8, 5)) plt.plot(years, volume) plt.ylabel('Volume (m3)'); plt.xlabel('Time (years)'); """ Explanation: Historical runs Now this is where it becomes interesting. Let's define a run with "real" mass-balance time-series. Let's assume that the 1850 glacier geometry is the same as today's, and run the model over 153 years: End of explanation """ # Reinitialize the model (important!) fls = gdir_hef.read_pickle('model_flowlines') # Grow model: tstar climate mb_model = ConstantMassBalanceModel(gdir_hef) mb_model.temp_bias = -0.2 grow_model = FluxBasedModel(fls, y0=0, mb_model=mb_model, glen_a=cfg.A*3) # run until equilibrium grow_model.run_until_equilibrium() # plot graphics.plot_modeloutput_map(gdir_hef, model=grow_model) """ Explanation: Today's HEF would probably be too small to be in equilibrium with the climate during most of the simulation period. The time it needs to re-adjust depends on the glacier characteristics as well as A. We need a way to make our glacier grow first, so that it can shrink as we expect it to do. End of explanation """ # Reinitialize the model with the new geom fls = grow_model.fls model = FluxBasedModel(fls, mb_model=hist_model, glen_a=cfg.A*3, y0=1850) # Run and store years = np.arange(153) + 1851 volume = np.array([]) for y in years: model.run_until(y) volume = np.append(volume, model.volume_m3) # Plot plt.figure(figsize=(8, 5)) plt.plot(years, volume) plt.ylabel('Volume (m3)'); plt.xlabel('Time (years)'); """ Explanation: Ok. 
Now reinitialize the historical run with this new input and see: End of explanation """ graphics.plot_modeloutput_map(gdir_hef, model=model) """ Explanation: Looks better! But still not perfect: End of explanation """
laurajchang/NPTFit
examples/Example3_Running_Poissonian_Scans.ipynb
mit
# Import relevant modules

%matplotlib inline
%load_ext autoreload
%autoreload 2

import numpy as np
import corner
import matplotlib.pyplot as plt

from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import dnds_analysis # module for analysing the output
"""
Explanation: Example 3: Running Poissonian Scans with MultiNest
In this example we demonstrate how to run a scan using only templates that follow Poisson statistics. Nevertheless many aspects of how the code works in general, such as initialization, loading data, masks and templates, and running the code with MultiNest carry over to the non-Poissonian case.
In detail we will perform an analysis of the inner galaxy involving all five background templates discussed in Example 1. We will show that the fit prefers a non-zero value for the GCE template.
NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
End of explanation
"""
n = nptfit.NPTF(tag='Poissonian_Example')
"""
Explanation: Step 1: Setting up an instance of NPTFit
To begin with we need to create an instance of NPTF from nptfit.py. We will load it with the tag set to "Poissonian_Example", which is the name attached to the folder within the chains directory where the output will be stored. Note for long runs the chains output can become large, so periodically deleting runs you are no longer using is recommended.
End of explanation
"""
fermi_data = np.load('fermi_data/fermidata_counts.npy')
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
"""
Explanation: The full list of parameters that can be set with the initialization is as follows (all are optional).
| Argument | Defaults | Purpose | | ------------- | ------------- | ------------- | | tag | "Untagged" | The label of the file where the output of MultiNest will be stored, specifically they are stored at work_dir/chains/tag/. | | work_dir | $pwd | The directory where all outputs from the NPTF will be stored. This defaults to the notebook directory, but an alternative can be specified. | | psf_dir | work_dir/psf_dir/ | Where the psf corrections will be stored (this correction is discussed in the next notebook). | Step 2: Add in Data, a Mask and Background Templates Next we need to pass the code some data to analyze. For this purpose we use the Fermi Data described in Example 1. The format for load_data is data and then exposure. NB: we emphasize that although we use the example of HEALPix maps here, the code more generally works on any 1-d arrays, as long as the data, exposure, mask, and templates all have the same length. End of explanation """ pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool) analysis_mask = cm.make_mask_total(band_mask = True, band_mask_range = 2, mask_ring = True, inner = 0, outer = 30, custom_mask = pscmask) n.load_mask(analysis_mask) """ Explanation: In order to study the inner galaxy, we restrict ourselves to a smaller ROI defined by the analysis mask discussed in Example 2. The mask must be the same length as the data and exposure. End of explanation """ dif = np.load('fermi_data/template_dif.npy') iso = np.load('fermi_data/template_iso.npy') bub = np.load('fermi_data/template_bub.npy') psc = np.load('fermi_data/template_psc.npy') gce = np.load('fermi_data/template_gce.npy') n.add_template(dif, 'dif') n.add_template(iso, 'iso') n.add_template(bub, 'bub') n.add_template(psc, 'psc') n.add_template(gce, 'gce') """ Explanation: Add in the templates we will want to use as background models. When adding templates, the first entry is the template itself and the second the string by which it is identified. 
The length for each template must again match the data. End of explanation """ n.add_poiss_model('dif', '$A_\mathrm{dif}$', False, fixed=True, fixed_norm=15.) n.add_poiss_model('iso', '$A_\mathrm{iso}$', [-2,1], True) n.add_poiss_model('bub', '$A_\mathrm{bub}$', [0,2], False) n.add_poiss_model('psc', '$A_\mathrm{psc}$', [0,2], False) n.add_poiss_model('gce', '$A_\mathrm{gce}$', [0,2], False) """ Explanation: Step 3: Add Background Models to the Fit Now from this list of templates the NPTF now knows about, we add in a series of background models which will be passed to MultiNest. In Example 6 we will show how to evaluate the likelihood without MultiNest, so that it can be interfaced with alternative inference packages. Poissonian templates only have one parameter associated with them: $A$ the template normalisation. Poissonian models are added to the fit via add_poiss_model. The first argument sets the spatial template for this background model, and should match the string used in add_template. The second argument is a LaTeX ready string used to identify the floated parameter later on. By default added models will be floated. For floated templates the next two parameters are the prior range, added in the form [param_min, param_max] and then whether the prior is log flat (True) or linear flat (False). For log flat priors the priors are specified as indices, so that [-2,1] floats over a linear range [0.01,10]. Templates can also be added with a fixed normalisation. In this case no prior need be specified and instead fixed=True should be specified as well as fixed_norm=value, where value is $A$ the template normalisation. We use each of these possibilities in the example below. End of explanation """ n.configure_for_scan() """ Explanation: Note the diffuse model is normalised to a much larger value than the maximum prior of the other templates. This is because the diffuse model explains the majority of the flux in our ROI. 
The value of 15 was determined from a fit where the diffuse model was not fixed.
Step 4: Configure the Scan
Now that the scan knows what models we want to fit to the data, we can configure the scan. In essence this step prepares all the information given above into an efficient format for calculating the likelihood. The main actions performed are:
1. Take the data and templates, and reduce them to only the ROI we will use as defined by the mask;
2. For a non-Poissonian scan, additionally account for the number of exposure regions requested; and
3. Take the priors and parameters and prepare them into an efficient form for calculating the likelihood function that can then be used directly or passed to MultiNest.
End of explanation
"""
n.configure_for_scan()
"""
Explanation: Step 5: Perform the Scan
Having set up all the parameters, we can now perform the scan using MultiNest. We will show an example of how to manually calculate the likelihood in Example 6.
| Argument | Default Value | Purpose |
| ------------- | ------------- | ------------- |
| run_tag | None | An additional tag can be specified to create a subdirectory of work_dir/chains/tag/ in which the output is stored. |
| nlive | 100 | Number of live points to be used during the MultiNest scan. A higher value than 100 is recommended for most runs, although larger values correspond to increased run time. |
| pymultinest_options | None | When set to None our default choices for MultiNest will be used (explained below). To alter these options, a dictionary of parameters and their values should be placed here. |
Our default MultiNest options are defined as follows:
python
pymultinest_options = {'importance_nested_sampling': False,
                       'resume': False, 'verbose': True,
                       'sampling_efficiency': 'model',
                       'init_MPI': False, 'evidence_tolerance': 0.5,
                       'const_efficiency_mode': False}
For variations on these, a dictionary in the same format should be passed to perform_scan.
A detailed explanation of the MultiNest options can be found here: https://johannesbuchner.github.io/PyMultiNest/pymultinest_run.html End of explanation """ n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() """ Explanation: Step 6: Analyze the Output Here we show a simple example of the output - the triangle plot. The full list of possible analysis options is explained in more detail in Example 8. In order to do this we need to first load the scan using load_scan, which takes as an optional argument the same run_tag as used for the run. Note that load_scan can be used to load a run performed in a previous instance of NPTF, as long as the various parameters match. After the scan is loaded we then create an instance of dnds_analysis, which takes an instance of nptfit.NPTF as an argument - which must already have a scan loaded. From here we simply make a triangle plot. End of explanation """ an.plot_intensity_fraction_poiss('gce', bins=800, color='tomato', label='GCE') an.plot_intensity_fraction_poiss('iso', bins=800, color='cornflowerblue', label='Iso') an.plot_intensity_fraction_poiss('bub', bins=800, color='plum', label='Bub') plt.xlabel('Flux fraction (%)') plt.legend(fancybox = True) plt.xlim(0,8); """ Explanation: The triangle plot makes it clear that a non-zero value of the GCE template is preferred by the fit. Note also that as we gave the isotropic template a log flat prior, the parameter in the triangle plot is $\log_{10} A_\mathrm{iso}$. We also show the relative fraction of the Flux obtained by the GCE as compared to other templates. Note the majority of the flux is absorbed by the diffuse model. End of explanation """
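To make the "Poissonian" part of the example concrete: the likelihood MultiNest explores here is, per pixel, a Poisson distribution whose rate is a linear combination of the spatial templates with normalisations $A_t$. A self-contained numpy sketch of that log-likelihood (a generic illustration with made-up arrays, not NPTFit's internal implementation):

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_like(norms, templates, counts):
    """log L = sum_p [ k_p ln(mu_p) - mu_p - ln(k_p!) ] with mu_p = sum_t A_t T_t(p)."""
    mu = np.dot(norms, templates)  # model rate in each pixel
    return np.sum(counts * np.log(mu) - mu - gammaln(counts + 1.0))

rng = np.random.default_rng(0)
templates = rng.uniform(0.5, 1.5, size=(2, 1000))  # two toy "spatial templates"
true_norms = np.array([3.0, 1.5])
counts = rng.poisson(true_norms @ templates)       # synthetic data map

# The likelihood should prefer the true normalisations over a badly wrong guess
ll_true = poisson_log_like(true_norms, templates, counts)
ll_off = poisson_log_like(np.array([6.0, 0.1]), templates, counts)
print(ll_true > ll_off)  # True
```

A sampler such as MultiNest then explores the normalisations (with the priors set above) against exactly this kind of function, which is why the Poissonian scan above is comparatively cheap to evaluate.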
MehtapIsik/assaytools
examples/direct-fluorescence-assay/2 MLE fit for two component binding - simulated and real data.ipynb
lgpl-2.1
import numpy as np import matplotlib.pyplot as plt from scipy import optimize import seaborn as sns %pylab inline """ Explanation: MLE fit for two component binding - simulated and real data In part one of this notebook we see how well we can reproduce Kd from simulated experimental data with a maximum likelihood function. In part two of this notebook we see how well it can estimate the Kd from real experimental binding data. End of explanation """ Kd = 2e-9 # M Ptot = 1e-9 * np.ones([12],np.float64) # M Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M def two_component_binding(Kd, Ptot, Ltot): """ Parameters ---------- Kd : float Dissociation constant Ptot : float Total protein concentration Ltot : float Total ligand concentration Returns ------- P : float Free protein concentration L : float Free ligand concentration PL : float Complex concentration """ PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM) P = Ptot - PL; # free protein concentration in sample cell after n injections (uM) L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM) return [P, L, PL] [L, P, PL] = two_component_binding(Kd, Ptot, Ltot) # y will be complex concentration # x will be total ligand concentration plt.semilogx(Ltot,PL, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$[PL]$ / M') plt.ylim(0,1.3e-9) plt.axhline(Ptot[0],color='0.75',linestyle='--',label='$[P]_{tot}$') plt.legend(); """ Explanation: Part I We use the same setup here as we do in the 'Simulating Experimental Fluorescence Binding Data' notebook. Experimentally we won't know the Kd, but we know the Ptot and Ltot concentrations. 
End of explanation """ # Making max 400 relative fluorescence units, and scaling all of PL to that npoints = len(Ltot) sigma = 10.0 # size of noise F_i = (400/1e-9)*PL + sigma * np.random.randn(npoints) #Pstated = np.ones([npoints],np.float64)*Ptot #Lstated = Ltot # y will be complex concentration # x will be total ligand concentration plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); #And makeup an F_L F_L = 0.3 F_i def find_Kd_from_fluorescence(params): [F_background, F_PL, Kd] = params N = len(Ltot) Fmodel_i = np.zeros([N]) for i in range(N): [P, L, PL] = two_component_binding(Kd, Ptot[0], Ltot[i]) Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background return Fmodel_i 400/1E-9 initial_guess = [0,400/1e-9,2e-9] prediction = find_Kd_from_fluorescence(initial_guess) plt.semilogx(Ltot,prediction,color='k') plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); def sumofsquares(params): prediction = find_Kd_from_fluorescence(params) return np.sum((prediction - F_i)**2) initial_guess = [0,3E11,1E-9] fit = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead') print "The predicted parameters are", fit.x fit_prediction = find_Kd_from_fluorescence(fit.x) plt.semilogx(Ltot,fit_prediction,color='k') plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); Kd_MLE = fit.x[2] if (Kd_MLE < 1e-12): Kd_summary = "Kd = %.1f nM " % (Kd_MLE/1e-15) elif (Kd_MLE < 1e-9): Kd_summary = "Kd = %.1f pM " % (Kd_MLE/1e-12) elif (Kd_MLE < 1e-6): Kd_summary = "Kd = %.1f nM " % (Kd_MLE/1e-9) elif (Kd_MLE < 1e-3): Kd_summary = "Kd = %.1f uM " % (Kd_MLE/1e-66) elif (Kd_MLE < 1): Kd_summary = "Kd = %.1f mM " % (Kd_MLE/1e-3) else: Kd_summary = "Kd = %.3e M " % (Kd_MLE) delG_summary = "delG = %s kT" %np.log(Kd_MLE) Kd_summary delG_summary """ Explanation: Now make this a fluorescence experiment. 
End of explanation """ # This requires that we import a few new libraries from assaytools import platereader import string Ptot = 0.5e-6 * np.ones([24],np.float64) # protein concentration, M Ltot = np.array([20.0e-6,14.0e-6,9.82e-6,6.88e-6,4.82e-6,3.38e-6,2.37e-6,1.66e-6,1.16e-6,0.815e-6,0.571e-6,0.4e-6,0.28e-6,0.196e-6,0.138e-6,0.0964e-6,0.0676e-6,0.0474e-6,0.0320e-6,0.0240e-6,0.0160e-6,0.0120e-6,0.008e-6,0.00001e-6], np.float64) # ligand concentration, M singlet_file = './data/p38_singlet1_20160420_153238.xml' data = platereader.read_icontrol_xml(singlet_file) #I want the Bosutinib-p38 data from rows I (protein) and J (buffer). data_protein = platereader.select_data(data, '280_480_TOP_120', 'I') data_buffer = platereader.select_data(data, '280_480_TOP_120', 'J') data_protein #Sadly we also need to reorder our data and put it into an array to make the analysis easier #This whole thing should be moved to assaytools.platereader hopefully before too many other people see this. well = dict() for j in string.ascii_uppercase: for i in range(1,25): well['%s' %j + '%s' %i] = i def reorder2list(data,well): sorted_keys = sorted(well.keys(), key=lambda k:well[k]) reorder_data = [] for key in sorted_keys: try: reorder_data.append(data[key]) except: pass reorder_data = [r.replace('OVER','70000') for r in reorder_data] reorder_data = np.asarray(reorder_data,np.float64) return reorder_data reorder_protein = reorder2list(data_protein,well) reorder_buffer = reorder2list(data_buffer,well) reorder_protein plt.semilogx(Ltot,reorder_protein, 'ro', label='PL') plt.semilogx(Ltot,reorder_buffer, 'ko', label='L') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('fluorescence') plt.xlim(5e-9,1.3e-4) plt.legend(loc=2); # for this to work we need to provide some initial values # some of these we already have F_i = reorder_protein #And makeup an F_L F_L = 0.3 # initial guess for [F_background, F_PL, Kd] initial_guess = [0,400/1e-9,2e-9] F_i fit = 
optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead')
print("The predicted parameters [F_background, F_PL, Kd] are", fit.x)

fit.x[0]

fit_prediction = find_Kd_from_fluorescence(fit.x)

plt.semilogx(Ltot,fit_prediction,color='k')
plt.semilogx(Ltot,reorder_protein, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();

plt.semilogx(Ltot,fit_prediction,color='k', label='prediction')
plt.semilogx(Ltot,reorder_protein, 'o', label='data')
plt.axhline(fit.x[0],color='k',linestyle='--', label='$[F]_{background}$')
plt.axvline(fit.x[2],color='r',linestyle='--', label='$K_d$')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend(loc=2);

Kd_summary

delG_summary
"""
Explanation: Part II
Now we will see how well this does for real data.
End of explanation
"""
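The unit-selection chain used for Kd_summary in both parts can be factored into a single helper (a sketch; the thresholds follow the molar SI prefixes fM through mM):

```python
def format_molar(Kd):
    """Format a molar concentration with an appropriate SI prefix."""
    for threshold, scale, unit in [(1e-12, 1e-15, 'fM'), (1e-9, 1e-12, 'pM'),
                                   (1e-6, 1e-9, 'nM'), (1e-3, 1e-6, 'uM'),
                                   (1.0, 1e-3, 'mM')]:
        if Kd < threshold:
            return "Kd = %.1f %s" % (Kd / scale, unit)
    return "Kd = %.3e M" % Kd
```

For example, `format_molar(2e-9)` yields "Kd = 2.0 nM", matching the nanomolar branch of the summaries above.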
tensorflow/docs-l10n
site/ko/probability/examples/Eight_Schools.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import warnings

tf.enable_v2_behavior()
plt.style.use("ggplot")
warnings.filterwarnings('ignore')
"""
Explanation: Eight Schools
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Eight_Schools"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
The eight schools problem (Rubin 1981) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem (Bayesian Data Analysis, Stan) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.
The implementation below is adapted from an Edward 1.0 tutorial.
Imports
End of explanation
"""
num_schools = 8  # number of schools
treatment_effects = np.array(
    [28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32)  # treatment effects
treatment_stddevs = np.array(
    [15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32)  # treatment SE

fig, ax = plt.subplots()
plt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)
plt.title("8 Schools treatment effects")
plt.xlabel("School")
plt.ylabel("Treatment effect")
fig.set_size_inches(10, 8)
plt.show()
"""
Explanation: Data
From Section 5.5 of Bayesian Data Analysis (Gelman et al. 2013):
A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple-choice test administered by the Educational Testing Service and used to help colleges make admissions decisions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.
For each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. Since there was no prior belief that any of the schools were more or less similar to one another, or that any of the coaching programs would be more effective, we can consider the treatment effects as exchangeable.
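Before fitting the hierarchical model, it is instructive to compute the classical complete-pooling estimate, the inverse-variance weighted mean, which is one extreme the hierarchical model interpolates away from (a side computation, not in the original notebook):

```python
import numpy as np

treatment_effects = np.array([28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float64)
treatment_stddevs = np.array([15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float64)

weights = 1.0 / treatment_stddevs**2             # precision of each school's estimate
pooled_mean = np.sum(weights * treatment_effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))       # standard error of the pooled estimate
```

The pooled mean comes out near 7.7 with a standard error of roughly 4, which is why none of the individual school effects is significantly different from the group average.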
End of explanation
"""
model = tfd.JointDistributionSequential([
  tfd.Normal(loc=0., scale=10., name="avg_effect"),  # `mu` above
  tfd.Normal(loc=5., scale=1., name="avg_stddev"),  # `log(tau)` above
  tfd.Independent(tfd.Normal(loc=tf.zeros(num_schools),
                             scale=tf.ones(num_schools),
                             name="school_effects_standard"),  # `theta_prime`
                  reinterpreted_batch_ndims=1),
  lambda school_effects_standard, avg_stddev, avg_effect: (
      tfd.Independent(tfd.Normal(loc=(avg_effect[..., tf.newaxis] +
                                      tf.exp(avg_stddev[..., tf.newaxis]) *
                                      school_effects_standard),  # `theta` above
                                 scale=treatment_stddevs),
                      name="treatment_effects",  # `y` above
                      reinterpreted_batch_ndims=1))
])

def target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):
  """Unnormalized target density as a function of states."""
  return model.log_prob((
      avg_effect, avg_stddev, school_effects_standard, treatment_effects))
"""
Explanation: Model
To capture the data, we use a hierarchical normal model, which follows the generative process
$$
\begin{align}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i \sim \text{Normal}\left(\text{loc}{=}\mu,\ \text{scale}{=}\tau \right) \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align}
$$
where $\mu$ represents the prior average treatment effect and $\tau$ controls how much variance there is between schools. The $y_i$ and $\sigma_i$ are observed. As $\tau \rightarrow \infty$, the model approaches the no-pooling model, i.e., each school's treatment effect estimate is allowed to be more independent. As $\tau \rightarrow 0$, the model approaches the complete-pooling model, i.e., all school treatment effects are pulled closer to the group average $\mu$. To restrict the standard deviation to be positive, we draw $\tau$ from a lognormal distribution (which is equivalent to drawing $\log(\tau)$ from a normal distribution).
Following Diagnosing Biased Inference with Divergences, we transform the model above into an equivalent non-centered model:
$$
\begin{align}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i' \sim \text{Normal}\left(\text{loc}{=}0,\ \text{scale}{=}1 \right) \\
& \theta_i = \mu + \tau \theta_i' \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align}
$$
We reify this model as a JointDistributionSequential instance:
End of explanation
"""
num_results = 5000
num_burnin_steps = 3000

# Improve performance by tracing the sampler using `tf.function`
# and compiling it using XLA.
@tf.function(autograph=False, experimental_compile=True)
def do_sampling():
  return tfp.mcmc.sample_chain(
      num_results=num_results,
      num_burnin_steps=num_burnin_steps,
      current_state=[
          tf.zeros([], name='init_avg_effect'),
          tf.zeros([], name='init_avg_stddev'),
          tf.ones([num_schools], name='init_school_effects_standard'),
      ],
      kernel=tfp.mcmc.HamiltonianMonteCarlo(
          target_log_prob_fn=target_log_prob_fn,
          step_size=0.4,
          num_leapfrog_steps=3))

states, kernel_results = do_sampling()

avg_effect, avg_stddev, school_effects_standard = states

school_effects_samples = (
    avg_effect[:, np.newaxis] +
    np.exp(avg_stddev)[:, np.newaxis] * school_effects_standard)

num_accepted = np.sum(kernel_results.is_accepted)
print('Acceptance rate: {}'.format(num_accepted / num_results))

fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')
fig.set_size_inches(12, 10)
for i in range(num_schools):
  axes[i][0].plot(school_effects_samples[:,i].numpy())
  axes[i][0].title.set_text("School {} treatment effect chain".format(i))
  sns.kdeplot(school_effects_samples[:,i].numpy(), ax=axes[i][1], shade=True)
  axes[i][1].title.set_text("School {} treatment effect distribution".format(i))
axes[num_schools - 1][0].set_xlabel("Iteration")
axes[num_schools - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()

print("E[avg_effect] = {}".format(np.mean(avg_effect)))
print("E[avg_stddev] = 
{}".format(np.mean(avg_stddev)))
print("E[school_effects_standard] =")
print(np.mean(school_effects_standard[:, ]))
print("E[school_effects] =")
print(np.mean(school_effects_samples[:, ], axis=0))

# Compute the 95% interval for school_effects
school_effects_low = np.array([
    np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)
])
school_effects_med = np.array([
    np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)
])
school_effects_hi = np.array([
    np.percentile(school_effects_samples[:, i], 97.5) for i in range(num_schools)
])

fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)
ax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)
ax.scatter(
    np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)

plt.plot([-0.2, 7.4], [np.mean(avg_effect), np.mean(avg_effect)], 'k', linestyle='--')

ax.errorbar(
    np.array(range(8)),
    school_effects_med,
    yerr=[
        school_effects_med - school_effects_low,
        school_effects_hi - school_effects_med
    ],
    fmt='none')

ax.legend(('avg_effect', 'HMC', 'Observed effect'), fontsize=14)

plt.xlabel('School')
plt.ylabel('Treatment effect')
plt.title('HMC estimated school treatment effects vs. observed data')
fig.set_size_inches(10, 8)
plt.show()
"""
Explanation: Bayesian inference
Given the data, we perform Hamiltonian Monte Carlo (HMC) to compute the posterior distribution over the model's parameters.
End of explanation
"""
print("Inferred posterior mean: {0:.2f}".format(
    np.mean(school_effects_samples[:,])))
print("Inferred posterior mean se: {0:.2f}".format(
    np.std(school_effects_samples[:,])))
"""
Explanation: We can observe the shrinkage of the school effects toward the group avg_effect above.
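Conditional on the group parameters, this shrinkage has a simple closed form: each school's posterior mean is a precision-weighted compromise between its own estimate and the group mean. A sketch with illustrative values for $\mu$ and $\tau$ (these are assumptions, not the notebook's inferred values):

```python
import numpy as np

mu, tau = 7.7, 5.0                    # illustrative group mean and group stddev
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

# Posterior mean of theta_i given mu and tau: precision-weighted average
# of the school's own estimate y_i and the group mean mu.
theta_hat = (y / sigma**2 + mu / tau**2) / (1.0 / sigma**2 + 1.0 / tau**2)
```

Each shrunk estimate lies between the raw school estimate and the group mean; noisier schools (larger $\sigma_i$) are pulled harder toward $\mu$.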
End of explanation
"""
sample_shape = [5000]

_, _, _, predictive_treatment_effects = model.sample(
    value=(tf.broadcast_to(np.mean(avg_effect, 0), sample_shape),
           tf.broadcast_to(np.mean(avg_stddev, 0), sample_shape),
           tf.broadcast_to(np.mean(school_effects_standard, 0),
                           sample_shape + [num_schools]),
           None))

fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
  sns.kdeplot(predictive_treatment_effects[:, 2*i].numpy(), ax=ax[0], shade=True)
  ax[0].title.set_text(
      "School {} treatment effect posterior predictive".format(2*i))
  sns.kdeplot(predictive_treatment_effects[:, 2*i + 1].numpy(), ax=ax[1], shade=True)
  ax[1].title.set_text(
      "School {} treatment effect posterior predictive".format(2*i + 1))
plt.show()

# The mean predicted treatment effects for each of the eight schools.
prediction = np.mean(predictive_treatment_effects, axis=0)
"""
Explanation: Criticism
We get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$:
$$ p(y^* \mid y) \propto \int_\theta p(y^* \mid \theta)\, p(\theta \mid y)\, d\theta $$
We override the values of the random variables in the model, setting them to the means of the posterior distribution, and sample from that model to generate new data $y^*$.
End of explanation
"""
treatment_effects - prediction
"""
Explanation: We can look at the residuals between the treatment effect data and the model's posterior predictions. These are consistent with the plot above, which shows the shrinkage of the estimated effects toward the population mean.
End of explanation
"""
residuals = treatment_effects - predictive_treatment_effects

fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
  sns.kdeplot(residuals[:, 2*i].numpy(), ax=ax[0], shade=True)
  ax[0].title.set_text(
      "School {} treatment effect residuals".format(2*i))
  sns.kdeplot(residuals[:, 2*i + 1].numpy(), ax=ax[1], shade=True)
  ax[1].title.set_text(
      "School {} treatment effect residuals".format(2*i + 1))
plt.show()
"""
Explanation: Since we have a predictive distribution for each school, we can consider the distribution of residuals as well.
End of explanation
"""
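The tfp.mcmc HMC sampler used above is, at heart, a Markov chain Monte Carlo method. The underlying idea can be illustrated with a minimal random-walk Metropolis sampler on a standard normal target (illustrative only — HMC uses gradient information and is far more efficient on real posteriors):

```python
import numpy as np

def log_target(x):
    # Unnormalized log-density of a standard normal target.
    return -0.5 * x ** 2

rng = np.random.RandomState(0)
x = 0.0
chain = []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)
    # Accept with probability min(1, p(proposal) / p(x)).
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    chain.append(x)

samples = np.asarray(chain[2000:])   # discard burn-in, as sample_chain does
```

After burn-in, the empirical mean and standard deviation of the chain approximate those of the target distribution (0 and 1 here).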
getsmarter/bda
module_4/M4_NB3_NetworkClustering.ipynb
mit
import networkx as nx import pandas as pd import numpy as np %matplotlib inline import matplotlib.pylab as plt from networkx.drawing.nx_agraph import graphviz_layout from collections import defaultdict, Counter import operator ## For hierarchical clustering. from scipy.cluster import hierarchy from scipy.spatial import distance ## For spectral graph partitioning. from sklearn.cluster import spectral_clustering as spc ## For Community Detection (Louvain Method). import community import sys sys.path.append('..') from utils import draw_partitioned_graph from utils import fancy_dendrogram plt.rcParams['figure.figsize'] = (15, 9) plt.rcParams['axes.titlesize'] = 'large' """ Explanation: <div align="right">Python 3.6 Jupyter Notebook</div> Finding connected components using clustering <br><div class="alert alert-warning"> <b>Note that this notebook contains advanced exercises applicable only to students who wish to deepen their understanding and qualify for bonus marks on the course.</b> You will be able to achieve 100% for this notebook by only completing Exercise 1. Optional, additional exercises can be completed to qualify for bonus marks. </div> Your completion of the notebook exercises will be graded based on your ability to do the following: Understand: Do your pseudo-code and comments show evidence that you recall and understand technical concepts? Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets? Analyze: Are you able to pick the relevant method or library to resolve specific stated questions? Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data? Notebook objectives By the end of this notebook, you will be expected to: Find connected components in networks (using the techniques of hierarchical clustering, modularity maximization, and spectral graph partitioning); and Interpret clustering results. 
List of exercises Exercise 1: Understanding hierarchical clustering. Exercise 2 [Advanced]: Interpreting the results of hierarchical clustering. Exercise 3 [Advanced]: Summarizing clustering results based on modularity maximization and spectral graph partitioning. Notebook introduction Community detection is an important task in social network analysis. The idea behind it is to identify groups of people that share a common interest, based on the assumption that these people tend to link to each other more than to the rest of the network. Specifically, real-world networks exhibit clustering behavior that can be observed in the graph representation of these networks by the formation of clusters or partitions. These groups of nodes on a graph (clusters) correspond to communities that share common properties, or have a common role in the system under study. Intuitively, it is expected that such clusters are associated with a high concentration of nodes. In the following examples, you will explore the identification of these clusters using the following approaches, as discussed in the video content: Hierarchical clustering (using a distance matrix) The Louvain Algorithm (using modularity maximization) Spectral graph partitioning Import required modules End of explanation """ call_adjmatrix = pd.read_csv('./call.adjmatrix', index_col=0) call_graph = nx.from_numpy_matrix(call_adjmatrix.as_matrix()) # Display call graph object. plt.figure(figsize=(10,10)) plt.axis('off') pos = graphviz_layout(call_graph, prog='dot') nx.draw_networkx(call_graph, pos=pos, node_color='#11DD11', with_labels=False) _ = plt.axis('off') """ Explanation: 1. Data preparation You are going to read the graph from an adjacency list saved in earlier exercises. 
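The adjacency-matrix representation being loaded here maps to graph edges directly: each nonzero entry $(i, j)$ with $i < j$ of a symmetric matrix is one undirected (possibly weighted) edge. A minimal sketch of that correspondence, using a small hypothetical matrix and no graph library:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 2],
              [0, 0, 2, 0]], dtype=float)   # symmetric, weighted adjacency matrix

# Keep only the upper triangle so each undirected edge appears once.
ii, jj = np.nonzero(np.triu(A))
edges = [(int(i), int(j), A[i, j]) for i, j in zip(ii, jj)]
degrees = (A != 0).sum(axis=1)               # node degrees (count of neighbours)
```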
End of explanation """ def create_hc(G, linkage='average'): """ Creates hierarchical cluster of graph G from distance matrix """ path_length=nx.all_pairs_shortest_path_length(G) distances=np.zeros((G.order(),G.order())) for u,p in dict(path_length).items(): for v,d in p.items(): distances[list(G.nodes)[u]][list(G.nodes)[v]] = d distances[list(G.nodes)[v]][list(G.nodes)[u]] = d if u==v: distances[list(G.nodes)[u]][list(G.nodes)[u]]=0 # Create hierarchical cluster (HC). Y=distance.squareform(distances) if linkage == 'max': # Creates HC using farthest point linkage. Z=hierarchy.complete(Y) if linkage == 'single': # Creates HC using closest point linkage. Z=hierarchy.single(Y) if linkage == 'average': # Creates HC using average point linkage. Z=hierarchy.average(Y) return Z def get_cluster_membership(Z, maxclust): ''' Assigns cluster membership by specifying cluster size. ''' hc_out=list(hierarchy.fcluster(Z,maxclust, criterion='maxclust')) # Map cluster values to a dictionary variable. cluster_membership = {} i = 0 for i in range(len(hc_out)): cluster_membership[i]=hc_out[i] return cluster_membership """ Explanation: 2. Hierarchical clustering This notebook makes use of a hierarchical clustering algorithm, as implemented in Scipy. The following example uses the average distance measure. Since the graph is weighted, you can also use the single linkage inter-cluster distance measure (see exercises). End of explanation """ # Perform hierarchical clustering using 'average' linkage. Z = create_hc(call_graph, linkage='average') """ Explanation: Below is a demonstration of hierarchical clustering when applied to the call graph. 
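For orientation, the same SciPy workflow used by create_hc and get_cluster_membership can be run on a tiny synthetic distance matrix with two obvious groups:

```python
import numpy as np
from scipy.cluster import hierarchy
from scipy.spatial import distance

# Four 1-D points: {0, 1} and {10, 11} form two natural clusters.
points = np.array([[0.0], [1.0], [10.0], [11.0]])
distances = distance.squareform(distance.pdist(points))  # square distance matrix

Y = distance.squareform(distances)   # back to condensed form, as in create_hc
Z = hierarchy.average(Y)             # average-linkage hierarchical clustering
labels = hierarchy.fcluster(Z, 2, criterion='maxclust')
```

The first two points receive one label and the last two the other, mirroring what the call graph demonstration below does at a much larger scale.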
End of explanation """ hierarchy.dendrogram(Z) plt.show() """ Explanation: The dendrogram corresponding to the partitioned graph is obtained as follows: End of explanation """ plt.title('Hierarchical Clustering Dendrogram (pruned)') plt.xlabel('sample index (or leaf size)') plt.ylabel('distance') hierarchy.dendrogram( Z, truncate_mode='lastp', # show only the last p merged clusters p=10, # show only the last p merged clusters show_leaf_counts=True, # numbers in brackets are counts for each leaf leaf_rotation=90, leaf_font_size=12) plt.show() """ Explanation: You will notice that the full dendrogram is unwieldy, and difficult to use or read. Fortunately, the dendrogram method has a feature that allows one to only show the lastp merged clusters, where $p$ is the desired number of last p merged clusters. End of explanation """ fancy_dendrogram( Z, truncate_mode='lastp', p=12, leaf_rotation=90., leaf_font_size=12.0, show_contracted=False, annotate_above=10, max_d=3.5) plt.show() opt_clust = 3 opt_clust """ Explanation: This dendrogram can help explain what happens as a result of the agglomerative method of hierarchical clustering. Starting at the bottom-most level, each node is assigned its own cluster. The closest pair of nodes (according to a distance function) are then merged into a new cluster. The distance matrix is recomputed, treating the merged cluster as an individual node. This process is repeated until the entire network has been merged into a single, large cluster, which the top level in the dendrogram above represents. You can now understand why this method is agglomerative. The linkage function is used to determine the distance between a cluster and a node, or between two clusters, using the following possibilities: Single: Merge two clusters with the smallest minimum pairwise distance. Average: Merge two clusters with the smallest average pairwise distance. Maximum or complete: Merge the two clusters with the smallest maximum pairwise distance. 
Now, you can finally retrieve the clusters, based on the analysis of the dendrogram. In this post-processing, there are different ways of determining $k$, the number of clusters to partition the data into. Scipy's hierarchical flat clustering function - "hierarchy.fcluster()" - is used to assign cluster membership by specifying a distance threshold, or the number of clusters required. In the function definition (above), you have been provided with a utility function, "get_cluster_membership()", which does the latter. Selecting the number of clusters $k$ is, in general, an ill-posed problem. Different interpretations are possible, depending on the nature of the problem, the scale of the distribution of points in a data set, and the required clustering resolution. In agglomerative clustering, as used in the example above, you can get zero error for the objective function by considering each data point as its own cluster. Hence, the selection of $k$ invariably involves a trade-off maximum compression of the data (using a single cluster), and maximum accuracy by assigning each data point to its own cluster. The selection of an optimal $k$ can be done using automated techniques or manually. Here, identification of an appropriate cluster is ideally done manually as this has the advantages of gaining some insights into your data as well as providing an opportunity to perform sanity checks. To select the cluster size, look for a large shift in the distance metric. In our example with dendrograms plots shown above, say a case has been made for an ideal cutoff of 3.5. The number of clusters is then simply the number of intersections of a horizontal line (with height of 3.5) with the vertical lines of the dendrogram. Therefore, 3 clusters would be obtained in this case as shown below. End of explanation """ cluster_assignments = get_cluster_membership(Z, maxclust=opt_clust) """ Explanation: You can now assign the data to these "opt_clust" clusters. 
End of explanation """ clust = list(set(cluster_assignments.values())) clust cluster_centers = sorted(set(cluster_assignments.values())) freq = [list(cluster_assignments.values()).count(x) for x in cluster_centers] # Creata a DataFrame object containing list of cluster centers and number of objects in each cluster df = pd.DataFrame({'cluster_centres':cluster_centers, 'number_of_objects':freq}) df.head(10) """ Explanation: The partitioned graph, corresponding to the dendrogram above, can now be visualized. End of explanation """ # Your code here. """ Explanation: <br> <div class="alert alert-info"> <b>Exercise 1 Start.</b> </div> Instructions How many clusters are obtained after the final step of a generic agglomerative clustering algorithm (before post-processing)? Note: Post-processing involves determining the optimal clusters for the problem at hand. Based on your answer above, would you consider agglomerative clustering a top-down approach, or a bottom-up approach? Which of the three linkage functions (i.e. single, average, or maximum or complete) do you think is likely to be most sensitive to outliers? Hint: Look at this single-link and complete-link clustering resource. Your markdown answer here. <br> <div class="alert alert-info"> <b>Exercise 1 End.</b> </div> Exercise complete: <br> <div class="alert alert-info"> <b>Exercise 2 [Advanced] Start.</b> </div> Instructions In this exercise, you will investigate the structural properties of the clusters generated from above. Assign the values from your "cluster_assignments" to a Pandas DataFrame named "df1", with the column name "cluster_label". Hint: The variable "cluster_assignments" is of type dict. You will need to get the values component of this dict, not the keys. Add a field called "participantID" to "df1", and assign to this the index values from the previously-loaded "call_adjmatrix" DataFrame. Load the DataFrame containing the centrality measures that you saved in Notebook 1 of this module, into "df2". 
Perform an inner join by merging "df1" and "df2" on the field "participantID". Assign the result of this join to variable "df3". Perform a groupby on "df3" (using "cluster_label" field), and then evaluate the mean of the four centrality measures (using the "agg()" method). Assign the aggregation result to "df4". Review "df4", and plot its barplot. Merge clusters which share the same mean values for a centrality measure into a single cluster. Assign the smallest value of the labels in the set to the merged cluster. Note:<br> Combine clusters such that, given a cluster with centrality measures $[x1, x2, x3, x4]$, and another cluster with centrality measures $[z1, z2, z3, z4]$, the following holds true:<br> $x1 = z1$ <br> $x2 = z2$ <br> $x3 = z3$ <br> $x4 = z4$<br> Print the size of each cluster, in descending order, after performing the cluster merging in the preceding step. End of explanation """ # Create the spectral partition using the spectral clustering function from Scikit-Learn. spectral_partition = spc(call_adjmatrix.as_matrix(), 9, assign_labels='discretize') pos = graphviz_layout(call_graph, prog='dot') nx.draw_networkx_nodes(call_graph, pos, cmap=plt.cm.RdYlBu, node_color=spectral_partition) nx.draw_networkx_edges(call_graph, pos, alpha=0.5) plt.axis('off') plt.show() print(spectral_partition) """ Explanation: <br> <div class="alert alert-info"> <b>Exercise 2 [Advanced] End.</b> </div> Exercise complete: This is a good time to "Save and Checkpoint". 3. Community detection Community detection is an important component in the analysis of large and complex networks. Identifying these subgraph structures helps in understanding organizational and functional characteristics of the underlying physical networks. In this section, you will study a few approaches that are widely used in community detection using graph representations. 
3.1 The Louvain modularity-maximization approach The Louvain method is one of the most widely-used methods for detecting communities in large networks. It was developed by a team of researchers at the Université catholique de Louvain. The method can unveil hierarchies of communities, and allows you to zoom within communities in order to discover sub-communities, sub-sub-communities, and so forth. The modularity QQ quantifies how good a "community" or partition is, and is defined as follows: $$Q_c =\frac{1}{2m}\sum {(ij)} \left [ A{ij}-\frac{k_ik_j}{2m} \right] \delta(c_i, c_j)$$ The higher the $Q_c$ of a community is, the better the partition is. The Louvain method is a greedy optimization method that attempts to optimize the "modularity" of a partition of the network via two steps: Locally optimize the modularity to identify "small" communities. Aggregate nodes belonging to the same community, and create a new network with aggregated nodes as individual nodes. Steps 1 and 2 are then repeated until a maximum of modularity produces a hierarchy of communities. 3.2 Spectral graph partitioning Spectral graph partitioning and clustering is based on the spectrum — the eigenvalues and associated eigenvectors — of the Laplacian matrix that corresponds to a given graph. The approach is mathematically complex, but involves performing a $k$-means clustering, on a spectral projection of the graph, with $k$=2 (using an adjacency matrix as the affinity). A schematic illustration of the process is depicted in the figure below. Optional: You can read more about spectral graph processing. Now, apply spectral graph partitioning to your call graph, and visualize the resulting community structure. You can read more about Scikit-Learn, and the Spectral Clustering function utilized in this section. Spectral graph partitioning needs input in the form of the number of clusters sought (default setting is 8). 
There are various approaches one can take to optimize the final number of clusters, depending on problem domain knowledge. Below you will use a value of $k=9$.
End of explanation
"""
# Your code here.
"""
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 3 [Advanced] Start.</b>
</div>
Instructions

Compute the size of each of the clusters obtained using the spectral graph partitioning method.
End of explanation
"""
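The modularity definition from Section 3.1 can be evaluated directly from an adjacency matrix with NumPy, no graph library required. For a hypothetical graph of two triangles joined by a single edge, the natural two-community split scores $Q = 5/14 \approx 0.357$:

```python
import numpy as np

# Two triangles (0, 1, 2) and (3, 4, 5) joined by the edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

c = np.array([0, 0, 0, 1, 1, 1])         # community label for each node
k = A.sum(axis=1)                        # node degrees k_i
two_m = A.sum()                          # 2m (each edge counted twice)

delta = (c[:, None] == c[None, :])       # delta(c_i, c_j)
Q = ((A - np.outer(k, k) / two_m) * delta).sum() / two_m
```

The Louvain method greedily moves nodes between communities to increase exactly this quantity.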
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
apache-2.0
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex SDK: AutoML training image object detection model for online prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to create image object detection models and do online prediction using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. Objective In this tutorial, you create an AutoML image object detection model and deploy for online prediction from a Python script using the Vertex SDK. 
You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:

Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.

Costs
This tutorial uses billable components of Google Cloud:

Vertex AI
Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:

The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3

The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.

Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG

# Use getenv so a missing IS_TESTING variable doesn't raise a KeyError,
# matching how the variable is checked elsewhere in this notebook.
if os.getenv("IS_TESTING"):
    ! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. End of explanation """ REGION = "us-central1" # @param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. 
Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. 
A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""

BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP

"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources is retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""

! gsutil mb -l $REGION $BUCKET_NAME

"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""

! gsutil ls -al $BUCKET_NAME

"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""

import google.cloud.aiplatform as aip

"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""

aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)

"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""

IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"

"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML image object detection model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""

if "IMPORT_FILES" in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE

count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $FILE | head

"""
Explanation: Quick peek at your data
This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
"""

dataset = aip.ImageDataset.create(
    display_name="Salads" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)

print(dataset.resource_name)

"""
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
"""

dag = aip.AutoMLImageTrainingJob(
    display_name="salads_" + TIMESTAMP,
    prediction_type="object_detection",
    multi_label=False,
    model_type="CLOUD",
    base_model=None,
)

print(dag)

"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from an existing Model resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job.
End of explanation
"""

model = dag.run(
    dataset=dataset,
    model_display_name="salads_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    budget_milli_node_hours=20000,
    disable_early_stopping=False,
)

"""
Explanation: Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If False, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 60 minutes.
End of explanation
"""

# Get model resource ID
models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = model_service_client.list_model_evaluations(
    parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)

"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
"""

endpoint = model.deploy()

"""
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
"""

test_items = !gsutil cat $IMPORT_FILE | head -n1
cols = str(test_items[0]).split(",")

if len(cols) == 11:
    test_item = str(cols[1])
    test_label = str(cols[2])
else:
    test_item = str(cols[0])
    test_label = str(cols[1])

print(test_item, test_label)

"""
Explanation: Send an online prediction request
Send an online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""

import base64

import tensorflow as tf

with tf.io.gfile.GFile(test_item, "rb") as f:
    content = f.read()

# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{"content": base64.b64encode(content).decode("utf-8")}]

prediction = endpoint.predict(instances=instances)

print(prediction)

"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ 'content': { 'b64': base64_encoded_bytes } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internally assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
bboxes: The bounding box of each detected object.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""

endpoint.undeploy_all()

"""
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""

delete_all = True

if delete_all:
    # Delete the dataset using the Vertex dataset object
    try:
        if "dataset" in globals():
            dataset.delete()
    except Exception as e:
        print(e)

    # Delete the model using the Vertex model object
    try:
        if "model" in globals():
            model.delete()
    except Exception as e:
        print(e)

    # Delete the endpoint using the Vertex endpoint object
    try:
        if "endpoint" in globals():
            endpoint.delete()
    except Exception as e:
        print(e)

    # Delete the AutoML or Pipeline training job
    try:
        if "dag" in globals():
            dag.delete()
    except Exception as e:
        print(e)

    # Delete the custom training job
    try:
        if "job" in globals():
            job.delete()
    except Exception as e:
        print(e)

    # Delete the batch prediction job using the Vertex batch prediction object
    try:
        if "batch_predict_job" in globals():
            batch_predict_job.delete()
    except Exception as e:
        print(e)

    # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
    try:
        if "hpt_job" in globals():
            hpt_job.delete()
    except Exception as e:
        print(e)

    if "BUCKET_NAME" in globals():
        ! gsutil rm -r $BUCKET_NAME

"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
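As a footnote to the prediction step earlier in this tutorial: the per-image entries of the predict() response (displayNames, confidences, bboxes) can be unpacked with plain Python. This is a sketch only — the sample payload below is made up to mirror the documented field names, not captured from a real predict() call.

```python
# Sketch: unpack one image's detections from a predict() response.
# `sample_prediction` is an illustrative stand-in for prediction.predictions[0].
sample_prediction = {
    "displayNames": ["Salad", "Tomato"],
    "confidences": [0.92, 0.41],
    "bboxes": [[0.10, 0.90, 0.20, 0.80], [0.40, 0.60, 0.50, 0.70]],
}

threshold = 0.5  # keep reasonably confident detections only
detections = [
    (name, conf, bbox)
    for name, conf, bbox in zip(
        sample_prediction["displayNames"],
        sample_prediction["confidences"],
        sample_prediction["bboxes"],
    )
    if conf >= threshold
]

for name, conf, bbox in detections:
    print("%s: %.2f at %s" % (name, conf, bbox))
```

With a real response, the same loop would run over each entry of prediction.predictions.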
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/end_to_end_ml/solutions/deploy_keras_ai_platform_babyweight.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst

import os

"""
Explanation: Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Set up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
Verify that you have previously trained your Keras model. If not, go back to train_keras_ai_platform_babyweight.ipynb to create it.
In this notebook, we'll be deploying our Keras model to Cloud AI Platform and creating predictions. We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from the deployed model on Cloud AI Platform, and batch predict from the deployed model on Cloud AI Platform.
Each learning objective will correspond to a #TODO in this student lab notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
End of explanation
"""

%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT

# Change these to try this notebook out
PROJECT = "your-project-name-here"  # TODO 1 Replace with your PROJECT
BUCKET = PROJECT  # defaults to PROJECT
REGION = "us-central1"  # TODO 1 Replace with your REGION

os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"

%%bash
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global

"""
Explanation: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation """ %%bash gsutil ls gs://${BUCKET}/babyweight/trained_model %%bash MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \ | tail -1) gsutil ls ${MODEL_LOCATION} """ Explanation: Check our trained model files Let's check the directory structure of our outputs of our trained model in folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service. End of explanation """ %%bash MODEL_NAME="babyweight" MODEL_VERSION="ml_on_gcp" MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* | tail -1 | tr -d '[:space:]') # TODO 2 echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION" # gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} # gcloud ai-platform models delete ${MODEL_NAME} gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION} gcloud ai-platform versions create ${MODEL_VERSION} \ --model=${MODEL_NAME} \ --origin=${MODEL_LOCATION} \ --runtime-version=2.1 \ --python-version=3.7 """ Explanation: Lab Task #2: Deploy trained model Deploying the trained model to act as a REST web service is a simple gcloud call. 
End of explanation """ from oauth2client.client import GoogleCredentials import requests import json MODEL_NAME = "babyweight" # TODO 3a MODEL_VERSION = "ml_on_gcp" # TODO 3a token = GoogleCredentials.get_application_default().get_access_token().access_token api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \ .format(PROJECT, MODEL_NAME, MODEL_VERSION) headers = {"Authorization": "Bearer " + token } data = { "instances": [ { "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39 }, { "is_male": "False", "mother_age": 29.0, "plurality": "Single(1)", "gestation_weeks": 38 }, { "is_male": "True", "mother_age": 26.0, "plurality": "Triplets(3)", "gestation_weeks": 39 }, { # TODO 3a "is_male": "Unknown", "mother_age": 29.0, "plurality": "Multiple(2+)", "gestation_weeks": 38 }, ] } response = requests.post(api, json=data, headers=headers) print(response.content) """ Explanation: Use model to make online prediction. Python API We can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses are the order of the instances. End of explanation """ %%writefile inputs.json {"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} {"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} """ Explanation: The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different). gcloud shell API Instead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud. End of explanation """ %%bash gcloud ai-platform predict \ --model=babyweight \ --json-instances=inputs.json \ --version=ml_on_gcp # TODO 3b """ Explanation: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version. 
End of explanation """ %%bash INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs gsutil cp inputs.json $INPUT gsutil -m rm -rf $OUTPUT gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \ --data-format=TEXT \ --region ${REGION} \ --input-paths=$INPUT \ --output-path=$OUTPUT \ --model=babyweight \ --version=ml_on_gcp # TODO 4 """ Explanation: Use model to make batch prediction. Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. End of explanation """
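As a footnote to the online-prediction calls above: the service returns a JSON body of the form {"predictions": [...]}, which decodes with the standard json module. The sample bytes and the "weight" output key below are illustrative assumptions (the actual key depends on the model's serving signature) — they echo the example predictions quoted earlier rather than a live endpoint.

```python
import json

# Decode a (made-up) online-prediction response body. A real `response.content`
# from the requests call above has the same {"predictions": [...]} shape.
sample_body = b'{"predictions": [{"weight": [5.33]}, {"weight": [6.09]}]}'

payload = json.loads(sample_body)
weights = [round(p["weight"][0], 2) for p in payload["predictions"]]
print(weights)  # [5.33, 6.09]
```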
dusenberrymw/systemml
samples/jupyter-notebooks/Deep Learning Image Classification.ipynb
apache-2.0
from systemml import MLContext, dml

ml = MLContext(sc)

print "Spark Version:", sc.version
print "SystemML Version:", ml.version()
print "SystemML Built-Time:", ml.buildTime()

from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings("ignore")

"""
Explanation: Deep Learning Image Classification
This notebook shows SystemML Deep Learning functionality to map images of single digit numbers to their corresponding numeric representations. See Getting Started with Deep Learning and Python for an explanation of the deep learning concepts and assumptions used.
The downloaded MNIST dataset contains labeled images of handwritten digits, where each example is a 28x28 pixel image of grayscale values in the range [0,255] stretched out as 784 pixels, and each label is one of 10 possible digits in [0,9]. We download 60,000 training examples and 10,000 test examples, where the format is "label, pixel_1, pixel_2, ..., pixel_n".
We train a SystemML LeNet model; the trained model reaches an accuracy of 98 percent.
Install and load SystemML and other libraries Download and Access MNIST data Train a CNN classifier for MNIST handwritten digits Detect handwritten Digits <div style="text-align:center" markdown="1"> ![Image of Image to Digit](https://www.wolfram.com/mathematica/new-in-10/enhanced-image-processing/HTMLImages.en/handwritten-digits-classification/smallthumb_10.gif) Mapping images of numbers to numbers </div> <a id="load_systemml"></a> Install and load SystemML and other libraries End of explanation """ trainData = np.genfromtxt('mnist/mnist_train.csv', delimiter=",") testData = np.genfromtxt('mnist/mnist_test.csv', delimiter=",") print "Training data: ", trainData.shape print "Test data: ", testData.shape pd.set_option('display.max_columns', 200) pd.DataFrame(testData[1:10,],dtype='uint') """ Explanation: <a id="access_data"></a> Download and Access MNIST data Download the MNIST data from the MLData repository, and then split and save. Read the data. End of explanation """ !jar -xf ~/.local/lib/python2.7/site-packages/systemml/systemml-java/systemml*.jar scripts/nn/examples/mnist_lenet.dml !cat scripts/nn/examples/mnist_lenet.dml """ Explanation: <a id="train"></a> Develop LeNet CNN classifier on Training Data <div style="text-align:center" markdown="1"> ![Image of Image to Digit](http://www.ommegaonline.org/admin/journalassistance/picturegallery/896.jpg) MNIST digit recognition – LeNet architecture </div> (Optional) Display SystemML LeNet Implementation End of explanation """ script = """ source("scripts/nn/examples/mnist_lenet.dml") as mnist_lenet # Bind data; Extract images and labels n = nrow(data) images = data[,2:ncol(data)] labels = data[,1] # Scale images to [-1,1], and one-hot encode the labels images = (images / 255.0) * 2 - 1 labels = table(seq(1, n), labels+1, n, 10) # Split data into training (55,000 examples) and validation (5,000 examples) X = images[5001:nrow(images),] X_val = images[1:5000,] y = labels[5001:nrow(images),] y_val = labels[1:5000,] # 
Train the model using channel, height, and width to produce weights/biases.
[W1, b1, W2, b2, W3, b3, W4, b4] = mnist_lenet::train(X, y, X_val, y_val, C, Hin, Win, epochs)
"""
rets = ('W1', 'b1','W2','b2','W3','b3','W4','b4')

script = (dml(script).input(data=trainData, epochs=1, C=1, Hin=28, Win=28)
                     .output(*rets))

W1, b1, W2, b2, W3, b3, W4, b4 = ml.execute(script).get(*rets)

"""
Explanation: Train Model using SystemML LeNet CNN.
End of explanation
"""

scriptPredict = """
source("scripts/nn/examples/mnist_lenet.dml") as mnist_lenet

# Separate images from labels and scale images to [-1,1]
X_test = data[,2:ncol(data)]
X_test = (X_test / 255.0) * 2 - 1

# Predict
probs = mnist_lenet::predict(X_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, b4)
predictions = rowIndexMax(probs) - 1
"""

script = (dml(scriptPredict).input(data=testData, C=1, Hin=28, Win=28,
                                   W1=W1, b1=b1, W2=W2, b2=b2,
                                   W3=W3, b3=b3, W4=W4, b4=b4)
                            .output("predictions"))

predictions = ml.execute(script).get("predictions").toNumPy()

print classification_report(testData[:,0], predictions)

"""
Explanation: Use trained model and predict on test data, and evaluate the quality of the predictions for each digit.
End of explanation
"""

img_size = np.sqrt(testData.shape[1] - 1).astype("uint8")

def displayImage(i):
    image = testData[i,1:].reshape((img_size, img_size)).astype("uint8")
    imgplot = plt.imshow(image, cmap='gray')

def predictImage(i):
    image = testData[i,:].reshape(1,testData.shape[1])
    prog = dml(scriptPredict).input(data=image, C=1, Hin=28, Win=28,
                                    W1=W1, b1=b1, W2=W2, b2=b2,
                                    W3=W3, b3=b3, W4=W4, b4=b4) \
                             .output("predictions")
    result = ml.execute(prog)
    return (result.get("predictions").toNumPy())[0]

i = np.random.choice(np.arange(0, len(testData)), size = (1,))
p = predictImage(i)

print "Image", i, "\nPredicted digit:", p, "\nActual digit: ", testData[i,0], "\nResult: ", (p == testData[i,0])

displayImage(i)

pd.set_option('display.max_columns', 28)
pd.DataFrame((testData[i,1:]).reshape(img_size, img_size),dtype='uint')

"""
Explanation: <a id="predict"></a> Detect handwritten Digits
Define a function that randomly selects a test image, displays the image, and scores it.
End of explanation
"""
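Beyond the per-digit classification_report used above, overall accuracy is a one-line NumPy computation. The arrays below are illustrative stand-ins for the notebook's predictions and testData[:, 0], so the sketch is self-contained.

```python
import numpy as np

# Overall accuracy = fraction of predicted digits matching the true labels.
# y_true stands in for testData[:, 0]; y_pred for the model's predictions.
y_true = np.array([7, 2, 1, 0, 4])
y_pred = np.array([7, 2, 1, 0, 9])

accuracy = float(np.mean(y_pred == y_true))
print("Accuracy: %.2f" % accuracy)  # Accuracy: 0.80
```

On the real arrays, `np.mean(predictions.ravel() == testData[:, 0])` would give the figure summarized by the report above.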
tpin3694/tpin3694.github.io
python/repr_vs_str.ipynb
mit
import datetime

"""
Explanation: Title: repr vs. str
Slug: repr_vs_str
Summary: repr vs. str in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Interested in learning more? Check out Fluent Python
Preliminaries
End of explanation
"""

class Regiment(object):
    def __init__(self, date=datetime.datetime.now()):
        # Note: this default is evaluated once, at class-definition time.
        self.date = date

    def __repr__(self):
        # __repr__ should return an unambiguous string representation.
        return repr(self.date)

    def __str__(self):
        # __str__ should return a readable string representation.
        return str(self.date)

"""
Explanation: Create A Simple Object
End of explanation
"""
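The contrast is easiest to see on a datetime value directly — str() aims at readability while repr() aims at an unambiguous, reconstructable form, and a class's __str__/__repr__ methods typically delegate to these two conventions:

```python
import datetime

moment = datetime.datetime(2016, 1, 23, 12, 0)

print(str(moment))   # 2016-01-23 12:00:00  (readable)
print(repr(moment))  # datetime.datetime(2016, 1, 23, 12, 0)  (unambiguous)
```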
rishuatgithub/MLPy
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/01-MNIST-with-CNN.ipynb
apache-2.0
import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision import datasets, transforms from torchvision.utils import make_grid import numpy as np import pandas as pd from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt %matplotlib inline """ Explanation: <img src="../Pierian-Data-Logo.PNG"> <br> <strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong> MNIST Code Along with CNN Now that we've seen the results of an artificial neural network model on the <a href='https://en.wikipedia.org/wiki/MNIST_database'>MNIST dataset</a>, let's work the same data with a <a href='https://en.wikipedia.org/wiki/Convolutional_neural_network'>Convolutional Neural Network</a> (CNN). Make sure to watch the theory lectures! You'll want to be comfortable with: * convolutional layers * filters/kernels * pooling * depth, stride and zero-padding Note that in this exercise there is no need to flatten the MNIST data, as a CNN expects 2-dimensional data. Perform standard imports End of explanation """ transform = transforms.ToTensor() train_data = datasets.MNIST(root='../Data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='../Data', train=False, download=True, transform=transform) train_data test_data """ Explanation: Load the MNIST dataset PyTorch makes the MNIST train and test datasets available through <a href='https://pytorch.org/docs/stable/torchvision/index.html'><tt><strong>torchvision</strong></tt></a>. The first time they're called, the datasets will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download. Refer to the previous section for explanations of transformations, batch sizes and <a href='https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader'><tt><strong>DataLoader</strong></tt></a>. 
End of explanation """ train_loader = DataLoader(train_data, batch_size=10, shuffle=True) test_loader = DataLoader(test_data, batch_size=10, shuffle=False) """ Explanation: Create loaders When working with images, we want relatively small batches; a batch size of 4 is not uncommon. End of explanation """ # Define layers conv1 = nn.Conv2d(1, 6, 3, 1) conv2 = nn.Conv2d(6, 16, 3, 1) # Grab the first MNIST record for i, (X_train, y_train) in enumerate(train_data): break # Create a rank-4 tensor to be passed into the model # (train_loader will have done this already) x = X_train.view(1,1,28,28) print(x.shape) # Perform the first convolution/activation x = F.relu(conv1(x)) print(x.shape) # Run the first pooling layer x = F.max_pool2d(x, 2, 2) print(x.shape) # Perform the second convolution/activation x = F.relu(conv2(x)) print(x.shape) # Run the second pooling layer x = F.max_pool2d(x, 2, 2) print(x.shape) # Flatten the data x = x.view(-1, 5*5*16) print(x.shape) """ Explanation: Define a convolutional model In the previous section we used only fully connected layers, with an input layer of 784 (our flattened 28x28 images), hidden layers of 120 and 84 neurons, and an output size representing 10 possible digits. This time we'll employ two convolutional layers and two pooling layers before feeding data through fully connected hidden layers to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. <div class="alert alert-info"><strong>Let's walk through the steps we're about to take.</strong><br> 1. Extend the base Module class: <tt><font color=black>class ConvolutionalNetwork(nn.Module):<br> &nbsp;&nbsp;&nbsp;&nbsp;def \_\_init\_\_(self):<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;super().\_\_init\_\_()</font></tt><br> 2. Set up the convolutional layers with <a href='https://pytorch.org/docs/stable/nn.html#conv2d'><tt><strong>torch.nn.Conv2d()</strong></tt></a><br><br>The first layer has one input channel (the grayscale color channel). 
We'll assign 6 output channels for feature extraction. We'll set our kernel size to 3 to make a 3x3 filter, and set the step size to 1.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.conv1 = nn.Conv2d(1, 6, 3, 1)</font></tt><br> The second layer will take our 6 input channels and deliver 16 output channels.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.conv2 = nn.Conv2d(6, 16, 3, 1)</font></tt><br><br> 3. Set up the fully connected layers with <a href='https://pytorch.org/docs/stable/nn.html#linear'><tt><strong>torch.nn.Linear()</strong></tt></a>.<br><br>The input size of (5x5x16) is determined by the effect of our kernels on the input image size. A 3x3 filter applied to a 28x28 image leaves a 1-pixel edge on all four sides. In one layer the size changes from 28x28 to 26x26. We could address this with zero-padding, but since an MNIST image is mostly black at the edges, we should be safe ignoring these pixels. We'll apply the kernel twice, and apply pooling layers twice, so our resulting output will be $\;(((28-2)/2)-2)/2 = 5.5\;$ which rounds down to 5 pixels per side.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc1 = nn.Linear(5\*5\*16, 120)</font></tt><br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc2 = nn.Linear(120, 84)</font></tt><br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc3 = nn.Linear(84, 10)</font></tt><br> See below for a more detailed look at this step.<br><br> 4. 
Define the forward method.<br><br>Activations can be applied to the convolutions in one line using <a href='https://pytorch.org/docs/stable/nn.html#id27'><tt><strong>F.relu()</strong></tt></a> and pooling is done using <a href='https://pytorch.org/docs/stable/nn.html#maxpool2d'><tt><strong>F.max_pool2d()</strong></tt></a><br>
<tt><font color=black>def forward(self, X):<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.conv1(X))<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.max_pool2d(X, 2, 2)<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.conv2(X))<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.max_pool2d(X, 2, 2)<br>
</font></tt>Flatten the data for the fully connected layers:<br><tt><font color=black>
&nbsp;&nbsp;&nbsp;&nbsp;X = X.view(-1, 5\*5\*16)<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.fc1(X))<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.fc2(X))<br>
&nbsp;&nbsp;&nbsp;&nbsp;X = self.fc3(X)<br>
&nbsp;&nbsp;&nbsp;&nbsp;return F.log_softmax(X, dim=1)</font></tt>
</div>
<div class="alert alert-danger"><strong>Breaking down the convolutional layers</strong> (this code is for illustration purposes only.)</div>
End of explanation
"""
class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1)
        self.conv2 = nn.Conv2d(6, 16, 3, 1)
        self.fc1 = nn.Linear(5*5*16, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84,10)

    def forward(self, X):
        X = F.relu(self.conv1(X))
        X = F.max_pool2d(X, 2, 2)
        X = F.relu(self.conv2(X))
        X = F.max_pool2d(X, 2, 2)
        X = X.view(-1, 5*5*16)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = self.fc3(X)
        return F.log_softmax(X, dim=1)

torch.manual_seed(42)
model = ConvolutionalNetwork()
model
"""
Explanation: <div class="alert alert-danger"><strong>This is how the convolution output is passed into the fully connected layers.</strong></div>
Now let's run the code.
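As a quick cross-check of the 28-to-5 size arithmetic walked through above, here is a stand-alone sketch (the helper names are ours and no PyTorch is required; the formulas are the standard conv/pool output-size rules):

```python
def conv2d_out(size, kernel=3, stride=1, padding=0):
    # standard convolution output formula: floor((n + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 28                     # MNIST image side
size = conv2d_out(size)       # 28 -> 26 (3x3 kernel, stride 1, no padding)
size = maxpool_out(size)      # 26 -> 13
size = conv2d_out(size)       # 13 -> 11
size = maxpool_out(size)      # 11 -> 5  (the 5.5 rounds down)
flattened = size * size * 16  # 16 channels after conv2 -> 400 inputs to fc1
```

This is exactly why `fc1` is declared as `nn.Linear(5*5*16, 120)`.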
End of explanation """ def count_parameters(model): params = [p.numel() for p in model.parameters() if p.requires_grad] for item in params: print(f'{item:>6}') print(f'______\n{sum(params):>6}') count_parameters(model) """ Explanation: Including the bias terms for each layer, the total number of parameters being trained is:<br> $\quad\begin{split}(1\times6\times3\times3)+6+(6\times16\times3\times3)+16+(400\times120)+120+(120\times84)+84+(84\times10)+10 &=\ 54+6+864+16+48000+120+10080+84+840+10 &= 60,074\end{split}$<br> End of explanation """ criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) """ Explanation: Define loss function & optimizer End of explanation """ import time start_time = time.time() epochs = 5 train_losses = [] test_losses = [] train_correct = [] test_correct = [] for i in range(epochs): trn_corr = 0 tst_corr = 0 # Run the training batches for b, (X_train, y_train) in enumerate(train_loader): b+=1 # Apply the model y_pred = model(X_train) # we don't flatten X-train here loss = criterion(y_pred, y_train) # Tally the number of correct predictions predicted = torch.max(y_pred.data, 1)[1] batch_corr = (predicted == y_train).sum() trn_corr += batch_corr # Update parameters optimizer.zero_grad() loss.backward() optimizer.step() # Print interim results if b%600 == 0: print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/60000] loss: {loss.item():10.8f} \ accuracy: {trn_corr.item()*100/(10*b):7.3f}%') train_losses.append(loss) train_correct.append(trn_corr) # Run the testing batches with torch.no_grad(): for b, (X_test, y_test) in enumerate(test_loader): # Apply the model y_val = model(X_test) # Tally the number of correct predictions predicted = torch.max(y_val.data, 1)[1] tst_corr += (predicted == y_test).sum() loss = criterion(y_val, y_test) test_losses.append(loss) test_correct.append(tst_corr) print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed """ Explanation: Train the model This time 
we'll feed the data directly into the model without flattening it first. End of explanation """ plt.plot(train_losses, label='training loss') plt.plot(test_losses, label='validation loss') plt.title('Loss at the end of each epoch') plt.legend(); test_losses """ Explanation: Plot the loss and accuracy comparisons End of explanation """ plt.plot([t/600 for t in train_correct], label='training accuracy') plt.plot([t/100 for t in test_correct], label='validation accuracy') plt.title('Accuracy at the end of each epoch') plt.legend(); """ Explanation: While there may be some overfitting of the training data, there is far less than we saw with the ANN model. End of explanation """ # Extract the data all at once, not in batches test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False) with torch.no_grad(): correct = 0 for X_test, y_test in test_load_all: y_val = model(X_test) # we don't flatten the data this time predicted = torch.max(y_val,1)[1] correct += (predicted == y_test).sum() print(f'Test accuracy: {correct.item()}/{len(test_data)} = {correct.item()*100/(len(test_data)):7.3f}%') """ Explanation: Evaluate Test Data End of explanation """ # print a row of values for reference np.set_printoptions(formatter=dict(int=lambda x: f'{x:4}')) print(np.arange(10).reshape(1,10)) print() # print the confusion matrix print(confusion_matrix(predicted.view(-1), y_test.view(-1))) """ Explanation: Recall that our [784,120,84,10] ANN returned an accuracy of 97.25% after 10 epochs. And it used 105,214 parameters to our current 60,074. 
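The 60,074 parameter total quoted above can be re-derived with plain Python (layer shapes copied from the model definition; no framework needed):

```python
# (in_channels, out_channels, kernel) for the conv layers,
# (fan_in, fan_out) for the fully connected layers
convs = [(1, 6, 3), (6, 16, 3)]
fcs = [(5 * 5 * 16, 120), (120, 84), (84, 10)]

total = 0
for in_ch, out_ch, k in convs:
    total += in_ch * out_ch * k * k + out_ch  # weights + biases
for fan_in, fan_out in fcs:
    total += fan_in * fan_out + fan_out       # weights + biases
```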
Display the confusion matrix End of explanation """ misses = np.array([]) for i in range(len(predicted.view(-1))): if predicted[i] != y_test[i]: misses = np.append(misses,i).astype('int64') # Display the number of misses len(misses) # Display the first 10 index positions misses[:10] # Set up an iterator to feed batched rows r = 12 # row size row = iter(np.array_split(misses,len(misses)//r+1)) """ Explanation: Examine the misses We can track the index positions of "missed" predictions, and extract the corresponding image and label. We'll do this in batches to save screen space. End of explanation """ nextrow = next(row) print("Index:", nextrow) print("Label:", y_test.index_select(0,torch.tensor(nextrow)).numpy()) print("Guess:", predicted.index_select(0,torch.tensor(nextrow)).numpy()) images = X_test.index_select(0,torch.tensor(nextrow)) im = make_grid(images, nrow=r) plt.figure(figsize=(10,4)) plt.imshow(np.transpose(im.numpy(), (1, 2, 0))); """ Explanation: Now that everything is set up, run and re-run the cell below to view all of the missed predictions.<br> Use <kbd>Ctrl+Enter</kbd> to remain on the cell between runs. You'll see a <tt>StopIteration</tt> once all the misses have been seen. End of explanation """ x = 2019 plt.figure(figsize=(1,1)) plt.imshow(test_data[x][0].reshape((28,28)), cmap="gist_yarg"); model.eval() with torch.no_grad(): new_pred = model(test_data[x][0].view(1,1,28,28)).argmax() print("Predicted value:",new_pred.item()) """ Explanation: Run a new image through the model We can also pass a single image through the model to obtain a prediction. Pick a number from 0 to 9999, assign it to "x", and we'll use that value to select a number from the MNIST test set. End of explanation """
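As a framework-free reference, the bookkeeping behind the confusion matrix used above can be sketched in a few lines (rows are indexed by the first sequence, matching the `confusion_matrix(predicted, y_test)` call; this is our illustration, not sklearn's implementation):

```python
def confusion(y_true, y_pred, n_classes):
    # rows follow the first argument, columns the second
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

cm = confusion([0, 1, 1, 2], [0, 1, 2, 2], 3)
```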
hasecbinusr/pysal
pysal/contrib/spint/notebooks/ODW_example.ipynb
bsd-3-clause
origins = ps.weights.lat2W(4,4)
dests = ps.weights.lat2W(4,4)
origins.n
dests.n
ODw = ODW(origins, dests)
print ODw.n, 16*16
ODw.full()[0].shape
"""
Explanation: With an equal number of origins and destinations (n=16)
End of explanation
"""
origins = ps.weights.lat2W(3,3)
dests = ps.weights.lat2W(5,5)
origins.n
dests.n
ODw = ODW(origins, dests)
print ODw.n, 9*25
ODw.full()[0].shape
"""
Explanation: With non-equal number of origins (n=9) and destinations (m=25)
End of explanation
"""
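ODW links every (origin, destination) pair, which is why its dimension is the product of the two weight matrices' sizes; the pair enumeration can be sketched in plain Python (our illustration, not pysal's code):

```python
from itertools import product

# every origin is paired with every destination
n_origins, n_dests = 9, 25
od_pairs = list(product(range(n_origins), range(n_dests)))

# the equal-size case from the first example
equal_pairs = list(product(range(16), range(16)))
```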
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_ml/ml_scikit_learn_simple_correction.ipynb
mit
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: Recap on scikit-learn and machine learning (correction)
A few simple exercises on scikit-learn. The notebook is long for those who are just starting out in machine learning, and probably holds little suspense for those who have already done some.
End of explanation
"""
from numpy import random
n = 1000
X = random.rand(n, 2)
X[:5]
y = X[:, 0] * 3 - 2 * X[:, 1] ** 2 + random.rand(n)
y[:5]
"""
Explanation: Synthetic data
We simulate a random dataset.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
"""
Explanation: Exercise 1: split into a training set and a test set
A simple train_test_split.
End of explanation
"""
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, y_train)
from sklearn.metrics import r2_score
score = r2_score(y_test, reg.predict(X_test))
score
"""
Explanation: Exercise 2: fit a linear regression
And compute the $R^2$ coefficient. For those who do not know how to use a search engine: LinearRegression, r2_score.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures()
poly.fit(X_train)
X_train2 = poly.transform(X_train)
reg2 = LinearRegression()
reg2.fit(X_train2, y_train)
score2 = r2_score(y_test, reg2.predict(poly.transform(X_test)))
score2
"""
Explanation: Exercise 3: improve the model by applying a well-chosen transformation
The underlying model is $Y = 3 X_1 - 2 X_2^2 + \epsilon$. It suffices to add polynomial features with PolynomialFeatures.
End of explanation
"""
"""
Explanation: The $R^2$ coefficient is higher because we use the same variables as the model.
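For intuition, the degree-2 expansion that PolynomialFeatures performs on the two inputs can be written out by hand (the ordering below matches sklearn's default, to the best of our knowledge; the helper name is ours):

```python
def poly2(x1, x2):
    # degree-2 expansion of two variables:
    # bias, linear terms, then all quadratic monomials
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

features = poly2(2.0, 3.0)
```

Six columns per sample, which is why the transformed design matrix is wider than the original two-column one.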
It is theoretically not possible to go beyond that.
Exercise 4: fit a random forest
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
rf.fit(X_train, y_train)
r2_score(y_test, rf.predict(X_test))
"""
Explanation: The linear model is the best model in our case since the data was built that way. The $R^2$ is expected not to be higher, or at least not significantly higher. Let's look at the polynomial features...
End of explanation
"""
rf2 = RandomForestRegressor()
rf2.fit(X_train2, y_train)
r2_score(y_test, rf2.predict(poly.transform(X_test)))
"""
Explanation: Before jumping to conclusions, the experiment should be repeated several times before claiming that performance is higher or lower with these features, which this notebook will not do since the theoretical answer is known in this case.
Exercise 5: a bit of math
Compare the two models on the following data. What do you notice? Explain why.
End of explanation
"""
X_test2 = random.rand(n, 2) + 0.5
y_test2 = X_test2[:, 0] * 3 - 2 * X_test2[:, 1] ** 2 + random.rand(n)
res = []
for model in [reg, reg2, rf, rf2]:
    name = model.__class__.__name__
    try:
        pred = model.predict(X_test)
        pred2 = model.predict(X_test2)
    except Exception:
        pred = model.predict(poly.transform(X_test))
        pred2 = model.predict(poly.transform(X_test2))
        name += " + X^2"
    res.append(dict(name=name, r2=r2_score(y_test, pred),
                    r2_jeu2=r2_score(y_test2, pred2)))
import pandas
df = pandas.DataFrame(res)
df
"""
Explanation: The only model that really holds up is the linear regression with polynomial features. Since it is equivalent to the theoretical model, it is normal that it does not fail too badly, even though its coefficients are not identical to the theoretical model's (more data would be needed for it to converge).
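The r2_score metric used throughout these comparisons reduces to one line of algebra; a from-scratch sketch (ours, not sklearn's implementation):

```python
def r2(y_true, y_pred):
    # R^2 = 1 - residual sum of squares / total sum of squares
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

perfect = r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # perfect fit
baseline = r2([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # predicting the mean
```

A perfect fit gives 1, predicting the mean gives 0, and a model worse than the mean goes negative, which is exactly what happens on the shifted data above.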
End of explanation
"""
reg2.coef_, reg2.intercept_
"""
Explanation: For the other models, let's first take a visual look at what happens.
Exercise 6: make a plot with...
I let the code describe the chosen approach, to illustrate the shortcomings of the previous models. The commentary follows the plot, for the lazy.
End of explanation
"""
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
a, b = 0.9, 1.1
index1 = (X_test2[:, 0] >= a) & (X_test2[:, 0] <= b)
index2 = (X_test2[:, 1] >= a) & (X_test2[:, 1] <= b)
yth = X_test2[:, 0] * 3 - 2 * X_test2[:, 1] ** 2
ax[0].set_xlabel("X1")
ax[0].set_ylabel("Y")
ax[0].plot(X_test2[index2, 0], yth[index2], '.', label='theoretical Y')
ax[1].set_xlabel("X2")
ax[1].set_ylabel("Y")
ax[1].plot(X_test2[index1, 1], yth[index1], '.', label='theoretical Y')
for model in [reg, reg2, rf, rf2]:
    name = model.__class__.__name__
    try:
        pred2 = model.predict(X_test2)
    except Exception:
        pred2 = model.predict(poly.transform(X_test2))
        name += " + X^2"
    ax[0].plot(X_test2[index2, 0], pred2[index2], '.', label=name)
    ax[1].plot(X_test2[index1, 1], pred2[index1], '.', label=name)
ax[0].legend()
ax[1].legend();
"""
Explanation: The plot looks at the models' outputs along one coordinate while restricting the other to a given interval. We immediately see that the random forest becomes constant beyond a certain threshold. Once again, this is entirely normal, since the training set only contains $X_1, X_2$ within the interval $[0, 1]$.
Outside that interval, each decision tree outputs a constant value, simply because trees are step functions: a random forest is an average of step functions, so it is bounded. As for the first linear regression, it cannot capture the second-degree effects; it is linear in the original variables. It strays less, but it still strays from the target variable. The goal of this exercise is to illustrate that a machine learning model is estimated on a dataset that follows a certain distribution. When the data on which the model is used for prediction no longer follow that distribution, the models return answers that are very likely wrong, and wrong in different ways from one model to another. This is why it is said that machine learning models must be retrained regularly, especially when they are applied to data generated by human activity rather than data arising from physical problems.
Exercise 7: illustrate overfitting with a decision tree
On the first dataset.
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
res = []
for md in range(1, 20):
    tree = DecisionTreeRegressor(max_depth=md)
    tree.fit(X_train, y_train)
    r2_train = r2_score(y_train, tree.predict(X_train))
    r2_test = r2_score(y_test, tree.predict(X_test))
    res.append(dict(profondeur=md, r2_train=r2_train, r2_test=r2_test))
df = pandas.DataFrame(res)
df.head()
ax = df.plot(x='profondeur', y=['r2_train', 'r2_test'])
ax.set_title("Evolution of R2 with depth");
"""
Explanation: Exercise 8: increase the number of features and regularize the regression
The goal is to look at the impact of regularizing the regression coefficients as the number of features grows. We use polynomial features and a Ridge or Lasso regression.
End of explanation
"""
from sklearn.linear_model import Ridge, Lasso
import numpy.linalg as nplin
import numpy

def coef_non_nuls(coef):
    return sum(numpy.abs(coef) > 0.001)

res = []
for d in range(1, 21):
    poly = PolynomialFeatures(degree=d)
    poly.fit(X_train)
    X_test2 = poly.transform(X_test)
    reg = LinearRegression()
    reg.fit(poly.transform(X_train), y_train)
    r2_reg = r2_score(y_test, reg.predict(X_test2))
    rid = Ridge(alpha=10)
    rid.fit(poly.transform(X_train), y_train)
    r2_rid = r2_score(y_test, rid.predict(X_test2))
    las = Lasso(alpha=0.01)
    las.fit(poly.transform(X_train), y_train)
    r2_las = r2_score(y_test, las.predict(X_test2))
    res.append(dict(degre=d, nb_features=X_test2.shape[1],
                    r2_reg=r2_reg, r2_las=r2_las, r2_rid=r2_rid,
                    norm_reg=nplin.norm(reg.coef_),
                    norm_rid=nplin.norm(rid.coef_),
                    norm_las=nplin.norm(las.coef_),
                    nnul_reg=coef_non_nuls(reg.coef_),
                    nnul_rid=coef_non_nuls(rid.coef_),
                    nnul_las=coef_non_nuls(las.coef_),
                    ))
df = pandas.DataFrame(res)
df
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
df.plot(x="nb_features", y=["r2_reg", "r2_las", "r2_rid"], ax=ax[0])
ax[0].set_xlabel("Number of features")
ax[0].set_ylim([0, 1])
ax[0].set_title("r2")
df.plot(x="nb_features", y=["nnul_reg", "nnul_las", "nnul_rid"], ax=ax[1])
ax[1].set_xlabel("Number of features")
ax[1].set_title("Number of non-zero coefficients");
"""
Explanation: End of explanation
"""
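The sparsity that coef_non_nuls counts in the loop above comes from Lasso's soft-thresholding: coefficients whose magnitude falls below the regularization strength are clipped to exactly zero. A sketch of that proximal step (illustrative, not sklearn's code):

```python
def soft_threshold(w, alpha):
    # shrink a coefficient toward zero by alpha, clipping at zero;
    # this is the update that makes Lasso produce exact zeros
    if w > alpha:
        return w - alpha
    if w < -alpha:
        return w + alpha
    return 0.0

coefs = [0.005, -0.002, 0.8, -1.5]
shrunk = [soft_threshold(w, 0.01) for w in coefs]
nonzero = sum(1 for w in shrunk if abs(w) > 0)
```

Ridge, by contrast, only shrinks coefficients multiplicatively and almost never produces exact zeros, which is what the right-hand plot above illustrates.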
nwjs/chromium.src
third_party/tensorflow-text/src/docs/guide/text_tf_lite.ipynb
bsd-3-clause
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ !pip install -q -U tf-nightly !pip install -q -U tensorflow-text-nightly from absl import app import numpy as np import tensorflow as tf import tensorflow_text as tf_text from tensorflow.lite.python import interpreter """ Explanation: Converting TensorFlow Text operators to TensorFlow Lite <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/guide/text_tf_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/text_tf_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview Machine learning models are frequently deployed using TensorFlow Lite to mobile, embedded, and IoT devices to improve data privacy and lower response times. 
These models often require support for text processing operations. TensorFlow Text version 2.7 and higher provides improved performance, reduced binary sizes, and operations specifically optimized for use in these environments. Text operators The following TensorFlow Text classes can be used from within a TensorFlow Lite model. FastWordpieceTokenizer WhitespaceTokenizer Model Example End of explanation """ class TokenizerModel(tf.keras.Model): def __init__(self, **kwargs): super().__init__(**kwargs) self.tokenizer = tf_text.WhitespaceTokenizer() @tf.function(input_signature=[ tf.TensorSpec(shape=[None], dtype=tf.string, name='input') ]) def call(self, input_tensor): return { 'tokens': self.tokenizer.tokenize(input_tensor).flat_values } # Test input data. input_data = np.array(['Some minds are better kept apart']) # Define a Keras model. model = TokenizerModel() # Perform TensorFlow Text inference. tf_result = model(tf.constant(input_data)) print('TensorFlow result = ', tf_result['tokens']) """ Explanation: The following code example shows the conversion process and interpretation in Python using a simple test model. Note that the output of a model cannot be a tf.RaggedTensor object when you are using TensorFlow Lite. However, you can return the components of a tf.RaggedTensor object or convert it using its to_tensor function. See the RaggedTensor guide for more details. End of explanation """ # Convert to TensorFlow Lite. converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS] converter.allow_custom_ops = True tflite_model = converter.convert() """ Explanation: Convert the TensorFlow model to TensorFlow Lite When converting a TensorFlow model with TensorFlow Text operators to TensorFlow Lite, you need to indicate to the TFLiteConverter that there are custom operators using the allow_custom_ops attribute as in the example below. You can then run the model conversion as you normally would. 
Review the TensorFlow Lite converter documentation for a detailed guide on the basics of model conversion. End of explanation """ # Perform TensorFlow Lite inference. interp = interpreter.InterpreterWithCustomOps( model_content=tflite_model, custom_op_registerers=tf_text.tflite_registrar.SELECT_TFTEXT_OPS) interp.get_signature_list() """ Explanation: Inference For the TensorFlow Lite interpreter to properly read your model containing TensorFlow Text operators, you must configure it to use these custom operators, and provide registration methods for them. Use tf_text.tflite_registrar.SELECT_TFTEXT_OPS to provide the full suite of registration functions for the supported TensorFlow Text operators to InterpreterWithCustomOps. Note, that while the example below shows inference in Python, the steps are similar in other languages with some minor API translations, and the necessity to build the tflite_registrar into your binary. See TensorFlow Lite Inference for more details. End of explanation """ tokenize = interp.get_signature_runner('serving_default') output = tokenize(input=input_data) print('TensorFlow Lite result = ', output['tokens']) """ Explanation: Next, the TensorFlow Lite interpreter is invoked with the input, providing a result which matches the TensorFlow result from above. End of explanation """
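As noted above, a deployed model cannot return a tf.RaggedTensor directly, which is why the example exposes flat_values. The flattening itself is just row concatenation, and can be sketched without TensorFlow (names here are illustrative):

```python
def flat_values(ragged):
    # concatenate the rows of a ragged (list-of-lists) structure,
    # mirroring what RaggedTensor.flat_values exposes
    flat = []
    for row in ragged:
        flat.extend(row)
    return flat

tokens = flat_values([["Some", "minds"], ["are"], ["better", "kept", "apart"]])
```

The row lengths (the ragged structure) are lost in this view; a real RaggedTensor keeps them in separate row-partition tensors.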
mitdbg/modeldb
client/workflows/demos/registry/sklearn-census-end-to-end.ipynb
mit
from __future__ import print_function import warnings from sklearn.exceptions import ConvergenceWarning warnings.filterwarnings("ignore", category=ConvergenceWarning) warnings.filterwarnings("ignore", category=FutureWarning) import itertools import os import time import six import numpy as np import pandas as pd import sklearn from sklearn import model_selection from sklearn import linear_model from sklearn import metrics """ Explanation: Deploying a scikit-learn model on Verta Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc); a function (e.g., squaring a number, making a DB function etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application.) See more here. This notebook provides an example of how to deploy a scikit-learn model on Verta as a Verta Standard Model either via convenience functions or by extending VertaModelBase. 0. Imports End of explanation """ # restart your notebook if prompted on Colab try: import verta except ImportError: !pip install verta import os # Ensure credentials are set up, if not, use below # os.environ['VERTA_EMAIL'] = # os.environ['VERTA_DEV_KEY'] = # os.environ['VERTA_HOST'] = from verta import Client PROJECT_NAME = "Census" EXPERIMENT_NAME = "sklearn" client = Client(os.environ['VERTA_HOST']) proj = client.set_project(PROJECT_NAME) expt = client.set_experiment(EXPERIMENT_NAME) """ Explanation: 0.1 Verta import and setup End of explanation """ try: import wget except ImportError: !pip install wget # you may need pip3 import wget train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv" train_data_filename = wget.detect_filename(train_data_url) if not os.path.isfile(train_data_filename): wget.download(train_data_url) test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv" test_data_filename = wget.detect_filename(test_data_url) if not os.path.isfile(test_data_filename): wget.download(test_data_url) 
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]

df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
y_test = df_test.iloc[:, -1]

df_train.head()
"""
Explanation: 1. Model Training
1.1 Load training data
End of explanation
"""
hyperparam_candidates = {
    'C': [1e-6, 1e-4],
    'solver': ['lbfgs'],
    'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
                   for values
                   in itertools.product(*hyperparam_candidates.values())]
"""
Explanation: Define hyperparams
End of explanation
"""
def run_experiment(hyperparams):
    # create object to track experiment run
    run = client.set_experiment_run()

    # create validation split
    (X_val_train, X_val_test,
     y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
                                                                 test_size=0.2,
                                                                 shuffle=True)

    # log hyperparameters
    run.log_hyperparameters(hyperparams)
    print(hyperparams, end=' ')

    # create and train model on the training part of the split
    model = linear_model.LogisticRegression(**hyperparams)
    model.fit(X_val_train, y_val_train)

    # calculate and log validation accuracy
    val_acc = model.score(X_val_test, y_val_test)
    run.log_metric("val_acc", val_acc)
    print("Validation accuracy: {:.4f}".format(val_acc))

# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
    run_experiment(hyperparams)

best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))

best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))

model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
"""
Explanation: 1.3 Train/test code
End of explanation
"""
registered_model = client.get_or_create_registered_model(
    name="census-sklearn", labels=["tabular", "sklearn"])
"""
Explanation: 2. Register Model for deployment End of explanation """ from verta.environment import Python model_version_v1 = registered_model.create_standard_model_from_sklearn( model, environment=Python(requirements=["scikit-learn"]), name="v1", ) """ Explanation: 2.1 Register from the model object If you are in the same file where you have the model object handy, use the code below to package the model End of explanation """ import cloudpickle with open("model.pkl", "wb") as f: cloudpickle.dump(model, f) from verta.registry import VertaModelBase class CensusIncomeClassifier(VertaModelBase): def __init__(self, artifacts): self.model = cloudpickle.load(open(artifacts["serialized_model"], "rb")) def predict(self, batch_input): results = [] for one_input in batch_input: results.append(self.model.predict(one_input)) return results artifacts_dict = {"serialized_model" : "model.pkl"} clf = CensusIncomeClassifier(artifacts_dict) clf.predict([X_test.values.tolist()[:5]]) model_version_v2 = registered_model.create_standard_model( model_cls=CensusIncomeClassifier, environment=Python(requirements=["scikit-learn"]), artifacts=artifacts_dict, name="v2" ) """ Explanation: 2.2 (OR) Register a serialized version of the model using the VertaModelBase End of explanation """ census_endpoint = client.get_or_create_endpoint("census-model") census_endpoint.update(model_version_v1, wait=True) deployed_model = census_endpoint.get_deployed_model() deployed_model.predict(X_test.values.tolist()[:5]) census_endpoint.update(model_version_v2, wait=True) deployed_model = census_endpoint.get_deployed_model() deployed_model.predict([X_test.values.tolist()[:5]]) """ Explanation: 3. Deploy model to endpoint End of explanation """
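The deserialize-once, predict-on-batches pattern of the VertaModelBase class above can be sketched with nothing but the standard library (the blob contents, thresholds, and Wrapper name are ours, purely illustrative):

```python
import pickle

def make_blob():
    # stand-in for a serialized model artifact: just learned "weights"
    return pickle.dumps({"weights": [0.6, 0.4], "threshold": 0.5})

class Wrapper:
    # mirrors the VertaModelBase pattern: deserialize once in __init__,
    # then score each input in the batch
    def __init__(self, blob):
        params = pickle.loads(blob)
        self.w = params["weights"]
        self.t = params["threshold"]

    def predict(self, batch_input):
        results = []
        for one_input in batch_input:
            scores = [sum(wi * xi for wi, xi in zip(self.w, row))
                      for row in one_input]
            results.append([1 if s > self.t else 0 for s in scores])
        return results

wrapper = Wrapper(make_blob())
preds = wrapper.predict([[[1.0, 1.0], [0.1, 0.1]]])
```

Paying the deserialization cost once in the constructor, rather than on every predict call, is the point of this design.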
raman-sharma/stanford-mir
basic_feature_extraction.ipynb
mit
kick_filepaths, snare_filepaths = stanford_mir.download_drum_samples() """ Explanation: &larr; Back to Index Basic Feature Extraction Somehow, we must extract the characteristics of our audio signal that are most relevant to the problem we are trying to solve. For example, if we want to classify instruments by timbre, we will want features that distinguish sounds by their timbre and not their pitch. If we want to perform pitch detection, we want features that distinguish pitch and not timbre. This process is known as feature extraction. Let's begin with twenty audio files: ten kick drum samples, and ten snare drum samples. Each audio file contains one drum hit. For convenience, we will use stanford_mir.download_drum_samples to download this data set at once. End of explanation """ kick_signals = [ librosa.load(p)[0] for p in kick_filepaths ] snare_signals = [ librosa.load(p)[0] for p in snare_filepaths ] """ Explanation: Read and store each signal: End of explanation """ for i, x in enumerate(kick_signals): plt.subplot(2, 5, i+1) librosa.display.waveplot(x[:10000]) plt.ylim(-1, 1) """ Explanation: Display the kick drum signals: End of explanation """ for i, x in enumerate(snare_signals): plt.subplot(2, 5, i+1) librosa.display.waveplot(x[:10000]) plt.ylim(-1, 1) """ Explanation: Display the snare drum signals: End of explanation """ def extract_features(signal): return [ librosa.feature.zero_crossing_rate(signal)[0, 0], librosa.feature.spectral_centroid(signal)[0, 0], ] """ Explanation: Constructing a Feature Vector A feature vector is simply a collection of features. 
Here is a simple function that constructs a two-dimensional feature vector from a signal: End of explanation """ kick_features = numpy.array([extract_features(x) for x in kick_signals]) snare_features = numpy.array([extract_features(x) for x in snare_signals]) """ Explanation: If we want to aggregate all of the feature vectors among signals in a collection, we can use a list comprehension as follows: End of explanation """ plt.hist(kick_features[:,0], color='b', range=(0, 0.1)) plt.hist(snare_features[:,0], color='r', range=(0, 0.1)) plt.legend(('kicks', 'snares')) plt.xlabel('Zero Crossing Rate') plt.ylabel('Count') plt.hist(kick_features[:,1], color='b', range=(0, 4000), bins=40, alpha=0.5) plt.hist(snare_features[:,1], color='r', range=(0, 4000), bins=40, alpha=0.5) plt.legend(('kicks', 'snares')) plt.xlabel('Spectral Centroid (frqeuency bin)') plt.ylabel('Count') """ Explanation: Visualize the differences in features by plotting separate histograms for each of the classes: End of explanation """ feature_table = numpy.vstack((kick_features, snare_features)) print feature_table.shape """ Explanation: Feature Scaling The features that we used in the previous example included zero crossing rate and spectral centroid. These two features are expressed using different units. This discrepancy can pose problems when performing classification later. Therefore, we will normalize each feature vector to a common range and store the normalization parameters for later use. Many techniques exist for scaling your features. For now, we'll use sklearn.preprocessing.MinMaxScaler. MinMaxScaler returns an array of scaled values such that each feature dimension is in the range -1 to 1. 
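For intuition about the zero-crossing-rate feature used in extract_features above, here is a simplified whole-signal sketch without librosa (librosa's actual implementation is frame-based and differs in detail):

```python
def zero_crossing_rate(signal):
    # fraction of consecutive sample pairs whose signs differ
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(signal) - 1)

alternating = zero_crossing_rate([1.0, -1.0, 1.0, -1.0, 1.0])
monotone = zero_crossing_rate([1.0, 2.0, 3.0])
```

Noisy, percussive sounds such as snares cross zero often, which is why this single number already separates the two drum classes reasonably well.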
Let's concatenate all of our feature vectors into one feature table: End of explanation """ scaler = sklearn.preprocessing.MinMaxScaler(feature_range=(-1, 1)) training_features = scaler.fit_transform(feature_table) print training_features.min(axis=0) print training_features.max(axis=0) """ Explanation: Scale each feature dimension to be in the range -1 to 1: End of explanation """ plt.scatter(training_features[:10,0], training_features[:10,1], c='b') plt.scatter(training_features[10:,0], training_features[10:,1], c='r') plt.xlabel('Zero Crossing Rate') plt.ylabel('Spectral Centroid') """ Explanation: Plot the scaled features: End of explanation """
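The scaling itself is a one-line formula per feature; here is a sketch of what MinMaxScaler with feature_range=(-1, 1) does to a single column (our illustration, not sklearn's code):

```python
def minmax_scale(column, lo=-1.0, hi=1.0):
    # map the column's min to lo and its max to hi, linearly in between
    cmin, cmax = min(column), max(column)
    span = cmax - cmin
    return [lo + (hi - lo) * (v - cmin) / span for v in column]

scaled = minmax_scale([0.02, 0.05, 0.08])
```

In practice the fitted cmin/cmax (sklearn stores them on the scaler) must be reused at prediction time, which is why the text says to keep the normalization parameters for later use.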
ALEXKIRNAS/DataScience
Coursera/Machine-learning-data-analysis/Course 5/Week_01/salary.ipynb
mit
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.WAG_C_M).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.WAG_C_M)[1])
"""
Explanation: Checking stationarity and the STL decomposition of the series:
End of explanation
"""
salary['salary_box'], lmbda = stats.boxcox(salary.WAG_C_M)
plt.figure(figsize(15,7))
salary.salary_box.plot()
plt.ylabel(u'Transformed salary')
print("Optimal Box-Cox transform parameter: %f" % lmbda)
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box)[1])
"""
Explanation: Variance stabilization
Let's apply a Box-Cox transform to stabilize the variance:
End of explanation
"""
salary['salary_box_diff'] = salary.salary_box - salary.salary_box.shift(12)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.salary_box_diff[12:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff[12:])[1])
"""
Explanation: Stationarity
Let's try seasonal differencing; we will run an STL decomposition on the differenced series and check its stationarity:
End of explanation
"""
salary['salary_box_diff2'] = salary.salary_box_diff - salary.salary_box_diff.shift(1)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(salary.salary_box_diff2[13:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(salary.salary_box_diff2[13:])[1])
"""
Explanation: The Dickey-Fuller test does not reject the hypothesis of non-stationarity, and we did not manage to remove the trend completely. Let's add ordinary differencing as well:
End of explanation
"""
plt.figure(figsize(15,8))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(salary.salary_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(salary.salary_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
"""
Explanation: The hypothesis of non-stationarity is rejected, and the series looks better visually: the trend is gone.
Model selection
Let's look at the ACF and PACF of the resulting series:
End of explanation
"""
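The two differencing steps used above (shift(12), then shift(1)) rest on a simple identity: a lag-12 difference cancels any fixed 12-month pattern and turns a linear trend into a constant. A stand-alone sketch with synthetic numbers (not the salary data):

```python
season = [0, 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]  # arbitrary fixed 12-month pattern
trend_per_month = 0.5
series = [trend_per_month * t + season[t % 12] for t in range(48)]

# lag-12 difference, as in salary.salary_box - salary.salary_box.shift(12):
# the seasonal terms cancel, leaving the constant 12 * trend_per_month
diff12 = [series[t] - series[t - 12] for t in range(12, 48)]
```

The residual constant (here 6.0) is exactly the trend the ordinary first difference then removes.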
Model selection Let's look at the ACF and PACF of the resulting series: End of explanation """ ps = range(0, 2) d=1 qs = range(0, 6) Ps = range(0, 6) D=1 Qs = range(0, 2) parameters = product(ps, qs, Ps, Qs) parameters_list = list(parameters) len(parameters_list) %%time results = [] best_aic = float("inf") warnings.filterwarnings('ignore') for param in parameters_list: # try/except is needed because the model fails to fit for some parameter sets try: model=sm.tsa.statespace.SARIMAX(salary.salary_box, order=(param[0], d, param[1]), seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1) # print the parameter sets on which the model fails to fit, then move on to the next set except ValueError: print('wrong parameters:', param) continue aic = model.aic # keep the best model together with its AIC and parameters if aic < best_aic: best_model = model best_aic = aic best_param = param results.append([param, model.aic]) warnings.filterwarnings('default') """ Explanation: Initial approximations: Q=0, q=5, P=5, p=1 End of explanation """ result_table = pd.DataFrame(results) result_table.columns = ['parameters', 'aic'] print(result_table.sort_values(by = 'aic', ascending=True).head()) """ Explanation: If the previous cell raises an error, make sure you have updated statsmodels to version 0.8.0rc1 or later.
End of explanation """ print(best_model.summary()) """ Explanation: Лучшая модель: End of explanation """ plt.figure(figsize(15,8)) plt.subplot(211) best_model.resid[13:].plot() plt.ylabel(u'Residuals') ax = plt.subplot(212) sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax) print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1]) """ Explanation: Её остатки: End of explanation """ salary['model'] = invboxcox(best_model.fittedvalues, lmbda) plt.figure(figsize(15,7)) salary.WAG_C_M.plot() salary.model[13:].plot(color='r') plt.ylabel('Wine sales') pylab.show() """ Explanation: Остатки стационарны (подтверждается критерием Дики-Фуллера и визуально), неавтокоррелированы (подтверждается критерием Льюнга-Бокса и коррелограммой). Посмотрим, насколько хорошо модель описывает данные: End of explanation """ salary2 = salary[['WAG_C_M']] date_list = [datetime.datetime.strptime("2016-09-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)] future = pd.DataFrame(index=date_list, columns=salary2.columns) salary2 = pd.concat([salary2, future]) salary2['forecast'] = invboxcox(best_model.predict(start=284, end=310), lmbda) plt.figure(figsize(15,7)) salary2.WAG_C_M.plot() salary2.forecast.plot(color='r') plt.ylabel('Salary') pylab.show() """ Explanation: Прогноз End of explanation """
deculler/DataScienceTableDemos
BirthweightRegression.ipynb
bsd-2-clause
# HIDDEN from datascience import * %matplotlib inline import numpy as np import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') """ Explanation: Illustration of datascience Tables for multivariate analysis David E. Culler This notebook illustrates some of the use of datascience Tables to perform regressions on multiple variables. In doing so, it shows some of the elegant ways that computational concepts and statistical concepts naturally fit together. End of explanation """ # Source https://onlinecourses.science.psu.edu/stat501/node/380 births=Table.read_table("data/birthsmokers.csv") births """ Explanation: Read data from a local or remote csv file to create a Table End of explanation """ # First let's just look at what is here. This needs to be a scatter, rather # than a plot because there is no simple ordering - just relationships between # birthweight and gestation time along with whether the mother smokes. births.scatter('Gest') births.num_rows # How many samples in each category births.where('Smoke').num_rows """ Explanation: Looking at the raw data Table.scatter produces a scatter plot of columns versus a specific column. Here we look at how birthweight varies with gestation. And we note whether the mother smoked. End of explanation """ # As there is a trend among birthweight and gestation, we can show a fit line to try to # capture it births.drop('Smoke').scatter('Gest', fit_line=True) """ Explanation: Fitting a line to the data on a scatter plot Here we drop the smoke column and look at the birthweight for the whole population. End of explanation """ nosmoke=births.where('Smoke',0).drop('Smoke') smoke=births.where('Smoke',1).drop('Smoke') # we can attempt to find the trend for each nosmoke.scatter('Gest',fit_line=True) smoke.scatter('Gest',fit_line=True, marker='x') """ Explanation: Partitioning data The question is whether smoking causes the trend to be substantially different. Split the data into two tables using the Smoke column. 
Then we can find the trends for each. End of explanation """ # what are the coefficients of the line fitted to these data? np.polyfit(nosmoke['Gest'],nosmoke['Wgt'],1) np.polyfit(smoke['Gest'],smoke['Wgt'],1) """ Explanation: Build a model by fitting a line to data in a Table Selecting a column of a Table yields a numpy array, allowing the numpy numerical tools to be used for fitting curves to the data. For documentation on the polyfit function, try help(np.polyfit). It returns the polynomial coefficients, highest power first, which is the slope for a line. A linear model is only meaningful in the range around the normal gestation period, and indeed the intercept is not physically meaningful End of explanation """ # Build a linear model from data and return it as a function def make_lm(x, y): m,b = np.polyfit(x, y, 1) def lm(a): return m*a+b return lm # Create model of non-smokers that returns estimated weight as a function of weeks gestation nosmoker_weight = make_lm(nosmoke['Gest'], nosmoke['Wgt']) # Evaluate it at normal gestations nosmoker_weight(40) smoker_weight = make_lm(smoke['Gest'], smoke['Wgt']) smoker_weight(40) """ Explanation: We see that both the constant and the weight increase per week are lower for the smokers. Higher order functions as models At this point, we could do mx+b all over the place, or we could utilize higher order functions to capture the concept of a model. Here is an example, building such a model directly from the data. It takes the set of x and y values and returns a function that evaluates the model built from that data at a particular x.
End of explanation """ # based on this data set, fitting the data to models of weight as a function of gestation # We might conclude that at 40 weeks the effect of smoking on birthweigth in grams is smoke_diff = nosmoker_weight(40) - smoker_weight(40) smoke_diff # Or in relative terms "{0:.1f}%".format(100*(nosmoker_weight(40)-smoker_weight(40))/nosmoker_weight(40)) """ Explanation: Drawing a conclusion Using the models to remove the contribution due to gestation time, we can attempt to draw a conclusion about the effect of smoking at typical gestation age. End of explanation """ # Create a table with a column containing the independent variable estimated_birthweight = Table().with_column('week', np.arange(32,44)) # Add columns of dependent variables estimated_birthweight['nosmoke'] = estimated_birthweight.apply(nosmoker_weight,'week') estimated_birthweight['smoke'] = estimated_birthweight.apply(smoker_weight,'week') estimated_birthweight # plot it to visualize estimated_birthweight.plot('week',overlay=True) """ Explanation: Use the models to build a Table and visualize the effect of the categorical parameter - smoking End of explanation """ # Construct a new model by forming a new sample from our existing one and fiting a line to that # Here we create quite a general function, which takes a table and column names over which # the model is to be formed. def rboot(table, x, y): sample = table.sample(table.num_rows, with_replacement=True) return np.polyfit(sample[x],sample[y],1) # Try it out for non-smokers. Note that every time this cell is evaluated (ctrl-enter) # the result is a little different, since a new sample is drawn. rboot(nosmoke, 'Gest', 'Wgt') # And for smokers rboot(smoke,'Gest','Wgt') """ Explanation: Determining if the conclusion is sound At this point, we might ask how accurately these linear models fit the data. The 'residual' in the fit would give us some idea of this. 
But the error in summarizing a collection of empirical data with an analytical model is only a part of the question. The deeper point is that this data is not "truth", it is merely a sample of a population, and a tiny one at that. We should be asking: is an inference drawn from this data valid for the population that the data is intended to represent? Of course, the population is not directly observable, only the sample of it. How can we use the sample we have to get some idea of how representative it is of the larger population? That is what bootstrap seeks to accomplish. Tables provide a method sample for just this purpose. Here we return to looking at the coefficients, rather than build a function for the model. End of explanation """ # Construct a new model by forming a new sample from our existing one and fitting a line to that # Here we create quite a general function, which takes a table and column names over which # the model is to be formed. def rboot(table, x, y): sample = table.sample(table.num_rows, with_replacement=True) return np.polyfit(sample[x],sample[y],1) # Try it out for non-smokers. Note that every time this cell is evaluated (ctrl-enter) # the result is a little different, since a new sample is drawn. rboot(nosmoke, 'Gest', 'Wgt') # And for smokers rboot(smoke,'Gest','Wgt') """ Explanation: Bootstrap Using this model builder as a function, draw many samples and form a model for each. Then we can look at the distribution of the model parameters over lots of models. This illustrates the construction of tables by rows. The Table constructor accepts the column names and with_rows fills them in row by row. hist forms and shows a histogram of the result. End of explanation """ # Bootstrap a distribution of models by drawing many random samples, with replacement, from our samples num_samples = 1000 nosamples = Table(['slope','intercept']).with_rows([rboot(nosmoke,'Gest','Wgt') for i in range(num_samples)]) nosamples.hist(bins=50,normed=True, overlay=False) """ Explanation: And we repeat this for the other category. End of explanation """ smokesamples = Table(['slope','intercept']).with_rows([rboot(smoke,'Gest','Wgt') for i in range(num_samples)]) smokesamples.hist(bins=50,normed=True, overlay=False) """ Explanation: Summary of sample distributions of the regression At this point we could compute a statistic over the sample distributions of these parameters, such as the total variational distance, or the mean.
End of explanation """ smokesamples['slope']*40+smokesamples['intercept'] # So now we have an estimate of the distribution of birthweights at week 40 for # something closer to the populations that these small samples represent. weights_40 = Table().with_columns([ ('nosmoke', nosamples['slope']*40+nosamples['intercept']), ('smoke', smokesamples['slope']*40+smokesamples['intercept'])]) weights_40 weights_40['Smoke Wgt Loss'] = weights_40['nosmoke'] - weights_40['smoke'] # what do we expect the distribution of birthweight reduction due to smoking to look like # for the population represented by the original sample? weights_40.select('Smoke Wgt Loss').hist(bins=30,normed=True) smoke_diff def firstQtile(x) : return np.percentile(x,25) def thirdQtile(x) : return np.percentile(x,25) summary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, max) summary = weights_40.stats(summary_ops) summary summary['diff']=summary['nosmoke']-summary['smoke'] # the bottom line summary """ Explanation: Estimation of birthweights at 40 weeks Selecting a column of a Table yields a numpy array. Arithmetic operators work elementwise on the entire array. End of explanation """ weights_40.select(['smoke','nosmoke']).hist(overlay=True,bins=30,normed=True) """ Explanation: Visualizing the separation of these distributions End of explanation """ # As an example, split the original data into two random halves A, B = births.split(births.num_rows//2) A B make_lm(A['Gest'], A['Wgt'])(40) make_lm(B['Gest'], B['Wgt'])(40) """ Explanation: Empirical p-values A more formal approach would be to take as the null hypothesis that smoking does not affect the birthweight. Repeatedly split the data into random halves and model the birthweight difference. What is the chance that the difference we see in summary table is an artifact? 
End of explanation """ def null_diff_at(tbl, x, y, w): A, B = tbl.split(tbl.num_rows//2) return make_lm(A[x], A[y])(w) - make_lm(B[x], B[y])(w) null_diff_at(births, 'Gest', 'Wgt', 40) null = Table().with_column('Diff', [null_diff_at(births, 'Gest', 'Wgt', 40) for i in range(1000)]) null.hist() """ Explanation: Capturing statistical computations and general tools Rather than compute the null hypothesis for this particular table, we can build a very general tool, as a function, that will do it in general. Then we can use it to build a sample distribution under the null hypothesis End of explanation """ null.stats() """ Explanation: What is the probablility that we got a 260g difference in birthweight at 40 weeks as an artifact of the sample? Zero End of explanation """
pligor/predicting-future-product-prices
02_preprocessing/exploration02-price_history-remove-spikes.ipynb
agpl-3.0
axis_indifferent = np.arange(len(df.columns)) axis_indifferent[:4] plt.figure(figsize=(17,8)) for ind, history in df.loc[price_histories_big_outliers.index].iterrows(): #nums = [float(str) for str in history.values] #print history.values plt.plot(axis_indifferent, history.values) plt.title('Original Price Histories with Spikes') plt.xlabel('point in time') plt.ylabel('Price') plt.show() """ Explanation: Note that we are passing the dataframe reversed. Let's plot all 17 of these price histories to see what is going on End of explanation """ inds = MyOutliers.getOutliersIndices(data = df.T, bounds=bounds, filtering = lambda arr: arr > 0) outliers_per_sku = inds.loc[price_histories_big_outliers.index] outliers_per_sku df_no_spikes = PreprocessPriceHistory.removeSpikes(df=df, outliers_per_sku=outliers_per_sku) bounds = PreprocessPriceHistory.getBounds(df=df_no_spikes.T, kk=15, outlier_columns=df_no_spikes.T.columns) print(PreprocessPriceHistory.countOutliers(df=df_no_spikes.T, bounds=bounds)) plt.figure(figsize=(17,8)) for ind, history in df_no_spikes.loc[price_histories_big_outliers.index].iterrows(): plt.plot(axis_indifferent, history.values) plt.title('Spikes Removed from Price Histories') plt.xlabel('point in time') plt.ylabel('Price') plt.show() """ Explanation: Conclusion We notice that we have indeed some strange spikes for cellphones with two characteristics: - Their price does not change a lot, they have a constant trend - The outliers are seen only once, causing these spikes in the plot One idea is to remove these spikes by taking the average of the two nearest points, which would suffice given the way we observe the data. This is because we would not expect our prediction system to be able to predict these spikes, which indicate something abnormal and very temporary has occurred. On the customer side most customers would not be able to react that fast.
End of explanation """ final_df = pd.concat( (df_no_spikes, orig_df[PriceHistory.SPECIAL_COLS]), axis=1 ) df_no_spikes.shape, final_df.shape #final_df.to_csv(csv_out, encoding='utf-8', quoting=csv.QUOTE_ALL) """ Explanation: Conclusion We have seen that by removing spikes the cellphones with steady price remain steady without experiencing any extreme changes which could have come from various sources End of explanation """
intel-analytics/BigDL
python/chronos/use-case/fsi/stock_prediction_prophet.ipynb
apache-2.0
import numpy as np import pandas as pd import os FILE_NAME = 'all_stocks_5yr.csv' filepath = os.path.join('data', FILE_NAME) print(filepath) # read data data = pd.read_csv(filepath) print(data[:10]) # change input column name data = data[data['Name']=='MMM'].rename(columns={"date":"ds", "close":"y"}) data.head() """ Explanation: Stock Price Prediction with ProphetForecaster and AutoProphet (with AutoML) In this notebook, we demonstrate a reference use case where we use historical stock price data to predict the future price using the ProphetForecaster and AutoProphet. The dataset we use is the daily stock price of S&P500 stocks during 2013-2018 (data source). Reference: https://facebook.github.io/prophet, https://github.com/jwkanggist/tf-keras-stock-pred Download raw dataset and load into dataframe Now we download the dataset and load it into a pandas dataframe. Steps are as below. First, run the script get_data.sh to download the raw data. It will download daily stock price of S&P500 stocks during 2013-2018 into data folder, preprocess and merge them into a single csv file all_stocks_5yr.csv. Second, use pandas to load data/data.csv into a dataframe as shown below End of explanation """ from bigdl.chronos.data import TSDataset from sklearn.preprocessing import MinMaxScaler df = data[["ds", "y"]] df_train = df[:-24] df_test = df[-24:] tsdata_train = TSDataset.from_pandas(df_train, dt_col="ds", target_col="y") tsdata_test = TSDataset.from_pandas(df_test, dt_col="ds", target_col="y") minmax_scaler = MinMaxScaler() for tsdata in [tsdata_train, tsdata_test]: tsdata.scale(minmax_scaler, fit=(tsdata is tsdata_train)) train_data = tsdata_train.to_pandas() validation_data = tsdata_test.to_pandas() print(train_data.shape[0]) print(validation_data.shape[0]) """ Explanation: Data Pre-processing Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset. 
For the stock price data we're using, we add normalization such that the normalized stock prices fall in the range of 0 to 1. Here we aim to use historical values to predict stock prices for the next 24 days. We use the built-in TSDataset to carry out the whole preprocessing. End of explanation """ from bigdl.chronos.data import TSDataset from sklearn.preprocessing import MinMaxScaler df = data[["ds", "y"]] df_train = df[:-24] df_test = df[-24:] tsdata_train = TSDataset.from_pandas(df_train, dt_col="ds", target_col="y") tsdata_test = TSDataset.from_pandas(df_test, dt_col="ds", target_col="y") minmax_scaler = MinMaxScaler() for tsdata in [tsdata_train, tsdata_test]: tsdata.scale(minmax_scaler, fit=(tsdata is tsdata_train)) train_data = tsdata_train.to_pandas() validation_data = tsdata_test.to_pandas() print(train_data.shape[0]) print(validation_data.shape[0]) """ Explanation: ProphetForecaster Demonstration Here we provide a simple demonstration of basic operations with the ProphetForecaster.
End of explanation """ from bigdl.chronos.autots.model.auto_prophet import AutoProphet from bigdl.orca.automl import hp from bigdl.orca import init_orca_context init_orca_context(cores=10, init_ray_on_spark=True) %%time auto_prophet = AutoProphet() auto_prophet.fit(data=train_data, cross_validation=False, freq="1D") print("Training completed.") y_hat = auto_prophet.predict(ds_data=validation_data[["ds"]]) test_mse = auto_prophet.evaluate(data=validation_data, metrics=["mse"]) print(f"Validation MSE = {test_mse}") # Plot predictions import matplotlib.pyplot as plt plt.plot(validation_data[['y']].values, color='blue', label="MMM daily price Raw") plt.plot(y_hat[['yhat']].values, color='red', label="MMM daily price Predicted") plt.xlabel("Time Period") plt.ylabel("Normalized Stock Price") plt.legend() plt.show() """ Explanation: AutoProphet Demonstration Here we provide a demonstration of our AutoProphet AutoEstimator that could search for best hyperparameters for the model automatically. End of explanation """
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course4-cnn/week4-facenet-nstyle/FaceRecognition/Face+Recognition+for+the+Happy+House+-+v3.ipynb
mit
from keras.models import Sequential from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate from keras.models import Model from keras.layers.normalization import BatchNormalization from keras.layers.pooling import MaxPooling2D, AveragePooling2D from keras.layers.merge import Concatenate from keras.layers.core import Lambda, Flatten, Dense from keras.initializers import glorot_uniform from keras.engine.topology import Layer from keras import backend as K K.set_image_data_format('channels_first') import cv2 import os import numpy as np from numpy import genfromtxt import pandas as pd import tensorflow as tf from fr_utils import * from inception_blocks_v2 import * %matplotlib inline %load_ext autoreload %autoreload 2 np.set_printoptions(threshold=np.nan) """ Explanation: Face Recognition for the Happy House Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace. Face recognition problems commonly fall into two categories: Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. 
In this assignment, you will: - Implement the triplet loss function - Use a pretrained model to map face images into 128-dimensional encodings - Use these encodings to perform face verification and face recognition In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Let's load the required packages. End of explanation """ FRmodel = faceRecoModel(input_shape=(3, 96, 96)) print("Total Params:", FRmodel.count_params()) """ Explanation: 0 - Naive Face Verification In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person! <img src="images/pixel_comparison.png" style="width:380px;height:150px;"> <caption><center> <u> <font color='purple'> Figure 1 </u></center></caption> Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgements as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using a ConvNet to compute encodings The FaceNet model takes a lot of data and a long time to train.
So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al.. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook). The key things you need to know are: This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector Run the cell below to create the model for face images. End of explanation """ # GRADED FUNCTION: triplet_loss def triplet_loss(y_true, y_pred, alpha = 0.2): """ Implementation of the triplet loss as defined by formula (3) Arguments: y_true -- true labels, required when you define a loss in Keras, you don't need it in this function. y_pred -- python list containing three objects: anchor -- the encodings for the anchor images, of shape (None, 128) positive -- the encodings for the positive images, of shape (None, 128) negative -- the encodings for the negative images, of shape (None, 128) Returns: loss -- real number, value of the loss """ anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2] ### START CODE HERE ### (≈ 4 lines) # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1 pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)),axis=-1) # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1 neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)),axis=-1) # Step 3: subtract the two previous distances and add alpha. basic_loss = pos_dist - neg_dist + alpha # Step 4: Take the maximum of basic_loss and 0.0. 
Sum over the training examples. loss = tf.maximum(basic_loss, 0) loss = tf.reduce_sum(loss) ### END CODE HERE ### return loss with tf.Session() as test: tf.set_random_seed(1) y_true = (None, None, None) y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1), tf.random_normal([3, 128], mean=1, stddev=1, seed = 1), tf.random_normal([3, 128], mean=3, stddev=4, seed = 1)) loss = triplet_loss(y_true, y_pred) print("loss = " + str(loss.eval())) """ Explanation: Expected Output <table> <center> Total Params: 3743280 </center> </table> By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: <img src="images/distance_kiank.png" style="width:680px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption> So, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other - The encodings of two images of different persons are very different The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. <img src="images/triplet_comparison.png" style="width:280px;height:150px;"> <br> <caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption> 1.2 - The Triplet Loss For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;"> <!-- We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1). --> Training will use triplets of images $(A, P, N)$: A is an "Anchor" image--a picture of a person. P is a "Positive" image--a picture of the same person as the Anchor image. N is a "Negative" image--a picture of a different person than the Anchor image. These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$: $$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$ You would thus like to minimize the following "triplet cost": $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ]_+ \small \tag{3}$$ Here, we are using the notation "$[z]_+$" to denote $\max(z,0)$. Notes: - The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. - $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$. Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here. Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps: 1.
Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ 2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ 3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$ 4. Compute the full formula by taking the max with zero and summing over the training examples: $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ]_+ \small \tag{3}$$ Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum(). For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples. End of explanation """ FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy']) load_weights_from_FaceNet(FRmodel) """ Explanation: Expected Output: <table> <tr> <td> **loss** </td> <td> 528.143 </td> </tr> </table> 2 - Loading the trained model FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
End of explanation """ database = {} database["danielle"] = img_to_encoding("images/danielle.png", FRmodel) database["younes"] = img_to_encoding("images/younes.jpg", FRmodel) database["tian"] = img_to_encoding("images/tian.jpg", FRmodel) database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel) database["kian"] = img_to_encoding("images/kian.jpg", FRmodel) database["dan"] = img_to_encoding("images/dan.jpg", FRmodel) database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel) database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel) database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel) database["felix"] = img_to_encoding("images/felix.jpg", FRmodel) database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel) database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel) """ Explanation: Here're some examples of distances between the encodings between three individuals: <img src="images/distance_matrix.png" style="width:380px;height:200px;"> <br> <caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption> Let's now use this model to perform face verification and face recognition! 3 - Applying the model Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. 
To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. 3.1 - Face Verification Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face. End of explanation """ # GRADED FUNCTION: verify def verify(image_path, identity, database, model): """ Function that verifies if the person on the "image_path" image is "identity". Arguments: image_path -- path to an image identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house. database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors). model -- your Inception model instance in Keras Returns: dist -- distance between the image_path and the image of "identity" in the database. door_open -- True, if the door should open. False otherwise. """ ### START CODE HERE ### # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. 
(≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    
    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])
    
    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
    
    ### END CODE HERE ###
    
    return dist, door_open

"""
Explanation: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
End of explanation
"""
verify("images/camera_0.jpg", "younes", database, FRmodel)
"""
Explanation: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
End of explanation
"""
verify("images/camera_2.jpg", "kian", database, FRmodel)
"""
Explanation: Expected Output:
<table> <tr> <td> **It's younes, welcome home!** </td> <td> (0.65939283, True) </td> </tr> </table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database.
He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
End of explanation
"""
# GRADED FUNCTION: who_is_it

def who_is_it(image_path, database, model):
    """
    Implements face recognition for the happy house by finding who is the person on the image_path image.
    
    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras
    
    Returns:
    min_dist -- the minimum distance between image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """
    
    ### START CODE HERE ###
    
    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    
    ## Step 2: Find the closest encoding ##
    
    # Initialize "min_dist" to a large value, say 100 (≈1 line)
    min_dist = 100
    
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        
        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding - db_enc)

        # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name

    ### END CODE HERE ###
    
    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print ("it's " + str(identity) + ", the distance is " + str(min_dist))
        
    return min_dist, identity

"""
Explanation: Expected Output:
<table> <tr> <td> **It's not kian, please go away** </td> <td> (0.86224014, False) </td> </tr> </table>
3.2 - Face Recognition
Your face verification system is mostly working well.
But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
Exercise: Implement who_is_it(). You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has the smallest distance to the target encoding.
- Initialize the min_dist variable to a large enough number (100). It will help you keep track of the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
End of explanation
"""
who_is_it("images/camera_0.jpg", database, FRmodel)
"""
Explanation: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
End of explanation
"""
dsacademybr/PythonFundamentos
Cap05/Notebooks/DSA-Python-Cap05-04-Heranca.ipynb
gpl-3.0
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>
Download: http://github.com/dsacademybr
End of explanation
"""
# Creating the Animal class - superclass
class Animal():
    
    def __init__(self):
        print("Animal criado")

    def Identif(self):
        print("Animal")

    def comer(self):
        print("Comendo")

# Creating the Cachorro (Dog) class - subclass
class Cachorro(Animal):
    
    def __init__(self):
        Animal.__init__(self)
        print("Objeto Cachorro criado")

    def Identif(self):
        print("Cachorro")

    def latir(self):
        print("Au Au!")

# Creating an object (instantiating the class)
rex = Cachorro()

# Calling the Cachorro (subclass) method
rex.Identif()

# Calling the Animal (superclass) method
rex.comer()

# Calling the Cachorro (subclass) method
rex.latir()
"""
Explanation: Inheritance
End of explanation
"""
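A side note that goes beyond the original notebook: the subclass above calls Animal.__init__(self) explicitly, which works, but modern Python usually spells this with super() so the parent class is not named twice. A minimal sketch of the same hierarchy:

```python
# Same Animal/Cachorro hierarchy as above, rewritten with super()
# (a common modern idiom; the course code calls Animal.__init__(self) directly).
class Animal:
    def __init__(self):
        print("Animal criado")

class Cachorro(Animal):
    def __init__(self):
        super().__init__()  # equivalent to Animal.__init__(self) here
        print("Objeto Cachorro criado")

rex = Cachorro()  # prints "Animal criado" then "Objeto Cachorro criado"
```

super().__init__() resolves the parent through the method resolution order, which also keeps behaving correctly under multiple inheritance.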
syednasar/talks
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, we’re going to take a peek into the realm of neural network machine translation. We’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_sentences = source_text.split('\n') target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')] source_ids = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences] target_ids = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences] return (source_ids, target_ids) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids Here we turn the text into a number so the computer can understand it. In the function text_to_ids(), we'll turn source_text and target_text from words to ids. We will add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. We can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] We can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() print(source_int_text[:2]) print(target_int_text[:2]) print(source_vocab_to_int['where']) # sample english word to encoded value print(target_vocab_to_int['préférée']) # french word to encoded value print(target_vocab_to_int['<EOS>']) """ Explanation: Check Point This is our first checkpoint. 
If we ever decide to come back to this notebook or have to restart the notebook, we can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure we have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network We'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input The model_inputs() function creates TF Placeholders for the Neural Network. 
It should create the following placeholders:

Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.

Returns the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    processed_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
    
    return processed_target

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
The process_decoding_input function is implemented using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
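The slice-and-prepend that process_decoding_input performs is easy to see on a toy batch. The sketch below mirrors it in NumPy (the id 7 standing in for <GO> is an arbitrary made-up value):

```python
import numpy as np

# Hypothetical toy batch of target word ids (2 sentences, 4 ids each);
# assume id 7 stands in for the <GO> token.
GO = 7
target_batch = np.array([[11, 12, 13, 14],
                         [21, 22, 23, 24]])

# Drop the last id of each sequence (the tf.strided_slice step)...
ending = target_batch[:, :-1]

# ...and prepend the <GO> id (the tf.fill + tf.concat step).
processed = np.concatenate(
    [np.full((target_batch.shape[0], 1), GO), ending], axis=1)

print(processed)
# [[ 7 11 12 13]
#  [ 7 21 22 23]]
```

The decoder is thus fed each sentence shifted right by one position, starting from <GO>, while learning to predict the unshifted targets.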
End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True) # Dropout drop_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob) # Encoder enc_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers, state_is_tuple=True) _, rnn_state = tf.nn.dynamic_rnn(cell = enc_cell, inputs = rnn_inputs, dtype=tf.float32) return rnn_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Here we implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(train_pred) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training We create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
We apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference We create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
                   num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_state: The encoded state
    :param vocab_size: Size of vocabulary
    :param sequence_length: Sequence Length
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training Logits, Inference Logits)
    """
    dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
    
    with tf.variable_scope("decoding") as decoding_scope:
        # Output function
        output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
        
        # Train logits
        train_logits = decoding_layer_train(
            encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
        
        # Reuse the same weights for inference
        decoding_scope.reuse_variables()
        
        # Inference logits
        infer_logits = decoding_layer_infer(
            encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
            sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob)
    
    return train_logits, infer_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
We implement decoding_layer() to create a Decoder RNN layer.

Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use previously defined decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use previously defined decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.

Note: We'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param sequence_length: Sequence Length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training Logits, Inference Logits)
    """
    #Apply embedding to the input data for the encoder.
    enc_input = tf.contrib.layers.embed_sequence(
        input_data,
        source_vocab_size,
        enc_embedding_size
    )
    
    #Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
    enc_layer = encoding_layer(
        enc_input,
        rnn_size,
        num_layers,
        keep_prob
    )
    
    #Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
    dec_input = process_decoding_input(
        target_data,
        target_vocab_to_int,
        batch_size
    )
    
    #Apply embedding to the target data for the decoder.
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) embed_target = tf.nn.embedding_lookup(dec_embed, dec_input) #Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). train_logits, inf_logits = decoding_layer( embed_target, dec_embed, enc_layer, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob ) return train_logits, inf_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network We apply the functions we implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation """ #Number of Epochs epochs = 5 #Batch Size batch_size = 256 #RNN Size rnn_size = 512 #25 #Number of Layers num_layers = 2 #Embedding Size encoding_embedding_size = 256 #13 decoding_embedding_size = 256 #13 #Learning Rate learning_rate = 0.01 #Dropout Keep Probability keep_probability = 0.5 """ Explanation: Neural Network Training Hyperparameters We tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation
"""
import time

def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1]), (0,0)],
            'constant')

    return np.mean(np.equal(target, np.argmax(logits, 2)))

train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]

valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch) in enumerate(
                helper.batch_data(train_source, train_target, batch_size)):
            start_time = time.time()

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch, targets: target_batch, lr: learning_rate,
                 sequence_length: target_batch.shape[1], keep_prob: keep_probability})

            batch_train_logits = sess.run(
                inference_logits,
                {input_data: source_batch, keep_prob: 1.0})
            batch_valid_logits = sess.run(
                inference_logits,
                {input_data: valid_source, keep_prob: 1.0})

            train_acc = get_accuracy(target_batch, batch_train_logits)
            valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
            end_time = time.time()
            print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
                  .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')

"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation """ # Save parameters for checkpoint helper.save_params(save_path) print(save_path) """ Explanation: Save Parameters Let us save the batch_size and save_path parameters for inference. End of explanation """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ input_sentence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()] return input_sentence """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, we first need to preprocess it. We implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. 
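To make the <UNK> fallback concrete, here is a toy, self-contained rerun of the same logic with a made-up five-word vocabulary (the real vocab_to_int is the one produced during preprocessing):

```python
# Toy, self-contained rerun of sentence_to_seq with a made-up vocabulary.
def sentence_to_seq(sentence, vocab_to_int):
    # Lowercase, split, and map each word to its id, falling back to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a shiny truck', toy_vocab))
# [1, 2, 3, 0, 4] -- 'shiny' is not in the vocabulary, so it maps to <UNK>
```

Any out-of-vocabulary word in a new input sentence is therefore silently replaced by the <UNK> id before translation.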
End of explanation """ translate_sentence = 'China is sometimes warm during autumn' """ DON'T MODIFY ANYTHING BELOW THIS LINE """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/csir-csiro/cmip6/models/sandbox-2/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: CSIR-CSIRO Source ID: SANDBOX-2 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:54 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat conservation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water conservation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5.
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt conservation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum conservation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
mbeyeler/opencv-machine-learning
notebooks/09.04-Training-an-MLP-in-OpenCV-to-Classify-Handwritten-Digits.ipynb
mit
from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() """ Explanation: <!--BOOK_INFORMATION--> <a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a> This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler. The code is released under the MIT license, and is available on GitHub. Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < Getting Acquainted with Deep Learning | Contents | Training a Deep Neural Net to Classify Handwritten Digits Using Keras > https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py Training an MLP in OpenCV to Classify Handwritten Digits In this section, we will use an MLP in OpenCV to classify handwritten digits from the popular MNIST dataset, which has been constructed by Yann LeCun and colleagues and serves as a popular benchmark dataset for machine learning algorithms. Loading the MNIST dataset The easiest way to obtain the MNIST dataset is using Keras: End of explanation """ X_train.shape, y_train.shape """ Explanation: This will download the data from the Amazon Cloud (might take a while depending on your internet connection) and automatically split the data into training and test sets. 
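Since the following cells rely on the fixed 60,000/10,000 split that `mnist.load_data()` returns, here is a minimal sketch of what that split boils down to (the helper name and the toy arrays are ours, not a Keras API):

```python
import numpy as np

def fixed_split(X, y, n_train):
    # Mimic the fixed train/test split returned by mnist.load_data()
    # (60000 training and 10000 test samples for MNIST).
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])

# Toy stand-ins for the real images/labels:
X = np.zeros((70, 28, 28), dtype=np.uint8)
y = np.arange(70) % 10
(X_tr, y_tr), (X_te, y_te) = fixed_split(X, y, 60)
print(X_tr.shape, X_te.shape)  # (60, 28, 28) (10, 28, 28)
```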
This data comes in a format that we are already familiar with: End of explanation """ import numpy as np np.unique(y_train) """ Explanation: We should take note that the labels come as integer values between zero and nine (corresponding to the digits 0-9): End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(16, 6)) for i in range(10): plt.subplot(2, 5, i + 1) plt.imshow(X_train[i, :, :], cmap='gray') plt.axis('off') plt.savefig('mnist-examples.png') """ Explanation: We can have a look at some example digits: End of explanation """ from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder(sparse=False, dtype=np.float32) y_train_pre = enc.fit_transform(y_train.reshape(-1, 1)) """ Explanation: In fact, the MNIST dataset is the successor to the NIST digits dataset provided by scikit-learn that we used before (sklearn.datasets.load_digits (refer to Chapter 2, Working with Data in OpenCV and Python). Some notable differences are as follows: - MNIST images are significantly larger (28x28 pixels) than NIST images (8x8 pixels), thus paying more attention to fine details, such as distortions and individual differences between images of the same digit - The MNIST dataset is much larger than the NIST dataset, providing 60,000 training and 10,000 test samples (as compared to a total of 5,620 NIST images) Preprocessing the MNIST dataset As we learned in Chapter 4, Representing Data and Engineering Features, there are a number of preprocessing steps we might like to apply here, such as centering, scaling, and representing categorical features. The easiest way to transform y_train and y_test is by the one-hot encoder from scikit-learn: End of explanation """ y_test_pre = enc.fit_transform(y_test.reshape(-1, 1)) """ Explanation: This will transform the labels of the training set from a &lt;n_samples x 1&gt; vector with integers 0-9 into a &lt;n_samples x 10&gt; matrix with floating point numbers 0.0 or 1.0. 
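The one-hot transform itself is small enough to sketch by hand with NumPy (a hedged equivalent of the OneHotEncoder call above; `one_hot` is our own name, not a library function):

```python
import numpy as np

def one_hot(labels, n_classes=10):
    # <n_samples> integer labels -> <n_samples x n_classes> float32 matrix
    # with a single 1.0 per row and 0.0 everywhere else.
    out = np.zeros((len(labels), n_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

demo = one_hot(np.array([3, 0, 9]))
print(demo)
```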
Analogously, we can transform y_test using the same procedure: End of explanation """ X_train_pre = X_train.astype(np.float32) / 255.0 X_train_pre = X_train_pre.reshape((X_train.shape[0], -1)) X_test_pre = X_test.astype(np.float32) / 255.0 X_test_pre = X_test_pre.reshape((X_test.shape[0], -1)) """ Explanation: In addition, we need to preprocess X_train and X_test for the purpose of working with OpenCV. Currently, X_train and X_test are 3-D matrices &lt;n_samples x 28 x 28&gt; with integer values between 0 and 255. Preferably, we want a 2-D matrix &lt;n_samples x n_features&gt; with floating point numbers, where n_features is 784: End of explanation """ import cv2 mlp = cv2.ml.ANN_MLP_create() """ Explanation: Then we are ready to train the network. Training an MLP using OpenCV We can set up and train an MLP in OpenCV with the following recipe: Instantiate a new MLP object: End of explanation """ mlp.setLayerSizes(np.array([784, 512, 512, 10])) """ Explanation: Specify the size of every layer in the network. We are free to add as many layers as we want, but we need to make sure that the first layer has the same number of neurons as input features (784 in our case), and that the last layer has the same number of neurons as class labels (10 in our case): End of explanation """ mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0) """ Explanation: Specify an activation function. Here we use the sigmoidal activation function from before: End of explanation """ mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP) mlp.setBackpropWeightScale(0.0001) """ Explanation: Specify the training method. Here we use the backpropagation algorithm described above. We also need to make sure that we choose a small enough learning rate. 
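For reference, the symmetrical sigmoid selected above can be sketched in plain NumPy. Per the OpenCV docs, ANN_MLP_SIGMOID_SYM is f(x) = beta * (1 - e^(-alpha*x)) / (1 + e^(-alpha*x)), where alpha and beta are the two numeric arguments passed to setActivationFunction (the function name below is ours):

```python
import numpy as np

def sym_sigmoid(x, alpha=2.5, beta=1.0):
    # Symmetrical sigmoid: f(x) = beta * (1 - exp(-alpha*x)) / (1 + exp(-alpha*x))
    x = np.asarray(x, dtype=np.float64)
    return beta * (1.0 - np.exp(-alpha * x)) / (1.0 + np.exp(-alpha * x))

print(sym_sigmoid(0.0))    # 0.0 at the origin
print(sym_sigmoid(10.0))   # saturates towards +beta
print(sym_sigmoid(-10.0))  # saturates towards -beta
```

The output range is (-beta, +beta), which is why this activation pairs naturally with targets encoded as 0.0/1.0 rows.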
Since we have on the order of $10^5$ training samples, it is a good idea to set the learning rate to at most $10^{-5}$: End of explanation """ term_mode = cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS term_max_iter = 10 term_eps = 0.01 mlp.setTermCriteria((term_mode, term_max_iter, term_eps)) """ Explanation: Specify the termination criteria. Here we use the same criteria as above: to run training for ten iterations (term_max_iter) or until the error does no longer decrease significantly (term_eps): End of explanation """ mlp.train(X_train_pre, cv2.ml.ROW_SAMPLE, y_train_pre) _, y_hat_train = mlp.predict(X_train_pre) from sklearn.metrics import accuracy_score accuracy_score(y_hat_train.round(), y_train_pre) """ Explanation: Train the network on the training set (X_train_pre): When the training completes, we can calculate the accuracy score on the training set to see how far we got: End of explanation """ _, y_hat_test = mlp.predict(X_test_pre) accuracy_score(y_hat_test.round(), y_test_pre) """ Explanation: But, of course, what really counts is the accuracy score we get on the held-out test data: End of explanation """
jonathanrocher/pandas_tutorial
climate_timeseries/climate_timeseries-Part2.ipynb
mit
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("display.max_rows", 16)
LARGE_FIGSIZE = (12, 8)
# Change this cell to the demo location on YOUR machine
%cd ~/Projects/pandas_tutorial/climate_timeseries/
%ls
"""
Explanation: Last updated: June 29th 2016
Climate data exploration: a journey through Pandas
Welcome to a demo of Python's data analysis package called Pandas. Our goal is to learn about Data Analysis and transformation using Pandas while exploring datasets used to analyze climate change.
The story
The global goal of this demo is to provide the tools to be able to try and reproduce some of the analysis done in the IPCC global climate reports published in the last decade (see for example https://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full.pdf).
We are first going to load a few public datasets containing information about global temperature, global and local sea level information, and global concentration of greenhouse gases like CO2, to see if there are correlations and how the trends are likely to evolve, assuming no fundamental change in the system. For all these datasets, we will download them, visualize them, clean them, search through them, merge them, resample them, transform them and summarize them.
In the process, we will learn about:
Part 1:
1. Loading data
2. Pandas datastructures
3. Cleaning and formatting data
4. Basic visualization
Part 2:
5. Accessing data
6. Working with dates and times
7. Transforming datasets
8. Statistical analysis
9. Data aggregation and summarization
10. Correlations and regressions
11.
Predictions from auto regression models
Some initial setup
End of explanation
"""
with pd.HDFStore("all_data.h5") as store:
    giss_temp = store["/temperatures/giss"]
    full_globe_temp = store["/temperatures/full_globe"]
    mean_sea_level = store["/sea_level/mean_sea_level"]
    local_sea_level_stations = store["/sea_level/stations"]
"""
Explanation: Reloading data
End of explanation
"""
full_globe_temp
# By default [] on a series accesses values using the index, not the location in the series
# This index is non-trivial though (will talk more about these datetime objects further down):
full_globe_temp.index.dtype
first_date = full_globe_temp.index[0]
first_date == pd.to_datetime('1880')
# By default [] on a series accesses values using the index, not the location in the series
print(full_globe_temp[pd.to_datetime('1880')])
# print(temp1[0])  # This would fail!!
# Another more explicit way to do the same thing is to use loc
print(full_globe_temp.loc[pd.to_datetime('1990')])
print(full_globe_temp.iloc[0], full_globe_temp.iloc[-1])
# Year of the last record?
full_globe_temp.index[-1]
# New records can be added:
full_globe_temp[pd.to_datetime('2011')] = np.nan
"""
Explanation: 5. Accessing data
The general philosophy for accessing values inside a Pandas datastructure is that, unlike a NumPy array, which only allows indexing with integers, a Series allows indexing with the values inside its index. That makes the code more readable.
In a series
End of explanation
"""
# In 2D, same idea, though in a DF [] accesses columns (Series)
giss_temp["Jan"]
"""
Explanation: In a dataframe
End of explanation
"""
# while .loc and .iloc allow access to individual values, slices or masked selections:
print(giss_temp.loc[1979, "Dec"])
# the loc operators support fancy indexing:
print(giss_temp.loc[1979, ["Nov", "Dec"]])
# Slicing can be done with .loc and .iloc
print(giss_temp.loc[1979, "Jan":"Jun"])
# Note that the end point is included unlike NumPy!!!
print(giss_temp.loc[1979, ::2])
# Masking can also be used in one or more dimensions.
For example, another way to grab every other month for the first year: mask = [True, False] * 6 print(giss_temp.iloc[0, mask]) print(giss_temp.loc[1880, mask]) # We could also add a new column like a new entry in a dictionary giss_temp["totals"] = giss_temp.sum(axis=1) giss_temp # Let's remove this new column, we will learn to do this differently giss_temp = giss_temp.drop("totals", axis=1) """ Explanation: In a dataframe End of explanation """ local_sea_level_stations.columns american_stations = local_sea_level_stations["Country"] == "USA" local_sea_level_stations.loc[american_stations, ["ID", "Station Name"]] """ Explanation: More complex queries rely on the same concepts. For example what are the names, and IDs of the sea level stations in the USA? End of explanation """ giss_temp giss_temp['Jan'] # This works right now, but this is dangerous: giss_temp['Jan'][1880] = -33.9 giss_temp # This is the safe way to do it: giss_temp.loc[1880, 'Jan'] = -33.9 """ Explanation: EXERCISE: Print all European countries that have sea level stations. We will for now define Europe as being a country that has a station within the 30-70 latitude and a longitude in -10 to 40. You will need to combine masks using the &amp; (and) and/or the | (or) operators, just like in Numpy. Bonus: print each country only once. Warning: modifying a dataframe with multiple indexing End of explanation """ # Its dtype is NumPy's new 'datetime64[ns]': full_globe_temp.index.dtype """ Explanation: 6. Working with dates and times More details at http://pandas.pydata.org/pandas-docs/stable/timeseries.html Let's work some more with full_globe_temp's index since we saw it is special. End of explanation """ black_friday = pd.to_datetime('1929-10-29') full_globe_temp.index > black_friday """ Explanation: The advantage of having a real datetime index is that operations can be done efficiently on it. 
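The label-versus-position distinction, and the safe single-step assignment, can be seen on a tiny self-contained frame (illustrative values, not the GISS data):

```python
import pandas as pd

df = pd.DataFrame({"Jan": [-33.9, -20.1], "Feb": [-28.0, -26.2]},
                  index=[1880, 1881])

# .loc uses index/column labels, .iloc uses integer positions:
print(df.loc[1881, "Feb"], df.iloc[1, 1])  # same value twice

# One indexing call (no chaining), so the write hits the frame itself:
df.loc[1880, "Jan"] = -30.0
print(df.loc[1880, "Jan"])  # -30.0
```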
Let's add a flag to signal if the value is before or after the great depression's black Friday: End of explanation """ # Convert its index from timestamp to period: it is more meaningfull since it was measured and averaged over the year... full_globe_temp.index = full_globe_temp.index.to_period() full_globe_temp """ Explanation: Timestamps or periods? End of explanation """ # Frequencies can be specified as strings: "us", "ms", "S", "T", "H", "D", "B", "W", "M", "A", "3min", "2h20", ... # More aliases at http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases full_globe_temp.resample("M").mean() full_globe_temp.resample("10A").mean() """ Explanation: See also to_timestamp to conver back to timestamps and its how method to specify when inside the range to set the timestamp. Resampling Another thing that can be done is to resample the series, downsample or upsample. Let's see the series converted to 10 year blocks or upscale to a monthly series: End of explanation """ # Can specify a start date and a number of values desired. By default it will assume an interval of 1 day: pd.date_range('1/1/1880', periods=4) # Can also specify a start and a stop date, as well as a frequency pd.date_range('1/1/1880', '1/1/2016', freq="A") """ Explanation: Generating DatetimeIndex objects The index for giss_temp isn't an instance of datetimes so we may want to generate such DatetimeIndex objects. This can be done with date_range and period_range: End of explanation """ from pandas.tseries.offsets import YearBegin pd.date_range('1/1/1880', '1/1/2015', freq=YearBegin()) """ Explanation: Note that "A" by default means the end of the year. Other times in the year can be specified with "AS" (start), "A-JAN" or "A-JUN". 
Even more options can be imported from pandas.tseries.offsets: End of explanation """ giss_temp_index = pd.period_range('1/1/1880', '12/1/2015', freq="M") giss_temp_index """ Explanation: Actually we will convert that dataset to a 1D dataset, and build a monthly index, so lets build a monthly period index End of explanation """ # What about the range of dates? local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max(), local_sea_level_stations["Date"].iloc[-1] local_sea_level_stations.dtypes """ Explanation: 7. Transforming datasets: apply, sort, stack/unstack and transpose Let's look at our local_sea_level_stations dataset some more, to learn more about it and also do some formatting. What is the range of dates and lattitudes we have, the list of countries, the range of variations, ... End of explanation """ local_sea_level_stations["Date"].apply(pd.to_datetime) """ Explanation: Apply: transforming Series We don't see the range of dates because the dates are of dtype "Object", (usually meaning strings). Let's convert that using apply: End of explanation """ local_sea_level_stations["Date"] = local_sea_level_stations["Date"].apply(pd.to_datetime) # Now we can really compare the dates, and therefore get a real range: print(local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max()) """ Explanation: This apply method is very powerful and general. We have used it to do something we could have done with astype, but any custom function can be provided to apply. End of explanation """ # Your code here """ Explanation: EXERCISE: Use the apply method to search through the stations names for a station in New York. What is the ID of the station? 
End of explanation """ local_sea_level_stations.sort_values(by="Date") """ Explanation: Now that we know the range of dates, to look at the data, sorting it following the dates is done with sort: End of explanation """ local_sea_level_stations.sort_values(by=["Date", "Country"], ascending=False) """ Explanation: Since many stations last updated on the same dates, it is logical to want to sort further, for example, by Country at constant date: End of explanation """ giss_temp.unstack? unstacked = giss_temp.unstack() unstacked # Note the nature of the result: type(unstacked) """ Explanation: Stack and unstack Let's look at the GISS dataset differently. Instead of seeing the months along the axis 1, and the years along the axis 0, it would could be good to convert these into an outer and an inner axis along only 1 time dimension. Stacking and unstacking allows to convert a dataframe into a series and vice-versa: End of explanation """ giss_temp.transpose() giss_temp_series = giss_temp.transpose().unstack() giss_temp_series.name = "Temp anomaly" giss_temp_series """ Explanation: The result is grouped in the wrong order since it sorts first the axis that was unstacked. Another transformation that would help us is transposing... End of explanation """ # Note the nature of the resulting index: giss_temp_series.index # It is an index made of 2 columns. 
Let's fix the fact that one of them doesn't have a name: giss_temp_series.index = giss_temp_series.index.set_names(["year", "month"]) # We can now access deviations by specifying the year and month: giss_temp_series[1980, "Jan"] """ Explanation: A side note: Multi-indexes End of explanation """ giss_temp_series.plot(figsize=LARGE_FIGSIZE) """ Explanation: But this new multi-index isn't very good, because is it not viewed as 1 date, just as a tuple of values: End of explanation """ giss_temp_series.index = giss_temp_index giss_temp_series.plot(figsize=LARGE_FIGSIZE) """ Explanation: To improve on this, let's reuse an index we generated above with date_range: End of explanation """ monthly_averages = giss_temp.mean() monthly_averages """ Explanation: 8. Statistical analysis Descriptive statistics Let's go back to the dataframe version of the GISS temperature dataset temporarily to analyze anomalies month per month. Like most functions on a dataframe, stats functions are computed column per column. They also ignore missing values: End of explanation """ yearly_averages = giss_temp.mean(axis=1) yearly_averages """ Explanation: It is possible to apply stats functions across rows instead of columns using the axis keyword (just like in NumPy). 
End of explanation """ mean_sea_level.describe() """ Explanation: describe provides many descriptive stats computed at once: End of explanation """ full_globe_temp.plot() rolled_series = full_globe_temp.rolling(window=10, center=False) print(rolled_series) rolled_series.mean().plot(figsize=LARGE_FIGSIZE) # To see what all can be done while rolling, #pd.rolling_<TAB> """ Explanation: Rolling statistics Let's remove high frequency signal and extract the trend: End of explanation """ local_sea_level_stations.describe() """ Explanation: Describing categorical series Let's look at our local_sea_level_stations dataset some more: End of explanation """ local_sea_level_stations.columns local_sea_level_stations["Country"] local_sea_level_stations["Country"].describe() # List of unique values: local_sea_level_stations["Country"].unique() local_sea_level_stations["Country"].value_counts() # To save memory, we can convert it to a categorical column: local_sea_level_stations["Country"] = local_sea_level_stations["Country"].astype("category") """ Explanation: .describe() only displays information about continuous Series. What about categorical ones? End of explanation """ categorized = pd.cut(full_globe_temp, 3, labels=["L", "M", "H"]) categorized # The advantage is that we can use labels and control the order they should be treated in (L < M < H) categorized.cat.categories """ Explanation: We can also create categorical series from continuous ones with the cut function: End of explanation """ mean_sea_level mean_sea_level = mean_sea_level.reset_index() mean_sea_level # Groupby with pandas can be done on a column or by applying a custom function to the index. # If we want to group the data by year, we can build a year column into the DF: mean_sea_level["year"] = mean_sea_level["date"].apply(int) mean_sea_level sl_grouped_year = mean_sea_level.groupby("year") """ Explanation: QUIZ: How much memory did we save? What if it was categorized but with dtype object instead of category? 
9. Data Aggregation/summarization Now that we have a good grasp on our datasets, Let's transform and analyze them some more to prepare them to compare them. The 2 function(alities)s to learn about here are groupby and pivot_table. GroupBy Let's explore the sea levels, first splitting into calendar years to compute average sea levels for each year: End of explanation """ type(sl_grouped_year) """ Explanation: What kind of object did we create? End of explanation """ for group_name, subdf in sl_grouped_year: print(group_name) print(subdf) print("") """ Explanation: What to do with that strange GroupBy object? We can first loop over it to get the labels and the sub-dataframes for each group: End of explanation """ mean_sea_level = mean_sea_level.drop(["year"], axis=1).set_index("date") """ Explanation: We could have done the same with less effort by grouping by the result of a custom function applied to the index. Let's reset the dataframe: End of explanation """ sl_grouped_year = mean_sea_level.groupby(int) """ Explanation: So that we can do the groupby on the index: End of explanation """ sl_grouped_year.groups """ Explanation: Something else that can be done with such an object is to look at its groups attribute to see the labels mapped to the rows involved: End of explanation """ sl_grouped_year.mean() # We can apply any other reduction function or even a dict of functions using aggregate: sl_grouped_year.aggregate({"mean_global": np.std}) """ Explanation: How to aggregate the results of this grouping depends on what we want to see: do we want to see averaged over the years? That is so common that it has been implemented directly as a method on the GroupBy object. 
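The groupby-then-mean pattern in miniature (a toy frame standing in for the sea-level data; grouping by the int builtin buckets fractional years into calendar years, exactly as above):

```python
import pandas as pd

df = pd.DataFrame({"mean_global": [1.0, 3.0, 5.0, 7.0]},
                  index=[1992.5, 1992.9, 1993.2, 1993.7])
# The callable is applied to each index label to produce the group keys:
by_year = df.groupby(int).mean()
print(by_year)
```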
End of explanation
"""
sl_grouped_decade = mean_sea_level.groupby(lambda x: int(x/10.))
sl_grouped_decade.groups.keys()
sl_grouped_decade.transform(lambda subframe: (subframe - subframe.mean()) / subframe.std())
"""
Explanation: Another possibility is to transform each group separately, rather than aggregate. For example, here we group over decades and standardize each value, subtracting the decade mean and dividing by the decade standard deviation:
End of explanation
"""
european_filter = ((local_sea_level_stations["Lat"] > 30) &
                   (local_sea_level_stations["Lat"] < 70) &
                   (local_sea_level_stations["Lon"] > -10) &
                   (local_sea_level_stations["Lon"] < 40)
                   )
# Let's make a copy to work with a new, clean block of memory
# (if you are interested, try and remove the copy to see the consequences further down...)
european_stations = local_sea_level_stations[european_filter].copy()
european_stations["Country"].unique()
"""
Explanation: Pivot_table
Pivot tables also summarize information, converting repeating columns into axes. For example, let's say that we would like to know how many sea level stations are in various European countries, and we would like to group the answers into 2 categories: the stations that have been updated recently (after 2000) and the others.
Let's first extract only entries located (roughly) in Europe.
End of explanation
"""
european_stations["Recently updated"] = european_stations["Date"] > pd.to_datetime("2000")
"""
Explanation: The columns of our future table should have 2 values, whether the station was updated recently or not.
Let's build a column to store that information:
End of explanation
"""
european_stations["Number of stations"] = np.ones(len(european_stations))
european_stations.sort_values(by="Country")
station_counts = pd.pivot_table(european_stations, index="Country", columns="Recently updated",
                                values="Number of stations", aggfunc=np.sum)
# Let's remove from the table the countries for which no station was found:
station_counts.dropna(how="all")
"""
Explanation: Finally, what value should be displayed inside the table? The values should be extracted from a column, with pivot_table applying an aggregation function when more than one value is found for a given cell. Each station should count for 1, and we can aggregate multiple stations by summing them:
End of explanation
"""
# Your code here
"""
Explanation: QUIZ: Why are there still some countries with no entries?
EXERCISE: How many recently updated stations? Not recently updated stations? Which country has the most recently updated stations?
Bonus: Which country has the most stations?
End of explanation
"""
# Your code here
"""
Explanation: EXERCISE: How would we build the same dataframe with a groupby operation?
End of explanation
"""
# Let's see how the various sea levels are correlated with each other:
mean_sea_level["northern_hem"].corr(mean_sea_level["southern_hem"])
# If series are already grouped into a DataFrame, computing all correlation coefficients is trivial:
mean_sea_level.corr()
"""
Explanation: 10. Correlations and regressions
Correlation coefficients
Both Series and dataframes have a corr method to compute the correlation coefficient between series:
End of explanation
"""
# Visualize the correlation matrix
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
plt.yticks?
# let's make it a little better to confirm that learning about global sea level cannot be done from just
# looking at stations in the northern hemisphere:
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
plt.xticks(np.arange(3), mean_sea_level.corr().columns)
plt.yticks(np.arange(3), mean_sea_level.corr().index)
plt.colorbar()
"""
Explanation: Note: by default, the method used is the Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). Other methods are available (kendall, spearman using the method kwarg).
End of explanation
"""
import statsmodels.formula.api as sm
sm_model = sm.ols(formula="mean_global ~ northern_hem + southern_hem", data=mean_sea_level).fit()
sm_model.params
type(sm_model.params)
sm_model.summary()
plt.figure(figsize=LARGE_FIGSIZE)
mean_sea_level["mean_global"].plot()
sm_model.fittedvalues.plot(label="OLS prediction")
plt.legend(loc="upper left")
"""
Explanation: OLS
The recommended way to build ordinary least squares regressions is by using statsmodels.
End of explanation
"""
mean_sea_level["mean_global"].index
giss_temp_series.index
DAYS_PER_YEAR = {}
import calendar
# Let's first convert the floating point dates in the sea level to timestamps:
def floating_year_to_timestamp(float_date):
    """ Convert a date as a floating point year number to
    a pandas timestamp object.
""" year = int(float_date) days_per_year = 366 if calendar.isleap(year) else 365 remainder = float_date - year daynum = 1 + remainder * (days_per_year - 1) daynum = int(round(daynum)) # Convert day number to month and day day = daynum month = 1 while month < 13: month_days = calendar.monthrange(year, month)[1] if day <= month_days: return pd.Timestamp(str(year)+"/"+str(month)+"/"+str(day)) day -= month_days month += 1 raise ValueError('{} does not have {} days'.format(year, daynum)) floating_year_to_timestamp(1996.0), floating_year_to_timestamp(1996.5), floating_year_to_timestamp(1996.9999) dt_index = pd.Series(mean_sea_level["mean_global"].index).apply(floating_year_to_timestamp) dt_index mean_sea_level = mean_sea_level.reset_index(drop=True) mean_sea_level.index = dt_index mean_sea_level """ Explanation: An interlude: data alignment Converting the floating point date to a timestamp Now, we would like to look for correlations between our monthly temperatures and the sea levels we have. For this to be possible, some data alignment must be done since the time scales are very different for the 2 datasets. End of explanation """ dt_index.dtype # What is the frequency of the new index? The numpy way to compute differences between all values doesn't work: dt_index[1:] - dt_index[:-1] """ Explanation: Now, how to align the 2 series? Is this one sampled regularly so that the month temperatures can be upscaled to that frequency? Computing the difference between successive values What is the frequency of that new index? End of explanation """ # There is a method for shifting values up/down the index: dt_index.shift() # So the distances can be computed with dt_index - dt_index.shift() # Not constant reads apparently. 
Let's downscale the frequency of the sea levels # to monthly, like the temperature reads we have: monthly_mean_sea_level = mean_sea_level.resample("MS").mean().to_period() monthly_mean_sea_level monthly_mean_sea_level["mean_global"].align(giss_temp_series) giss_temp_series.align? # Now that the series are using the same type and frequency of indexes, to align them is trivial: monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner') aligned_sl, aligned_temp = monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner') aligned_df = pd.DataFrame({"mean_sea_level": aligned_sl, "mean_global_temp": aligned_temp}) """ Explanation: IMPORTANT Note: The above failure is due to the fact that operations between series automatically align them based on their index. End of explanation """ monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner') aligned_sea_levels, aligned_temp = monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner') aligned_monthly_data = aligned_sea_levels.copy() aligned_monthly_data["global_temp"] = aligned_temp aligned_monthly_data """ Explanation: The alignment can even be done on an entire dataframe: End of explanation """ aligned_monthly_data.plot(figsize=LARGE_FIGSIZE) aligned_monthly_data.corr() model = sm.ols("southern_hem ~ global_temp", data=aligned_monthly_data) params = model.fit() params.rsquared """ Explanation: Correlations between sea levels and temperatures End of explanation """ aligned_yearly_data = aligned_monthly_data.resample("A").mean() aligned_yearly_data.plot() aligned_yearly_data.corr() model = sm.ols("southern_hem ~ global_temp", data=aligned_yearly_data).fit() model.rsquared """ Explanation: What if we had done the analysis yearly instead of monthly to remove seasonal variations? 
End of explanation
"""
from statsmodels.tsa.api import AR
# Let's remove seasonal variations by resampling annually
data = giss_temp_series.resample("A").mean().to_timestamp()
ar_model = AR(data, freq='A')
ar_res = ar_model.fit(maxlag=60, disp=True)
plt.figure(figsize=LARGE_FIGSIZE)
pred = ar_res.predict(start='1950-1-1', end='2070')
data.plot(style='k', label="Historical Data")
pred.plot(style='r', label="Predicted Data")
plt.ylabel("Temperature variation (0.01 degC)")
plt.legend()
"""
Explanation: 11. Predictions from auto regression models
An auto-regressive model fits existing data and builds a (potentially predictive) model of it. We use the timeseries analysis (tsa) submodule of statsmodels to make out-of-sample predictions for the upcoming decades:
End of explanation
"""
# Your code here
"""
Explanation: EXERCISE: Make another auto-regression on the sea level of the Atlantic ocean to estimate how much New York is going to flood in the coming century. You can find the historical sea levels of the Atlantic ocean at http://sealevel.colorado.edu/files/current/sl_Atlantic_Ocean.txt or locally in data/sea_levels/sl_Atlantic_Ocean.txt.
A little more work but more precise: extract the ID of a station in New York from the local_sea_level_stations dataset, and use it to download timeseries in NY (URL would be http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata).
End of explanation
"""
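For readers who want to see the mechanics behind the exercise without depending on a particular statsmodels version, the same fit-then-extrapolate pattern can be sketched with plain NumPy least squares. Everything below — the function names, the AR order, and the synthetic series standing in for the sea-level data — is illustrative, not part of the original notebook:

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients (intercept first) by ordinary least squares."""
    series = np.asarray(series, dtype=float)
    # Each row of X holds the p values preceding the corresponding target value
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    A = np.column_stack([np.ones(len(y)), X])
    coefs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return coefs

def forecast_ar(series, coefs, steps):
    """Roll the fitted model forward to produce out-of-sample predictions."""
    p = len(coefs) - 1
    history = list(np.asarray(series, dtype=float)[-p:])
    preds = []
    for _ in range(steps):
        nxt = coefs[0] + float(np.dot(coefs[1:], history[-p:]))
        preds.append(nxt)
        history.append(nxt)
    return np.array(preds)

# Synthetic AR(1) process standing in for an annually resampled series:
# x_t = 2 + 0.5 * x_{t-1} + small noise
rng = np.random.RandomState(42)
x = np.empty(300)
x[0] = 4.0
for t in range(1, len(x)):
    x[t] = 2.0 + 0.5 * x[t - 1] + 0.1 * rng.randn()

coefs = fit_ar(x, p=1)
pred = forecast_ar(x, coefs, steps=50)
```

The same two calls would apply to the Atlantic series once loaded and resampled annually; note that the long-run forecast converges to the fitted intercept divided by one minus the lag coefficient.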
mspcvsp/cincinnati311Data
ClusterServiceCodes.ipynb
gpl-3.0
import csv import re import numpy as np import matplotlib.pyplot as plt import nltk from sklearn.cluster import KMeans from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import cosine_similarity from collections import defaultdict import seaborn as sns %matplotlib inline """ Explanation: Setup Code Environment End of explanation """ h_file = open("./serviceCodesCount.tsv","r") code_name_map = {} code_histogram = {} patternobj = re.compile('^([0-9a-z]+)\s\|\s([0-9a-z\s]+)$') for fields in csv.reader(h_file, delimiter="\t"): matchobj = patternobj.match(fields[0]) cur_code = matchobj.group(1) code_name_map[cur_code] = matchobj.group(2) code_histogram[cur_code] = float(fields[1]) h_file.close() """ Explanation: Initialize service code data structures Service code / service name map Service code histogram End of explanation """ total_count_fraction = code_histogram.values() total_count_fraction.sort() total_count_fraction = total_count_fraction[::-1] total_count_fraction /= np.sum(total_count_fraction) total_count_fraction = np.cumsum(total_count_fraction) sns.set(font_scale=2) f,h_ax = plt.subplots(1,2,figsize=(12,6)) h_ax[0].bar(range(0,len(code_histogram.values())), code_histogram.values()) h_ax[0].set_xlim((0,len(total_count_fraction))) h_ax[0].set_xlabel('Service Code #') h_ax[0].set_ylabel('Service Code Count') h_ax[0].set_title('Cincinnati 311\nService Code Histogram') h_ax[1].plot(total_count_fraction, linewidth=4) h_ax[1].set_xlim((0,len(total_count_fraction))) h_ax[1].set_xlabel('Sorted Service Code #') h_ax[1].set_ylabel('Total Count Fraction') f.tight_layout() plt.savefig("./cincinatti311Stats.png") """ Explanation: Plot Cincinnati 311 Service Code Statistics References Descending Array Sort Change Plot Font Size End of explanation """ from nltk.stem.snowball import SnowballStemmer def tokenize(text): """ Extracts unigrams (i.e. words) from a string that contains a service code name. 
Args: text: String that stores a service code name Returns: filtered_tokens: List of words contained in a service code name""" tokens = [word.lower() for word in nltk.word_tokenize(text)] filtered_tokens =\ filter(lambda elem: re.match('^[a-z]+$', elem) != None, tokens) filtered_tokens =\ map(lambda elem: re.sub("\s+"," ", elem), filtered_tokens) return filtered_tokens def tokenize_and_stem(text): """ Applies the Snowball stemmer to unigrams (i.e. words) extracted from a string that contains a service code name. Args: text: String that stores a service code name Returns: filtered_tokens: List of words contained in a service code name""" stemmer = SnowballStemmer('english') tokens = [word.lower() for word in nltk.word_tokenize(text)] filtered_tokens =\ filter(lambda elem: re.match('^[a-z]+$', elem) != None, tokens) filtered_tokens =\ map(lambda elem: re.sub("\s+"," ", elem), filtered_tokens) filtered_tokens = [stemmer.stem(token) for token in filtered_tokens] return filtered_tokens def compute_tfidf_features(code_name_map, tokenizer, params): """ Constructs a Term Frequency Inverse Document Frequency (TF-IDF) matrix for the Cincinnati 311 service code names. Args: code_name_map: Dictionary that stores the mapping of service codes to service names tokenizer: Function that transforms a string into a list of words params: Dictionary that stores parameters that configure the TfidfVectorizer class constructor - mindocumentcount: Minimum number of term occurrences in separate service code names - maxdocumentfrequency: Maximum document frequency Returns: Tuple that stores a TF-IDF matrix and a TfidfVectorizer class object. 
Index: Description: ----- ----------- 0 TF-IDF matrix 1 TfidfVectorizer class object""" token_count = 0 for key in code_name_map.keys(): token_count += len(tokenize(code_name_map[key])) num_codes = len(code_name_map.keys()) min_df = float(params['mindocumentcount']) / num_codes tfidf_vectorizer =\ TfidfVectorizer(max_df=params['maxdocumentfrequency'], min_df=min_df, stop_words = 'english', max_features = token_count, use_idf=True, tokenizer=tokenizer, ngram_range=(1,1)) tfidf_matrix =\ tfidf_vectorizer.fit_transform(code_name_map.values()) return (tfidf_matrix, tfidf_vectorizer) def cluster_311_services(tfidf_matrix, num_clusters, random_seed): """Applies the K-means algorithm to cluster Cincinnati 311 service codes based on their service name Term Frequency Inverse Document Frequency (TF-IDF) feature vector. Args: tfidf_matrix: Cincinnati 311 service names TF-IDF feature matrix num_clusters: K-means algorithm number of clusters input random_seed: K-means algorithm random seed input: Returns: clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name""" km = KMeans(n_clusters = num_clusters, random_state=np.random.RandomState(seed=random_seed)) km.fit(tfidf_matrix) clusters = km.labels_.tolist() clusterid_code_map = defaultdict(list) clusterid_name_map = defaultdict(list) codes = code_name_map.keys() names = code_name_map.values() for idx in range(0, len(codes)): clusterid_code_map[clusters[idx]].append(codes[idx]) clusterid_name_map[clusters[idx]].append(names[idx]) return (clusterid_code_map, clusterid_name_map) def compute_clusterid_totalcounts(clusterid_code_map, code_histogram): """ Computes the total Cincinnati 311 requests / service names cluster Args: clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code code_histogram: Dictionary that stores the 
number of occurrences for each Cincinnati 311 service code Returns: clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster""" clusterid_total_count = defaultdict(int) num_clusters = len(clusterid_code_map.keys()) for cur_cluster_id in range(0, num_clusters): for cur_code in clusterid_code_map[cur_cluster_id]: clusterid_total_count[cur_cluster_id] +=\ code_histogram[cur_code] return clusterid_total_count def print_cluster_stats(clusterid_name_map, clusterid_total_count): """ Prints the total number of codes and total requests count for each Cincinnati 311 service names cluster. Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster Returns: None""" num_clusters = len(clusterid_total_count.keys()) for cur_cluster_id in range(0, num_clusters): print "clusterid %d | # of codes: %d | total count: %d" %\ (cur_cluster_id, len(clusterid_name_map[cur_cluster_id]), clusterid_total_count[cur_cluster_id]) def eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram): """ This function performs the following two operations: 1.) Plots the requests count for each service name in the maximum count service names cluster. 2. 
Prints the maximum count service name in the maximum count service names cluster Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster code_histogram: Dictionary that stores the number of occurrences for each Cincinnati 311 service code Returns: None""" num_clusters = len(clusterid_code_map.keys()) contains_multiple_codes = np.empty(num_clusters, dtype=bool) for idx in range(0, num_clusters): contains_multiple_codes[idx] = len(clusterid_code_map[idx]) > 1 filtered_clusterid =\ np.array(clusterid_total_count.keys()) filtered_total_counts =\ np.array(clusterid_total_count.values()) filtered_clusterid =\ filtered_clusterid[contains_multiple_codes] filtered_total_counts =\ filtered_total_counts[contains_multiple_codes] max_count_idx = np.argmax(filtered_total_counts) maxcount_clusterid = filtered_clusterid[max_count_idx] cluster_code_counts =\ np.zeros(len(clusterid_code_map[maxcount_clusterid])) for idx in range(0, len(cluster_code_counts)): key = clusterid_code_map[maxcount_clusterid][idx] cluster_code_counts[idx] = code_histogram[key] plt.bar(range(0,len(cluster_code_counts)),cluster_code_counts) plt.grid(True) plt.xlabel('Service Code #') plt.ylabel('Service Code Count') plt.title('Cluster #%d Service Code Histogram' %\ (maxcount_clusterid)) max_idx = np.argmax(cluster_code_counts) print "max count code: %s" %\ (clusterid_code_map[maxcount_clusterid][max_idx]) def add_new_cluster(from_clusterid, service_code, clusterid_total_count, clusterid_code_map, clusterid_name_map): """Creates a new service name(s) cluster Args: from_clusterid: Integer that refers to a service names cluster that is being split servicecode: String that refers to a 311 service code clusterid_code_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service code clusterid_name_map: Dictionary that stores the 
mapping of cluster identifier to Cincinnati 311 service name Returns: None - Service names cluster data structures are updated in place""" code_idx =\ np.argwhere(np.array(clusterid_code_map[from_clusterid]) ==\ service_code)[0][0] service_name = clusterid_name_map[from_clusterid][code_idx] next_clusterid = (clusterid_code_map.keys()[-1])+1 clusterid_code_map[from_clusterid] =\ filter(lambda elem: elem != service_code, clusterid_code_map[from_clusterid]) clusterid_name_map[from_clusterid] =\ filter(lambda elem: elem != service_name, clusterid_name_map[from_clusterid]) clusterid_code_map[next_clusterid] = [service_code] clusterid_name_map[next_clusterid] = [service_name] def print_clustered_servicenames(cur_clusterid, clusterid_name_map): """Prints the Cincinnati 311 service names(s) for a specific Cincinnati 311 service names cluster Args: cur_clusterid: Integer that refers to a specific Cincinnati 311 service names cluster clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name""" for cur_name in clusterid_name_map[cur_clusterid]: print "%s" % (cur_name) def plot_cluster_stats(clusterid_code_map, clusterid_total_count): """Plots the following service name(s) cluster statistics: - Number of service code(s) / service name(s) cluster - Total number of requests / service name(s) cluster Args: clusterid_name_map: Dictionary that stores the mapping of cluster identifier to Cincinnati 311 service name clusterid_total_count: Dictionary that stores the total Cincinnati 311 requests / service names cluster Returns: None""" codes_per_cluster =\ map(lambda elem: len(elem), clusterid_code_map.values()) num_clusters = len(codes_per_cluster) f,h_ax = plt.subplots(1,2,figsize=(12,6)) h_ax[0].bar(range(0,num_clusters), codes_per_cluster) h_ax[0].set_xlabel('Service Name(s) cluster id') h_ax[0].set_ylabel('Number of service codes / cluster') h_ax[1].bar(range(0,num_clusters), clusterid_total_count.values()) 
h_ax[1].set_xlabel('Service Name(s) cluster id') h_ax[1].set_ylabel('Total number of requests') plt.tight_layout() """ Explanation: Cluster service code names Compute Term Frequency Inverse Document Frequency (TF-IDF) feature vectors Apply the K-means algorithm to cluster service code names based on their TF-IDF feature vector References: Rose, B. "Document Clustering in Python" Text pre-processing to reduce dictionary size End of explanation """ params = {'maxdocumentfrequency': 0.25, 'mindocumentcount': 10} (tfidf_matrix, tfidf_vectorizer) = compute_tfidf_features(code_name_map, tokenize, params) print "# of terms: %d" % (tfidf_matrix.shape[1]) print tfidf_vectorizer.get_feature_names() """ Explanation: Apply a word tokenizer to the service names and construct a TF-IDF feature matrix End of explanation """ num_clusters = 20 kmeans_seed = 3806933558 (clusterid_code_map, clusterid_name_map) = cluster_311_services(tfidf_matrix, num_clusters, kmeans_seed) clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) """ Explanation: Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector End of explanation """ eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram) """ Explanation: Plot the service code histogram for the maximum size cluster End of explanation """ params = {'maxdocumentfrequency': 0.25, 'mindocumentcount': 10} (tfidf_matrix, tfidf_vectorizer) = compute_tfidf_features(code_name_map, tokenize_and_stem, params) print "# of terms: %d" % (tfidf_matrix.shape[1]) print tfidf_vectorizer.get_feature_names() """ Explanation: Apply a word tokenizer (with stemming) to the service names and construct a TF-IDF feature matrix End of explanation """ num_clusters = 20 kmeans_seed = 3806933558 (clusterid_code_map, clusterid_name_map) = cluster_311_services(tfidf_matrix, num_clusters, kmeans_seed) 
clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) plot_cluster_stats(clusterid_code_map, clusterid_total_count) """ Explanation: Apply the K-means algorithm to cluster the Cincinnati 311 service names based on their TF-IDF feature vector End of explanation """ eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram) """ Explanation: Plot the service code histogram for the maximum size cluster End of explanation """ add_new_cluster(1, 'mtlfrn', clusterid_total_count, clusterid_code_map, clusterid_name_map) """ Explanation: Create a separate service name(s) cluster for the 'mtlfrn' service code End of explanation """ clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) """ Explanation: Evaluate the service name(s) cluster statistics End of explanation """ eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram) """ Explanation: Plot the service code histogram for the maximum size cluster End of explanation """ add_new_cluster(1, 'ydwstaj', clusterid_total_count, clusterid_code_map, clusterid_name_map) """ Explanation: Create a separate service name(s) cluster for the 'ydwstaj' service code End of explanation """ clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) """ Explanation: Evaluate the service name(s) cluster statistics End of explanation """ eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram) """ Explanation: Plot the service code histogram for the maximum size cluster End of explanation """ add_new_cluster(1, 'grfiti', clusterid_total_count, clusterid_code_map, clusterid_name_map) """ Explanation: Create a separate service name(s) cluster for the 'grfiti' service code End of 
explanation """ clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) """ Explanation: Evaluate the service name(s) cluster statistics End of explanation """ eval_maxcount_clusterid(clusterid_code_map, clusterid_total_count, code_histogram) """ Explanation: Plot the service code histogram for the maximum size cluster End of explanation """ add_new_cluster(1, 'dapub1', clusterid_total_count, clusterid_code_map, clusterid_name_map) """ Explanation: Create a separate service name(s) cluster for the 'dapub1' service code End of explanation """ clusterid_total_count =\ compute_clusterid_totalcounts(clusterid_code_map, code_histogram) print_cluster_stats(clusterid_name_map, clusterid_total_count) plot_cluster_stats(clusterid_code_map, clusterid_total_count) """ Explanation: Evaluate the service name(s) cluster statistics End of explanation """ cur_clusterid = 0 clusterid_category_map = {} clusterid_category_map[cur_clusterid] = 'streetmaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'miscellaneous' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'trashcart' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildinghazzard' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildingcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'repairrequest' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'propertymaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 
clusterid_category_map[cur_clusterid] = 'defaultrequest' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'propertycomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'trashcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'servicecompliment' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'inspection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'servicecomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildinginspection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'buildingcomplaint' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'signmaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'requestforservice' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'litter' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'recycling' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid +=1 clusterid_category_map[cur_clusterid] = 'treemaintenance' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'metalfurniturecollection' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 
clusterid_category_map[cur_clusterid] = 'yardwaste' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'graffitiremoval' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map[cur_clusterid] = 'deadanimal' print_clustered_servicenames(cur_clusterid, clusterid_name_map) cur_clusterid += 1 clusterid_category_map """ Explanation: Label each service name(s) cluster End of explanation """ import pandas as pd category_totalcountdf =\ pd.DataFrame({'totalcount': clusterid_total_count.values()}, index=clusterid_category_map.values()) sns.set(font_scale=1.5) category_totalcountdf.plot(kind='barh') """ Explanation: Plot Cincinnati 311 Service Name Categories End of explanation """ servicecode_category_map = {} for clusterid in clusterid_name_map.keys(): cur_category = clusterid_category_map[clusterid] for servicecode in clusterid_code_map[clusterid]: servicecode_category_map[servicecode] = cur_category with open('serviceCodeCategory.txt', 'w') as fp: num_names = len(servicecode_category_map) keys = servicecode_category_map.keys() values = servicecode_category_map.values() for idx in range(0, num_names): if idx == 0: fp.write("%s{\"%s\": \"%s\",\n" % (" " * 12, keys[idx], values[idx])) #---------------------------------------- elif idx > 0 and idx < num_names-1: fp.write("%s\"%s\": \"%s\",\n" % (" " * 13, keys[idx], values[idx])) #---------------------------------------- else: fp.write("%s\"%s\": \"%s\"}" % (" " * 13, keys[idx], values[idx])) """ Explanation: Write service code / category map to disk Storing Python Dictionaries End of explanation """
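The hand-rolled fp.write loop above has to track commas, indentation, and the closing brace itself. As a sketch of a sturdier alternative — the function names and the temporary output path are illustrative, not from the original notebook — the standard-library json module can serialize the same mapping:

```python
import json
import os
import tempfile

def save_category_map(mapping, path):
    """Serialize a service-code -> category dict as pretty-printed JSON."""
    with open(path, "w") as fp:
        json.dump(mapping, fp, indent=4, sort_keys=True)

def load_category_map(path):
    """Read the mapping back; json guarantees a well-formed round trip."""
    with open(path) as fp:
        return json.load(fp)

# Round-trip a small illustrative mapping
demo_map = {"mtlfrn": "metalfurniturecollection", "grfiti": "graffitiremoval"}
out_path = os.path.join(tempfile.gettempdir(), "serviceCodeCategory.json")
save_category_map(demo_map, out_path)
restored = load_category_map(out_path)
```

json.dump takes care of quoting and escaping, never emits a trailing comma, and sort_keys keeps the file stable across runs, which makes it diff-friendly.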
dsacademybr/PythonFundamentos
Cap10/Mini-Projeto2-Solucao/Mini-Projeto2 - Analise2.ipynb
gpl-3.0
# Python language version from platform import python_version print('Python version used in this Jupyter Notebook:', python_version()) # Imports import os import subprocess import stat import numpy as np import pandas as pd import seaborn as sns import matplotlib as mat import matplotlib.pyplot as plt from datetime import datetime sns.set(style="white") %matplotlib inline np.__version__ pd.__version__ sns.__version__ mat.__version__ # Dataset clean_data_path = "dataset/autos.csv" df = pd.read_csv(clean_data_path, encoding="latin-1") """ Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 9</font> Download: http://github.com/dsacademybr Mini-Project 2 - Exploratory Analysis of a Kaggle Dataset Analysis 2 End of explanation """ # Create a plot showing the number of vehicles belonging to each brand sns.set_style("whitegrid") g = sns.catplot(y="brand", data=df, kind="count", palette="Reds_r", height=7, aspect=1.5) g.ax.set_title("Vehicles per Brand", fontdict={'size':18}) g.ax.xaxis.set_label_text("Number of Vehicles", fontdict={'size':16}) g.ax.yaxis.set_label_text("Brand", fontdict={'size':16}) plt.show() # Save the plot g.savefig("plots/Analise2/brand-vehicleCount.png") """ Explanation: Number of vehicles belonging to each brand End of explanation """ # Create a plot of the average vehicle price by vehicle type and by gearbox type fig, ax = plt.subplots(figsize=(8,5)) colors = ["#00e600", "#ff8c1a","#a180cc"] sns.barplot(x="vehicleType", y="price", hue="gearbox", palette=colors, data=df) ax.set_title("Average vehicle price by vehicle type and gearbox type", fontdict={'size':12}) ax.xaxis.set_label_text("Vehicle Type", fontdict={'size':12}) ax.yaxis.set_label_text("Average Price", fontdict={'size':12}) plt.show() # Save the plot fig.savefig("plots/Analise2/vehicletype-gearbox-price.png") """ Explanation: Average vehicle price by vehicle type,
as well as by gearbox type End of explanation """
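The bar plot just built displays group means: seaborn's barplot averages price within each (vehicleType, gearbox) pair behind the scenes. Below is a minimal pandas sketch of that aggregation; the sample rows are invented for illustration, and only the column names come from the autos.csv schema used above.

```python
# Minimal sketch of the aggregation sns.barplot performs implicitly:
# the mean price per (vehicleType, gearbox) group.
# The rows below are made-up examples, not data from autos.csv.
import pandas as pd

sample = pd.DataFrame({
    "vehicleType": ["bus", "bus", "coupe", "coupe"],
    "gearbox":     ["manuell", "manuell", "automatik", "automatik"],
    "price":       [4000, 6000, 15000, 25000],
})

avg_price = sample.groupby(["vehicleType", "gearbox"])["price"].mean()
print(avg_price)
```

Running the same groupby on the full df reproduces the bar heights shown in the plot above.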
basnijholt/holoviews
examples/user_guide/Customizing_Plots.ipynb
bsd-3-clause
import numpy as np import holoviews as hv from holoviews import dim, opts hv.extension('bokeh', 'matplotlib') """ Explanation: Customizing Plots End of explanation """ hv.HoloMap({i: hv.Curve([1, 2, 3-i], group='Group', label='Label') for i in range(3)}, 'Value') """ Explanation: The HoloViews options system allows controlling the various attributes of a plot. While different plotting extensions like bokeh, matplotlib and plotly offer different features and the style options may differ, there is a wide array of options and concepts that are shared across the different extensions. Specifically this guide provides an overview on controlling the various aspects of a plot including titles, axes, legends and colorbars. Plots have an overall hierarchy and here we will break down the different components: Plot: Refers to the overall plot which can consist of one or more axes Titles: Using title formatting and providing custom titles Background: Setting the plot background color Font sizes: Controlling the font sizes on a plot Plot hooks: Using custom hooks to modify plots Axes: A set of axes provides scales describing the mapping between data and the space on screen Types of axes: Linear axes Logarithmic axes Datetime axes Categorical axes Axis position: Positioning and hiding axes Inverting axes: Flipping the x-/y-axes and inverting an axis Axis labels: Setting axis labels using dimensions and options Axis ranges: Controlling axes ranges using dimensions, padding and options Axis ticks: Controlling axis tick locations, labels and formatting Customizing the plot Title A plot's title is usually constructed using a formatter which takes the group and label along with the plot's dimensions into consideration. 
The default formatter is: '{label} {group} {dimensions}' where the {label} and {group} are inherited from the object's group and label parameters and dimensions represent the key dimensions in a HoloMap/DynamicMap: End of explanation """ hv.Curve([1, 2, 3]).opts(title="Custom Title") """ Explanation: The title formatter may however be overridden with an explicit title, which may include any combination of the three formatter variables: End of explanation """ hv.Curve([1, 2, 3]).opts(bgcolor='lightgray') """ Explanation: Background Another option which can be controlled at the level of a plot is the background color which may be set using the bgcolor option: End of explanation """ hv.Curve([1, 2, 3], label='Title').opts(fontsize={'title': 16, 'labels': 14, 'xticks': 6, 'yticks': 12}) """ Explanation: Font sizes Controlling the font sizes of a plot is very common so HoloViews provides a convenient option to set the fontsize. The fontsize accepts a dictionary which allows supplying fontsizes for different components of the plot from the title, to the axis labels, ticks and legends. The full list of plot components that can be customized separately includes: ['xlabel', 'ylabel', 'zlabel', 'labels', 'xticks', 'yticks', 'zticks', 'ticks', 'minor_xticks', 'minor_yticks', 'minor_ticks', 'title', 'legend', 'legend_title'] Let's take a simple example customizing the title, the axis labels and the x/y-ticks separately: End of explanation """ def hook(plot, element): print('plot.state: ', plot.state) print('plot.handles: ', sorted(plot.handles.keys())) plot.handles['xaxis'].axis_label_text_color = 'red' plot.handles['yaxis'].axis_label_text_color = 'blue' hv.Curve([1, 2, 3]).opts(hooks=[hook]) """ Explanation: Plot hooks HoloViews does not expose every single option a plotting extension like matplotlib or bokeh provides, therefore it is sometimes necessary to dig deeper to achieve precisely the customizations one might need. 
One convenient way of doing so is to use plot hooks to modify the plot object directly. The hooks are applied after HoloViews is done with the plot, allowing for detailed manipulations of the backend-specific plot object. The signature of a hook has two arguments, the HoloViews plot object that is rendering the plot and the element being rendered. From there the hook can modify the objects in the plot's handles, which provides convenient access to various components of a plot, or simply access the plot.state which corresponds to the plot as a whole, e.g. in this case we define colors for the x- and y-labels of the plot. End of explanation """ semilogy = hv.Curve(np.logspace(0, 5), label='Semi-log y axes') loglog = hv.Curve((np.logspace(0, 5), np.logspace(0, 5)), label='Log-log axes') semilogy.opts(logy=True) + loglog.opts(logx=True, logy=True, shared_axes=False) """ Explanation: Customizing axes Controlling the axis scales is one of the most common changes to make to a plot, so we will provide a quick overview of the main types of axes and then go into some more detail on how to control the axis labels, ranges, ticks and orientation. Types of axes There are four main types of axes supported across plotting backends, standard linear axes, log axes, datetime axes and categorical axes. In most cases HoloViews automatically detects the appropriate axis type to use based on the type of the data, e.g. numeric values use linear/log axes, date(time) values use datetime axes and string or other object types use categorical axes. Linear axes A linear axis is simply the default: as long as the data is numeric, HoloViews will automatically use a linear axis on the plot. Log axes When the data is exponential it is often useful to use log axes, which can be enabled using independent logx and logy options. 
This way both semi-log and log-log plots can be achieved: End of explanation """ from bokeh.sampledata.stocks import GOOG, AAPL goog_dates = np.array(GOOG['date'], dtype=np.datetime64) aapl_dates = np.array(AAPL['date'], dtype=np.datetime64) goog = hv.Curve((goog_dates, GOOG['adj_close']), 'Date', 'Stock Index', label='Google') aapl = hv.Curve((aapl_dates, AAPL['adj_close']), 'Date', 'Stock Index', label='Apple') (goog * aapl).opts(width=600, legend_position='top_left') """ Explanation: Datetime axes All current plotting extensions allow plotting datetime data, if you ensure the dates array is of a valid datetime dtype. End of explanation """ points = hv.Points([(chr(i+65), chr(j+65), i*j) for i in range(10) for j in range(10)], vdims='z') heatmap = hv.HeatMap(points) (heatmap * points).opts( opts.HeatMap(toolbar='above', tools=['hover']), opts.Points(tools=['hover'], size=dim('z')*0.3)) """ Explanation: Categorical axes While the handling of categorical data differs significantly between plotting extensions the same basic concepts apply. If the data is a string type or other object type it is formatted as a string and each unique category is assigned a tick along the axis. When overlaying elements the categories are combined and overlaid appropriately. Whether an axis is categorical also depends on the Element type, e.g. a HeatMap always has two categorical axes while a Bars element always has a categorical x-axis. 
As a simple example let us create a set of points with categories along the x- and y-axes and render them on top of a HeatMap of the same data: End of explanation """ overlay = hv.NdOverlay({group: hv.Scatter(([group]*100, np.random.randn(100)*(5-i)-i)) for i, group in enumerate(['A', 'B', 'C', 'D', 'E'])}) errorbars = hv.ErrorBars([(k, el.reduce(function=np.mean), el.reduce(function=np.std)) for k, el in overlay.items()]) curve = hv.Curve(errorbars) (errorbars * overlay * curve).opts( opts.ErrorBars(line_width=5), opts.Scatter(jitter=0.2, alpha=0.5, size=6, height=400, width=600)) """ Explanation: As a more complex example which does not implicitly assume categorical axes due to the element type we will create a set of random samples indexed by categories from 'A' to 'E' using the Scatter Element and overlay them. Secondly we compute the mean and standard deviation for each category displayed using a set of ErrorBars and finally we overlay these two elements with a Curve representing the mean value. All these Elements respect the categorical index, providing us with a view of the distribution of values in each category: End of explanation """ groups = [chr(65+g) for g in np.random.randint(0, 3, 200)] boxes = hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)), ['Group', 'Category'], 'Value').sort() boxes.opts(width=600) """ Explanation: Categorical axes are special in that they support multi-level nesting in some cases. Currently this is only supported for certain element types (BoxWhisker, Violin and Bars) but eventually all chart-like elements will interpret multiple key dimensions as a multi-level categorical hierarchy. 
To demonstrate this behavior consider the BoxWhisker plot below which supports two-level nested categories: End of explanation """ np.random.seed(42) ys = np.random.randn(101).cumsum(axis=0) curve = hv.Curve(ys, ('x', 'x-label'), ('y', 'y-label')) (curve.relabel('No axis').opts(xaxis=None, yaxis=None) + curve.relabel('Bare axis').opts(xaxis='bare') + curve.relabel('Moved axis').opts(xaxis='top', yaxis='right')) """ Explanation: Axis positions Axes may be hidden or moved to a different location using the xaxis and yaxis options, which accept None, 'right'/'bottom', 'left'/'top' and 'bare' as values. End of explanation """ bars = hv.Bars([('Australia', 10), ('United States', 14), ('United Kingdom', 7)], 'Country') (bars.relabel('Invert axes').opts(invert_axes=True, width=400) + bars.relabel('Invert x-axis').opts(invert_xaxis=True) + bars.relabel('Invert y-axis').opts(invert_yaxis=True)).opts(shared_axes=False) """ Explanation: Inverting axes Another option to control axes is to invert the x-/y-axes using the invert_axes options, i.e. turn a vertical plot into a horizontal plot. Secondly each individual axis can be flipped left to right or upside down respectively using the invert_xaxis and invert_yaxis options. End of explanation """ (curve.relabel('Dimension labels') + curve.relabel("xlabel='Custom x-label'").opts(xlabel='Custom x-label') + curve.relabel('Unlabelled').opts(labelled=[])) """ Explanation: Axis labels Ordinarily axis labels are controlled using the dimension label, however the explicit xlabel and ylabel options make it possible to override the label at the plot level. 
Additionally the labelled option allows specifying which axes should be labelled at all, making it possible to hide axis labels: End of explanation """ curve.redim(x=hv.Dimension('x', range=(-10, 90))) """ Explanation: Axis ranges The ranges of a plot are ordinarily controlled by computing the data range and combining it with the dimension range and soft_range but they may also be padded or explicitly overridden using xlim and ylim options. Dimension ranges data range: The data range is computed by min and max of the dimension values range: Hard override of the data range soft_range: Soft override of the data range Dimension.range Setting the range of a Dimension overrides the data ranges, i.e. here we can see that despite the fact that the data extends to x=100 the axis is cut off at 90: End of explanation """ curve.redim(x=hv.Dimension('x', soft_range=(-10, 90))) """ Explanation: Dimension.soft_range Declaring a soft_range on the other hand combines the data range and the supplied range, i.e. it will pick whichever extent is wider. Using the same example as above we can see it uses the -10 value supplied in the soft_range but also extends to 100, which is the upper bound of the actual data: End of explanation """ (curve.relabel('Pad both axes').opts(padding=0.1) + curve.relabel('Pad y-axis').opts(padding=(0, 0.1)) + curve.relabel('Pad y-axis upper bound').opts(padding=(0, (0, 0.1)))).opts(shared_axes=False) """ Explanation: Padding Applying padding to the ranges is an easy way to ensure that the data is not obscured by the margins. The padding is specified by the fraction by which to increase auto-ranged extents to make datapoints more visible around borders. The padding considers the width and height of the plot to keep the visual extent of the padding equal. The padding values can be specified with three levels of detail: A single numeric value (e.g. padding=0.1) A tuple specifying the padding for the x/y(/z) axes respectively (e.g. 
padding=(0, 0.1)) A tuple of tuples specifying padding for the lower and upper bound respectively (e.g. padding=(0, (0, 0.1))) End of explanation """ curve.relabel('Explicit xlim/ylim').opts(xlim=(-10, 110), ylim=(-14, 6)) """ Explanation: xlim/ylim The data ranges, dimension ranges and padding combine across plots in an overlay to ensure that all the data is contained in the viewport. In some cases it is more convenient to override the ranges with explicit xlim and ylim options which have the highest precedence and will be respected no matter what. End of explanation """ (curve.relabel('N ticks (xticks=10)').opts(xticks=10) + curve.relabel('Listed ticks (xticks=[0, 1, 2])').opts(xticks=[0, 50, 100]) + curve.relabel("Tick labels (xticks=[(0, 'zero'), ...").opts(xticks=[(0, 'zero'), (50, 'fifty'), (100, 'one hundred')])) """ Explanation: Axis ticks Setting tick locations differs a little bit depending on the plotting extension: interactive backends such as bokeh or plotly dynamically update the ticks, which means fixed tick locations may not be appropriate and the formatters have to be applied in Javascript code. Nevertheless most options to control the ticking are consistent across extensions. Tick locations The number and locations of ticks can be set in three main ways: Number of ticks: Declare the number of desired ticks as an integer List of tick positions: An explicit list defining the list of positions at which to draw a tick List of tick positions and labels: A list of tuples of the form (position, label) End of explanation """ def formatter(value): return str(value) + ' days' curve.relabel('Tick formatters').opts(xformatter=formatter, yformatter='$%.2f', width=500) """ Explanation: Lastly each extension will accept the custom Ticker objects the library provides, which can be used to achieve layouts not usually available. 
Tick formatters Tick formatting works very differently in different backends; however, the xformatter and yformatter options try to minimize these differences. Tick formatters may be defined in one of three formats: A classic format string such as '%d', '%.3f' or '%d days' which may also contain other characters ('$%.2f') A function which will be compiled to JS using flexx (if installed) when using bokeh A bokeh.models.TickFormatter in bokeh and a matplotlib.ticker.Formatter instance in matplotlib Here is a small example demonstrating how to use the string format and function approaches: End of explanation """ bars.opts(xrotation=45) """ Explanation: Tick orientation Particularly when dealing with categorical axes it is often useful to control the tick rotation. This can be achieved using the xrotation and yrotation options which accept angles in degrees. End of explanation """
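The classic '%'-style formatter strings above behave like Python's own %-string formatting: the backend substitutes each tick value into the template. A rough standalone illustration of that substitution (apply_format is a made-up helper for this sketch, not a HoloViews API):

```python
# Mimics what a backend does with a classic format string such as '$%.2f':
# the tick value is %-interpolated into the template.
def apply_format(template, value):
    return template % value

print(apply_format('$%.2f', 3.14159))  # -> $3.14
print(apply_format('%d days', 42))     # -> 42 days
```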
tammoippen/nest-simulator
doc/model_details/aeif_models_implementation.ipynb
gpl-2.0
import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (15, 6) """ Explanation: NEST implementation of the aeif models Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09 This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire (AEIF) neuronal model and compares it with several numerical implementations using simpler solvers. In particular this justifies the change of implementation in September 2016 to make the simulation closer to the reference solution. Position of the problem Basics The equations governing the evolution of the AEIF model are $$\left\lbrace\begin{array}{rcl} C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) - w \\ \tau_w\dot{w} &=& a(V-E_L) - w \end{array}\right.$$ when $V < V_{peak}$ (threshold/spike detection). Once a spike occurs, we apply the reset conditions: $$V = V_r \quad \text{and} \quad w = w + b$$ Divergence In the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing, the argument of the exponential can become very large. This can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\Delta_T$ is small. Tested solutions Old implementation (before September 2016) The original solution that was adopted was to bind the exponential argument to be smaller than 10 (ad hoc value to be close to the original implementation in BRIAN). As will be shown in the notebook, this solution does not converge to the reference LSODAR solution. New implementation The new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$. 
We will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller. Reference solution The reference solution is implemented using the LSODAR solver which is described and compared in the following references: http://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one) http://www.sciencedirect.com/science/article/pii/S0377042712000684 http://www.radford.edu/~thompson/RP/rootfinding.pdf https://computation.llnl.gov/casc/nsde/pubs/u88007.pdf http://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf http://www.sciencedirect.com/science/article/pii/0377042789903348 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf https://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf Technical details and requirements Implementation of the functions The old and new implementations are reproduced using Scipy and are called by the scipy_aeif function The NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver. The reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package. Requirements To run this notebook, you need: numpy and scipy assimulo matplotlib End of explanation """ def rhs_aeif_new(y, _, p): ''' New implementation bounding V < V_peak Parameters ---------- y : list Vector containing the state variables [V, w] _ : unused var p : Params instance Object containing the neuronal parameters. Returns ------- dv : double Derivative of V dw : double Derivative of w ''' v = min(y[0], p.Vpeak) w = y[1] Ispike = 0. 
if p.DeltaT != 0.: Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT) dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm dw = (p.a * (v-p.EL) - w) / p.tau_w return dv, dw def rhs_aeif_old(y, _, p): ''' Old implementation bounding the argument of the exponential function (e_arg < 10.). Parameters ---------- y : list Vector containing the state variables [V, w] _ : unused var p : Params instance Object containing the neuronal parameters. Returns ------- dv : double Derivative of V dw : double Derivative of w ''' v = y[0] w = y[1] Ispike = 0. if p.DeltaT != 0.: e_arg = min((v-p.vT)/p.DeltaT, 10.) Ispike = p.gL * p.DeltaT * np.exp(e_arg) dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm dw = (p.a * (v-p.EL) - w) / p.tau_w return dv, dw """ Explanation: Scipy functions mimicking the NEST code Right hand side functions End of explanation """ def scipy_aeif(p, f, simtime, dt): ''' Complete aeif model using scipy `odeint` solver. Parameters ---------- p : Params instance Object containing the neuronal parameters. f : function Right-hand side function (either `rhs_aeif_old` or `rhs_aeif_new`) simtime : double Duration of the simulation (will run between 0 and tmax) dt : double Time increment. Returns ------- t : list Times at which the neuronal state was evaluated. y : list State values associated to the times in `t` s : list Spike times. vs : list Values of `V` just before the spike. ws : list Values of `w` just before the spike fos : list List of dictionaries containing additional output information from `odeint` ''' t = np.arange(0, simtime, dt) # time axis n = len(t) y = np.zeros((n, 2)) # V, w y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.) y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.) 
s = [] # spike times vs = [] # membrane potential at spike before reset ws = [] # w at spike before step fos = [] # full output dict from odeint() # imitate NEST: update time-step by time-step for k in range(1, n): # solve ODE from t_k-1 to t_k d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True) y[k, :] = d[1, :] fos.append(fo) # check for threshold crossing if y[k, 0] >= p.Vpeak: s.append(t[k]) vs.append(y[k, 0]) ws.append(y[k, 1]) y[k, 0] = p.Vreset # reset y[k, 1] += p.b # step return t, y, s, vs, ws, fos """ Explanation: Complete model End of explanation """ from assimulo.solvers import LSODAR from assimulo.problem import Explicit_Problem class Extended_Problem(Explicit_Problem): # need variables here for access sw0 = [ False ] ts_spikes = [] ws_spikes = [] Vs_spikes = [] def __init__(self, p): self.p = p self.y0 = [self.p.EL, 5.] # V, w # reset variables self.ts_spikes = [] self.ws_spikes = [] self.Vs_spikes = [] #The right-hand-side function (rhs) def rhs(self, t, y, sw): """ This is the function we are trying to simulate (aeif model). """ V, w = y[0], y[1] Ispike = 0. if self.p.DeltaT != 0.: Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT) dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w return np.array([dotV, dotW]) # Sets a name to our function name = 'AEIF_nosyn' # The event function def state_events(self, t, y, sw): """ This is our function that keeps track of our events. When the sign of any of the events has changed, we have an event. """ event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike if event_0 < 0: if not self.ts_spikes: self.ts_spikes.append(t) self.Vs_spikes.append(y[0]) self.ws_spikes.append(y[1]) elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01): self.ts_spikes.append(t) self.Vs_spikes.append(y[0]) self.ws_spikes.append(y[1]) return np.array([event_0]) #Responsible for handling the events. 
def handle_event(self, solver, event_info): """ Event handling. This function is called when Assimulo finds an event as specified by the event functions. """ ev = event_info event_info = event_info[0] # only look at the state events information. if event_info[0] > 0: solver.sw[0] = True solver.y[0] = self.p.Vreset solver.y[1] += self.p.b else: solver.sw[0] = False def initialize(self, solver): solver.h_sol=[] solver.nq_sol=[] def handle_result(self, solver, t, y): Explicit_Problem.handle_result(self, solver, t, y) # Extra output for algorithm analysis if solver.report_continuously: h, nq = solver.get_algorithm_data() solver.h_sol.extend([h]) solver.nq_sol.extend([nq]) """ Explanation: LSODAR reference solution Setting assimulo class End of explanation """ def reference_aeif(p, simtime): ''' Reference aeif model using LSODAR. Parameters ---------- p : Params instance Object containing the neuronal parameters. simtime : double Duration of the simulation (will run between 0 and tmax) Returns ------- t : list Times at which the neuronal state was evaluated. y : list State values associated to the times in `t` s : list Spike times. vs : list Values of `V` just before the spike. ws : list Values of `w` just before the spike h : list List of the minimal time increment at each step. 
''' #Create an instance of the problem exp_mod = Extended_Problem(p) #Create the problem exp_sim = LSODAR(exp_mod) #Create the solver exp_sim.atol=1.e-8 exp_sim.report_continuously = True exp_sim.store_event_points = True exp_sim.verbosity = 30 #Simulate t, y = exp_sim.simulate(simtime) # Simulate for simtime ms return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol """ Explanation: LSODAR reference model End of explanation """ # Regular spiking aeif_param = { 'V_reset': -58., 'V_peak': 0.0, 'V_th': -50., 'I_e': 420., 'g_L': 11., 'tau_w': 300., 'E_L': -70., 'Delta_T': 2., 'a': 3., 'b': 0., 'C_m': 200., 'V_m': -70., #! must be equal to E_L 'w': 5., #! must be equal to 5. 'tau_syn_ex': 0.2 } # Bursting aeif_param2 = { 'V_reset': -46., 'V_peak': 0.0, 'V_th': -50., 'I_e': 500.0, 'g_L': 10., 'tau_w': 120., 'E_L': -58., 'Delta_T': 2., 'a': 2., 'b': 100., 'C_m': 200., 'V_m': -58., #! must be equal to E_L 'w': 5., #! must be equal to 5. } # Close to chaos (use resol < 0.005 and simtime = 200) aeif_param3 = { 'V_reset': -48., 'V_peak': 0.0, 'V_th': -50., 'I_e': 160., 'g_L': 12., 'tau_w': 130., 'E_L': -60., 'Delta_T': 2., 'a': -11., 'b': 30., 'C_m': 100., 'V_m': -60., #! must be equal to E_L 'w': 5., #! must be equal to 5. } class Params(object): ''' Class giving access to the neuronal parameters. ''' def __init__(self): self.params = aeif_param self.Vpeak = aeif_param["V_peak"] self.Vreset = aeif_param["V_reset"] self.gL = aeif_param["g_L"] self.Cm = aeif_param["C_m"] self.EL = aeif_param["E_L"] self.DeltaT = aeif_param["Delta_T"] self.tau_w = aeif_param["tau_w"] self.a = aeif_param["a"] self.b = aeif_param["b"] self.vT = aeif_param["V_th"] self.Ie = aeif_param["I_e"] p = Params() """ Explanation: Set the parameters and simulate the models Params (choose a dictionary) End of explanation """ # Parameters of the simulation simtime = 100. 
resol = 0.01 t_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resol) t_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resol) t_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime) """ Explanation: Simulate the 3 implementations End of explanation """ fig, ax = plt.subplots() ax2 = ax.twinx() # Plot the potentials ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.") ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old") ax.plot(t_new, y_new[:,0], linestyle="--", label="V new") # Plot the adaptation variables ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.") ax2.plot(t_old, y_old[:,1], linestyle="-.", c="m", label="w old") ax2.plot(t_new, y_new[:,1], linestyle="--", c="y", label="w new") # Show ax.set_xlim([0., simtime]) ax.set_ylim([-65., 40.]) ax.set_xlabel("Time (ms)") ax.set_ylabel("V (mV)") ax2.set_ylim([-20., 20.]) ax2.set_ylabel("w (pA)") ax.legend(loc=6) ax2.legend(loc=2) plt.show() """ Explanation: Plot the results Zoom out End of explanation """ fig, ax = plt.subplots() ax2 = ax.twinx() # Plot the potentials ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.") ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old") ax.plot(t_new, y_new[:,0], linestyle="--", label="V new") # Plot the adaptation variables ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.") ax2.plot(t_old, y_old[:,1], linestyle="-.", c="y", label="w old") ax2.plot(t_new, y_new[:,1], linestyle="--", c="m", label="w new") ax.set_xlim([90., 92.]) ax.set_ylim([-65., 40.]) ax.set_xlabel("Time (ms)") ax.set_ylabel("V (mV)") ax2.set_ylim([17.5, 18.5]) ax2.set_ylabel("w (pA)") ax.legend(loc=5) ax2.legend(loc=2) plt.show() """ Explanation: Zoom in End of explanation """ print("spike times:\n-----------") print("ref", np.around(s_ref, 3)) # ref lsodar print("old", np.around(s_old, 3)) print("new", np.around(s_new, 3)) print("\nV at spike time:\n---------------") print("ref", 
np.around(vs_ref, 3)) # ref lsodar print("old", np.around(vs_old, 3)) print("new", np.around(vs_new, 3)) print("\nw at spike time:\n---------------") print("ref", np.around(ws_ref, 3)) # ref lsodar print("old", np.around(ws_old, 3)) print("new", np.around(ws_new, 3)) """ Explanation: Compare properties at spike times End of explanation """ plt.semilogy(t_ref, h_ref, label='Reference') plt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old') plt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New') plt.legend(loc=6) plt.show(); """ Explanation: Size of minimal integration timestep End of explanation """ plt.plot(t_ref, y_ref[:,0], label="V ref.") resolutions = (0.1, 0.01, 0.001) di_res = {} for resol in resolutions: t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resol) t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resol) di_res[resol] = (t_old, y_old, t_new, y_new) plt.plot(t_old, y_old[:,0], linestyle=":", label="V old, r={}".format(resol)) plt.plot(t_new, y_new[:,0], linestyle="--", linewidth=1.5, label="V new, r={}".format(resol)) plt.xlim(0., simtime) plt.xlabel("Time (ms)") plt.ylabel("V (mV)") plt.legend(loc=2) plt.show(); """ Explanation: Convergence towards LSODAR reference with step size Zoom out End of explanation """ plt.plot(t_ref, y_ref[:,0], label="V ref.") for resol in resolutions: t_old, y_old = di_res[resol][:2] t_new, y_new = di_res[resol][2:] plt.plot(t_old, y_old[:,0], linestyle="--", label="V old, r={}".format(resol)) plt.plot(t_new, y_new[:,0], linestyle="-.", linewidth=2., label="V new, r={}".format(resol)) plt.xlim(90., 92.) plt.ylim([-62., 2.]) plt.xlabel("Time (ms)") plt.ylabel("V (mV)") plt.legend(loc=2) plt.show(); """ Explanation: Zoom in End of explanation """
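The divergence between the two bounding strategies compared above can also be checked in isolation. The sketch below evaluates the spike current $I_{spike} = g_L \Delta_T e^{(V - V_T)/\Delta_T}$ under each clamp, using the 'regular spiking' parameter values from this notebook; it is a standalone illustration (numpy only), not NEST code.

```python
# Standalone comparison of the two clamps on the exponential spike current.
# Parameter values match the 'regular spiking' dictionary above.
import numpy as np

gL, DeltaT, vT, Vpeak = 11., 2., -50., 0.

def Ispike_old(V):
    # old behaviour: bound the exponential argument at 10
    return gL * DeltaT * np.exp(min((V - vT) / DeltaT, 10.))

def Ispike_new(V):
    # new behaviour: bound the membrane potential at V_peak instead
    return gL * DeltaT * np.exp((min(V, Vpeak) - vT) / DeltaT)

# Far below threshold both clamps are inactive and the currents agree:
print(Ispike_old(-70.), Ispike_new(-70.))
# Between the old clamp point (vT + 10*DeltaT = -30 mV) and V_peak the old
# current saturates while the new one keeps growing:
print(Ispike_old(-10.), Ispike_new(-10.))
# Above V_peak the new current saturates at its V_peak value:
print(Ispike_new(10.) == Ispike_new(0.))
```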
AllenDowney/ThinkStats2
code/chap06ex.ipynb
gpl-3.0
from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print("Downloaded " + local) download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py") import numpy as np import thinkstats2 import thinkplot """ Explanation: Chapter 6 Examples and Exercises from Think Stats, 2nd Edition http://thinkstats2.com Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/brfss.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/CDBRFS08.ASC.gz") import brfss df = brfss.ReadBrfss(nrows=None) """ Explanation: I'll start with the data from the BRFSS again. End of explanation """ female = df[df.sex==2] female_heights = female.htm3.dropna() mean, std = female_heights.mean(), female_heights.std() mean, std """ Explanation: Here are the mean and standard deviation of female height in cm. End of explanation """ pdf = thinkstats2.NormalPdf(mean, std) pdf.Density(mean + std) """ Explanation: NormalPdf returns a Pdf object that represents the normal distribution with the given parameters. Density returns a probability density, which doesn't mean much by itself. End of explanation """ thinkplot.Pdf(pdf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) """ Explanation: thinkplot provides Pdf, which plots the probability density with a smooth curve. End of explanation """ pmf = pdf.MakePmf() thinkplot.Pmf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) """ Explanation: Pdf provides MakePmf, which returns a Pmf object that approximates the Pdf. 
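As a reminder of what NormalPdf (used in the next cell) evaluates, the normal density is $f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x-\mu)^2 / (2\sigma^2)}$. Below is a minimal standalone sketch of that formula; the mu and sigma values are illustrative stand-ins, not the BRFSS estimates computed above.

```python
# Direct evaluation of the Gaussian density formula (math module only).
import math

def normal_density(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-z**2 / 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 163.0, 7.0
# The density peaks at the mean with value 1 / (sigma * sqrt(2*pi)):
print(normal_density(mu, mu, sigma))
# One standard deviation away it is smaller by a factor exp(-1/2):
print(normal_density(mu + sigma, mu, sigma))
```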
End of explanation """ thinkplot.Pdf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) """ Explanation: If you have a Pmf, you can also plot it using Pdf, if you have reason to think it should be represented as a smooth curve. End of explanation """ thinkplot.Pdf(pdf, label='normal') sample = np.random.normal(mean, std, 500) sample_pdf = thinkstats2.EstimatedPdf(sample, label='sample') thinkplot.Pdf(sample_pdf, label='sample KDE') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) """ Explanation: Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE). If you run this a few times, you'll see how much variation there is in the estimate. End of explanation """ def RawMoment(xs, k): return sum(x**k for x in xs) / len(xs) """ Explanation: Moments Raw moments are just sums of powers. End of explanation """ RawMoment(female_heights, 1), RawMoment(female_heights, 2), RawMoment(female_heights, 3) def Mean(xs): return RawMoment(xs, 1) Mean(female_heights) """ Explanation: The first raw moment is the mean. The other raw moments don't mean much. End of explanation """ def CentralMoment(xs, k): mean = RawMoment(xs, 1) return sum((x - mean)**k for x in xs) / len(xs) """ Explanation: The central moments are powers of distances from the mean. End of explanation """ CentralMoment(female_heights, 1), CentralMoment(female_heights, 2), CentralMoment(female_heights, 3) def Var(xs): return CentralMoment(xs, 2) Var(female_heights) """ Explanation: The first central moment is approximately 0. The second central moment is the variance. End of explanation """ def StandardizedMoment(xs, k): var = CentralMoment(xs, 2) std = np.sqrt(var) return CentralMoment(xs, k) / std**k """ Explanation: The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel. 
End of explanation """ StandardizedMoment(female_heights, 1), StandardizedMoment(female_heights, 2), StandardizedMoment(female_heights, 3) def Skewness(xs): return StandardizedMoment(xs, 3) Skewness(female_heights) """ Explanation: The third standardized moment is skewness. End of explanation """ def Median(xs): cdf = thinkstats2.Cdf(xs) return cdf.Value(0.5) """ Explanation: Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median. End of explanation """ Mean(female_heights), Median(female_heights) """ Explanation: But in this case the mean is greater than the median, which indicates skew to the right. End of explanation """ def PearsonMedianSkewness(xs): median = Median(xs) mean = RawMoment(xs, 1) var = CentralMoment(xs, 2) std = np.sqrt(var) gp = 3 * (mean - median) / std return gp """ Explanation: Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust. End of explanation """ PearsonMedianSkewness(female_heights) """ Explanation: Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right. End of explanation """ download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct") download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz" ) import first live, firsts, others = first.MakeFrames() """ Explanation: Birth weights Let's look at the distribution of birth weights again. 
End of explanation """ birth_weights = live.totalwgt_lb.dropna() pdf = thinkstats2.EstimatedPdf(birth_weights) thinkplot.Pdf(pdf, label='birth weight') thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PDF') """ Explanation: Based on KDE, it looks like the distribution is skewed to the left. End of explanation """ Mean(birth_weights), Median(birth_weights) """ Explanation: The mean is less than the median, which is consistent with left skew. End of explanation """ Skewness(birth_weights), PearsonMedianSkewness(birth_weights) """ Explanation: And both ways of computing skew are negative, which is consistent with left skew. End of explanation """ adult_weights = df.wtkg2.dropna() pdf = thinkstats2.EstimatedPdf(adult_weights) thinkplot.Pdf(pdf, label='Adult weight') thinkplot.Config(xlabel='Adult weight (kg)', ylabel='PDF') """ Explanation: Adult weights Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right. End of explanation """ Mean(adult_weights), Median(adult_weights) """ Explanation: The mean is greater than the median, which is consistent with skew to the right. End of explanation """ Skewness(adult_weights), PearsonMedianSkewness(adult_weights) """ Explanation: And both ways of computing skewness are positive. End of explanation """ def InterpolateSample(df, log_upper=6.0): """Makes a sample of log10 household income. Assumes that log10 income is uniform in each range. 
df: DataFrame with columns income and freq log_upper: log10 of the assumed upper bound for the highest range returns: NumPy array of log10 household income """ # compute the log10 of the upper bound for each range df['log_upper'] = np.log10(df.income) # get the lower bounds by shifting the upper bound and filling in # the first element df['log_lower'] = df.log_upper.shift(1) df.loc[0, 'log_lower'] = 3.0 # plug in a value for the unknown upper bound of the highest range df.loc[41, 'log_upper'] = log_upper # use the freq column to generate the right number of values in # each range arrays = [] for _, row in df.iterrows(): vals = np.linspace(row.log_lower, row.log_upper, int(row.freq)) arrays.append(vals) # collect the arrays into a single sample log_sample = np.concatenate(arrays) return log_sample download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/hinc.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/hinc06.csv") import hinc income_df = hinc.ReadData() log_sample = InterpolateSample(income_df, log_upper=6.0) log_cdf = thinkstats2.Cdf(log_sample) thinkplot.Cdf(log_cdf) thinkplot.Config(xlabel='Household income (log $)', ylabel='CDF') sample = np.power(10, log_sample) cdf = thinkstats2.Cdf(sample) thinkplot.Cdf(cdf) thinkplot.Config(xlabel='Household income ($)', ylabel='CDF') """ Explanation: Exercises The distribution of income is famously skewed to the right. In this exercise, we’ll measure how strong that skew is. The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded hinc06.xls, which is an Excel spreadsheet with information about household income, and converted it to hinc06.csv, a CSV file you will find in the repository for this book. You will also find hinc2.py, which reads this file and transforms the data. 
The dataset is in the form of a series of income ranges and the number of respondents who fell in each range. The lowest range includes respondents who reported annual household income “Under \$5000.” The highest range includes respondents who made “\$250,000 or more.”
To estimate mean and other statistics from these data, we have to make some assumptions about the lower and upper bounds, and how the values are distributed in each range. hinc2.py provides InterpolateSample, which shows one way to model this data. It takes a DataFrame with a column, income, that contains the upper bound of each range, and freq, which contains the number of respondents in each range.
It also takes log_upper, which is an assumed upper bound on the highest range, expressed in log10 dollars. The default value, log_upper=6.0, represents the assumption that the largest income among the respondents is $10^6$, or one million dollars.
InterpolateSample generates a pseudo-sample; that is, a sample of household incomes that yields the same number of respondents in each range as the actual data. It assumes that incomes in each range are equally spaced on a log10 scale.
End of explanation
"""
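As a sketch of the kind of computation the exercise calls for, here is the moment-based skewness applied to a synthetic log-normal "income" sample (made-up parameters, not the CPS data; the chapter's functions are reproduced so the cell stands alone):

```python
import numpy as np

def RawMoment(xs, k):
    return sum(x**k for x in xs) / len(xs)

def CentralMoment(xs, k):
    mean = RawMoment(xs, 1)
    return sum((x - mean)**k for x in xs) / len(xs)

def Skewness(xs):
    std = np.sqrt(CentralMoment(xs, 2))
    return CentralMoment(xs, 3) / std**3

rng = np.random.default_rng(17)
incomes = 10 ** rng.normal(4.7, 0.5, size=10000)   # synthetic log10 incomes

# A right-skewed distribution has mean > median and positive skewness.
mean = RawMoment(incomes, 1)
median = np.median(incomes)
print(mean > median, Skewness(incomes) > 0)
```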
kfollette/ASTR200-Spring2017
Labs/Lab6/Lab6.ipynb
mit
from numpy import *
"""
Explanation: <small><i>This notebook is based on one put together by Jake Vanderplas and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. Source and license info for the original is on GitHub.</i></small>
Names: [Insert Your Names Here]
Lab 6 - Advanced Data Structures
Lab 6 Contents
Tuples
Defining Tuples
Indexing Tuples
Tuple Modification
Lists
Defining Lists
Indexing Lists
Extending and Appending Lists
Searching, Sorting and Counting Lists
Exploring List Methods
Iterating over lists
The range function
Creating lists on the fly
Sets
Dictionaries
Defining Dictionaries
Dictionary Keys
This lab will introduce you to four new types of Python objects that allow you to collect data of arbitrary (and often mixed) type in Python, and these are known as "Sequence objects"
tuple : an immutable ordered array of data
list : a mutable ordered array of data
set : an unordered collection of unique elements
dict : an unordered mapping from keys to values
End of explanation
"""
t = (12, -1)
type(t)
"""
Explanation: 1. Tuples
1.1 Defining Tuples
Tuples are denoted with round parentheses
End of explanation
"""
isinstance(t,tuple)

isinstance(t,list)
"""
Explanation: If you'd like to test whether an object is a tuple (or any other type of object), you can use the python function isinstance
End of explanation
"""
print(len(t))
"""
Explanation: Tuples have lengths just like other types of Python objects
End of explanation
"""
t = (12, "monty", True, -1.23e6)
t
"""
Explanation: and you can mix types within a tuple
End of explanation
"""
t[0]

t[-1]

t[-2:] # get the last two elements, return as a tuple
"""
Explanation: 1.2 Indexing Tuples
Indexing works the same way as for arrays:
End of explanation
"""
x = (True) ; print(type(x))
#by the way, did you know you can execute two commands on one line with a semicolon?
x = (True,) ; print(type(x))
x = ()
type(x), len(x)
#and you can also return multiple things to the output of a notebook cell with commas
"""
Explanation: Single element tuples look like (element,) rather than (element)
End of explanation
"""
t[2] = False
"""
Explanation: 1.3 Tuple Modification
Tuples cannot be modified. The following cell will spit out an error.
End of explanation
"""
newt = t[0:2], False, t[3:]
type(newt), newt
"""
Explanation: but you can create a new tuple by combining elements from other tuples
End of explanation
"""
len(newt), type(newt[0]), type(newt[1]), type(newt[2])
"""
Explanation: Note the above did something, but not exactly what you might think. It created a three element tuple, where the first (index 0) and third (index 2) elements are themselves tuples
End of explanation
"""
t[0:2] + False + t[3:]
"""
Explanation: This can have its uses, but more often you will want to create a tuple identical to the original but with different elements, for which we use concatenation instead, just like we did with strings. But concatenation is tricky. What's wrong with the following statement?
End of explanation
"""
'I can not concatenate things like ' + 7 + ' and ' + 'monkeys'
"""
Explanation: similarly:
End of explanation
"""
y = t[0:2] + (False,) + t[3:]
y
"""
Explanation: You can only concatenate objects of the same type, so you have to use the trick for a single element tuple, as described above
End of explanation
"""
x=y
x
"""
Explanation: So tuples are immutable, but not indestructible. Once we've defined a new one, we can assign it to x and overwrite the original if we really want to
End of explanation
"""
t = t[0:2] + (False,) + t[3:]
t
"""
Explanation: Similarly, we could have done this without assigning a new variable, but note that this overwrites the original.
End of explanation
"""
t * 2
"""
Explanation: Like strings, you can also "multiply" tuples to duplicate elements
End of explanation
"""
v = [1,2,3]
print(len(v))
print(type(v))
"""
Explanation: Tuples are most commonly used in functions that return multiple arguments.
2. Lists
2.1 Defining Lists
Python lists are denoted with square brackets. We've dealt with them indirectly a bit already in this class, but it's worth discussing them explicitly here.
End of explanation
"""
v[0:2], v[-1]

v = v[2:]
print(v)
"""
Explanation: 2.2 Indexing Lists
Lists can be indexed
End of explanation
"""
v = ["eggs", "spam", -1, ("monty","python"), [-1.2,-3.5]]
len(v)
"""
Explanation: Lists can contain multiple data types, including tuples and other lists
End of explanation
"""
v[0] ="green egg"
v[1] += ",love it." # this takes what's already in v[1] and adds what comes after +=
v
"""
Explanation: Unlike tuples, however, lists are mutable.
End of explanation
"""
v[-1][1] = None
print(v)

z = array([[1,2],[3,4]])
z[0][1], z[0,1]
"""
Explanation: You can index multi-element objects within a list as well, but in this case, you index variable[list element index][index of thing you want], as below. Note this is slightly different from the way you were taught to index a numpy array with arrayname[column,row], but the same syntax actually works with numpy arrays (arrayname[column][row])
End of explanation
"""
v = [1,2,3]
v.append(4)
v.append([-5])
v
"""
Explanation: <div class=sidebar>
### Sidebar: *A Note on lists vs. arrays*
In fact, lists can be made to look a lot like numpy arrays (e.g. vv = [ [1,2], [3,4] ] makes a list that looks just like the numpy array above), but it's important to note that the properties of a list object are slightly different. Specifically:
* Since a list contains pointers to a bunch of python objects, it takes more memory to store an array in list format than as an array (which points to a single object in memory).
Operations on large arrays will be much faster than on equivalent lists, because list operations require a variety of type checks, etc.
* Many mathematical operations, particularly matrix operations, will only work on numpy arrays
* Lists support insertion, deletion, appending and concatenation in ways that arrays do not, as detailed in the next section.
So each is useful for its own thing. Lists are useful for storing mixed type objects associated with one another and their mutability allows insertion, deletion, etc. Arrays are useful for storing and operating on large matrices of numbers.
### 2.3 Extending and Appending Lists
#### Useful list methods:
* `.append()`: adds a new element
* `.extend()`: concatenates a list/element
* `.pop()`: removes an element
End of explanation
"""
v = v[:4]
w = ['elderberries', 'eggs']
v + w

v

v.extend(w)
v

z = v.pop()
z

v

v.pop(0) ## pop the first element
v
"""
Explanation: Note: lists can be considered objects. Objects are collections of data and associated methods. In the case of a list, append is a method: it is a function associated with the object.
End of explanation
"""
v = [1, 3, 2, 3, 4]
v.sort()
v
"""
Explanation: 2.4 Searching, Sorting, and Counting Lists
End of explanation
"""
v.sort(reverse=True)
v
"""
Explanation: reverse is a keyword of the .sort() method
End of explanation
"""
v.index(4) ## lookup the index of the entry 4

v.index(3)

v.count(3)

v.insert(0, "it's full of stars")
v

v.remove(1)
v
"""
Explanation: .sort() changes the list in place
End of explanation
"""
v.
"""
Explanation: 2.5 Exploring List Methods
Jupyter is your new best friend: its tab-completion allows you to explore all methods available to an object. (This only works in jupyter, not in the command line)
Type v. and then the tab key to see all the available methods:
End of explanation
"""
v.index?
"""
Explanation: Once you find a method, type (for example) v.index?
and press shift-enter: you'll see the documentation of the method
End of explanation
"""
a = ['cat', 'window', 'defenestrate']
for x in a:
    print(x, len(x))

#enumerate is a useful command that returns ordered pairs of the form
#(index, array element) for all of the elements in a list
for i,x in enumerate(a):
    print(i, x, len(x))

# print all the elements in the list with spaces between
for x in a:
    print(x, end=' ')
"""
Explanation: This is probably the most important thing you'll learn today
2.6 Iterating Over Lists
End of explanation
"""
x = range(4)
x

total = 0
for val in range(4):
    total += val
    print("By adding " + str(val) + \
          " the total is now " + str(total))
"""
Explanation: The syntax for iteration is...
for variable_name in iterable:
    # do something with variable_name
2.7 The range() function
The range() function creates a list of integers (actually an iterator, but think of it as a list)
End of explanation
"""
total = 0
for val in range(1, 10, 2):
    total += val
    print("By adding " + str(val) + \
          " the total is now " + str(total))
"""
Explanation: range([start,] stop[, step]) → list of integers
End of explanation
"""
y = arange(4)
y

z = range(0,10,0.1)
"""
Explanation: In practice, this is equivalent to the python arange command that you've already seen, but note that arange creates a numpy array with all of the elements between the start and stop point, and is therefore more efficient for large loops. Still, it's useful to be aware of range as well. arange can also be used on non-integers, which is quite useful. Note that the second cell below will result in an error.
End of explanation
"""
L = [] #before populating the list, you must first define it!
for num in range(100):
    if (num % 7 == 0) or (num % 11 == 0): #recall that % is the "mod" function
        L.append(num)
print(L)
"""
Explanation: 2.8 Creating Lists on-the-fly
Example: imagine you want a list of all numbers from 0 to 100 which are divisible by 7 or 11.
End of explanation """ L = [num for num in range(100) if (num % 7 == 0) or (num % 11 == 0)] print(L) # Can also operate on each element: L = [2 * num for num in range(100) if (num % 7 == 0) or (num % 11 == 0)] print(L) """ Explanation: We can also do this with a list comprehension: End of explanation """ L = ["Oh", "Say", "does", "that", "star", "spangled", "banner", "yet", "wave"] """ Explanation: <div class=hw> ### Exercise 1 ---------- Write a loop over the words in this list and print the words longer than three characters in length: End of explanation """ {1,2,3,"bingo"} """ Explanation: 3. Sets Sets can be thought of as unordered lists of unique items Sets are denoted with a curly braces End of explanation """ {1,2,3,"bingo",3} type({1,2,3,"bingo"}) """ Explanation: The uniqueness aspect is the key here. Note that the output of the cell below is the same as the one above. End of explanation """ set("spamIam") """ Explanation: The set function will make a set out of whatever is provided. End of explanation """ a = set("sp") b = set("am") print(a, b) c = set(["a","m"]) c == b "p" in a a | b """ Explanation: sets have unique elements. They can be compared, differenced, unionized, etc. End of explanation """ # number 1... curly braces & colons d = {"favorite cat": None, "favorite spam": "all"} d # number 2 d = dict(one = 1, two=2, cat='dog') d # number 3 ... just start filling in items/keys d = {} # empty dictionary d['cat'] = 'dog' d['one'] = 1 d['two'] = 2 d # number 4... start with a list of tuples and then use the dict function to create a dictionary with them mylist = [("cat","dog"), ("one",1), ("two",2)] dict(mylist) dict(mylist) == d """ Explanation: 4. Dictionaries 4.1 Defining Dictionaries Dictionaries are one-to-one mappings of objects. They are often useful when you want to assign multiple named properties to individuals. Each entry in a dictionary has a set of "keys" that can be assigned unique values. 
We'll show four ways to make a Dictionary End of explanation """ d = {"favorite cat": None, "favorite spam": "all"} d[0] # this breaks! Dictionaries have no order """ Explanation: 4.2 Dictionary Keys Note that there is no guaranteed order in a dictionary, thus they cannot be indexed numerically! End of explanation """ d["favorite spam"] """ Explanation: They can, however, be indexed with an appropriate key. End of explanation """ d[0] = "this is a zero" d """ Explanation: and the following syntax results in a key called "0" End of explanation """ d = {'favorites': {'cat': None, 'spam': 'all'},\ 'least favorite': {'cat': 'all', 'spam': None}} d['least favorite']['cat'] """ Explanation: Dictionaries can contain dictionaries! End of explanation """ # globals() and locals() store all global and local variables (in this case, since we've imported numpy, quite a few) globals().keys() """ Explanation: note: the backslash ('\') above allows you to break lines without interrupting the code. Not technically needed when defining a dictionary or list, but useful in many instances when you have a long operation that is unwiedy in a single line of code Dictionaries are used everywhere within Python... 
End of explanation """ # Each element is (name, semi-major axis (AU), eccentricity, orbit class) # source: http://ssd.jpl.nasa.gov/sbdb_query.cgi Asteroids = [('Eros', 1.457916888347732, 0.2226769029627053, 'AMO'), ('Albert', 2.629584157344544, 0.551788195302116, 'AMO'), ('Alinda', 2.477642943521562, 0.5675993715753302, 'AMO'), ('Ganymed', 2.662242764279804, 0.5339300994578989, 'AMO'), ('Amor', 1.918987277620309, 0.4354863345648127, 'AMO'), ('Icarus', 1.077941311539208, 0.826950446001521, 'APO'), ('Betulia', 2.196489260519891, 0.4876246891992282, 'AMO'), ('Geographos', 1.245477192797457, 0.3355407124897842, 'APO'), ('Ivar', 1.862724540418448, 0.3968541470639658, 'AMO'), ('Toro', 1.367247622946547, 0.4358829575017499, 'APO'), ('Apollo', 1.470694262588244, 0.5598306817483757, 'APO'), ('Antinous', 2.258479598510079, 0.6070051516585434, 'APO'), ('Daedalus', 1.460912865705988, 0.6144629118218898, 'APO'), ('Cerberus', 1.079965807367047, 0.4668134997419173, 'APO'), ('Sisyphus', 1.893726635847921, 0.5383319204425762, 'APO'), ('Quetzalcoatl', 2.544270656955212, 0.5704591861565643, 'AMO'), ('Boreas', 2.271958775354725, 0.4499332278634067, 'AMO'), ('Cuyo', 2.150453953345012, 0.5041719257675564, 'AMO'), ('Anteros', 1.430262719980132, 0.2558054402785934, 'AMO'), ('Tezcatlipoca', 1.709753263222791, 0.3647772103513082, 'AMO'), ('Midas', 1.775954494579457, 0.6503697243919138, 'APO'), ('Baboquivari', 2.646202507670927, 0.5295611095751231, 'AMO'), ('Anza', 2.26415089613359, 0.5371603112900858, 'AMO'), ('Aten', 0.9668828078092987, 0.1827831025175614, 'ATE'), ('Bacchus', 1.078135348117527, 0.3495569270441645, 'APO'), ('Ra-Shalom', 0.8320425524852308, 0.4364726062545577, 'ATE'), ('Adonis', 1.874315684524321, 0.763949321566, 'APO'), ('Tantalus', 1.289997492877751, 0.2990853014998932, 'APO'), ('Aristaeus', 1.599511990737142, 0.5030618532252225, 'APO'), ('Oljato', 2.172056090036035, 0.7125729402616418, 'APO'), ('Pele', 2.291471988746353, 0.5115484924883255, 'AMO'), ('Hephaistos', 
2.159619960333728, 0.8374146846143349, 'APO'), ('Orthos', 2.404988778495748, 0.6569133796135244, 'APO'), ('Hathor', 0.8442121506103012, 0.4498204013480316, 'ATE'), ('Beltrovata', 2.104690977122337, 0.413731105995413, 'AMO'), ('Seneca', 2.516402574514213, 0.5708728441169761, 'AMO'), ('Krok', 2.152545170235639, 0.4478259793515817, 'AMO'), ('Eger', 1.404478323548423, 0.3542971360331806, 'APO'), ('Florence', 1.768227407864309, 0.4227761019048867, 'AMO'), ('Nefertiti', 1.574493139339916, 0.283902719273878, 'AMO'), ('Phaethon', 1.271195939723604, 0.8898716672181355, 'APO'), ('Ul', 2.102493486378346, 0.3951143067760007, 'AMO'), ('Seleucus', 2.033331705805067, 0.4559159977082651, 'AMO'), ('McAuliffe', 1.878722427225527, 0.3691521497610656, 'AMO'), ('Syrinx', 2.469752836845105, 0.7441934504192601, 'APO'), ('Orpheus', 1.209727780883745, 0.3229034563257626, 'APO'), ('Khufu', 0.989473784873371, 0.468479627898914, 'ATE'), ('Verenia', 2.093231870619781, 0.4865133359612604, 'AMO'), ('Don Quixote', 4.221712367193639, 0.7130894892477316, 'AMO'), ('Mera', 1.644476057737928, 0.3201425983025733, 'AMO')] orbit_class = {'AMO':'Amor', 'APO':'Apollo', 'ATE':'Aten'} from IPython.core.display import HTML def css_styling(): styles = open("../custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: <div class=hw> ### Exercise 2 -------------- Below is a list of information on 50 of the largest near-earth asteroids. (a) Given this list of asteroid information, find and list all asteroids with semi-major axis (a) within 0.2AU of earth, and with eccentricities (e) less than 0.5. (b) Note that the object below is a list (denoted with square brackets) of tuples (denoted with round brackets), and that the orbit class object is a dictionary. Create a dictionary where the name of each asteroid is the key, and the object stored under that key is a three element tuple (semi-major axis (AU), eccentricity, orbit class). 
(c) using the list (and not the dictionary), print the list of asteroids according to: (i) alphabetical by asteroid name (ii) in order of increasing semi-major axis (iii) in order of increasing eccentricity (iv) alphabetically by class (two-stage sorting) hint: use the "sorted" function rather than object.sort, and check out the function "itemgetter" from the python module "operator" Bonus points if you can get it to print with the columns lined up nicely! End of explanation """
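As a nudge toward part (c), here is a minimal sketch of sorted with operator.itemgetter on a toy list of tuples (made-up data, deliberately not the asteroid list):

```python
from operator import itemgetter

toys = [('b', 2.0, 'APO'), ('a', 3.0, 'AMO'), ('c', 1.0, 'AMO')]

by_name = sorted(toys, key=itemgetter(0))           # sort on element 0
by_axis = sorted(toys, key=itemgetter(1))           # sort on element 1
# Two-stage sort: by class first, then by name within each class.
by_class_then_name = sorted(toys, key=itemgetter(2, 0))

print([t[0] for t in by_name])             # ['a', 'b', 'c']
print([t[0] for t in by_axis])             # ['c', 'b', 'a']
print([t[0] for t in by_class_then_name])  # ['a', 'c', 'b']
```

Unlike list.sort(), sorted returns a new list and works on any iterable, which is why it is the better fit here.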
opesci/devito
examples/seismic/tutorials/04_dask_pickling.ipynb
mit
#NBVAL_IGNORE_OUTPUT # Set up inversion parameters. param = {'t0': 0., 'tn': 1000., # Simulation last 1 second (1000 ms) 'f0': 0.010, # Source peak frequency is 10Hz (0.010 kHz) 'nshots': 5, # Number of shots to create gradient from 'shape': (101, 101), # Number of grid points (nx, nz). 'spacing': (10., 10.), # Grid spacing in m. The domain size is now 1km by 1km. 'origin': (0, 0), # Need origin to define relative source and receiver locations. 'nbl': 40} # nbl thickness. import numpy as np import scipy from scipy import signal, optimize from devito import Grid from distributed import Client, LocalCluster, wait import cloudpickle as pickle # Import acoustic solver, source and receiver modules. from examples.seismic import Model, demo_model, AcquisitionGeometry, Receiver from examples.seismic.acoustic import AcousticWaveSolver from examples.seismic import AcquisitionGeometry # Import convenience function for plotting results from examples.seismic import plot_image from examples.seismic import plot_shotrecord def get_true_model(): ''' Define the test phantom; in this case we are using a simple circle so we can easily see what is going on. ''' return demo_model('circle-isotropic', vp_circle=3.0, vp_background=2.5, origin=param['origin'], shape=param['shape'], spacing=param['spacing'], nbl=param['nbl']) def get_initial_model(): '''The initial guess for the subsurface model. ''' # Make sure both model are on the same grid grid = get_true_model().grid return demo_model('circle-isotropic', vp_circle=2.5, vp_background=2.5, origin=param['origin'], shape=param['shape'], spacing=param['spacing'], nbl=param['nbl'], grid=grid) def wrap_model(x, astype=None): '''Wrap a flat array as a subsurface model. ''' model = get_initial_model() v_curr = 1.0/np.sqrt(x.reshape(model.shape)) if astype: model.update('vp', v_curr.astype(astype).reshape(model.shape)) else: model.update('vp', v_curr.reshape(model.shape)) return model def load_model(filename): """ Returns the current model. 
This is used by the worker to get the current model.
    """
    pkl = pickle.load(open(filename, "rb"))
    return pkl['model']

def dump_model(filename, model):
    ''' Dump model to disk.
    '''
    pickle.dump({'model':model}, open(filename, "wb"))

def load_shot_data(shot_id, dt):
    ''' Load shot data from disk, resampling to the model time step.
    '''
    pkl = pickle.load(open("shot_%d.p"%shot_id, "rb"))
    return pkl['geometry'], pkl['rec'].resample(dt)

def dump_shot_data(shot_id, rec, geometry):
    ''' Dump shot data to disk.
    '''
    pickle.dump({'rec':rec, 'geometry': geometry}, open('shot_%d.p'%shot_id, "wb"))

def generate_shotdata_i(param):
    """ Inversion crime alert! Here the worker is creating the
        'observed' data using the real model. For a real case
        the worker would be reading seismic data from disk.
    """
    # Reconstruct objects
    with open("arguments.pkl", "rb") as cp_file:
        cp = pickle.load(cp_file)
    solver = cp['solver']

    # source position changes according to the index
    shot_id=param['shot_id']
    solver.geometry.src_positions[0,:]=[20, shot_id*1000./(param['nshots']-1)]
    true_d = solver.forward()[0]
    dump_shot_data(shot_id, true_d.resample(4.0), solver.geometry.src_positions)

def generate_shotdata(solver):
    # Pickle devito objects (save on disk)
    cp = {'solver': solver}
    with open("arguments.pkl", "wb") as cp_file:
        pickle.dump(cp, cp_file)

    work = [dict(param) for i in range(param['nshots'])]
    # synthetic data is generated here twice: serial (loop below) and parallel (via dask map functionality)
    for i in range(param['nshots']):
        work[i]['shot_id'] = i
        generate_shotdata_i(work[i])

    # Map worklist to cluster. We pass our function and the dictionary to the map() function of the client.
    # This returns a list of futures that represents each task
    futures = c.map(generate_shotdata_i, work)

    # Wait for all futures
    wait(futures)

#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_shotrecord

# Client setup
cluster = LocalCluster(n_workers=2, death_timeout=600)
c = Client(cluster)

# Generate shot data.
true_model = get_true_model()

# Source coords definition
src_coordinates = np.empty((1, len(param['shape'])))

# Number of receiver locations per shot.
nreceivers = 101

# Set up receiver data and geometry.
rec_coordinates = np.empty((nreceivers, len(param['shape'])))
rec_coordinates[:, 1] = np.linspace(param['spacing'][0], true_model.domain_size[0] - param['spacing'][0], num=nreceivers)
rec_coordinates[:, 0] = 980. # 20m from the right end

# Geometry
geometry = AcquisitionGeometry(true_model, rec_coordinates, src_coordinates, param['t0'], param['tn'], src_type='Ricker', f0=param['f0'])

# Set up solver
solver = AcousticWaveSolver(true_model, geometry, space_order=4)
generate_shotdata(solver)
"""
Explanation: 04 - Full waveform inversion with Dask and Devito pickling
Introduction
Here, we revisit 04_dask.ipynb: Full Waveform Inversion with Devito and Dask, but with a twist: we now want to show that it is possible to use pickle to serialize (deserialize) a Devito object structure into (from) a byte stream. This is especially useful in our example as the geometry of all source experiments remains essentially the same; only the source location changes.
In other words, we can convert a solver object (built on top of generic Devito objects) into a byte stream to store it. Later on, this byte stream can then be retrieved and de-serialized back to an instance of the original solver object by the dask workers, and then be populated with the correct geometry for the i-th source location. We can still benefit from the simplicity of the example and create only one solver object which can be used to both generate the observed data set and to compute the predicted data and gradient in the FWI process. Further examples of pickling can be found here.
The tutorial roughly follows the structure of 04_dask.ipynb. Technical details about Dask and scipy.optimize.minimize will therefore be treated only superficially.
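The serialize/deserialize pattern described above can be sketched in isolation with the standard pickle module (a toy object stands in for the solver here; cloudpickle extends this same interface to objects the stdlib pickler cannot handle):

```python
import pickle
import tempfile, os

# Any picklable object stands in for the solver state.
state = {'space_order': 4, 'src_position': [20.0, 500.0]}

path = os.path.join(tempfile.mkdtemp(), "solver_state.pkl")
with open(path, "wb") as f:
    pickle.dump(state, f)           # serialize to a byte stream on disk

with open(path, "rb") as f:
    restored = pickle.load(f)       # deserialize back to an object

print(restored == state)            # True: the round trip preserves the data
```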
What is different from 04_dask.ipynb
The big difference between 04_dask.ipynb and this tutorial is that in the former a solver object is created for each source, in both the forward modeling and the FWI gradient kernels, while here only one solver object is created and reused throughout the optimization process. This is done through pickling and unpickling, respectively.
Another difference between the tutorials is that in 04_dask.ipynb a list of the observed shots is created, and each observed shot record in the list is passed as a parameter to a single-shot FWI objective function executed in parallel using the submit() method. Here, a single observed shot record, along with information about its source location, is stored in a dictionary, which is saved into a pickle file. Later, dask workers retrieve the corresponding pickled data when computing the gradient for a single shot. The same applies to the model object in the optimization process: it is serialized each time the model's velocity is updated, and the dask workers then unpickle the data from file back into a model object.
Moreover, there is a difference in the way the global functional-gradient is obtained. In 04_dask.ipynb we had to wait for all computations to finish via wait(futures) and then sum the function values and gradients from all workers. Here, a type fg_pair is defined so that the reduce function sum can be used; sum takes all the futures given to it and, once they are completed, combines them into the estimate of the global functional-gradient.
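The reduce-with-sum idea can be sketched with a toy pair type (mirroring the fg_pair class defined later in the notebook): sum starts from the integer 0, which is why __radd__ must handle that case.

```python
class Pair:
    def __init__(self, f, g):
        self.f, self.g = f, g

    def __add__(self, other):
        # Element-wise addition of the (functional, gradient) pair.
        return Pair(self.f + other.f, self.g + other.g)

    def __radd__(self, other):
        # sum() begins with 0, so the first addition is 0 + Pair(...).
        return self if other == 0 else self.__add__(other)

parts = [Pair(1.0, 2.0), Pair(3.0, 4.0), Pair(5.0, 6.0)]
total = sum(parts)
print(total.f, total.g)   # 9.0 12.0
```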
scipy.optimize.minimize
As in 04_dask.ipynb, here we are going to focus on using L-BFGS via scipy.optimize.minimize(method='L-BFGS-B')
python
scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxls': 20, 'iprint': -1, 'gtol': 1e-05, 'eps': 1e-08, 'maxiter': 15000, 'ftol': 2.220446049250313e-09, 'maxcor': 10, 'maxfun': 15000})
The argument fun is a callable function that returns the misfit between the simulated and the observed data. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function - as is our case when applying the adjoint-state method.
Dask
Dask is a task-based parallelization framework for Python. It allows us to distribute our work among a collection of workers controlled by a central scheduler. Dask is well-documented, flexible, and currently under active development.
In the same way as in 04_dask.ipynb, we are going to use it here to parallelise the computation of the functional and gradient as this is the vast bulk of the computational expense of FWI and it is trivially parallel over data shots.
Forward modeling
We define the functions used for the forward modeling, as well as the other functions used in constructing and deconstructing Python/Devito objects to/from binary data as follows:
End of explanation
"""
# Define a type to store the functional and gradient.
class fg_pair:
    def __init__(self, f, g):
        self.f = f
        self.g = g

    def __add__(self, other):
        f = self.f + other.f
        g = self.g + other.g

        return fg_pair(f, g)

    def __radd__(self, other):
        if other == 0:
            return self
        else:
            return self.__add__(other)
"""
Explanation: Dask specifics
Previously in 03_fwi.ipynb, we defined a function to calculate the individual contribution to the functional and gradient for each shot, which was then used in a loop over all shots.
However, when using distributed frameworks such as Dask we instead think in terms of creating a worklist which gets mapped onto the worker pool. The sum reduction is also performed in parallel. For now, however, we assume that scipy.optimize.minimize itself is running on the master process; this is a reasonable simplification because the computational cost of calculating (f, g) far exceeds the other compute costs.
Because we want to be able to use standard reduction operators such as sum on (f, g), we first define it as a type, so that we can define the __add__ (and __radd__) methods.
End of explanation
"""

#NBVAL_IGNORE_OUTPUT
from devito import Function

# Create FWI gradient kernel for a single shot
def fwi_gradient_i(param):

    # Load the current model and the shot data for this worker.
    # Note, unlike the serial example the model is not passed in
    # as an argument. Broadcasting large datasets is considered
    # a programming anti-pattern, and at the time of writing it
    # only worked reliably with Dask master. Therefore, the
    # model is communicated via a file.
    model0 = load_model(param['model'])

    dt = model0.critical_dt
    nbl = model0.nbl

    # Get src_position and data
    src_positions, rec = load_shot_data(param['shot_id'], dt)

    # Set up solver -- load the solver used above in the generation of the synthetic data.
with open("arguments.pkl", "rb") as cp_file: cp = pickle.load(cp_file) solver = cp['solver'] # Set attributes to solver solver.geometry.src_positions=src_positions solver.geometry.resample(dt) # Compute simulated data and full forward wavefield u0 d, u0 = solver.forward(vp=model0.vp, dt=dt, save=True)[0:2] # Compute the data misfit (residual) and objective function residual = Receiver(name='rec', grid=model0.grid, time_range=solver.geometry.time_axis, coordinates=solver.geometry.rec_positions) #residual.data[:] = d.data[:residual.shape[0], :] - rec.data[:residual.shape[0], :] residual.data[:] = d.data[:] - rec.data[0:d.data.shape[0], :] f = .5*np.linalg.norm(residual.data.flatten())**2 # Compute gradient using the adjoint-state method. Note, this # backpropagates the data misfit through the model. grad = Function(name="grad", grid=model0.grid) solver.gradient(rec=residual, u=u0, vp=model0.vp, dt=dt, grad=grad) # Copying here to avoid a (probably overzealous) destructor deleting # the gradient before Dask has had a chance to communicate it. g = np.array(grad.data[:])[nbl:-nbl, nbl:-nbl] # return the objective functional and gradient. return fg_pair(f, g) """ Explanation: Create operators for gradient based inversion To perform the inversion we are going to use scipy.optimize.minimize(method=’L-BFGS-B’). First we define the functional, f, and gradient, g, operator (i.e. the function fun) for a single shot of data. This is the work that is going to be performed by the worker on a unit of data. End of explanation """ def fwi_gradient(model, param): # Dump a copy of the current model for the workers # to pick up when they are ready. param['model'] = "model_0.p" dump_model(param['model'], wrap_model(model)) # Define work list work = [dict(param) for i in range(param['nshots'])] for i in range(param['nshots']): work[i]['shot_id'] = i # Distribute worklist to workers. fgi = c.map(fwi_gradient_i, work, retries=1) # Perform reduction. 
    fg = c.submit(sum, fgi).result()

    # L-BFGS in scipy expects a flat array in 64-bit floats.
    return fg.f, fg.g.flatten().astype(np.float64)
"""
Explanation: Define the global functional-gradient operator. This does the following:
* Maps the worklist (shots) to the workers so that the individual contributions to (f, g) are computed.
* Sums the individual contributions to (f, g) and returns the result.
End of explanation
"""

from scipy import optimize

# Many optimization methods in scipy.optimize.minimize accept a callback
# function that can operate on the solution after every iteration. Here
# we use this to monitor the true relative solution error.
relative_error = []
def fwi_callbacks(x):
    # Calculate true relative error
    true_vp = get_true_model().vp.data[param['nbl']:-param['nbl'], param['nbl']:-param['nbl']]
    true_m = 1.0 / (true_vp.reshape(-1).astype(np.float64))**2
    relative_error.append(np.linalg.norm((x-true_m)/true_m))

# FWI with L-BFGS
ftol = 0.1
maxiter = 5

def fwi(model, param, ftol=ftol, maxiter=maxiter):
    # Initial guess
    v0 = model.vp.data[param['nbl']:-param['nbl'], param['nbl']:-param['nbl']]
    m0 = 1.0 / (v0.reshape(-1).astype(np.float64))**2

    # Define bounding box constraints on the solution.
    vmin = 1.4    # do not allow velocities slower than water
    vmax = 4.0
    bounds = [(1.0/vmax**2, 1.0/vmin**2) for _ in range(np.prod(model.shape))]    # in [s^2/km^2]

    result = optimize.minimize(fwi_gradient,
                               m0, args=(param, ), method='L-BFGS-B', jac=True,
                               bounds=bounds, callback=fwi_callbacks,
                               options={'ftol':ftol, 'maxiter':maxiter, 'disp':True})

    return result
"""
Explanation: FWI with L-BFGS-B
Equipped with a function to calculate the functional and gradient, we are finally ready to define the optimization function.
End of explanation
"""

#NBVAL_IGNORE_OUTPUT
model0 = get_initial_model()

# Baby steps
result = fwi(model0, param)

# Print out results of optimizer.
print(result) #NBVAL_SKIP # Plot FWI result from examples.seismic import plot_image slices = tuple(slice(param['nbl'],-param['nbl']) for _ in range(2)) vp = 1.0/np.sqrt(result['x'].reshape(true_model.shape)) plot_image(true_model.vp.data[slices], vmin=2.4, vmax=2.8, cmap="cividis") plot_image(vp, vmin=2.4, vmax=2.8, cmap="cividis") #NBVAL_SKIP import matplotlib.pyplot as plt # Plot model error plt.plot(range(1, maxiter+1), relative_error); plt.xlabel('Iteration number'); plt.ylabel('L2-model error') plt.show() """ Explanation: We now apply our FWI function and have a look at the result. End of explanation """
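The (f, g) convention used with scipy.optimize.minimize above can be exercised on a toy problem. The following sketch uses an illustrative quadratic objective, unrelated to the FWI functional: with jac=True, fun must return the objective value and its gradient together, exactly as fwi_gradient does.

```python
import numpy as np
from scipy import optimize

# Toy objective: f(x) = ||x - a||^2 with its analytic gradient,
# returned together as (f, g) -- the same convention as above.
a = np.array([1.0, -2.0, 3.0])

def fun(x):
    r = x - a
    f = np.dot(r, r)
    g = 2.0 * r
    return f, g

res = optimize.minimize(fun, np.zeros(3), method='L-BFGS-B', jac=True)
print(res.x)  # close to [ 1. -2.  3.]
```

For a quadratic this converges in a handful of iterations; in the FWI case each call to fun is the expensive, Dask-parallelized functional-gradient evaluation.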
ML4DS/ML4all
U3.PCA/PCA_professor.ipynb
mit
# Basic imports
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
"""
Explanation: Principal Component Analysis
The code in this notebook has been taken from a notebook in the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The code has been released by VanderPlas under the MIT license. Our text is original, though the presentation structure partially follows VanderPlas' presentation of the topic.
Version: 1.0 (2020/09), Jesús Cid-Sueiro
<!-- I KEEP THIS LINK, MAY BE WE COULD GENERATE SIMILAR COLAB LINKS TO ML4ALL
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> -->
End of explanation
"""

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.axis('equal')
plt.show()
"""
Explanation: Many machine learning applications involve the processing of highly multidimensional data. More data dimensions usually imply more information to make better predictions. However, a large dimension may pose computational problems (the computational load of machine learning algorithms usually grows with the data dimension) and make it more difficult to design a good predictor. For this reason, a whole area of machine learning has been focused on feature extraction algorithms, i.e. algorithms that transform a multidimensional dataset into data with a reduced set of features. The goal of these techniques is to reduce the data dimension while preserving the most relevant information for the prediction task.
Feature extraction (and, more generally, dimensionality reduction) algorithms are also useful for visualization.
By reducing the data dimensions to 2 or 3, we can transform the data into points in the plane or in space, which can be represented graphically.
Principal Component Analysis (PCA) is a particular example of linear feature extraction methods, which compute the new features as linear combinations of the original data components. Besides feature extraction and visualization, PCA is also a useful tool for noise filtering, as we will see later.
1. A visual explanation.
Before going into the mathematical details, we can illustrate the behavior of PCA by looking at a two-dimensional dataset with 200 samples:
End of explanation
"""

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
"""
Explanation: PCA looks for the principal axes in the data, using them as new coordinates to represent the data points. We can compute this as follows:
End of explanation
"""

print(pca.components_)
"""
Explanation: After fitting PCA to the data, we can read the directions of the new axes (the principal directions) using:
End of explanation
"""

print(pca.explained_variance_)
"""
Explanation: These directions are unit vectors. We can plot them over the scatter plot of the input data, scaled up by the standard deviation of the data along each direction.
The standard deviations can be computed as the square root of the variance along each direction, which is available through
End of explanation
"""

def draw_vector(v0, v1, ax=None):
    ax = ax or plt.gca()
    arrowprops=dict(arrowstyle='->',
                    linewidth=2,
                    shrinkA=0, shrinkB=0, color='k')
    ax.annotate('', v1, v0, arrowprops=arrowprops)

# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
    v = vector * 3 * np.sqrt(length)
    draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
"""
Explanation: The principal axes of the data can be used as a new basis for the data representation. The principal components of any point are given by the projections of the point onto each principal axis.
End of explanation
"""

# plot principal components
T = pca.transform(X)
plt.scatter(T[:, 0], T[:, 1], alpha=0.2)
plt.axis('equal')
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.title('principal components')
plt.show()
"""
Explanation: Note that PCA is essentially an affine transformation: the data is centered around the mean and rotated according to the principal directions. At this point, we can select those directions that may be more relevant for prediction.
2. Mathematical Foundations
(The material in this section is based on Wikipedia: Principal Component Analysis)
In this section we will see how the principal directions are determined mathematically, and how they can be used to transform the original dataset.
PCA is defined as a linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
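This definition can be checked numerically with a short NumPy sketch (an illustrative addition with synthetic data, not part of the original derivation): projecting zero-mean data onto the eigenvectors of ${\bf X}^\top{\bf X}$, sorted by decreasing eigenvalue, yields components with decreasing variance.

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic 3-dimensional data with correlated components.
X = rng.randn(500, 3) @ np.array([[3.0, 0.0, 0.0],
                                  [1.0, 1.0, 0.0],
                                  [0.0, 0.5, 0.2]])
X = X - X.mean(axis=0)            # enforce zero sample mean

# Eigenvectors of X^T X, sorted by decreasing eigenvalue.
evals, evecs = np.linalg.eigh(X.T @ X)
order = np.argsort(evals)[::-1]
W = evecs[:, order]

T = X @ W                          # principal components
var = T.var(axis=0)
print(var)                         # a decreasing sequence
```

The first column of T carries the largest variance, the second the next largest, and so on, which is exactly the ordering property that the derivation below establishes.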
Consider a dataset ${\cal S} = \{{\bf x}_k, k=0,\cdots, K-1\}$ of $m$-dimensional samples arranged by rows in a data matrix, ${\bf X}$. Assume the dataset has zero sample mean, that is
\begin{align}
\sum_{k=0}^{K-1} {\bf x}_k = {\bf 0}
\end{align}
which implies that the sample mean of each column in ${\bf X}$ is zero. If the data is not zero-mean, the data matrix ${\bf X}$ is built with rows ${\bf x}_k - {\bf m}$, where ${\bf m}$ is the mean.
PCA transforms each sample ${\bf x}_k \in {\cal S}$ into a vector of principal components ${\bf t}_k$. The transformation is linear, so each principal component can be computed as the scalar product of each sample with a weight vector of coefficients. For instance, if the coefficient vectors are ${\bf w}_0, {\bf w}_1, \ldots, {\bf w}_{l-1}$, the principal components of ${\bf x}_k$ are
$$
t_{k0} = {\bf w}_0^\top \mathbf{x}_k, \quad
t_{k1} = {\bf w}_1^\top \mathbf{x}_k, \quad
t_{k2} = {\bf w}_2^\top \mathbf{x}_k, \quad \ldots
$$
These components can be computed iteratively. In the next section we will see how to compute the first one.
2.1. Computing the first component
2.1.1. Computing ${\bf w}_0$
The principal direction is selected in such a way that the sample variance of the first components of the data (that is, $t_{00}, t_{10}, \ldots, t_{K-1,0}$) is maximized. Since we can make the variance arbitrarily large by using an arbitrarily large ${\bf w}_0$, we will impose a constraint on the size of the coefficient vectors, which should be unitary.
Thus,
$$
\|{\bf w}_0\| = 1
$$
Note that the mean of the transformed components is zero, because the samples are zero-mean:
\begin{align}
\sum_{k=0}^{K-1} t_{k0} = \sum_{k=0}^{K-1} {\bf w}_0^\top {\bf x}_k = {\bf w}_0^\top \sum_{k=0}^{K-1} {\bf x}_k = {\bf 0}
\end{align}
therefore, the variance of the first principal component can be computed as
\begin{align}
V &= \frac{1}{K} \sum_{k=0}^{K-1} t_{k0}^2
   = \frac{1}{K} \sum_{k=0}^{K-1} {\bf w}_0^\top {\bf x}_k {\bf x}_k^\top {\bf w}_0
   = \frac{1}{K} {\bf w}_0^\top \left(\sum_{k=0}^{K-1} {\bf x}_k {\bf x}_k^\top \right) {\bf w}_0 \\
  &= \frac{1}{K} {\bf w}_0^\top {\bf X}^\top {\bf X} {\bf w}_0
\end{align}
The first principal component ${\bf w}_0$ is the maximizer of the variance; thus, it can be computed as
$$
{\bf w}_0 = \underset{\Vert {\bf w} \Vert = 1}{\operatorname{\arg\,max}} \left\{ {\bf w}^\top {\bf X}^\top {\bf X} {\bf w} \right\}
$$
Since ${\bf X}^\top {\bf X}$ is necessarily a positive semidefinite matrix, the maximum is equal to the largest eigenvalue of the matrix, which occurs when ${\bf w}_0$ is the corresponding eigenvector.
2.1.2. Computing $t_{k0}$
Once we have computed the first eigenvector ${\bf w}_0$, we can compute the first component of each sample,
$$
t_{k0} = {\bf w}_0^\top \mathbf{x}_k
$$
Also, we can compute the projection of each sample along the first principal direction as
$$
t_{k0} {\bf w}_0
$$
We can illustrate this with the example data, applying PCA with only one component
End of explanation
"""

pca = PCA(n_components=1)
pca.fit(X)
T = pca.transform(X)
print("original shape:   ", X.shape)
print("transformed shape:", T.shape)
"""
Explanation: and projecting the data over the first principal direction:
End of explanation
"""

X_new = pca.inverse_transform(T)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
"""
Explanation: 2.2. Computing further components
The error, i.e.
the difference between any sample and its projection, is given by
\begin{align}
\hat{\bf x}_{k0} &= {\bf x}_k - t_{k0} {\bf w}_0 = {\bf x}_k - {\bf w}_0 {\bf w}_0^\top \mathbf{x}_k \\
 &= ({\bf I} - {\bf w}_0{\bf w}_0^\top ) {\bf x}_k
\end{align}
If we arrange all error vectors, by rows, in a data matrix, we get
$$
\hat{\bf X}_{0} = {\bf X}({\bf I} - {\bf w}_0 {\bf w}_0^\top)
$$
The second principal component can be computed by repeating the analysis in section 2.1 over the error matrix $\hat{\bf X}_{0}$. Thus, it is given by
$$
{\bf w}_1 = \underset{\Vert {\bf w} \Vert = 1}{\operatorname{\arg\,max}} \left\{ {\bf w}^\top \hat{\bf X}_0^\top \hat{\bf X}_0 {\bf w} \right\}
$$
It turns out that this gives the eigenvector of ${\bf X}^\top {\bf X}$ with the second largest eigenvalue. Repeating this process iteratively (by subtracting from the data all components in the previously computed principal directions) we can compute the third, fourth and successive principal directions.
2.3. Summary of computations
Summarizing, we can conclude that the $l$ principal components of the data can be computed as follows:
* Compute the $l$ unitary eigenvectors ${\bf w}_0, {\bf w}_1, \ldots, {\bf w}_{l-1}$ of matrix ${\bf X}^\top{\bf X}$ with the $l$ largest eigenvalues.
* Arrange the eigenvectors columnwise into an $m \times l$ weight matrix ${\bf W} = ({\bf w}_0 | {\bf w}_1 | \ldots | {\bf w}_{l-1})$
* Compute the principal components for all samples in data matrix ${\bf X}$ as
$$
{\bf T} = {\bf X}{\bf W}
$$
The computation of the eigenvectors of ${\bf X}^\top{\bf X}$ can be problematic, especially if the data dimension is very high. Fortunately, there exist efficient algorithms to compute the eigenvectors without computing ${\bf X}^\top{\bf X}$, by means of the singular value decomposition of matrix ${\bf X}$. This is the method used by the PCA implementation in the sklearn library.
2.
PCA as dimensionality reduction
After a PCA transformation, we may find that the variance of the data along some of the principal directions is very small. Thus, we can simply remove those directions, and represent the data using only the components with the highest variance. In the above 2-dimensional example, we selected the principal direction only, and all the data points became projected onto a single line.
The key idea in the use of PCA for dimensionality reduction is that, if the removed dimensions had a very low variance, we can expect a small information loss for a prediction task. Thus, we can try to design our predictor with the selected features, in the hope of preserving a good prediction performance.
3. PCA for visualization: Hand-written digits
In the illustrative example we used PCA to project 2-dimensional data into one dimension, but the same analysis can be applied to project $N$-dimensional data to $r<N$ dimensions. An interesting application of this is the projection to 2 or 3 dimensions, which can be visualized. We will illustrate this using the digits dataset:
End of explanation
"""

from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
"""
Explanation:
Thus, each image can be converted into a 64-dimensional vector, and then projected over into two dimensions: End of explanation """ plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('rainbow', 10)) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar(); """ Explanation: Every image has been tranformed into a 2 dimensional vector, and we can represent them into a scatter plot: End of explanation """ def plot_pca_components(x, coefficients=None, mean=0, components=None, imshape=(8, 8), n_components=8, fontsize=12, show_mean=True): if coefficients is None: coefficients = x if components is None: components = np.eye(len(coefficients), len(x)) mean = np.zeros_like(x) + mean fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2)) g = plt.GridSpec(2, 4 + bool(show_mean) + n_components, hspace=0.3) def show(i, j, x, title=None): ax = fig.add_subplot(g[i, j], xticks=[], yticks=[]) ax.imshow(x.reshape(imshape), interpolation='nearest') if title: ax.set_title(title, fontsize=fontsize) show(slice(2), slice(2), x, "True") approx = mean.copy() counter = 2 if show_mean: show(0, 2, np.zeros_like(x) + mean, r'$\mu$') show(1, 2, approx, r'$1 \cdot \mu$') counter += 1 for i in range(n_components): approx = approx + coefficients[i] * components[i] show(0, i + counter, components[i], f'$c_{i}$') show(1, i + counter, approx, f"${coefficients[i]:.2f} \cdot c_{i}$") #r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i)) if show_mean or i > 0: plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom', transform=plt.gca().transAxes, fontsize=fontsize) show(slice(2), slice(-2, None), approx, "Approx") return fig """ Explanation: Note that we have just transformed a collection of digital images into a cloud of points, using a different color to represent the points corresponding to the same digit. 
Note that colors from the same digit tend to be grouped in the same cluster, which suggests that these two components may contain useful information for discriminating between digits. The clusters show some overlap, so maybe using more components could help to discriminate better.
The example shows that, even though a 2-dimensional projection may lose relevant information for a prediction task, the visualization of these projections may provide the data analyst with some insights into the prediction problem to solve.
3.1. Interpreting principal components
Note that an important step in the application of PCA to digital images is the vectorization: each digit image is converted into a 64-dimensional vector:
$$
{\bf x} = (x_0, x_1, x_2, \cdots, x_{63})^\top
$$
where $x_i$ represents the intensity of the $i$-th pixel in the image. We can go back and reconstruct the original image as follows: if $I_i$ is a black image with unit intensity at the $i$-th pixel only, we can reconstruct the original image as
$$
{\rm image}({\bf x}) = \sum_{i=0}^{63} x_i I_i
$$
A crude way to reduce the dimensionality of this data is to remove some of the components in the sum. For instance, we can keep the first eight pixels only. But then we get a poor representation of the original image:
End of explanation
"""

def plot_pca_components(x, coefficients=None, mean=0, components=None,
                        imshape=(8, 8), n_components=8, fontsize=12,
                        show_mean=True):
    if coefficients is None:
        coefficients = x

    if components is None:
        components = np.eye(len(coefficients), len(x))

    mean = np.zeros_like(x) + mean

    fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))
    g = plt.GridSpec(2, 4 + bool(show_mean) + n_components, hspace=0.3)

    def show(i, j, x, title=None):
        ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])
        ax.imshow(x.reshape(imshape), interpolation='nearest')
        if title:
            ax.set_title(title, fontsize=fontsize)

    show(slice(2), slice(2), x, "True")

    approx = mean.copy()

    counter = 2
    if show_mean:
        show(0, 2, np.zeros_like(x) + mean, r'$\mu$')
        show(1, 2, approx, r'$1 \cdot \mu$')
        counter += 1

    for i in range(n_components):
        approx = approx + coefficients[i] * components[i]
        show(0, i + counter, components[i], f'$c_{i}$')
        show(1, i + counter, approx, f"${coefficients[i]:.2f} \cdot c_{i}$")
        #r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i))
        if show_mean or i > 0:
            plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',
                           transform=plt.gca().transAxes, fontsize=fontsize)

    show(slice(2), slice(-2, None), approx, "Approx")
    return fig
"""
Explanation: PCA provides an alternative basis for the image representation.
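As a quick numerical check that the principal components indeed give an exact alternative representation (an illustrative sketch with synthetic data; the variable names are arbitrary), keeping all the components makes the reconstruction exact:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
Z = rng.randn(100, 8)              # synthetic 8-dimensional data

pca = PCA(n_components=8).fit(Z)   # keep all 8 components
T = pca.transform(Z)

# x = mean + sum_i t_i * w_i, written as one matrix product
Z_rec = pca.mean_ + T @ pca.components_
print(np.allclose(Z_rec, Z))                         # True
print(np.allclose(Z_rec, pca.inverse_transform(T)))  # True
```

Dropping trailing rows of pca.components_ (and columns of T) gives the truncated approximations discussed next.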
Using PCA, we can represent each vector as a linear combination of the principal direction vectors ${\bf w}_0, {\bf w}_1, \cdots, {\bf w}_{63}$:
$$
{\bf x} = {\bf m} + \sum_{i=0}^{63} t_i {\bf w}_i
$$
and, thus, we can represent the image as the linear combination of the images associated to each direction vector
$$
{\rm image}({\bf x}) = {\rm image}({\bf m}) + \sum_{i=0}^{63} t_i \cdot {\rm image}({\bf w}_i)
$$
PCA selects the principal directions in such a way that the first components capture most of the variance of the data. Thus, a few components may provide a good approximation to the original image.
The figure shows a reconstruction of a digit using the mean image and the first eight PCA components:
End of explanation
"""

idx = 25    # Select digit from the dataset
pca = PCA(n_components=10)
Xproj = pca.fit_transform(digits.data)

sns.set_style('white')
fig = plot_pca_components(digits.data[idx], Xproj[idx],
                          pca.mean_, pca.components_)
"""
Explanation: 4. Choosing the number of components
The number of components required to approximate the data can be quantified by computing the cumulative explained variance ratio as a function of the number of components:
End of explanation
"""

pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');

print(np.cumsum(pca.explained_variance_ratio_))
"""
Explanation:
End of explanation """ np.random.seed(42) noisy = np.random.normal(digits.data, 4) plot_digits(noisy) """ Explanation: As we have shown before, the majority of the data variance is concentrated in a fraction of the principal components. Now assume that the dataset is affected by AWGN noise: End of explanation """ pca = PCA(0.55).fit(noisy) pca.n_components_ """ Explanation: It is not difficult to show that, in the noise samples are independent for all pixels, the noise variance over all principal directions is the same. Thus, the principal components with higher variance will be less afected by nose. By removing the compoments with lower variance, we will be removing noise, majoritarily. Let's train a PCA on the noisy data, requesting that the projection preserve 55% of the variance: End of explanation """ components = pca.transform(noisy) filtered = pca.inverse_transform(components) plot_digits(filtered) """ Explanation: 15 components contain this amount of variance. The corresponding images are shown below: End of explanation """ from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60) print(faces.target_names) print(faces.images.shape) """ Explanation: This is another reason why PCA works well in some prediction problems: by removing the components with less variance, we can be removing mostly noise, keeping the relevant information for a prediction task in the selected components. 6. Example: Eigenfaces We will see another application of PCA using the Labeled Faces from the dataset taken from Scikit-Learn: End of explanation """ #from sklearn.decomposition import Randomized PCA pca = PCA(150, svd_solver="randomized") pca.fit(faces.data) """ Explanation: We will take a look at the first 150 principal components. Because of the large dimensionality of this dataset (close to 3000), we will select the randomized solver for a fast approximation to the first $N$ principal components. 
End of explanation """ fig, axes = plt.subplots(3, 8, figsize=(9, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone') """ Explanation: Now, let us visualize the images associated to the eigenvectors of the first principal components (the "eigenfaces"). These are the basis images, and all faces can be approximated as linear combinations of them. End of explanation """ plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); """ Explanation: Note that some eigenfaces seem to be associated to the lighting conditions of the image, an other to specific features of the faces (noses, eyes, mouth, etc). The cumulative variance shows that 150 components cope with more than 90 % of the variance: End of explanation """ # Compute the components and projected faces pca = PCA(150, svd_solver="randomized").fit(faces.data) components = pca.transform(faces.data) projected = pca.inverse_transform(components) # Plot the results fig, ax = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('150-dim\nreconstruction'); """ Explanation: We can compare the input images with the images reconstructed from these 150 components: End of explanation """
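Returning to the noise-filtering argument of Section 5, it can be made concrete with a small synthetic sketch (illustrative only; the dimensions and noise level are hypothetical): isotropic Gaussian noise raises the variance of every principal direction by roughly the same amount, so the trailing components become almost pure noise.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Rank-3 signal embedded in 20 dimensions, plus isotropic noise.
signal = rng.randn(1000, 3) @ rng.randn(3, 20)
noise_sigma = 0.5
noisy = signal + rng.normal(0.0, noise_sigma, signal.shape)

var_clean = PCA().fit(signal).explained_variance_
var_noisy = PCA().fit(noisy).explained_variance_

# Beyond the signal rank (3), the clean spectrum is essentially zero,
# while the noisy spectrum sits roughly at noise_sigma**2.
print(var_clean[3:5])   # near zero
print(var_noisy[3:5])   # near noise_sigma**2
```

Truncating the PCA at the "elbow" of the noisy spectrum therefore discards directions that carry almost nothing but noise, which is exactly what the digit-denoising example exploits.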
riddhishb/ipython-notebooks
Kalman Filter/Kalman-first.ipynb
gpl-3.0
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import random
"""
Explanation: This is a Jupyter notebook. Lectures about Python, useful both for beginners and experts, can be found at http://scipy-lectures.github.io.
Open the notebook by (1) copying this file into a directory, (2) in that directory typing jupyter-notebook and (3) selecting the notebook.
Written By: Riddhish Bhalodia
In this exercise, we will learn about Kalman filtering, code it up, and look at a few of its applications.
Kalman Filtering
Motivation
Let me start by saying that, before Rudolf Kalman (co-inventor of Kalman filtering), problems were divided into two distinct classes: Control Problems (what value of acceleration should be provided to the car so that it climbs a certain incline with constant speed) and Filtering Problems (damn this noisy accelerometer, I can't get a clear value even at a fixed point). You might have guessed (if you care to read the brackets :P) that the two problems are not uncorrelated. To the uninitiated, take a scenario where the car has a noisy accelerometer and you want to control its speed on the incline: two problems in one. One way is to solve them independently, but that is too situation dependent, and so there was a need for a dynamic solution that filters while controlling and vice versa, essentially bringing the two separate problems under one roof. This is precisely what the Kalman Filter does!
The Kalman Filter and its non-linear extensions are essential elements in modern control theory. Lots of different applications, ranging from filtering noisy sensor output to autonomous robot navigation, use the Kalman Filter. In this tutorial we will first start off with an application, which we will code (yay!), and then move on to build up the Kalman filtering theory.
Faulty Voltmeter
A classic example to start with, and a very intuitive one. We have to measure a DC voltage from a faulty (noisy) voltmeter; it can't get any simpler than this.
First off, let's import the necessary packages
End of explanation
"""

true_voltage = 1
noise_level_sig = 0.2
iter_max = 50
measurements = []
true_voltage_mat = []

for i in range(iter_max):
    measured = random.gauss(true_voltage, noise_level_sig)
    measurements.append(measured)
    true_voltage_mat.append(true_voltage)
"""
Explanation: Now, for solving any computational problem we need to model the system (in layman's terms, a set of equations that describes the overall situation and how it varies with physical inputs and parameters). The simplest way to model the noisy voltmeter at every measurement instant is through the following equation
\begin{equation}
V_m = V_{m-1} + \omega_m \qquad (1)
\end{equation}
Here, $V_m$ : voltage at the current time instant, $V_{m-1}$ : voltage at the previous time instant, $\omega_m$ : random noise (process noise)
We also have measurements taken at each instant $m$, which are given by $Z_m$ and corrupted by some sensor (measurement) noise $\nu_m$
$$Z_m = V_m + \nu_m \qquad (2)$$
Usually in such cases $\nu_m$ is introduced by faults in the voltmeter (sensor), and hence we usually know its characteristics (least count, precision ... ring any bells?). So here we will model $\nu_m$ as a Gaussian random variable (as is usually done) with zero mean and standard deviation $\sigma_{\nu} = 0.3$ (for simulation's sake). $\omega_m$ is more difficult to predict, since it is introduced by the error due to the non-ideality of our process equations (we know it's a constant DC voltage, but is it really!), but we still assume that we know it. Again, $\omega_m$ is modeled by a Gaussian random variable with zero mean and standard deviation $\sigma_{\omega} = 0.01$.
So, before solving this, let's model the voltmeter. What parameters do we need?
The true constant voltage: let's say variable true_voltage = 1. We also need a noise level (this is the voltmeter's error, so it will feature in the measurement noise), say variable noise_level_sig = 0.2 (the KF works even when the noise estimates are off the mark...). Let's take measurements for 50 time instances, stored in iter_max, and generate the measurement for each instant, which will be just $\sim \mathcal{N}(\mathrm{true\_voltage}, \mathrm{noise\_level\_sig})$ (think about this :))
End of explanation
"""

plt.plot(range(iter_max), true_voltage_mat, 'b', range(iter_max), measurements, 'r')
plt.xlabel('Time Instances')
plt.ylabel('Voltage')
plt.legend(('true voltage', 'measured'), loc=4)
"""
Explanation: Let's plot how the measurements look as compared to the true voltage
End of explanation
"""

# Initialize the parameters
initial_guess = 3
initial_guess_error = 1
sig_nu = 0.3
sig_omega = 0.01
estimate_vector = []
estimate_vector.append(initial_guess)
error_estimate_vector = []
error_estimate_vector.append(initial_guess_error)

# Run the Filter
for i in range(iter_max-1):
    # first the prior estimation step
    volt_prior_est = estimate_vector[i]
    error_prior_est = error_estimate_vector[i] + sig_omega * sig_omega

    # estimate correction
    k = error_prior_est/(error_prior_est + sig_nu * sig_nu)
    volt_corrected_est = volt_prior_est + k * (measurements[i+1] - volt_prior_est)
    error_corrected_est = (1 - k) * error_prior_est
    estimate_vector.append(volt_corrected_est)
    error_estimate_vector.append(error_corrected_est)
"""
Explanation: Now starts the actual thing: we want to filter this :D, using Kalman Filtering. But first we need to derive it, so get ready for a chunk of theory, but please be patient; once we are done with this, the standard Kalman Filter will be a piece of cake :)
Filtering!! Math Math Math.
That being said, to solve this problem iteratively we need two major steps:

1. Predict the voltage at the next instant using the previous estimate (equation (1))
2. Correct that estimate based on the measurement at that instant

So we define two variables: $\hat{V}^-_m$, the prior estimate of the voltage given only the knowledge of the process (equation (1)), and $\hat{V}_m$, the posterior estimate of the voltage at step m given the knowledge of the measurement $Z_m$.
So let's start firing off equations. One comment before this: at every iteration we will also estimate the error in the estimate, along with the estimate itself.
$$e_m = V_m - \hat{V}_m \quad \textrm{and} \quad \sigma^2_m = \mathbb{E}[e_m^2]$$
and
$$e^-_m = V_m - \hat{V}^-_m \quad \textrm{and} \quad \sigma^{2-}_m = \mathbb{E}[(e^-_m)^2]$$
We have to minimize this $\sigma^2_m$. Now, as any sane person (ok, statistician) would do, we model the posterior estimate $\hat{V}_m$ as a linear combination of the prior estimate $\hat{V}^-_m$ and the deviation of the estimate from the measurement (also called the innovation term), given as
$$y_m = Z_m - \hat{V}^-_m \qquad (3)$$
Putting the above ramble into an equation we have
$$\hat{V}_m = \hat{V}^-_m + k_m y_m \qquad (4)$$
Subtracting $V_m$ from both sides we get
$$\hat{V}_m - V_m = \hat{V}^-_m - V_m + k_m(Z_m - \hat{V}^-_m)$$
To compute $k_m$ we take the square and its expectation, and then differentiate the quadratic in $k_m$ to get something like (try this yourself)
$$k_m = \frac{\mathbb{E}[(V_m - \hat{V}^-_m)\,y_m]}{\mathbb{E}[y_m^2]}$$
Expanding the numerator and denominator, and taking into account the independence of the random variables $Z_m$, $V_m$, and $\hat{V}^-_m$ (think about this too), we get
$$k_m = \frac{\sigma^{2-}_m}{\sigma^{2-}_m + \sigma^2_{\nu}} \qquad (5)$$
Along with this we also have, from equation (1),
$$\sigma_m^{2-} = \sigma_{m-1}^2 + \sigma_\omega^2 \qquad (6)$$
Substituting (5) into the quadratic for $\mathbb{E}[e^2_m]$ we get the variance as
$$\sigma^2_m = (1 - k_m)\sigma_m^{2-} \qquad (7)$$
Now we have everything :D, already! It will be clear when you look at the summary below:
* Start with an initial guess $V = V_0$
* Get the prior estimate of the voltage and its error ($\hat{V}^-_m$ and $\sigma_m^{2-}$) from the process equations (1) and (6)
* Using the prior estimates and the measurement at instant m, get the posterior (read: corrected) estimates of the voltage and its error at instant m ($\hat{V}_m$ and $\sigma_m^2$)
* Repeat this for several instants and we will converge to a solution (Yes! there exists a proof of convergence, you can google it up)
So hopefully you have got a hang of how this works. Let's code it up.
End of explanation
"""

plt.figure()
plt.plot(range(iter_max), true_voltage_mat, 'b',
         range(iter_max), measurements, 'r',
         range(iter_max), estimate_vector, 'g')
plt.xlabel('Time Instances')
plt.ylabel('Voltage')
plt.legend(('true voltage', 'measured', 'filtered'), loc=1)
"""
Explanation: Let us plot the results
End of explanation
"""

plt.figure()
plt.plot(range(iter_max), error_estimate_vector)
plt.xlabel('Time Instances')
plt.ylabel('Voltage Error')
"""
Explanation: Did you have your voila moment? :D Let's also look at the error of the estimate and plot it.
End of explanation
"""

class kalmanFilter:
    def __init__(self, X0, P0, A, B, H, Q, R):
        self.A = A  # state transition matrix
        self.B = B  # control matrix
        self.H = H  # observation matrix
        self.Q = Q  # covariance of the process error
        self.R = R  # covariance of the measurement error
        self.current_estimate = X0        # this is the initial guess of the state
        self.current_error_estimate = P0  # initial guess for the state estimate error

    def getEstimate(self):
        # returns the current state estimate
        return self.current_estimate

    def getErrorEstimate(self):
        # returns the current state error estimate
        return self.current_error_estimate

    def iteration(self, U, Z):
        # here is where the updates happen
        # U = control vector
        # Z = measurements vector

        # prior prediction step
        prior_estimate = self.A * self.current_estimate + self.B * U
        prior_error_estimate = (self.A * self.current_error_estimate) * np.transpose(self.A) + self.Q

        # intermediate observation
        y = Z - self.H * prior_estimate
        y_covariance = self.H * prior_error_estimate * np.transpose(self.H) + self.R

        # correction step
        K = prior_error_estimate * np.transpose(self.H) * np.linalg.inv(y_covariance)
        self.current_estimate = prior_estimate + K * y
        # We need the size of the matrix so we can make an identity matrix.
        size = self.current_error_estimate.shape[0]
        # eye(n) = nxn identity matrix.
        self.current_error_estimate = (np.eye(size) - K * self.H) * prior_error_estimate
"""
Explanation: Kalman Filter
You might be wondering where the control is in all this. It is easily introduced in the actual formulation of the Kalman filter, which we will look at now. In the actual Kalman filter we deal with a multi-variable setting, unlike that of the voltmeter example.
So let me describe the inputs, the outputs and the parameters.
Inputs
* $\textbf{Z}_m$ : the measurement vector at each instant m
* $\textbf{U}_m$ : this is new! It denotes the controls provided to the system at instant m (just like force being the control for the velocity of a car)
Outputs
* $\textbf{X}_m$ : the newest estimate of the current state (the state can be thought of as a parameter vector)
* $P_m$ : the newest estimate of the average error of the state
Parameters
* A : state transition matrix, basically the constant matrix multiplied with the previous estimate in the process equation
* B : control matrix, the one multiplied with the control vector in the process equation
* H : observation matrix, the proportionality factor relating the state to the measurements
* Q : covariance matrix of the process error (again assumed to be known)
* R : covariance matrix of the measurement error (again known)
Now we are ready to write the basic equations for the KF; don't worry, much of the above will become clear as you look at the equations. Let's start with the two basic equations. First is the process equation
$$\textbf{X}_m = A\textbf{X}_{m-1} + B\textbf{U}_{m-1} + \pmb{\omega}_m \qquad (8)$$
and then we have the measurement equation
$$\textbf{Z}_m = H\textbf{X}_m + \pmb{\nu}_m \qquad (9)$$
Here we model the two error terms $\pmb{\omega}_m$ and $\pmb{\nu}_m$ as multivariate Gaussian distributions with zero mean and covariance matrices Q and R respectively, i.e. $\pmb{\omega}_m \sim \mathcal{N}(0,Q)$ and $\pmb{\nu}_m \sim \mathcal{N}(0,R)$.
Following the exact same philosophy that we followed for the derivation in the voltmeter example, we get the update equations for the general Kalman filter.
I will just list them down; if people are interested they can look up the references given below.
$$ P_m^- = AP_{m-1}A^T + Q \qquad (10) $$
$$ K_m = P_m^-H^T(HP_m^-H^T + R)^{-1} \qquad (11) $$
$$\pmb{y}_m = \pmb{Z}_m - H\hat{\pmb{X}}_m^- \qquad (12)$$
$$ \hat{\pmb{X}}_m = \hat{\pmb{X}}_m^- + K_m\pmb{y}_m \qquad (13)$$
$$ P_m = (I - K_mH)P_m^- \qquad (14)$$
Again, summarizing:

* Start with an initial guess $\pmb{X} = \pmb{X}_0$
* Get the prior estimate of the state and its error ($\hat{\pmb{X}}^-_m$ and $P_m^{-}$) from the process equations (8) and (10)
* Using the prior estimates and the measurement at instant m, get the posterior (read: corrected) estimates of the state and its error at instant m ($\hat{\pmb{X}}_m$ and $P_m$)
* Repeat this for several instants and we will converge to a solution (Yes! there exists a proof of convergence, you can google it up)

Now enough of this rambling. I do hope you get this, but we are going to make a Kalman filter class and then see how it fits with our voltmeter example. So let's first create the class.
End of explanation
"""

A = np.matrix([1])
B = np.matrix([0])
H = np.matrix([1])
Q = np.matrix([0.0001])  # the sigmas get squared
R = np.matrix([0.09])
X0 = np.matrix([3])
P0 = np.matrix([1])

KF = kalmanFilter(X0, P0, A, B, H, Q, R)

estimate_vector_new = []
estimate_vector_new.append(initial_guess)
error_estimate_vector_new = []
error_estimate_vector_new.append(initial_guess_error)

# Run the filter
for i in range(iter_max-1):
    U = np.matrix([0])  # there is no control here
    Z = np.matrix([measurements[i+1]])
    estimate_vector_new.append(KF.getEstimate()[0,0])
    error_estimate_vector_new.append(KF.getErrorEstimate()[0,0])
    KF.iteration(U,Z)
"""
Explanation: Now that we have this nice class set up, let's test its correctness by applying it to the voltmeter problem. First we set the parameters.
End of explanation
"""

plt.figure()
plt.plot(range(iter_max), true_voltage_mat, 'b',
         range(iter_max), measurements, 'r',
         range(iter_max), estimate_vector_new, 'g')
plt.xlabel('Time Instances')
plt.ylabel('Voltage')
plt.legend(('true voltage', 'measured', 'filtered'), loc=1)
"""
Explanation: Now let's plot again to see whether we are good to go or not.
End of explanation
"""

# Physics
# 1) sin(45)*100 = 70.710 and cos(45)*100 = 70.710
# vf = vo + at
# 0 = 70.710 + (-9.81)t
# t = 70.710/9.81 = 7.208 seconds for half
# 14.416 seconds for the full journey
# distance = 70.710 m/s * 14.416 sec = 1019.36796 m
"""
Explanation: Well, it should exactly match the previous plot :P duh, big deal. But now that we have this nice class we can start dealing with a cooler application. After much search I have come up with this application to end this hopefully interesting notebook :D
A simple, though maybe not a real wow-factor, example: we have a ball thrown as a projectile, and we can measure its (x,y) position with a camera system and also its velocity (vx,vy) with sensors on the ball (as used in cricket for the LBW system, without the velocity part). We know these cameras are noisy, so in the end we need a filtered estimate of the state of the ball (here the state is the vector (x, y, vx, vy)). So let's get on with the system modeling first.
Remember projectiles! Remember the JEE physics which you all did. All right, I am not going to explain the kinematics equations; just have a look and you will understand. We project at an initial velocity u and angle $\theta$ w.r.t. the horizontal, and we divide the measurements into time intervals of $\Delta t$.
$$Vx_{t} = Vx_{t-1}$$
$$Vy_{t} = Vy_{t-1} - g\Delta t$$
$$x_{t} = x_{t-1} + Vx_{t-1}\Delta t$$
$$y_{t} = y_{t-1} + Vy_{t-1}\Delta t - 0.5g\Delta t^2$$
The state at instant t is given by the vector $\pmb{X}_t = (x_t, Vx_t, y_t, Vy_t)$, and the control vector contributes the additional term $\pmb{u}_t = (0, 0, -0.5g\Delta t^2, -g\Delta t)$. Think about this! Now we start defining the matrices.
\begin{equation}
\pmb{X}_t = A\pmb{X}_{t-1} + B\pmb{u}_t, \qquad
A=\left(\begin{array}{cccc} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 1 \end{array}\right), \qquad
B = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)
\end{equation}
As our measurements are directly the state, there is no proportionality factor and hence the H matrix is the identity (H = $I_4$). We also assume that our process has error covariance $Q = 0.0001I_4$ and the measurements have error covariance $R = 0.3I_4$ (the errors need not all be equal; this is taken here for convenience :D). We also need values for $\theta$ and $u$; let's say $\theta = 45°$ and $u = 100\,m/s$.
Now we have all the matrices, so let's start solving this using our kalmanFilter class.
End of explanation
"""

del_t = 0.1
max_iter = 145
added_noise = 25
init_vel = 100
theta = np.pi/4

# now we define the measurement matrix
measurements = np.zeros((4,max_iter))
true_value = np.zeros((4,max_iter))

ux0 = init_vel * np.cos(theta)
uy0 = init_vel * np.sin(theta)

for i in range(max_iter):
    # we generate this from the projectile equations and add noise to it
    t = i * del_t
    true_value[0,i] = ux0 * t
    true_value[1,i] = ux0
    true_value[2,i] = uy0 * t - 0.5 * 9.8 * t * t
    true_value[3,i] = uy0 - 9.8 * t
    measurements[0,i] = random.gauss(true_value[0,i], added_noise)
    measurements[1,i] = random.gauss(true_value[1,i], added_noise)
    measurements[2,i] = random.gauss(true_value[2,i], added_noise)
    measurements[3,i] = random.gauss(true_value[3,i], added_noise)
"""
Explanation: Due to the above calculations we have to be careful choosing $\Delta t$ and the number of iterations. $\Delta t = 0.1$ and max_iter = 145 make sense (think). So now we create our simulated data.
End of explanation
"""

plt.figure()
plt.plot(true_value[0,:], true_value[2,:], 'b', measurements[0,:], measurements[2,:], 'r')
plt.xlabel('X Position')
plt.ylabel('Y Position')
plt.legend(('true trajectory', 'measured trajectory'), loc=2)
"""
Explanation: Let's plot the position data and the measurements
End of explanation
"""

A = np.matrix([[1,del_t,0,0],[0,1,0,0],[0,0,1,del_t],[0,0,0,1]])
B = np.matrix([[0,0,0,0],[0,0,0,0],[0,0,1,0],[0,0,0,1]])
u = np.matrix([[0],[0],[-0.5*9.8*del_t*del_t],[-9.8*del_t]])  # the control vector is constant, it does not depend on t
H = np.eye(4)
Q = 0.0001 * np.eye(4)
R = 0.3 * np.eye(4)
X0 = np.matrix([[0],[ux0],[500],[uy0]])  # set a little different from the original initial state just to show that the KF will still work
P0 = np.eye(4)  # set arbitrarily as identity

estimate_matrix = np.zeros((4,max_iter))
estimate_matrix[:,0] = np.asarray(X0)[:,0]
estimate_error = P0

KF = kalmanFilter(X0, P0, A, B, H, Q, R)

for i in range(max_iter-1):
    Z = np.matrix([[measurements[0,i+1]],
                   [measurements[1,i+1]],
                   [measurements[2,i+1]],
                   [measurements[3,i+1]]])
    estimate_matrix[:,i+1] = np.asarray(KF.getEstimate())[:,0]
    KF.iteration(u,Z)

plt.figure()
plt.plot(true_value[0,:], true_value[2,:], 'b',
         measurements[0,:], measurements[2,:], 'r',
         estimate_matrix[0,:], estimate_matrix[2,:], 'g')
plt.xlabel('X Position')
plt.ylabel('Y Position')
plt.legend(('true trajectory', 'measured trajectory', 'filtered trajectory'), loc=1)
"""
Explanation: So this is how the cricket ball's trajectory is measured; now we go to the filtering part.
End of explanation
"""
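To close the loop on the derivation, the scalar voltmeter filter from the first half of this notebook fits in a few self-contained lines. This is a sketch: the function name, seed and tolerances are our own, not from the original notebook.

```python
import random

def scalar_kalman(zs, x0, p0, sig_w, sig_v):
    # 1-D Kalman filter for a constant signal; implements equations (4)-(7)
    estimates, errors = [x0], [p0]
    for z in zs[1:]:
        p_prior = errors[-1] + sig_w ** 2     # equation (6)
        k = p_prior / (p_prior + sig_v ** 2)  # equation (5)
        estimates.append(estimates[-1] + k * (z - estimates[-1]))  # equation (4)
        errors.append((1 - k) * p_prior)      # equation (7)
    return estimates, errors

random.seed(0)
zs = [random.gauss(1.0, 0.2) for _ in range(50)]
est, err = scalar_kalman(zs, x0=3.0, p0=1.0, sig_w=0.01, sig_v=0.3)
print(round(est[-1], 2))  # settles near the true voltage of 1.0
```

Despite the deliberately bad initial guess of 3.0, the estimate is pulled toward the true voltage within a handful of measurements, and the error estimate shrinks monotonically, exactly as the plots above show.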
arne-cl/alt-mulig
python/maz176-statistics.ipynb
gpl-3.0
%matplotlib inline import os from collections import Counter from operator import itemgetter import pandas as pd from networkx import Graph from networkx.algorithms.components.connected import connected_components from discoursegraphs import select_edges_by, get_pointing_chains from discoursegraphs.readwrite import ConanoDocumentGraph, MMAXDocumentGraph, RSTGraph, TigerDocumentGraph MAZ_ROOTDIR = os.path.expanduser('~/repos/pcc-annis-merged/maz176/') """ Explanation: MAZ176 statistics Author: Arne Neumann Date: 2014-10-13 End of explanation """ CONANO_DIR = os.path.join(MAZ_ROOTDIR, 'connectors') conano_files = !ls $CONANO_DIR/*.xml """ Explanation: Number of Connectors End of explanation """ all_connectors = Counter() for conano_file in conano_files: cdg = ConanoDocumentGraph(conano_file) for token_id in cdg.tokens: if 'conano:connective' in cdg.node[token_id]['layers']: all_connectors[cdg.node[token_id]['relation']] += 1 connector_counts = sorted(all_connectors.iteritems(), key=itemgetter(1), reverse=True) connector_df = pd.DataFrame(connector_counts, columns=['connectors', 'counts']) connector_df connector_df.plot(kind='bar', x='connectors', y='counts') """ Explanation: Number of Connectors by Connector Type End of explanation """ connector_tokens = Counter() for conano_file in conano_files: cdg = ConanoDocumentGraph(conano_file) for token_id in cdg.tokens: if 'conano:connective' in cdg.node[token_id]['layers']: # count tokens normalized to lower case connector_tokens[cdg.get_token(token_id).lower()] += 1 connector_token_counts = sorted(connector_tokens.iteritems(), key=itemgetter(1), reverse=True) connector_token_df = pd.DataFrame(connector_token_counts, columns=['connectors', 'counts']) connector_token_df import matplotlib.pylab as pylab pylab.rcParams['figure.figsize'] = 15, 5 connector_token_df[connector_token_df.counts > 4].plot(kind='bar', x='connectors', title='Connector tokens occurring five times or more') """ Explanation: Number of Connectors by 
Connector Token End of explanation """ TIGER_DIR = os.path.join(MAZ_ROOTDIR, 'syntax') connector_pos = Counter() for conano_file in conano_files: cdg = ConanoDocumentGraph(conano_file) maz_id = os.path.basename(conano_file).split('.')[0] tdg = TigerDocumentGraph(os.path.join(TIGER_DIR, maz_id+'.xml')) cdg.merge_graphs(tdg) for token_id in cdg.tokens: if 'conano:connective' in cdg.node[token_id]['layers']: connector_pos[cdg.node[token_id]['tiger:pos']] += 1 connector_pos_counts = sorted(connector_pos.iteritems(), key=itemgetter(1), reverse=True) connector_pos_df = pd.DataFrame(connector_pos_counts, columns=['connector-pos', 'counts']) connector_pos_df pylab.rcParams['figure.figsize'] = 10, 5 connector_pos_df.plot(kind='bar', x='connector-pos', y='counts') """ Explanation: Number of Connectors by Connector POS End of explanation """ sum(connector_df['counts']) """ Explanation: Total number of connectives End of explanation """ RST_DIR = os.path.join(MAZ_ROOTDIR, 'rst') rst_files = !ls $RST_DIR/*.rs3 all_rst_relations = Counter() for rst_file in rst_files: rdg = RSTGraph(rst_file) for source, target, edge_attribs in select_edges_by(rdg, layer='rst:relation', data=True): all_rst_relations[edge_attribs['rst:relname']] += 1 rst_relations_count = sorted(all_rst_relations.iteritems(), key=itemgetter(1), reverse=True) rst_relations_df = pd.DataFrame(rst_relations_count, columns=['RST relations', 'counts']) rst_relations_df """ Explanation: Number of RST relations by type End of explanation """ pylab.rcParams['figure.figsize'] = 15, 5 rst_relations_df.plot(kind='bar', x='RST relations', y='counts') """ Explanation: RST relations by type (incl. generic 'span' relation) End of explanation """ rst_relations_df[1:].plot(kind='bar', x='RST relations', y='counts') """ Explanation: RST relations by type (excl. generic 'span' relation) End of explanation """ sum(rst_relations_df['counts']) """ Explanation: Total number of RST relations (incl. 
generic 'span' relation) End of explanation """ sum(rst_relations_df['counts'][1:]) """ Explanation: Total number of RST relations (excl. generic 'span' relation) End of explanation """ COREF_DIR = os.path.join(MAZ_ROOTDIR, 'coreference') coref_files = !ls $COREF_DIR/*.mmax """ Explanation: Coreference counts End of explanation """ binary_coref_relations = 0 binary_coref_relation_types = Counter() for coref_file in coref_files: mdg = MMAXDocumentGraph(coref_file) for source, target, edge_attribs in select_edges_by(mdg, edge_type='points_to', data=True): binary_coref_relations += 1 binary_coref_relation_types[edge_attribs['label']] += 1 """ Explanation: Number of binary coreference relations End of explanation """ binary_coref_relations """ Explanation: Total number of binary coreference relations End of explanation """ sorted(binary_coref_relation_types.iteritems(), key=itemgetter(1), reverse=True) """ Explanation: Number of binary coreference relations by subtype End of explanation """ coref_chains = 0 for coref_file in coref_files: mdg = MMAXDocumentGraph(coref_file) coref_chains += len(get_pointing_chains(mdg)) print coref_chains """ Explanation: Number of coreference chains Please note that the number of coreference chains is somewhat skewed. MMAX allows markables to point to more than one markable/target/antecedent. This features was used when MAZ176 was annotated. As an example, consider a coreference chain in which A points to B and B points to both C and D (i.e. A -&gt; B -&gt; {C, D}). The algorithm used here will interpret this as two separate coreference chains (A-B-C and A-B-D). cf. 
https://github.com/arne-cl/discoursegraphs/issues/40 End of explanation """ from networkx import Graph from networkx.algorithms.components.connected import connected_components coref_connected_components = 0 for coref_file in coref_files: coref_digraph = Graph() mdg = MMAXDocumentGraph(coref_file) for source, target, edge_attribs in select_edges_by(mdg, edge_type='points_to', data=True): coref_digraph.add_edge(source, target, edge_attribs) coref_connected_components += len(list(connected_components(coref_digraph))) print coref_connected_components """ Explanation: Number of connected coreference components To avoid the abovementioned problem, we can calculate the number of connected components from a graph constructed using all binary coreference relations. In this setting, A -&gt; B -&gt; {C, D} will be counted as one connected component. End of explanation """ %load_ext version_information %version_information networkx, pandas """ Explanation: Reproducability information End of explanation """
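The chain-versus-component distinction described above can be checked on a toy graph without the corpus. The sketch below uses a minimal pure-Python union-find; the helper name is invented here and merely stands in for networkx's connected-components machinery.

```python
def count_components(edges):
    """Count connected components of an undirected edge list via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(n) for n in parent})

# A -> B -> {C, D}: counted as two chains (A-B-C, A-B-D) but only one component
print(count_components([("A", "B"), ("B", "C"), ("B", "D")]))  # -> 1
```

This makes the skew concrete: the branching markable contributes two pointing chains but a single connected component.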
empet/Math
Animating-the-Dragon-curve-construction.ipynb
bsd-3-clause
import numpy as np
from numpy import pi
import plotly.graph_objects as go

def rot_matrix(alpha):
    # Define the matrix of rotation about the origin by an angle of alpha radians:
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha), np.cos(alpha)]])

def rotate_dragon(x, y, alpha=pi/2):
    # x, y: lists or 1D arrays containing the (x, y)-coordinates of the turn points
    # on the dragon curve constructed in a single step
    X, Y = rot_matrix(alpha).dot(np.stack((x, y)))  # the coordinates of the turn points on the rotated curve
    return X, Y

# the initial-step dragon curve is represented by a vertical line of length L
L = 0.12
X = np.array([0, 0])
Y = np.array([-L, 0])

fig = go.Figure(data=[go.Scatter(x=X, y=Y,
                                 mode='lines',
                                 line_color='#0000ee',
                                 line_width=1.5,
                                 showlegend=False)
                     ])
title = "Animated construction of the Dragon curve,<br>through successive rotations"
fig.update_layout(title_text=title, title_x=0.5,
                  font=dict(family='Balto', size=16),
                  width=700, height=700,
                  xaxis_visible=False,
                  yaxis_visible=False,
                  xaxis_range=[-11, 6],
                  yaxis_range=[-11, 3],
                  #margin_l=40,
                 );
"""
Explanation: Animated construction of the Dragon curve
The best-known method to draw a Dragon curve is by using turtle graphics. Here we implement a method visually illustrated in a video posted by Numberphile: https://www.youtube.com/watch?v=NajQEiKFom4. We start with a vertical segment, and the successive rotations are counterclockwise.
End of explanation
"""

alpha = pi/10  # the rotation of 90 degrees is performed as 5 successive rotations of 18 degrees = pi/10 radians
n_rot90 = 13   # we have 13 steps
frames = []
for k in range(n_rot90):
    # record the last point on the dragon defined in the previous step
    x0, y0 = X[-1], Y[-1]
    x = X-x0  # translation, so that (x0, y0) becomes the center of rotation
    y = Y-y0
    for j in range(5):
        X, Y = rotate_dragon(x, y, alpha=(j+1)*alpha)
        X = np.concatenate((x[:-1], X[::-1]), axis=None)  # concatenate to the (k-1)-th step dragon its rotated version
        Y = np.concatenate((y[:-1], Y[::-1]), axis=None)
        X = X+x0
        Y = Y+y0
        frames.append(go.Frame(data=[go.Scatter(x=X, y=Y)], traces=[0]))
"""
Explanation: Frame 0 displays the initial vertical segment, i.e. the dragon curve defined in step 0 of the iterative construction.
End of explanation
"""

buttonPlay = {'args': [None, {'frame': {'duration': 100, 'redraw': False},
                              'transition': {'duration': 0},
                              'fromcurrent': True,
                              'mode': 'immediate'}],
              'label': 'Play',
              'method': 'animate'}
fig.update_layout(updatemenus=[{'buttons': [buttonPlay],
                                'showactive': False,
                                'type': 'buttons',
                                'x': 1,
                                'xanchor': 'left',
                                'y': 1,
                                'yanchor': 'top'
                               }])
fig.frames = frames

import chart_studio.plotly as py
py.iplot(fig, filename='rot-dragon1')
"""
Explanation: Define a button that triggers the animation:
End of explanation
"""
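As a quick sanity check of the construction, independent of Plotly, the sketch below (with invented helper names) verifies that one full 90-degree unfolding step turns n turn points into 2n-1 points, i.e. doubles the number of segments, while preserving every segment length.

```python
import numpy as np

def rot_matrix(alpha):
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha), np.cos(alpha)]])

def dragon_step(x, y):
    """One 90-degree unfolding: rotate the curve about its last point and append."""
    x0, y0 = x[-1], y[-1]
    xr, yr = rot_matrix(np.pi / 2).dot(np.stack((x - x0, y - y0)))
    x_new = np.concatenate((x[:-1] - x0, xr[::-1])) + x0
    y_new = np.concatenate((y[:-1] - y0, yr[::-1])) + y0
    return x_new, y_new

# unit-length vertical start segment, as in the notebook (but with L = 1)
x, y = np.array([0.0, 0.0]), np.array([-1.0, 0.0])
for _ in range(5):
    x, y = dragon_step(x, y)
print(len(x))  # 2 points -> 3 -> 5 -> 9 -> 17 -> 33
```

Because the rotation is about the last turn point, the rotated image of that point coincides with the original, which is why one duplicate point is dropped at the junction via `x[:-1]`.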
wrgeorge1983/pcap-plotting
pcap.ipynb
mit
# This whole business is totally unnecessary if your path is set up right. But if it's not,
# this is probably easier than actually fixing it.
%load_ext autoreload

import os
wireshark_path = "C:\\Program Files\\Wireshark\\" + os.pathsep
# or, if it's under 'program files(x86)'...
# wireshark_path = "C:\\Program Files (x86)\\Wireshark\\" + os.pathsep
os.environ['path'] += wireshark_path

from utilities import *
from pprint import *
%autoreload

pcap_folder = 'C:\\Users\\william.george\\Desktop\\SUA-Test-Data\\'
os.chdir(pcap_folder)
os.getcwd()
!dir

pcap_file = pcap_folder + 'test_2_merge.pcap'
output_file = pcap_folder + 'frame.len'
!tshark -n -r $pcap_file -T fields -Eheader=y -e frame.number -e frame.len > $output_file
"""
Explanation: Analysing network traffic with Pandas
Original: Dirk Loss, http://dirk-loss.de, @dloss. v1.1, 2013-06-02
Modified for Python3 on Win32 & further modified by: William George 2015-04-20
End of explanation
"""

import pandas as pd
"""
Explanation: Let's have a look at the file:
End of explanation
"""

%pylab inline
"""
Explanation: Plotting
For a better overview, we plot the frame length over time. We initialise IPython to show inline graphics:
End of explanation
"""

figsize(17,10)
"""
Explanation: Set a figure size in inches:
End of explanation
"""

import subprocess
import datetime
import pandas as pd

def read_pcap(filename, fields=[], display_filter=[], timeseries=False, strict=False, outfile=None):
    """ Read PCAP file into Pandas DataFrame object.
    Uses tshark command-line tool from Wireshark.
filename: Name or full path of the PCAP file to read fields: List of fields to include as columns display_filter: Additional filter to restrict frames strict: Only include frames that contain all given fields (Default: false) timeseries: Create DatetimeIndex from frame.time_epoch (Default: false) Syntax for fields and display_filter is specified in Wireshark's Display Filter Reference: http://www.wireshark.org/docs/dfref/ """ if timeseries: fields = ["frame.time_epoch"] + fields fieldspec = " ".join("-e %s" % f for f in fields) display_filters = fields if strict else [''] if display_filter: display_filters += display_filter display_filters = list(filter(None, display_filters)) # display_filter is concatenated with ' and '. If one or more filters # need to be 'ORed' togeather, then supply them as a single string # e.g. ['frame.len > 60', '(ip.addr == 10.10.10.10 or ip.addr == 20.20.20.20)'] # gives '-2 -R "frame.len > 60 and (ip.addr == 10.10.10.10 or ip.addr == 20.20.20.20)"' filterspec = '-2 -R "%s"' % " and ".join(f for f in display_filters) options = "-r %s -n -T fields -Eheader=y" % filename cmd = "tshark %s %s %s" % (options, filterspec, fieldspec) print('filterspec:{0}\n'.format(filterspec), 'display_filters:{0}\n'.format(display_filters), 'options:{0}\n'.format(options), 'cmd:{0}\n'.format(cmd) ) proc_arguments = {'shell': True} if outfile is not None: with open(outfile, 'w') as f: proc_arguments['stdout'] = f proc = subprocess.Popen(cmd, **proc_arguments) return outfile else: proc_arguments['stdout'] = subprocess.PIPE proc = subprocess.Popen(cmd, **proc_arguments) if timeseries: df = pd.read_table(proc.stdout, index_col = "frame.time_epoch", parse_dates=True, date_parser=datetime.datetime.fromtimestamp) else: df = pd.read_table(proc.stdout, parse_dates='frame.time_epoch', date_parser=datetime.datetime.fromtimestamp) return df """ Explanation: Pandas automatically uses Matplotlib for plotting. 
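The command string assembled inside the convenience function can be sanity-checked without tshark installed; it is the same string manipulation in isolation (the capture file name below is just a placeholder).

```python
fields = ["frame.number", "frame.len"]
display_filters = fields + ["frame.len > 60"]  # strict mode plus one extra filter

fieldspec = " ".join("-e %s" % f for f in fields)
filterspec = '-2 -R "%s"' % " and ".join(display_filters)
options = "-r %s -n -T fields -Eheader=y" % "capture.pcap"
cmd = "tshark %s %s %s" % (options, filterspec, fieldspec)
print(cmd)
```

The `-2` flag requests tshark's two-pass analysis, which is required for `-R` read filters in current Wireshark versions.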
We plot with small dots and an alpha channel of 0.2: So there are always lots of small packets (< 100 bytes) and lots of large packets (> 1400 bytes). Some bursts of packets with other sizes (around 400 bytes, 1000 bytes, etc.) can be clearly seen. A Python function to read PCAP files into Pandas DataFrames Passing all those arguments to tshark is quite cumbersome. Here is a convenience function that reads the given fields into a Pandas DataFrame: End of explanation """ # # original read call # df=read_pcap(pcap_file, fields = ["frame.len", "ip.src", "ip.dst", 'tcp.stream', 'tcp.srcport', 'tcp.dstport'], timeseries=True).dropna() # df df=read_pcap(pcap_file, fields = ["frame.len", "ip.src", "ip.dst", 'tcp.stream', 'tcp.srcport', 'tcp.dstport'], display_filter=['ip', 'tcp'], timeseries=True, outfile=output_file) df = pd.read_table(output_file, names=['time','len','ip.src','ip.dst','stream','tcp.src', 'tcp.dst'], skiprows=1) import dateutil sample_time = 1429133053.239977000 print(pd.to_datetime(sample_time, unit='s')) df.time = pd.to_datetime(df.time, unit='s') df[[True if x not in [0,1,2,3, 145, 141] else False for x in df['stream']]] df2 = df.head(100) df.head(100).to_json(date_unit='us') """ Explanation: We will use this function in my further analysis. Bandwidth By summing up the frame lengths we can calculate the complete (Ethernet) bandwidth used. First use our convenience function to read the PCAP into a DataFrame: End of explanation """ df[df.stream == 1] # THIS WHOLE BLOCK IS COMMENTED OUT BECAUSE I DON'T TRUST IT RIGHT NOW. THIS IS THE OLD WAY. # flows = framelen.groupby(('tcp.stream', 'ip.src')) # keys = sorted(list(flows.groups.keys()), key=lambda x: x[0]) # #list_streams = [] # #for key in keys:( # zip (iter(x),...) 
# def f(x): # print('running one time!') # return pd.Series({'frame.len':x[0],'ip.src':x[1]}) # def extract_flow(flow): # ipdst = flow['ip.dst'][0] # tcpstrm = flow['tcp.stream'][0] # tcpsrc = flow['tcp.srcport'][0] # tcpdst = flow['tcp.dstport'][0] # flow_Bps = flow.resample("S", how="sum") # flow_filter = np.isnan(flow_Bps['tcp.dstport']) == False # flow_Bps.loc[flow_filter, "tcp.stream" : "tcp.dstport"] = (tcpstrm, tcpsrc, tcpdst) # return flow_Bps.loc[flow_filter] # flow_list = [] # for key in keys: # flow_list.append(extract_flow(flows.get_group(key))) # pprint(flow_list[0].head(2)) # #stream_df = pd.DataFrame.from_records(stream_list) # # stream1 = streams.get_group(keys[4]) # # extract_stream(stream1) # # stream1 = streams.get_group(keys[3]) # # ostrm = stream1['tcp.stream'][0] # # tcpsrc = stream1['tcp.srcport'][0] # # tcpdst = stream1['tcp.dstport'][0] # # ipdst = stream1['ip.dst'][0] # # stream_Bps = stream1.resample("S", how="sum") # # stream_filter = np.isnan(stream_Bps['tcp.dstport']) == False # # stream_filter# is np.float64(np.nan)) # # #stream_Bps['tcp.srcport'] = 80 # # stream_Bps.loc[stream_filter, "tcp.stream" :"tcp.dstport"] = (ostrm, tcpsrc, tcpdst) # # stream_Bps.loc[stream_filter] # # # #help(streams) # # # #stream1 bytes_per_second=framelen.resample("S", how="sum") help(framelen.resample) """ Explanation: Then we re-sample the timeseries into buckets of 1 second, summing over the lengths of all frames that were captured in that second: End of explanation """ bytes_per_second.sort('tcp.stream') framelen.sort('tcp.stream', inplace=False).dropna() #bytes_per_second.groupby("tcp.stream")["frame.len"].sum().sort('tcp.len',ascending=False,inplace=False).head(10) #bytes_per_second.groupby('tcp.stream')['frame.len'].sum() plt = (bytes_per_second.groupby('tcp.stream')).plot() ylabel('kbps') xlabel('Time') axhline(linewidth=2, color='r', y=2048) time_zero = bytes_per_second.index[0] annotate("2048 kbps",xy=(time_zero,2048), xycoords='data', 
         xytext=(-30,30), textcoords='offset points', size=10,
         bbox=dict(boxstyle="round", fc="0.8"),
         arrowprops=dict(arrowstyle="simple"))
#plt.set_xlim(-1,100)
"""
Explanation: Here are the first 5 rows. We get NaN for those timestamps where no frames were captured:
End of explanation
"""

filters = []
fields=["tcp.stream", "ip.src", "ip.dst", "tcp.seq", "tcp.ack", "tcp.window_size", "tcp.len"]
#filters=["ip.addr eq 161.217.20.5"]
ts=read_pcap(pcap_file, fields, display_filter = filters, timeseries=True, strict=True)
ts
"""
Explanation: TCP Time-Sequence Graph
Let's try to replicate the TCP Time-Sequence Graph that is known from Wireshark (Statistics > TCP Stream Analysis > Time-Sequence Graph (Stevens)).
End of explanation
"""

stream=ts[ts["tcp.stream"] == 0]
stream
"""
Explanation: Now we have to select a TCP stream to analyse. As an example, we just pick stream number 0:
End of explanation
"""

print(stream.to_string())
"""
Explanation: Pandas only prints the overview because the table is too wide. So we force a full display:
End of explanation
"""

stream["type"] = stream.apply(lambda x: "client" if x["ip.src"] == stream.irow(0)["ip.src"] else "server", axis=1)
print(stream.to_string())

client_stream=stream[stream.type == "client"]
client_stream["tcp.seq"].plot(style="r-o")
"""
Explanation: Add a column that shows who sent the packet (client or server). The fancy lambda expression distinguishes between the client and the server side of the stream by comparing the source IP address with the source IP address of the first packet in the stream (for TCP streams, that should have been sent by the client).
End of explanation
"""

client_stream.index = arange(len(client_stream))
client_stream["tcp.seq"].plot(style="r-o")
"""
Explanation: Notice that the x-axis shows the real timestamps.
For comparison, change the x-axis to be the packet number in the stream:
End of explanation
"""
def most_bytes_per_stream(df):
    # the grouped sum is a Series, so sort_values needs no column name
    return df.groupby("tcp.stream")["tcp.len"].sum().sort_values(ascending=False).head(10)

bytes_per_stream = most_bytes_per_stream(ts)
print(bytes_per_stream.index)
df_filter = ts['tcp.stream'].isin(bytes_per_stream.index)  #[row in bytes_per_stream.index for row in ts['tcp.stream']]
streams = ts[df_filter]
streams.pivot(index=streams.index, columns='tcp.stream', values='tcp.seq')

#df[str(df.index) in str(bytes_per_stream.index)]
#bytes_per_stream.sort('tcp.len', inplace=False,ascending=False).head(5)

per_stream = ts.groupby("tcp.stream")
per_stream.head()

bytes_per_stream = per_stream["tcp.len"].sum()
bytes_per_stream.head()

bytes_per_stream.plot(kind='bar')

bytes_per_stream.max()

biggest_stream = bytes_per_stream.idxmax()
biggest_stream

bytes_per_stream.loc[biggest_stream]
"""
Explanation: Looks different of course.
Bytes per stream
End of explanation
"""
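To make the bytes-per-stream ranking reproducible without a capture file, here is a minimal sketch of the same groupby/sum/rank idea on a synthetic stand-in for `ts` (the column names match the fields used above; the numbers are made up):

```python
import pandas as pd

# Tiny synthetic stand-in for the DataFrame that read_pcap would return;
# only the two columns used by the ranking matter here.
ts = pd.DataFrame({
    'tcp.stream': [0, 0, 1, 1, 1, 2],
    'tcp.len':    [100, 300, 50, 50, 50, 700],
})

# Total payload bytes per stream, largest first.
bytes_per_stream = (ts.groupby('tcp.stream')['tcp.len']
                      .sum()
                      .sort_values(ascending=False))
print(bytes_per_stream)

# The busiest stream and its byte count.
biggest_stream = bytes_per_stream.idxmax()
print(biggest_stream, bytes_per_stream.loc[biggest_stream])
```

On real capture data the same two steps give the ten busiest streams when followed by `.head(10)`.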
Vettejeep/Data-Analysis-and-Data-Science-Projects
Support Vector Machines and the UCI Mushroom Data Set.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import numpy as np import itertools from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn import metrics import matplotlib.pyplot as plt """ Explanation: Support Vector Machines and the UCI Mushroom Data Set Kevin Maher <span style="color:blue">Vettejeep365@gmail.com</span> This is a classification problem, we want to correctly classify mushrooms as edible or poisonous. We especially do not want to classify poisonous mushrooms as edible. Imports needed for the script. Uses Python 2.7.13, numpy 1.11.3, pandas 0.19.2, sklearn 0.18.1, matplotlib 2.0.0. End of explanation """ columns = ('class', 'cap_shape', 'cap_surface', 'cap_color', 'bruises', 'odor', 'gill_attachment', 'gill_spacing', 'gill_size', 'gill_color', 'stalk_shape', 'stalk_root', 'stalk_surface_above_ring', 'stalk_surface_below_ring', 'stalk_color_above_ring', 'stalk_color_below_ring', 'veil_type', 'veil_color', 'ring_number', 'ring_type', 'spore_print_color', 'population', 'habitat') df = pd.read_csv('agaricus-lepiota.data.txt', names=columns) print df.head() """ Explanation: Import the data. Even though the UCI file has a 'txt' extension it is formatted as a 'csv' file. File header names are not provided in the UCI data file but are available from the UCI website (https://archive.ics.uci.edu/ml/datasets/mushroom). End of explanation """ df.loc[df['stalk_root'] == '?', 'stalk_root'] = 'u' df.drop('veil_type', axis=1, inplace=True) """ Explanation: Deal with problematic data. Stalk root has missing values, I will encode these as 'u' for unknown. Veil type has only one level so it is not useful. 
End of explanation """ df['bruises'] = df['bruises'].eq('f').mul(1) df['gill_attachment'] = df['gill_attachment'].eq('a').mul(1) df['gill_spacing'] = df['gill_spacing'].eq('c').mul(1) df['gill_size'] = df['gill_size'].eq('b').mul(1) df['stalk_shape'] = df['stalk_shape'].eq('e').mul(1) """ Explanation: Make naturally binary factors into 1/0 since the Support Vector Machine needs numbers. End of explanation """ def get_dummies(source_df, dest_df, col): dummies = pd.get_dummies(source_df[col], prefix=col) print 'Quantities for %s column' % col for col in dummies: print '%s: %d' % (col, np.sum(dummies[col])) print dest_df = dest_df.join(dummies) return dest_df """ Explanation: For multi-level features, make a function to convert to dummy variables. End of explanation """ ohe_features = ['cap_shape', 'cap_surface', 'cap_color', 'odor', 'gill_color', 'stalk_root', 'stalk_surface_above_ring', 'stalk_surface_below_ring', 'stalk_color_above_ring', 'stalk_color_below_ring', 'veil_color', 'ring_number', 'ring_type', 'spore_print_color', 'population', 'habitat'] for feature in ohe_features: df = get_dummies(df, df, feature) df.drop(ohe_features, axis=1, inplace=True) """ Explanation: Convert multi-level features to dummy variables, print the quantities for each level. Drop the original features since they have been converted. End of explanation """ drop_dummies = ['cap_shape_c', 'cap_surface_g', 'stalk_color_above_ring_y', 'veil_color_y', 'cap_color_r', 'odor_m', 'gill_color_r', 'stalk_root_r', 'stalk_surface_below_ring_y', 'stalk_color_below_ring_y', 'ring_number_n', 'ring_type_n', 'spore_print_color_y', 'population_a', 'habitat_w'] df.drop(drop_dummies, axis=1, inplace=True) """ Explanation: "Leave one out", n-1 dummy variables fully describe the categorical feature. 
End of explanation """ y = df['class'] X = df.drop('class', axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=245) """ Explanation: Set up for machine learning with the Support Vector Machine. 'X' is the data and 'y' is the true classifications from the data set. X_train and y_train are for model training, X_test and y_test are for model testing - proving the model on data unseen during training. End of explanation """ clf = SVC() clf.fit(X_train, y_train) pred = clf.predict(X_test) """ Explanation: Try a basic Support Vector Machine classifier first. End of explanation """ print 'SVC Model (C=%.1f): %.2f%% accurate' % (1.0, (metrics.accuracy_score(y_test, pred) * 100.0)) confusion = metrics.confusion_matrix(y_test, pred) print confusion """ Explanation: Results. There seem to be some errors but the standard confusion matrix provided by Scikit Learn does not tell us the classes, it just prints a simple matrix. We will need to investigate further. End of explanation """ print '\nErrors:' print 'predicted: actual' for p, act in zip(pred, y_test): if p != act: print '%s: %s' % (p, act) """ Explanation: What were the errors? Unfortunately the code below shows that poisonous mushrooms were classified as edible - not a desirable outcome. End of explanation """ def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show() plt.close() """ Explanation: Define a function to plot the confusion matrix. Taken from the Scikit Learn examples at: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py. End of explanation """ class_names = ['e', 'p'] plot_confusion_matrix(confusion, classes=class_names, title='Confusion matrix') """ Explanation: Plot the confusion matrix using the above function. This shows a nice plot of the 9 poisonous mushrooms that were classified as edible. We need to see if we can fix this since the model should not make anyone sick who relies on it. End of explanation """ seeds = (245, 333, 555, 1234, 32487, 67209, 176589) for seed in seeds: print '\nseed=%d' % seed # 70/30 split of data into training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=seed) C = (1.0, 2.5, 5.0, 10.0, 15.0, 20.0, 25.0) for c in C: clf = SVC(C=c) clf.fit(X_train, y_train) pred = clf.predict(X_test) print 'SVC Model (C=%.1f): %.2f%% accurate' % (c, (metrics.accuracy_score(y_test, pred) * 100.0)) """ Explanation: Try different values for the cost function for errors (the 'C' error term). Also try different test/train splits to check the robustness of the 'C' term value chosen. With a high enough 'C', the model becomes 100% accurate. End of explanation """
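The manual loop over seeds and C values above can also be phrased as a cross-validated grid search with scikit-learn's GridSearchCV. A sketch on synthetic data; the synthetic X and y are placeholders, not the mushroom features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Placeholder binary-classification data; the mushroom X and y
# from above would drop in directly.
rng = np.random.RandomState(245)
X = rng.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Cross-validated sweep over the same candidate C values as the manual loop.
param_grid = {'C': [1.0, 2.5, 5.0, 10.0, 15.0, 20.0, 25.0]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print('best C:', search.best_params_['C'])
print('best CV accuracy: %.2f%%' % (search.best_score_ * 100.0))
```

Unlike a single train/test split, the cross-validated score averages over folds, which makes the chosen C less sensitive to any one random split.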
jorgemauricio/INIFAP_Course
algoritmos/Validacion_App_Movil_climMAPcore_AGS_BW.ipynb
mit
# libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.formula.api as sm
%matplotlib inline
plt.style.use('grayscale')

# read the file
data = pd.read_csv('../data/dataFromAguascalientesClimmapcore.csv')

# check its contents
data.head()

# differences between the precipitation, relative humidity, and mean temperature values
data['diffRain'] = data['Rain'] - data['RainClimmapcore']
data['diffHr'] = data['Hr'] - data['HrClimmapcore']
data['diffTpro'] = data['Tpro'] - data['TproClimmapcore']

# check the contents
data.head()

# histogram of the Hr differences
data['diffHr'].hist()

# monthly behavior of the data
data.groupby(['Month']).mean()[['Hr','HrClimmapcore']]

# plot the data
data.groupby(['Month']).mean()[['Hr','HrClimmapcore']].plot.bar()

# histogram of the Tpro differences
data['diffTpro'].hist()

# monthly behavior of the data
data.groupby(['Month']).mean()[['Tpro','TproClimmapcore']]

# plot the data
data.groupby(['Month']).mean()[['Tpro','TproClimmapcore']].plot.bar()

# histogram of the Rain differences
data['diffRain'].hist()

# monthly behavior of the data
data.groupby(['Month']).mean()[['Rain','RainClimmapcore']]

# plot the data
data.groupby(['Month']).mean()[['Rain','RainClimmapcore']].plot.bar()
"""
Explanation: climMAPcore validation (Aguascalientes)
In this exercise we will compute several statistical indicators to validate the output of the climMAPcore application.
Procedure
The values compared are the daily value recorded at the station vs the value produced by the climMAPcore application. 
The dataset lives in the data folder under the name dataFromAguascalientesClimmapcore.csv and includes the following fields:
* Station : station number
* State : state
* Lat : latitude
* Long : longitude
* Year : year
* Month : month
* Day : day
* Rain : station precipitation
* Hr : station relative humidity
* Tpro : station mean temperature
* RainClimmapcore : climMAPcore precipitation
* HrClimmapcore : climMAPcore relative humidity
* TproClimmapcore : climMAPcore mean temperature
End of explanation
"""
# seaborn library
import seaborn as sns

# Hr
sns.lmplot(x='Hr',y='HrClimmapcore',data=data, col='Month', aspect=0.6, size=8)

# Tpro
sns.lmplot(x='Tpro',y='TproClimmapcore',data=data, col='Month', aspect=0.6, size=8)

# Rain
sns.lmplot(x='Rain',y='RainClimmapcore',data=data, col='Month', aspect=0.6, size=8)

# Rain polynomial regression
sns.lmplot(x='Rain',y='RainClimmapcore',data=data, col='Month', aspect=0.6, size=8, order=2)
"""
Explanation: Linear regression
End of explanation
"""
# Hr
sns.jointplot("Hr", "HrClimmapcore", data=data, kind="reg")

# Tpro
sns.jointplot("Tpro", "TproClimmapcore", data=data, kind="reg")

# Rain
sns.jointplot("Rain", "RainClimmapcore", data=data, kind="reg",color="k")
"""
Explanation: Linear regression with p-value and Pearson's r
End of explanation
"""
# Hr
result = sm.ols(formula='HrClimmapcore ~ Hr', data=data).fit()
print(result.params)
print(result.summary())

# Tpro
result = sm.ols(formula='TproClimmapcore ~ Tpro', data=data).fit()
print(result.params)
print(result.summary())

# Rain
result = sm.ols(formula='RainClimmapcore ~ Rain', data=data).fit()
print(result.params)
print(result.summary())
"""
Explanation: OLS regression
End of explanation
"""
# Hr
sns.distplot(data['diffHr'],color="k")

# Tpro
sns.distplot(data['diffTpro'], color="k")

# Rain
sns.distplot(data['diffRain'], color="k")
"""
Explanation: Seaborn histograms
End of explanation
"""
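Besides the regression fits, station-versus-model agreement is often summarized with bias, mean absolute error, and RMSE. A minimal sketch on made-up values; with the real data, the `Tpro`/`TproClimmapcore` columns (or the Rain and Hr pairs) would drop in for the synthetic frame:

```python
import numpy as np
import pandas as pd

# Made-up stand-in for the station vs. climMAPcore columns.
data = pd.DataFrame({
    'Tpro':            [18.0, 20.0, 22.0, 19.0],
    'TproClimmapcore': [17.5, 21.0, 21.0, 19.5],
})

diff = data['Tpro'] - data['TproClimmapcore']
bias = diff.mean()                  # mean error (station minus model)
mae = diff.abs().mean()             # mean absolute error
rmse = np.sqrt((diff ** 2).mean())  # root mean square error
print(bias, mae, rmse)
```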
karlstroetmann/Algorithms
Python/Chapter-05/Selection-Sort.ipynb
gpl-2.0
def sort(L): if L == []: return [] x = min(L) return [x] + sort(delete(x, L)) """ Explanation: Selection Sort The algorithm <em style="color:blue;">selection sort</em> is specified via two equations: If $L$ is empty, $\texttt{sort}(L)$ is the empty list: $$ \mathtt{sort}([]) = [] $$ Otherwise, we compute the smallest element of the list $L$ and we remove the first occurrence of this element from $L$. Next, the remaining list is sorted recursively. Finally, the smallest element is added to the front of the sorted list: $$ L \not= [] \rightarrow \mathtt{sort}\bigl(L\bigr) = \bigl[\texttt{min}(L)\bigr] + \mathtt{sort}\bigl(\mathtt{delete}(\texttt{min}(L), L)\bigr) $$ End of explanation """ def delete(x, L): assert L != [], f'delete({x}, [])' y, *R = L if y == x: return R return [y] + delete(x, R) L = [3, 5, 7, 4, 8, 1, 2, 3, 11, 13, 2] sort(L) """ Explanation: The algorithm to delete an element $x$ from a list $L$ is formulated recursively. There are three cases: If $L$ is empty, we could return the empty list: $$\mathtt{delete}(x, []) = [] $$ However, this case is really an error, because when we call $\texttt{delete}(x, L)$ we always assume that $x$ occurs in $L$. If $x$ is equal to the first element of $L$, then the function delete returns the rest of $L$: $$ \mathtt{delete}(x, [x] + R) = R$$ Otherwise, the element $x$ is removed recursively from the rest of the list: $$ x \not = y \rightarrow \mathtt{delete}(x, [y] + R) = [y] + \mathtt{delete}(x,R) $$ End of explanation """
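The recursive equations above copy the list on every call. For comparison, the same algorithm is often written iteratively and in place; this sketch is an addition, not part of the original notebook:

```python
def selection_sort(L):
    """In-place selection sort: repeatedly swap the minimum of the
    unsorted suffix L[i:] into position i."""
    n = len(L)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if L[j] < L[m]:
                m = j
        L[i], L[m] = L[m], L[i]
    return L

L = [3, 5, 7, 4, 8, 1, 2, 3, 11, 13, 2]
print(selection_sort(L[:]) == sorted(L))
```

Both versions perform a quadratic number of comparisons, but the iterative one avoids the repeated list copies of the recursive formulation.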
sz2472/foundations-homework
homework_6_shengying_zhao.ipynb
mit
type(data)

data.keys()

print(data['currently'])

# the current wind speed, and how much warmer it feels than it actually is
print(data['currently']['windSpeed'])
print(data['currently']['temperature']-data['currently']['apparentTemperature'])
"""
Explanation: 2) What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
"""
print(data['daily'])

type(data['daily'])

data['daily'].keys()

print(data['daily']['data'][0])

type(data['daily']['data'])

print(data['daily']['data'][0]['moonPhase'])
"""
Explanation: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
"""
weather_today = data['daily']['data'][0]
print(weather_today['temperatureMax']-weather_today['temperatureMin'])
"""
Explanation: 4) What's the difference between the high and low temperatures for today?
End of explanation
"""
print(data['daily']['data'])

daily_data = data['daily']['data']

weather_next_week = data['daily']['data']
for weather in weather_next_week:
    print(weather['temperatureMax'])
    if weather['temperatureMax'] > 84:
        print("it's a hot day.")
    elif weather['temperatureMax'] > 74:
        print("it's a warm day.")
    else:
        print("it's a cold day.")
"""
Explanation: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
"""
import requests
response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/25.7738889, -80.1938889")
data = response.json()

print(data['hourly'])

data['hourly'].keys()

data['hourly']['data']

for cloudcover in data['hourly']['data']:
    if cloudcover['cloudCover'] > 0.5:
        print(cloudcover['temperature'], "and cloudy")
    else:
        print(cloudcover['temperature'])
"""
Explanation: 6) What's the weather looking like for the rest of today in Miami, Florida? 
I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature. End of explanation """ import requests response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,346550400") data = response.json() print(data['currently']['temperature']) response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,662083200") data = response.json() print(data['currently']['temperature']) response = requests.get("https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,977702400") data = response.json() print(data['currently']['temperature']) """ Explanation: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000? Tip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date! Tip: You'll want to use Forecast.io's "time machine" API at https://developer.forecast.io/docs/v2 End of explanation """
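The UNIX timestamps used in problem 7 can be computed directly in Python instead of with an online converter. A sketch; `YOUR_API_KEY` is a placeholder, not a working key:

```python
from datetime import datetime, timezone

# Christmas Day 1980 at midnight UTC, as the UNIX timestamp the
# Forecast.io time-machine endpoint expects.
ts = int(datetime(1980, 12, 25, tzinfo=timezone.utc).timestamp())
print(ts)  # 346550400, matching the value used above

url = ("https://api.forecast.io/forecast/YOUR_API_KEY/"
       "40.7141667,-74.0063889,%d" % ts)
print(url)
```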
AllenDowney/ModSim
python/soln/examples/queue_soln.ipynb
gpl-2.0
# install Pint if necessary try: import pint except ImportError: !pip install pint # download modsim.py if necessary from os.path import exists filename = 'modsim.py' if not exists(filename): from urllib.request import urlretrieve url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/' local, _ = urlretrieve(url+filename, filename) print('Downloaded ' + local) # import functions from modsim from modsim import * """ Explanation: One Queue or Two Modeling and Simulation in Python Copyright 2021 Allen Downey License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International End of explanation """ # Solution def make_system(lam, mu): """Make a System object. lam: arrival rate, per minute mu: service completion rate, per minute returns: System object """ # duration is 10 hours, expressed in minutes return System(lam=lam, mu=mu, duration=10*60) """ Explanation: This notebook presents a case study from Modeling and Simulation in Python. It explores a question related to queueing theory, which is the study of systems that involve waiting in lines, also known as "queues". Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters. In theory, you might expect a single line to be better, but it has some practical drawbacks: in order to maintain a single line, you would have to install rope barriers, and customers might be put off by what seems to be a longer line, even if it moves faster. So you'd like to check whether the single line is really better and by how much. Simulation can help answer this question. As we did in the bikeshare model, we'll assume that a customer is equally likely to arrive during any timestep. I'll denote this probability using the Greek letter lambda, $\lambda$, or the variable name lam. 
The value of $\lambda$ probably varies from day to day, so we'll have to consider a range of possibilities.
Based on data from other stores, you know that it takes 5 minutes for a customer to check out, on average. But checkout times are highly variable: most customers take less than 5 minutes, but some take substantially more. A simple way to model this variability is to assume that when a customer is checking out, they have the same probability of finishing up during each time step. I'll denote this probability using the Greek letter mu, $\mu$, or the variable name mu.
If we choose $\mu=1/5$, the average number of time steps for each checkout will be 5 minutes, which is consistent with the data.
One server, one queue
Write a function called make_system that takes lam and mu as parameters and returns a System object with variables lam, mu, and duration.
Set duration, which is the number of time steps to simulate, to 10 hours, expressed in minutes.
End of explanation
"""
# Solution

interarrival_time = 8
service_time = 5

lam = 1 / interarrival_time
mu = 1 / service_time

system = make_system(lam, mu)
"""
Explanation: Test this function by creating a System object with lam=1/8 and mu=1/5.
End of explanation
"""
# Solution

def update_func1(x, t, system):
    """Simulate one time step.

    x: number of people in the shop
    t: time step
    system: System object
    """
    # if there's a customer in service, check if they're done
    if x > 0:
        if flip(system.mu):
            x -= 1

    # check for an arrival
    if flip(system.lam):
        x += 1

    return x
"""
Explanation: Write an update function that takes as parameters x, which is the total number of customers in the store, including the one checking out; t, which is the number of minutes that have elapsed in the simulation, and system, which is a System object.
If there's a customer checking out, it should use flip to decide whether they are done. And it should use flip to decide if a new customer has arrived. 
It should return the total number of customers at the end of the time step. End of explanation """ # Solution update_func1(1, 0, system) """ Explanation: Test your function by calling it with x=1, t=0, and the System object you created. If you run it a few times, you should see different results. End of explanation """ def run_simulation(system, update_func): """Simulate a queueing system. system: System object update_func: function object """ x = 0 results = TimeSeries() results[0] = x for t in linrange(0, system.duration): x = update_func(x, t, system) results[t+1] = x return results """ Explanation: Now we can run the simulation. Here's a version of run_simulation that creates a TimeSeries with the total number of customers in the store, including the one checking out. End of explanation """ # Solution results = run_simulation(system, update_func1) results.plot() decorate(xlabel='Time (min)', ylabel='Customers') """ Explanation: Call run_simulation with your update function and plot the results. End of explanation """ def compute_metrics(results, system): """Compute average number of customers and wait time. results: TimeSeries of queue lengths system: System object returns: L, W """ L = results.mean() W = L / system.lam return L, W """ Explanation: After the simulation, we can compute L, which is the average number of customers in the system, and W, which is the average time customers spend in the store. L and W are related by Little's Law: $L = \lambda W$ Where $\lambda$ is the arrival rate. Here's a function that computes them. End of explanation """ # Solution compute_metrics(results, system) """ Explanation: Call compute_metrics with the results from your simulation. End of explanation """ # Solution num_vals = 101 lam_array = linspace(0.1*mu, 0.8*mu, num_vals) lam_array """ Explanation: Parameter sweep Since we don't know the actual value of $\lambda$, we can sweep through a range of possibilities, from 10% to 80% of the completion rate, $\mu$. 
(If customers arrive faster than the completion rate, the queue grows without bound. In that case the metrics L and W just depend on how long the store is open.)
Create an array of values for lam.
End of explanation
"""
# Solution

def sweep_lam(lam_array, mu, update_func):
    """Run simulations with a range of values for `lam`

    lam_array: array of values for `lam`
    mu: probability of finishing a checkout
    update_func: passed along to run_simulation

    returns: SweepSeries of average wait time vs lam
    """
    sweep = SweepSeries()
    for lam in lam_array:
        system = make_system(lam, mu)
        results = run_simulation(system, update_func)
        L, W = compute_metrics(results, system)
        sweep[lam] = W

    return sweep
"""
Explanation: Write a function that takes an array of values for lam, a single value for mu, and an update function.
For each value of lam, it should run a simulation, compute L and W, and store the value of W in a SweepSeries.
It should return the SweepSeries.
End of explanation
"""
# Solution

sweep = sweep_lam(lam_array, mu, update_func1)

# Solution

sweep.plot(style='o', alpha=0.5, label='simulation')
decorate(xlabel='Arrival rate, lambda (per min)',
         ylabel='Average time in system',
         title='Single server, single queue')
"""
Explanation: Call your function to generate a SweepSeries, and plot it.
End of explanation
"""
# Solution

W_avg = sweep.mean()
W_avg
"""
Explanation: If we imagine that this range of values represents arrival rates on different days, we can use the average value of W, for a range of values of lam, to compare different queueing strategies.
End of explanation
"""
def plot_W(lam_array, mu):
    """Plot the theoretical mean wait time. 
    lam_array: array of values for `lam`
    mu: probability of finishing a checkout
    """
    W_array = 1 / (mu - lam_array)
    W_series = make_series(lam_array, W_array)
    W_series.plot(style='-', label='analysis')
"""
Explanation: The model I chose for this system is a common model in queueing theory, in part because many of its properties can be derived analytically.
In particular, we can derive the average time in the store as a function of $\mu$ and $\lambda$:
$W = 1 / (\mu - \lambda)$
The following function plots the theoretical value of $W$ as a function of $\lambda$.
End of explanation
"""
# Solution

sweep.plot(style='o', alpha=0.5, label='simulation')
plot_W(lam_array, mu)
decorate(xlabel='Arrival rate, lambda (per min)',
         ylabel='Average time in system',
         title='Single server, single queue')
"""
Explanation: Use this function to plot the theoretical results, then plot your simulation results again on the same graph. How do they compare?
End of explanation
"""
# Solution

def update_func2(x, t, system):
    """Simulate a single queue with two servers.

    system: System object
    """
    # if both servers are busy, check whether the
    # second is complete
    if x > 1 and flip(system.mu):
        x -= 1

    # check whether the first is complete
    if x > 0 and flip(system.mu):
        x -= 1

    # check for an arrival
    if flip(system.lam):
        x += 1

    return x
"""
Explanation: Multiple servers
Now let's try the other two queueing strategies:
One queue with two checkout counters.
Two queues, one for each counter.
The following figure shows the three scenarios:
Write an update function for one queue with two servers.
End of explanation
"""
# Solution

system = make_system(lam, mu)
results = run_simulation(system, update_func2)
results.plot()
decorate(xlabel='Time (min)', ylabel='Customers')
compute_metrics(results, system)
"""
Explanation: Use this update function to simulate the system, plot the results, and print the metrics. 
End of explanation
"""
# Solution

lam_array = linspace(0.1*mu, 1.6*mu, num_vals)
"""
Explanation: Since we have two checkout counters now, we can consider values for $\lambda$ that exceed $\mu$.
Create a new array of values for lam from 10% to 160% of mu.
End of explanation
"""
# Solution

sweep = sweep_lam(lam_array, mu, update_func2)
W_avg = sweep.mean()
print('Average of averages = ', W_avg, 'minutes')

# Solution

sweep.plot(style='o', alpha=0.5, label='simulation')
decorate(xlabel='Arrival rate, lambda (per min)',
         ylabel='Average time in system',
         title='Multiple server, single queue')
"""
Explanation: Use your sweep function to simulate the two server, one queue scenario with a range of values for lam.
Plot the results and print the average value of W across all values of lam. How do the results compare to the single server, single queue scenario?
End of explanation
"""
# Solution

def update_func3(x1, x2, t, system):
    """Simulate two queues with one server each.

    x1: number of customers in queue 1
    x2: number of customers in queue 2
    t: time step
    system: System object
    """
    # if the first server is busy, check if it's done
    if x1 > 0 and flip(system.mu):
        x1 -= 1

    # if the second queue is busy, check if it's done
    if x2 > 0 and flip(system.mu):
        x2 -= 1

    # check for an arrival
    if flip(system.lam):
        # join whichever queue is shorter
        if x1 < x2:
            x1 += 1
        else:
            x2 += 1

    return x1, x2
"""
Explanation: Multiple queues
To simulate the scenario with two separate queues, we need two state variables to keep track of customers in each queue.
Write an update function that takes x1, x2, t, and system as parameters and returns x1 and x2 as return values. If you are not sure how to return more than one return value, see compute_metrics.
When a customer arrives, which queue do they join?
End of explanation
"""
# Solution

def run_simulation(system, update_func):
    """Simulate a queueing system. 
    system: System object
    update_func: function object
    """
    x1, x2 = 0, 0
    results = TimeSeries()
    results[0] = x1 + x2

    for t in linrange(0, system.duration):
        x1, x2 = update_func(x1, x2, t, system)
        results[t+1] = x1 + x2

    return results
"""
Explanation: Write a version of run_simulation that works with this update function.
End of explanation
"""
# Solution

system = make_system(lam, mu)
results = run_simulation(system, update_func3)
results.plot()
decorate(xlabel='Time (min)', ylabel='Customers')
compute_metrics(results, system)
"""
Explanation: Test your functions by running a simulation with a single value of lam.
End of explanation
"""
# Solution

sweep = sweep_lam(lam_array, mu, update_func3)
W_avg = sweep.mean()
print('Average of averages = ', W_avg, 'minutes')

# Solution

sweep.plot(style='o', alpha=0.5, label='simulation')
decorate(xlabel='Arrival rate, lambda (per min)',
         ylabel='Average time in system',
         title='Multiple server, multiple queue')

# Solution

"""
With two queues, the average of averages is slightly higher, most of the time.
But the difference is small.
The two configurations are equally good as long as both servers are busy; the only time two lines are worse is if one queue is empty and the other contains more than one customer. In real life, if we allow customers to change lanes, that disadvantage can be eliminated.
From a theoretical point of view, one line is better. From a practical point of view, the difference is small and can be mitigated.
So the best choice depends on practical considerations.
On the other hand, you can do substantially better with an express line for customers with short service times. But that's a topic for another notebook.
""";
"""
Explanation: Sweep a range of values for lam, plot the results, and print the average wait time across all values of lam.
How do the results compare to the scenario with two servers and one queue?
End of explanation
"""
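The analytic result $W = 1 / (\mu - \lambda)$ can be spot-checked without the modsim helpers. A minimal sketch of the same one-server update rule using only the standard library; `simulate_queue` is a name invented here, and the discrete-time model only approximates the continuous-time formula:

```python
import random

def simulate_queue(lam, mu, steps=200_000, seed=1):
    """One-server, one-queue simulation as above, without modsim:
    returns average number in system L and average time in system W."""
    random.seed(seed)
    x, total = 0, 0
    for _ in range(steps):
        if x > 0 and random.random() < mu:   # service completion
            x -= 1
        if random.random() < lam:            # arrival
            x += 1
        total += x
    L = total / steps
    return L, L / lam                        # Little's Law: W = L / lam

lam, mu = 1/8, 1/5                           # values used in the notebook
L, W = simulate_queue(lam, mu)
print(W, 1 / (mu - lam))                     # simulated vs. theoretical wait
```

In a long run the simulated wait lands near, and slightly below, $1/(\mu - \lambda)$, because time is quantized into one-minute steps.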
tensorflow/docs-l10n
site/en-snapshot/model_optimization/guide/combine/sparse_clustering_example.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ ! pip install -q tensorflow-model-optimization import tensorflow as tf import numpy as np import tempfile import zipfile import os """ Explanation: Sparsity preserving clustering Keras example <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/combine/sparse_clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This is an end to end example showing the usage of the sparsity preserving 
clustering API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline. Other pages For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page. Contents In the tutorial, you will: Train a tf.keras model for the MNIST dataset from scratch. Fine-tune the model with sparsity and see the accuracy and observe that the model was successfully pruned. Apply weight clustering to the pruned model and observe the loss of sparsity. Apply sparsity preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved. Generate a TFLite model and check that the accuracy has been preserved in the pruned clustered model. Compare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization technique of sparsity preserving clustering. Setup You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide. End of explanation """ # Load MNIST dataset mnist = tf.keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. 
train_images = train_images / 255.0
test_images = test_images / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28)),
    tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    validation_split=0.1,
    epochs=10
)

"""
Explanation: Train a tf.keras model for MNIST to be pruned and clustered
End of explanation
"""

_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)

_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)

"""
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
"""

import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}

callbacks = [
    tfmot.sparsity.keras.UpdatePruningStep()
]

pruned_model = prune_low_magnitude(model, **pruning_params)

# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)

pruned_model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=opt,
    metrics=['accuracy'])

pruned_model.summary()

"""
Explanation: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model to achieve the model that is to be clustered in the next step. For guidance on how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
End of explanation
"""

# Fine-tune model
pruned_model.fit(
    train_images,
    train_labels,
    epochs=3,
    validation_split=0.1,
    callbacks=callbacks)

"""
Explanation: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
End of explanation
"""

def print_model_weights_sparsity(model):
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Wrapper):
            weights = layer.trainable_weights
        else:
            weights = layer.weights
        for weight in weights:
            if "kernel" not in weight.name or "centroid" in weight.name:
                continue
            weight_size = weight.numpy().size
            zero_num = np.count_nonzero(weight == 0)
            print(
                f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
                f"({zero_num}/{weight_size})",
            )

"""
Explanation: Define helper functions to calculate and print the sparsity of the model.
End of explanation
"""

stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)

stripped_pruned_model_copy = tf.keras.models.clone_model(stripped_pruned_model)
stripped_pruned_model_copy.set_weights(stripped_pruned_model.get_weights())

"""
Explanation: Check that the model kernels were correctly pruned. We need to strip the pruning wrapper first. We also create a deep copy of the model to be used in the next step.
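The magnitude-pruning idea behind the 50% constant-sparsity schedule can be illustrated framework-free. This is a toy sketch of the concept only (the helper names below are illustrative, not part of the TFMOT API): the smallest-magnitude weights are set to exactly zero until the target sparsity is reached.

```python
def prune_to_sparsity(weights, target_sparsity):
    """Zero out the smallest-magnitude entries until the target sparsity is reached."""
    n_zero = int(len(weights) * target_sparsity)
    # Indices sorted by absolute weight value, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_zero]:
        pruned[i] = 0.0
    return pruned

def sparsity(weights):
    """Fraction of weights that are exactly zero."""
    return sum(1 for w in weights if w == 0.0) / len(weights)

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2, -0.03, 0.6]
pruned = prune_to_sparsity(w, 0.5)
print(pruned)            # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]
print(sparsity(pruned))  # 0.5
```

The large-magnitude weights survive untouched, which is why fine-tuning after pruning can often recover most of the baseline accuracy.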
End of explanation
"""

# Clustering
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization

clustering_params = {
    'number_of_clusters': 8,
    'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS
}

clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)

clustered_model.compile(optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

print('Train clustering model:')
clustered_model.fit(train_images, train_labels, epochs=3, validation_split=0.1)

stripped_pruned_model.save("stripped_pruned_model_clustered.h5")

# Sparsity preserving clustering
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
    cluster,
)

cluster_weights = cluster.cluster_weights

clustering_params = {
    'number_of_clusters': 8,
    'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
    'preserve_sparsity': True
}

sparsity_clustered_model = cluster_weights(stripped_pruned_model_copy, **clustering_params)

sparsity_clustered_model.compile(optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels, epochs=3, validation_split=0.1)

"""
Explanation: Apply clustering and sparsity preserving clustering and check its effect on model sparsity in both cases
Next, we apply both clustering and sparsity preserving clustering on the pruned model and observe that the latter preserves sparsity on your pruned model. Note that we stripped pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the clustering API.
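The contrast between the two clustering runs can be sketched without TensorFlow. Plain clustering may snap zero weights to a nonzero centroid, destroying the sparsity, while the sparsity preserving variant pins zeros to a dedicated zero centroid. The quantizer below uses fixed centroids for brevity; it is an illustration of the idea, not the TFMOT implementation.

```python
def cluster_weights_toy(weights, centroids, preserve_sparsity=False):
    """Snap each weight to its nearest centroid; optionally keep zeros at zero."""
    out = []
    for w in weights:
        if preserve_sparsity and w == 0.0:
            out.append(0.0)  # zeros stay zero -> sparsity is preserved
        else:
            out.append(min(centroids, key=lambda c: abs(c - w)))
    return out

centroids = [-0.6, -0.2, 0.1, 0.5]
w = [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]  # a pruned weight vector, 50% zeros

plain = cluster_weights_toy(w, centroids)
preserving = cluster_weights_toy(w, centroids, preserve_sparsity=True)

print(sum(1 for v in plain if v == 0.0))       # 0 -> plain clustering destroyed the sparsity
print(sum(1 for v in preserving if v == 0.0))  # 4 -> sparsity preserved
```

Both outputs draw from a handful of distinct values (good for compression), but only the sparsity preserving run keeps the zeros that pruning produced.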
End of explanation
"""

print("Clustered Model sparsity:\n")
print_model_weights_sparsity(clustered_model)
print("\nSparsity preserved clustered Model sparsity:\n")
print_model_weights_sparsity(sparsity_clustered_model)

"""
Explanation: Check sparsity for both models.
End of explanation
"""

def get_gzipped_model_size(file):
    # It returns the size of the gzipped model in kilobytes.
    _, zipped_file = tempfile.mkstemp('.zip')
    with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
        f.write(file)
    return os.path.getsize(zipped_file) / 1000

# Clustered model
clustered_model_file = 'clustered_model.h5'
# Save the model.
clustered_model.save(clustered_model_file)

# Sparsity preserving clustered model
sparsity_clustered_model_file = 'sparsity_clustered_model.h5'
# Save the model.
sparsity_clustered_model.save(sparsity_clustered_model_file)

print("Clustered Model size: ", get_gzipped_model_size(clustered_model_file), ' KB')
print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')

"""
Explanation: Create 1.6x smaller models from clustering
Define helper function to get zipped model file.
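The compression benefit comes from DEFLATE exploiting repetition: a weight tensor with many identical values (zeros from pruning, shared centroids from clustering) compresses far better than one full of distinct values. A standalone sketch of that intuition, using gzip on synthetic byte strings rather than real model files:

```python
import gzip
import random

random.seed(0)

# 10,000 "weights": arbitrary random bytes vs. only 8 distinct values,
# mimicking a tensor after 8-cluster weight clustering.
unique_vals = bytes(random.randrange(256) for _ in range(10_000))
clustered_vals = bytes(random.choice(range(0, 256, 32)) for _ in range(10_000))

print(len(gzip.compress(unique_vals)))     # close to 10,000: random data is incompressible
print(len(gzip.compress(clustered_vals)))  # much smaller: repeated symbols compress well
```

The same effect is what makes the gzipped sparsity-preserved clustered model noticeably smaller than the baseline in the measurements above.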
End of explanation
"""

stripped_sparsity_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)

converter = tf.lite.TFLiteConverter.from_keras_model(stripped_sparsity_clustered_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
sparsity_clustered_quant_model = converter.convert()

_, pruned_and_clustered_tflite_file = tempfile.mkstemp('.tflite')

with open(pruned_and_clustered_tflite_file, 'wb') as f:
    f.write(sparsity_clustered_quant_model)

print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')
print("Sparsity preserved clustered and quantized TFLite model size:", get_gzipped_model_size(pruned_and_clustered_tflite_file), ' KB')

"""
Explanation: Create a TFLite model from combining sparsity preserving weight clustering and post-training quantization
Strip clustering wrappers and convert to TFLite.
End of explanation
"""

def eval_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for i, test_image in enumerate(test_images):
        if i % 1000 == 0:
            print(f"Evaluated on {i} results so far.")
        # Pre-processing: add batch dimension and convert to float32 to match with
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with highest
        # probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    print('\n')
    # Compare prediction results with ground truth labels to calculate accuracy.
    prediction_digits = np.array(prediction_digits)
    accuracy = (prediction_digits == test_labels).mean()
    return accuracy

"""
Explanation: See the persistence of accuracy from TF to TFLite
End of explanation
"""

# Keras model evaluation
stripped_sparsity_clustered_model.compile(optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
_, sparsity_clustered_keras_accuracy = stripped_sparsity_clustered_model.evaluate(
    test_images, test_labels, verbose=0)

# TFLite model evaluation
interpreter = tf.lite.Interpreter(model_path=pruned_and_clustered_tflite_file)
interpreter.allocate_tensors()
sparsity_clustered_tflite_accuracy = eval_model(interpreter)

print('Pruned, clustered and quantized Keras model accuracy:', sparsity_clustered_keras_accuracy)
print('Pruned, clustered and quantized TFLite model accuracy:', sparsity_clustered_tflite_accuracy)

"""
Explanation: You evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.
End of explanation
"""
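Stripped of the interpreter plumbing, the TFLite evaluation loop reduces to argmax-and-compare. A minimal, self-contained sketch on toy logits (not MNIST data):

```python
def accuracy_from_logits(logit_rows, labels):
    """Pick the argmax class per row and compare against the ground-truth labels."""
    correct = 0
    for logits, label in zip(logit_rows, labels):
        digit = max(range(len(logits)), key=lambda i: logits[i])  # argmax
        correct += int(digit == label)
    return correct / len(labels)

logit_rows = [[0.1, 2.3, -1.0],
              [1.5, 0.2, 0.3],
              [-0.4, 0.0, 0.9],
              [2.0, 1.9, 0.1]]
labels = [1, 0, 2, 1]
print(accuracy_from_logits(logit_rows, labels))  # 0.75
```

Because argmax ignores the absolute scale of the logits, small numerical differences introduced by quantization rarely change the predicted class, which is why the TFLite accuracy tracks the Keras accuracy so closely.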
mauroalberti/gsf | docs/notebooks/DEM-plane intersections.ipynb | gpl-3.0
from pygsf.io.gdal.raster import try_read_raster_band

"""
Explanation: Plane-DEM intersections
First dated version: 2019-06-11
Current version: 2021-04-24
Last run: 2021-04-24
A few simulated topographic surfaces were used to validate the routine for calculating the plane-DEM intersection.
Loading the dataset can be done with the following function:
End of explanation
"""

source_data = "/home/mauro/Documents/projects/gsf/example_data/others/horiz_plane.asc"
success, cntnt = try_read_raster_band(raster_source=source_data)
print(success)

"""
Explanation: Test case 1
The first test case is illustrated in the image below. We have a horizontal topographic surface, at a height of 0, with 100 x 100 cells and a cell size of 1. The geological plane dips 45° towards East. The source point for the plane is located at (0, 50, 50). The locations of the expected intersection points are (50, *, 0).
First, a horizontal plane was created with Saga GIS and saved in pygsf/example_data/horiz_plane.asc.
End of explanation
"""

geotransform, projection, band_params, data = cntnt

type(geotransform)
print(geotransform)

type(projection)
print(projection)

"""
Explanation: We read the data source with success. So we may unpack the result.
End of explanation
"""

type(band_params)

"""
Explanation: Hmm, there is no projection info. In fact, there shouldn't be..
End of explanation
"""

print(band_params)

"""
Explanation: A dictionary, as suspected. Try to see the content..
End of explanation
"""

type(data)
data.shape
data.min()
data.max()

"""
Explanation: A very horizontal surface, we agree..
End of explanation
"""

from pygsf.georeferenced.rasters import GeoArray

ga = GeoArray(inGeotransform=geotransform, inLevels=[data])

"""
Explanation: Given these data, we store them into a GeoArray:
End of explanation
"""

from pygsf.orientations.orientations import Plane

gplane = Plane(azim=90.0, dip_ang=45.0)
print(gplane)

"""
Explanation: There is a single band provided in the geoarray, and represented by the data array.
The signature of the plane-DEM intersection function is:
plane_dem_intersection(srcPlaneAttitude: Plane, srcPt: Point, geo_array: GeoArray, level_ndx: int=0) -> Tuple[List[Point], List[Point]]
We already have the geoarray; we need to define the source plane attitude and the source point. The geoplane is East-dipping with a dip angle of 45°:
End of explanation
"""

from pygsf.geometries.shapes.space3d import Point3D

pt = Point3D(0, 50, 50)

"""
Explanation: The source point is located at (0, 50, 50)
End of explanation
"""

from pygsf.georeferenced.rasters import plane_dem_intersection

inters_pts = plane_dem_intersection(
    srcPlaneAttitude=gplane,
    srcPt=pt,
    geo_array=ga)

print(inters_pts)

"""
Explanation: Now we try calculating the intersection:
End of explanation
"""

from bokeh.plotting import figure, output_notebook, show

x = list(map(lambda pt: pt.x, inters_pts))
y = list(map(lambda pt: pt.y, inters_pts))

output_notebook()
p = figure()
p.circle(x, y, size=2, color="navy", alpha=0.5)
show(p)

"""
Explanation: As expected, all the intersection points lie at (50, *, 0)
Plotting with Bokeh..
End of explanation
"""

source_data = "/home/mauro/Documents/projects/gsf/example_data/others/horiz_plane.asc"
success, cntnt = try_read_raster_band(raster_source=source_data)
print(success)

geotransform, projection, band_params, data = cntnt
ga = GeoArray(inGeotransform=geotransform, inLevels=[data])

"""
Explanation: Test case 2
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 1 as geological plane. We should get no intersection.
End of explanation
"""

from pygsf.orientations.orientations import Plane

gplane = Plane(azim=90.0, dip_ang=0.0)

"""
Explanation: The horizontal geological plane definition:
End of explanation
"""

pt = Point3D(0, 50, 1)

inters_pts = plane_dem_intersection(
    srcPlaneAttitude=gplane,
    srcPt=pt,
    geo_array=ga)

print(inters_pts)

"""
Explanation: The source point is located at (0, 50, 1)
End of explanation
"""

pt = Point3D(0, 50, 0)

inters_pts = plane_dem_intersection(
    srcPlaneAttitude=gplane,
    srcPt=pt,
    geo_array=ga)

print(inters_pts)

"""
Explanation: Ok, the list is empty, as expected.
Test case 3
Now we consider a horizontal plane at z = 0 as topographic surface (same as case 1) and another horizontal surface at z = 0 as geological plane. We should get all grid points as intersections. The variables are the same as Case 2, apart from the point definition:
End of explanation
"""

from bokeh.plotting import figure, output_notebook, show

x = list(map(lambda pt: pt.x, inters_pts))
y = list(map(lambda pt: pt.y, inters_pts))

output_notebook()
p = figure()
p.circle(x, y, size=2, color="navy", alpha=0.5)
show(p)

"""
Explanation: They seem correct, just quite numerous.. We visualize them with Bokeh.
End of explanation
"""
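Under the hood, test case 1 reduces to simple geometry: the height of a plane through (x0, y0, z0), dipping dip_ang towards azimuth azim, is z0 - tan(dip_ang) * d, where d is the horizontal distance measured along the dip direction; the intersection is where this equals the DEM height. A self-contained check of the 45°, East-dipping case (illustrative only, not the pygsf implementation):

```python
import math

def plane_z(x, y, src=(0.0, 50.0, 50.0), azim_deg=90.0, dip_deg=45.0):
    """Height of a plane through `src`, dipping `dip_deg` towards azimuth `azim_deg`."""
    x0, y0, z0 = src
    az, dip = math.radians(azim_deg), math.radians(dip_deg)
    # Horizontal distance from the source point, measured along the dip direction.
    d = (x - x0) * math.sin(az) + (y - y0) * math.cos(az)
    return z0 - d * math.tan(dip)

dem_z = 0.0  # horizontal topographic surface of test case 1
# Scan one West-East row of the 100 x 100 grid and find where the plane meets the DEM.
crossings = [x for x in range(101) if abs(plane_z(x, 50.0) - dem_z) < 1e-9]
print(crossings)  # [50]: matches the expected intersection at x = 50
```

For the horizontal plane of test case 2 (dip 0°, z = 1) the difference plane_z - dem_z is 1 everywhere, so no crossing exists, and for test case 3 it is 0 everywhere, so every node qualifies, which is exactly what the two runs above returned.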
parrt/msan692 | notes/excel.ipynb | mit
with open('data/SampleSuperstoreSales.xls', "rb") as f:
    txt = f.read()
print(txt[0:100])

"""
Explanation: Reading data from Excel
Let's get some data. Download Sample Superstore Sales .xls file or my local copy and open it in Excel to see what it looks like.
Data of interest that we want to process in Python often comes in the form of an Excel spreadsheet, but the data is in a special format that we can't read directly:
End of explanation
"""

import pandas
table = pandas.read_excel("data/SampleSuperstoreSales.xls")
table.head()

"""
Explanation: Converting Excel files with csvkit
There's a really useful toolkit called csvkit, which you can install with:
bash
pip install csvkit
Unfortunately, at the moment there is some kind of weird bug, unrelated to csvkit, so we get lots of warnings even though it works:
/Users/parrt/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
So, the following command works without having to run or even own Excel on your laptop, but you get lots of warnings:
bash
$ in2csv data/SampleSuperstoreSales.xls > /tmp/t.csv
Reading Excel files with Pandas
The easiest way to read Excel files with Python is to use Pandas:
End of explanation
"""

import sys
import csv

table_file = "data/SampleSuperstoreSales.csv"
with open(table_file, "r") as csvfile:
    f = csv.reader(csvfile, dialect='excel')
    data = []
    for row in f:
        data.append(row)
print(data[:6])

"""
Explanation: CSV data
Grab the CSV version of the Excel file SampleSuperstoreSales.csv we've been playing with.
Dealing with commas and double quotes in CSV
For the most part, CSV files are very simple, but they can get complicated when we need to embed a comma. One such case from the above file shows how fields with commas get quoted:
"Eldon Base for stackable storage shelf, platinum"
What happens when we want to encode a quote?
Well, somehow people decided that "" double quotes was the answer (not!) and we get fields encoded like this:
"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators"
The good news is that Python's csv package knows how to read Excel-generated files that use such encoding. Here's a sample script that reads such a file into a list of lists:
End of explanation
"""

import numpy as np
np.array(data)

"""
Explanation: Or add to a numpy array:
End of explanation
"""

import pandas
df = pandas.read_csv("data/SampleSuperstoreSales.csv")
df.head()

"""
Explanation: Reading CSV into Pandas Data frames
In the end, the easiest way to deal with loading CSV files is probably with Pandas. For example, to load our sales CSV, we don't even have to manually open and close a file:
End of explanation
"""

df['Customer Name'].head()

df.Profit.head()

"""
Explanation: Pandas hides all of the details. I also find that pulling out columns is nice with pandas. Here's how to print the customer name column:
End of explanation
"""

df = pandas.read_csv("data/SampleSuperstoreSales.csv")
(df['Order Quantity']*df['Unit Price']).head()

"""
Explanation: You can learn more about slicing and dicing data from our Boot Camp notes.
Exercise
Read the AAPL.csv file into a data frame using Pandas.
Exercise
From the sales CSV file, use pandas to read in the data and multiply the Order Quantity and Unit Price columns to get a new column.
End of explanation
"""
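The two quoting rules described above (quote any field containing a comma, and double any embedded quote) are exactly what Python's csv module applies in its default excel dialect. A quick round-trip with the actual field values from the sales file demonstrates it:

```python
import csv
import io

row = ['Eldon Base for stackable storage shelf, platinum',
       '1.7 Cubic Foot Compact "Cube" Office Refrigerators']

# Encode: the comma forces quoting on the first field; the embedded
# quotes in the second field are doubled.
buf = io.StringIO()
csv.writer(buf, dialect='excel').writerow(row)
encoded = buf.getvalue()
print(encoded)
# "Eldon Base for stackable storage shelf, platinum","1.7 Cubic Foot Compact ""Cube"" Office Refrigerators"

# Decode: reading it back recovers the original fields, commas and quotes included.
decoded = next(csv.reader(io.StringIO(encoded), dialect='excel'))
print(decoded == row)  # True
```

This is why hand-splitting CSV lines on commas breaks on real exports, while csv.reader (and pandas.read_csv, which follows the same rules) handles them correctly.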
ES-DOC/esdoc-jupyterhub | notebooks/cnrm-cerfacs/cmip6/models/sandbox-1/seaice.ipynb | gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: CNRM-CERFACS Source ID: SANDBOX-1 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:52 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. 
Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. 
Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. 
Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)

"""
Explanation: 5.4. Metrics Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)

"""
Explanation: 5.5. Variables
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)

"""
Explanation: 6. Key Properties --&gt; Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)

"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g.
minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. 
Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)

"""
Explanation: 8.2. Properties
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)

"""
Explanation: 8.3. Budget
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)

"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)

"""
Explanation: 8.5.
Corrected Conserved Prognostic Variables
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)

"""
Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)

"""
Explanation: 9.2. Grid Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)

"""
Explanation: 9.3. Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
What is the advection scheme?
End of explanation
"""

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)

"""
Explanation: 9.4.
Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. 
Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. 
Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. 
Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. 
Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. 
Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. 
Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
%matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') """ Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. 
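As an aside, the two-player game described above is often summarised by the minimax objective from the original GAN paper, where $p_{data}$ is the data distribution and $p_z$ is the latent prior: $$ \min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] $$ We won't optimize this expression directly; the sigmoid cross-entropy losses built later in this notebook implement the same game.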
End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='z') return inputs_real, inputs_z """ Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation """ def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('generator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out, logits """ Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. 
This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation """ def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. 
Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('discriminator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits """ Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. End of explanation """ # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 """ Explanation: Hyperparameters End of explanation """ tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model, g_logits = generator(input_z, input_size, g_hidden_size, alpha=alpha) # g_model is the generator output # Discriminator network here d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True, alpha=alpha) """ Explanation: Build network Now we're building the network from the functions defined above. 
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation """ # Calculate losses labels_real = tf.ones_like(d_logits_real) * (1 - smooth) d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=labels_real)) labels_fake = tf.zeros_like(d_logits_fake) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels_fake)) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) """ Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images.
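As a sanity check on this loss, the quantity tf.nn.sigmoid_cross_entropy_with_logits computes can be reproduced in plain NumPy; this is a sketch of the numerically stable formula, not part of the original notebook:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # Stable form of -labels*log(sigmoid(x)) - (1 - labels)*log(1 - sigmoid(x))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([2.0, -1.0, 0.0])
real_labels = np.ones_like(logits)    # labels of all ones for real images
print(sigmoid_xent(logits, real_labels).mean())
```

Averaging over the batch with .mean() mirrors the tf.reduce_mean wrapper described above.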
To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. End of explanation """ # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [v for v in t_vars if v.name.startswith('generator')] d_vars = [v for v in t_vars if v.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) """ Explanation: Optimizers We want to update the generator and discriminator variables separately.
So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
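The scope-based split described above is plain string matching on variable names. With hypothetical names standing in for the output of tf.trainable_variables(), it looks like this (an illustration, not part of the original notebook):

```python
# Hypothetical variable names, standing in for v.name of tf.trainable_variables()
names = ['generator/hidden1/kernel:0', 'generator/hidden1/bias:0',
         'discriminator/hidden1/kernel:0', 'discriminator/hidden1/bias:0']
g_names = [n for n in names if n.startswith('generator')]
d_names = [n for n in names if n.startswith('discriminator')]
print(g_names)  # ['generator/hidden1/kernel:0', 'generator/hidden1/bias:0']
print(d_names)  # ['discriminator/hidden1/kernel:0', 'discriminator/hidden1/bias:0']
```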
End of explanation """ batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) """ Explanation: Training End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() """ Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. 
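GAN losses are noisy per-batch estimates, so the raw curves can be hard to read. A simple moving average (a sketch; not part of the original plotting code) makes the trends clearer:

```python
import numpy as np

def smooth(values, window=5):
    # Moving average; 'valid' drops the edges where the window doesn't fully fit
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

noisy = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(smooth(noisy, window=3))  # [2. 3. 4. 5.]
```

Plotting a smoothed copy of each loss series alongside the raw values keeps the overall trend while damping batch-to-batch noise.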
End of explanation """ def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch][0]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) """ Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation """ _ = view_samples(-1, samples) """ Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation """ rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[0][::int(len(sample[0])/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) """ Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! End of explanation """ saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) """ Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. 
Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation """
sauravrt/signal-processing
ipynb/ComplexCircularGaussian.ipynb
gpl-2.0
# magic %matplotlib inline import numpy as np import matplotlib.pyplot as plt # prettyplot stuff import seaborn as sns sns.set(style='ticks', palette='Set2') sns.despine() mu = 0 sigmasq = 1 sd = np.sqrt(sigmasq) # Generate complex gaussian r.v. samples x = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000) y = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000) z = x + 1j*y h = plt.plot(np.real(z), np.imag(z), 'o') plt.axis('equal') plt.grid(True) """ Explanation: Product of independent complex-circular Gaussian Random Variables Let $z\sim \mathcal{CN}(0, \sigma^2)$. If $z = x + {\rm j}y$, then both $x$ and $y$ are zero-mean Gaussian r.v.s with variance $\sigma^2/2$. End of explanation """ z_mag = np.abs(z) z_arg = np.angle(z) plt.subplot(211) plt.hist(z_mag, 20) plt.ylabel('Histogram of |z|') plt.subplot(212) plt.hist(z_arg, 20) plt.ylabel('Histogram of arg(z)') """ Explanation: Express $z = |z|e^{j\phi}$. The magnitude $|z|$ is Rayleigh distributed while the phase $\phi = \operatorname{arg}(z)$ is uniform over the interval $[-\pi, \pi)$. End of explanation """ # Generate complex gaussian r.v. samples for w u = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000) v = np.random.normal(loc = mu, scale = sd/np.sqrt(2), size = 1000) w = u + 1j*v p = w*z p_arg = np.angle(p) h = plt.hist(p_arg) """ Explanation: Consider two independent complex Gaussian r.v.s $w$ and $z$, both $\mathcal{CN}(0, \sigma^2)$. Let $w = |w|e^{j\theta}$ and let the product $p = wz = |p|e^{j\omega}$. Q: What is the distribution of the phase $\omega$ of the product $p = wz$? End of explanation """
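The histogram suggests the answer: the phase of the product is again uniform on $[-\pi, \pi)$, since $\omega = \theta + \phi \pmod{2\pi}$ and adding an independent angle to a uniformly distributed angle (modulo $2\pi$) leaves it uniform. A quick numerical check with a fixed seed (a sketch, independent of the cells above):

```python
import numpy as np

rng = np.random.RandomState(0)
n, sd = 100000, 1.0
z = rng.normal(scale=sd/np.sqrt(2), size=n) + 1j * rng.normal(scale=sd/np.sqrt(2), size=n)
w = rng.normal(scale=sd/np.sqrt(2), size=n) + 1j * rng.normal(scale=sd/np.sqrt(2), size=n)
p_arg = np.angle(w * z)                  # phase of the product

# If uniform on [-pi, pi), each bin should hold roughly n/bins samples
counts, _ = np.histogram(p_arg, bins=10, range=(-np.pi, np.pi))
print(counts / (n / 10))                 # every ratio close to 1
```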
ageron/tensorflow-safari-course
10_training_deep_nets_ex9.ipynb
apache-2.0
from __future__ import absolute_import, division, print_function, unicode_literals import tensorflow as tf tf.__version__ from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("tmp/data/") """ Explanation: Try not to peek at the solutions when you go through the exercises. ;-) First let's make sure this notebook works well in both Python 2 and Python 3: End of explanation """ from functools import partial n_inputs = 28 * 28 n_hidden1 = 100 n_hidden2 = 100 n_outputs = 10 graph = tf.Graph() with graph.as_default(): with tf.name_scope("inputs"): X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X") y = tf.placeholder(tf.int32, shape=[None], name="y") he_init = tf.contrib.layers.variance_scaling_initializer() dense_layer = partial(tf.layers.dense, kernel_initializer=he_init, activation=tf.nn.elu) hidden1 = dense_layer(X, n_hidden1, name="hidden1") hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2") logits = dense_layer(hidden2, n_outputs, activation=None, name="output") Y_proba = tf.nn.softmax(logits) with tf.name_scope("train"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y) loss = tf.reduce_mean(xentropy) optimizer = tf.train.AdamOptimizer() training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) with tf.name_scope("init_and_save"): init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 20 batch_size = 50 with tf.Session(graph=graph) as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels}) print(epoch, "Train accuracy:", acc_train, 
"Validation accuracy:", acc_val) save_path = saver.save(sess, "./my_mnist_model") """ Explanation: Techniques for Training Deep Nets Using He initialization and the ELU activation function (with the help of a partial()): End of explanation """ n_inputs = 28 * 28 n_hidden1 = 100 n_hidden2 = 100 n_outputs = 10 graph = tf.Graph() with graph.as_default(): with tf.name_scope("inputs"): X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X") y = tf.placeholder(tf.int32, shape=[None], name="y") he_init = tf.contrib.layers.variance_scaling_initializer() dense_layer = partial(tf.layers.dense, kernel_initializer=he_init, activation=tf.nn.elu) hidden1 = dense_layer(X, n_hidden1, name="hidden1") hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2") logits = dense_layer(hidden2, n_outputs, activation=None, name="output") Y_proba = tf.nn.softmax(logits) with tf.name_scope("train"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y) loss = tf.reduce_mean(xentropy) optimizer = tf.train.AdamOptimizer() training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) with tf.name_scope("init_and_save"): init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Exercise 9 In this exercise, you will add a 50% dropout rate to the following neural network model below. 9.1) Add a training placeholder, of type tf.bool. Tip: you can use tf.placeholder_with_default() to make this False by default. 9.2) Add a dropout layer between the input layer and the first hidden layer, using tf.layers.dropout(). 
End of explanation """ n_epochs = 20 batch_size = 50 with tf.Session(graph=graph) as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels}) print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val) save_path = saver.save(sess, "./my_mnist_model") """ Explanation: 9.3) Update the following training code to feed the value of the training placeholder, where appropriate, then run the code and see if the model performs better than without dropout. End of explanation """ n_inputs = 28 * 28 n_hidden1 = 100 n_hidden2 = 100 n_outputs = 10 dropout_rate = 0.5 # <= CHANGED graph = tf.Graph() with graph.as_default(): with tf.name_scope("inputs"): X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X") y = tf.placeholder(tf.int32, shape=[None], name="y") training = tf.placeholder_with_default(False, shape=[], name='training') # <= CHANGED X_drop = tf.layers.dropout(X, dropout_rate, training=training) # <= CHANGED he_init = tf.contrib.layers.variance_scaling_initializer() dense_layer = partial(tf.layers.dense, kernel_initializer=he_init, activation=tf.nn.elu) hidden1 = dense_layer(X_drop, n_hidden1, name="hidden1") # <= CHANGED hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2") logits = dense_layer(hidden2, n_outputs, activation=None, name="output") Y_proba = tf.nn.softmax(logits) with tf.name_scope("train"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y) loss = tf.reduce_mean(xentropy) optimizer = tf.train.AdamOptimizer() training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) with 
tf.name_scope("init_and_save"): init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Try not to peek at the solution below before you have done the exercise! :) Exercise 9 - Solution 9.1-2) End of explanation """ n_epochs = 20 batch_size = 50 with tf.Session(graph=graph) as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True}) # <= CHANGED acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels}) print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val) save_path = saver.save(sess, "./my_mnist_model") """ Explanation: 9.3) End of explanation """ n_epochs = 1000 batch_size = 50 best_acc_val = 0 check_interval = 100 checks_since_last_progress = 0 max_checks_without_progress = 100 with tf.Session(graph=graph) as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True}) if iteration % check_interval == 0: acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[:2000], y: mnist.validation.labels[:2000]}) if acc_val > best_acc_val: best_acc_val = acc_val checks_since_last_progress = 0 saver.save(sess, "./my_best_model_so_far") else: checks_since_last_progress += 1 acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[2000:], y: mnist.validation.labels[2000:]}) print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val, "Best validation accuracy:", best_acc_val) if checks_since_last_progress > max_checks_without_progress: print("Early stopping!") saver.restore(sess, 
"./my_best_model_so_far") break acc_test = accuracy.eval(feed_dict={X: mnist.test.images[2000:], y: mnist.test.labels[2000:]}) print("Final accuracy on test set:", acc_test) save_path = saver.save(sess, "./my_mnist_model") """ Explanation: Early Stopping End of explanation """ def get_model_params(): gvars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) return {gvar.op.name: value for gvar, value in zip(gvars, tf.get_default_session().run(gvars))} def restore_model_params(model_params): gvar_names = list(model_params.keys()) assign_ops = {gvar_name: tf.get_default_graph().get_operation_by_name(gvar_name + "/Assign") for gvar_name in gvar_names} init_values = {gvar_name: assign_op.inputs[1] for gvar_name, assign_op in assign_ops.items()} feed_dict = {init_values[gvar_name]: model_params[gvar_name] for gvar_name in gvar_names} tf.get_default_session().run(assign_ops, feed_dict=feed_dict) n_epochs = 1000 batch_size = 50 best_acc_val = 0 check_interval = 100 checks_since_last_progress = 0 max_checks_without_progress = 100 best_model_params = None with tf.Session(graph=graph) as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True}) if iteration % check_interval == 0: acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[:2000], y: mnist.validation.labels[:2000]}) if acc_val > best_acc_val: best_acc_val = acc_val checks_since_last_progress = 0 best_model_params = get_model_params() else: checks_since_last_progress += 1 acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[2000:], y: mnist.validation.labels[2000:]}) print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val, "Best validation accuracy:", best_acc_val) if checks_since_last_progress > max_checks_without_progress: print("Early 
stopping!") break if best_model_params: restore_model_params(best_model_params) acc_test = accuracy.eval(feed_dict={X: mnist.test.images[2000:], y: mnist.test.labels[2000:]}) print("Final accuracy on test set:", acc_test) save_path = saver.save(sess, "./my_mnist_model") """ Explanation: Saving the model to disk so often slows down training. Let's save to RAM instead: End of explanation """
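The patience logic used in both loops above is independent of TensorFlow; a minimal sketch of the same bookkeeping (an illustration, not the notebook's code):

```python
def best_checkpoint(val_scores, patience):
    """Index of the best score, stopping after `patience` checks without progress."""
    best, best_idx, since_best = float('-inf'), -1, 0
    for i, score in enumerate(val_scores):
        if score > best:
            best, best_idx, since_best = score, i, 0
        else:
            since_best += 1
        if since_best > patience:
            break                     # early stopping!
    return best_idx

print(best_checkpoint([0.10, 0.50, 0.40, 0.45, 0.44, 0.60], patience=2))  # 1
```

Note that the 0.60 at index 5 is never reached: patience trades a chance of missing a late improvement for shorter training.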
googledatalab/notebooks
samples/ML Toolbox/Image Classification/Flower/Local End to End.ipynb
apache-2.0
!mkdir -p /content/flowerdata !gsutil -m cp gs://cloud-datalab/sampledata/flower/* /content/flowerdata """ Explanation: Efficient training for image classification Transfer learning using Inception Package - Local Run Experience Traditionally, image classification required a very large corpus of training data - often millions of images, which may not be available - and a long time to train on those images, which is expensive and time consuming. That has changed with transfer learning, which can be readily used with Cloud ML Engine, without deep knowledge of image classification algorithms, using the ML toolbox in Datalab. This notebook codifies the capabilities discussed in this blog post. In a nutshell, it uses the pre-trained inception model as a starting point and then uses transfer learning to train it further on additional, customer-specific images. For explanation, simple flower images are used. Compared to training from scratch, the training data requirements, time and costs are drastically reduced. This notebook does all operations in the Datalab container without calling the CloudML API. Hence, these are called "local" operations - though Datalab itself is most often running on a GCE VM. See the corresponding cloud notebook for the cloud experience, which only adds the --cloud parameter and some config to the local experience commands. The purpose of local work is to do some initial prototyping and debugging on small scale data - often by taking a suitable (say 0.1 - 1%) sample of the full data. The same basic steps can then be repeated with much larger datasets in the cloud. Setup All data is available under gs://cloud-datalab/sampledata/flower. eval100 is a subset of eval300, which is a subset of eval670. Same for train data.
End of explanation """ import mltoolbox.image.classification as model from google.datalab.ml import * worker_dir = '/content/datalab/tmp/flower' preprocessed_dir = worker_dir + '/flowerrunlocal' model_dir = worker_dir + '/tinyflowermodellocal' prediction_dir = worker_dir + '/flowermodelevallocal' images_dir = worker_dir + '/images' local_train_file = '/content/flowerdata/train200local.csv' local_eval_file = '/content/flowerdata/eval100local.csv' !mkdir -p {images_dir} """ Explanation: Define directories for preprocessing, model, and prediction. End of explanation """ import csv import datalab.storage as gcs import os def download_images(input_csv, output_csv, images_dir): with open(input_csv) as csvfile: data = list(csv.DictReader(csvfile, fieldnames=['image_url', 'label'])) for x in data: url = x['image_url'] out_file = os.path.join(images_dir, os.path.basename(url)) with open(out_file, 'w') as f: f.write(gcs.Item.from_url(url).read_from()) x['image_url'] = out_file with open(output_csv, 'w') as w: csv.DictWriter(w, fieldnames=['image_url', 'label']).writerows(data) download_images('/content/flowerdata/train200.csv', local_train_file, images_dir) download_images('/content/flowerdata/eval100.csv', local_eval_file, images_dir) """ Explanation: In order to get best efficiency, we download the images to local disk, and create our training and evaluation files to reference local path instead of GCS path. Note that the original training files referencing GCS image paths work too, although a bit slower. End of explanation """ !head /content/flowerdata/train200.csv -n 5 !head {local_train_file} -n 5 """ Explanation: The above code can best be illustrated by the comparison below. End of explanation """ # instead of local_train_file, it can take '/content/flowerdata/train200.csv' too, but processing will be slower. 
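The path rewrite inside download_images above boils down to replacing each GCS URL with the local file it was copied to; with a hypothetical row it looks like this (a sketch, not part of the notebook's pipeline):

```python
import os

images_dir = '/tmp/images'  # hypothetical local directory
row = {'image_url': 'gs://cloud-ml-data/img/flower_photos/daisy/100.jpg', 'label': 'daisy'}
row['image_url'] = os.path.join(images_dir, os.path.basename(row['image_url']))
print(row['image_url'])  # /tmp/images/100.jpg
```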
train_set = CsvDataSet(local_train_file, schema='image_url:STRING,label:STRING') model.preprocess(train_set, preprocessed_dir) """ Explanation: Preprocess Preprocessing uses a Dataflow pipeline to convert the image format, resize images, and run the converted image through a pre-trained model to get the features or embeddings. You can also do this step using alternate technologies like Spark or plain Python code if you like. The following cell takes ~5 min on a n1-standard-1 VM. Preprocessing the full 3000 images takes about one hour. End of explanation """ import logging logging.getLogger().setLevel(logging.INFO) model.train(preprocessed_dir, 30, 800, model_dir) logging.getLogger().setLevel(logging.WARNING) """ Explanation: Train The next step is to train the inception model with the preprocessed images using transfer learning. Transfer learning retains most of the inception model but replaces the final layer as shown in the image. End of explanation """ tb_id = TensorBoard.start(model_dir) """ Explanation: Run TensorBoard to visualize the completed training. Review accuracy and loss in particular. End of explanation """ summary = Summary(model_dir) summary.list_events() summary.plot('accuracy') summary.plot('loss') """ Explanation: We can check the TF summary events from training. End of explanation """ images = [ 'gs://cloud-ml-data/img/flower_photos/daisy/15207766_fc2f1d692c_n.jpg', 'gs://cloud-ml-data/img/flower_photos/tulips/6876631336_54bf150990.jpg' ] # set show_image to False to not display pictures. model.predict(model_dir, images, show_image=True) """ Explanation: Predict Let's start with a quick check by taking a couple of images and using the model to predict the type of flower locally. 
End of explanation """ import google.datalab.bigquery as bq bq.Dataset('flower').create() eval_set = CsvDataSet(local_eval_file, schema='image_url:STRING,label:STRING') model.batch_predict(eval_set, model_dir, output_bq_table='flower.eval_results_local') """ Explanation: Evaluate We did a quick test of the model using a few samples. But we need to understand how the model does by evaluating it against much larger amount of labeled data. In the initial preprocessing step, we did set aside enough images for that purpose. Next, we will use normal batch prediction and compare the results with the previously labeled targets. The following batch prediction and loading of results takes ~3 minutes. End of explanation """ %%bq query --name wrong_prediction SELECT * FROM flower.eval_results_local where target != predicted wrong_prediction.execute().result() """ Explanation: Now that we have the results and expected results loaded in a BigQuery table, let's start analyzing the errors and plot the confusion matrix. End of explanation """ ConfusionMatrix.from_bigquery('flower.eval_results_local').plot() """ Explanation: Confusion matrix is a common way of comparing the confusion of the model - aggregate data about where the actual result did not match the expected result. 
End of explanation """ %%bq query --name accuracy SELECT target, SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct, COUNT(*) as total, SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy FROM flower.eval_results_local GROUP BY target accuracy.execute().result() %%bq query --name logloss SELECT feature, AVG(-logloss) as logloss, count(*) as count FROM ( SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss FROM ( SELECT target as feature, CASE WHEN target=predicted THEN 1 ELSE 0 END as correct, target_prob as prob FROM flower.eval_results_local)) GROUP BY feature FeatureSliceView().plot(logloss) """ Explanation: More advanced analysis can be done using the feature slice view. For the feature slice view, let's define SQL queries that compute accuracy and log loss and then use the metrics. End of explanation """ import shutil import google.datalab.bigquery as bq TensorBoard.stop(tb_id) bq.Table('flower.eval_results_local').delete() shutil.rmtree(worker_dir) """ Explanation: Clean up End of explanation """
mne-tools/mne-tools.github.io
0.19/_downloads/03c9d71de135994dbf45db72856a1f9a/plot_mne_inverse_envelope_correlation.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # Sheraz Khan <sheraz@khansheraz.com> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import os.path as op import numpy as np import matplotlib.pyplot as plt import mne from mne.connectivity import envelope_correlation from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs from mne.preprocessing import compute_proj_ecg, compute_proj_eog data_path = mne.datasets.brainstorm.bst_resting.data_path() subjects_dir = op.join(data_path, 'subjects') subject = 'bst_resting' trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif') src = op.join(subjects_dir, subject, 'bem', subject + '-oct-6-src.fif') bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif') raw_fname = op.join(data_path, 'MEG', 'bst_resting', 'subj002_spontaneous_20111102_01_AUX.ds') """ Explanation: Compute envelope correlations in source space Compute envelope correlations of orthogonalized activity [1] [2] in source space using resting state CTF data. End of explanation """ raw = mne.io.read_raw_ctf(raw_fname, verbose='error') raw.crop(0, 60).load_data().pick_types(meg=True, eeg=False).resample(80) raw.apply_gradient_compensation(3) projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2) projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407') raw.info['projs'] += projs_ecg raw.info['projs'] += projs_eog raw.apply_proj() cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest """ Explanation: Here we do some things in the name of speed, such as crop (which will hurt SNR) and downsample. Then we compute SSP projectors and apply them. End of explanation """ raw.filter(14, 30) events = mne.make_fixed_length_events(raw, duration=5.) epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5., baseline=None, reject=dict(mag=8e-13), preload=True) del raw """ Explanation: Now we band-pass filter our data and create epochs. 
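Before running the full pipeline, the core idea of envelope correlation can be sketched in NumPy alone: take the amplitude envelope of each band-limited signal via the analytic signal, then correlate the envelopes. This toy example (an illustration of the concept only; it omits the orthogonalization step of [1] and [2]) builds two carriers sharing a slow amplitude modulation:

```python
import numpy as np

def envelope(x):
    # Amplitude envelope via the analytic signal (what scipy.signal.hilbert computes)
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0      # n assumed even here
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

t = np.arange(2000) / 200.0                         # 10 s at 200 Hz
shared = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)    # common slow modulation
x = shared * np.sin(2 * np.pi * 20.0 * t)
y = shared * np.sin(2 * np.pi * 22.0 * t + 1.0)
print(np.corrcoef(envelope(x), envelope(y))[0, 1])  # close to 1
```

Even though the carriers differ in frequency and phase, their envelopes co-vary, which is exactly the coupling that envelope correlation measures.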
End of explanation """ src = mne.read_source_spaces(src) fwd = mne.make_forward_solution(epochs.info, trans, src, bem) inv = make_inverse_operator(epochs.info, fwd, cov) del fwd, src """ Explanation: Compute the forward and inverse End of explanation """ labels = mne.read_labels_from_annot(subject, 'aparc_sub', subjects_dir=subjects_dir) epochs.apply_hilbert() # faster to apply in sensor space stcs = apply_inverse_epochs(epochs, inv, lambda2=1. / 9., pick_ori='normal', return_generator=True) label_ts = mne.extract_label_time_course( stcs, labels, inv['src'], return_generator=True) corr = envelope_correlation(label_ts, verbose=True) # let's plot this matrix fig, ax = plt.subplots(figsize=(4, 4)) ax.imshow(corr, cmap='viridis', clim=np.percentile(corr, [5, 95])) fig.tight_layout() """ Explanation: Compute label time series and do envelope correlation End of explanation """ threshold_prop = 0.15 # percentage of strongest edges to keep in the graph degree = mne.connectivity.degree(corr, threshold_prop=threshold_prop) stc = mne.labels_to_stc(labels, degree) stc = stc.in_label(mne.Label(inv['src'][0]['vertno'], hemi='lh') + mne.Label(inv['src'][1]['vertno'], hemi='rh')) brain = stc.plot( clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot', subjects_dir=subjects_dir, views='dorsal', hemi='both', smoothing_steps=25, time_label='Beta band') """ Explanation: Compute the degree and plot it End of explanation """
davicsilva/dsintensive
notebooks/miniprojects/data_wrangling_json/sliderule_dsi_json_exercise.ipynb
apache-2.0
import pandas as pd """ Explanation: JSON examples and exercise get familiar with packages for dealing with JSON study examples with JSON strings and files work on exercise to be completed and submitted reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader data source: http://jsonstudio.com/resources/ End of explanation """ import json from pandas.io.json import json_normalize """ Explanation: imports for Python, Pandas End of explanation """ # define json string data = [{'state': 'Florida', 'shortname': 'FL', 'info': {'governor': 'Rick Scott'}, 'counties': [{'name': 'Dade', 'population': 12345}, {'name': 'Broward', 'population': 40000}, {'name': 'Palm Beach', 'population': 60000}]}, {'state': 'Ohio', 'shortname': 'OH', 'info': {'governor': 'John Kasich'}, 'counties': [{'name': 'Summit', 'population': 1234}, {'name': 'Cuyahoga', 'population': 1337}]}] # use normalization to create tables from nested element json_normalize(data, 'counties') # further populate tables created from nested element json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']]) """ Explanation: JSON example, with string demonstrates creation of normalized dataframes (tables) from nested json string source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization End of explanation """ # load json as string json.load((open('data/world_bank_projects_less.json'))) # load as Pandas dataframe sample_json_df = pd.read_json('data/world_bank_projects_less.json') sample_json_df """ Explanation: JSON example, with file demonstrates reading in a json file as a string and as a table uses small sample file containing data about projects funded by the World Bank data source: http://jsonstudio.com/resources/ End of explanation """ # load 'data/world_bank_projects.json' as Pandas dataframe world_bank_projects_df = pd.read_json('data/world_bank_projects.json') """ Explanation: JSON exercise Using data in file 'data/world_bank_projects.json' and the 
techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
1. Find the 10 countries with most projects
End of explanation
"""
world_bank_projects_df.groupby(['countryname']).size().sort_values(ascending=False).head(10)
"""
Explanation: Step-by-step:
Group DataFrame by 'countryname'
Count records/'countryname' (size( ))
Order the list from the max to the min projects/country, listing only the Top-10 (head( 10 ))
End of explanation
"""
world_bank_projects_df['major_code_project'] = \
    pd.DataFrame(world_bank_projects_df.mjtheme_namecode.values)
world_bank_projects_df['major_code_project']
"""
Explanation: 2. Find the top 10 major project themes (using column 'mjtheme_namecode')
Step-by-step:
A new column 'major_code_project' will be added to 'world_bank_projects_df' DataFrame
This column will contain the major project code, that is part of 'mjtheme_namecode' JSON column
End of explanation
"""
# ... NOT finished yet...
"""
Explanation: Now we proceed with the process of groupby, count and sort to find the Top10 project themes.
End of explanation
"""
# Remembering merge "flights" dataset with "airports":
# flights = pd.merge(flights, airports, left_on="airport-A", right_on="airport", how='left')
# ... NOT finished yet...
"""
Explanation: 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
"""
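For part 3 of the exercise (left unfinished above), one possible sketch: flatten the theme records, build a code-to-name lookup from the rows where the name is present, then map it back over every row. The toy records below are stand-ins for the real 'mjtheme_namecode' entries.

```python
import pandas as pd

# toy stand-in for the flattened 'mjtheme_namecode' records
themes = pd.DataFrame([{'code': '1', 'name': 'Economic management'},
                       {'code': '1', 'name': ''},
                       {'code': '8', 'name': 'Human development'},
                       {'code': '8', 'name': ''}])
# code -> name lookup built from rows where the name is present
lookup = (themes.loc[themes['name'] != '']
                .drop_duplicates('code')
                .set_index('code')['name'])
# fill every row's name from its code
themes['name'] = themes['code'].map(lookup)
```

On the real data the same idea applies after `json_normalize(data, 'mjtheme_namecode')`.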
rgbrown/invariants
code/Moving frame calculations.ipynb
mit
from sympy import Function, Symbol, symbols, init_printing, expand, I, re, im, diff, simplify
from IPython.display import Math, display
init_printing()
from transvectants import *

def disp(expr):
    display(Math(my_latex(expr)))

# p and q are \bar{x} \bar{y}
x, y = symbols('x y')
p, q = symbols('p q')
a, b, c, d = symbols('a b c d')
p = ((a*x - b*y)*(1 + c*x - d*y) + (b*x + a*y)*(d*x + c*y))/((1 + c*x - d*y)**2 + (d*x + c*y)**2)
q = ((b*x + a*y)*(1 + c*x - d*y) - (a*x - b*y)*(d*x + c*y))/((1 + c*x - d*y)**2 + (d*x + c*y)**2)
# Can we tidy this later - but this does work
g = Function('g')(p, q)
g

# In the below, interpret fb_blah as the f derivative
foo = diff(g, x).subs([(x, 0), (y, 0)])
foo
disp(diff(g, x).subs([(x, 0), (y, 0)]))
disp(diff(g, y).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x, y).subs([(x, 0), (y, 0)]))
disp(diff(g, y, y).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x, y).subs([(x, 0), (y, 0)]))
disp(diff(g, x, y, y).subs([(x, 0), (y, 0)]))
print('boo')
disp(diff(g, y, y, y).subs([(x, 0), (y, 0)]))
"""
Explanation: Moving frame calculations
General idea
Fundamental thing to start with
$$ f(z) = \bar{f}(\bar{z}) $$
Then you need a general group transformation. Which you should simplify as far as possible.
First: translations can be removed. This is always true, since you have choice of cross-section. It is an extremely good idea since it means that you will now evaluate everything at (x,y) = (0,0).
Second: Remove any other group parameters you can.
Now prolong the group action. Just write it out, don't think. The code below should help.
Turn the prolonged action into a matrix up to the appropriate number of derivatives. Remember that you are solving for the entries of the matrix, not for the vectors.
Now comes the art. You need to find the rest of the cross-section. Choose values for sets of the barred derivatives in order to get all the parameters. What is left over is an invariant.
Mobius Fundamental thing to start with $$ f(z) = \bar{f}(\bar{z}) $$ A general Mobius transformation is $$ \bar{z} = \frac{\alpha z + \beta}{\gamma z + \delta} $$ Assuming $\delta \neq 0$, we can normalise it: $\delta = 1$. For our cross-section, we'll choose $\bar{z} = 0$. From any point $z$, this determines $\beta$, so wlog assume we start at $z = 0$, i.e. that $\beta = 0$. So $z = 0$ from now on!!!. $$ \bar{x} + i\bar{y} = \frac{(a + ib)(x + iy)}{1 + (c + id)(x + iy)} $$ After the zeroth-order frame translates the general point $\bar{x}$ to $0$. So all derivative calculations will be evaluated at $x = y = 0$. End of explanation """ x, y = symbols('x y', real=True) a0, a1, a2, a3, b0, b1, b2, b3 = symbols('a_0 a_1 a_2 a_3 b_0 b_1 b_2 b_3', real=True) z = x + I*y # We have removed the a_0 + I*b_0 term to take out the translation w = (a1 + I*b1)*z + (a2 + I*b2)*z**2 + (a3 + I*b3)*z**3 p = re(w) q = im(w) p fb = Function('g')(p, q) disp(diff(fb, x).subs([(x, 0), (y, 0)])) disp(diff(fb, y).subs([(x, 0), (y, 0)])) disp(diff(fb, x, x).subs([(x, 0), (y, 0)])) disp(diff(fb, x, y).subs([(x, 0), (y, 0)])) disp(diff(fb, y, y).subs([(x, 0), (y, 0)])) disp(diff(fb, x, x, x).subs([(x, 0), (y, 0)])) disp(diff(fb, x, x, y).subs([(x, 0), (y, 0)])) disp(diff(fb, x, y, y).subs([(x, 0), (y, 0)])) print('boo') disp(diff(fb, y, y, y).subs([(x, 0), (y, 0)])) """ Explanation: Write this out as a matrix Now pick the cross-section Conformal OK, let's try again. This time we are gonna be awesome and do conformal. 
The Taylor expansion of a general conformal map up to third order is
$$ \bar{z} = c_0 + c_1 z + c_2 z^2 + c_3 z^3 $$
Or in components,
$$ \begin{align}
\bar{x} &= a_0 + a_1 x + a_2 (x^2 - y^2) + a_3 (x^3 - 3 xy^2) - b_1 y - 2b_2xy - 3b_3x^2y + b_3y^3 \\
\bar{y} &= b_0 + b_1 x + b_2 (x^2 - y^2) + b_3 (x^3 - 3 xy^2) + a_1 y + 2a_2xy + 3a_3x^2y - a_3y^3
\end{align} $$
End of explanation
"""
disp(expand(partial_transvectant((f, f, f), [[0, 1], [0, 1], [0, 2], [0, 2]])))
disp(expand(partial_transvectant((f, f, f, f, f), [[0, 1], [0, 1], [2, 3], [2, 3], [2, 4]])
 ) -2*(expand(partial_transvectant((f, f, f, f, f), [[0, 1], [1, 2], [2, 3], [3, 0], [0, 4]])
 )))
disp(expand(partial_transvectant((f, f, f), [[0, 1], [0, 1], [0, 1], [0, 2]])))
disp(expand(partial_transvectant((f, f), [[0, 1], [0, 1], [0, 1]])))
#C = transvectant(f, f, 2)
#D = -partial_transvectant((f, f, f), [[0, 1], [1, 2]])
# We are going to build these by weight, not degree.
# Hence order does not match the paper
# Weight 4 (2 of 'em)
I4_1 = partial_transvectant((f,f),[[0,1],[0,1]]) # = C
I4_2 = partial_transvectant((f, f, f), [[0, 1], [1, 2]]) # = -D
# Weight 6 (2 of 'em)
print('weight 3:')
I6_1 = partial_transvectant((f,f,f),[[0,1],[0,1],[0,2]]) # = transvectant(f, C, 1)
I6_2 = partial_transvectant((f,f,f,f),[[0,1],[0,2],[0,3]])
# Weight 8 (7 of 'em??)
print('weight 4:') I8_1 = expand(partial_transvectant((f,f,f),[[0,1],[0,1],[1,2],[0,2]])) I8_2 = expand(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[2,3]])) I8_3 = expand(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0]])) I8_4 = expand(partial_transvectant((f,f,f,f),[[0,1],[1,2],[1,2],[2,3]])) I8_5 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]])) I8_6 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[3,4]])) I8_7 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[0,4]])) print('weight 2') disp(I4_1) disp(I4_2) print('weight 3') disp(I6_1) disp(expand(I6_2)) print('weight 4') disp(I8_1) print('') disp(I8_2) print('') disp(I8_3) print('') disp(I8_4) print('') disp(I8_5) print('') disp(I8_6) print('') disp(I8_7) # Only 'weight 4' affine invariant disp(I4_2/I4_1) # Only 'weight 6' affine invariant disp(I6_2/I6_1) disp(partial_transvectant((f,f,f,f,f),[[0,2],[1,2],[2,3],[3,4]])) disp(partial_transvectant((f,f,C),[[0,1],[1,2]])) #disp(transvectant(C, C, 2)) funcs = (C, f**2) pairs = [[0, 1]] disp(partial_transvectant(funcs, pairs)) # Construct linear, quadratic, cubic forms fx, fy, fxx, fxy, fyy, fxxx, fxxy, fxyy, fyyy = symbols('f_x, f_y, f_{xx}, f_{xy}, f_{yy}, f_{xxx}, f_{xxy}, f_{xyy}, f_{yyy}') l = fx*x + fy*y q = fxx*x*x + 2*fxy*x*y + fyy*y*y c = fxxx*x*x*x + 3*fxxy*x*x*y + 3*fxyy*x*y*y + fyyy*y*y*y # I3 as a form (Robert's method to annoy us...) 
disp(-expand(transvectant(q,transvectant(c,c,2),2)/288)) # I5 disp(expand(transvectant(transvectant(c,c,2),transvectant(c,c,2),2)/10368)) # I6 disp(transvectant(c,l**3,3)/36) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,1]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,2]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,1],[0,2]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,1]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,2]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3],[2,3]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,1],[1,2]]))) disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[2,0]]))) disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[0,1]]))) disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[2,0],[0,1]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3]]))) disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3],[2,3]]))) disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]]))) disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]]))) disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[3,4]]))) disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[0,4]]))) """ Explanation: Write this out as a matrix Now for the cross-section End of explanation """
sjsrey/pysal
notebooks/explore/giddy/Rank_Markov.ipynb
bsd-3-clause
import pysal.lib as ps
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pandas as pd
import geopandas as gpd
"""
Explanation: Full Rank Markov and Geographic Rank Markov
Author: Wei Kang weikang9009@gmail.com
End of explanation
"""
from pysal.explore.giddy.markov import FullRank_Markov
income_table = pd.read_csv(ps.examples.get_path("usjoin.csv"))
income_table.head()
pci = income_table[list(map(str,range(1929,2010)))].values
pci
m = FullRank_Markov(pci)
m.ranks
m.transitions
"""
Explanation: Full Rank Markov
End of explanation
"""
m.p
"""
Explanation: Full rank Markov transition probability matrix
End of explanation
"""
m.fmpt
m.sojourn_time
df_fullrank = pd.DataFrame(np.c_[m.p.diagonal(),m.sojourn_time],
                           columns=["Staying Probability","Sojourn Time"],
                           index = np.arange(m.p.shape[0])+1)
df_fullrank.head()
df_fullrank.plot(subplots=True, layout=(1,2), figsize=(15,5))
sns.distplot(m.fmpt.flatten(),kde=False)
"""
Explanation: Full rank first mean passage times
End of explanation
"""
from pysal.explore.giddy.markov import GeoRank_Markov, Markov, sojourn_time
gm = GeoRank_Markov(pci)
gm.transitions
gm.p
gm.sojourn_time[:10]
gm.sojourn_time
gm.fmpt
income_table["geo_sojourn_time"] = gm.sojourn_time
i = 0
for state in income_table["Name"]:
    income_table["geo_fmpt_to_" + state] = gm.fmpt[:,i]
    income_table["geo_fmpt_from_" + state] = gm.fmpt[i,:]
    i = i + 1
income_table.head()
geo_table = gpd.read_file(ps.examples.get_path('us48.shp'))
# income_table = pd.read_csv(pysal.lib.examples.get_path("usjoin.csv"))
complete_table = geo_table.merge(income_table,left_on='STATE_NAME',right_on='Name')
complete_table.head()
complete_table.columns
"""
Explanation: Geographic Rank Markov
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=2,figsize = (15,7))
target_states = ["California","Mississippi"]
directions = ["from","to"]
for i, direction in enumerate(directions):
    for j, target in enumerate(target_states):
        ax = axes[i,j]
        col = direction+"_"+target
        complete_table.plot(ax=ax,column = "geo_fmpt_"+ col,cmap='OrRd',
                            scheme='quantiles', legend=True)
        ax.set_title("First Mean Passage Time "+direction+" "+target)
        ax.axis('off')
        leg = ax.get_legend()
        leg.set_bbox_to_anchor((0.8, 0.15, 0.16, 0.2))
plt.tight_layout()
"""
Explanation: Visualizing first mean passage time from/to California/Mississippi:
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=2,figsize = (15,7))
schemes = ["Quantiles","Equal_Interval"]
for i, scheme in enumerate(schemes):
    ax = axes[i]
    complete_table.plot(ax=ax,column = "geo_sojourn_time",cmap='OrRd',
                        scheme=scheme, legend=True)
    ax.set_title("Rank Sojourn Time ("+scheme+")")
    ax.axis('off')
    leg = ax.get_legend()
    leg.set_bbox_to_anchor((0.8, 0.15, 0.16, 0.2))
plt.tight_layout()
"""
Explanation: Visualizing sojourn time for each US state:
End of explanation
"""
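For intuition, the transition matrix that FullRank_Markov estimates from pooled rank series can be tallied by hand. This is a simplified sketch (the helper name is made up; giddy's own implementation should be preferred):

```python
import numpy as np

def rank_transition_matrix(ranks):
    # Tally rank-to-rank moves between consecutive time periods and
    # row-normalize into transition probabilities (unvisited ranks -> 0).
    n = int(ranks.max())
    counts = np.zeros((n, n))
    for t in range(ranks.shape[1] - 1):
        for i, j in zip(ranks[:, t], ranks[:, t + 1]):
            counts[i - 1, j - 1] += 1
    rowsum = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rowsum, out=np.zeros_like(counts), where=rowsum > 0)

# two units (rows) that swap income ranks every period (columns)
ranks = np.array([[1, 2, 1],
                  [2, 1, 2]])
P = rank_transition_matrix(ranks)
```

With the toy input above every transition leaves the current rank, so all of the probability mass sits off the diagonal.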
ptrendx/mxnet
example/autoencoder/convolutional_autoencoder.ipynb
apache-2.0
import random import matplotlib.pyplot as plt import mxnet as mx from mxnet import autograd, gluon """ Explanation: Convolutional Autoencoder In this example we will demonstrate how you can create a convolutional autoencoder in Gluon End of explanation """ batch_size = 512 ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu() transform = lambda x,y: (x.transpose((2,0,1)).astype('float32')/255., y) train_dataset = gluon.data.vision.FashionMNIST(train=True) test_dataset = gluon.data.vision.FashionMNIST(train=False) train_dataset_t = train_dataset.transform(transform) test_dataset_t = test_dataset.transform(transform) train_data = gluon.data.DataLoader(train_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5) test_data = gluon.data.DataLoader(test_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5) plt.figure(figsize=(20,10)) for i in range(10): ax = plt.subplot(1, 10, i+1) ax.imshow(train_dataset[i][0].squeeze().asnumpy(), cmap='gray') ax.axis('off') """ Explanation: Data We will use the FashionMNIST dataset, which is of a similar format to MNIST but is richer and has more variance End of explanation """ net = gluon.nn.HybridSequential(prefix='autoencoder_') with net.name_scope(): # Encoder 1x28x28 -> 32x1x1 encoder = gluon.nn.HybridSequential(prefix='encoder_') with encoder.name_scope(): encoder.add( gluon.nn.Conv2D(channels=4, kernel_size=3, padding=1, strides=(2,2), activation='relu'), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=8, kernel_size=3, padding=1, strides=(2,2), activation='relu'), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=16, kernel_size=3, padding=1, strides=(2,2), activation='relu'), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=32, kernel_size=3, padding=0, strides=(2,2),activation='relu'), gluon.nn.BatchNorm() ) decoder = gluon.nn.HybridSequential(prefix='decoder_') # Decoder 32x1x1 -> 1x28x28 with decoder.name_scope(): decoder.add( gluon.nn.Conv2D(channels=32, 
kernel_size=3, padding=2, activation='relu'), gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=16, kernel_size=3, padding=1, activation='relu'), gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=8, kernel_size=3, padding=2, activation='relu'), gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')), gluon.nn.BatchNorm(), gluon.nn.Conv2D(channels=4, kernel_size=3, padding=1, activation='relu'), gluon.nn.Conv2D(channels=1, kernel_size=3, padding=1, activation='sigmoid') ) net.add( encoder, decoder ) net.initialize(ctx=ctx) net.summary(test_dataset_t[0][0].expand_dims(axis=0).as_in_context(ctx)) """ Explanation: Network End of explanation """ l2_loss = gluon.loss.L2Loss() l1_loss = gluon.loss.L1Loss() trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 0.001, 'wd':0.001}) net.hybridize(static_shape=True, static_alloc=True) """ Explanation: We can see that the original image goes from 28x28 = 784 pixels to a vector of length 32. That is a ~25x information compression rate. Then the decoder brings back this compressed information to the original shape End of explanation """ epochs = 20 for e in range(epochs): curr_loss = 0. 
for i, (data, _) in enumerate(train_data): data = data.as_in_context(ctx) with autograd.record(): output = net(data) # Compute the L2 and L1 losses between the original and the generated image l2 = l2_loss(output.flatten(), data.flatten()) l1 = l1_loss(output.flatten(), data.flatten()) l = l2 + l1 l.backward() trainer.step(data.shape[0]) curr_loss += l.mean() print("Epoch [{}], Loss {}".format(e, curr_loss.asscalar()/(i+1))) """ Explanation: Training loop End of explanation """ plt.figure(figsize=(20,4)) for i in range(10): idx = random.randint(0, len(test_dataset)) img, _ = test_dataset[idx] x, _ = test_dataset_t[idx] data = x.as_in_context(ctx).expand_dims(axis=0) output = net(data) ax = plt.subplot(2, 10, i+1) ax.imshow(img.squeeze().asnumpy(), cmap='gray') ax.axis('off') ax = plt.subplot(2, 10, 10+i+1) ax.imshow((output[0].asnumpy() * 255.).transpose((1,2,0)).squeeze(), cmap='gray') _ = ax.axis('off') """ Explanation: Testing reconstruction We plot 10 images and their reconstruction by the autoencoder. The results are pretty good for a ~25x compression rate! 
End of explanation """ idx = random.randint(0, len(test_dataset)) img1, _ = test_dataset[idx] x, _ = test_dataset_t[idx] data1 = x.as_in_context(ctx).expand_dims(axis=0) idx = random.randint(0, len(test_dataset)) img2, _ = test_dataset[idx] x, _ = test_dataset_t[idx] data2 = x.as_in_context(ctx).expand_dims(axis=0) plt.figure(figsize=(2,2)) plt.imshow(img1.squeeze().asnumpy(), cmap='gray') plt.show() plt.figure(figsize=(2,2)) plt.imshow(img2.squeeze().asnumpy(), cmap='gray') """ Explanation: Manipulating latent space We now use separately the encoder that takes an image to a latent vector and the decoder that transform a latent vector into images We get two images from the testing set End of explanation """ latent1 = encoder(data1) latent2 = encoder(data2) """ Explanation: We get the latent representations of the images by passing them through the network End of explanation """ latent1.shape """ Explanation: We see that the latent vector is made of 32 components End of explanation """ num = 10 plt.figure(figsize=(20, 5)) for i in range(int(num)): new_latent = latent2*(i+1)/num + latent1*(num-i)/num output = decoder(new_latent) #plot result ax = plt.subplot(1, num, i+1) ax.imshow((output[0].asnumpy() * 255.).transpose((1,2,0)).squeeze(), cmap='gray') _ = ax.axis('off') """ Explanation: We interpolate the two latent representations, vectors of 32 values, to get a new intermediate latent representation, pass it through the decoder and plot the resulting decoded image End of explanation """
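The interpolation step above is plain linear blending of two 32-dimensional latent vectors, which can be sketched independently of MXNet:

```python
import numpy as np

def lerp(z1, z2, t):
    # Linear interpolation between two latent vectors, t in [0, 1]
    return (1.0 - t) * z1 + t * z2

z1 = np.zeros(32)
z2 = np.ones(32)
# five evenly spaced points along the straight line from z1 to z2
path = [lerp(z1, z2, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each intermediate vector can then be passed through the decoder exactly as in the loop above.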
nick-youngblut/SIPSim
ipynb/theory/.ipynb_checkpoints/diff_bound_layer-checkpoint.ipynb
mit
import os import numpy as np from scipy.integrate import quad %load_ext rpy2.ipython workDir = '/home/nick/notebook/SIPSim/dev/theory/' %%R library(readxl) library(dplyr) library(tidyr) library(ggplot2) library(rootSolve) if not os.path.isdir(workDir): os.makedirs(workDir) %cd $workDir """ Explanation: Goal: Modeling a theoretical diffusive boundary layer (DBL). A DBL may be contributing to 'smearing' observed in 16S rRNA MiSeq data from real experiments. Init End of explanation """ %%R # tube characteristics (cm) tube_diam = 1.3 tube_height = 4.8 tube_round_bottom_height = 0.65 tube_capacity__ml = 4.7 tube_composition = 'polypropylene' # rotor (cm) rotor_id = 'TLA-110' r_min = 2.6 r_ave = 3.72 r_max = 4.85 frac_tube_angle = 90 # cfg run ## rpm of run rpm = 55000 ## angular velocity (w^2) angular_velocity = 17545933.74 ## average particle density ave_gradient_density = 1.70 ## beta^o BetaO = 1.14e9 # CsCl at density of 1.70 ## position of particle at equilibrium particle_at_eq = 3.78 ## max 13C shift max_13C_shift_in_BD = 0.036 ## min BD (that we care about) min_GC = 13.5 min_BD = min_GC/100.0 * 0.098 + 1.66 ## max BD (that we care about) max_GC = 80 max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C max_BD = max_BD + max_13C_shift_in_BD # diffusive boundary layer (DBL) DBL_size_range__micron = c(10,100) # misc fraction_vol__cm3 = 0.1 %%R # rotor angle ## sin(x) = opp / hypo ## x = sin**-1(opp/hypo) rad2deg = function(rad) { return((180 * rad) / pi) } deg2rad = function(deg){ return(deg * pi / 180) } x = r_max - r_min hyp = tube_height rotor_tube_angle = rad2deg(asin(x / hyp)) cat("Tube angle from axis of rotation:", rotor_tube_angle, "\n") %%R rad2deg = function(rad) { return((180 * rad) / pi) } deg2rad = function(deg){ return(deg * pi / 180) } x = r_max - r_min hyp = tube_height rotor_tube_angle = rad2deg(asin(x / hyp)) cat("Tube angle from axis of rotation:", rotor_tube_angle, "\n") %%R # calc tube angle from tube params calc_tube_angle = function(r_min, r_max, 
tube_height){ x = r_max - r_min hyp = tube_height rotor_angle = rad2deg(asin(x / hyp)) return(rotor_angle) } # test ## angled tube ret = calc_tube_angle(r_min, r_max, tube_height) print(ret) ## vertical tube r_min_v = 7.47 r_max_v = 8.79 ret = calc_tube_angle(r_min_v, r_max_v, tube_height) print(ret) %%R # isoconcentration point ## Formula 6.7 in Birnine and Rickwood 1978 I = sqrt((r_min**2 + r_min * r_max + r_max**2)/3) cat('Isoconcentration point:', I, '(cm)\n') """ Explanation: Setting parameters End of explanation """ %%R DBL_rel_size = function(DBL_size, tube_diam, frac_size){ # sizes in cm tube_radius = tube_diam / 2 frac_vol = pi * tube_radius**2 * frac_size nonDBL_vol = pi * (tube_radius - DBL_size)**2 * frac_size DBL_vol = frac_vol - nonDBL_vol DBL_to_frac = DBL_vol / frac_vol * 100 return(DBL_to_frac) } # in cm frac_size = 0.01 tube_diam = 1.3 #DBL_size = 0.01 DBL_sizes = seq(0, 0.07, 0.005) DBL_perc = sapply(DBL_sizes, DBL_rel_size, tube_diam=tube_diam, frac_size=frac_size) df = data.frame('DBL_size' = DBL_sizes, 'DBL_perc' = DBL_perc) ggplot(df, aes(DBL_size, DBL_perc)) + geom_point() + geom_line() + labs(x='DBL size (cm)', y='% tube volume that is DBL') + theme_bw() + theme( text = element_text(size=16) ) """ Explanation: ratio of DBL size : fraction size as a function of DBL size Rough approximation End of explanation """ %%R GC2BD = function(GC){ # GC = percent G+C GC / 100.0 * 0.098 + 1.66 } # test GC = seq(0, 100, 10) sapply(GC, GC2BD) """ Explanation: Notes Assuming cfg tube is just a cylinder Determining DBL from fragment G+C content fragment GC --> BD (diffusive boundary layer) --> angled tube position of DBL --> vertical tube position range of DBL (min, mid, max) Functions for calculating DBL GC to BD End of explanation """ %%R BD2distFromAxis = function(BD, D, BetaO, w2, I){ # converting BD to distance from axis of rotation # BD = density at a given radius # w^2 = angular velocity # \beta^o = beta coef # I = isocencentration point (cm) # D = 
average density of gradient sqrt(((BD-D)*2*BetaO/w2) + I^2) } # test min_BD_r = BD2distFromAxis(min_BD, ave_gradient_density, BetaO, angular_velocity, I) max_BD_r = BD2distFromAxis(max_BD, ave_gradient_density, BetaO, angular_velocity, I) cat('radius range for BD-min to BD-max: ', min_BD_r, 'to', max_BD_r, '\n') """ Explanation: BD to distance from the axis of rotation \begin{align} x = \sqrt{( ({\rho}-p_m) \frac{2B^{\circ}}{w^2}) + r_c^2} \end{align} End of explanation """ %%R distFromAxis2angledTubePos = function(x, r, D, A){ # converting distance from axis of rotation to cfg tube position (min & max of tube height) # x = a distance from the axis of rotation # r = radius of cfg tube # D = max tube distance from axis of rotation # A = angle of tube to axis of rotation (degrees) # Equation for finding the lower point of the band if(x >= D-(r*aspace::cos_d(A))-r) { d = x-(D-r) a = A-aspace::asin_d(d/r) LowH = r-r*aspace::cos_d(a) #print(LowH) ## This band will be in the rounded part }else{ d = D-(r*aspace::cos_d(A))-r-x hc = d/aspace::sin_d(A) LowH = r+hc # print(LowH) ## This band will be in the cylinder part } # Equation for finding the upper band if(x > D-(r-r*aspace::cos_d(A))) { d = x-(D-r) a = (A)-(180-aspace::asin_d(d/r)) HighH = r-r*aspace::cos_d(a) #print(HighH) ## This band will be in the rounded part }else{ d = D-(r-r*aspace::cos_d(A))-x hc = d/aspace::sin_d(A) HighH = r+hc #print(HighH) ## This band will be in the cylinder part } return(c(LowH, HighH)) } # test r = 0.65 # radius of tube (cm) D = 4.85 # distance from axis of rotation to furthest part of tube (cm) A = 27.95 # angle of tube to axis of rotation (degrees) x = 3.5 # some distance from axis of rotation (from equation) pos = distFromAxis2angledTubePos(x, r, D, A) pos %>% print delta = pos[2] - pos[1] delta %>% print """ Explanation: distance from axis of rotation to tube height of BD 'band' The band is angled in the tube, so the BD band in the gradient (angled tube) will touch the wall of the 
tube at a min/max height of h1 and h2. This function determines those tube height values.
\begin{align}
y_t =
\end{align}
x = a distance from the axis of rotation
r = radius of cfg tube
D = max tube distance from axis of rotation
A = angle of tube to axis of rotation (degrees)
End of explanation
"""
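A Python counterpart of the R BD2distFromAxis() above (a sketch following the same formula, to sit alongside the other Python helpers in this section):

```python
import numpy as np

def bd2dist_from_axis(bd, dens_ave, beta_o, w2, iso_point):
    """Buoyant density -> distance from the axis of rotation (cm):
    x = sqrt((BD - D) * 2 * beta^o / w^2 + I^2)"""
    return np.sqrt((bd - dens_ave) * 2.0 * beta_o / w2 + iso_point**2)

# at the average gradient density, a particle sits at the isoconcentration point
x = bd2dist_from_axis(1.70, 1.70, 1.14e9, 17545933.74, 3.78)
```

The parameter values above are the run settings defined at the top of this notebook.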
2*(h*(t-r+b)/ b) * np.sqrt(r**2-t**2) def axisDist2angledTubeVol(x, r, D, A): """Convert distance from axis of rotation to volume of gradient where the BD is >= to the provided BD. Parameters ---------- x : float distance from axis of rotation (cm) r : float cfg tube radius (cm) D : float max distance from axis of rotation (cm) A : float cdf tube angle in rotor (degrees) Returns ------- volume (ml) occupied by gradient heavier or as heavy as at that point. Note: nan returned if x = nan """ # return nan if nan provided if np.isnan(x): return x a = np.deg2rad(A) p1 = r-(r*np.cos(a)) p2 = r+(r*np.cos(a)) R12 = p2-p1 d = D-x D1 = D-p1 D2 = D-p2 if x < D2: if a == 0: z = 1 else: z = np.sin(a) h1 = (D2-x)/z h2 = (D1-x)/z volume1 = (2/3.0)*np.pi*r**3 volume2 = (0.5)*np.pi*r**2*(h1+h2) volume = volume1+volume2 elif D1 >= x >= D2: volume1 = (1/3.0)*np.pi*p1**2*(3*r-p1) volume2 = quad(_SphVol, p1, d, args=(r, p2, R12)) b = (d-p1)/np.cos(a) if a == 0: h = b else: h = b/np.tan(a) volume3 = quad(_CylWedVol, r-b, r, args=(r, b, h)) volume = volume1+volume2[0]+volume3[0] elif D >= x > D1: volume = (1/3.0)*np.pi*d**2*(3*r-d) elif x > D: volume = np.nan else: volume = np.nan # status if np.isnan(volume): lmsg = 'axisDist2angledTubeVol: nan returned for x value: {}\n' sys.stderr.write(lmsg.format(x)) return volume # test ## fixed-angle rotor r = 0.65 # radius of tube (cm) D = 4.85 # distance from axis of rotation to furthest part of tube A = 27.95 # angle of tube to axis of rotation (degrees) x = 3.5 # some distance from axis of rotation (from equation) ret = axisDist2angledTubeVol(x, r, D, A) print(ret) ## vertical rotor #x = 7.66 x = 8.5 r = 0.65 D = 8.79 A = 0 ret = axisDist2angledTubeVol(x, r, D, A) print(ret) """ Explanation: Converting distance from axis of rotation to angled tube volume Python End of explanation """ # converting cylinder volume to height def cylVol2height(v, r): # v = volume (ml) # r = tube radius (cm) h = v / (np.pi * r**2) return h # test cylVol2height(0.1, 
0.65)

# converting sphere cap volume to sphere height
from scipy import optimize

def sphereCapVol2height(v, r):
    # v = volume (ml)
    # r = tube radius (cm)
    # h**3 - 3*r*h**2 + (3v / pi) = 0
    f = lambda x : x**3 - 3*r*x**2 + 3*v/np.pi
    try:
        root = optimize.brentq(f, 0, r*2, maxiter=1000)
    except ValueError:
        msg = 'WARNING: no roots found for volume {}\n'
        sys.stderr.write(msg.format(v))
        root = np.nan
    return(root)

# test
sphereCapVol2heightV = np.vectorize(sphereCapVol2height)
vols = np.arange(0, 0.65**2, 0.1)
sphereCapVol2heightV(vols, 0.65)

# convert liquid volume in vertical cfg tube to tube height
def tubeVol2height(v, r):
    # v = volume (ml)
    # r = tube radius (cm)
    sphere_cap_vol = (4/3 * np.pi * r**3)/2
    if v <= sphere_cap_vol:
        # height does not extend to cylinder
        h = sphereCapVol2height(v, r)
    else:
        # height = sphere_cap + cylinder
        sphere_cap_height = sphereCapVol2height(sphere_cap_vol, r)
        h = sphere_cap_height + cylVol2height(v - sphere_cap_vol, r)
    return(h)

# test
vol = 0.1  # 100 ul
vols = np.arange(0, 4+vol, vol)
tubeVol2heightV = np.vectorize(tubeVol2height)
tubeVol2heightV(vols, r=0.65)

"""
Explanation: Converting tube volume to vertical tube height
Python
End of explanation
"""

runDir = '/home/nick/notebook/SIPSim/t/genome100/'

!cd $runDir; \
    SIPSim DBL \
    --np 4 \
    ampFrag_skewN90-25-n5-nS_dif_kde.pkl \
    > ampFrag_skewN90-25-n5-nS_dif_DBL_kde.pkl

%%R -w 600 -h 450
inFile = '/home/nick/notebook/SIPSim/t/genome100/DBL_index.txt'
df = read.delim(inFile, sep='\t') %>%
    gather(pos, vert_grad_BD, vert_gradient_BD_low, vert_gradient_BD_high)

# example
df.ex = data.frame('DBL_BD' = c(1.675, 1.769),
                   'vert_grad_BD' = c(1.75, 1.75))

# plot
p.TLA = ggplot(df, aes(DBL_BD, vert_grad_BD, color=pos, group=DBL_BD)) +
    geom_line(color='black', size=1) +
    geom_point(data=df.ex, color='red', size=4) +
    geom_line(data=df.ex, aes(group=vert_grad_BD),
              color='red', linetype='dashed', size=1.2) +
    #geom_vline(xintercept=1.774, linetype='dashed', alpha=0.5, color='blue') + # theoretical max fragment BD
    #scale_y_reverse(limits=c(1.85, 1.50)) +
    scale_x_continuous(limits=c(1.55, 1.80)) +
    labs(x='BD of DBL', y='BD of vertical gradient\n(during fractionation)',
         title='TLA-110, Beckman fixed-angle rotor') +
    theme_bw() +
    theme(
        text = element_text(size=16)
    )
p.TLA

%%R -i workDir
# saving figure
F = file.path(workDir, 'DBL_TLA110.pdf')
ggsave(F, p.TLA, width=6, height=4.5)
cat('File written:', F, '\n')

"""
Explanation: Test run of SIPSim DBL
Angled rotor
End of explanation
"""

runDir = '/home/nick/notebook/SIPSim/t/genome100/'

!cd $runDir; \
    SIPSim DBL \
    -D 1.725 \
    -w 19807714 \
    --tube_height 5.1 \
    --r_min 7.47 \
    --r_max 8.79 \
    --vertical \
    --np 4 \
    ampFrag_skewN90-25-n5-nS_dif_kde.pkl \
    > ampFrag_skewN90-25-n5-nS_dif_DBL_kde_VERT.pkl

%%R -w 600
inFile = '/home/nick/notebook/SIPSim/t/genome100/DBL_index.txt'
df = read.delim(inFile, sep='\t') %>%
    gather(pos, vert_grad_BD, vert_gradient_BD_low, vert_gradient_BD_high)

# example
df.ex = data.frame('DBL_BD' = c(1.638, 1.769),
                   'vert_grad_BD' = c(1.75, 1.75))

# plot
p.VTi = ggplot(df, aes(DBL_BD, vert_grad_BD, color=pos, group=DBL_BD)) +
    geom_line(color='black', size=1) +
    geom_point(data=df.ex, color='red', size=4) +
    geom_line(data=df.ex, aes(group=vert_grad_BD),
              color='red', linetype='dashed', size=1.2) +
    #scale_y_reverse() +
    scale_y_reverse(limits=c(1.85, 1.50)) +
    labs(x='BD of DBL', y='BD of vertical gradient\n(during fractionation)',
         title='VTi 65.2, Beckman vertical rotor') +
    theme_bw() +
    theme(
        text = element_text(size=16)
    )
p.VTi

%%R -i workDir
# saving figure
F = file.path(workDir, 'DBL_VTi65.2.pdf')
ggsave(F, p.VTi, width=6, height=4.5)
cat('File written:', F, '\n')

"""
Explanation: Notes
The dashed line provides an example of the 'true' BD of fragments contained in the DBL at the gradient density of 1.7 when the gradient is vertically oriented during fractionation.
Vertical rotor
VTi 65.2, Beckman rotor
Refs:

http://www.nature.com/ismej/journal/v1/n6/full/ismej200765a.html
Neufeld JD, Vohra J, Dumont MG, Lueders T, Manefield M, Friedrich MW, et al. (2007). DNA stable-isotope probing. Nat Protocols 2: 860–866.

params:

tube width = 1.3 cm
tube height = 5.1 cm
tube volume = 5.1 ml
r_min = 7.47 cm
r_max = 8.79 cm
final density = 1.725
speed = 177000 g_av (42500 rpm)
angular velocity ($\omega^2$) = $((2 * 3.14159 * rpm)/60)^2$ = 19807714
time = 40 hr
End of explanation
"""

%%R -h 300
length2MW = function(L){
    L * 666
}

length2sedCoef = function(L){
    2.8 + (0.00834 * (L*666)**0.479)
}

MW2diffuseCoef = function(L, p, R=8.3144598, T=293.15){
    V = 1/1.99
    M = length2MW(L)
    s = length2sedCoef(L)
    (R*T)/(M*(1-V*p)) * s
}

# test
L = seq(100, 50000, 100)
p = 1.7
D = sapply(L, MW2diffuseCoef, p=p)
df = data.frame('L' = L, 'D' = D)

# plotting
ggplot(df, aes(L, D)) +
    geom_point() +
    geom_line(alpha=0.5) +
    theme_bw() +
    theme(
        text = element_text(size=16)
    )

"""
Explanation: Notes
The dashed line provides an example of the 'true' BD of fragments contained in the DBL at the gradient density of 1.7 when the gradient is vertically oriented during fractionation.
WARNING: the DBL simulation makes the simplifying assumption of a 2d tube object and finds the vertical distance that a band spans in the tube, which sets the span of DBL contamination in a fixed-angle rotor. However, for vertical tubes, the DBL would probably be more accurately modeled from a 3d representation of the tube. Regardless, there would be substantially more DBL 'smearing' with a vertical rotor than a fixed-angle rotor.
Misc DNA diffusion sedimentation coefficient of DNA (S) $S = 2.8 + (0.00834 * M^{0.479})$ where M = molecular weight of DNA OR $S = 2.8 + (0.00834 * (L*666)^{0.479})$ where L = length of DNA Svedberg's equation $s/D = \frac{M(1-\bar{V}p)}{RT}$ where s = sedimentation coefficient D = diffusion coefficient M = molecular weight $\bar{V} = 1/\rho_p$ $\rho_p$ = density of the sphere p = density of the liquid R = universal gas constant T = absolute temperature Finding diffusion coefficient of DNA in CsCl ($\mu m^2 / s$) $D = \frac{RT}{M(1-\bar{V}p)}*s$ where R = 8.3144598 (J mol^-1 K^-1) T = 293.15 (K) p = 1.7 (Buckley lab gradients) $\bar{V} = 1/\rho_p$ $\rho_p$ = 1.99 $s = 2.8 + (0.00834 * (L*666)^{0.479})$ L = DNA length (bp) End of explanation """ %%R # converting D to cm^2/s df$D_cm = df$D * 1e-5 # time periods (sec) t = seq(1, 300, 10) # calculating z (cm) ES = function(D, t){ sqrt(0.9 * D * t) } df2 = expand.grid(df$D_cm, t) colnames(df2) = c('D_cm', 't') df2$z = mapply(ES, df2$D_cm, df2$t) tmp = expand.grid(df$L, t) # adding variable df2$L = tmp$Var1 df2$t_min = df2$t / 60 df2$z_uM = df2$z / 1e-5 ## plotting ggplot(df2, aes(t_min, z_uM, color=L, group=L)) + #geom_point(size=1.5) + geom_line() + labs(x='Time (minutes)', y='mean deviation of molecules\nfrom starting position (uM)') + scale_color_continuous('DNA fragment\nlength (bp)') + theme_bw() + theme( text = element_text(size=16) ) %%R -w 800 ## plotting ggplot(df2, aes(L, z_uM, color=t_min, group=t_min)) + #geom_point(size=1.5) + geom_line() + labs(x='DNA fragment length (bp)', y='mean deviation of molecules\nfrom starting position (uM)') + scale_color_continuous('Time\n(minutes)') + theme_bw() + theme( text = element_text(size=16) ) """ Explanation: Calculating diffusion from DBL Einstein-Smoluchowski relation $t = \frac{z^2}{0.9 * D}$ where t = time (sec) z = mean deviation of molecules from starting position D = diffusion coefficient (cm^2 s^-1) rewritten: $z = \sqrt{0.9Dt}$ End of explanation """
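The R pipeline above can be mirrored in Python for quick checks. This is a sketch, not part of the original analysis: unit handling follows the notebook's convention (the diffusion coefficient is scaled by 1e-5 to cm²/s before use), and the constants are the ones listed above.

```python
import numpy as np

# Constants as listed above: gas constant (J mol^-1 K^-1), temperature (K),
# CsCl gradient density, and V = 1/rho_p with rho_p = 1.99
R_GAS, T, P, V_BAR = 8.3144598, 293.15, 1.7, 1 / 1.99

def length2mw(L):
    # DNA length (bp) -> molecular weight
    return L * 666.0

def length2sed_coef(L):
    # sedimentation coefficient: s = 2.8 + 0.00834 * (L*666)^0.479
    return 2.8 + 0.00834 * (L * 666.0) ** 0.479

def mw2diffuse_coef(L):
    # Svedberg's equation rearranged: D = (R*T) / (M * (1 - V*p)) * s
    M = length2mw(L)
    s = length2sed_coef(L)
    return (R_GAS * T) / (M * (1 - V_BAR * P)) * s

def mean_deviation_cm(L, t_sec):
    # Einstein-Smoluchowski: z = sqrt(0.9 * D * t), with D scaled to cm^2/s
    D_cm = mw2diffuse_coef(L) * 1e-5
    return np.sqrt(0.9 * D_cm * t_sec)

# e.g. mean deviation of a 4 kb fragment after 5 minutes
print(mean_deviation_cm(4000, 300))
```

Shorter fragments diffuse faster (D falls off roughly as M^-0.5), so the mean deviation decreases with fragment length, matching the trend in the plots above.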
davewsmith/notebooks
temperature/AfterMovingSensors.ipynb
mit
%matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (12, 5) import pandas as pd df = pd.read_csv('after-sensor-move.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0]) per_sensor_f = df.pivot(index='time', columns='mac', values='f') downsampled_f = per_sensor_f.resample('2T').mean() downsampled_f.plot(); """ Explanation: After moving the sensors How do things look after the sensors have been positioned away from the surprise heat source? Let's look at a day's worth of data. End of explanation """ per_sensor_h = df.pivot(index='time', columns='mac', values='h') downsampled_h = per_sensor_h.resample('2T').mean() downsampled_h.plot(); """ Explanation: Much better! The spread looks to be < 0.5F (consistent with the ±0.2C in the specifications). That's good enough for the needs of the project. How did humidity fare? End of explanation """ means = {} for c in downsampled_h.columns: means[c] = downsampled_h[c].mean() mean_means = sum(means.values()) / len(means) mean_means adjusted_h = downsampled_h.copy() for c in adjusted_h.columns: adjusted_h[c] -= (means[c] - mean_means) adjusted_h.plot(); """ Explanation: Looks like about a 3% spread, which is within the ±2%RH specified range. Setting aside the minor issue of calibration, and assuming the sensors differ by a more-or-less constant amount, we should be able to normalize the readings. (Yeah, yeah, we've already downsampled...) End of explanation """
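The offset correction above (subtracting each sensor's deviation from the mean of the sensor means) can be wrapped in a small helper. Here is a sketch on made-up data; the sensor names and offsets are hypothetical, not from the actual logger.

```python
import numpy as np
import pandas as pd

def normalize_offsets(wide):
    # Center each sensor on the mean of all sensor means,
    # removing constant per-sensor calibration offsets
    means = wide.mean()
    return wide - (means - means.mean())

# Hypothetical readings: one shared signal, constant offsets of -1, 0, +2
rng = np.random.default_rng(0)
base = rng.normal(70, 0.1, size=50)
wide = pd.DataFrame({'s1': base - 1, 's2': base, 's3': base + 2})

spread_before = (wide.max(axis=1) - wide.min(axis=1)).mean()
adjusted = normalize_offsets(wide)
spread_after = (adjusted.max(axis=1) - adjusted.min(axis=1)).mean()
print(spread_before, spread_after)  # spread collapses once offsets are removed
```

This only removes constant offsets; it relies on the same assumption made in the text, that the sensors differ by a more-or-less fixed amount.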
SteveDiamond/cvxpy
examples/notebooks/dqcp/minimum_length_least_squares.ipynb
gpl-3.0
!pip install --upgrade cvxpy import cvxpy as cp import numpy as np """ Explanation: Minimum-length least squares This notebook shows how to solve a minimum-length least squares problem, which finds a minimum-length vector $x \in \mathbf{R}^n$ achieving small mean-square error (MSE) for a particular least squares problem: \begin{equation} \begin{array}{ll} \mbox{minimize} & \mathrm{len}(x) \ \mbox{subject to} & \frac{1}{n}\|Ax - b\|_2^2 \leq \epsilon, \end{array} \end{equation} where the variable is $x$ and the problem data are $n$, $A$, $b$, and $\epsilon$. This is a quasiconvex program (QCP). It can be specified using disciplined quasiconvex programming (DQCP), and it can therefore be solved using CVXPY. End of explanation """ n = 10 np.random.seed(1) A = np.random.randn(n, n) x_star = np.random.randn(n) b = A @ x_star epsilon = 1e-2 """ Explanation: The below cell constructs the problem data. End of explanation """ x = cp.Variable(n) mse = cp.sum_squares(A @ x - b)/n problem = cp.Problem(cp.Minimize(cp.length(x)), [mse <= epsilon]) print("Is problem DQCP?: ", problem.is_dqcp()) problem.solve(qcp=True) print("Found a solution, with length: ", problem.value) print("MSE: ", mse.value) print("x: ", x.value) print("x_star: ", x_star) """ Explanation: And the next cell constructs and solves the QCP. End of explanation """
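Here `cp.length(x)` is the index of the last nonzero entry of `x` plus one. The reported value can be sanity-checked against the solution vector with plain NumPy; this is a sketch, and the zero tolerance is our assumption, not part of the notebook.

```python
import numpy as np

def vector_length(x, tol=1e-6):
    # length in the cp.length sense: last nonzero index + 1
    nz = np.flatnonzero(np.abs(x) > tol)
    return 0 if nz.size == 0 else int(nz[-1]) + 1

print(vector_length(np.array([0.5, 0.0, 1.2, 0.0, 0.0])))  # -> 3
```

Applied to `x.value` after solving, this should agree with `problem.value` up to solver tolerance.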
sarahmid/programming-bootcamp-v2
lab6_exercises.ipynb
mit
def fancy_calc(a, b, c): x1 = basic_calc(a,b) x2 = basic_calc(b,c) x3 = basic_calc(c,a) z = x1 * x2 * x3 return z def basic_calc(x, y): result = x + y return result x = 1 y = 2 z = 3 result = fancy_calc(x, y, z) """ Explanation: Programming Bootcamp 2016 Lesson 6 Exercises Earning points (optional) Enter your name below. Email your .ipynb file to me (sarahmid@mail.med.upenn.edu) before 9:00 am on 9/27. You do not need to complete all the problems to get points. I will give partial credit for effort when possible. At the end of the course, everyone who gets at least 90% of the total points will get a prize (bootcamp mug!). Name: 1. Guess the output: scope practice (2pts) Refer to the code below to answer the following questions: End of explanation """ print x """ Explanation: (A) List the line numbers of the code above in the order that they will be executed. If a line will be executed more than once, list it each time. NOTE: Select the cell above and hit "L" to activate line numbering! Your answer: (B) Guess the output if you were to run each of the following pieces of code immediately after running the code above. Then run the code to see if you're right. (Remember to run the code above first) End of explanation """ print z """ Explanation: Your guess: End of explanation """ print x1 """ Explanation: Your guess: End of explanation """ print result """ Explanation: Your guess: End of explanation """ # run this first! def getMax(someList): someList.sort() x = someList[-1] return x scores = [9, 5, 7, 1, 8] maxScore = getMax(scores) print maxScore """ Explanation: Your guess: 2. Data structure woes (2pt) (A) Passing a data structure to a function. Guess the output of the following lines of code if you were to run them immediately following the code block below. Then run the code yourself to see if you're right. 
End of explanation """ print someList """ Explanation: Your guess: End of explanation """ print scores """ Explanation: Your guess: End of explanation """ # run this first! list1 = [1, 2, 3, 4] list2 = list1 list2[0] = "HELLO" print list2 """ Explanation: Your guess: Why does scores get sorted? When you pass a data structure as a parameter to a function, it's not a copy of the data structure that gets passed (as what happens with regular variables). What gets passed is a direct reference to the data structure itself. The reason this is done is because data structures are typically expected to be fairly large, and copying/re-assigning the whole thing can be both time- and memory-consuming. So doing things this way is more efficient. It can also surprise you, though, if you're not aware it's happening. If you would like to learn more about this, look up "Pass by reference vs pass by value". (B) Copying data structures. Guess the output of the following code if you were to run them immediately following the code block below. Then run the code yourself to see if you're right. End of explanation """ print list1 """ Explanation: Your guess: End of explanation """ # for lists list1 = [1, 2, 3, 4] list2 = list(list1) #make a true copy of the list list2[0] = "HELLO" print list2 print list1 """ Explanation: Your guess: Yes, that's right--even when you try to make a new copy of a list, it's actually just a reference to the same list! This is called aliasing. The same thing will happen with a dictionary. This can really trip you up if you don't know it's happening. So what if we want to make a truly separate copy? 
Here's a way for lists: End of explanation """ # for dictionaries dict1 = {'A':1, 'B':2, 'C':3} dict2 = dict1.copy() #make a true copy of the dict dict2['A'] = 99 print dict2 print dict1 """ Explanation: And here's a way for dictionaries: End of explanation """ ##### testing gc gcCont = gc("ATGGGCCCAATGG") if type(gcCont) != float: print ">> Problem with gc: answer is not a float, it is a %s." % type(gcCont) elif gcCont != 0.62: print ">> Problem with gc: incorrect answer (should be 0.62; your code gave", gcCont, ")" else: print "gc: Passed." ##### testing reverse_compl revCompl = reverse_compl("GGGGTCGATGCAAATTCAAA") if type(revCompl) != str: print ">> Problem with reverse_compl: answer is not a string, it is a %s." % type(revCompl) elif revCompl != "TTTGAATTTGCATCGACCCC": print ">> Problem with reverse_compl: answer (%s) does not match expected (%s)" % (revCompl, "TTTGAATTTGCATCGACCCC") else: print "reverse_compl: Passed." ##### testing read_fasta try: ins = open("horrible.fasta", 'r') except IOError: print ">> Can not test read_fasta because horrible.fasta is missing. Please add it to the directory with this notebook." else: seqDict = read_fasta("horrible.fasta") if type(seqDict) != dict: print ">> Problem with read_fasta: answer is not a dictionary, it is a %s." % type(seqDict) elif len(seqDict) != 22: print ">> Problem with read_fasta: # of keys in dictionary (%s) does not match expected (%s)" % (len(seqDict), 22) else: print "read_fasta: Passed." ##### testing rand_seq randSeq1 = rand_seq(23) randSeq2 = rand_seq(23) if type(randSeq1) != str: print ">> Problem with rand_seq: answer is not a string, it is a %s." % type(randSeq1) elif len(randSeq1) != 23: print ">> Problem with rand_seq: answer length (%s) does not match expected (%s)." % (len(randSeq1), 23) elif randSeq1 == randSeq2: print ">> Problem with rand_seq: generated the same sequence twice (%s) -- are you sure this is random?" % randSeq1 else: print "rand_seq: Passed." 
##### testing shuffle_nt
shuffSeq = shuffle_nt("AAAAAAGTTTCCC")

if type(shuffSeq) != str:
    print ">> Problem with shuffle_nt: answer is not a string, it is a %s." % type(shuffSeq)
elif len(shuffSeq) != 13:
    print ">> Problem with shuffle_nt: answer length (%s) does not match expected (%s)." % (len(shuffSeq), 13)
elif shuffSeq == "AAAAAAGTTTCCC":
    print ">> Problem with shuffle_nt: answer is exactly the same as the input. Are you sure this is shuffling?"
elif shuffSeq.count('A') != 6:
    print ">> Problem with shuffle_nt: answer doesn't contain the same # of each nt as the input."
else:
    print "shuffle_nt: Passed."

"""
Explanation: 3. Writing custom functions (8pts)
Complete the following. For some of these problems, you can use your code from previous labs as a starting point. (If you didn't finish those problems, feel free to use the code from the answer sheet, just make sure you understand how they work! Optionally, for extra practice you can try re-writing them using some of the new things we've learned since then.)
(A) (1pt) Create a function called "gc" that takes a single sequence as a parameter and returns the GC content of the sequence (as a 2 decimal place float).
(B) (1pt) Create a function called "reverse_compl" that takes a single sequence as a parameter and returns the reverse complement.
(C) (1pt) Create a function called "read_fasta" that takes a file name as a parameter (which is assumed to be in fasta format), puts each fasta entry into a dictionary (using the header line as a key and the sequence as a value), and then returns the dictionary.
(D) (2pts) Create a function called "rand_seq" that takes an integer length as a parameter, and then returns a random DNA sequence of that length.
Hint: make a list of the possible nucleotides
(E) (2pts) Create a function called "shuffle_nt" that takes a single sequence as a parameter and returns a string that is a shuffled version of the sequence (i.e. the same nucleotides, but in a random order).
Hint: Look for Python functions that will make this easier. For example, the random module has some functions for shuffling. There may also be some built-in string functions that are useful. However, you can also do this just using things we've learned. (F) (1pt) Run the code below to show that all of your functions work. Try to fix any that have problems. End of explanation """ def get_kmers(k): kmers = [] # your code here return kmers """ Explanation: 4. Using your functions (5pts) Use the functions you created above to complete the following. (A) (1pt) Create 20 random nucleotide sequences of length 50 and print them to the screen. (B) (1pt) Read in horrible.fasta into a dictionary. For each sequence, print its reverse complement to the screen. (C) (3pts) Read in horrible.fasta into a dictionary. For each sequence, find the length and the gc content. Print the results to the screen in the following format: SeqID Len GC ... ... ... That is, print the header shown above (separating each column's title by a tab (\t)), followed by the corresponding info about each sequence on a separate line. The "columns" should be separated by tabs. Remember that you can do this printing as you loop through the dictionary... that way you don't have to store the length and gc content. (In general, this is the sort of formatting you should use when printing data files!) Bonus question: K-mer generation (+2 bonus points) This question is optional, but if you complete it, I'll give you two bonus points. You won't lose points if you skip it. Create a function called get_kmers that takes a single integer parameter, k, and returns a list of all possible k-mers of A/T/G/C. For example, if the supplied k was 2, you would generate all possible 2-mers, i.e. [AA, AT, AG, AC, TA, TT, TG, TC, GA, GT, GG, GC, CA, CT, CG, CC]. Notes: - This function must be generic, in the sense that it can take any integer value of k and produce the corresponding set of k-mers. 
- As there are $4^k$ possible k-mers for a given k, stick to smaller values of k for testing!! - I have not really taught you any particularly obvious way to solve this problem, so feel free to get creative in your solution! There are many ways to do this, and plenty of examples online. Since the purpose of this question is to practice problem solving, don't directly look up "k-mer generation"... try to figure it out yourself. You're free to look up more generic things, though. End of explanation """
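If you want to check your eventual answer (spoiler warning: this is just one of the many possible approaches mentioned above), the standard library's itertools.product can generate the combinations for you.

```python
from itertools import product

def get_kmers(k):
    # Cartesian product of 'ATGC' with itself k times,
    # with each tuple joined back into a string
    kmers = []
    for combo in product('ATGC', repeat=k):
        kmers.append(''.join(combo))
    return kmers

print(get_kmers(2))  # 16 k-mers: AA, AT, AG, AC, TA, ...
```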
a-mt/dev-roadmap
docs/!ml/notebooks/Naive Bayes.ipynb
mit
import numpy as np import pandas as pd from IPython.core.display import display, HTML display(HTML(''' <style> .dataframe td, .dataframe th { border: 1px solid black; background: white; } .dataframe td { text-align: left; } </style> ''')) df = pd.DataFrame({ 'Outlook': ['sunny', 'sunny', 'overcast', 'rain', 'rain', 'rain', 'overcast', 'sunny', 'sunny', 'rain', 'sunny', 'overcast', 'overcast', 'rain'], 'Temperature': ['hot', 'hot', 'hot', 'mild', 'cool', 'cool', 'cool', 'mild', 'cool', 'mild', 'mild', 'mild', 'hot', 'mild'], 'Humidity': ['high', 'high', 'high', 'high', 'normal', 'normal', 'normal', 'high', 'normal', 'normal', 'normal', 'high', 'normal','high'], 'Wind': ['weak', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'strong'], 'Play': ['no', 'no', 'yes', 'yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'no'] }) HTML(df.to_html(index=False)) """ Explanation: Load data End of explanation """ val, count = np.unique(df['Play'], return_counts=True) n = np.sum(count) for i,v in enumerate(val): print('P(Play={:<3s}) = {:d}/{:d}'.format(v, count[i], n)) for column in df.drop('Play', axis=1).columns: dftmp = pd.crosstab(df[column], df['Play'], margins=False, rownames=[None],colnames=[column]) dftmp.columns = 'Play=' + dftmp.columns for i,v in enumerate(val): dftmp.iloc[:,i] = dftmp.iloc[:,i].astype('string') + '/' + str(count[i]) display(HTML(dftmp.to_html())) """ Explanation: Explore data End of explanation """ dfYes = df[df['Play'] == 'yes'] dfNo = df[df['Play'] == 'no'] nYes = len(dfYes) nNo = len(dfNo) print(nYes, nNo) pYes = (dfYes['Outlook'] == 'sunny').sum()/nYes \ * (dfYes['Temperature'] == 'cool').sum()/nYes \ * (dfYes['Humidity'] == 'high').sum()/nYes \ * (dfYes['Wind'] == 'strong').sum()/nYes \ * nYes/len(df) pYes pNo = (dfNo['Outlook'] == 'sunny').sum()/nNo \ * (dfNo['Temperature'] == 'cool').sum()/nNo \ * (dfNo['Humidity'] == 'high').sum()/nNo \ * (dfNo['Wind'] == 
'strong').sum()/nNo \
    * nNo/len(df)

pNo

print('Prediction:', ('yes' if pYes > pNo else 'no'))

"""
Explanation: From scratch
End of explanation
"""

from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.preprocessing import LabelEncoder

# Encode labels to integers
encoder = LabelEncoder()
c = {}

Y = encoder.fit_transform(df['Play'])
c['Play'] = list(encoder.classes_)

X = df.drop('Play', axis=1)
for column in X.columns:
    X[column] = encoder.fit_transform(X[column])
    c[column] = list(encoder.classes_)

# Pre-compute likelihood tables
model = MultinomialNB()
model.fit(X, Y)

# Predict most likely outcome
res = model.predict([[
    c['Outlook'].index('sunny'),
    c['Temperature'].index('cool'),
    c['Humidity'].index('high'),
    c['Wind'].index('strong'),
]])[0]

print('Prediction:', c['Play'][res])

'''
# Evaluate
from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = model.predict(X_test)
accuracy_score(y_test, y_pred, normalize=True)
confusion_matrix(y_test, y_pred)
'''

"""
Explanation: With sklearn
We can choose between:

BernoulliNB: if all features are binary ({0,1})
MultinomialNB: if the data are discrete (e.g. {1,2,3})
GaussianNB: if the data are continuous (e.g. [1..5])
End of explanation
"""
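One caveat with the from-scratch frequency products: if a feature value never co-occurs with a class in the training data, its conditional probability is zero and wipes out the whole product. Below is a sketch of the same computation with Laplace (additive) smoothing; the helper name and the alpha default are ours, not from the notebook.

```python
import pandas as pd

# Same play-tennis data as above
df = pd.DataFrame({
    'Outlook': ['sunny', 'sunny', 'overcast', 'rain', 'rain', 'rain', 'overcast',
                'sunny', 'sunny', 'rain', 'sunny', 'overcast', 'overcast', 'rain'],
    'Temperature': ['hot', 'hot', 'hot', 'mild', 'cool', 'cool', 'cool', 'mild',
                    'cool', 'mild', 'mild', 'mild', 'hot', 'mild'],
    'Humidity': ['high', 'high', 'high', 'high', 'normal', 'normal', 'normal',
                 'high', 'normal', 'normal', 'normal', 'high', 'normal', 'high'],
    'Wind': ['weak', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak',
             'weak', 'weak', 'strong', 'strong', 'weak', 'strong'],
    'Play': ['no', 'no', 'yes', 'yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes',
             'yes', 'yes', 'yes', 'no']
})

def nb_scores(df, target, query, alpha=1.0):
    # P(class) * prod P(feature=value | class), with additive smoothing:
    # each count gets +alpha, each denominator +alpha*(number of values)
    scores = {}
    for cls, sub in df.groupby(target):
        score = len(sub) / len(df)  # prior
        for col, val in query.items():
            n_vals = df[col].nunique()
            score *= ((sub[col] == val).sum() + alpha) / (len(sub) + alpha * n_vals)
        scores[cls] = score
    return scores

query = {'Outlook': 'sunny', 'Temperature': 'cool',
         'Humidity': 'high', 'Wind': 'strong'}
scores = nb_scores(df, 'Play', query)
print(max(scores, key=scores.get))  # prints: no
```

sklearn's MultinomialNB also applies additive smoothing via its alpha parameter (default 1.0), though its count-based model differs from this categorical one; here the smoothed prediction agrees with the unsmoothed result above.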